HPE Backup, Recovery and Archive (BURA) Solutions design guide
Formerly EBS design guide
Technical white paper
Contents
Overview 4
HPE Data Agile Partner Program 4
BURA supported components 4
Supported topologies 5
Point-to-point 6
Switched fabric 6
Installation checklist 6
HPE StoreOnce Catalyst 7
Preparing SAN switches and hosts for failover with HPE StoreEver Tape Libraries 8
Basic failover 8
Advanced failover 8
LTO-7 failover 8
Prerequisites for using basic data and control path failover 8
Prerequisites for using LTO-7 failover or advanced data and control path failover 9
Native backup commands 10
Linear Tape File System 10
Software utilities that may disrupt solution connectivity 10
FC/FCoE switch zoning recommendations 11
Configuration and operating system details 13
Windows Server 13
Windows Server best practices 16
Red Hat and SUSE Linux Server 17
Red Hat and SUSE Linux Server best practices 21
HP-UX Server 24
HP-UX Server best practices 28
Oracle Solaris Server 29
Oracle Solaris Server best practices 30
IBM AIX Server 31
IBM AIX Server best practices 32
Persistent binding 32
Virtual machine support 33
VMware Server 34
HPE Integrity Virtual Machines 35
Microsoft Hyper-V 35
Installing backup software and patches 36
This guide describes how to prepare Windows®, Linux®, HP-UX, Solaris, AIX, and virtual machine (VM) hosts for connecting to HPE StoreEver Tape, HPE StoreOnce, and HPE StoreAll disk-based virtual tape backup solutions in Fibre Channel (FC) storage area network (SAN) and network attached storage (NAS) environments.
Overview
Hewlett Packard Enterprise Backup, Recovery and Archive (BURA) Solutions are an integration of data protection and archiving software with industry-standard hardware, providing a complete enterprise-class solution. Leveraging the history of our extensive partnerships with leading software companies, Hewlett Packard Enterprise continues to provide software solutions that support the backup and restore processes of homogeneous and heterogeneous operating systems in a shared storage environment.
Data protection and archiving software focuses on using an automated Linear Tape-Open (LTO) Ultrium tape library and/or disk-based virtual tape backup solutions. BURA Solutions combine the functionality and management of SANs, data protection and archiving software, and scaling tools to integrate tape and disk storage subsystems in the same SAN environment. Enterprise data protection can be accomplished with different target devices in various configurations, using a variety of transport methods such as the corporate communication network, server SCSI/SAS, Fibre Channel over Ethernet (FCoE), or an FC infrastructure. BURA Solutions typically use a SAN that provides dedicated bandwidth independent of the LAN. This independence allows single or multiple backup or restore jobs to run without placing data protection traffic on the corporate network. Management of the data protection and archiving software occurs over the LAN, while the data is sent over the SAN. This achieves faster data transfer speeds and reduces Ethernet traffic. Jobs and devices can be managed and viewed from either the primary server or any server or client connected within BURA Solutions that has supported data protection and archiving software installed. All servers within the BURA Solutions server group can display the same devices.
HPE Data Agile Partner Program
Hewlett Packard Enterprise is dedicated to providing a rich portfolio of Backup, Recovery and Archive (BURA) Solutions for our customers.
The HPE Data Agile Partner Program offers partners a programmatic framework to self-certify the interoperability of their applications across the entire HPE Storage portfolio of BURA products, including HPE StoreOnce Backup, HPE StoreAll Storage, and HPE StoreEver Tape.
The Data Agile Partner Program enables partners to learn about the HPE BURA portfolio, test and certify their applications in a dedicated Hewlett Packard Enterprise lab environment, and take advantage of unique marketing opportunities. Program members also have access to specialized training and technical assistance.
Provide powerful solutions to your customers and expand market opportunities through a partnership with HPE Storage. Learn more at hpe.com/storage/DataAgile.
BURA supported components
Whether you're looking to scale from entry-level workgroups to enterprise-level data centers, the HPE Data Agile BURA Compatibility Matrix has the information you need to design data protection solutions with HPE StoreOnce Backup, HPE StoreEver Tape, and HPE StoreAll Storage. Refer to table 1 for white papers and design guides documenting fully certified data protection and archive solutions built with HPE storage products and market-leading Independent Software Vendor (ISV) applications. Learn more at hpe.com/storage/BURACompatibility.
Table 1 HPE Data Agile BURA Solution white papers
Cross-platform design guides:
Design Guide for Backup and Archive
Example Configuration Guide for Backup and Archive
Tiered Data Retention for HPE Storage
White papers by product:
HPE StoreOnce Backup
HPE StoreAll Storage
HPE StoreEver Tape
HPE StoreServ 3PAR File Persona
White papers by data protection and archive vendor:
AGFA Healthcare IMPAX
Citrix® ShareFile
CommVault Simpana
GE Centricity
Genetec Security Center
HPE Consolidated Archive
HPE Control Point
HPE Data Protector
EMC NetWorker
IBM TSM
iTernity iCAS
McAfee® VirusScan
Milestone XProtect
QStar Archive Manager
Veritas Enterprise Vault
Veritas NetBackup
Veritas Backup Exec
Symantec Protection Engine
Veeam Software
White papers by database and virtual machine environment:
Microsoft® Exchange
Microsoft Hyper-V
Microsoft SQL
Oracle
SAP HANA®
VMware®
Supported topologies
The following topologies are supported in an FC SAN, with short-wave SFPs being the only FC connection type supported on HPE StoreEver and HPE StoreOnce devices. Any requirement for an extended SAN requires a SAN switch or router to which the StoreEver or StoreOnce devices can attach. Refer to the extended SAN configuration in the HPE Backup and Archive Example Configuration Guide for more details.
Point-to-point
Point-to-point, or Direct-Attach Fibre (DAF), connections are direct FC connections made between two nodes, such as a server and an attached tape library. This configuration requires no switch to implement, and the default for a DAF link is a private loop.1 The storage devices are dedicated to a server in a point-to-point configuration.
Switched fabric
A switched fabric topology is a network topology in which network nodes connect with each other via one or more network switches. In the FC switched fabric (FC-SW) topology, devices are connected to each other through one or more FC switches. Visibility among devices in a fabric is typically controlled with zoning.
FCoE is a computer network technology that encapsulates FC frames over Ethernet networks. This allows FC to use 10GbE networks (or higher speeds) while preserving the FC protocol. FCoE maps FC directly over Ethernet while being independent of the Ethernet forwarding scheme. HPE Virtual Connect FlexFabric switches are used as an option in C-series blade enclosures to support FCoE connectivity to the switched fabric. For standalone servers, converged network adapters (CNA) are used to connect to the fabric through FCoE fabric switches. A CNA, also called a converged network interface controller (C-NIC), is a computer input/output device that combines the functionality of a host bus adapter (HBA) with a network interface controller (NIC). In other words, it "converges" access to, respectively, a SAN and a general-purpose computer network.
Installation checklist
Prior to installing data protection and archiving software, review the questions below to ensure that all components are configured properly and logged into the SAN.
• For any Windows 2008 servers, has the Windows feature Removable Storage Manager been removed or disabled?2
• Are all of the following hardware components at the minimum supported firmware revisions specified in the HPE Data Agile BURA Compatibility Matrix: servers, HBAs, FC and/or FCoE switches, tape drives, library robots, and disk-based virtual tape systems?
• Are all recommended operating system patches, service packs, updates, Service Pack for ProLiant Quality Packs (QPK), or Hardware Enablement (HWE) bundles specified in the HPE Data Agile BURA Compatibility Matrix installed on each host?
• Is the minimum supported HBA driver specified in the HPE Data Agile BURA Compatibility Matrix installed on each host?
• If you are using multi-path configurations with your HPE StoreEver Tape Libraries, have you reviewed the technical white paper HPE StoreEver Failover overview, including advanced path failover?
• Are the StoreEver Tape Library, StoreOnce Backup System, and/or StoreAll Storage System online?
• If multiple FC switches are cascaded or meshed, are all inter-switch link (ISL) ports correctly logged in?
• Are all of the host HBAs correctly logged into the FC and/or FCoE switch?
• Are all tape library robotic devices and disk-based virtual tape systems zoned, configured, and presented to each host from the FC and/or FCoE switch?
• If using zoning on the FC and/or FCoE switch, has the zone been added to the active switch configuration?
• Do you have the latest supported version of HPE Command View for Tape Libraries Software installed to manage, monitor, and configure all of your HPE StoreEver Tape Libraries?
1 16 Gb FC HBAs do not currently support private loop; Brocade FC HBAs only recently began supporting private loop. 2 The Windows Removable Storage Manager (RSM) service will claim tape drives and disrupt any installed backup applications. Removable Storage Manager is no longer available as of Windows 7 and Windows 2008 R2.
• Has connectivity been verified using HPE Library and Tape Tools Software or operating system-specific tools (Linux: sg3_utils; HP-UX: System Administration Manager [SAM]; Solaris: cfgadm; AIX: System Management Interface Tool [SMIT]; etc.)?
• Is the minimum patch/service pack level supported for the data protection and archiving software installed?
• If you're having issues with the FC connections, they might need to be cleaned. Avoid touching any surface that is being used for cleaning with your fingers. Recommendations for cleaning FC cables and small form-factor pluggable (SFP) connections:
– Air dusters are used to blow loose particles from the optical fiber connector end-face or to dry solvent (isopropyl alcohol) residue after a wet cleaning. Not all air dusters are the same; an optic-grade duster should be used.
– Use lint-free wipes to gently wipe the ferrule and the end-face surface of the connector. Make sure the pad makes full contact with the end-face surface.
– Use lint-free wipes and isopropyl alcohol. Gently wipe the ferrule and the end-face surface of the connector with an alcohol pad, making sure the pad makes full contact with the end-face surface. Then wipe the end-face surface with a dry, lint-free wipe.
– In-adapter ferrule cleaners, or in-situ cleaning devices, are semi-automated fiber optic cleaning tools specially designed for fiber optic connectors and SFP ends. They can remove contaminants that forced air will not. Used improperly, an in-situ device can make a tape drive FC port worse. Both 1.25 mm and 2.5 mm versions are available.
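On a Linux host, the connectivity check in the list above can be made with the sg3_utils and lsscsi packages. The following is a minimal sketch; the device name /dev/sg3 is an example and will differ on your system:

```shell
# List all SCSI devices the host has discovered; tape drives appear as
# type "tape" and library robots as type "mediumx" (medium changer),
# with their generic /dev/sg* nodes in the last column.
lsscsi -g

# Show the generic SCSI device map with vendor/model inquiry data.
sg_map -i

# Query a specific device (example: /dev/sg3) for its vendor, product,
# and revision strings to confirm the expected drive is responding.
sg_inq /dev/sg3
```

If a zoned device does not appear in this output, recheck the switch zoning and HBA logins before installing the backup application.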
HPE StoreOnce Catalyst
With HPE StoreOnce Catalyst, movement of deduplicated data across the enterprise is even easier. There's no need to deduplicate and rehydrate data at each step; data can be replicated from sites to a central data center or a disaster recovery site in deduplicated form, reducing network bandwidth requirements. All backup and replication jobs may be seamlessly managed by the backup application at the central data center.
Key features of StoreOnce Catalyst:
• Catalyst over Fibre Channel provides all of the ISV control and source-side deduplication benefits of current StoreOnce Catalyst, but via a Fibre Channel fabric.
• Federated Catalyst allows Catalyst stores to span nodes, simplifying backup management and optimizing available storage in large environments.
• Catalyst stores allow backup applications to utilize low-bandwidth deduplication (server-side deduplication on3) or high-bandwidth deduplication (server-side deduplication off4).
HPE StoreOnce Catalyst delivers a single, integrated, enterprise-wide deduplication algorithm. It allows the seamless movement of deduplicated data across the enterprise to other StoreOnce Catalyst systems.
For more detailed information regarding which Catalyst features are supported by each backup software application, reference the Catalyst Apps Support section under StoreOnce Backup Systems in the HPE Data Agile BURA Compatibility Matrix.
3 By specifying the Primary Transfer Policy as Low-Bandwidth on the Catalyst store defined on the StoreOnce appliance, the backup application's integrated Catalyst plug-in will perform deduplication at the backup server before backup data is sent to the StoreOnce appliance. 4 By specifying the Primary Transfer Policy as High-Bandwidth on the Catalyst store defined on the StoreOnce appliance, the backup server does not deduplicate the data; data deduplication occurs at the StoreOnce appliance.
Preparing SAN switches and hosts for failover with HPE StoreEver Tape Libraries
Hewlett Packard Enterprise provides High Availability Failover features for HPE StoreEver ESL G3 Tape Libraries and the HPE StoreEver MSL2024, MSL4048, MSL6480, MSL8048, and MSL8096 Tape Libraries with HPE StoreEver LTO-5 Ultrium and later generation FC tape drives. Failover features are not supported on the HPE StoreEver EML and ESL E-Series Tape Libraries.
Basic failover
• Supported on HPE StoreEver LTO-5 and LTO-6 Ultrium FC tape drives, as data path failover requires a dual-ported drive.
• Supported by a combination of tape drive and library firmware features to create a new FC path to a drive or library if the original path is lost.
• Most applications recognize the new path, and some applications will automatically retry commands after the original path is lost. Some applications might require user intervention to begin using the new path.
• Available for the HPE StoreEver MSL2024, MSL4048, MSL6480, MSL8048, and MSL8096 Tape Libraries and StoreEver ESL G3 Tape Libraries.
Advanced failover
• Supported only on HPE StoreEver LTO-6 Ultrium FC tape drives.
• Requires host driver support in addition to tape drive and library firmware features to manage multiple paths across multiple SANs, present a single drive or library path to applications, and automatically transfer commands to the new path if the original path is lost.
• The transfer to the failover path is invisible to most applications, avoiding the need for user intervention.
• Available for the HPE StoreEver ESL G3 and StoreEver MSL6480 Tape Libraries.
• Not available for the StoreEver 1/8 G2 Tape Autoloader or the StoreEver MSL2024, MSL4048, MSL8048, or MSL8096 Tape Libraries.
LTO-7 failover
• Supported on LTO-7 and later generation FC tape drives.
• Requires host driver support in addition to tape drive and library firmware features to manage multiple paths across multiple SANs, present a single drive or library path to applications, and automatically transfer commands to the new path if the original path is lost.
• The transfer to the failover path is invisible to most applications, avoiding the need for user intervention.
• Available only for ESL G3 Tape Libraries; MSL6480 support is expected in 2016.
Prerequisites for using basic data and control path failover
• The library drive ports must be attached to an FC SAN that supports N_Port ID Virtualization (NPIV), and NPIV must be enabled (most recent Brocade switch firmware versions have NPIV enabled by default). Refer to the vendor documentation for your switch regarding commands to verify whether NPIV is enabled.
– To enable or verify NPIV on a Brocade switch running Fabric OS version 6 or newer using the Brocade Web Tools GUI, click on the Port Admin tab. Select the FC or FCoE port you want to configure. From the context tabs at the top of the Web Tools GUI, select View, then choose Advanced. For the selected port, under the General tab you should see all of the details for the port, including NPIV Enabled with a value of true. There should also be an NPIV tab with a drop-down list of Enable, Disable, and Max Login. Select Enable if NPIV was not already enabled.
– While all current Cisco switches support NPIV, most do not have NPIV enabled by default. The Cisco MDS 9148 may disable NPIV when power cycled. To enable NPIV on a Cisco switch, use Cisco Device Manager (Admin > Feature Control), or use the Cisco CLI commands to show NPIV status and enable NPIV.
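On a Cisco MDS switch, the CLI route described above might look like the following NX-OS sequence; verify the exact commands against your switch's documentation:

```
! Check whether the NPIV feature is currently enabled
show npiv status

! Enable NPIV fabric-wide from configuration mode
configure terminal
feature npiv
end

! Save the change so it survives a power cycle
copy running-config startup-config
```

Saving the running configuration is important on models such as the MDS 9148, which, as noted above, may otherwise lose the NPIV setting after a power cycle.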
Technical white paper Page 9
• StoreEver Ultrium tape drives with an 8 Gb connection need the fill word set to arb(ff). With 4 Gb connections, set the fill word to idle. Refer to the vendor documentation for your switch regarding commands to set the fill word for a single port. For a Brocade switch running Fabric OS version 6 or newer, the following command can be used to verify the configuration, including the fill word, for port 27:
portcfgshow 27
To set the fill word for port 27 to arb(ff) using the numeric mode notation, run the following command:5
portcfgfillword 27 3
To set the fill word for port 27 to idle using the numeric mode notation, run the following command:
portcfgfillword 27 0
Refer to the section titled Hardware-specific requirements for basic failover in the HPE StoreEver Tape Libraries Failover User Guide for Brocade and Cisco firmware recommendations
• The drive port FC topology must be in Fabric mode, and the switch side must be set to F-port or Auto Sense.
• The host FC port must have a physical path to both the first port and the secondary (passive) port on the FC drive.
• For basic data path failover with port zoning, the host FC port and both FC ports on the drive need to be within the same zone for failover to work.
• For basic data path failover with World Wide Port Name (WWPN) zoning, the host FC WWPN and a single FC port on the drive need to be in the zone.
• For basic control path failover with port zoning, the host FC ports and the FC ports on both the active and secondary drives chosen for basic control path failover need to be in the same zone.
• For basic control path failover with WWPN zoning, the host FC WWPN and the basic control path failover WWPN assigned to the library must be in the same zone. The library WWPN is not the same as the WWPN of the drive that is hosting the library.
• Hosts connecting to the library may need to be rebooted if the operating system does not support dynamic device detection.
• Applications on hosts may need to be reconfigured to recognize the new library World Wide Name (WWN).
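On Linux hosts that do support dynamic device detection, a SCSI bus rescan can often take the place of the reboot mentioned above after a path or zoning change. A hedged sketch (host adapter numbers vary by system):

```shell
# Rescan every SCSI host adapter for newly arrived devices.
# The three dashes are wildcards for channel, target, and LUN.
for host in /sys/class/scsi_host/host*; do
    echo "- - -" > "$host/scan"
done

# If sg3_utils is installed, rescan-scsi-bus.sh performs a similar
# sweep and also reports devices that were added or removed.
rescan-scsi-bus.sh
```

Afterward, confirm the expected number of drive and changer paths before reconfiguring the backup application.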
Prerequisites for using LTO-7 failover or advanced data and control path failover
• For LTO-7 failover when using two FC host bus adapters in a server, both FC HBAs must be from the same manufacturer. The LTO-7 failover driver does not work correctly if the HBAs are different.
• For advanced data path failover and LTO-7 failover, the host must have a physical path to both the first port and the secondary port on the FC drive. For full failover capabilities, the two drive FC ports should be connected to different switches, and the host FC ports should also be connected to the same two switches.
• All drive ports must be zoned in the respective switches.
For detailed information on using failover with HPE StoreEver Tape, refer to the HPE StoreEver Tape Libraries Failover User Guide.
5 Numeric mode 3 attempts hardware arbff-arbff (mode 1) first. If the attempt fails to go into active state, this command executes software idle-arb (mode 2). Mode 3 is preferable to modes 1 and 2, as it captures more cases.
Native backup commands
Native backup commands (see table 2) are limited in their ability to handle complicated backups and restores in multi-host SANs. They are not guaranteed to provide robust error handling or to increase performance throughput. Use of these commands in a user-developed script is not recommended with HPE StoreEver Tape Libraries or HPE StoreOnce and HPE StoreAll disk-based backup solutions in a shared storage environment.
Caution: Native backup commands do not support SCSI reserve/release; therefore, using backup commands or scripts during backup or restore operations could result in data loss in an environment where the devices used for backups are shared.
Table 2 Supported native commands
SUPPORTED UTILITIES HP-UX SOLARIS AIX LINUX WINDOWS
Tape drive commands
tar Yes Yes Yes Yes No
dd (dump) Yes Yes Yes Yes No
pax Yes Yes Yes Yes No
mt Yes Yes Yes Yes No
make_tape_recovery Yes No No No No
Library and auto-changer commands
mc Yes No No No No
mtx No No No Yes No
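As an illustration of the native commands in table 2, and subject to the caution above about shared SAN environments, a simple Linux tape backup might look like the following; the device name /dev/st0 is an example:

```shell
# Rewind the tape in the first SCSI tape drive.
mt -f /dev/st0 rewind

# Write /etc to tape with tar. /dev/st0 rewinds automatically when
# closed; the non-rewinding node /dev/nst0 would instead leave the
# tape positioned after the archive so more archives can be appended.
tar -cvf /dev/st0 /etc

# Verify the archive by listing its contents back from the tape.
mt -f /dev/st0 rewind
tar -tvf /dev/st0
```

Data protection software should be used in place of such scripts for any shared or multi-host configuration.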
Linear Tape File System
The Linear Tape File System (LTFS) makes tape self-describing, file-based, and easy to use, while allowing users to use standard file operations to access, manage, and share files on tape with an interface that behaves like a hard disk. In addition, LTFS provides the ability to share data across platforms, as you would with a USB drive or memory stick. LTFS is currently supported on Windows, Mac, and Linux. HPE StoreOpen Standalone and HPE StoreOpen Automation are a set of utilities that provide easy installation, configuration, and management of a tape drive or library for use with LTFS. To use HPE StoreOpen, simply connect your tape drive or tape library to a supported host, following the information noted in the respective sections within this guide, prior to the installation of the HPE StoreOpen software. Information and download links for native LTFS drivers, source code, HPE StoreOpen Standalone, and HPE StoreOpen Automation can be found at hpe.com/storage/LTFS.
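Once an LTFS implementation such as HPE StoreOpen is installed on a Linux host, mounting a cartridge as a file system typically looks like the following sketch; the device node and mount point are examples:

```shell
# Format a new cartridge for LTFS (this destroys existing tape data).
mkltfs -d /dev/sg3

# Mount the cartridge so it can be browsed like a disk.
mkdir -p /mnt/ltfs
ltfs -o devname=/dev/sg3 /mnt/ltfs

# Standard file operations now work against the tape.
cp report.pdf /mnt/ltfs/

# Unmount cleanly so the index is written back to the cartridge.
umount /mnt/ltfs
```

Consult the StoreOpen documentation for the exact utility names and options on your platform.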
Software utilities that may disrupt solution connectivity
Software utilities common to SAN environments can interfere with backup and restore operations. These utilities include system management agents, monitoring software, and tape drive and system configuration utilities. A list of known software utilities and the operating systems on which they are found is shown in table 3.
Caution: Use of these software utilities during backup or restore operations could result in data loss.
Table 3 HBA and software utilities
Windows: Emulex OneCommand Manager (OCM)/HBAnyware; QLogic QConvergeConsole (QCC); QLogic Host Connectivity Manager (HCM); Broadcom Advanced Control Suite 3 (BACS3); HPE Library and Tape Tools utility; HPE Systems Insight Manager (SIM) management agents; Windows Server® backup; Removable Storage Manager (RSM)6
HP-UX: HPE Library and Tape Tools utility; System Administration Manager (SAM) on HP-UX 11.23; System Management Homepage (SMH) on HP-UX 11.31
Linux: Emulex OCM/HBAnyware; QCC; QLogic HCM; BACS3; HPE Library and Tape Tools utility; SCSI Generic (SG) commands
Solaris: Emulex OCM/HBAnyware; QCC; HPE Library and Tape Tools utility
AIX: System Management Interface Tool (SMIT)
FC/FCoE switch zoning recommendations
Due to complexities in multi-hosting tape devices on SANs, Hewlett Packard Enterprise strongly recommends using switch zoning tools to keep the backup, restore, and archive environment simple and less susceptible to the effects of "chatty," changing, or problematic SANs. Zoning provides a way for servers, disk arrays, and tape controllers to see only the hosts and targets they need to see and use.
The benefits of zoning include, but are not limited to:
• The potential to greatly reduce target and logical unit number (LUN) shifting
• Reduced stress on backup devices caused by polling agents
• Reduced time to debug and resolve anomalies in the backup, restore, and archive environment
• Reduced potential for conflict with untested third-party products
6 Removable Storage Manager is no longer available as of Windows 7 and Windows 2008 R2.
Zoning may not always be required for configurations that are small or simple, i.e., a single switch or a single inter-switch link (ISL). Zoning can be helpful in larger SANs for simplifying device discovery and reducing chatter between devices. Hewlett Packard Enterprise recommends the following for determining how and when to use zoning:
• Use zoning by HBA port. Zoning by HBA port is implemented by creating a specific zone for each server or host by World Wide Port Name (WWPN) and adding only those storage elements to be utilized by that host. Zoning by HBA port prevents a server from detecting any other devices or servers on the SAN, and it simplifies the device discovery process.
• Disk and tape on the same HBA is supported. For larger SAN environments, it is recommended to also add storage-centric zones for disk and backup targets. This type of zoning is done by adding overlapping zones with disk and backup targets separated. See figure 1 and figure 2 below.
• FC zoning can be implemented using physical switch port numbers, WWN IDs, or user-defined switch aliases. It is important to note that physical ports and aliases can change due to recabling or switch configuration restores, but WWN IDs do not. Hewlett Packard Enterprise recommends zoning using WWN IDs.
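As a sketch of the WWN-based zoning recommended above on a Brocade switch, the Fabric OS CLI commands might look like the following; the alias names, WWPNs, and configuration name are invented for illustration:

```
# Create aliases for the host HBA port and the tape drive port by WWPN,
# so the zone survives recabling to different physical switch ports.
alicreate "host1_hba0", "10:00:00:05:1e:12:34:56"
alicreate "tape_drive1", "50:01:10:a0:00:98:76:54"

# Create a zone containing only the host port and its backup target.
zonecreate "host1_tape_zone", "host1_hba0; tape_drive1"

# Add the zone to the active configuration and enable it.
cfgadd "production_cfg", "host1_tape_zone"
cfgenable "production_cfg"
```

Remember, per the installation checklist, that a zone takes effect only after it has been added to the active switch configuration and enabled.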
The figures below represent example configurations but are not exhaustive.
Figure 1 Storage-centric zoning, same HBA port (overlapping zones)
Figure 2 Storage-centric zoning, redundant paths. Also applies to dual-port HBAs and tape drives
Configuration and operating system details
Windows Server
Windows 2008, Windows 2008 R2, Windows 2012, and Windows 2012 R2
Installing the HBA device driver
All HPE ProLiant server software, firmware, and drivers for Windows servers can be updated using the latest HPE Service Pack for ProLiant (SPP) from the HPE support website:
1 From an Internet browser, go to hpe.com
2 Click on Support
3 Under Product Support click HPE Servers Storage and Networking
4 In the Enter product name or number box enter Service Pack for ProLiant
5 Click Get drivers software amp firmware
6 Select the Windows Server operating system version that is installed on the ProLiant system that will be updated
7 Expand Application (Entitlement Required) - System Management, then select the HPE Service Pack for ProLiant (American, International) hyperlink
8 Below the details for the software you should notice several tabs Select the Release Notes tab then scroll to verify if there are any Upgrade Requirements or to view additional information
9 Select the Installation Instructions tab to verify how to install the SPP Be sure to copy or save the installation instructions
Note: A hyperlink to the HPE Service Pack for ProLiant Release Notes is provided within the Release Notes tab. The HPE Service Pack for ProLiant Release Notes provide detailed instructions regarding the SPP, including a summary of changes, compatibility details for migrating from an older version of the SPP, supported operating systems, requirements, component prerequisites, deployment instructions, and known limitations.
10 Click on the Obtain software hyperlink above the various tabs to download the Service Pack for ProLiant to your server
Note: To download the HPE Service Pack for ProLiant, you must have:
1 An HPE Passport account (a sign-in link is provided)
2 Either a warranty, HPE Care Pack, or support agreement linked to your HPE Support Center profile
Click on the various links that are provided for more information on how warranties, HPE Care Packs, and support agreements enable access to select downloads or site functions.
11 Booting your Windows Server to the SPP (offline mode) will allow you to upgrade firmware for any or all components that are flagged as requiring an upgrade, while booting to the operating system and then running the SPP (online mode) will allow you to install any or all Hewlett Packard Enterprise related drivers and software packages
Note: Please refer to the HPE Service Pack for ProLiant Release Notes, referenced above, if any issues are encountered when installing the SPP. Specifically, review the sections Deployment Instructions and Component Changes.
12 A reboot might be required following the SPP installation
To manually install the latest HPE-supported Brocade, Emulex, or QLogic driver kit from the HPE support website:
1 From an Internet browser, go to hpe.com
2 Click on Support
3 Under Product Support click HPE Servers Storage and Networking
4 In the Enter product name or number box, enter your HBA model (for example, AP770A, AJ763A, AJ764A, CN1000Q, SN1000Q, CN1100E, SN1100E, or NC553m) and click Go
5 Click Get drivers software amp firmware
6 Select the Windows Server operating system version that is installed on the ProLiant system in which the HBA is installed
7 Expand Driver - Storage Fibre Channel and click on the appropriate driver hyperlink (if more than one version of the driver is listed, verify the latest supported version in the latest HPE Data Agile BURA Compatibility Matrix)
8 Below the details for the driver you should notice several tabs Select the Release Notes tab then scroll to verify if there are any Upgrade Requirements Supported Devices and Features or to view additional information
9 Select the Installation Instructions tab to verify how to install the driver Be sure to copy or save the installation instructions
10 Click on the Download tab to copy the file to your server
11 A reboot might be required following the driver installation
Installing and configuring Microsoft iSCSI Initiator
Microsoft iSCSI Initiator is installed natively on Windows Server 2008, Windows Server 2008 R2, Windows Server 2012, and Windows Server 2012 R2. On these operating systems, no installation steps are required.
To connect to an iSCSI target device by using Quick Connect:
1 Click Start, type iSCSI in Start Search, and then, under Programs, click iSCSI Initiator
2 On the User Account Control page, click Continue
3 If this is the first time that you have launched Microsoft iSCSI Initiator, you receive a prompt that says the Microsoft iSCSI service is not running. You must start the service for Microsoft iSCSI Initiator to run correctly. Click Yes to start the service. The Microsoft iSCSI Initiator Properties dialog box opens, and the Targets tab is displayed
4 On the Targets tab, type the name or the IP address of the target device in the Quick Connect text box, and then click Quick Connect. The Quick Connect dialog box is displayed
5 If multiple targets are available at the specified target portal, a list is displayed. Click the desired target, and then click Connect
6 Click Done
To connect to an iSCSI target by using advanced settings:
1 Click Start, type iSCSI in Start Search, and then, under Programs, click iSCSI Initiator
2 On the User Account Control page, click Continue
3 If this is the first time that you have launched Microsoft iSCSI Initiator, you receive a prompt that says the Microsoft iSCSI service is not running. You must start the service for Microsoft iSCSI Initiator to run correctly. Click Yes to start the service. The Microsoft iSCSI Initiator Properties dialog box opens, and the Targets tab is displayed
4 Click the Discovery tab
5 To add the target portal, click Discover Portal, and then, in the Discover Portal dialog box, type the IP address or name of the target portal to connect to. If desired, you can also type an alternate TCP port to be used for the connection
6 Click OK
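On Windows Server 2012 and later, the same connection can also be scripted with the built-in iSCSI PowerShell cmdlets. The portal address and target IQN below are invented for illustration:

```powershell
# Ensure the Microsoft iSCSI Initiator service is running and starts
# automatically at boot.
Set-Service -Name MSiSCSI -StartupType Automatic
Start-Service -Name MSiSCSI

# Register the target portal (equivalent to the Discovery tab step).
New-IscsiTargetPortal -TargetPortalAddress "192.0.2.50"

# List the targets the portal advertises, then connect to one.
Get-IscsiTarget
Connect-IscsiTarget -NodeAddress "iqn.2002-03.com.example:vtl-target1"
```

Scripting the connection is useful when many hosts must be configured identically against the same virtual tape target.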
Installing the HPE StoreEver Tape drivers
Both the HPE tape and HPE changer drivers for Windows must be installed before the advanced path failover drivers are installed. The tape and changer driver bundle can be downloaded and then installed as follows:
1 Go to hpe.com/storage/tapecompatibility
2 Under Tape tools select HPE StoreEver Tape Drivers
3 A webpage will open with RECOMMENDED HPE StoreEver Tape Drivers for Windows displayed
4 Below the details for the driver you should notice several tabs Select the Release Notes tab then scroll to verify if there are any Upgrade Requirements or to view additional information
5 Select the Installation Instructions tab to verify how to install the driver Be sure to copy or save the installation instructions
6 Click on the Download tab then save the file
7 Follow the installation instructions from step 5 to install the HPE tape and HPE changer drivers
8 After installation of the tape and changer drivers use Windows Device Manager to confirm that all of the configured paths are accessible to the operating system If the expected number of paths are not available check the host and SAN configuration After all of the expected paths are available to the host the advanced path failover drivers can be installed
Installing the HPE StoreEver Tape advanced path failover drivers: Windows (2008 R2, 2012, and 2012 R2)
1. Go to hpe.com/support/storage.
2. Select Tape Storage.
3. Click Enterprise Class Tape Libraries (for ESL G3) or Tape Libraries (for MSL6480).
4. Click HPE StoreEver ESL G3 Tape Libraries or HPE StoreEver MSL6480 Tape Library.
5. With the Download options tab selected, click Get drivers, software & firmware.
6. For the ESL G3, select your product. For the MSL6480, skip to the next step.
7. Under Operating systems, select OS Independent.
8. Click Driver - Storage Tape.
9. Click Obtain software for the HPE StoreEver High Availability Failover Driver for your Windows operating system.
10. Click Select to continue. An HPE Passport account is required.
11. After logging in using your HPE Passport, complete the required fields, then read and accept the software license agreement. Click Next.
12. On the following page, select the Download tab for the HPE StoreEver High Availability Failover Driver for the version of Windows that is installed on your server.
13. If you saved the file, double-click the file to launch the installer for the Tape Upper Bus Storage Filter driver.
14. Restart when requested.
15. After the system restarts, the installer continues installing the Tape Multi-Path Intermediate Class driver. The installation process creates the directory C:\Program Files\Hewlett-Packard\Failover.
For more detailed information related to installing the HPE StoreEver Tape advanced path failover drivers for Windows or the HPE StoreEver Tape drivers for Windows, refer to the HPE StoreEver Tape Libraries Failover User Guide, specifically the section titled Installing and using Windows advanced path failover drivers.
Windows Server best practices
Persistent binding
Target and LUN shifting can occur with Windows operating systems when disk or tape devices are connected or disconnected, a device is busy during discovery, or a device failure occurs that causes the device to no longer be seen on the SAN. Hewlett Packard Enterprise strongly recommends using zoning by HBA port to resolve target and LUN shifting. Refer to the earlier section FC/FCoE switch zoning recommendations for more on zoning by HBA port.
The way the Windows operating system enumerates devices is the cause of most target and LUN shifting. Windows enumerates devices as they are discovered during a scan sequence. They are enumerated with device handles such as TAPE0, TAPE1, and so on. The Windows device scan sequence goes in the order of bus, target, and LUN:
• Bus is the HBA PCI slot
• Target is representative of a WWN
• LUN is representative of a device behind the WWN
The order of discovery is:
• The lowest bus (bus 0)
• Target 0 on bus 0
• The LUNs on target 0
• Target 1, and so on, until all targets connected to that HBA are discovered
The process then continues to the next HBA and its targets and LUNs. A common cause of device shifting is a busy tape device. A busy tape device cannot respond in time for Windows to enumerate it. The device is essentially skipped in the enumeration sequence, thus shifting all other device numbers.
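The bus/target/LUN enumeration order described above can be sketched with a short shell simulation. The (bus, target, LUN) triples below are hypothetical example devices, not output from any real host; the point is only to show how Windows-style ordering assigns TAPE0, TAPE1, and so on, and why a skipped device shifts every later handle.

```shell
# Illustration only: simulate Windows-style (bus, target, LUN) enumeration.
# The triples below are hypothetical example devices.
devices="1 0 1
0 1 0
0 0 0
0 0 1
1 0 0"

# Windows walks bus, then target, then LUN, assigning TAPE0, TAPE1, ...
i=0
echo "$devices" | sort -n -k1,1 -k2,2 -k3,3 | while read bus target lun; do
    echo "TAPE$i -> bus $bus, target $target, LUN $lun"
    i=$((i + 1))
done
```

Removing any one triple from the list (as happens when a busy drive fails to answer the scan) renumbers every handle that follows it, which is exactly the shifting behavior the text describes.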
Note: Emulex OneCommand Manager Application Kit and QLogic QConvergeConsole Utility both have proprietary persistent binding options. Review the appropriate vendor documentation for details.
Data protection and archiving software can also communicate with a tape device by using the Windows device name. As noted, the device name may shift and cause a problem for the data protection and archiving software. Some data protection and archiving software monitors for this condition and adjusts accordingly. Other data protection and archiving software must wait for a server reboot and subsequently scan for devices, or must be manually reconfigured to match the current device list. If your data protection and archiving software requires persistent device mapping, use the software's device configuration wizard to ensure proper configuration.
Red Hat and SUSE Linux Server
Supported versions: RHEL 5 Update 11 (AS/ES/WS), RHEL 6 Update 6 (AS/ES/WS), RHEL 7 (AS/ES/WS), SLES 11 SP3 (x86/x64), SLES 12
Note: Hewlett Packard Enterprise recommends installing the kernel development option (source code) when installing any Linux server. Availability of source code ensures the ability to install additional device support software that will be compiled into the kernel.
Installing the HBA drivers
All HPE ProLiant server software, firmware, and drivers can be updated using the latest SPP from the HPE support website:
1. From an Internet browser, go to hpe.com.
2. Click Support.
3. Under Product Support, click HPE Servers, Storage and Networking.
4. In the Enter product name or number box, enter Service Pack for ProLiant.
5. Click Get drivers, software & firmware.
6. Select the Linux Server operating system version that is installed on the ProLiant system that will be updated.
7. Under Application (Entitlement Required) - System Management, select the HPE Service Pack for ProLiant (American, International) hyperlink.
8. Below the details for the software, you should notice several tabs. Select the Release Notes tab, then scroll to verify whether there are any Upgrade Requirements or to view additional information.
9. Select the Installation Instructions tab to verify how to install the SPP. Be sure to copy or save the installation instructions.
Note: A hyperlink to the HPE Service Pack for ProLiant Release Notes is provided within the Release Notes tab. The HPE Service Pack for ProLiant Release Notes provide detailed instructions regarding the SPP, including a summary of changes, compatibility details for migrating from an older version of the SPP, supported operating systems, requirements, component prerequisites, deployment options, and known limitations.
10. Click the Obtain software hyperlink above the various tabs to download the Service Pack for ProLiant to your server.
Note: To download the HPE Service Pack for ProLiant, you must have:
1. An HPE Passport account (a sign-in link is provided)
2. Either a warranty, an HPE Care Pack, or a support agreement linked to your HPE Support Center profile
Click the various links that are provided for more information on how warranties, HPE Care Packs, and support agreements enable access to select downloads or site functions.
11. Booting your server to the SPP (offline mode) allows you to upgrade firmware for any or all components that are flagged as requiring an upgrade. Booting to the OS and then running the SPP (online mode) allows you to install any or all Hewlett Packard Enterprise related drivers and software packages.
Note: Refer to the HPE Service Pack for ProLiant Release Notes, referenced above, if any issues are encountered when installing the SPP. Specifically, review the sections Deployment Instructions and Components Changes.
12. A reboot might be required following the SPP installation.
To manually install the latest HPE-supported Brocade, Emulex, or QLogic driver kit from the HPE support website:
1. From an Internet browser, go to hpe.com.
2. Click Support.
3. Under Product Support, click HPE Servers, Storage and Networking.
4. In the Enter product name or number box, enter your HBA model (for example, AP770A, AJ763A, AJ764A, CN1000Q, SN1000Q, CN1100E, SN1100E, or NC553m) and click Go.
5. Click Get drivers, software & firmware.
6. Select the Linux Server operating system version that is installed on the ProLiant system in which the HBA is installed.
7. Expand Driver - Storage Fibre Channel, then click the appropriate driver hyperlink. (If more than one version of the driver is listed, verify the latest supported version listed in the latest HPE Data Agile BURA Compatibility Matrix.)
8. Below the details for the driver, you should notice several tabs. Select the Release Notes tab, then scroll to verify whether there are any Upgrade Requirements or to view additional information.
9. Select the Installation Instructions tab to verify how to install the driver. Be sure to copy or save the installation instructions.
10. Click the Download tab to copy the file to your server.
11. A reboot might be required following the driver installation.
Installing the Linux OPEN-iSCSI module
You can install and use the iscsi-initiator-utils package (Red Hat®) or the open-iscsi module (SUSE). Download and install either of the packages using your distribution's package manager (yum or YaST, for example). Detailed instructions for iscsiadm can be found in the iscsiadm man pages.
Prior to discovering available iSCSI target devices on an HPE Storage System from a Linux server, the target requires the Linux server iSCSI initiator name. This name is found in the /etc/iscsi/initiatorname.iscsi file.
Once the iSCSI initiator name has been determined, discover the available iSCSI target devices on an HPE Storage System by typing the following command, where x.x.x.x is the IP address of the HPE Storage System:
iscsiadm --mode discovery --type sendtargets --portal x.x.x.x
To connect to the HPE Storage System targets, type the following command for each discovered target, where target_name is the name returned by the iscsiadm discovery:
iscsiadm --mode node -T target_name --login --portal x.x.x.x
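The discovery and login steps above can be wrapped in a small script. This is a sketch, not an HPE-supplied tool: the portal address and target name are placeholders, and RUN=echo makes it a dry run that prints the iscsiadm commands instead of executing them, so it can be reviewed before being pointed at a live storage system.

```shell
# Sketch: discover targets on one portal, then log in to a chosen target.
# PORTAL and the target IQN are placeholder values; RUN=echo prints each
# command instead of running it. Set RUN="" (and run as root) to execute.
PORTAL="x.x.x.x"
RUN=echo

# Discovery prints lines like "x.x.x.x:3260,1 iqn.2010-01.com.example:target1"
discover() {
    $RUN iscsiadm --mode discovery --type sendtargets --portal "$PORTAL"
}

login_target() {
    target_name=$1
    $RUN iscsiadm --mode node -T "$target_name" --login --portal "$PORTAL"
}

discover
login_target "iqn.2010-01.com.example:target1"
```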
Storage HBAs with Linux servers
Hewlett Packard Enterprise supports the Linux in-box driver (the driver supplied with the operating system distribution). However, not all hardware configurations support the in-box driver. To determine if your configuration supports the in-box driver, see the HBAs, CNAs, FlexFabric Adapters, and Server LOMs Support Matrix (Linux, Citrix, VMware, and Windows), which is available on the HPE SPOCK website. You must sign up for an HPE Passport to access SPOCK.
Whether you are using the in-box drivers or the out-of-box drivers, Hewlett Packard Enterprise recommends that you install the HPE Fibre Channel Enablement Kit because it provides additional libraries and configuration utilities that enable HPE Fibre Channel Storage Arrays to work properly with Linux.
Note: If you are using any HPE management applications, you need the HBA API libraries that come with the HPE-fc-enablement RPM.
Downloading and installing the Fibre Channel Enablement Kit for Linux
Note: There has been a change to the enablement kits released after 29 April 2014; they are now vendor-specific kits. Hewlett Packard Enterprise recommends that you uninstall any previous kits prior to installation of the latest version of the enablement kit.
1. From an Internet browser, go to hpe.com.
2. Click Support.
3. Under Product Support, click HPE Servers, Storage and Networking.
4. In the Enter product name or number box, enter your HBA model (for example, AP770A, AJ763A, AJ764A, CN1000Q, CN1100E, or NC553m) and click Go.
5. Click Get drivers, software & firmware.
6. Select the Linux Server operating system version of the ProLiant system in which the HBA is installed.
7. Select the Software - Storage Controllers - FC HBA hyperlink and click HPE Fibre Channel Enablement Kit for Linux (American, International).
8. Below the details for the driver, you should notice several tabs. Select the Release Notes tab, then scroll to verify whether there are any Upgrade Requirements or to view additional information.
9. Select the Installation Instructions tab to verify how to install the HPE Fibre Channel Enablement Kit for Linux. Be sure to copy or save the installation instructions.
10. Click the Download tab to copy the RPM file to your server.
11. Browse to the directory that you downloaded the RPM to.
12. Follow the Installation Instructions that you copied or saved in step 9.
13. A reboot is required after the installation for the updates to take effect and for hardware stability to be maintained.
14. Verify that the host has successfully discovered all the expected devices (tape drives, library robotic devices, and disk-based backup devices) using one of the following methods:
– Review the devices listed from running the command cat /proc/scsi/scsi
– Review the output from the sg_inq command (which requires that sg3_utils is installed) for any of the /dev/sg devices listed in the output of the sg_map command. See figure 3 for an example.
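The /proc/scsi/scsi check above can be scripted. The inventory in the heredoc below is a hypothetical sample (the model strings are illustrative, not a statement about any particular host); on a real server you would feed the script the output of cat /proc/scsi/scsi instead. Tape drives report a Sequential-Access device type and library robotics report Medium Changer, so counting those lines gives a quick sanity check against the expected device count.

```shell
# Illustration: count tape and changer entries in /proc/scsi/scsi-style output.
# The sample below is a hypothetical inventory; on a real host, replace the
# heredoc with:  sample=$(cat /proc/scsi/scsi)
sample=$(cat <<'EOF'
Attached devices:
Host: scsi0 Channel: 00 Id: 00 Lun: 00
  Vendor: HP       Model: Ultrium 6-SCSI   Rev: 354W
  Type:   Sequential-Access                ANSI  SCSI revision: 06
Host: scsi0 Channel: 00 Id: 00 Lun: 01
  Vendor: HP       Model: MSL6480 Series   Rev: 1.70
  Type:   Medium Changer                   ANSI  SCSI revision: 05
EOF
)
tapes=$(echo "$sample" | grep -c "Sequential-Access")
changers=$(echo "$sample" | grep -c "Medium Changer")
echo "tape drives: $tapes, changers: $changers"
```

If the counts fall short of what the SAN zoning should present, recheck the host and fabric configuration before installing the failover drivers.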
Figure 3. Verifying devices using sg_map and sg_inq commands
Installing the HPE StoreEver Tape advanced path failover drivers: Red Hat Enterprise Linux Server 6.2 (x86_64), 6.3 (x86_64), 6.4 (x86_64), 6.5 (x86_64), and 6.6 (x86_64)
The advanced path failover drivers for Linux replace the normal SCSI Tape and SCSI Generic drivers. The advanced path failover drivers for Linux pass all SCSI commands for devices that do not support advanced path failover through the same code path that is followed when the standard drivers are loaded, and route commands for devices that do support failover through the new PFO driver.
1. Go to hpe.com/support/storage.
2. Select Tape Storage.
3. Click Enterprise Class Tape Libraries (for ESL G3) or Tape Libraries (for MSL6480).
4. Click HPE StoreEver ESL G3 Tape Libraries or HPE StoreEver MSL6480 Tape Library.
5. In the Download options tab, click Get drivers, software & firmware.
6. For the ESL G3, select your product. For the MSL6480, skip to the next step.
7. Under Operating systems, select OS Independent.
8. Expand Driver - Storage Tape, then select the appropriate driver hyperlink.
9. Click Obtain software for the HPE StoreEver High Availability Failover Driver for your operating system.
10. Click Select. An HPE Passport account (a sign-in link is provided) is required.
11. After logging in using your HPE Passport, complete the required fields, then read and accept the software license agreement. Click Next.
12. On the following page, select the Download tab for the HPE StoreEver High Availability Failover Driver for the version of Red Hat that is installed on your server.
13. To install the drivers, run the following command:
rpm -ivh <filename>.rpm
14. In some cases, the server will need to be rebooted to complete the installation. Check the instructions provided by the RPM file output and reboot the server if requested.
15. The driver revision number indicates the build date of the driver and can be viewed by running:
cat /proc/scsi/sg/version
16. You can view the status of a device that is controlled by the failover driver by reading a file in the /sys file system. For example, to see the path status for /dev/sg3:
cat /sys/class/pfo/pfo3/paths
Enabling advanced path failover on a device while the driver is running
If a device has any advanced path failover feature disabled when advanced path failover is enabled, the device resets itself, removing the old /dev file. When the device comes back up, it is recognized as an advanced path failover device and then operates normally as one. It may not have the same /dev file name as before the change.
Disabling advanced path failover on a device while the driver is running
Disabling advanced path failover while a device is running is not recommended, because the paths will not be cleanly removed and then reassociated. If advanced path failover is disabled on any device, the Linux server will need to be rebooted. When possible:
1. Power down the Linux server cleanly.
2. Disable advanced path failover on the device.
3. Boot the Linux server.
For more detailed information related to installing the HPE StoreEver Tape advanced path failover drivers for Red Hat Linux Servers, refer to the HPE StoreEver Tape Libraries Failover User Guide, specifically the section titled Installing and using Linux advanced path failover drivers.
Additional SG device files
In most environments, the default number of SG device files is sufficient to support all of the required devices. In larger SAN environments, if the default number of SG device files is fewer than the combined total of disk and tape devices being allocated to the server, then additional device files need to be created. SG device files are preferable to the standard SCSI tape (st) device files because the st driver's SCSI timeout values may not be long enough to support some tape operations.
To create an additional SG device file, run the following: mknod /dev/sgX c 21 X
X signifies the number of a device file that does not already exist. For additional command options, see the mknod man page.
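When many SG device files are needed, the mknod command above can be generated in a loop. The sketch below only prints the commands so it can be inspected first; the starting and ending sg numbers are example values (the assumption here is that sg0 through sg15 already exist), not a recommendation for any particular host.

```shell
# Sketch: print the mknod commands needed to create /dev/sg16 .. /dev/sg31.
# Commands are echoed rather than executed; remove the echo (and run as
# root) to actually create the device files. FIRST/LAST are example values.
FIRST=16
LAST=31
n=$FIRST
while [ "$n" -le "$LAST" ]; do
    echo "mknod /dev/sg$n c 21 $n"    # char device, major 21 (sg), minor n
    n=$((n + 1))
done
```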
Red Hat and SUSE Linux Server best practices
Rewind commands being issued by rebooted Linux hosts
Device discovery that occurs as part of a normal Linux server boot operation can cause a SCSI rewind command to be issued to all attached tape drives. If the data protection and archiving software does not employ SCSI reserve/release and the rewind command is received while the tape drive is busy writing, the result is a corrupted tape header and an unusable piece of backup media.
This issue could manifest itself in several ways:
• A failed verify operation
• A failed restore operation
• The inability to mount a tape and read the tape header
If a backup verification is not completed the normal backup process might not detect that an issue exists
Tape devices not discovered and configured across server reboots
Tape drives can disappear from Linux servers after the host reboots when using older versions of the HBA drivers. Adding the line "hp_rescan -a" to /etc/rc.d/rc.local resolves the issue. The hp_rescan utility was previously included and installed with older versions of the HPE Linux FCoE/FC Driver Kit.
Note: The latest versions of the Fibre Channel Enablement for Linux kits no longer include the HPE fibreutils package, which contains the hp_rescan utility. The fibreutils package can be manually downloaded and installed by following steps 1-6 from the section Downloading and installing the Fibre Channel Enablement Kit for Linux. For step 5, select Software - Storage Controllers - FC HBA.
This issue, which affects Red Hat installations and, intermittently, some SUSE Linux installations, is understood to be an issue with the mid-layer SCSI driver and its interaction with SCSI-2 tape automation products. The permanent resolution is to upgrade to the latest FC driver kit.
Enable iSCSI target devices to remain persistent across system reboots
To enable the iSCSI target devices to remain persistent across system reboots, the open-iscsi service must be configured to run at system startup. This can be done by issuing the following commands.
For Red Hat 7 and SUSE 12:
systemctl enable iscsid.service
systemctl restart iscsid.service
For earlier versions of Red Hat:
chkconfig iscsi on
For earlier versions of SUSE:
chkconfig open-iscsi on
To verify that this configuration change has been accepted, run the following command.
For Red Hat 7 and SUSE 12:
systemctl -a | grep iscsi
iscsi-shutdown.service   error  inactive dead    Logout off all iSCSI sessions on shutdown
iscsi.service            loaded inactive dead    Login and scanning of iSCSI devices
iscsid.service           loaded active   running Open-iSCSI
iscsiuio.service         loaded active   running iSCSI UserSpace IO driver
iscsid.socket            loaded active   running Open-iSCSI iscsid Socket
iscsiuio.socket          loaded active   running Open-iSCSI iscsiuio Socket
For earlier versions of Red Hat:
chkconfig --list iscsi
iscsi    0:off  1:off  2:off  3:on  4:off  5:on  6:off
For SUSE servers:
chkconfig --list open-iscsi
open-iscsi    0:off  1:off  2:off  3:on  4:off  5:on  6:off
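Because the enable command differs between the systemd-based releases (RHEL 7, SLES 12) and the older chkconfig-based ones, a portable script has to pick the right form at run time. The sketch below shows one way to do that; the RUN=echo wrapper keeps it as a dry run that only prints the command it would issue.

```shell
# Sketch: choose the correct boot-time enable command for the iSCSI service.
# RUN=echo prints the chosen command instead of executing it; set RUN=""
# (and run as root) to apply the change for real.
RUN=echo

pick_enable_cmd() {
    if command -v systemctl >/dev/null 2>&1; then
        echo "systemctl enable iscsid.service"    # RHEL 7 / SLES 12
    elif [ -f /etc/init.d/open-iscsi ]; then
        echo "chkconfig open-iscsi on"            # earlier SUSE
    else
        echo "chkconfig iscsi on"                 # earlier Red Hat
    fi
}

$RUN $(pick_enable_cmd)
```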
LUNs shifting after reboot
The Linux 2.6 kernel and later enhanced the management of attached devices through the introduction of udev. The udev device manager provides users with a persistent naming process for all devices across reboots. For details on how to configure udev, refer to the appropriate Linux distribution documentation.
If your data protection and archiving software requires persistent device mapping, use the software's device configuration wizard to ensure proper configuration.
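As a concrete illustration of the udev approach, the sketch below writes a rule that gives one tape drive a stable symlink keyed to its serial number. Everything here is an assumption for illustration: the serial HU1234ABCD and the symlink name tape/backup0 are invented, the match key follows the ENV{ID_SERIAL} convention used by stock persistent-tape rules, and the rule is written to a scratch directory for review rather than to /etc/udev/rules.d.

```shell
# Sketch of a udev rule giving a tape drive a persistent name by serial
# number. HU1234ABCD is a hypothetical serial; read the real one with
# sg_inq. Written to a scratch directory here for review; copy the file to
# /etc/udev/rules.d/ and reload udev rules to activate it.
rules_dir=$(mktemp -d)
cat > "$rules_dir/60-persistent-tape.rules" <<'EOF'
# Always expose this drive as /dev/tape/backup0, whatever st number it gets
KERNEL=="st*", ENV{ID_SERIAL}=="HU1234ABCD", SYMLINK+="tape/backup0"
EOF
cat "$rules_dir/60-persistent-tape.rules"
```

Backup applications can then be configured against /dev/tape/backup0, which survives the st-number shuffling that a reboot or rescan can cause.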
Recommended changes to queue depth and timeout values
Changes to queue depth and timeout values are recommended when operating HPE StoreEver ESL G3 Tape Libraries and HPE StoreEver MSL Tape Libraries with Linux-based hosts. The recommended changes are as follows:
• Queue depth
The queue depth when operating the MSL6480 or MCB Version 1 ESL G3 libraries should be set to one (1), as there is only one active robot to complete Move Medium commands. With dual-robot MCB Version 2 ESL G3 libraries, the queue depth should be set to two (2), as the library has two active robots that can complete Move Medium commands.
HPE LTO drives are capable of handling command queues of four or five commands, but if hosts continue to send commands past that amount, the drive (or the library being hosted by that drive) will start to return status messages saying that the queue is full and the host should wait 500 ms. If the host doesn't stop sending commands at this point, the delays in returning status for commands can be long enough that the drive appears hung. As such, care should be taken to ensure that the queue depth is set correctly to avoid this scenario, preferably by using the recommended queue depths provided above.
With Linux-based hosts, this command shows what the queue depth is set to for each generic SCSI device:
find /sys/class/scsi_generic/*/device/queue_depth -exec grep -H . {} \;
If required, the sg_inq command can help correlate SCSI generic mapping to specific devices: sg_inq /dev/sg<#>
Note: <#> is the sg number provided in the output from the previous command.
• Timeouts
Most applications increase the timeout value for motion commands but continue to rely on the default timeout value for Read Element Status commands. This can especially be of concern in larger partitioned libraries, where multiple Read Element Status commands to the different partitions are outstanding and the default timeout value is reached before a response is provided. Depending on whether udev rules are in effect, the default timeout value on Linux-based hosts tends to be either thirty (30) or sixty (60) seconds. Given all of the above, it is recommended that the default timeout value be changed to twenty minutes on all Linux-based hosts operating HPE LTO drives in HPE StoreEver ESL G3 Tape Libraries and HPE StoreEver MSL Tape Libraries, in order to allow multiple commands to complete successfully without reaching the default timeout value.
With Linux-based hosts, this command shows what the default timeout value is currently set to:
find /sys/class/scsi_generic/*/device/timeout -exec grep -H . {} \;
If required, the sg_inq command can help correlate SCSI generic mapping to specific devices: sg_inq /dev/sg<#>
Note: <#> is the sg number provided in the output from the previous command.
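Applying the queue-depth and timeout recommendations above amounts to writing new values into each device's sysfs files. The sketch below demonstrates the loop against a mock sysfs tree built in a temporary directory, so it can be dry-run safely anywhere; pointing SYSFS at the real /sys (as root) would apply it to a live host. The values follow the recommendations in this section (queue depth 1 for single-robot libraries, 1200 seconds for the twenty-minute timeout); note that plain sysfs writes do not persist across reboots, so a udev rule or boot script is needed to make them permanent.

```shell
# Sketch: set queue depth and timeout for every sg device via sysfs.
# Demonstrated against a mock sysfs tree so it can be dry-run anywhere;
# on a real host, set SYSFS=/sys and run as root.
SYSFS=$(mktemp -d)                      # mock tree standing in for /sys
mkdir -p "$SYSFS/class/scsi_generic/sg0/device"
echo 32 > "$SYSFS/class/scsi_generic/sg0/device/queue_depth"   # old value
echo 30 > "$SYSFS/class/scsi_generic/sg0/device/timeout"       # old value

for dev in "$SYSFS"/class/scsi_generic/*/device; do
    echo 1    > "$dev/queue_depth"   # use 2 for dual-robot ESL G3 libraries
    echo 1200 > "$dev/timeout"       # 20 minutes, per the recommendation
done

# Show the result, mirroring the find/grep check used in the text
grep -H . "$SYSFS"/class/scsi_generic/*/device/queue_depth
```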
The detailed procedure for making the recommended changes to the command queuing and default timeout values for Linux-based hosts can be viewed in the following Engineering Advisory: HPE StoreEver ESL G3 Tape Libraries, MSL Tape Libraries, and 1/8 G2 Autoloaders - Recommended Changes to Queue Depth and Timeout Values With Linux-Based Hosts.
HP-UX Server
Installing HBA drivers in the kernel
HP-UX 11i v2 (11.23, IA-64)
1. The drivers schgr, sctl, and stape must all be installed in the kernel. To see if these drivers are installed, enter the following command:
/usr/sbin/kcmodule schgr sctl stape
The following example shows output from kcmodule where the stape driver is not installed:
Module   State    Cause
schgr    static   explicit
sctl     static   depend
stape    unused
If one or more of the above drivers is in the unused state, they must be installed in the kernel. If they are all installed (static state), proceed to the next section, Final host configurations.
2. Use kcmodule to install modules in the kernel. For example, to install the stape module, run the following command:
/usr/sbin/kcmodule stape=static
Enter yes to back up the current kernel configuration file and initiate the new kernel build.
3. Reboot the server to activate the new kernel:
cd /
/usr/bin/shutdown -r now
HP-UX 11i v3 (11.31, IA-64)
1. The drivers schgr, sctl, stape, eschgr, esctl, and estape must all be installed in the kernel. To see if these drivers are installed, enter the following command:
/usr/sbin/kcmodule sctl esctl schgr eschgr stape estape
The following example shows output from kcmodule where the stape driver is not installed:
Module   State    Cause
sctl     static   best
esctl    static   depend
schgr    static   best
eschgr   static   best
stape    unused
estape   static   best
If one or more of the above drivers is in the unused state, they must be installed in the kernel. If they are all installed (static state), proceed to the next section, Final host configurations.
2. Use kcmodule to install modules in the kernel. For example, to install the stape module, use the following command:
/usr/sbin/kcmodule stape=static
Enter yes to back up the current kernel configuration file and initiate the new kernel build.
3. Reboot the server to activate the new kernel:
cd /
/usr/bin/shutdown -r now
Installing the HP-UX iSCSI Software Initiator
The iSCSI Software Initiator is located at the HPE Software Depot:
1. Go to software.hp.com.
2. Enter iSCSI Software Initiator in the Search Software Depot box located on the upper right side of the website.
3. When the search results show iSCSI Software Initiator, click Select. An HPE Passport account (a sign-in link is provided) is required.
4. After logging in using your HPE Passport, complete the required fields, scroll down, then read and accept the software license agreement for the order. Click Next.
5. Under Documentation, click the Download tab for the Installation Instructions to download instructions for using the Software Distributor tool to install the iSCSI Software Initiator.
6. Under Software, click the Download tab for the iSCSI Software Initiator version that you would like to download.
7. After installing the iSCSI Software Initiator and rebooting, you can verify that the installation was successful by running the following command:
swlist iSCSI-00
If the HP-UX iSCSI Software Initiator is installed correctly, the output will be similar to the following.
HP-UX 11.23:
# Initializing...
# Contacting target "localhost"...
# Target: localhost
  iSCSI-00              B.11.23.03e    HP-UX iSCSI Software Initiator
  iSCSI-00.ISCSI-SWD    B.11.23.03e    HP-UX iSCSI Software Initiator
HP-UX 11.31:
# Initializing...
# Contacting target "localhost"...
# Target: localhost
  iSCSI-00              B.11.31.01     HP-UX iSCSI Software Initiator
  iSCSI-00.ISCSI-SWD    B.11.31.01     HP-UX iSCSI Software Initiator
Final host configurations
1. Run ioscan to verify that the host detects the tape devices:
ioscan
For HP-UX 11.23 legacy device special files (DSFs) or persistent DSFs, run the following commands:
ioscan -fnkC tape
ioscan -fnkC autoch
2. For HP-UX 11.31 persistent DSFs, run the following commands:
ioscan -fnNkC tape
ioscan -fnNkC autoch
Note: Some data protection and archiving software might not currently support HP-UX 11.31 persistent DSFs for tape. Review your data protection and archiving software documentation for more information.
3. To verify that the host detects iSCSI devices, issue the ioscan command as follows for HP-UX 11.23:
ioscan -fnC iscsi
Issue the ioscan command as follows for HP-UX 11.31:
ioscan -fnNC iscsi
If the software is installed correctly, the generated output will look similar to this:
Class    I  HW Path  Driver  SW State  HW Type  Description
=====================================================================
iscsi    0  255/0    iscsi   CLAIMED   VIRTBUS  iSCSI Virtual Node
4. If no device files have been installed, enter the following commands:
insf -C tape -e
insf -C autoch -e
The command-line tool for configuring the HP-UX iSCSI Software Initiator is iscsiutil. Detailed instructions for iscsiutil can be found in the iscsiutil man pages. If using iscsiutil to configure the HP-UX iSCSI Software Initiator, add the path for iscsiutil and other iSCSI executables to the root path:
PATH=$PATH:/opt/iscsi/bin
You should now be able to find the iSCSI initiator node for the HP-UX host:
iscsiutil -l
To discover available iSCSI target devices on an HPE Storage System, type the following command, where x.x.x.x is the IP address of the HPE Storage System:
iscsiutil -a -I x.x.x.x
Installing the HPE StoreEver Tape advanced path failover drivers for HP-UX 11.31
Advanced path failover for HP-UX is implemented by updating HP-UX drivers to support advanced path failover with the LTO-6 tape drives. The drivers function as both failover and non-failover drivers.
The updated drivers are:
• HP-UX tape driver (estape), used for data path failover
• HP-UX media changer driver (eschgr), used for control path failover
• HP-UX SCSI stack driver (esctl), used for data path and control path failover
To download and install the HPE StoreEver Tape advanced path failover drivers for HP-UX 11.31:
1. Get the latest HP-UX patches from h20566.www2.hp.com/portal/site/hpsc/patch/home.
Note: To access and download HP-UX patches, you must have:
1. An HPE Passport account (a sign-in link is provided)
2. An active HPE support agreement linked to your HPE Support Center profile. The active Hewlett Packard Enterprise support agreement must:
– Cover the specific HPE operating systems linked to your HPE Support Center user profile
– Include software updates or previous-version support privileges
Links are provided to view your current patch access privileges or to contact Hewlett Packard Enterprise.
2. To locate the patches, search for estape, eschgr, and esctl, or the patch number, and then look at the Prepby field to see if there is a superseding patch.
3. To install the advanced path failover drivers, use the standard HP-UX kernel patch installation process to install the following patches on the HP-UX Servers running HP-UX 11.31:
– HP-UX tape driver patch (estape): PHKL_43680 or superseding patch
– HP-UX media changer driver patch (eschgr): PHKL_43681 or superseding patch
– HP-UX SCSI stack (mass storage stack) driver patch (esctl): PHKL_43819 or superseding patch
4. The server automatically reboots as part of the installation process.
You can use ioscan to view the tape and library (media changer) devices connected to the HP-UX Server. The device special file (DSF) is listed as the last item in the description:
ioscan -knNfC tape
ioscan -knNfC autoch
Finding the lockdown path
The load-balance policy used to route data on multiple paths to a tape drive or library is called the "path-lockdown" policy. Use the scsimgr get_info command to see the current lockdown path for a library. For example:
scsimgr get_info -D /dev/rchgr/autoch35
STATUS INFORMATION FOR LUN /dev/rchgr/autoch38
...
LUN path used when policy is path_lockdown = 0/0/0/9/0/0/1.0x50014382c6e4f009.0x1000000000000
scsimgr get_attr -D /dev/rtape/tape28_BEST
SCSI ATTRIBUTES FOR LUN /dev/rtape/tape28_BEST
name = lpt_lockdown
current = 0/0/0/9/0/0/0.0x100000e00222a6c1.0x2000000000000
default =
saved =
For additional information, see the HP-UX man pages: scsimgr(1M), ioscan(1M), mknod(2), mksf(1M), rmsf(1M).
Troubleshooting advanced path failover for HP-UX 11.31
Advanced path failover errors are logged in the /var/adm/syslog/syslog.log file as part of the default SCSI I/O tracing function of HP-UX. You can use standard file viewing commands, including cat, vi, dmesg, and tail -f, to view syslog.log.
Technical white paper Page 28
Enabling or disabling advanced path failover for HPE-UX 11.31
Advanced path failover is disabled by default. When advanced path failover is disabled, the driver operates as if the device is not capable of using the advanced path failover feature.
When advanced failover is enabled for the library or tape drive, the device resets itself and must be opened using the device special file before the driver will recognize it as an advanced path failover device and use the failover features of the driver. Opening the device is generally done by the host applications.
You can enable or disable advanced path failover using the library web-based interface. For instructions, refer to the following sections of the HPE StoreEver Tape Libraries Failover User Guide:
• Enabling control path failover, under Configuring failover for the HPE StoreEver ESL G3 Tape Libraries
• Enabling data path failover, under Configuring failover for HPE StoreEver MSL6480 Tape Libraries
When advanced path failover is disabled, the passive control paths to the library will go into an error state (NO_HW) in the ioscan (1M) command output. These stale entries do not affect the function of the library. To clear these errors so the device can be accessed using its DSF:
1. On the HPE-UX host, run rmsf -H on the lunpath hardware paths that are in the NO_HW state. For example: rmsf -H 0/4/0/0/0/1.0x50014380023560d4.0x1000000000000
2. Run ioscan -kfNH <HBA path>. For example: ioscan -kfNH 0/4/0/0/0
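The two-step cleanup above can be scripted when several stale lunpaths need clearing. A sketch with illustrative paths (the helper name and paths are assumptions, not from the guide):

```python
def no_hw_cleanup_commands(stale_lunpaths, hba_path):
    """Build the cleanup sequence described above: rmsf -H for each
    stale lunpath in NO_HW state, then an ioscan rescan of the HBA."""
    cmds = [f"rmsf -H {path}" for path in stale_lunpaths]
    cmds.append(f"ioscan -kfNH {hba_path}")
    return cmds

# Illustrative lunpath and HBA path values:
for cmd in no_hw_cleanup_commands(
        ["0/4/0/0/0/1.0x50014380023560d4.0x1000000000000"], "0/4/0/0/0"):
    print(cmd)
```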
Hewlett Packard Enterprise recommends only enabling or disabling advanced path failover when the library is not opened by any applications. If advanced path failover is disabled while an application is accessing the library, all of the library's lunpaths will go offline and I/O requests to the library will fail.
For more detailed information related to installing the HPE StoreEver Tape advanced path failover drivers for HPE-UX 11.31 Servers, refer to the HPE StoreEver Tape Libraries Failover User Guide, specifically the section titled Installing and using HPE-UX advanced path failover drivers.
HPE-UX Server best practices HPE-UX 11.31 can experience poor I/O performance on VxFS file systems due to memory blocking during high system memory usage. The HPE-UX 11.31 kernel subsystems and file I/O data cache can consume up to 90 percent of system memory during normal operation. When a heavy file I/O application, such as data protection and archiving software, starts, memory usage can reach close to 100 percent. Under such conditions, if VxFS attempts to allocate additional memory for inode caching, this can result in memory blocking and subsequent poor file I/O performance. In extreme conditions, this scenario can cause data protection and archiving software to time out during file system reads, which could result in backup job failures.
Poor I/O performance resolution To avoid backup job failures due to memory blocking, modify the kernel tunable parameter vx_ninode. The vx_ninode parameter determines the number of inodes in the inode table to help VxFS in caching. By default, the size of the inode cache is decided (auto-tuned) at boot time by VxFS depending on the amount of physical memory in the machine. When modifying the value of vx_ninode, HPE recommends the following:
Table 4 Tuning vx_ninode
PHYSICAL MEMORY OR KERNEL AVAILABLE MEMORY VXFS INODE CACHE (NUMBER OF INODES)
1 GB 16384
2 GB 32768
3 GB 65536
> 3 GB 131072
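Table 4 can be expressed as a simple lookup. The sketch below reproduces the table's values; the handling of memory sizes that fall between rows (rounding up to the next row) is an assumption, since the table does not state it:

```python
def recommended_vx_ninode(mem_gb):
    """Return the VxFS inode cache size recommended in Table 4 for the
    given physical (or kernel-available) memory in GB.
    Rounding between rows is an assumption, not from the table."""
    if mem_gb <= 1:
        return 16384
    if mem_gb <= 2:
        return 32768
    if mem_gb <= 3:
        return 65536
    return 131072

print(recommended_vx_ninode(2))
```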
To determine the current value of vx_ninode, run the following at the shell prompt: /usr/sbin/kctune vx_ninode
To set vx_ninode to 32768, run the following command at the shell prompt: /usr/sbin/kctune vx_ninode=32768
Note: The kernel tunable parameters filecache_min and filecache_max control the amount of physical memory that can be used for caching file data during system I/O operations. By default, these parameters are automatically determined by the system to better balance memory usage among file system I/O intensive processes and other types of processes. The values of these parameters can be lowered to allow a larger percentage of memory to be used for purposes other than file system I/O caching. Determining whether or not to modify these parameters depends on the nature of the applications running on the system.
HPE-UX 11.23: Disabling rewind-on-close devices with st_san_safe Turning on the HPE-UX 11.23 kernel tunable parameter st_san_safe disables tape DSFs that are rewind-on-close. This will prevent utilities like mt from rewinding a tape that is in use by another utility.
Some applications or utilities require rewind-on-close DSFs (for example, the frecover utility that comes with HPE-UX). In this case, disabling rewind-on-close devices renders the utility unusable. Most data protection and archiving software, such as HPE Data Protector, can be configured to use SCSI reserve/release, which protects them from rogue rewinds by other utilities. The requirements of your data protection and archiving environment should be considered when determining whether or not to enable st_san_safe.
To determine if rewind-on-close devices are currently disabled, enter: /usr/sbin/kctune st_san_safe
If the value of st_san_safe is 1, then rewind-on-close devices are disabled. If the value is 0, then rewind-on-close devices are enabled. To disable rewind-on-close devices, enter: /usr/sbin/kctune st_san_safe=1
Oracle Solaris Server
Supported versions: Solaris 10 Update 11 (SPARC), Solaris 10 Update 11 (x86/x64), Solaris 11.2 (SPARC), Solaris 11.2 (x64)
How to enable the iSCSI Software Initiator
1. For Solaris 10 Update 11 (SPARC) and 11.2 (SPARC), enable the iSCSI services using the command:
svcadm enable network/iscsi/initiator
2. Verify the iSCSI services are running:
svcs -a | grep iscsi/initiator
online 10:10:28 svc:/network/iscsi/initiator:default
3. For earlier versions of Solaris, enable the iSCSI services using the command:
svcadm -v enable iscsi_initiator
svc:/network/iscsi_initiator:default enabled
4. Verify the iSCSI services are running:
svcs -a | grep iscsi_initiator
online 10:10:28 svc:/network/iscsi_initiator:default
The command line tool for configuring the Solaris iSCSI Software Initiator is iscsiadm. Detailed instructions for iscsiadm can be found in the iscsiadm man pages. If using iscsiadm to configure the Solaris iSCSI Software Initiator, run the following command to find the iSCSI initiator node for the Solaris host: iscsiadm list initiator-node
To discover available iSCSI target devices on an HPE Storage System, type the following commands, where xxxx is the IP address of the HPE Storage System:
iscsiadm add discovery-address xxxx:3260
iscsiadm list discovery-address
Discovery Address: xxxx:3260
iscsiadm modify discovery -t enable
iscsiadm list discovery
List the configured iSCSI target devices using the following command: iscsiadm list target
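The discovery sequence above can be assembled by a small helper when several arrays must be configured. The helper name is illustrative; it only builds the command strings shown above:

```python
def iscsi_discovery_commands(array_ip, port=3260):
    """Assemble the iscsiadm sequence described above for discovering
    targets on an HPE Storage System at array_ip."""
    addr = f"{array_ip}:{port}"
    return [
        f"iscsiadm add discovery-address {addr}",
        "iscsiadm list discovery-address",
        "iscsiadm modify discovery -t enable",
        "iscsiadm list target",
    ]

# Example with a documentation-range IP address:
for cmd in iscsi_discovery_commands("192.0.2.10"):
    print(cmd)
```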
Oracle Solaris Server best practices
Troubleshooting with the cfgadm utility
• Getting the status of FC devices using cfgadm:
cfgadm -al
Example output for the above command shows a media changer at LUN 0 for the 100000e0022229fa9 WWN, and tape and disk devices at LUN 0 for other WWNs. The devices are connected, have been configured, and are ready for use.
The cfgadm -al -o show_FCP_dev command can be used to show the devices for all LUNs of each Ap_Id
• Fixing a device with an "unusable" condition
If the condition field of a device in the cfgadm output is "unusable," the device is in a state such that the server cannot use it. This may have been caused by a hardware issue. In this case, do the following to resolve the issue:
– Resolve the hardware issue so the device is available to the server.
– After the hardware issue has been resolved, use the cfgadm utility to verify device status and to mend the status if necessary.
• Use cfgadm to get device status: cfgadm -al
– For a device that is "unusable," use cfgadm to unconfigure the device and then re-configure the device. For example (this is an example only; your device WWN will be different):
cfgadm -c unconfigure c4::100000e0022286ec
cfgadm -f -c configure c4::100000e0022286ec
– Use cfgadm again to verify that the condition of the device is no longer "unusable": cfgadm -al
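The unconfigure/re-configure cycle can likewise be generated for any Ap_Id. A sketch; the helper name is an assumption, and the Ap_Id shown is an example only:

```python
def cfgadm_recovery_commands(ap_id):
    """Commands to clear an 'unusable' condition on a Solaris FC device:
    unconfigure, force re-configure, then re-check overall status."""
    return [
        f"cfgadm -c unconfigure {ap_id}",
        f"cfgadm -f -c configure {ap_id}",
        "cfgadm -al",
    ]

# Example Ap_Id (yours will differ):
print(cfgadm_recovery_commands("c4::100000e0022286ec"))
```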
IBM AIX Server
Supported versions: AIX 6.1 (TL9), AIX 7.1 (TL3)
Installing the HBA device driver Install the IBM (5729/5735/5273/5758/5759/5773/5774) HBA and restart the server.
1. Ensure that the HBA is recognized. At the shell prompt, type: lsdev -Cc adapter
There is a line in the output similar to the following: fcs0 Available 1D-08 FC Adapter
If the adapter is not recognized, check that the correct HBA fileset (driver) is installed:
6228: lslpp -L | grep devices.pci.df1000f7
6239: lslpp -L | grep devices.pci.df1080f9
5716: lslpp -L | grep devices.pci.df1000fa
5759: lslpp -L | grep devices.pci.df1000fd
5773: lslpp -L | grep devices.pciex.df1000fe
5774: lslpp -L | grep devices.pciex.df1000fe
There are lines in the output for lslpp similar to the following for a 6239 HBA:
devices.pci.df1080f9.diag 5.1.0.1 C F PCI-X FC Adapter Device
devices.pci.df1080f9.rte 5.1.0.1 C F PCI-X FC Adapter Device
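The adapter-to-fileset pairs above can be kept in a lookup table so the correct verification command is produced for each feature code. A sketch; the mapping is reproduced from the list above, and the helper name is illustrative:

```python
# Feature-code-to-fileset pairs reproduced from the list above.
HBA_FILESETS = {
    "6228": "devices.pci.df1000f7",
    "6239": "devices.pci.df1080f9",
    "5716": "devices.pci.df1000fa",
    "5759": "devices.pci.df1000fd",
    "5773": "devices.pciex.df1000fe",
    "5774": "devices.pciex.df1000fe",
}

def fileset_check_command(hba_model):
    """Return the lslpp command that verifies the driver fileset for the
    given IBM HBA feature code."""
    return f"lslpp -L | grep {HBA_FILESETS[hba_model]}"

print(fileset_check_command("6239"))
```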
2. For information about the HBA, such as the WWN, execute the following command: lscfg -vl fcs0
The WWN appears in the Network Address field of the output.
3. To see the version of microcode (firmware) being run by the HBA, use the following command: lsmcode -c -d <device>
Microcode and other updates can be found at ibm.com/support/fixcentral.
4. After the HBA has successfully logged into the SAN fabric and the necessary zoning is configured, configure the HBA and devices within the fabric. At the prompt, type: cfgmgr -l <devicename> -v
Note: Running cfgmgr without a -l argument may generate a "devices.fcp.changer" error. This is a result of the cfgmgr device scan receiving a response from the auto-changer device, for which AIX does not have a specific driver. Under these conditions, the error message does not indicate a problem and is for information only.
5. Within the command, <devicename> is the name from the output of the lsdev command in step 1, such as fcs0.
6. To ensure all tape device files are available, at the prompt type: lsdev -HCc tape
7. By default, AIX creates tape devices with a fixed block length. To change the devices to have variable block lengths, at the prompt type: chdev -l <tapedevice> -a block_size=0
8. Configuration of the tape devices (where tape devices are rmt0, rmt1, and so on) is complete.
Note: HPE LTO tape drives use the IBM host tape driver. When properly configured, a device listing will show the tape device as follows. For IBM native HBAs: Other FC SCSI Tape Drive
For non-IBM native HBAs: Other SCSI Tape Drive
9. To configure Fast I/O Failure for Fibre Channel devices after link events in the SAN, change the fast fail parameter as in the example below: chdev -l fscsiN -a fc_err_recov=fast_fail
Within the command, the N in fscsiN is the same number from the name in the output of the lsdev command in step 1 (e.g., fcs0 would be fscsi0).
IBM AIX Server best practices
Persistent binding To prevent device shifting after a host reboot, you can enable the dynamic tracking feature in earlier versions of AIX. AIX 7 and above should have this parameter enabled by default.
To enable dynamic tracking of FC devices, set this attribute to dyntrk=yes as shown in the example: chdev -l fscsiN -a dyntrk=yes
Within the command, the N in fscsiN is the same number from the name in the output of the lsdev command in step 1 (e.g., fcs0 would be fscsi0).
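The fcs-to-fscsi pairing used in the fast fail and dynamic tracking steps above can be captured in one helper that emits both chdev commands for a given adapter. A sketch; the helper name is an assumption:

```python
def fscsi_tuning_commands(fcs_device):
    """Given an FC adapter name such as 'fcs0', return the chdev
    commands for the matching protocol device ('fscsi0'): fast I/O
    failure and dynamic tracking, as described in the steps above."""
    n = fcs_device[len("fcs"):]  # 'fcs0' -> '0' -> 'fscsi0'
    return [
        f"chdev -l fscsi{n} -a fc_err_recov=fast_fail",
        f"chdev -l fscsi{n} -a dyntrk=yes",
    ]

print(fscsi_tuning_commands("fcs0"))
```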
Note: For an IBM Virtual I/O Server (VIOS) running AIX logical partitions (LPARs), when using N-Port ID Virtualization (NPIV) with AIX LPARs, it is strongly recommended to upgrade VIOS to version 2.2.3.4 or greater.
Virtual machine support VM software is used for partitioning, consolidating, and managing computing resources, allowing multiple unmodified operating systems and their applications to run in VMs that share physical resources. Each VM represents a complete system with processors, memory, networking, storage, and BIOS. See table 5 for tape and disk support for virtualization products.
Table 5 VM tape/VTL/NAS support
VM Product | StoreEver Direct Attached SCSI | StoreEver Direct Attached SAS | StoreEver FC & FCoE SAN | StoreOnce VTL | StoreOnce iSCSI VTL | StoreOnce Catalyst over Ethernet (CoE) | StoreOnce Catalyst over Fibre Channel (CoFC) | StoreOnce NAS | Support Notes
Citrix XenServer Host No No No support statement for tape at this time
Citrix XenServer Guest VM
No Yes Yes No Yes For iSCSI tape devices, the iSCSI Software Initiator must run in the VM operating system.
D2D SAN shares must be accessed directly in the VM operating system, not attached through the hypervisor.
HPEVM Host Yes No Yes Yes Yes No Yes Tape drive/media changer must not be attached to a guest VM while being used by the host.
HPEVM Guest VM Yes No Yes Yes Yes No Yes Tape drive/media changer must only be attached to a single guest VM at a time.
Hyper-V Host Yes Yes Yes Yes Yes No Yes
Hyper-V Guest VM No No No Yes Yes No Yes For iSCSI tape/media changer devices, the iSCSI Software Initiator must run in the VM operating system.
D2D NAS shares must be accessed directly in the VM operating system, not attached through the hypervisor.
VMware ESX Host Yes No No [7] No [7] No No No Direct attached SCSI tape drive is multi-target only (no multi-LUN) and must not be attached to a VM while being used by the host. HPE does not test or validate direct attached SCSI tape on VMware ESX/ESXi hosts and does not provide support for tape drives and tape libraries in such configurations. [8]
[7] SAN tape devices (FC and iSCSI) are not supported directly by the VMware ESX host. With the vStorage API for Data Protection, use a backup server and VM software snapshots to allow FC and iSCSI backups.
[8] For ESX 4.1 Server tape support, see the ESX 4.1 Fibre Channel SAN Configuration Guide. For ESX 5.0 Server tape support, see the ESXi 5.0 vSphere Storage Guide. For ESX 5.1 Server tape support, see the ESXi 5.1 vSphere Storage Guide. For ESX 5.5 Server tape support, see the ESXi 5.5 vSphere Storage Guide.
Table 5 VM tape/VTL/NAS support (continued)
VMware Guest VM Yes No No Yes Yes No [9] Yes Direct attached SCSI tape drive is multi-target only (no multi-LUN) and must only be attached to a single VM at a time.
For iSCSI tape/media changer devices, the iSCSI Software Initiator must run in the VM operating system.
D2D NAS shares must be accessed directly in the VM operating system, not attached through the hypervisor.
HPE does not test or validate direct attached SCSI tape on VMware guest VMs and does not provide support for tape drives and tape libraries in such configurations.
VMware vStorage API for Data Protection
Yes Yes Yes Yes Yes No Yes FC SANs and shared tape devices are limited to a physical backup server
Note: Be sure to do the following:
• Refer to your data protection and archiving software documentation for supported VM backup methods.
• Refer to the VM documentation for supported backup devices.
VMware Server
Note: VMware does not support ESXi SAN-attached tape devices. VMware vStorage APIs for Data Protection (VADP) with an off-host backup server can be used to manage SAN devices.
• VADP offloads backup responsibility from ESXi hosts to a dedicated backup server or servers, reducing the load on ESXi hosts. VADP provides full-image backup and restore capabilities for all VMs, and file-based backups for Microsoft Windows and Linux VMs.
• VMs can also be set up as backup servers with locally mapped NAS shares or iSCSI backup devices. Refer to your data protection and archiving software documentation for VM support.
• VMs can also be set up for LAN backup the same as a regular client. Refer to your data protection and archiving software documentation for details.
• For recommendations on VMware VM backup and recovery to HPE StoreOnce Backup, go to hpe.com/storage/BURACompatibility, scroll down to Data Agile BURA Solution White Papers, then click the VMware hyperlink across from White Papers: Databases and Virtual Machines to view the associated white papers.
[9] Yes when using HPE StoreOnce Recovery Manager Central (RMC) only.
HPE Integrity Virtual Machines Hewlett Packard Enterprise supports, certifies, and sells HPE Integrity Virtual Machines (HPEVM) virtualization software on HPE Integrity servers.
HPEVM is an application installed on an HPE-UX Server that allows multiple unmodified operating systems (HPE-UX, Windows, and Linux) and their applications to run in VMs that share physical resources.
The HPE Virtual Server Environment (VSE) for HPE Integrity provides an automated infrastructure that can adapt in seconds with mission-critical reliability. HPE VSE allows you to optimize server utilization in real time by creating virtual servers that can automatically grow and shrink based on business priorities and service levels.
Note: The HPE Integrity VM host and VMs do support FC SAN connected tape, Virtual Library Systems (VLS) devices, and HPE StoreOnce backup systems.
• Off-host backups using HPE storage array hardware mirroring or snapshots can be used to shorten backup windows and off-load resources required for backup.
• VMs can also be set up for LAN backup the same as a regular client or media host. Refer to your data protection and archiving software documentation for details.
Microsoft Hyper-V
Note: Hewlett Packard Enterprise does not test or support Hyper-V VMs with SAN or direct-attach tape drives. The Hyper-V host or a backup server can be used to manage such devices.
• The Volume Shadow Copy Service (VSS) Hyper-V writer can be used to quiesce Windows VMs and create a snapshot on the Hyper-V host volume. VMs that cannot be quiesced can be placed in the Saved state before snapshot creation. The snapshots are then used for image or file backup of the VMs. If a VM was placed in the Saved state, Hyper-V will return the VM to its original state. Review your data protection and archiving software documentation for details.
• VMs can also be set up for LAN backup the same as a regular client. Refer to your data protection and archiving software documentation for details.
• VMs can also be set up as backup servers with locally mapped NAS shares or iSCSI backup devices. Refer to your data protection and archiving software documentation for VM support.
© Copyright 2015 Hewlett Packard Enterprise Development LP. The information contained herein is subject to change without notice. The only warranties for Hewlett Packard Enterprise products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. Hewlett Packard Enterprise shall not be liable for technical or editorial errors or omissions contained herein.
Citrix is a registered trademark of Citrix Systems, Inc. and/or one or more of its subsidiaries, and may be registered in the United States Patent and Trademark Office and in other countries. Linux is the registered trademark of Linus Torvalds in the U.S. and other countries. McAfee is a trademark or registered trademark of McAfee, Inc. in the United States and other countries. Microsoft, Windows, and Windows Server are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries. Oracle is a registered trademark of Oracle and/or its affiliates. Red Hat is a registered trademark of Red Hat, Inc. in the United States and other countries. SAP HANA is the trademark or registered trademark of SAP SE in Germany and in several other countries. VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions.
4AA5-7983ENW December 2015
Installing backup software and patches After all components on the SAN are logged in and configured, the system is ready for the installation of any supported backup software. Refer to the installation guide for your particular software package, or contact the vendor for detailed installation procedures and requirements. After installing the backup software, check with the software vendor for the latest updates and patches. If any updates or patches exist for your backup software, install them now.
Learn more at hpe.com/storage/StoreEver
Data protection and archiving software focuses on using an automated Linear Tape-Open (LTO) Ultrium tape library and/or disk-based virtual tape backup solutions. BURA Solutions combine the functionality and management of SANs, data protection and archiving software, and scaling tools to integrate tape and disk storage subsystems in the same SAN environment. Enterprise data protection can be accomplished with different target devices in various configurations, using a variety of transport methods such as the corporate communication network, a server SCSI/SAS bus, iSCSI, Fibre Channel over Ethernet (FCoE), or a FC infrastructure. BURA Solutions typically use a SAN that provides dedicated bandwidth independent of the LAN. This independence allows single or multiple backup or restore jobs to run without the network traffic caused by data protection environments. Management of the data protection and archiving software occurs over the LAN, while the data is sent over the SAN. This achieves faster data transfer speeds and reduces Ethernet traffic. Jobs and devices can be managed and viewed from either the primary server or any server or client connected within BURA Solutions that has supported data protection and archiving software installed. All servers within the BURA Solutions server group can display the same devices.
HPE Data Agile Partner Program Hewlett Packard Enterprise is dedicated to providing a rich portfolio of Backup Recovery and Archive (BURA) Solutions for our customers.
The HPE Data Agile Partner Program offers partners a programmatic framework to self-certify the interoperability of their applications across the entire HPE Storage portfolio of BURA products, including HPE StoreOnce Backup, HPE StoreAll Storage, and HPE StoreEver Tape.
The Data Agile Partner Program enables partners to learn about the HPE BURA portfolio, test and certify their applications in a dedicated Hewlett Packard Enterprise lab environment, and take advantage of unique marketing opportunities. Program members also have access to specialized training and technical assistance.
Provide powerful solutions to your customers and expand market opportunities through a partnership with HPE Storage. Learn more at hpe.com/storage/DataAgile
BURA supported components Whether you're looking to scale from entry-level workgroups to enterprise-level data centers, the HPE Data Agile BURA Compatibility Matrix has the information you need to design data protection solutions with HPE StoreOnce Backup, HPE StoreEver Tape, and HPE StoreAll Storage. Refer to table 1 for white papers and design guides documenting fully certified data protection and archive solutions built with HPE storage products and market-leading Independent Software Vendor (ISV) applications. Learn more at hpe.com/storage/BURACompatibility
Table 1 HPE Data Agile BURA Solution white papers
Cross platform design guides Design Guide for Backup and Archive
Example Configuration Guide for Backup and Archive
Tiered Data Retention for HPE Storage
White papers: Product
HPE StoreOnce Backup
HPE StoreAll Storage
HPE StoreEver Tape
HPE StoreServ 3PAR File Persona
White papers: Data Protection and Archive Vendors
AGFA Healthcare IMPAX
Citrix® ShareFile
CommVault Simpana
GE Centricity
Genetec Security Center
HPE Consolidated Archive
HPE Control Point
HPE Data Protector
EMC Networker
IBM TSM
iTernity iCAS
McAfee® VirusScan
Milestone XProtect
QStar Archive Manager
Veritas Enterprise Vault
Veritas NetBackup
Veritas Backup Exec
Symantec Protection Engine
Veeam Software
White papers: Databases and Virtual Machines
Microsoft® Exchange
Microsoft Hyper-V
Microsoft SQL
Oracle
SAP HANA®
VMware®
Supported topologies The following topologies are supported in a FC SAN, with short-wave SFPs being the only FC connection supported in HPE StoreEver and HPE StoreOnce devices. Any requirement for an extended SAN would require a SAN switch or router to which the StoreEver or StoreOnce devices can attach. Refer to the extended SAN configuration in the HPE Backup and Archive Example Configuration Guide for more details.
Point-to-point Point-to-point or Direct-Attach Fibre (DAF) connections are direct FC connections made between two nodes, such as a server and an attached tape library. This configuration requires no switch to implement, and the default for a DAF link is a private loop. [1] The storage devices are dedicated to a server in a point-to-point configuration.
Switched fabric A switched fabric topology is a network topology where network nodes connect with each other via one or more network switches. In the FC switched fabric (FC-SW) topology, devices are connected to each other through one or more FC switches. Visibility among devices in a fabric is typically controlled with zoning. FCoE is a computer network technology that encapsulates FC frames over Ethernet networks. This allows FC to use 10GbE networks (or higher speeds) while preserving the FC protocol. FCoE maps FC directly over Ethernet while being independent of the Ethernet forwarding scheme. HPE Virtual Connect FlexFabric switches are used as an option in C-series blade enclosures to support FCoE connectivity to the switched fabric. For standalone servers, converged network adapters (CNA) are used to connect to the fabric through FCoE fabric switches. A CNA, also called a converged network interface controller (C-NIC), is a computer input/output device that combines the functionality of a host bus adapter (HBA) with a network interface controller (NIC). In other words, it "converges" access to, respectively, a SAN and a general-purpose computer network.
Installation checklist Prior to installing data protection and archiving software, review the questions below to ensure that all components are configured properly and logged into the SAN.
• For any Windows 2008 servers, has the Windows feature Removable Storage Manager been removed or disabled? [2]
• Are all of the following hardware components at the minimum supported firmware revisions specified in the HPE Data Agile BURA Compatibility Matrix: servers, HBAs, FC and/or FCoE switches, tape drives, library robots, and disk-based virtual tape systems?
• Are all recommended operating system patches, service packs, updates, Service Pack for ProLiant Quality Packs (QPK), or Hardware Enablement (HWE) bundles specified in the HPE Data Agile BURA Compatibility Matrix installed on each host?
• Is the minimum supported HBA driver specified in the HPE Data Agile BURA Compatibility Matrix installed on each host?
• If you are using multi-path configurations with your HPE StoreEver Tape Libraries, have you reviewed the technical white paper HPE StoreEver Failover overview, including advanced path failover?
• Are the StoreEver Tape Library, StoreOnce Backup System, and/or StoreAll Storage System online?
• If multiple FC switches are cascaded or meshed, are all inter-switch link (ISL) ports correctly logged in?
• Are all of the host HBAs correctly logged into the FC and/or FCoE switch?
• Are all tape library robotic devices and disk-based virtual tape systems zoned, configured, and presented to each host from the FC and/or FCoE switch?
• If using zoning on the FC and/or FCoE switch, has the zone been added to the active switch configuration?
• Do you have the latest supported version of HPE Command View for Tape Libraries Software installed to manage, monitor, and configure all of your HPE StoreEver Tape Libraries?
[1] 16 Gb FC HBAs do not currently support private loop; Brocade FC HBAs only recently began supporting private loop.
[2] The Windows Removable Storage Manager (RSM) service will claim tape drives and disrupt any installed backup applications. Removable Storage Manager is no longer available as of Windows 7 and Windows 2008 R2.
• Has connectivity been verified using HPE Library and Tape Tools Software or operating-system-specific tools (Linux sg3_utils, HPE-UX System Administration Manager [SAM], Solaris cfgadm, AIX System Management Interface Tool [SMIT], etc.)?
• Is the minimum patch/service pack level support installed for the data protection and archiving software?
• If you're having issues with the FC connections, they might need to be cleaned. Avoid touching fingers to any surface that is being used for cleaning. Recommendations for cleaning of FC cables and small form-factor pluggable (SFP) connections:
– Air dusters are used to blow loose particles from the optical fiber connector end-face or to dry up solvent (isopropyl alcohol) residue after a wet cleaning. Not all air dusters are the same; an optic-grade duster should be used.
– Use lint-free wipes to gently wipe the ferrule and the end-face surface of the connector with the lint-free pad. Make sure the pad makes full contact with the end-face surface.
– Use lint-free wipes and isopropyl alcohol. Gently wipe the ferrule and the end-face surface of the connector with an alcohol pad. Make sure the pad makes full contact with the end-face surface. Then wipe the end-face surface on a dry lint-free wipe.
– In-adapter ferrule cleaners, or in-situ cleaning: this semi-automated fiber optic cleaning method is specially designed for fiber optic connectors and SFP ends. These cleaners can remove contaminants that forced air will not remove. An in-situ device can make a tape drive FC port worse. Both 1.25 mm and 2.5 mm versions are available.
HPE StoreOnce Catalyst With HPE StoreOnce Catalyst movement of deduplicated data across the enterprise is even easier Therersquos no need to deduplicate and rehydrate data at each step data can be replicated from sites to a central data center or a disaster recovery site in deduplicated form reducing network bandwidth requirements All backup and replication jobs may be seamlessly managed by the backup application at the central data center
Key features of StoreOnce Catalyst:
• Catalyst over Fibre Channel provides all the ISV control and source-side deduplication benefits of current StoreOnce Catalyst, but via a Fibre Channel fabric
• Federated Catalyst allows Catalyst stores to span nodes, simplifying backup management and optimizing available storage in large environments
• Catalyst stores allow backup applications to utilize low-bandwidth deduplication (server-side deduplication on)3 or high-bandwidth deduplication (server-side deduplication off)4
HPE StoreOnce Catalyst delivers a single, integrated, enterprise-wide deduplication algorithm. It allows the seamless movement of deduplicated data across the enterprise to other StoreOnce Catalyst systems.
For more detailed information regarding which Catalyst features are supported by each backup software application, reference the Catalyst Apps Support section under StoreOnce Backup Systems in the HPE Data Agile BURA Compatibility Matrix.
3 By specifying the Primary Transfer Policy as Low-Bandwidth on the Catalyst store defined on the StoreOnce appliance, the backup application's integrated Catalyst plug-in will perform deduplication at the backup server before backup data is sent to the StoreOnce appliance.
4 By specifying the Primary Transfer Policy as High-Bandwidth on the Catalyst store defined on the StoreOnce appliance, the backup server does not deduplicate the data; data deduplication occurs at the StoreOnce appliance.
Technical white paper Page 8
Preparing SAN switches and hosts for failover with HPE StoreEver Tape Libraries
Hewlett Packard Enterprise provides High Availability Failover features for HPE StoreEver ESL G3 Tape Libraries and the HPE StoreEver MSL2024, MSL4048, MSL6480, MSL8048, and MSL8096 Tape Libraries with HPE StoreEver LTO-5 Ultrium and later generation FC tape drives. Failover features are not supported on the HPE StoreEver EML and ESL E-Series Tape Libraries.
Basic failover
• Supported on HPE StoreEver LTO-5 and LTO-6 Ultrium FC tape drives, as data path failover requires a dual-ported drive
• Supported by a combination of tape drive and library firmware features to create a new FC path to a drive or library if the original path is lost
• Most applications recognize the new path, and some applications will automatically retry commands after the original path is lost; some applications might require user intervention to begin using the new path
• Is available for the HPE StoreEver MSL2024, MSL4048, MSL6480, MSL8048, MSL8096, and StoreEver ESL G3 Tape Libraries
Advanced failover
• Supported only on HPE StoreEver LTO-6 Ultrium FC tape drives
• Requires host driver support, in addition to tape drive and library firmware features, to manage multiple paths across multiple SANs, present a single drive or library path to applications, and automatically transfer commands to the new path if the original path is lost
• The transfer to the failover path is invisible to most applications, avoiding the need for user intervention
• Is available for the HPE StoreEver ESL G3 and StoreEver MSL6480 Tape Libraries
• Is not available for the StoreEver 1/8 G2 Tape Autoloader or the StoreEver MSL2024, MSL4048, MSL8048, or MSL8096 Tape Libraries
LTO-7 failover
• Supported on LTO-7 and later generation FC tape drives
• Requires host driver support, in addition to tape drive and library firmware features, to manage multiple paths across multiple SANs, present a single drive or library path to applications, and automatically transfer commands to the new path if the original path is lost
• The transfer to the failover path is invisible to most applications, avoiding the need for user intervention
• Is available only for ESL G3 Tape Libraries; MSL6480 support is expected in 2016
Prerequisites for using basic data and control path failover
• The library drive ports must be attached to an FC SAN that supports N_Port ID Virtualization (NPIV), and NPIV must be enabled (most recent Brocade switch firmware versions have NPIV enabled by default). Refer to the vendor documentation for your switch regarding commands to verify whether NPIV is enabled.
– To enable or verify NPIV on a Brocade switch running Fabric OS version 6 or newer using the Brocade Web Tools GUI, click the Port Admin tab and select the FC or FCoE port you want to configure. From the context tabs at the top of the Web Tools GUI, select View, then choose Advanced. For the selected port, under the General tab you should see all of the details for the port, including NPIV Enabled with a value of true. There should also be an NPIV tab with a drop-down list of Enable, Disable, and Max Login; select Enable if NPIV was not already enabled.
– While all current Cisco switches support NPIV, most do not have NPIV enabled by default, and the Cisco MDS 9148 may disable NPIV when power cycled. To enable NPIV on a Cisco switch, use Cisco Device Manager > Admin > Feature Control, or use the Cisco CLI commands to show NPIV status and enable NPIV.
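For reference, the Cisco CLI path can be sketched as follows (a sketch based on standard NX-OS commands; verify the exact syntax against the documentation for your switch model and firmware):

```
switch# show npiv status                      ! display whether NPIV is enabled
switch# configure terminal
switch(config)# feature npiv                  ! enable the NPIV feature
switch(config)# end
switch# copy running-config startup-config    ! persist the setting
```

Saving the running configuration matters here because, as noted above, the MDS 9148 may otherwise lose the NPIV setting when power cycled.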
• StoreEver Ultrium tape drives with an 8 Gb connection need the fill word set to arb(ff); with 4 Gb connections, set the fill word to idle. Refer to the vendor documentation for your switch regarding commands to set the fill word for a single port. For a Brocade switch running Fabric OS version 6 or newer, the following command can be used to verify the configuration, including the fill word, for port 27:
portcfgshow 27
To set the fill word for port 27 to arb(ff) using the numeric mode notation (mode 3)5, run the following command:
portcfgfillword 27 3
To set the fill word for port 27 to idle using the numeric mode notation, run the following command:
portcfgfillword 27 0
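To confirm the setting took effect, the fill word can be checked in the portcfgshow output. The sketch below parses a captured sample so it runs anywhere (the sample text is illustrative; on the switch itself, simply run portcfgshow 27 and read the Fill Word lines):

```shell
# Illustrative fragment of "portcfgshow 27" output captured from a switch
portcfg_sample='Area Number:              27
Fill Word(On Active)      3(A-A)
Fill Word(Current)        3(A-A)'

# Show only the Fill Word lines; mode 3 here corresponds to arb(ff)
printf '%s\n' "$portcfg_sample" | grep 'Fill Word'
```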
Refer to the section titled Hardware-specific requirements for basic failover in the HPE StoreEver Tape Libraries Failover User Guide for Brocade and Cisco firmware recommendations
• The drive port FC topology must be in Fabric mode, and the switch side must be set to F-port or Auto Sense.
• The host FC port must have a physical path to both the first port and the secondary (passive) port on the FC drive.
• For basic data path failover with port zoning, the host FC port and both FC ports on the drive need to be within the same zone for failover to work.
• For basic data path failover with World Wide Port Name (WWPN) zoning, the host FC WWPN and a single FC port on the drive need to be in the zone.
• For basic control path failover with port zoning, the host FC ports and the FC ports on both the active and secondary drives chosen for basic control path failover need to be in the same zone.
• For basic control path failover with WWPN zoning, the host FC WWPN and the basic control path failover WWPN assigned to the library must be in the same zone. The library WWPN is not the same as the WWPN of the drive that is hosting the library.
• Hosts connecting to the library may need to be rebooted if the operating system does not support dynamic device detection.
• Applications on hosts may need to be reconfigured to recognize the new library World Wide Name (WWN).
Prerequisites for using LTO-7 failover or advanced data and control path failover
• For LTO-7 failover, when using two FC host bus adapters (HBAs) in a server, both FC HBAs must be from the same manufacturer. The LTO-7 failover driver does not work correctly if the HBAs are different.
• For advanced data path failover and LTO-7 failover, the host must have a physical path to both the first port and the secondary port on the FC drive. For full failover capabilities, the two drive FC ports should be connected to different switches, and the host FC ports should also be connected to the same two switches.
• All drive ports must be zoned in the respective switches.
For detailed information on using failover with HPE StoreEver Tape refer to the HPE StoreEver Tape Libraries Failover User Guide
5 Numeric mode 3 attempts hardware arbff-arbff (mode 1) first. If the attempt fails to go into active state, this command executes software idle-arb (mode 2). Mode 3 is preferable to modes 1 and 2, as it captures more cases.
Native backup commands
Native backup commands (see table 2) are limited in their ability to handle complicated backups and restores in multi-host SANs. They are not guaranteed to provide robust error handling or increase performance throughput. Use of these commands in a user-developed script is not recommended with HPE StoreEver Tape Libraries or HPE StoreOnce and HPE StoreAll disk-based backup solutions in a shared storage environment.
Caution: Native backup commands do not support SCSI reserve/release; therefore, using backup commands or scripts during backup or restore operations could result in data loss in an environment where the devices used for backups are shared.
Table 2. Supported native commands

SUPPORTED UTILITIES      HP-UX    SOLARIS    AIX    LINUX    WINDOWS
Tape drive commands
tar                      Yes      Yes        Yes    Yes      No
dd (dump)                Yes      Yes        Yes    Yes      No
pax                      Yes      Yes        Yes    Yes      No
mt                       Yes      Yes        Yes    Yes      No
make_tape_recovery       Yes      No         No     No       No
Library and auto-changer commands
mc                       Yes      No         No     No       No
mtx                      No       No         No     Yes      No
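As an illustration of the syntax involved, the sketch below archives a directory with tar; a temporary file stands in for the tape device so the sketch is runnable anywhere (on a real host the target would be a device node such as /dev/st0, and the caution above about shared devices applies):

```shell
# Create some sample data to archive
mkdir -p /tmp/bura_demo
echo "payload" > /tmp/bura_demo/file1.txt

# Archive it; with a tape drive this would be: tar -cvf /dev/st0 /tmp/bura_demo
tar -cf /tmp/bura_demo.tar -C /tmp bura_demo

# List the archive contents to verify the write
tar -tf /tmp/bura_demo.tar
```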
Linear Tape File System
The Linear Tape File System (LTFS) makes tape self-describing, file-based, and easy to use, while allowing users to use standard file operations to access, manage, and share files on tape with an interface that behaves like a hard disk. In addition, LTFS provides the ability to share data across platforms, as you would with a USB drive or memory stick. LTFS is currently supported on Windows, Mac, and Linux. HPE StoreOpen Standalone and HPE StoreOpen Automation are a set of utilities that provide easy installation, configuration, and management of a tape drive or library for use with LTFS. To use HPE StoreOpen, simply connect your tape drive or tape library to a supported host, following the information noted in the respective sections within this guide, prior to the installation of the HPE StoreOpen software. Information and download links for native LTFS drivers, source code, HPE StoreOpen Standalone, and HPE StoreOpen Automation can be found at hpe.com/storage/LTFS
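By way of illustration, a typical LTFS session on Linux looks like the following sketch (the device path /dev/sg3, the mount point, and the file name are assumptions for this example; the mkltfs and ltfs utilities are provided by the LTFS software):

```
mkltfs -d /dev/sg3                    # format a cartridge with an LTFS volume
mkdir -p /mnt/ltfs
ltfs -o devname=/dev/sg3 /mnt/ltfs    # mount the tape as a file system
cp /data/report.txt /mnt/ltfs/        # standard file operations now apply
umount /mnt/ltfs                      # unmount before ejecting the cartridge
```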
Software utilities that may disrupt solution connectivity
Software utilities common to SAN environments can interfere with backup and restore operations. These utilities include system management agents, monitoring software, and tape drive and system configuration utilities. A list of known software utilities and the operating systems on which they are found is shown in table 3.
Caution: Use of software utilities during backup or restore operations could result in data loss.
Table 3. HBA and software utilities

HBA configuration utilities
• Emulex OneCommand Manager (OCM), HBAnyware: Windows, Linux, Solaris
• QLogic QConvergeConsole (QCC): Windows, Linux, Solaris
• QLogic Host Connectivity Manager (HCM): Windows, Linux
• Broadcom Advanced Control Suite 3 (BACS3): Windows, Linux

Other software utilities
• HPE Library and Tape Tools utility: Windows, HP-UX, Linux, Solaris
• HPE Systems Insight Manager (SIM) management agents: Windows
• System Administration Manager (SAM): HP-UX 11.23
• System Management Homepage (SMH): HP-UX 11.31
• SCSI Generic (SG) commands: Linux
• System Management Interface Tool (SMIT): AIX
• Windows Server® backup: Windows
• Removable Storage Manager (RSM)6: Windows
FC/FCoE switch zoning recommendations
Due to the complexities of multi-hosting tape devices on SANs, Hewlett Packard Enterprise strongly recommends using switch zoning tools to keep the backup, restore, and archive environment simple and less susceptible to the effects of "chatty," changing, or problematic SANs. Zoning provides a way for servers, disk arrays, and tape controllers to see only the hosts and targets they need to see and use.
The benefits of zoning include, but are not limited to:
• The potential to greatly reduce target and logical unit number (LUN) shifting
• Reducing stress on backup devices by polling agents
• Reducing the time it takes to debug and resolve anomalies in the backup, restore, and archive environment
• Reducing the potential for conflict with untested third-party products
6 Removable Storage Manager is no longer available as of Windows 7 and Windows Server 2008 R2.
Zoning may not always be required for configurations that are small or simple (i.e., a single switch or a single inter-switch link [ISL]). Zoning can be helpful in larger SANs for simplifying device discovery and reducing chatter between devices. Hewlett Packard Enterprise recommends the following for determining how and when to use zoning:
• Use zoning by HBA port. Zoning by HBA port is implemented by creating a specific zone for each server or host by World Wide Port Name (WWPN) and adding only those storage elements to be utilized by that host. Zoning by HBA port prevents a server from detecting any other devices or servers on the SAN, and it simplifies the device discovery process.
• Disk and tape on the same HBA is supported. For larger SAN environments, it is recommended to also add storage-centric zones for disk and backup targets. This type of zoning is done by adding overlapping zones with disk and backup targets separated. See figure 1 and figure 2 below.
• FC zoning can be implemented using physical switch port numbers, WWN IDs, or user-defined switch aliases. It is important to note that physical ports and aliases can change due to recabling or switch configuration restores, but WWN IDs do not. Hewlett Packard Enterprise recommends zoning using WWN IDs.
The figures below represent example configurations but are not exhaustive.
Figure 1. Storage-centric zoning, same HBA port (overlapping zones)
Figure 2. Storage-centric zoning, redundant paths. Also applies to dual-port HBAs and tape drives.
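To make the zoning-by-HBA-port recommendation concrete, a Brocade Fabric OS sequence might look like the following sketch (all WWPNs, alias names, and the configuration name are made up for illustration; verify command syntax against your Fabric OS version):

```
alicreate "host1_hba0", "10:00:00:05:1e:xx:xx:xx"    # host HBA port WWPN
alicreate "tape_drv1", "50:01:10:a0:xx:xx:xx:xx"     # tape drive port WWPN
zonecreate "host1_tape_z", "host1_hba0; tape_drv1"   # host-centric tape zone
cfgadd "prod_cfg", "host1_tape_z"                    # add the zone to the config
cfgenable "prod_cfg"                                 # activate the zoning config
```

Keeping disk targets in a separate zone from the tape zone, as in figure 1, follows the storage-centric overlap described above.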
Configuration and operating system details
Windows Server
Windows 2008, Windows 2008 R2, Windows 2012, and Windows 2012 R2
Installing the HBA device driver
All HPE ProLiant server software, firmware, and drivers for Windows servers can be updated using the latest HPE Service Pack for ProLiant (SPP) from the HPE support website:
1. From an Internet browser, go to hpe.com
2. Click on Support
3. Under Product Support, click HPE Servers, Storage and Networking
4. In the Enter product name or number box, enter Service Pack for ProLiant
5. Click Get drivers, software & firmware
6. Select the Windows Server operating system version that is installed on the ProLiant system that will be updated
7. Expand Application (Entitlement Required) - System Management, then select the HPE Service Pack for ProLiant (American, International) hyperlink
8. Below the details for the software you should notice several tabs. Select the Release Notes tab, then scroll to verify whether there are any Upgrade Requirements or to view additional information
9. Select the Installation Instructions tab to verify how to install the SPP. Be sure to copy or save the installation instructions
Note: A hyperlink to the HPE Service Pack for ProLiant Release Notes is provided within the Release Notes tab. The HPE Service Pack for ProLiant Release Notes provide detailed instructions regarding the SPP, including a summary of changes, compatibility details for migrating from an older version of the SPP, supported operating systems, requirements, component prerequisites, deployment instructions, and known limitations.
10. Click on the Obtain software hyperlink above the various tabs to download the Service Pack for ProLiant to your server
Note: To download the HPE Service Pack for ProLiant, you must have:
1. An HPE Passport account (a sign-in link is provided)
2. Either a warranty, HPE Care Pack, or support agreement linked to your HPE Support Center profile
Click on the various links that are provided for more information on how warranties, HPE Care Packs, and support agreements enable access to select downloads or site functions.
11. Booting your Windows server to the SPP (offline mode) will allow you to upgrade firmware for any or all components that are flagged as requiring an upgrade, while booting to the operating system and then running the SPP (online mode) will allow you to install any or all Hewlett Packard Enterprise related drivers and software packages
Note: Please refer to the HPE Service Pack for ProLiant Release Notes, referenced above, if any issues are encountered when installing the SPP. Specifically, review the sections Deployment Instructions and Components Changes.
12. A reboot might be required following the SPP installation
To manually install the latest HPE-supported Brocade, Emulex, or QLogic driver kit from the HPE support website:
1. From an Internet browser, go to hpe.com
2. Click on Support
3. Under Product Support, click HPE Servers, Storage and Networking
4. In the Enter product name or number box, enter your HBA model (for example, AP770A, AJ763A, AJ764A, CN1000Q, SN1000Q, CN1100E, SN1100E, or NC553m) and click Go
5. Click Get drivers, software & firmware
6. Select the Windows Server operating system version that is installed on the ProLiant system in which the HBA is installed
7. Expand Driver - Storage Fibre Channel and click on the appropriate driver hyperlink (if more than one version of the driver is listed, verify the latest supported version listed in the latest HPE Data Agile BURA Compatibility Matrix)
8. Below the details for the driver you should notice several tabs. Select the Release Notes tab, then scroll to verify whether there are any Upgrade Requirements or Supported Devices and Features, or to view additional information
9. Select the Installation Instructions tab to verify how to install the driver. Be sure to copy or save the installation instructions
10. Click on the Download tab to copy the file to your server
11. A reboot might be required following the driver installation
Installing and configuring Microsoft iSCSI Initiator
Microsoft iSCSI Initiator is installed natively on Windows Server 2008, Windows Server 2008 R2, Windows Server 2012, and Windows Server 2012 R2. On these operating systems, no installation steps are required.
To connect to an iSCSI target device by using Quick Connect:
1. Click Start, type iSCSI in Start Search, and then, under Programs, click on iSCSI Initiator
2. On the User Account Control page, click Continue
3. If this is the first time that you have launched Microsoft iSCSI Initiator, you receive a prompt that says the Microsoft iSCSI service is not running. You must start the service for Microsoft iSCSI Initiator to run correctly. Click Yes to start the service. The Microsoft iSCSI Initiator Properties dialog box opens, and the Targets tab is displayed.
4. On the Targets tab, type the name or the IP address of the target device in the Quick Connect text box, and then click Quick Connect. The Quick Connect dialog box is displayed.
5. If multiple targets are available at the specified target portal, a list is displayed. Click the desired target, and then click Connect.
6. Click Done
To connect to an iSCSI target by using advanced settings:
1. Click Start, type iSCSI in Start Search, and then, under Programs, click on iSCSI Initiator
2. On the User Account Control page, click Continue
3. If this is the first time that you have launched Microsoft iSCSI Initiator, you receive a prompt that says the Microsoft iSCSI service is not running. You must start the service for Microsoft iSCSI Initiator to run correctly. Click Yes to start the service. The Microsoft iSCSI Initiator Properties dialog box opens, and the Targets tab is displayed.
4. Click the Discovery tab
5. To add the target portal, click Discover Portal, and then, in the Discover Portal dialog box, type the IP address or name of the target portal to connect to. If desired, you can also type an alternate TCP port to be used for the connection.
6. Click OK
Installing the HPE StoreEver Tape drivers
Both the HPE tape and HPE changer drivers for Windows must be installed before the advanced path failover drivers are installed. The tape and changer driver bundle can be downloaded and then installed as follows:
1. Go to hpe.com/storage/tapecompatibility
2. Under Tape tools, select HPE StoreEver Tape Drivers
3. A webpage will open with RECOMMENDED HPE StoreEver Tape Drivers for Windows displayed
4. Below the details for the driver you should notice several tabs. Select the Release Notes tab, then scroll to verify whether there are any Upgrade Requirements or to view additional information
5. Select the Installation Instructions tab to verify how to install the driver. Be sure to copy or save the installation instructions
6. Click on the Download tab, then save the file
7. Follow the installation instructions from step 5 to install the HPE tape and HPE changer drivers
8. After installation of the tape and changer drivers, use Windows Device Manager to confirm that all of the configured paths are accessible to the operating system. If the expected number of paths is not available, check the host and SAN configuration. After all of the expected paths are available to the host, the advanced path failover drivers can be installed.
Installing the HPE StoreEver Tape advanced path failover drivers
Windows (2008 R2, 2012, and 2012 R2)
1. Go to hpe.com/support/storage
2. Select Tape Storage
3. Click Enterprise Class Tape Libraries (for ESL G3) or Tape Libraries (for MSL6480)
4. Click HPE StoreEver ESL G3 Tape Libraries or HPE StoreEver MSL6480 Tape Library
5. With the Download options tab selected, click Get drivers, software & firmware
6. For the ESL G3, select your product. For the MSL6480, skip to the next step.
7. Under Operating systems, select OS Independent
8. Click Driver - Storage Tape
9. Click Obtain software for the HPE StoreEver High Availability Failover Driver for your Windows operating system
10. Click Select to continue. An HPE Passport account is required.
11. After logging in using your HPE Passport, complete the required fields, then read and accept the software license agreement. Click Next.
12. On the following page, select the Download tab for the HPE StoreEver High Availability Failover Driver for the version of Windows that is installed on your server
13. If you saved the file, double-click on the file to launch the installer for the Tape Upper Bus Storage Filter driver
14. Restart when requested
15. After the system restarts, the installer will continue installing the Tape Multi-Path Intermediate Class driver. The installation process creates the directory C:\Program Files\Hewlett-Packard\Failover.
For more detailed information related to installing the HPE StoreEver Tape advanced path failover drivers for Windows or the HPE StoreEver Tape drivers for Windows, refer to the HPE StoreEver Tape Libraries Failover User Guide, specifically the section titled Installing and using Windows advanced path failover drivers.
Windows Server best practices
Persistent binding
Target and LUN shifting can occur with Windows operating systems when disk or tape devices are connected or disconnected, a device is busy during discovery, or a device failure occurs that causes the device to no longer be seen on the SAN. Hewlett Packard Enterprise strongly recommends using zoning by HBA port to resolve target and LUN shifting. Refer to the earlier section FC/FCoE switch zoning recommendations for more on zoning by HBA port.
The method by which the Windows operating system enumerates devices is the cause of most target and LUN shifting. Windows enumerates devices as they are discovered during a scan sequence. They are enumerated with device handles such as TAPE0, TAPE1, and so on. The Windows device scan sequence goes in the order of bus, target, and LUN:
• Bus is the HBA PCI slot
• Target is representative of a WWN
• LUN is representative of a device behind the WWN
The order of discovery is:
• The lowest bus (bus 0)
• Target 0 on bus 0
• The LUNs on target 0
• Target 1, and so on, until all targets connected to that HBA are discovered
The process then continues to the next HBA and its targets and LUNs. A common cause of device shifting is a busy tape device: a busy tape device cannot respond in time for Windows to enumerate it, so the device is essentially skipped in the enumeration sequence, shifting all other device numbers.
Note: The Emulex OneCommand Manager Application Kit and the QLogic QConvergeConsole Utility both have proprietary persistent binding options. Review the appropriate vendor documentation for details.
Data protection and archiving software can also communicate with a tape device by using the Windows device name. As noted, the device name may shift and cause a problem for the data protection and archiving software. Some data protection and archiving software monitors for this condition and will adjust accordingly. Other data protection and archiving software must wait for a server reboot and subsequently scan for devices; alternatively, the data protection and archiving software must be manually reconfigured to match the current device list. If your data protection and archiving software requires persistent device mapping, use the software's device configuration wizard to ensure proper configuration.
Red Hat and SUSE Linux Server
RHEL 5 Update 11 (AS/ES/WS), RHEL 6 Update 6 (AS/ES/WS), RHEL 7 (AS/ES/WS), SLES 11 SP3 (x86/x64), and SLES 12
Note: Hewlett Packard Enterprise recommends installing the kernel development option (source code) when installing any Linux server. Availability of the source code ensures the ability to install additional device support software that will be compiled into the kernel.
Installing the HBA drivers
All HPE ProLiant server software, firmware, and drivers can be updated using the latest SPP from the HPE support website:
1. From an Internet browser, go to hpe.com
2. Click on Support
3. Under Product Support, click HPE Servers, Storage and Networking
4. In the Enter product name or number box, enter Service Pack for ProLiant
5. Click Get drivers, software & firmware
6. Select the Linux Server operating system version that is installed on the ProLiant system that will be updated
7. Under Application (Entitlement Required) - System Management, select the HPE Service Pack for ProLiant (American, International) hyperlink
8. Below the details for the software you should notice several tabs. Select the Release Notes tab, then scroll to verify whether there are any Upgrade Requirements or to view additional information
9. Select the Installation Instructions tab to verify how to install the SPP. Be sure to copy or save the installation instructions
Note: A hyperlink to the HPE Service Pack for ProLiant Release Notes is provided within the Release Notes tab. The HPE Service Pack for ProLiant Release Notes provide detailed instructions regarding the SPP, including a summary of changes, compatibility details for migrating from an older version of the SPP, supported operating systems, requirements, component prerequisites, deployment options, and known limitations.
10. Click on the Obtain software hyperlink above the various tabs to download the Service Pack for ProLiant to your server
Note: To download the HPE Service Pack for ProLiant, you must have:
1. An HPE Passport account (a sign-in link is provided)
2. Either a warranty, HPE Care Pack, or support agreement linked to your HPE Support Center profile
Click on the various links that are provided for more information on how warranties, HPE Care Packs, and support agreements enable access to select downloads or site functions.
11. Booting your server to the SPP (offline mode) will allow you to upgrade firmware for any or all components that are flagged as requiring an upgrade, while booting to the OS and then running the SPP (online mode) will allow you to install any or all Hewlett Packard Enterprise related drivers and software packages
Note: Please refer to the HPE Service Pack for ProLiant Release Notes, referenced above, if any issues are encountered when installing the SPP. Specifically, review the sections Deployment Instructions and Components Changes.
12. A reboot might be required following the SPP installation
To manually install the latest HPE-supported Brocade, Emulex, or QLogic driver kit from the HPE support website:
1. From an Internet browser, go to hpe.com
2. Click on Support
3. Under Product Support, click HPE Servers, Storage and Networking
4. In the Enter product name or number box, enter your HBA model (for example, AP770A, AJ763A, AJ764A, CN1000Q, SN1000Q, CN1100E, SN1100E, or NC553m) and click Go
5. Click Get drivers, software & firmware
6. Select the Linux Server operating system version that is installed on the ProLiant system in which the HBA is installed
7. Expand Driver - Storage Fibre Channel, then click on the appropriate driver hyperlink (if more than one version of the driver is listed, verify the latest supported version listed in the latest HPE Data Agile BURA Compatibility Matrix)
8. Below the details for the driver you should notice several tabs. Select the Release Notes tab, then scroll to verify whether there are any Upgrade Requirements or to view additional information
9. Select the Installation Instructions tab to verify how to install the driver. Be sure to copy or save the installation instructions
10. Click on the Download tab to copy the file to your server
11. A reboot might be required following the driver installation
Installing the Linux OPEN-iSCSI module
You can install and use the iscsi-initiator-utils package (Red Hat®) or the open-iscsi module (SUSE). Download and install either package using your distribution's package manager (yum or YaST, for example). Detailed instructions for iscsiadm can be found in the iscsiadm man pages.
Prior to discovering available iSCSI target devices on an HPE Storage System from a Linux server, the target requires the Linux server's iSCSI initiator name. This name is found in the /etc/iscsi/initiatorname.iscsi file.
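The initiator name can be pulled from that file with a one-liner such as the sketch below. A sample line is piped in here so the sketch runs anywhere; on the server itself, point awk at /etc/iscsi/initiatorname.iscsi instead (the IQN shown is made up):

```shell
# The file contains a line of the form: InitiatorName=<iqn>
printf 'InitiatorName=iqn.1994-05.com.redhat:example\n' |
  awk -F= '/^InitiatorName/ {print $2}'
# prints: iqn.1994-05.com.redhat:example
```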
Once the iSCSI initiator name has been determined, discover the available iSCSI target devices on an HPE Storage System by typing the following command, where x.x.x.x is the IP address of the HPE Storage System:
iscsiadm --mode discovery --type sendtargets --portal x.x.x.x
To connect to the HPE Storage System targets, type the following command for each discovered target, where target_name is the name returned by the iscsiadm discovery:
iscsiadm --mode node -T target_name --login --portal x.x.x.x
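Discovery typically returns one line per target in the form portal,tag target_name; the target names feed the login command above. The sketch below parses a captured sample so it runs without an iSCSI target present (the portal address and IQNs are made up for illustration):

```shell
# Illustrative output from: iscsiadm --mode discovery --type sendtargets --portal 10.0.0.5
discovery_output='10.0.0.5:3260,1 iqn.2015-03.com.hpe:storeonce.vtl0
10.0.0.5:3260,1 iqn.2015-03.com.hpe:storeonce.vtl1'

# Extract each target name (second field) and build a login command for it
printf '%s\n' "$discovery_output" | awk '{print $2}' | while read -r target; do
  echo "iscsiadm --mode node -T $target --login --portal 10.0.0.5"
done
```

The echo is left in place so the sketch only prints the commands; remove it to execute the logins on a live host.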
Storage HBAs with Linux servers
Hewlett Packard Enterprise supports the Linux in-box driver (the driver supplied with the operating system distribution). However, not all hardware configurations support the in-box driver. To determine if your configuration supports the in-box driver, see the HBAs, CNAs, Flex Fabric Adapters and Server LOMs Support Matrix (Linux, Citrix, VMware, and Windows), which is available on the HPE SPOCK website. You must sign up for an HPE Passport to access SPOCK.
Whether you are using the in-box drivers or the out-of-box drivers, Hewlett Packard Enterprise recommends that you install the HPE Fibre Channel Enablement Kit, because it provides additional libraries and configuration utilities that enable HPE Fibre Channel storage arrays to work properly with Linux.
Note: If you are using any HPE management applications, you need the HBA API libraries that come with the HPE-fc-enablement RPM.
Downloading and installing the Fibre Channel Enablement Kit for Linux
Note: There has been a change to the enablement kits released after 29 April 2014: they are now vendor-specific kits. Hewlett Packard Enterprise recommends that you uninstall any previous kits prior to installation of the latest version of the enablement kit.
1 From an Internet browser go to hpecom
2 Click on Support
3 Under Product Support click HPE Servers Storage and Networking
4 In the Enter product name or number box enter your HBA model (for example AP770A AJ763A AJ764A CN1000Q CN1100E or NC553m) and click Go
5 Click Get drivers software amp firmware
6 Select the Linux Server operating system version of the ProLiant system in which the HBA is installed
7 Select the SoftwaremdashStorage ControllersmdashFC HBA hyperlink and click on the HPE Fibre Channel Enablement Kit for Linux (American International)
8. Below the details for the driver, you should notice several tabs. Select the Release Notes tab, then scroll to verify whether there are any Upgrade Requirements or to view additional information.
9. Select the Installation Instructions tab to verify how to install the HPE Fibre Channel Enablement Kit for Linux. Be sure to copy or save the installation instructions.
10. Click the Download tab to copy the RPM file to your server.
11. Browse to the directory that you downloaded the RPM to.
12. Follow the Installation Instructions that you copied or saved in step 9.
13. A reboot is required after the installation for the updates to take effect and to maintain hardware stability.
14. Verify that the host has successfully discovered all the expected devices (tape drives, library robotic devices, and disk-based backup devices) using one of the following methods:
- Review the devices listed from running the command cat /proc/scsi/scsi
- Review the output from the sg_inq command (which requires that sg3_utils is installed) for any of the /dev/sg devices listed in the output of the sg_map command. See figure 3 as an example.
Figure 3 Verifying devices using sg_map and sg_inq commands
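The verification step above lends itself to scripting. The helper below counts device types in a /proc/scsi/scsi-style listing; the sample entries (vendor, model, and revision strings) are made up for illustration, and on a real Linux host you would pipe in the live output of cat /proc/scsi/scsi instead.

```shell
# Count entries of a given SCSI device type in a /proc/scsi/scsi-style listing.
count_scsi_type() {
  # $1 = SCSI device type string; reads the listing on stdin
  grep -c "Type:[[:space:]]*$1"
}

# Illustrative sample listing (not from a real host)
sample='Host: scsi0 Channel: 00 Id: 00 Lun: 00
  Vendor: HP       Model: Ultrium 6-SCSI   Rev: 354W
  Type:   Sequential-Access              ANSI  SCSI revision: 06
Host: scsi0 Channel: 00 Id: 01 Lun: 00
  Vendor: HP       Model: MSL6480          Rev: 520C
  Type:   Medium Changer                 ANSI  SCSI revision: 05'

printf '%s\n' "$sample" | count_scsi_type "Sequential-Access"   # tape drives
printf '%s\n' "$sample" | count_scsi_type "Medium Changer"      # library robotics
```

On a live server, replace the sample with `cat /proc/scsi/scsi | count_scsi_type "Sequential-Access"` and compare the counts against the devices zoned to the host.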
Installing the HPE StoreEver Tape advanced path failover drivers
Red Hat Enterprise Linux Server 6.2 (x86_64), 6.3 (x86_64), 6.4 (x86_64), 6.5 (x86_64), and 6.6 (x86_64)
The advanced path failover drivers for Linux replace the normal SCSI tape and SCSI generic drivers. For devices that do not support advanced path failover, the drivers pass all SCSI commands through the same code path that is followed when the standard drivers are loaded; commands for devices that do support failover are routed through the new PFO driver.
1. Go to hpe.com/support/storage.
2. Select Tape Storage.
3. Click Enterprise Class Tape Libraries (for ESL G3) or Tape Libraries (for MSL6480).
4. Click HPE StoreEver ESL G3 Tape Libraries or HPE StoreEver MSL6480 Tape Library.
5. In the Download options tab, click Get drivers, software & firmware.
6. For the ESL G3, select your product. For the MSL6480, skip to the next step.
7. Under Operating systems, select OS Independent.
8. Expand Driver - Storage Tape, then select the appropriate driver hyperlink.
9. Click Obtain software for the HPE StoreEver High Availability Failover Driver for your operating system.
10. Click Select. An HPE Passport account (a sign-in link is provided) is required.
11. After logging in using your HPE Passport, complete the required fields, then read and accept the software license agreement. Click Next.
12. On the following page, select the Download tab for the HPE StoreEver High Availability Failover Driver for the version of Red Hat that is installed on your server.
13. To install the drivers, run the following command:
rpm -ivh <filename>.rpm
14. In some cases, the server will need to be rebooted to complete the installation. Check the instructions provided by the RPM file output and reboot the server if requested.
15. The driver revision number indicates the build date of the driver and can be viewed by running:
cat /proc/scsi/sg/version
16. You can view the status of a device that is controlled by the failover driver by reading a file in the /sys file system. For example, to see the path status for /dev/sg3:
cat /sys/class/pfo/pfo3/paths
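The per-device status files can also be walked in a loop. This sketch assumes the sysfs layout shown in step 16 (/sys/class/pfo/pfoN/paths); the SYSROOT variable is an assumption added so the loop can be tried against a copied tree on a host without the failover driver loaded.

```shell
# Dump the path status of every PFO-managed device found under SYSROOT.
SYSROOT=${SYSROOT:-/sys}

show_pfo_paths() {
  for p in "$SYSROOT"/class/pfo/pfo*/paths; do
    [ -e "$p" ] || continue    # skip if no PFO devices are present
    echo "== $p =="
    cat "$p"
  done
}

show_pfo_paths
```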
Enabling advanced path failover on a device while the driver is running
If a device has any advanced path failover feature disabled when advanced path failover is enabled, the device will reset itself, removing the old /dev file. When the device comes back up, it will be recognized as an advanced path failover device and will then operate normally as one. It may not have the same /dev file name as before the change.
Disabling advanced path failover on a device while the driver is running
Disabling advanced path failover while a device is running is not recommended because the paths will not be cleanly removed and then reassociated. If advanced path failover is disabled on any device, the Linux server will need to be rebooted. When possible:
1 Power down the Linux server cleanly
2 Disable advanced path failover on the device
3 Boot the Linux server
For more detailed information related to installing the HPE StoreEver Tape advanced path failover drivers for Red Hat Linux Servers, refer to the HPE StoreEver Tape Libraries Failover User Guide, specifically the section titled Installing and using Linux advanced path failover drivers.
Additional SG device files
In most environments, the default number of SG device files is sufficient to support all of the required devices. In larger SAN environments, if the default number of SG device files is fewer than the combined total of disk and tape devices being allocated to the server, additional device files need to be created. SG device files are preferable to the standard SCSI tape (st) device files because the st SCSI timeout values may not be sufficient in length to support some tape operations.
To create additional SG device files, run the following:
mknod /dev/sgX c 21 X
X signifies the number of the device file that does not already exist. For additional command options, see the mknod man page.
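The mknod step can be generated for a whole range of minor numbers. This sketch prints the commands rather than executing them, so it is safe to run unprivileged; review the output, then pipe it to sh as root. The range shown is only an example.

```shell
# Print the mknod commands needed to create /dev/sg device files for a
# range of minor numbers, skipping any that already exist.
emit_sg_nodes() {
  X=$1
  while [ "$X" -le "$2" ]; do
    [ -e "/dev/sg$X" ] || echo "mknod /dev/sg$X c 21 $X"
    X=$((X + 1))
  done
}

emit_sg_nodes 64 66   # example range; adjust to your device count
```

To apply: `emit_sg_nodes 64 66 | sh` as root, after checking the printed commands.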
Red Hat and SUSE Linux Server best practices
Rewind commands being issued by rebooted Linux hosts
Device discovery that occurs as part of a normal Linux server boot operation can cause a SCSI rewind command to be issued to all attached tape drives. If the data protection and archiving software does not employ SCSI reserve/release and the rewind command is received while the tape drive is busy writing, the result is a corrupted tape header and an unusable piece of backup media.
This issue could manifest itself in several ways:
• A failed verify operation
• A failed restore operation
• The inability to mount a tape and read the tape header
If a backup verification is not completed, the normal backup process might not detect that an issue exists.
Tape devices not discovered and configured across server reboots
Tape drives can disappear from Linux servers after the host reboots when using older versions of the HBA drivers. Adding the line "hp_rescan -a" to /etc/rc.d/rc.local resolves the issue. The hp_rescan utility was previously included and installed with older versions of the HPE Linux FCoE/FC Driver Kit.
Note: The latest versions of the Fibre Channel Enablement for Linux kits no longer include the HPE fibreutils package, which contains the hp_rescan utility. The fibreutils package can be downloaded and installed manually by following steps 1-6 from the section Downloading and installing the Fibre Channel Enablement Kit for Linux. For step 5, select Software - Storage Controllers - FC HBA.
This issue, which affects Red Hat installations and intermittently some SUSE Linux installations, is understood to be an issue with the mid-layer SCSI driver and its interaction with SCSI-2 tape automation products. The permanent resolution is to upgrade to the latest FC driver kit.
Enable iSCSI target devices to remain persistent across system reboots
To enable the iSCSI target devices to remain persistent across system reboots, the open-iscsi service must be configured to run at system startup. This can be done by issuing the following commands.
For Red Hat 7 and SUSE 12:
systemctl enable iscsid.service
systemctl restart iscsid.service
For earlier versions of Red Hat:
chkconfig iscsi on
For earlier versions of SUSE:
chkconfig open-iscsi on
To verify that this configuration change has been accepted, run the following command.
For Red Hat 7 and SUSE 12:
systemctl -a | grep iscsi
iscsi-shutdown.service  error   inactive dead    Logout off all iSCSI sessions on shutdown
iscsi.service           loaded  inactive dead    Login and scanning of iSCSI devices
iscsid.service          loaded  active   running Open-iSCSI
iscsiuio.service        loaded  active   running iSCSI UserSpace I/O driver
iscsid.socket           loaded  active   running Open-iSCSI iscsid Socket
iscsiuio.socket         loaded  active   running Open-iSCSI iscsiuio Socket
For earlier versions of Red Hat:
chkconfig --list iscsi
iscsi 0:off 1:off 2:off 3:on 4:off 5:on 6:off
For earlier versions of SUSE:
chkconfig --list open-iscsi
open-iscsi 0:off 1:off 2:off 3:on 4:off 5:on 6:off
LUNs shifting after reboot
The Linux 2.6 kernel and later enhanced the management of attached devices through the introduction of udev. The udev device manager provides users with a persistent naming process for all devices across reboots. For details on how to configure udev, refer to the appropriate Linux distribution documentation.
If your data protection and archiving software requires persistent device mapping, use the software's device configuration wizard to ensure proper configuration.
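If no configuration wizard is available, udev rules are the usual persistence mechanism. The rule below is a hypothetical sketch (the match keys and link name are assumptions, not from this guide); check which attributes your devices actually expose with udevadm info before adapting it.

```
# /etc/udev/rules.d/60-tape.rules (illustrative sketch, not a supported rule)
# Create /dev/tape/by-serial/<serial> links for SCSI tape devices so backup
# software can reference a name that survives reboots and LUN shifts.
KERNEL=="st*", ENV{ID_SERIAL}=="?*", SYMLINK+="tape/by-serial/$env{ID_SERIAL}"
```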
Recommended changes to queue depth and timeout values
Changes to queue depth and timeout values are recommended when operating HPE StoreEver ESL G3 Tape Libraries and HPE StoreEver MSL Tape Libraries with Linux-based hosts. Recommended changes are as follows:
• Queue Depth
The queue depth when operating the MSL6480 or MCB Version 1 ESL G3 libraries should be set to one (1), as there is only one active robot to complete Move Medium commands. With dual-robot MCB Version 2 ESL G3 libraries, the queue depth should be set to two (2), as the library has two active robots that can complete Move Medium commands.
HPE LTO drives are capable of handling command queues of four or five commands, but if hosts continue to send commands past that amount, the drive (or the library being hosted by that drive) will start to return status messages saying that the queue is full and the host should wait 500 ms. If the host doesn't stop sending commands at this point, the delays in returning status for commands can be long enough that the drive appears hung. As such, take care to ensure that the queue depth is the correct length to avoid this scenario, preferably by using the recommended queue depths provided above.
With Linux-based hosts, this command shows the current queue depth for each generic SCSI device:
find /sys/class/scsi_generic/*/device/queue_depth -exec grep -H "" {} \;
If required, the sg_inq command can help correlate the SCSI generic mapping to specific devices:
sg_inq /dev/sg<n>
Note: <n> is the sg number provided in the output from the previous command.
• Timeouts
Most applications increase the timeout value for motion commands but continue to rely on the default timeout value for Read Element Status commands. This is of particular concern in larger partitioned libraries, where multiple Read Element Status commands to the different partitions can be outstanding and the default timeout value is reached before a response is provided. Depending on whether udev rules are in effect, the default timeout value on Linux-based hosts tends to be either thirty (30) or sixty (60) seconds. Given all of the above, it is recommended that the default timeout value be changed to twenty minutes on all Linux-based hosts operating HPE LTO drives in HPE StoreEver ESL G3 Tape Libraries and HPE StoreEver MSL Tape Libraries, to allow multiple commands to complete successfully without hitting the default timeout value.
With Linux-based hosts, this command shows the current default timeout value:
find /sys/class/scsi_generic/*/device/timeout -exec grep -H "" {} \;
If required, the sg_inq command can help correlate the SCSI generic mapping to specific devices:
sg_inq /dev/sg<n>
Note: <n> is the sg number provided in the output from the previous command.
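Both checks can be combined into one report. This sketch assumes the sysfs layout described above; SYSROOT is an added assumption so the loop can be exercised against a copied tree on a machine without tape hardware. A timeout value of 1200 seconds corresponds to the recommended twenty minutes.

```shell
# Report queue depth and timeout for every generic SCSI device under SYSROOT.
SYSROOT=${SYSROOT:-/sys}

report_scsi_attrs() {
  for d in "$SYSROOT"/class/scsi_generic/*/device; do
    [ -d "$d" ] || continue                       # skip if no sg devices exist
    sg=$(basename "$(dirname "$d")")              # e.g. sg3
    qd=$(cat "$d/queue_depth" 2>/dev/null || echo "n/a")
    to=$(cat "$d/timeout" 2>/dev/null || echo "n/a")
    echo "$sg queue_depth=$qd timeout=$to"
  done
}

report_scsi_attrs
```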
The detailed procedure for making the recommended changes to the command queuing and default timeout values for Linux-based hosts can be viewed in the following Engineering Advisory: HPE StoreEver ESL G3 Tape Libraries, MSL Tape Libraries, and 1/8 G2 Autoloaders - Recommended Changes to Queue Depth and Timeout Values With Linux-Based Hosts.
HPE-UX Server
Installing HBA drivers in the kernel
HPE-UX 11i v2 (11.23, IA-64)
1. The drivers schgr, sctl, and stape must all be installed in the kernel. To see if these drivers are installed, enter the following command:
/usr/sbin/kcmodule schgr sctl stape
The following example shows output from kcmodule where the stape driver is not installed:
Module   State    Cause
schgr    static   explicit
sctl     static   depend
stape    unused
If one or more of the above drivers is in the unused state, they must be installed in the kernel. If they are all installed (static state), proceed to the next section, Final host configurations.
2. Use kcmodule to install modules in the kernel. For example, to install the stape module, run the following command:
/usr/sbin/kcmodule stape=static
Enter yes to back up the current kernel configuration file and initiate the new kernel build.
3. Reboot the server to activate the new kernel:
cd /
/usr/bin/shutdown -r now
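The check in step 1 can be scripted. The helper below parses kcmodule output and names any driver left in the unused state; the sample text mirrors the example above, and on a real host you would pipe in the live output of /usr/sbin/kcmodule schgr sctl stape.

```shell
# Print the names of kernel modules that kcmodule reports as "unused".
find_unused_modules() {
  awk 'NR > 1 && $2 == "unused" { print $1 }'   # skip the header line
}

# Illustrative kcmodule output (matches the example in the text)
sample='Module   State    Cause
schgr    static   explicit
sctl     static   depend
stape    unused'

printf '%s\n' "$sample" | find_unused_modules
```

An empty result means all checked drivers are installed and you can proceed to Final host configurations.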
HPE-UX 11i v3 (11.31, IA-64)
1. The drivers schgr, sctl, stape, eschgr, esctl, and estape must all be installed in the kernel. To see if these drivers are installed, enter the following command:
/usr/sbin/kcmodule sctl esctl schgr eschgr stape estape
The following example shows output from kcmodule where the stape driver is not installed:
Module   State    Cause
sctl     static   best
esctl    static   depend
schgr    static   best
eschgr   static   best
stape    unused
estape   static   best
If one or more of the above drivers is in the unused state, they must be installed in the kernel. If they are all installed (static state), proceed to the next section, Final host configurations.
2. Use kcmodule to install modules in the kernel. For example, to install the stape module, use the following command:
/usr/sbin/kcmodule stape=static
Enter yes to back up the current kernel configuration file and initiate the new kernel build.
3. Reboot the server to activate the new kernel:
cd /
/usr/bin/shutdown -r now
Installing the HPE-UX iSCSI Software Initiator
The iSCSI Software Initiator is located at the HPE Software Depot.
1. Go to software.hp.com.
2. Enter iSCSI Software Initiator in the Search Software Depot box located on the upper right side of the website.
3. When the search results show iSCSI Software Initiator, click Select. An HPE Passport account (a sign-in link is provided) is required.
4. After logging in using your HPE Passport, complete the required fields, scroll down, then read and accept the software license agreement for the order. Click Next.
5. Under Documentation, click the Download tab for the Installation Instructions to download instructions for using the Software Distributor tool to install the iSCSI Software Initiator.
6. Under Software, click the Download tab for the iSCSI Software Initiator version that you would like to download.
7. After installing the iSCSI Software Initiator and rebooting, you can verify that the installation was successful by running the following command:
swlist iSCSI-00
If the HPE-UX iSCSI Software Initiator is installed correctly, the output will look similar to the following.
HPE-UX 11.23:
# Initializing...
# Contacting target "localhost"...
# Target: localhost:/
  iSCSI-00            B.11.23.03e  HPE-UX iSCSI Software Initiator
  iSCSI-00.ISCSI-SWD  B.11.23.03e  HPE-UX iSCSI Software Initiator
HPE-UX 11.31:
# Initializing...
# Contacting target "localhost"...
# Target: localhost:/
  iSCSI-00            B.11.31.01   HPE-UX iSCSI Software Initiator
  iSCSI-00.ISCSI-SWD  B.11.31.01   HPE-UX iSCSI Software Initiator
Final host configurations
1. Run ioscan to verify that the host detects the tape devices:
ioscan
For HPE-UX 11.23 legacy device special files (DSFs) or persistent DSFs, run the following commands:
ioscan -fnkC tape
ioscan -fnkC autoch
2. For HPE-UX 11.31 persistent DSFs, run the following commands:
ioscan -fnNkC tape
ioscan -fnNkC autoch
Note: Some data protection and archiving software might not currently support HPE-UX 11.31 persistent DSFs for tape. Review your data protection and archiving software documentation for more information.
3. To verify that the host detects iSCSI devices, issue the ioscan command as follows for HPE-UX 11.23:
ioscan -fnC iscsi
Issue the ioscan command as follows for HPE-UX 11.31:
ioscan -fnNC iscsi
If the software is installed correctly, the generated output will look similar to this:
Class  I  H/W Path  Driver  S/W State  H/W Type  Description
=====================================================================
iscsi  0  255/0     iscsi   CLAIMED    VIRTBUS   iSCSI Virtual Node
4. If no device files have been installed, enter the following commands:
insf -C tape -e
insf -C autoch -e
The command-line tool for configuring the HPE-UX iSCSI Software Initiator is iscsiutil. Detailed instructions for iscsiutil can be found in its man pages. If using iscsiutil to configure the HPE-UX iSCSI Software Initiator, add the path for iscsiutil and other iSCSI executables to the root path:
PATH=$PATH:/opt/iscsi/bin
You should now be able to find the iSCSI initiator node for the HPE-UX host:
iscsiutil -l
To discover available iSCSI target devices on an HPE Storage System, type the following command, where x.x.x.x is the IP address of the HPE Storage System:
iscsiutil -a -I x.x.x.x
Installing the HPE StoreEver Tape advanced path failover drivers for HPE-UX 11.31
Advanced path failover for HPE-UX is implemented by updating HPE-UX drivers to support advanced path failover with the LTO-6 tape drives. The drivers function as both failover and non-failover drivers.
The updated drivers are:
• HPE-UX tape driver (estape): used for data path failover
• HPE-UX media changer driver (eschgr): used for control path failover
• HPE-UX SCSI stack driver (esctl): used for data path and control path failover
To download and install the HPE StoreEver Tape advanced path failover drivers for HPE-UX 11.31:
1. Get the latest HPE-UX patches from h20566.www2.hpe.com/portal/site/hpsc/patch/home.
Note: To access and download HPE-UX patches, you must have:
1. An HPE Passport account (a sign-in link is provided).
2. An active HPE support agreement linked to your HPE Support Center profile. The active Hewlett Packard Enterprise support agreement must:
- Cover the specific HPE Operating Systems linked to your HPE Support Center user profile
- Include software updates or previous version support privileges
Links are provided to view your current patch access privileges or to contact Hewlett Packard Enterprise.
2. To locate the patches, search for estape, eschgr, and esctl or the patch number, and then check the patch details to see if there is a superseding patch.
3. To install the advanced path failover drivers, use the standard HPE-UX kernel patch installation process to install the following patches on the HPE-UX Servers running HPE-UX 11.31:
- HPE-UX tape driver patch (estape): PHKL_43680 or superseding patch
- HPE-UX media changer driver patch (eschgr): PHKL_43681 or superseding patch
- HPE-UX SCSI stack (mass storage stack) driver patch (esctl): PHKL_43819 or superseding patch
4. The server will automatically reboot as part of the installation process.
You can use ioscan to view the tape and library (media changer) devices connected to the HPE-UX Server. The device special file (DSF) is listed as the last item in the description:
ioscan -knNfC tape
ioscan -knNfC autoch
Finding the lockdown path
The load-balance policy used to route data on multiple paths to a tape drive or library is called the "path_lockdown" policy. Use the scsimgr get_info command to see the current lockdown path for a library. For example:
scsimgr get_info -D /dev/rchgr/autoch35
STATUS INFORMATION FOR LUN /dev/rchgr/autoch35
...
LUN path used when policy is path_lockdown = 0/0/0/9/0/0/1.0x50014382c6e4f009.0x1000000000000
scsimgr get_attr -D /dev/rtape/tape28_BEST
SCSI ATTRIBUTES FOR LUN /dev/rtape/tape28_BEST
name = lpt_lockdown
current = 0/0/0/9/0/0/0.0x100000e00222a6c1.0x2000000000000
default =
saved =
For additional information, see the HPE-UX man pages scsimgr(1M), ioscan(1M), mknod(2), mksf(1M), and rmsf(1M).
Troubleshooting advanced path failover for HPE-UX 11.31
Advanced path failover errors are logged in the /var/adm/syslog/syslog.log file as part of the default SCSI I/O tracing function of HPE-UX. You can use standard file viewing commands, including cat, vi, dmesg, and tail -f, to view the syslog.log.
Enabling or disabling advanced path failover for HPE-UX 11.31
Advanced path failover is disabled by default. When advanced path failover is disabled, the driver operates as if the device is not capable of using the advanced path failover feature.
When advanced failover is enabled for the library or tape drive, the device resets itself and must be opened using the device special file before the driver will recognize it as an advanced path failover device and use the failover features of the driver. Opening the device is generally done by the host applications.
You can enable or disable advanced path failover using the library web-based interface. For instructions, refer to the following sections of the HPE StoreEver Tape Libraries Failover User Guide:
• Enabling control path failover, under Configuring failover for the HPE StoreEver ESL G3 Tape Libraries
• Enabling data path failover, under Configuring failover for HPE StoreEver MSL6480 Tape Libraries
When advanced path failover is disabled, the passive control paths to the library will go into an error state (NO_HW) in the ioscan(1M) command output. These stale entries do not affect the function of the library. To clear these errors so the device can be accessed using its DSF:
1. On the HPE-UX host, run rmsf -H on the lunpath hardware paths that are in the NO_HW state. For example:
rmsf -H 0/4/0/0/0/1.0x50014380023560d4.0x1000000000000
2. Run ioscan -kfNH <HBA path>. For example:
ioscan -kfNH 0/4/0/0/0
Hewlett Packard Enterprise recommends enabling or disabling advanced path failover only when the library is not opened by any applications. If advanced path failover is disabled while an application is accessing the library, all of the library's lunpaths will go offline and I/O requests to the library will fail.
For more detailed information related to installing the HPE StoreEver Tape advanced path failover drivers for HPE-UX 11.31 Servers, refer to the HPE StoreEver Tape Libraries Failover User Guide, specifically the section titled Installing and using HPE-UX advanced path failover drivers.
HPE-UX Server best practices
HPE-UX 11.31 can experience poor I/O performance on VxFS file systems due to memory blocking during high system memory usage. The HPE-UX 11.31 kernel subsystems and file I/O data cache can consume up to 90 percent of system memory during normal operation. When a heavy file I/O application, such as data protection and archiving software, starts, memory usage can reach close to 100 percent. Under such conditions, if VxFS attempts to allocate additional memory for inode caching, memory blocking and subsequent poor file I/O performance can result. In extreme conditions, this scenario can cause data protection and archiving software to time out during file system reads, which could result in backup job failures.
Poor I/O performance resolution
To avoid backup job failures due to memory blocking, modify the kernel tunable parameter vx_ninode. The vx_ninode parameter determines the number of inodes in the inode table to help VxFS in caching. By default, the size of the inode cache is decided (auto-tuned) at boot time by VxFS depending on the amount of physical memory in the machine. When modifying the value of vx_ninode, HPE recommends the following:
Table 4. Tuning vx_ninode
Physical memory or kernel available memory | VxFS inode cache (number of inodes)
1 GB | 16384
2 GB | 32768
3 GB | 65536
> 3 GB | 131072
To determine the current value of vx_ninode, run the following at the shell prompt:
/usr/sbin/kctune vx_ninode
To set vx_ninode to 32768, run the following command at the shell prompt:
/usr/sbin/kctune vx_ninode=32768
Note: The kernel tunable parameters filecache_min and filecache_max control the amount of physical memory that can be used for caching file data during system I/O operations. By default, these parameters are automatically determined by the system to better balance memory usage among file system I/O intensive processes and other types of processes. The values of these parameters can be lowered to allow a larger percentage of memory to be used for purposes other than file system I/O caching. Determining whether or not to modify these parameters depends on the nature of the applications running on the system.
HPE-UX 11.23: Disabling rewind-on-close devices with st_san_safe
Turning on the HPE-UX 11.23 kernel tunable parameter st_san_safe disables tape DSFs that are rewind-on-close. This prevents utilities like mt from rewinding a tape that is in use by another utility.
Some applications or utilities require rewind-on-close DSFs (for example, the frecover utility that comes with HPE-UX). In this case, disabling rewind-on-close devices renders the utility unusable. Most data protection and archiving software, such as HPE Data Protector, can be configured to use SCSI reserve/release, which protects them from rogue rewinds by other utilities. Consider the requirements of your data protection and archiving environment when determining whether or not to enable st_san_safe.
To determine if rewind-on-close devices are currently disabled, enter:
/usr/sbin/kctune st_san_safe
If the value of st_san_safe is 1, rewind-on-close devices are disabled. If the value is 0, they are enabled. To disable rewind-on-close devices, enter:
/usr/sbin/kctune st_san_safe=1
Oracle Solaris Server
Solaris 10 Update 11 (SPARC), Solaris 10 Update 11 (x86/x64), Solaris 11.2 (SPARC), Solaris 11.2 (x64)
How to enable the iSCSI Software Initiator
1. For Solaris 10 Update 11 (SPARC) and Solaris 11.2 (SPARC), enable the iSCSI services using the command:
svcadm enable network/iscsi/initiator
2. Verify the iSCSI services are running:
svcs -a | grep 'iscsi/initiator'
online 10:10:28 svc:/network/iscsi/initiator:default
3. For earlier versions of Solaris, enable the iSCSI services using the command:
svcadm -v enable iscsi_initiator
svc:/network/iscsi_initiator:default enabled.
4. Verify the iSCSI services are running:
svcs -a | grep iscsi_initiator
online 10:10:28 svc:/network/iscsi_initiator:default
The command-line tool for configuring the Solaris iSCSI Software Initiator is iscsiadm. Detailed instructions for iscsiadm can be found in its man pages. If using iscsiadm to configure the Solaris iSCSI Software Initiator, run the following command to find the iSCSI initiator node for the Solaris host:
iscsiadm list initiator-node
To discover available iSCSI target devices on an HPE Storage System, type the following commands, where x.x.x.x is the IP address of the HPE Storage System:
iscsiadm add discovery-address x.x.x.x:3260
iscsiadm list discovery-address
Discovery Address: x.x.x.x:3260
iscsiadm modify discovery -t enable
iscsiadm list discovery
List the configured iSCSI target devices using the following command:
iscsiadm list target
Oracle Solaris Server best practices
Troubleshooting with the cfgadm utility
• Getting the status of FC devices using cfgadm:
cfgadm -al
Example output for the above command:
This output shows a media changer at LUN 0 for the 100000e0022229fa9 WWN, and tape and disk devices at LUN 0 for other WWNs. The devices are connected, have been configured, and are ready for use.
The cfgadm -al -o show_FCP_dev command can be used to show the devices for all LUNs of each Ap_Id.
• Fixing a device with an "unusable" condition
If the condition field of a device in the cfgadm output is "unusable", the device is in a state such that the server cannot use it. This may have been caused by a hardware issue. In this case, do the following to resolve the issue:
- Resolve the hardware issue so the device is available to the server.
- After the hardware issue has been resolved, use the cfgadm utility to verify device status and to mend the status if necessary.
• Use cfgadm to get device status:
cfgadm -al
- For a device that is "unusable", use cfgadm to unconfigure the device and then re-configure it. For example (this is an example only; your device WWN will be different):
cfgadm -c unconfigure c4::100000e0022286ec
cfgadm -f -c configure c4::100000e0022286ec
- Use cfgadm again to verify that the condition of the device is no longer "unusable":
cfgadm -al
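The unconfigure/re-configure sequence can be generated rather than typed by hand. This sketch scans cfgadm -al-style output for attachment points in the unusable condition and prints the mending commands; the sample Ap_Ids are made up, and the printed commands should be reviewed before running them as root.

```shell
# Emit cfgadm mend commands for every attachment point whose condition
# column (the last field) reads "unusable".
emit_cfgadm_mend() {
  awk '$NF == "unusable" {
    print "cfgadm -c unconfigure " $1
    print "cfgadm -f -c configure " $1
  }'
}

# Illustrative cfgadm -al output (Ap_Ids are made up)
sample='Ap_Id                Type   Receptacle  Occupant    Condition
c4::100000e0022286ec tape   connected   configured  unusable
c5::100000e0022229fa disk   connected   configured  ok'

printf '%s\n' "$sample" | emit_cfgadm_mend
```

On a live Solaris host: `cfgadm -al | emit_cfgadm_mend`, then run the printed commands and re-check with cfgadm -al.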
IBM AIX Server
AIX 6.1 (TL9), AIX 7.1 (TL3)
Installing the HBA device driver
Install the IBM (5729, 5735, 5273, 5758, 5759, 5773, 5774) HBA and restart the server.
1. Ensure that the HBA is recognized. At the shell prompt, type:
lsdev -Cc adapter
There is a line in the output similar to the following:
fcs0 Available 1D-08 FC Adapter
If the adapter is not recognized, check that the correct HBA fileset (driver) is installed:
6228: lslpp -L | grep devices.pci.df1000f7
6239: lslpp -L | grep devices.pci.df1080f9
5716: lslpp -L | grep devices.pci.df1000fa
5759: lslpp -L | grep devices.pci.df1000fd
5773: lslpp -L | grep devices.pciex.df1000fe
5774: lslpp -L | grep devices.pciex.df1000fe
There are lines in the output for lslpp similar to the following for a 6239 HBA:
devices.pci.df1080f9.diag 5.1.0.1 C F PCI-X FC Adapter Device
devices.pci.df1080f9.rte  5.1.0.1 C F PCI-X FC Adapter Device
2. For information about the HBA, such as the WWN, execute the following command:
lscfg -vl fcs0
The output will look similar to the following:
3. To see the version of microcode (firmware) being run by the HBA, use the following command:
lsmcode -c -d <device>
Microcode and other updates can be found at ibm.com/support/fixcentral.
4. After the HBA has successfully logged into the SAN fabric and the necessary zoning is configured, configure the HBA and devices within the fabric. At the prompt, type:
cfgmgr -l <devicename> -v
Note: Running cfgmgr without a -l argument may generate a "devices.fcp.changer" error. This is a result of the cfgmgr device scan receiving a response from the autochanger device, for which AIX does not have a specific driver. Under these conditions, the error message does not indicate a problem and is for information only.
5. Within the command, <devicename> is the name from the output of the lsdev command in step 1, such as fcs0.
6. To ensure all tape device files are available, at the prompt type:
lsdev -HCc tape
7. By default, AIX creates tape devices with a fixed block length. To change the devices to have variable block lengths, at the prompt type:
chdev -l <tapedevice> -a block_size=0
8. Configuration of the tape devices (where tape devices are rmt0, rmt1, and so on) is complete.
Note: HPE LTO tape drives use the IBM host tape driver. When properly configured, a device listing will show the tape device as follows.
For IBM native HBAs: Other FC SCSI Tape Drive
For non-IBM native HBAs: Other SCSI Tape Drive
9. To configure Fast I/O Failure for Fibre Channel devices after link events in the SAN, change the fast fail parameter as in the example below:
chdev -l fscsiX -a fc_err_recov=fast_fail
Within the command, the X in fscsiX is the same number as in the adapter name from the output of the lsdev command in step 1 (e.g., fcs0 would be fscsi0).
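The variable-block-length change in step 7 can be applied to every tape device at once. This sketch turns lsdev -Cc tape output into the corresponding chdev calls, printing them for review first; the sample listing is illustrative, and on an AIX host you would pipe in the live lsdev output.

```shell
# Generate a chdev command per rmt device to set variable block length.
emit_blocksize_cmds() {
  awk '$1 ~ /^rmt[0-9]+$/ { print "chdev -l " $1 " -a block_size=0" }'
}

# Illustrative lsdev -Cc tape output
sample='rmt0 Available 1D-08-02 Other FC SCSI Tape Drive
rmt1 Available 1D-08-02 Other FC SCSI Tape Drive'

printf '%s\n' "$sample" | emit_blocksize_cmds
```

On AIX: `lsdev -Cc tape | emit_blocksize_cmds | sh` once the printed commands have been reviewed.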
IBM AIX Server best practices
Persistent binding
To prevent device shifting after a host reboot, you can enable the dynamic tracking feature in earlier versions of AIX. AIX 7 and above should have this parameter enabled by default.
To enable dynamic tracking of FC devices, set the attribute to dyntrk=yes, as shown in the example:
chdev -l fscsiX -a dyntrk=yes
Within the command, the X in fscsiX is the same number as in the adapter name from the output of the lsdev command in step 1 (e.g., fcs0 would be fscsi0).
Note: For an IBM Virtual I/O Server (VIOS) running AIX logical partitions (LPARs), when using N_Port ID Virtualization (NPIV) with AIX LPARs it is strongly recommended to upgrade VIOS to version 2.2.3.4 or greater.
Virtual machine support
VM software is used for partitioning, consolidating, and managing computing resources, allowing multiple unmodified operating systems and their applications to run in VMs that share physical resources. Each VM represents a complete system with processors, memory, networking, storage, and BIOS. See table 5 for tape and disk support for virtualization products.
Table 5. VM tape/VTL/NAS support
Columns: (1) StoreEver direct attached SCSI, (2) StoreEver direct attached SAS, (3) StoreEver FC and FCoE SAN / StoreOnce VTL, (4) StoreOnce iSCSI VTL, (5) StoreOnce Catalyst over Ethernet (CoE), (6) StoreOnce Catalyst over Fibre Channel (CoFC), (7) StoreOnce NAS. A dash (-) indicates no value given.

VM product                 (1)  (2)  (3)  (4)  (5)  (6)  (7)  Support notes
Citrix XenServer Host      No   No   -    -    -    -    -    No support statement for tape at this time.
Citrix XenServer Guest VM  No   -    -    Yes  Yes  No   Yes  For iSCSI tape devices, the iSCSI Software Initiator must run in the VM operating system. D2D SAN shares must be accessed directly in the VM operating system, not attached through the hypervisor.
HPEVM Host                 Yes  No   Yes  Yes  Yes  No   Yes  Tape drive/media changer must not be attached to a guest VM while being used by the host.
HPEVM Guest VM             Yes  No   Yes  Yes  Yes  No   Yes  Tape drive/media changer must only be attached to a single guest VM at a time.
Hyper-V Host               Yes  Yes  Yes  Yes  Yes  No   Yes
Hyper-V Guest VM           No   No   No   Yes  Yes  No   Yes  For iSCSI tape/media changer devices, the iSCSI Software Initiator must run in the VM operating system. D2D NAS shares must be accessed directly in the VM operating system, not attached through the hypervisor.
VMware ESX Host            Yes  No   No7  No7  No   No   No   Direct attached SCSI tape drive is multi-target only (no multi-LUN) and must not be attached to a VM while being used by the host. HPE does not test or validate direct attached SCSI tape on VMware ESX/ESXi hosts and does not provide support for tape drives and tape libraries in such configurations.8
VMware Guest VM            Yes  No   No   Yes  Yes  No9  Yes  Direct attached SCSI tape drive is multi-target only (no multi-LUN) and must only be attached to a single VM at a time. For iSCSI tape/media changer devices, the iSCSI Software Initiator must run in the VM operating system. D2D NAS shares must be accessed directly in the VM operating system, not attached through the hypervisor. HPE does not test or validate direct attached SCSI tape on VMware guest VMs and does not provide support for tape drives and tape libraries in such configurations.
VMware vStorage API for Data Protection  Yes  Yes  Yes  Yes  Yes  No  Yes  FC SANs and shared tape devices are limited to a physical backup server.

7 SAN tape devices (FC and iSCSI) are not supported directly by a VMware ESX host. With vStorage API for Data Protection, use a backup server and VM software snapshots to allow FC and iSCSI backups.
8 For ESX 4.1 Server tape support, see the ESX 4.1 Fibre Channel SAN Configuration Guide. For ESX 5.0, see the ESXi 5.0 vSphere Storage Guide. For ESX 5.1, see the ESXi 5.1 vSphere Storage Guide. For ESX 5.5, see the ESXi 5.5 vSphere Storage Guide.
Note: Be sure to do the following:
• Refer to your data protection and archiving software documentation for supported VM backup methods.
• Refer to the VM documentation for supported backup devices.
VMware Server

Note: VMware does not support ESXi SAN-attached tape devices. VMware vStorage APIs for Data Protection (VADP) with an off-host backup server can be used to manage SAN devices.

• VADP offloads backup responsibility from ESXi hosts to one or more dedicated backup servers, reducing the load on the ESXi hosts. VADP provides full-image backup and restore capabilities for all VMs, and file-based backups for Microsoft Windows and Linux VMs.
• VMs can also be set up as backup servers with locally mapped NAS shares or iSCSI backup devices. Refer to your data protection and archiving software documentation for VM support.
• VMs can also be set up for LAN backup the same as a regular client. Refer to your data protection and archiving software documentation for details.
• For recommendations on VMware VM backup and recovery to HPE StoreOnce Backup, go to hpe.com/storage/BURACompatibility, scroll down to Data Agile BURA Solution White Papers, then click the VMware hyperlink across from "White papers: Databases and Virtual Machines" to view the associated white papers.
9 Yes when using HPE StoreOnce Recovery Manager Central (RMC) only
HPE Integrity Virtual Machines
Hewlett Packard Enterprise supports, certifies, and sells HPE Integrity Virtual Machines (HPEVM) virtualization software on HPE Integrity servers.
HPEVM is an application installed on an HPE-UX server that allows multiple unmodified operating systems (HPE-UX, Windows, and Linux) and their applications to run in VMs that share physical resources.
The HPE Virtual Server Environment (VSE) for HPE Integrity provides an automated infrastructure that can adapt in seconds with mission-critical reliability. HPE VSE allows you to optimize server utilization in real time by creating virtual servers that can automatically grow and shrink based on business priorities and service levels.
Note: The HPE Integrity VM host and VMs do support FC SAN-connected tape, Virtual Library Systems (VLS) devices, and HPE StoreOnce Backup systems.

• Off-host backups using HPE storage array hardware mirroring or snapshots can be used to shorten backup windows and off-load resources required for backup.
• VMs can also be set up for LAN backup the same as a regular client or media host. Refer to your data protection and archiving software documentation for details.
Microsoft Hyper-V

Note: Hewlett Packard Enterprise does not test or support Hyper-V VMs with SAN or direct attach tape drives. The Hyper-V host or a backup server can be used to manage such devices.

• The Volume Shadow Copy Service (VSS) Hyper-V writer can be used to quiesce Windows VMs and create a snapshot on the Hyper-V host volume. VMs that cannot be quiesced can be placed in the Saved state before snapshot creation. The snapshots are then used for image or file backup of the VMs. If a VM was placed in the Saved state, Hyper-V will return the VM to its original state. Review your data protection and archiving software documentation for details.
• VMs can also be set up for LAN backup the same as a regular client. Refer to your data protection and archiving software documentation for details.
• VMs can also be set up as backup servers with locally mapped NAS shares or iSCSI backup devices. Refer to your data protection and archiving software documentation for VM support.
© Copyright 2015 Hewlett Packard Enterprise Development LP. The information contained herein is subject to change without notice. The only warranties for Hewlett Packard Enterprise products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. Hewlett Packard Enterprise shall not be liable for technical or editorial errors or omissions contained herein.
Citrix is a registered trademark of Citrix Systems, Inc. and/or one or more of its subsidiaries, and may be registered in the United States Patent and Trademark Office and in other countries. Linux is the registered trademark of Linus Torvalds in the U.S. and other countries. McAfee is a trademark or registered trademark of McAfee, Inc. in the United States and other countries. Microsoft, Windows, and Windows Server are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries. Oracle is a registered trademark of Oracle and/or its affiliates. Red Hat is a registered trademark of Red Hat, Inc. in the United States and other countries. SAP HANA is the trademark or registered trademark of SAP SE in Germany and in several other countries. VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions.
4AA5-7983ENW December 2015
Installing backup software and patches
After all components on the SAN are logged in and configured, the system is ready for the installation of any supported backup software. Refer to the installation guide for your particular software package, or contact the vendor for detailed installation procedures and requirements. After installing the backup software, check with the software vendor for the latest updates and patches. If any updates or patches exist for your backup software, install them now.
Learn more at hpe.com/storage/StoreEver
Data protection and archiving software focuses on using an automated Linear Tape-Open (LTO) Ultrium tape library and/or disk-based virtual tape backup solutions. BURA Solutions combine the functionality and management of SANs, data protection and archiving software, and scaling tools to integrate tape and disk storage subsystems in the same SAN environment. Enterprise data protection can be accomplished with different target devices in various configurations, using a variety of transport methods such as the corporate communication network, a server SCSI/SAS connection, Fibre Channel over Ethernet (FCoE), or an FC infrastructure.

BURA Solutions typically use a SAN that provides dedicated bandwidth independent of the LAN. This independence allows single or multiple backup or restore jobs to run without the network traffic caused by data protection environments. Management of the data protection and archiving software occurs over the LAN while the data is sent over the SAN; this achieves faster data transfer speeds and reduces Ethernet traffic. Jobs and devices can be managed and viewed from either the primary server or any server or client connected within BURA Solutions that has supported data protection and archiving software installed. All servers within the BURA Solutions server group can display the same devices.
HPE Data Agile Partner Program
Hewlett Packard Enterprise is dedicated to providing a rich portfolio of Backup, Recovery, and Archive (BURA) Solutions for our customers.
The HPE Data Agile Partner Program offers partners a programmatic framework to self-certify the interoperability of their applications across the entire HPE Storage portfolio of BURA products, including HPE StoreOnce Backup, HPE StoreAll Storage, and HPE StoreEver Tape.
The Data Agile Partner Program enables partners to learn about the HPE BURA portfolio, test and certify their applications in a dedicated Hewlett Packard Enterprise lab environment, and take advantage of unique marketing opportunities. Program members also have access to specialized training and technical assistance.
Provide powerful solutions to your customers and expand market opportunities through a partnership with HPE Storage. Learn more at hpe.com/storage/DataAgile

BURA supported components
Whether you're looking to scale from entry-level workgroups to enterprise-level data centers, the HPE Data Agile BURA Compatibility Matrix has the information you need to design data protection solutions with HPE StoreOnce Backup, HPE StoreEver Tape, and HPE StoreAll Storage. Refer to table 1 for white papers and design guides documenting fully certified data protection and archive solutions built with HPE storage products and market-leading Independent Software Vendor (ISV) applications. Learn more at hpe.com/storage/BURACompatibility
Table 1. HPE Data Agile BURA Solution white papers

Cross-platform design guides: Design Guide for Backup and Archive; Example Configuration Guide for Backup and Archive; Tiered Data Retention for HPE Storage

White papers, product: HPE StoreOnce Backup; HPE StoreAll Storage; HPE StoreEver Tape; HPE StoreServ 3PAR File Persona

White papers, data protection and archive vendors: AGFA Healthcare IMPAX; Citrix® ShareFile; CommVault Simpana; GE Centricity; Genetec Security Center; HPE Consolidated Archive; HPE Control Point; HPE Data Protector; EMC NetWorker; IBM TSM; iTernity iCAS; McAfee® VirusScan; Milestone XProtect; QStar Archive Manager; Veritas Enterprise Vault; Veritas NetBackup; Veritas Backup Exec; Symantec Protection Engine; Veeam Software

White papers, databases and virtual machines: Microsoft® Exchange; Microsoft Hyper-V; Microsoft SQL; Oracle; SAP HANA®; VMware®
Supported topologies
The following topologies are supported in an FC SAN, with short-wave SFPs being the only FC connection supported in HPE StoreEver and HPE StoreOnce devices. Any requirement for an extended SAN would require a SAN switch or router to which the StoreEver or StoreOnce devices can attach. Refer to the extended SAN configuration in the HPE Backup and Archive Example Configuration Guide for more details.
Point-to-point
Point-to-point, or Direct-Attach Fibre (DAF), connections are direct FC connections made between two nodes, such as a server and an attached tape library. This configuration requires no switch to implement, and the default for a DAF link is a private loop.1 The storage devices are dedicated to a server in a point-to-point configuration.
Switched fabric
A switched fabric topology is a network topology in which network nodes connect with each other via one or more network switches. In the FC switched fabric (FC-SW) topology, devices are connected to each other through one or more FC switches. Visibility among devices in a fabric is typically controlled with zoning.

FCoE is a computer network technology that encapsulates FC frames over Ethernet networks. This allows FC to use 10GbE networks (or higher speeds) while preserving the FC protocol. FCoE maps FC directly over Ethernet while being independent of the Ethernet forwarding scheme. HPE Virtual Connect FlexFabric switches are used as an option in C-series blade enclosures to support FCoE connectivity to the switched fabric. For standalone servers, converged network adapters (CNA) are used to connect to the fabric through FCoE fabric switches. A CNA, also called a converged network interface controller (C-NIC), is a computer input/output device that combines the functionality of a host bus adapter (HBA) with a network interface controller (NIC). In other words, it "converges" access to both a SAN and a general-purpose computer network.
Installation checklist
Prior to installing data protection and archiving software, review the questions below to ensure that all components are configured properly and logged into the SAN.

• For any Windows 2008 servers, has the Windows feature Removable Storage Manager been removed or disabled?2
• Are all of the following hardware components at the minimum supported firmware revisions specified in the HPE Data Agile BURA Compatibility Matrix: servers, HBAs, FC and/or FCoE switches, tape drives, library robots, and disk-based virtual tape systems?
• Are all recommended operating system patches, service packs, updates, Service Pack for ProLiant Quality Packs (QPK), or Hardware Enablement (HWE) bundles specified in the HPE Data Agile BURA Compatibility Matrix installed on each host?
• Is the minimum supported HBA driver specified in the HPE Data Agile BURA Compatibility Matrix installed on each host?
• If you are using multi-path configurations with your HPE StoreEver Tape Libraries, have you reviewed the HPE StoreEver Failover overview (including advanced path failover) technical white paper?
• Are the StoreEver Tape Library, StoreOnce Backup System, and/or StoreAll Storage System online?
• If multiple FC switches are cascaded or meshed, are all inter-switch link (ISL) ports correctly logged in?
• Are all of the host HBAs correctly logged into the FC and/or FCoE switch?
• Are all tape library robotic devices and disk-based virtual tape systems zoned, configured, and presented to each host from the FC and/or FCoE switch?
• If using zoning on the FC and/or FCoE switch, has the zone been added to the active switch configuration?
• Do you have the latest supported version of HPE Command View for Tape Libraries Software installed to manage, monitor, and configure all of your HPE StoreEver Tape Libraries?
1 16 Gb FC HBAs do not currently support private loop; Brocade FC HBAs only recently began supporting private loop.
2 The Windows Removable Storage Manager (RSM) service will claim tape drives and disrupt any installed backup applications. Removable Storage Manager is no longer available as of Windows 7 and Windows 2008 R2.
• Has connectivity been verified using HPE Library and Tape Tools Software or operating-system-specific tools (Linux sg3_utils, HPE-UX System Administration Manager [SAM], Solaris cfgadm, AIX System Management Interface Tool [SMIT], etc.)?
• Is the minimum patch/service pack level support installed for the data protection and archiving software?
• If you're having issues with the FC connections, they might need to be cleaned. Avoid touching fingers to any surface that is being used for cleaning. Recommendations for cleaning FC cables and small form-factor pluggable (SFP) connections:
  – Air dusters are used to blow loose particles from the optical fiber connector end-face, or to dry up solvent (isopropyl alcohol) residue after a wet cleaning. Not all air dusters are the same; an optic-grade duster should be used.
  – Use lint-free wipes to gently wipe the ferrule and the end-face surface of the connector with the lint-free pad. Make sure the pad makes full contact with the end-face surface.
  – Use lint-free wipes and isopropyl alcohol. Gently wipe the ferrule and the end-face surface of the connector with an alcohol pad. Make sure the pad makes full contact with the end-face surface. Then wipe the end-face surface on a dry, lint-free wipe.
  – Use an in-adapter ferrule cleaner, or in-situ cleaning. This semi-automated fiber optic cleaning method is specially designed for fiber optic connectors and SFP ends; it can remove contaminants that forced air will not. Used incorrectly, an in-situ device can make a tape drive FC port worse. Both 1.25 mm and 2.5 mm versions are available.
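The connectivity check in the first bullet above can be sketched as a small helper that prints the typical per-OS verification command. The wrapper itself is illustrative only; lsscsi/sg_map, cfgadm, and lsdev are the standard tools named in this guide, and ioscan is assumed here as the usual HPE-UX command-line device scanner.

```shell
#!/bin/sh
# Sketch: print the usual command for verifying tape connectivity on each OS.
# The printed commands must be run on the host in question.
verify_cmd() {
    case "$1" in
        linux)   echo "lsscsi -g    # sg3_utils/lsscsi: list SCSI devices, incl. tape" ;;
        hpux)    echo "ioscan -fnC tape    # scan and list tape-class devices" ;;
        solaris) echo "cfgadm -al    # list attachment points and their state" ;;
        aix)     echo "lsdev -HCc tape    # list configured tape devices" ;;
        *)       echo "unknown OS: $1" ;;
    esac
}

for os in linux hpux solaris aix; do
    verify_cmd "$os"
done
```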
HPE StoreOnce Catalyst
With HPE StoreOnce Catalyst, movement of deduplicated data across the enterprise is even easier. There's no need to deduplicate and rehydrate data at each step; data can be replicated from sites to a central data center or a disaster recovery site in deduplicated form, reducing network bandwidth requirements. All backup and replication jobs may be seamlessly managed by the backup application at the central data center.
Key features of StoreOnce Catalyst:

• Catalyst over Fibre Channel provides all the ISV control and source-side deduplication benefits of current StoreOnce Catalyst, but via a Fibre Channel fabric.
• Federated Catalyst allows Catalyst stores to span nodes, simplifying backup management and optimizing available storage in large environments.
• Catalyst stores allow backup applications to utilize low-bandwidth deduplication (server-side deduplication on3) or high-bandwidth deduplication (server-side deduplication off4).

HPE StoreOnce Catalyst delivers a single, integrated, enterprise-wide deduplication algorithm. It allows the seamless movement of deduplicated data across the enterprise to other StoreOnce Catalyst systems.
For more detailed information regarding which Catalyst features are supported by each backup software or application, reference the Catalyst Apps Support section under StoreOnce Backup Systems in the HPE Data Agile BURA Compatibility Matrix.
3 By specifying the Primary Transfer Policy as Low-Bandwidth on the Catalyst store defined on the StoreOnce appliance, the backup application's integrated Catalyst plug-in will perform deduplication at the backup server before backup data is sent to the StoreOnce appliance.
4 By specifying the Primary Transfer Policy as High-Bandwidth on the Catalyst store defined on the StoreOnce appliance, the backup server does not deduplicate the data; data deduplication occurs at the StoreOnce appliance.
Preparing SAN switches and hosts for failover with HPE StoreEver Tape Libraries
Hewlett Packard Enterprise provides High Availability Failover features for HPE StoreEver ESL G3 Tape Libraries and the HPE StoreEver MSL2024, MSL4048, MSL6480, MSL8048, and MSL8096 Tape Libraries with HPE StoreEver LTO-5 Ultrium and later generation FC tape drives. Failover features are not supported on the HPE StoreEver EML and ESL E-Series Tape Libraries.
Basic failover
• Supported on HPE StoreEver LTO-5 and LTO-6 Ultrium FC tape drives, as data path failover requires a dual-ported drive.
• Supported by a combination of tape drive and library firmware features to create a new FC path to a drive or library if the original path is lost.
• Most applications recognize the new path, and some applications will automatically retry commands after the original path is lost. Some applications might require user intervention to begin using the new path.
• Available for the HPE StoreEver MSL2024, MSL4048, MSL6480, MSL8048, MSL8096, and StoreEver ESL G3 Tape Libraries.
Advanced failover
• Supported only on HPE StoreEver LTO-6 Ultrium FC tape drives.
• Requires host driver support, in addition to tape drive and library firmware features, to manage multiple paths across multiple SANs, present a single drive or library path to applications, and automatically transfer commands to the new path if the original path is lost.
• The transfer to the failover path is invisible to most applications, avoiding the need for user intervention.
• Available for the HPE StoreEver ESL G3 and StoreEver MSL6480 Tape Libraries.
• Not available for the StoreEver 1/8 G2 Tape Autoloader, nor the StoreEver MSL2024, MSL4048, MSL8048, or MSL8096 Tape Libraries.
LTO-7 failover
• Supported on LTO-7 and later generation FC tape drives.
• Requires host driver support, in addition to tape drive and library firmware features, to manage multiple paths across multiple SANs, present a single drive or library path to applications, and automatically transfer commands to the new path if the original path is lost.
• The transfer to the failover path is invisible to most applications, avoiding the need for user intervention.
• Available only for ESL G3 Tape Libraries; MSL6480 support is expected in 2016.
Prerequisites for using basic data and control path failover
• The library drive ports must be attached to an FC SAN that supports N_Port ID Virtualization (NPIV), and NPIV must be enabled (most recent switch firmware versions for Brocade have NPIV enabled by default). Refer to the vendor documentation for your switch regarding commands to verify whether NPIV is enabled.
  – To enable or verify NPIV on a Brocade switch running Fabric OS version 6 or newer using the Brocade Web Tools GUI, click on the Port Admin tab. Select the FC or FCoE port you want to configure. From the context tabs at the top of the Web Tools GUI, select View, then choose Advanced. For the port selected, under the General tab you should see all of the details for the port, including NPIV Enabled with a value of true. There should also be an NPIV tab with a drop-down list of Enable, Disable, and Max Login. Select Enable if NPIV was not already enabled.
  – While all current Cisco switches support NPIV, most do not have NPIV enabled by default. The Cisco MDS 9148 may disable NPIV when power cycled. To enable NPIV on a Cisco switch, use Cisco Device Manager > Admin > Feature Control, or use the Cisco CLI commands to show NPIV status and enable NPIV.
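As a sketch of the CLI side, the NPIV checks might look like the dry run below. The command names are hedged assumptions from the vendors' CLIs (portcfgshow and portcfgnpivport on Brocade Fabric OS; show npiv status and feature npiv on Cisco switches); verify the exact syntax against your switch firmware documentation, and note that port 27 is only an example.

```shell
#!/bin/sh
# Dry-run sketch: print the switch CLI commands for checking/enabling NPIV.
# Port number and syntax are examples; confirm against your firmware docs.
npiv_cmds() {
    case "$1" in
        brocade)
            echo "portcfgshow 27          # look for NPIV capability ON"
            echo "portcfgnpivport 27 1    # enable NPIV on port 27"
            ;;
        cisco)
            echo "show npiv status        # verify NPIV is enabled"
            echo "feature npiv            # enable NPIV (config mode)"
            ;;
    esac
}

npiv_cmds brocade
npiv_cmds cisco
```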
• StoreEver Ultrium tape drives with an 8 Gb connection need the fill word set to arb(ff). With 4 Gb connections, set the fill word to idle. Refer to vendor documentation for your switch regarding commands to set the fill word for a single port. For a Brocade switch running Fabric OS version 6 or newer, the following command can be used to verify the configuration, including the fill word, for port 27:
portcfgshow 27
To set the fill word for port 27 to arb(ff) using the numeric mode notation,5 run the following command:
portcfgfillword 27 3
To set the fill word for port 27 to idle using the numeric mode notation, run the following command:
portcfgfillword 27 0
Refer to the section titled "Hardware-specific requirements for basic failover" in the HPE StoreEver Tape Libraries Failover User Guide for Brocade and Cisco firmware recommendations.
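When several tape ports need the same fill word, the per-port command above can be generated in a loop. The sketch below is a dry run (the port numbers are examples, not recommendations); review the printed commands before running them on a Brocade switch.

```shell
#!/bin/sh
# Dry-run: emit portcfgfillword commands for a list of ports.
# Mode 3 = arb(ff) with idle-arb fallback (see footnote 5); mode 0 = idle.
fillword_cmds() {
    mode="$1"; shift
    for port in "$@"; do
        echo "portcfgfillword $port $mode"
    done
}

fillword_cmds 3 24 25 26 27   # example 8 Gb tape drive ports: arb(ff)
fillword_cmds 0 28 29         # example 4 Gb tape drive ports: idle
```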
• The drive port FC topology must be in Fabric mode, and the switch side must be set to F-port or Auto Sense.
• The host FC port must have a physical path to both the first port and the secondary (passive) port on the FC drive.
• For basic data path failover with port zoning, the host FC port and both FC ports on the drive need to be within the same zone for failover to work.
• For basic data path failover with World Wide Port Name (WWPN) zoning, the host FC WWPN and a single FC port on the drive need to be in the zone.
• For basic control path failover with port zoning, the host FC ports and the FC ports on both the active and secondary drives chosen for basic control path failover need to be in the same zone.
• For basic control path failover with WWPN zoning, the host FC WWPN and the basic control path failover WWPN assigned to the library must be in the same zone. The library WWPN is not the same as the WWPN of the drive that is hosting the library.
• Hosts connecting to the library may need to be rebooted if the operating system does not support dynamic device detection.
• Applications on hosts may need to be reconfigured to recognize the new library World Wide Name (WWN).
Prerequisites for using LTO-7 failover or advanced data and control path failover
• For LTO-7 failover, when using two FC host bus adapters in a server, both FC HBAs must be from the same manufacturer. The LTO-7 failover driver does not work correctly if the HBAs are different.
• For advanced data path failover and LTO-7 failover, the host must have a physical path to both the first port and the secondary port on the FC drive. For full failover capabilities, the two drive FC ports should be connected to different switches, and the host FC ports should also be connected to the same two switches.
• All drive ports must be zoned in the respective switches.

For detailed information on using failover with HPE StoreEver Tape, refer to the HPE StoreEver Tape Libraries Failover User Guide.
5 Numeric mode 3 attempts hardware arbff-arbff (mode 1) first. If the attempt fails to go into active state, this command executes software idle-arb (mode 2). Mode 3 is preferable to modes 1 and 2, as it captures more cases.
Native backup commands
Native backup commands (see table 2) are limited in their ability to handle complicated backups and restores in multi-host SANs. They are not guaranteed to provide robust error handling or to increase performance throughput. Use of these commands in a user-developed script is not recommended with HPE StoreEver Tape Libraries or HPE StoreOnce and HPE StoreAll disk-based backup solutions in a shared storage environment.

Caution: Native backup commands do not support SCSI reserve/release; therefore, using backup commands or scripts during backup or restore operations could result in data loss in an environment where the devices used for backups are shared.
Table 2. Supported native commands

Supported utilities     HPE-UX   Solaris   AIX   Linux   Windows
Tape drive commands
  tar                   Yes      Yes       Yes   Yes     No
  dd (dump)             Yes      Yes       Yes   Yes     No
  pax                   Yes      Yes       Yes   Yes     No
  mt                    Yes      Yes       Yes   Yes     No
  make_tape_recovery    Yes      No        No    No      No
Library and auto-changer commands
  mc                    Yes      No        No    No      No
  mtx                   No       No        No    Yes     No
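For illustration only, given the caution above about shared devices, a minimal backup with the native commands from table 2 looks like the sketch below. The tape device path is an assumption; the sketch defaults TAPE to an ordinary file so it can run without a drive, and on a real drive you would rewind first with mt.

```shell
#!/bin/sh
# Minimal native-command backup sketch. Not recommended on shared SAN
# devices: these commands do not use SCSI reserve/release.
TAPE=${TAPE:-/tmp/native_backup_demo.tar}   # e.g. /dev/st0 on Linux
SRC=${SRC:-/etc/hosts}

# On a real drive, rewind first:  mt -f /dev/st0 rewind
tar -cf "$TAPE" "$SRC" 2>/dev/null   # write the archive to the device/file
tar -tf "$TAPE"                      # verify by listing the archive contents
```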
Linear Tape File System
The Linear Tape File System (LTFS) makes tape self-describing, file-based, and easy to use, while allowing users to use standard file operations to access, manage, and share files on tape with an interface that behaves like a hard disk. In addition, LTFS provides the ability to share data across platforms, as you would with a USB drive or memory stick. LTFS is currently supported on Windows, Mac, and Linux. HPE StoreOpen Standalone and HPE StoreOpen Automation are a set of utilities that provide easy installation, configuration, and management of a tape drive or library for use with LTFS. To use HPE StoreOpen, simply connect your tape drive or tape library to a supported host, following the information noted in the respective sections within this guide, prior to the installation of the HPE StoreOpen software. Information and download links for native LTFS drivers, source code, HPE StoreOpen Standalone, and HPE StoreOpen Automation can be found at hpe.com/storage/LTFS
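A typical LTFS session follows the pattern sketched below. This dry run only prints the commands; mkltfs and the ltfs mount command come from the open LTFS implementations, but the device path, mount point, and copied file are example assumptions, so consult the HPE StoreOpen documentation for exact usage on your platform.

```shell
#!/bin/sh
# Dry-run sketch of an LTFS workflow; nothing here touches a real drive.
# /dev/sg3 and /mnt/ltfs are placeholders.
ltfs_workflow() {
    echo "mkltfs -d /dev/sg3                    # format the cartridge for LTFS"
    echo "mkdir -p /mnt/ltfs"
    echo "ltfs -o devname=/dev/sg3 /mnt/ltfs    # mount the tape as a filesystem"
    echo "cp report.pdf /mnt/ltfs/              # ordinary file operations"
    echo "umount /mnt/ltfs                      # unmount; the index is written to tape"
}

ltfs_workflow
```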
Software utilities that may disrupt solution connectivity
Software utilities common to SAN environments can interfere with backup and restore operations. These utilities include system management agents, monitoring software, and tape drive and system configuration utilities. A list of known software utilities and the operating systems on which they are found is shown in table 3.

Caution: Use of software utilities during backup or restore operations could result in data loss.
Table 3. HBA and software utilities (found variously on Windows, HPE-UX, Linux, Solaris, and AIX)

HBA configuration utilities: Emulex OneCommand Manager (OCM) / HBAnyware; QLogic QConvergeConsole (QCC); QLogic Host Connectivity Manager (HCM); Broadcom Advanced Control Suite 3 (BACS3)

Other software utilities: HPE Library and Tape Tools utility; HPE Systems Insight Manager (SIM) management agents; System Administration Manager (SAM) (HPE-UX 11.23); System Management Homepage (SMH) (HPE-UX 11.31); SCSI Generic (SG) commands; System Management Interface Tool (SMIT); Windows Server® backup; Removable Storage Manager (RSM)6
FC/FCoE switch zoning recommendations
Due to complexities in multi-hosting tape devices on SANs, Hewlett Packard Enterprise strongly recommends using switch zoning tools to keep the backup, restore, and archive environment simple and less susceptible to the effects of "chatty," changing, or problematic SANs. Zoning provides a way for servers, disk arrays, and tape controllers to see only the hosts and targets they need to see and use.

The benefits of zoning include, but are not limited to:
• The potential to greatly reduce target and logical unit number (LUN) shifting
• Reducing stress on backup devices by polling agents
• Reducing the time it takes to debug and resolve anomalies in the backup, restore, and archive environment
• Reducing the potential for conflict with untested third-party products

6 Removable Storage Manager is no longer available as of Windows 7 and Windows 2008 R2.
Zoning may not always be required for configurations that are small or simple ie single switch or single inter-switch link (ISL) Zoning can be helpful in larger SANs for simplifying device discovery and reducing chatter between devices Hewlett Packard Enterprise recommends the following for determining how and when to use zoning
• Use zoning by HBA port. Zoning by HBA port is implemented by creating a specific zone for each server or host by World Wide Port Name (WWPN) and adding only those storage elements to be utilized by that host. Zoning by HBA port prevents a server from detecting any other devices or servers on the SAN, and it simplifies the device discovery process.
• Disk and tape on the same HBA is supported. For larger SAN environments, it is recommended to also add storage-centric zones for disk and backup targets. This type of zoning is done by adding overlapping zones with disk and backup targets separated. See figure 1 and figure 2 below.
• FC zoning can be implemented using physical switch port numbers, WWN IDs, or user-defined switch aliases. It is important to note that physical ports and aliases can change due to recabling or switch configuration restores, but WWN IDs do not. Hewlett Packard Enterprise recommends zoning using WWN IDs.
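As an illustration of zoning by HBA port with WWN IDs, a hypothetical Brocade Fabric OS-style session is sketched below (all alias, zone, and WWPN values are invented, and other switch vendors use different syntax, so treat this as illustrative only). The aliases here simply label WWPNs; zone membership still resolves to WWN IDs, per the recommendation above.

```text
alicreate "host1_hba0", "10:00:00:05:1e:00:00:01"       (server HBA port WWPN)
alicreate "tape_drv1",  "50:01:10:a0:00:00:00:10"       (tape drive port WWPN)
zonecreate "z_host1_hba0_tape", "host1_hba0; tape_drv1"
cfgcreate "bura_cfg", "z_host1_hba0_tape"
cfgsave                                                 (persist the configuration)
cfgenable "bura_cfg"                                    (activate the new config)
```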
The figures below represent example configurations but are not exhaustive
Figure 1. Storage-centric zoning, same HBA port (overlapping zones)
Figure 2. Storage-centric zoning, redundant paths. Also applies to dual-port HBAs and tape drives.
Configuration and operating system details
Windows Server (Windows Server 2008, Windows Server 2008 R2, Windows Server 2012, and Windows Server 2012 R2)
Installing the HBA device driver
All HPE ProLiant server software, firmware, and drivers for Windows servers can be updated using the latest HPE Service Pack for ProLiant (SPP) from the HPE support website.
1 From an Internet browser, go to hpe.com
2 Click on Support
3 Under Product Support click HPE Servers Storage and Networking
4 In the Enter product name or number box enter Service Pack for ProLiant
5 Click Get drivers, software & firmware
6 Select the Windows Server operating system version that is installed on the ProLiant system that will be updated
7 Expand Application (Entitlement Required) - System Management, then select the HPE Service Pack for ProLiant (American, International) hyperlink
8 Below the details for the software, you should notice several tabs. Select the Release Notes tab, then scroll to verify whether there are any Upgrade Requirements or to view additional information
9 Select the Installation Instructions tab to verify how to install the SPP. Be sure to copy or save the installation instructions
Note: A hyperlink to the HPE Service Pack for ProLiant Release Notes is provided within the Release Notes tab. The HPE Service Pack for ProLiant Release Notes provide detailed instructions regarding the SPP, including a summary of changes, compatibility details for migrating from an older version of the SPP, supported operating systems, requirements, component prerequisites, deployment instructions, and known limitations.
10 Click on the Obtain software hyperlink above the various tabs to download the Service Pack for ProLiant to your server
Note: To download the HPE Service Pack for ProLiant, you must have:
1 An HPE Passport account (a sign-in link is provided)
2 Either a warranty, an HPE Care Pack, or a support agreement linked to your HPE Support Center profile
Click on the various links that are provided for more information on how warranties, HPE Care Packs, and support agreements enable access to select downloads or site functions.
11 Booting your Windows Server to the SPP (offline mode) will allow you to upgrade firmware for any or all components that are flagged as requiring an upgrade, while booting to the operating system and then running the SPP (online mode) will allow you to install any or all Hewlett Packard Enterprise related drivers and software packages
Note: Please refer to the HPE Service Pack for ProLiant Release Notes, which are referenced above, if any issues are encountered when installing the SPP. Specifically, review the sections Deployment Instructions and Components Changes.
12 A reboot might be required following the SPP installation
To manually install the latest HPE-supported Brocade, Emulex, or QLogic driver kit from the HPE support website:
1 From an Internet browser, go to hpe.com
2 Click on Support
3 Under Product Support click HPE Servers Storage and Networking
4 In the Enter product name or number box, enter your HBA model (for example, AP770A, AJ763A, AJ764A, CN1000Q, SN1000Q, CN1100E, SN1100E, or NC553m) and click Go
5 Click Get drivers, software & firmware
6 Select the Windows Server operating system version that is installed on the ProLiant system in which the HBA is installed
7 Expand Driver - Storage Fibre Channel and click on the appropriate driver hyperlink (if more than one version of the driver is listed, verify the latest supported version listed in the latest HPE Data Agile BURA Compatibility Matrix)
8 Below the details for the driver, you should notice several tabs. Select the Release Notes tab, then scroll to verify whether there are any Upgrade Requirements or Supported Devices and Features, or to view additional information
9 Select the Installation Instructions tab to verify how to install the driver. Be sure to copy or save the installation instructions
10 Click on the Download tab to copy the file to your server
11 A reboot might be required following the driver installation
Installing and configuring Microsoft iSCSI Initiator
Microsoft iSCSI Initiator is installed natively on Windows Server 2008, Windows Server 2008 R2, Windows Server 2012, and Windows Server 2012 R2. On these operating systems, no installation steps are required.
To connect to an iSCSI target device by using Quick Connect:
1 Click Start, type iSCSI in Start Search, and then, under Programs, click iSCSI Initiator
2 On the User Account Control page, click Continue
3 If this is the first time that you have launched Microsoft iSCSI Initiator, you receive a prompt that says the Microsoft iSCSI service is not running. You must start the service for Microsoft iSCSI Initiator to run correctly. Click Yes to start the service. The Microsoft iSCSI Initiator Properties dialog box opens and the Targets tab is displayed
4 On the Targets tab, type the name or the IP address of the target device in the Quick Connect text box and then click Quick Connect. The Quick Connect dialog box is displayed
5 If multiple targets are available at the target portal that is specified, a list is displayed. Click the desired target and then click Connect
6 Click Done
Technical white paper Page 15
To connect to an iSCSI target by using advanced settings:
1 Click Start, type iSCSI in Start Search, and then, under Programs, click iSCSI Initiator
2 On the User Account Control page, click Continue
3 If this is the first time that you have launched Microsoft iSCSI Initiator, you receive a prompt that says the Microsoft iSCSI service is not running. You must start the service for Microsoft iSCSI Initiator to run correctly. Click Yes to start the service. The Microsoft iSCSI Initiator Properties dialog box opens and the Targets tab is displayed
4 Click the Discovery tab
5 To add the target portal, click Discover Portal, and then, in the Discover Portal dialog box, type the IP address or name of the target portal to connect to. If desired, you can also type an alternate TCP port to be used for the connection
6 Click OK
Installing the HPE StoreEver Tape drivers
Both the HPE tape and HPE changer drivers for Windows must be installed before the advanced path failover drivers are installed. The tape and changer drivers bundle can be downloaded and then installed as follows:
1 Go to hpe.com/storage/tapecompatibility
2 Under Tape tools select HPE StoreEver Tape Drivers
3 A webpage will open with RECOMMENDED HPE StoreEver Tape Drivers for Windows displayed
4 Below the details for the driver you should notice several tabs Select the Release Notes tab then scroll to verify if there are any Upgrade Requirements or to view additional information
5 Select the Installation Instructions tab to verify how to install the driver Be sure to copy or save the installation instructions
6 Click on the Download tab then save the file
7 Follow the installation instructions from step 5 to install the HPE tape and HPE changer drivers
8 After installation of the tape and changer drivers, use Windows Device Manager to confirm that all of the configured paths are accessible to the operating system. If the expected number of paths are not available, check the host and SAN configuration. After all of the expected paths are available to the host, the advanced path failover drivers can be installed
Installing the HPE StoreEver Tape advanced path failover drivers
Windows (2008 R2, 2012, and 2012 R2)
1 Go to hpe.com/support/storage
2 Select Tape Storage
3 Click Enterprise Class Tape Libraries (for ESL G3) or Tape Libraries (for MSL6480)
4 Click HPE StoreEver ESL G3 Tape Libraries or HPE StoreEver MSL6480 Tape Library
5 With the Download options tab selected, click Get drivers, software & firmware
6 For the ESL G3 select your product For MSL6480 skip to the next step
7 Under Operating systems select OS Independent
8 Click Driver - Storage Tape
9 Click Obtain software for the HPE StoreEver High Availability Failover Driver for your Windows operating system
10 Click Select to continue An HPE Passport account is required
11 After logging in using your HPE Passport, complete the required fields, then read and accept the software license agreement. Click Next
12 On the following page select the Download tab for the HPE StoreEver High Availability Failover Driver for the version of Windows that is installed on your server
13 If you saved the file double-click on the file to launch the installer for the Tape Upper Bus Storage Filter driver
14 Restart when requested
15 After the system restarts, the installer will continue installing the Tape Multi-Path Intermediate Class driver. The installation process creates the directory C:\Program Files\Hewlett-Packard\Failover
For more detailed information related to installing the HPE StoreEver Tape advanced path failover drivers for Windows or the HPE StoreEver Tape drivers for Windows, refer to the HPE StoreEver Tape Libraries Failover User Guide, specifically the section titled Installing and using Windows advanced path failover drivers.
Windows Server best practices
Persistent binding
Target and LUN shifting can occur with Windows operating systems when disk or tape devices are connected or disconnected, a device is busy during discovery, or a device failure occurs causing that device to no longer be seen on the SAN. Hewlett Packard Enterprise strongly recommends using zoning by HBA port to resolve target and LUN shifting. Refer to the earlier section FC/FCoE switch zoning recommendations for more on zoning by HBA port.
The method in which the Windows operating system enumerates devices is the cause of most target and LUN shifting. Windows enumerates devices as they are discovered during a scan sequence. They are enumerated with device handles such as TAPE0, TAPE1, and so on. The Windows device scan sequence goes in the order of bus, target, and LUN:
• Bus is the HBA PCI slot
• Target is representative of a WWN
• LUN is representative of a device behind the WWN
The order of discovery is:
• The lowest bus (bus 0)
• Target 0 on bus 0
• The LUNs on target 0
• Target 1, and so on, until all targets connected to that HBA are discovered
The process continues on to the next HBA and its targets and LUNs. A common cause of device shifting is a busy tape device. A busy tape device cannot respond in time for Windows to enumerate it. The device is essentially skipped in the enumeration sequence, thus shifting all other device numbers.
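The bus-then-target-then-LUN scan order described above can be illustrated with a short shell sketch (the buses, targets, and LUNs are hypothetical; this simulates the scan order rather than querying real hardware):

```shell
#!/bin/sh
# enumerate: emulate the Windows scan order (bus, then target, then LUN)
# to show how handles such as TAPE0, TAPE1 are assigned in sequence.
enumerate() {
  n=0
  for bus in 0 1; do
    for target in 0 1; do
      for lun in 0 1; do
        echo "TAPE$n <- bus $bus, target $target, LUN $lun"
        n=$((n + 1))
      done
    done
  done
}

enumerate
```

If the device that would have received TAPE0 is busy and gets skipped, every later device shifts down one handle, which is exactly the shifting behavior described above.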
Note: Emulex OneCommand Manager Application Kit and QLogic QConvergeConsole Utility both have proprietary persistent binding options. Review the appropriate vendor documentation for details.
Data protection and archiving software can also communicate with a tape device by using the Windows device name. As noted, the device name may shift and cause a problem for the data protection and archiving software. Some data protection and archiving software monitors for this condition and will adjust accordingly. Other data protection and archiving software must wait for a server reboot and subsequently scan for devices. Alternatively, the data protection and archiving software must be manually reconfigured to match the current device list. If your data protection and archiving software requires persistent device mapping, use the software's device configuration wizard to ensure proper configuration.
Red Hat and SUSE Linux Server
RHEL 5 Update 11 (AS/ES/WS), RHEL 6 Update 6 (AS/ES/WS), RHEL 7 (AS/ES/WS), SLES 11 SP3 (x86/x64), SLES 12
Note: Hewlett Packard Enterprise recommends installing the kernel development option (source code) when installing any Linux server. Availability of source code ensures the ability to install additional device support software that will be compiled into the kernel.
Installing the HBA drivers
All HPE ProLiant server software, firmware, and drivers can be updated using the latest SPP from the HPE support website.
1 From an Internet browser, go to hpe.com
2 Click on Support
3 Under Product Support click HPE Servers Storage and Networking
4 In the Enter product name or number box enter Service Pack for ProLiant
5 Click Get drivers, software & firmware
6 Select the Linux Server operating system version that is installed on the ProLiant system that will be updated
7 Under Application (Entitlement Required) - System Management, select the HPE Service Pack for ProLiant (American, International) hyperlink
8 Below the details for the software you should notice several tabs Select the Release Notes tab then scroll to verify if there are any Upgrade Requirements or to view additional information
9 Select the Installation Instructions tab to verify how to install the SPP Be sure to copy or save the installation instructions
Note: A hyperlink to the HPE Service Pack for ProLiant Release Notes is provided within the Release Notes tab. The HPE Service Pack for ProLiant Release Notes provide detailed instructions regarding the SPP, including a summary of changes, compatibility details for migrating from an older version of the SPP, supported operating systems, requirements, component prerequisites, deployment options, and known limitations.
10 Click on the Obtain software hyperlink above the various tabs to download the Service Pack for ProLiant to your server
Note: To download the HPE Service Pack for ProLiant, you must have:
1 An HPE Passport account (a sign-in link is provided)
2 Either a warranty, an HPE Care Pack, or a support agreement linked to your HPE Support Center profile
Click on the various links that are provided for more information on how warranties, HPE Care Packs, and support agreements enable access to select downloads or site functions.
11 Booting your server to the SPP (offline mode) will allow you to upgrade firmware for any or all components that are flagged as requiring an upgrade, while booting to the OS and then running the SPP (online mode) will allow you to install any or all Hewlett Packard Enterprise related drivers and software packages
Note: Please refer to the HPE Service Pack for ProLiant Release Notes, which are referenced above, if any issues are encountered when installing the SPP. Specifically, review the sections Deployment Instructions and Components Changes.
12 A reboot might be required following the SPP installation
To manually install the latest HPE-supported Brocade, Emulex, or QLogic driver kit from the HPE support website:
1 From an Internet browser, go to hpe.com
2 Click on Support
3 Under Product Support click HPE Servers Storage and Networking
4 In the Enter product name or number box, enter your HBA model (for example, AP770A, AJ763A, AJ764A, CN1000Q, SN1000Q, CN1100E, SN1100E, or NC553m) and click Go
5 Click Get drivers, software & firmware
6 Select the Linux Server operating system version that is installed on the ProLiant system in which the HBA is installed
7 Expand Driver - Storage Fibre Channel, then click on the appropriate driver hyperlink (if more than one version of the driver is listed, verify the latest supported version listed in the latest HPE Data Agile BURA Compatibility Matrix)
8 Below the details for the driver, you should notice several tabs. Select the Release Notes tab, then scroll to verify whether there are any Upgrade Requirements or to view additional information
9 Select the Installation Instructions tab to verify how to install the driver. Be sure to copy or save the installation instructions
10 Click on the Download tab to copy the file to your server
11 A reboot might be required following the driver installation
Installing the Linux OPEN-iSCSI module
You can install and use the iscsi-initiator-utils package (Red Hat®) or the open-iscsi module (SUSE). Download and install either of the packages using your distribution's package manager (yum or YaST, for example). Detailed instructions for iscsiadm can be found in the iscsiadm man page.
Prior to discovering available iSCSI target devices on an HPE Storage System from a Linux server, the target requires the Linux server iSCSI initiator name. This name is found in the /etc/iscsi/initiatorname.iscsi file.
Once the iSCSI initiator name has been determined, discover the available iSCSI target devices on an HPE Storage System by typing the following command, where x.x.x.x is the IP address of the HPE Storage System:
iscsiadm --mode discovery --type sendtargets --portal x.x.x.x
To connect to the HPE Storage System targets, type the following command for each discovered target, where target_name is the name returned by the iscsiadm discovery:
iscsiadm --mode node -T target_name --login --portal x.x.x.x
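The discovery and login steps can be combined into a loop; a hedged sketch follows (the parse_targets helper and the 10.0.0.5 portal address are assumptions, so verify the discovery output format on your distribution before using it):

```shell
#!/bin/sh
# parse_targets: extract target IQNs from `iscsiadm --mode discovery` output,
# which typically looks like:
#   10.0.0.5:3260,1 iqn.2000-05.com.example:store1
parse_targets() {
  awk '{print $2}'
}

# Sketch of the full sequence (commented out because it needs iscsiadm,
# root privileges, and a reachable portal; PORTAL is a placeholder):
# PORTAL=10.0.0.5
# iscsiadm --mode discovery --type sendtargets --portal "$PORTAL" |
#   parse_targets |
#   while read -r target; do
#     iscsiadm --mode node -T "$target" --login --portal "$PORTAL"
#   done
```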
Storage HBAs with Linux servers
Hewlett Packard Enterprise supports the Linux in-box driver (the driver supplied with the operating system distribution). However, not all hardware configurations support the in-box driver. To determine if your configuration supports the in-box driver, see the HBAs, CNAs, Flex Fabric Adapters and Server LOMs Support Matrix (Linux, Citrix, VMware, and Windows), which is available on the HPE SPOCK website. You must sign up for an HPE Passport to access SPOCK.
Whether you are using the in-box drivers or the out-of-box drivers Hewlett Packard Enterprise recommends that you install the HPE Fibre Channel Enablement Kit because it provides additional libraries and configuration utilities to enable HPE Fibre Channel Storage Arrays to work properly with Linux
Note If you are using any HPE management applications you need the HBA API libraries that come with the HPE-fc-enablement RPM
Downloading and installing the Fibre Channel Enablement Kit for Linux
Note: There has been a change to the enablement kits released after 29 April 2014; they are now vendor-specific kits. Hewlett Packard Enterprise recommends that you uninstall any previous kits prior to installation of the latest version of the enablement kit.
1 From an Internet browser, go to hpe.com
2 Click on Support
3 Under Product Support click HPE Servers Storage and Networking
4 In the Enter product name or number box, enter your HBA model (for example, AP770A, AJ763A, AJ764A, CN1000Q, CN1100E, or NC553m) and click Go
5 Click Get drivers, software & firmware
6 Select the Linux Server operating system version of the ProLiant system in which the HBA is installed
7 Select the Software - Storage Controllers - FC HBA hyperlink and click on the HPE Fibre Channel Enablement Kit for Linux (American, International)
8 Below the details for the driver you should notice several tabs Select the Release Notes tab then scroll to verify if there are any Upgrade Requirements or to view additional information
9 Select the Installation Instructions tab to verify how to install the HPE Fibre Channel Enablement Kit for Linux Be sure to copy or save the installation instructions
10 Click on the Download tab to copy the RPM file to your server
11 Browse to the directory that you downloaded the RPM to
12 Follow the Installation Instructions that you copied or saved in step 9
13 A reboot is required after the installation for the updates to take effect and hardware stability to be maintained
14 Verify that the host has successfully discovered all the expected devices (tape drives, library robotic devices, and disk-based backup devices) using one of the following methods:
– Review the devices listed from running the command cat /proc/scsi/scsi
– Review the output from the sg_inq command (which requires that sg3_utils is installed) for any of the /dev/sg devices listed in the output of the sg_map command. See figure 3 for an example
Figure 3. Verifying devices using the sg_map and sg_inq commands
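As a quick sanity check of the verification step, the /proc/scsi/scsi listing can be filtered for tape and medium-changer entries; a minimal sketch follows (the count_tape_devices helper name is hypothetical):

```shell
#!/bin/sh
# count_tape_devices: count Sequential-Access (tape) and Medium Changer
# entries in /proc/scsi/scsi-style output read from stdin.
count_tape_devices() {
  grep -cE 'Type: +(Sequential-Access|Medium Changer)'
}

# Typical use on a Linux host with tape devices attached:
# count_tape_devices < /proc/scsi/scsi
```

If the count does not match the number of drives and robotic devices presented to the host, recheck the zoning and SAN configuration before installing the failover drivers.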
Installing the HPE StoreEver Tape advanced path failover drivers
Red Hat Enterprise Linux Server 6.2 (x86_64), 6.3 (x86_64), 6.4 (x86_64), 6.5 (x86_64), and 6.6 (x86_64)
The advanced path failover drivers for Linux replace the normal SCSI Tape and SCSI Generic drivers. The advanced path failover drivers for Linux pass all SCSI commands for devices that do not support advanced path failover through the same code path that is followed when the standard drivers are loaded, and route commands for devices that do support failover through the new PFO driver.
1 Go to hpe.com/support/storage
2 Select Tape Storage
3 Click Enterprise Class Tape Libraries (for ESL G3) or Tape Libraries (for MSL6480)
4 Click HPE StoreEver ESL G3 Tape Libraries or HPE StoreEver MSL6480 Tape Library
5 In the Download options tab, click Get drivers, software & firmware
6 For the ESL G3 select your product For MSL6480 skip to the next step
7 Under Operating systems select OS Independent
8 Expand Driver - Storage Tape, then select the appropriate driver hyperlink
9 Click Obtain software for the HPE StoreEver High Availability Failover Driver for your operating system
10 Click on Select. An HPE Passport account (a sign-in link is provided) is required
11 After logging in using your HPE Passport, complete the required fields, then read and accept the software license agreement. Click Next
12 On the following page select the Download tab for the HPE StoreEver High Availability Failover Driver for the version of Red Hat that is installed on your server
13 To install the drivers, run the following command:
rpm -ivh <filename>.rpm
14 In some cases the server will need to be rebooted to complete the installation Check the instructions provided by the RPM file output and reboot the server if requested
15 The driver revision number indicates the build date of the driver and can be viewed by running:
cat /proc/scsi/sg/version
16 You can view the status of a device that is controlled by the failover driver by reading a file in the /sys file system. For example, to see the path status for /dev/sg3:
cat /sys/class/pfo/pfo3/paths
Enabling advanced path failover on a device while the driver is running
If a device has the advanced path failover feature disabled when advanced path failover is enabled, the device will reset itself, removing the old /dev file. When the device comes back up, it will be recognized as an advanced path failover device and will then operate normally as an advanced path failover device. It may not have the same /dev file name as before the change.
Disabling advanced path failover on a device while the driver is running
Disabling advanced path failover while a device is running is not recommended because the paths will not be cleanly removed and then reassociated. If advanced path failover is disabled on any device, the Linux server will need to be rebooted. When possible:
1 Power down the Linux server cleanly
2 Disable advanced path failover on the device
3 Boot the Linux server
For more detailed information related to installing the HPE StoreEver Tape advanced path failover drivers for Red Hat Linux Servers, refer to the HPE StoreEver Tape Libraries Failover User Guide, specifically the section titled Installing and using Linux advanced path failover drivers.
Additional SG device files
In most environments, the default number of SG device files is sufficient to support all of the required devices. In larger SAN environments, if the default number of SG device files is fewer than the combined total of disk and tape devices being allocated to the server, then additional device files need to be created. SG device files are preferable to the standard tape (st) device files because the st driver's SCSI timeout values may not be sufficient in length to support some tape operations.
To create additional SG device files, perform the following:
mknod /dev/sgX c 21 X
where X signifies the number of a device file that does not already exist. For additional command options, see the mknod man page.
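A cautious way to script this is to generate the mknod commands first and review them before running them (the sg_mknod_cmds helper is hypothetical; actually creating device nodes requires root):

```shell
#!/bin/sh
# sg_mknod_cmds: print the mknod commands needed to create any missing SG
# device files sg0..sgN (dry run; review the output, then pipe it to `sh`
# as root to apply).
# $1 = device directory (normally /dev), $2 = highest sg number needed.
sg_mknod_cmds() {
  dir=$1
  max=$2
  X=0
  while [ "$X" -le "$max" ]; do
    # SG devices are character devices with major number 21, minor X
    [ -e "$dir/sg$X" ] || echo "mknod $dir/sg$X c 21 $X"
    X=$((X + 1))
  done
}

# Example (as root): sg_mknod_cmds /dev 31 | sh
```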
Red Hat and SUSE Linux Server best practices
Rewind commands being issued by rebooted Linux hosts
Device discovery that occurs as part of a normal Linux server boot operation can cause a SCSI rewind command to be issued to all attached tape drives. If the data protection and archiving software does not employ SCSI reserve/release and the rewind command is received while the tape drive is busy writing, the result is a corrupted tape header and an unusable piece of backup media.
This issue could manifest itself in several ways:
• A failed verify operation
• A failed restore operation
• The inability to mount a tape and read the tape header
If a backup verification is not completed the normal backup process might not detect that an issue exists
Tape devices not discovered and configured across server reboots
Tape drives can disappear from Linux servers after the host reboots when using older versions of the HBA drivers. Adding the line "hp_rescan -a" to /etc/rc.d/rc.local resolves the issue. The hp_rescan utility was previously included and installed with older versions of the HPE Linux FCoE/FC Driver Kit.
Note: The latest versions of the Fibre Channel Enablement for Linux kits no longer include the HPE fibreutils package, which contains the hp_rescan utility. The fibreutils package can be manually downloaded and installed by following steps 1-6 from the section Downloading and installing the Fibre Channel Enablement Kit for Linux. For step 5, select Software - Storage Controllers - FC HBA.
This issue, which affects Red Hat installations and, intermittently, some SUSE Linux installations, is understood to be an issue with the mid-layer SCSI driver and its interaction with SCSI-2 tape automation products. The permanent resolution to this issue is to upgrade to the latest FC driver kit.
Enable iSCSI target devices to remain persistent across system reboots
To enable the iSCSI target devices to remain persistent across system reboots, the open-iscsi service must be configured to run at system startup. This can be done by issuing the following commands.
For Red Hat 7 and SUSE 12:
systemctl enable iscsid.service
systemctl restart iscsid.service
For earlier versions of Red Hat:
chkconfig iscsi on
For earlier versions of SUSE:
chkconfig open-iscsi on
To verify that this configuration change has been accepted, run the following command.
For Red Hat 7 and SUSE 12:
systemctl -a | grep iscsi
iscsi-shutdown.service   error    inactive  dead     Logout off all iSCSI sessions on shutdown
iscsi.service            loaded   inactive  dead     Login and scanning of iSCSI devices
iscsid.service           loaded   active    running  Open-iSCSI
iscsiuio.service         loaded   active    running  iSCSI UserSpace IO driver
iscsid.socket            loaded   active    running  Open-iSCSI iscsid Socket
iscsiuio.socket          loaded   active    running  Open-iSCSI iscsiuio Socket
For earlier versions of Red Hat:
chkconfig --list iscsi
iscsi  0:off  1:off  2:off  3:on  4:off  5:on  6:off
For SUSE servers:
chkconfig --list open-iscsi
open-iscsi  0:off  1:off  2:off  3:on  4:off  5:on  6:off
LUNs shifting after reboot
The Linux 2.6 kernel and later enhanced the management of attached devices through the introduction of udev. The udev device manager provides users with a persistent naming process for all devices across reboots. For details on how to configure udev, refer to the appropriate Linux distribution documentation.
If your data protection and archiving software requires persistent device mapping use the softwarersquos device configuration wizard to ensure proper configuration
Recommended changes to queue depth and timeout values
Changes to queue depth and timeout values are recommended when operating HPE StoreEver ESL G3 Tape Libraries and HPE StoreEver MSL Tape Libraries with Linux-based hosts. Recommended changes are as follows:
• Queue depth
The queue depth when operating the MSL6480 or MCB Version 1 ESL G3 libraries should be set to one (1), as there is only one active robot to complete Move Medium commands. With dual-robot MCB Version 2 ESL G3 libraries, the queue depth should be set to two (2), as the library has two active robots that can complete Move Medium commands.
HPE LTO drives are capable of handling command queues of four or five commands, but if hosts continue to send commands past that amount, the drive (or the library being hosted by that drive) will start to return status messages saying that the queue is full and the host should wait 500 ms. If the host doesn't stop sending commands at this point, the delays in returning status for commands can be long enough that the drive appears hung. As such, care should be taken to ensure that the queue depth is the correct length to avoid this scenario, preferably by using the recommended queue depths provided above.
With Linux-based hosts, this command can let you see what the queue depth is set to for each generic SCSI device:
find /sys/class/scsi_generic/*/device/queue_depth -exec grep -H "" {} \;
If required, the sg_inq command can help correlate SCSI generic mapping to specific devices:
sg_inq /dev/sg<X>
Note: <X> is the sg number provided in the output from the previous command.
• Timeouts
Most applications increase the timeout value for motion commands but will continue to rely on the default timeout value for Read Element Status commands. This can especially be of concern in larger partitioned libraries, where multiple Read Element Status commands to the different partitions are outstanding and the default timeout value is encountered before a response is provided. Depending on whether udev rules are in effect or not, the default timeout value on Linux-based hosts tends to be either thirty (30) or sixty (60) seconds. Given all of the above, it is recommended that the default timeout value be changed to twenty minutes for all Linux-based hosts operating HPE LTO drives in HPE StoreEver ESL G3 Tape Libraries and HPE StoreEver MSL Tape Libraries, in order to allow multiple commands to complete successfully without hitting the default timeout value.
With Linux-based hosts, this command can let you see what the default timeout value is currently set to:
find /sys/class/scsi_generic/*/device/timeout -exec grep -H "" {} \;
If required, the sg_inq command can help correlate SCSI generic mapping to specific devices:
sg_inq /dev/sg<X>
Note: <X> is the sg number provided in the output from the previous command.
The detailed procedure for making the recommended changes to the command queuing and default timeout values for Linux-based hosts can be viewed in the following Engineering Advisory: HPE StoreEver ESL G3 Tape Libraries, MSL Tape Libraries, and 1/8 G2 Autoloaders - Recommended Changes to Queue Depth and Timeout Values With Linux-Based Hosts.
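As a sketch of applying the recommended values (the tune_sg_device helper name is hypothetical; writing these sysfs files requires root, and the exact procedure should follow the Engineering Advisory referenced above):

```shell
#!/bin/sh
# tune_sg_device: apply the recommended queue depth and timeout to one
# generic SCSI device's sysfs directory. Values follow the guidance above:
# queue depth 1 (single-robot library) and a 20-minute timeout.
# $1 = sysfs device directory, e.g. /sys/class/scsi_generic/sg3/device
tune_sg_device() {
  echo 1 > "$1/queue_depth"    # use 2 instead for a dual-robot ESL G3
  echo 1200 > "$1/timeout"     # 20 minutes, expressed in seconds
}

# Example (as root, on a host with the tape devices attached):
# for d in /sys/class/scsi_generic/sg*/device; do tune_sg_device "$d"; done
```

Note that values written to sysfs do not persist across reboots; a udev rule or boot script is typically used to reapply them.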
HP-UX Server
Installing HBA drivers in the kernel
HP-UX 11i v2 (11.23, IA-64)
1 The drivers schgr, sctl, and stape must all be installed in the kernel. To see if these drivers are installed, enter the following command:
/usr/sbin/kcmodule schgr sctl stape
The following example shows output from kcmodule where the stape driver is not installed:
Module   State    Cause
schgr    static   explicit
sctl     static   depend
stape    unused
If one or more of the above drivers is in the unused state, they must be installed in the kernel. If they are all installed (static state), proceed to the next section, Final host configurations.
2 Use kcmodule to install modules in the kernel. For example, to install the stape module, run the following command:
/usr/sbin/kcmodule stape=static
Enter yes to back up the current kernel configuration file and initiate the new kernel build
3 Reboot the server to activate the new kernel:
cd /
/usr/bin/shutdown -r now
HP-UX 11i v3 (11.31, IA-64)
1 The drivers schgr, sctl, stape, eschgr, esctl, and estape must all be installed in the kernel. To see if these drivers are installed, enter the following command:
/usr/sbin/kcmodule sctl esctl schgr eschgr stape estape
The following example shows output from kcmodule where the stape driver is not installed
Module State Cause sctl static best sctl static depend schgr static best eschgr static best stape unused estape static best
If one or more of the above drivers is in the unused state they must be installed in the kernel If they are all installed (static state) proceed to the next section Final host configurations
2 Use kcmodule to install modules in the kernel For example to install the stape module use the following command
usrsbinkcmodule stape=static
Enter yes to back up the current kernel configuration file and initiate the new kernel build
3 Reboot the server to activate the new kernel
cd
usrbinshutdown -r now
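The step-1 check can also be scripted. The following sketch (an illustration, not part of the HPE procedure) parses kcmodule-style output and prints the names of any modules still in the unused state, so a wrapper script could decide whether a kernel rebuild is required:

```shell
#!/bin/sh
# Sketch: read "Module State Cause" output (as produced by kcmodule) on
# stdin and print each module whose State column is "unused".
find_unused_modules() {
    awk 'NR > 1 && $2 == "unused" { print $1 }'
}
```

On a real host this would be fed from /usr/sbin/kcmodule sctl esctl schgr eschgr stape estape; an empty result means all drivers are already static.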
Installing the HPE-UX iSCSI Software Initiator
The iSCSI Software Initiator is located at the HPE Software Depot:
1. Go to software.hp.com.
2. Enter iSCSI Software Initiator in the Search Software Depot box located on the upper right side of the website.
3. When the search results show iSCSI Software Initiator, click Select. An HPE Passport account (a sign-in link is provided) is required.
4. After logging in using your HPE Passport, complete the required fields, scroll down, then read and accept the software license agreement for the order. Click Next.
5. Under Documentation, click the Download tab for the Installation Instructions to download instructions for using the Software Distributor tool to install the iSCSI Software Initiator.
6. Under Software, click the Download tab for the iSCSI Software Initiator version that you would like to download.
7. After installing the iSCSI Software Initiator and rebooting, you can verify that the installation was successful by running the following command:
swlist iSCSI-00
If the HPE-UX iSCSI Software Initiator is installed correctly, the output will be:
HPE-UX 11.23:
Initializing...
Contacting target "localhost"...
Target: localhost
iSCSI-00            B.11.23.03e  HPE-UX iSCSI Software Initiator
iSCSI-00.ISCSI-SWD  B.11.23.03e  HPE-UX iSCSI Software Initiator
HPE-UX 11.31:
Initializing...
Contacting target "localhost"...
Target: localhost
iSCSI-00            B.11.31.01  HPE-UX iSCSI Software Initiator
iSCSI-00.ISCSI-SWD  B.11.31.01  HPE-UX iSCSI Software Initiator
Final host configurations
1. Run ioscan to verify that the host detects the tape devices:
ioscan
For HPE-UX 11.23 legacy device special files (DSFs) or persistent DSFs, run the following commands:
ioscan -fnkC tape
ioscan -fnkC autoch
2. For HPE-UX 11.31 persistent DSFs, run the following commands:
ioscan -fnNkC tape
ioscan -fnNkC autoch
Note: Some data protection and archiving software might not currently support HPE-UX 11.31 persistent DSFs for tape. Review your data protection and archiving software documentation for more information.
3. To verify that the host detects iSCSI devices, issue the ioscan command as follows for HPE-UX 11.23:
ioscan -fnC iscsi
Issue the ioscan command as follows for HPE-UX 11.31:
ioscan -fnNC iscsi
If the software is installed correctly, the generated output will look similar to this:
Class  I  HW Path  Driver  SW State  HW Type  Description
=====================================================================
iscsi  0  255/0    iscsi   CLAIMED   VIRTBUS  iSCSI Virtual Node
4. If no device files have been installed, enter the following commands:
insf -C tape -e
insf -C autoch -e
The command line tool for configuring the HPE-UX iSCSI Software Initiator is iscsiutil. Detailed instructions for iscsiutil can be found in the iscsiutil man pages. If using iscsiutil to configure the HPE-UX iSCSI Software Initiator, add the path for iscsiutil and other iSCSI executables to the root path:
PATH=$PATH:/opt/iscsi/bin
You should now be able to find the iSCSI initiator node for the HPE-UX host:
iscsiutil -l
To discover available iSCSI target devices on an HPE Storage System, type the following command, where xxxx is the IP address of the HPE Storage System:
iscsiutil -aI xxxx
Installing the HPE StoreEver Tape advanced path failover drivers for HPE-UX 11.31
Advanced path failover for HPE-UX is implemented by updating HPE-UX drivers to support advanced path failover with the LTO-6 tape drives. The drivers function as both failover and non-failover drivers.
The updated drivers are:
• HPE-UX tape driver (estape): used for data path failover
• HPE-UX media changer driver (eschgr): used for control path failover
• HPE-UX SCSI stack driver (esctl): used for data path and control path failover
To download and install the HPE StoreEver Tape advanced path failover drivers for HPE-UX 11.31:
1. Get the latest HPE-UX patches from h20566.www2.hp.com/portal/site/hpsc/patch/home.
Note: To access and download HPE-UX patches you must have:
1. An HPE Passport account (a sign-in link is provided).
2. An active HPE support agreement linked to your HPE Support Center profile. The active Hewlett Packard Enterprise support agreement must:
- Cover the specific HPE Operating Systems linked to your HPE Support Center user profile
- Include software updates or previous version support privileges
Links are provided to view your current patch access privileges or to contact Hewlett Packard Enterprise.
2. To locate the patches, search for estape, eschgr, and esctl or the patch number, and then check whether a superseding patch is listed.
3. To install the advanced path failover drivers, use the standard HPE-UX kernel patch installation process to install the following patches on the HPE-UX Servers running HPE-UX 11.31:
- HPE-UX tape driver patch (estape): PHKL_43680 or superseding patch
- HPE-UX media changer driver patch (eschgr): PHKL_43681 or superseding patch
- HPE-UX SCSI stack (mass storage stack) driver patch (esctl): PHKL_43819 or superseding patch
4. The server will automatically reboot as part of the installation process.
You can use ioscan to view the tape and library (media changer) devices connected to the HPE-UX Server. The device special file (DSF) is listed as the last item in the description:
ioscan -knNfC tape
ioscan -knNfC autoch
Finding the lockdown path
The load-balance policy used to route data on multiple paths to a tape drive or library is called the "path_lockdown" policy. Use the scsimgr get_info command to see the current lockdown path for a library. For example:
scsimgr get_info -D /dev/rchgr/autoch35
STATUS INFORMATION FOR LUN /dev/rchgr/autoch38
...
LUN path used when policy is path_lockdown = 0/0/0/9/0/0/1.0x50014382c6e4f009.0x1000000000000
scsimgr get_attr -D /dev/rtape/tape28_BEST
SCSI ATTRIBUTES FOR LUN /dev/rtape/tape28_BEST
name = lpt_lockdown
current = 0/0/0/9/0/0/0.0x100000e00222a6c1.0x2000000000000
default =
saved =
For additional information, see the HPE-UX man pages scsimgr(1M), ioscan(1M), mknod(2), mksf(1M), and rmsf(1M).
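When checking many devices, the lockdown path can be pulled out of the scsimgr get_info output with a one-line filter. This is an illustrative sketch only; it assumes the field label appears exactly as in the example above:

```shell
#!/bin/sh
# Sketch: print only the lockdown path from scsimgr get_info output read
# on stdin (assumes the "LUN path used when policy is path_lockdown ="
# label shown in the example above).
get_lockdown_path() {
    sed -n 's/^LUN path used when policy is path_lockdown = //p'
}
```

On a live host it would be used as: scsimgr get_info -D /dev/rchgr/autoch35 | get_lockdown_path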
Troubleshooting advanced path failover for HPE-UX 11.31
Advanced path failover errors are logged in the /var/adm/syslog/syslog.log file as part of the default SCSI I/O tracing function of HPE-UX. You can use standard file viewing commands, including cat, vi, dmesg, and tail -f, to view syslog.log.
Enabling or disabling advanced path failover for HPE-UX 11.31
Advanced path failover is disabled by default. When advanced path failover is disabled, the driver operates as if the device is not capable of using the advanced path failover feature.
When advanced failover is enabled for the library or tape drive, the device resets itself and must be opened using the device special file before the driver will recognize it as an advanced path failover device and use the failover features of the driver. Opening the device is generally done by the host applications.
You can enable or disable advanced path failover using the library web-based interface. For instructions, refer to the following sections of the HPE StoreEver Tape Libraries Failover User Guide:
• Enabling control path failover, under Configuring failover for the HPE StoreEver ESL G3 Tape Libraries
• Enabling data path failover, under Configuring failover for HPE StoreEver MSL6480 Tape Libraries
When advanced path failover is disabled, the passive control paths to the library will go into an error state (NO_HW) in the ioscan(1M) command output. These stale entries do not affect the function of the library. To clear these errors so the device can be accessed using its DSF:
1. On the HPE-UX host, run rmsf -H on the lunpath hardware paths that are in the NO_HW state. For example:
rmsf -H 0/4/0/0/0/1.0x50014380023560d4.0x1000000000000
2. Run ioscan -kfNH <HBA path>. For example:
ioscan -kfNH 0/4/0/0/0
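With many stale lunpaths, generating the rmsf commands from the ioscan output avoids copy-and-paste errors. The sketch below is an illustration only: it reads ioscan -kfN-style output on stdin and prints the cleanup commands for review rather than executing them (the column positions are assumed from the standard ioscan listing layout):

```shell
#!/bin/sh
# Sketch: from ioscan -kfN-style output on stdin, print an "rmsf -H"
# command for every lunpath whose S/W State column reads NO_HW.
# Commands are printed, not executed, so they can be reviewed first.
nohw_cleanup_commands() {
    awk '$1 == "lunpath" && $5 == "NO_HW" { print "rmsf -H " $3 }'
}
```

On a live host it would be used as: ioscan -kfN | nohw_cleanup_commands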
Hewlett Packard Enterprise recommends only enabling or disabling advanced path failover when the library is not opened by any applications. If advanced path failover is disabled while an application is accessing the library, all of the library's lunpaths will go offline and I/O requests to the library will fail.
For more detailed information related to installing the HPE StoreEver Tape advanced path failover drivers for HPE-UX 11.31 Servers, refer to the HPE StoreEver Tape Libraries Failover User Guide, specifically the section titled Installing and using HPE-UX advanced path failover drivers.
HPE-UX Server best practices
HPE-UX 11.31 can experience poor I/O performance on VxFS file systems due to memory blocking during high system memory usage. The HPE-UX 11.31 kernel subsystems and file I/O data cache can consume up to 90 percent of system memory during normal operation. When a heavy file I/O application, such as data protection and archiving software, starts, memory usage can reach close to 100 percent. Under such conditions, if VxFS attempts to allocate additional memory for inode caching, this can result in memory blocking and subsequent poor file I/O performance. In extreme conditions, this scenario can cause data protection and archiving software to time out during file system reads, which could result in backup job failures.
Poor I/O performance resolution
To avoid backup job failures due to memory blocking, modify the kernel tunable parameter vx_ninode. The vx_ninode parameter determines the number of inodes in the inode table, which VxFS uses for caching. By default, the size of the inode cache is decided (auto-tuned) at boot time by VxFS depending on the amount of physical memory in the machine. When modifying the value of vx_ninode, HPE recommends the following:
Table 4. Tuning vx_ninode
Physical memory or kernel available memory    VxFS inode cache (number of inodes)
1 GB                                          16384
2 GB                                          32768
3 GB                                          65536
> 3 GB                                        131072
To determine the current value of vx_ninode, run the following at the shell prompt:
/usr/sbin/kctune vx_ninode
To set vx_ninode to 32768, run the following command at the shell prompt:
/usr/sbin/kctune vx_ninode=32768
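As a worked illustration of Table 4 (a sketch, not an HPE utility), the following helper maps the host's physical memory in GB to the recommended vx_ninode value, which can then be applied with kctune:

```shell
#!/bin/sh
# Sketch: return the Table 4 recommended vx_ninode value for a given
# amount of physical memory in GB.
recommended_vx_ninode() {
    gb="$1"
    if   [ "$gb" -le 1 ]; then echo 16384
    elif [ "$gb" -le 2 ]; then echo 32768
    elif [ "$gb" -le 3 ]; then echo 65536
    else                       echo 131072
    fi
}
# Example use on a host (illustrative):
#   /usr/sbin/kctune vx_ninode=$(recommended_vx_ninode 2)
```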
Note: The kernel tunable parameters filecache_min and filecache_max control the amount of physical memory that can be used for caching file data during system I/O operations. By default, these parameters are automatically determined by the system to balance memory usage between file system I/O intensive processes and other types of processes. The values of these parameters can be lowered to allow a larger percentage of memory to be used for purposes other than file system I/O caching. Whether or not to modify these parameters depends on the nature of the applications running on the system.
HPE-UX 11.23: Disabling rewind-on-close devices with st_san_safe
Turning on the HPE-UX 11.23 kernel tunable parameter st_san_safe disables tape DSFs that are rewind-on-close. This prevents utilities like mt from rewinding a tape that is in use by another utility.
Some applications or utilities require rewind-on-close DSFs (for example, the frecover utility that comes with HPE-UX). In this case, disabling rewind-on-close devices renders the utility unusable. Most data protection and archiving software, such as HPE Data Protector, can be configured to use SCSI reserve/release, which protects them from rogue rewinds by other utilities. Consider the requirements of your data protection and archiving environment when determining whether or not to enable st_san_safe.
To determine if rewind-on-close devices are currently disabled, enter:
/usr/sbin/kctune st_san_safe
If the value of st_san_safe is 1, rewind-on-close devices are disabled. If the value is 0, rewind-on-close devices are enabled. To disable rewind-on-close devices, enter:
/usr/sbin/kctune st_san_safe=1
Oracle Solaris Server
Solaris 10 Update 11 (SPARC), Solaris 10 Update 11 (x86/x64), Solaris 11.2 (SPARC), Solaris 11.2 (x64)
How to enable the iSCSI Software Initiator
1. For Solaris 10 Update 11 (SPARC) and 11.2 (SPARC), enable the iSCSI services using the command:
svcadm enable network/iscsi/initiator
2. Verify the iSCSI services are running:
svcs -a | grep 'iscsi/initiator'
online 10:10:28 svc:/network/iscsi/initiator:default
3. For earlier versions of Solaris, enable the iSCSI services using the command:
svcadm -v enable iscsi_initiator
svc:/network/iscsi_initiator:default enabled
4. Verify the iSCSI services are running:
svcs -a | grep iscsi_initiator
online 10:10:28 svc:/network/iscsi_initiator:default
The command line tool for configuring the Solaris iSCSI Software Initiator is iscsiadm. Detailed instructions for iscsiadm can be found in the iscsiadm man pages. If using iscsiadm to configure the Solaris iSCSI Software Initiator, run the following command to find the iSCSI initiator node for the Solaris host:
iscsiadm list initiator-node
To discover available iSCSI target devices on an HPE Storage System, type the following commands, where xxxx is the IP address of the HPE Storage System:
iscsiadm add discovery-address xxxx:3260
iscsiadm list discovery-address
Discovery Address: xxxx:3260
iscsiadm modify discovery -t enable
iscsiadm list discovery
List the configured iSCSI target devices using the following command:
iscsiadm list target
Oracle Solaris Server best practices
Troubleshooting with the cfgadm utility
• Getting the status of FC devices using cfgadm:
cfgadm -al
Example output for the above command shows a media changer at LUN 0 for the 100000e0022229fa9 WWN, and tape and disk devices at LUN 0 for other WWNs. The devices are connected, have been configured, and are ready for use.
The cfgadm -al -o show_FCP_dev command can be used to show the devices for all LUNs of each Ap_Id.
• Fixing a device with an "unusable" condition:
If the condition field of a device in the cfgadm output is "unusable", then the device is in a state such that the server cannot use the device. This may have been caused by a hardware issue. In this case, do the following to resolve the issue:
- Resolve the hardware issue so the device is available to the server.
- After the hardware issue has been resolved, use the cfgadm utility to verify device status and to mend the status if necessary.
• Use cfgadm to get device status:
cfgadm -al
- For a device that is "unusable", use cfgadm to unconfigure and then re-configure the device. For example (this is an example only; your device WWN will be different):
cfgadm -c unconfigure c4::100000e0022286ec
cfgadm -f -c configure c4::100000e0022286ec
- Use cfgadm again to verify that the condition of the device is no longer "unusable":
cfgadm -al
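Where several attachment points report an "unusable" condition, the unconfigure/configure pair can be generated from the cfgadm -al listing. The sketch below is illustrative only; it assumes the standard five-column cfgadm -al layout (Ap_Id, Type, Receptacle, Occupant, Condition) and prints the commands for review instead of running them:

```shell
#!/bin/sh
# Sketch: from cfgadm -al-style output on stdin, print the unconfigure/
# configure command pair for every Ap_Id whose Condition is "unusable".
# Commands are printed, not executed, so they can be reviewed first.
unusable_fix_commands() {
    awk '$5 == "unusable" {
        print "cfgadm -c unconfigure " $1
        print "cfgadm -f -c configure " $1
    }'
}
```

On a live host it would be used as: cfgadm -al | unusable_fix_commands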
IBM AIX Server
AIX 6.1 (TL9), AIX 7.1 (TL3)
Installing the HBA device driver
Install the IBM (5729/5735/5273/5758/5759/5773/5774) HBA and restart the server.
1. Ensure that the HBA is recognized. At the shell prompt, type:
lsdev -Cc adapter
There is a line in the output similar to the following:
fcs0 Available 1D-08 FC Adapter
If the adapter is not recognized, check that the correct HBA fileset (driver) is installed:
6228: lslpp -L | grep devices.pci.df1000f7
6239: lslpp -L | grep devices.pci.df1080f9
5716: lslpp -L | grep devices.pci.df1000fa
5759: lslpp -L | grep devices.pci.df1000fd
5773: lslpp -L | grep devices.pciex.df1000fe
5774: lslpp -L | grep devices.pciex.df1000fe
There are lines in the output for lslpp similar to the following for a 6239 HBA:
devices.pci.df1080f9.diag  5.1.0.1  C  F  PCI-X FC Adapter Device
devices.pci.df1080f9.rte   5.1.0.1  C  F  PCI-X FC Adapter Device
2. For information about the HBA, such as the WWN, execute the following command:
lscfg -vl fcs0
The output lists the adapter vital product data, including the WWN (Network Address).
3. To see the version of microcode (firmware) being run by the HBA, use the following command:
lsmcode -c -d <device>
Microcode and other updates can be found at ibm.com/support/fixcentral.
4. After the HBA has successfully logged into the SAN fabric and the necessary zoning is configured, configure the HBA and devices within the fabric. At the prompt, type:
cfgmgr -l <devicename> -v
Note: Running cfgmgr without a -l argument may generate a "devices.fcp.changer" error. This is a result of the cfgmgr device scan receiving a response from the autochanger device, for which AIX does not have a specific driver. Under these conditions, the error message does not indicate a problem and is for information only.
5. Within the command, <devicename> is the name from the output of the lsdev command in step 1, such as fcs0.
6. To ensure all tape device files are available, at the prompt type:
lsdev -HCc tape
7. By default, AIX creates tape devices with a fixed block length. To change the devices to have variable block lengths, at the prompt type:
chdev -l <tapedevice> -a block_size=0
8. Configuration of the tape devices (where tape devices are rmt0, rmt1, and so on) is complete.
Note: HPE LTO tape drives use the IBM host tape driver. When properly configured, a device listing will show the tape device as follows:
For IBM native HBAs: Other FC SCSI Tape Drive
For non-IBM native HBAs: Other SCSI Tape Drive
9. To configure Fast I/O Failure for Fibre Channel devices after link events in the SAN, change the fast fail parameter as in the example below:
chdev -l fscsi<N> -a fc_err_recov=fast_fail
Within the command, the <N> in fscsi<N> is the same number as in the adapter name from the output of the lsdev command in step 1 (for example, fcs0 would be fscsi0).
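Steps 7 and 9 are applied per device; on hosts with many tape drives and adapters, the chdev commands can be generated from lsdev output. This sketch is an illustration (not an IBM tool): it reads lsdev-style output on stdin and prints, without executing, the variable-block-length and fast-fail commands for each rmt and fscsi device it finds:

```shell
#!/bin/sh
# Sketch: from lsdev-style output on stdin, print the chdev commands from
# steps 7 and 9 for every rmtN tape device and fscsiN protocol device.
# Commands are printed for review, not executed.
aix_tape_tuning_commands() {
    awk '$1 ~ /^rmt[0-9]+$/   { print "chdev -l " $1 " -a block_size=0" }
         $1 ~ /^fscsi[0-9]+$/ { print "chdev -l " $1 " -a fc_err_recov=fast_fail" }'
}
```

On a live host it would be fed from lsdev -Cc tape and lsdev -Cc driver output.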
IBM AIX Server best practices
Persistent binding
To prevent device shifting after a host reboot, you can enable the dynamic tracking feature in earlier versions of AIX. AIX 7 and above should have this parameter enabled by default.
To enable dynamic tracking of FC devices, set this attribute to dyntrk=yes, as shown in the example:
chdev -l fscsi<N> -a dyntrk=yes
Within the command, the <N> in fscsi<N> is the same number as in the adapter name from the output of the lsdev command in step 1 (for example, fcs0 would be fscsi0).
Note: For an IBM Virtual I/O Server (VIOS) running AIX logical partitions (LPARs), when using N_Port ID Virtualization (NPIV) with AIX LPARs, it is strongly recommended to upgrade VIOS to version 2.2.3.4 or greater.
Virtual machine support
VM software is used for partitioning, consolidating, and managing computing resources, allowing multiple unmodified operating systems and their applications to run in VMs that share physical resources. Each VM represents a complete system with processors, memory, networking, storage, and BIOS. See table 5 for tape and disk support for virtualization products.
Table 5. VM tape/VTL/NAS support
Column order: StoreEver Direct Attached SCSI, StoreEver Direct Attached SAS, StoreEver FC & FCoE SAN, StoreOnce VTL, StoreOnce iSCSI VTL, StoreOnce Catalyst over Ethernet (CoE), StoreOnce Catalyst over Fibre Channel (CoFC), StoreOnce NAS

Citrix XenServer Host: No, No. Support notes: No support statement for tape at this time.

Citrix XenServer Guest VM: No, Yes, Yes, No, Yes. Support notes: For iSCSI tape devices, the iSCSI Software Initiator must run in the VM operating system. D2D SAN shares must be accessed directly in the VM operating system, not attached through the hypervisor.

HPEVM Host: Yes, No, Yes, Yes, Yes, No, Yes. Support notes: The tape drive/media changer must not be attached to a guest VM while being used by the host.

HPEVM Guest VM: Yes, No, Yes, Yes, Yes, No, Yes. Support notes: The tape drive/media changer must only be attached to a single guest VM at a time.

Hyper-V Host: Yes, Yes, Yes, Yes, Yes, No, Yes.

Hyper-V Guest VM: No, No, No, Yes, Yes, No, Yes. Support notes: For iSCSI tape/media changer devices, the iSCSI Software Initiator must run in the VM operating system. D2D NAS shares must be accessed directly in the VM operating system, not attached through the hypervisor.

VMware ESX Host: Yes, No, No (7), No (7), No, No, No. Support notes: A direct attached SCSI tape drive is multi-target only (no multi-LUN) and must not be attached to a VM while being used by the host. HPE does not test or validate direct attached SCSI tape on VMware ESX/ESXi hosts and does not provide support for tape drives and tape libraries in such configurations (8).

VMware Guest VM: Yes, No, No, Yes, Yes, No (9), Yes. Support notes: A direct attached SCSI tape drive is multi-target only (no multi-LUN) and must only be attached to a single VM at a time. For iSCSI tape/media changer devices, the iSCSI Software Initiator must run in the VM operating system. D2D NAS shares must be accessed directly in the VM operating system, not attached through the hypervisor. HPE does not test or validate direct attached SCSI tape on VMware guest VMs and does not provide support for tape drives and tape libraries in such configurations.

VMware vStorage API for Data Protection: Yes, Yes, Yes, Yes, Yes, No, Yes. Support notes: FC SANs and shared tape devices are limited to a physical backup server.

7. SAN tape devices (FC and iSCSI) are not supported directly by the VMware ESX host. With vStorage API for Data Protection, use a backup server and VM software snapshots to allow FC and iSCSI backups.
8. For ESX 4.1 Server tape support, see the ESX 4.1 Fibre Channel SAN Configuration Guide. For ESX 5.0 Server tape support, see the ESXi 5.0 vSphere Storage Guide. For ESX 5.1 Server tape support, see the ESXi 5.1 vSphere Storage Guide. For ESX 5.5 Server tape support, see the ESXi 5.5 vSphere Storage Guide.
Note: Be sure to do the following:
• Refer to your data protection and archiving software documentation for supported VM backup methods.
• Refer to the VM documentation for supported backup devices.
VMware Server

Note: VMware does not support ESXi SAN-attached tape devices. VMware vStorage APIs for Data Protection (VADP) with an off-host backup server can be used to manage SAN devices.
• VADP offloads backup responsibility from ESXi hosts to a dedicated backup server or servers. This reduces the load on ESXi hosts. VADP provides full-image backup and restore capabilities for all VMs, and file-based backups for Microsoft Windows and Linux VMs.
• VMs can also be set up as backup servers with locally mapped NAS shares or iSCSI backup devices. Refer to your data protection and archiving software documentation for VM support.
• VMs can also be set up for LAN backup the same as a regular client. Refer to your data protection and archiving software documentation for details.
• For recommendations on VMware VM backup and recovery to HPE StoreOnce Backup, go to hpe.com/storage/BURACompatibility, scroll down to Data Agile BURA Solution White Papers, then click the VMware hyperlink across from White Papers - Databases and Virtual Machines to view the associated white papers.
9. Yes when using HPE StoreOnce Recovery Manager Central (RMC) only.
HPE Integrity Virtual Machines
Hewlett Packard Enterprise supports, certifies, and sells HPE Integrity Virtual Machines (HPEVM) virtualization software on HPE Integrity servers.
HPEVM is an application installed on an HPE-UX Server that allows multiple unmodified operating systems (HPE-UX, Windows, and Linux) and their applications to run in VMs that share physical resources.
The HPE Virtual Server Environment (VSE) for HPE Integrity provides an automated infrastructure that can adapt in seconds with mission-critical reliability. HPE VSE allows you to optimize server utilization in real time by creating virtual servers that can automatically grow and shrink based on business priorities and service levels.
Note: The HPE Integrity VM host and VMs do support FC SAN connected tape, Virtual Library Systems (VLS) devices, and HPE StoreOnce backup systems.
• Off-host backups using HPE storage array hardware mirroring or snapshots can be used to shorten backup windows and off-load resources required for backup.
• VMs can also be set up for LAN backup the same as a regular client or media host. Refer to your data protection and archiving software documentation for details.
Microsoft Hyper-V

Note: Hewlett Packard Enterprise does not test or support Hyper-V VMs with SAN or direct attach tape drives. The Hyper-V host or a backup server can be used to manage such devices.
• The Volume Shadow Copy Service (VSS) Hyper-V writer can be used to quiesce Windows VMs and create a snapshot on the Hyper-V host volume. VMs that cannot be quiesced can be placed in the Saved state before snapshot creation. The snapshots are then used for image or file backup of the VMs. If a VM was placed in the Saved state, Hyper-V will return the VM to its original state. Review your data protection and archiving software documentation for details.
• VMs can also be set up for LAN backup the same as a regular client. Refer to your data protection and archiving software documentation for details.
• VMs can also be set up as backup servers with locally mapped NAS shares or iSCSI backup devices. Refer to your data protection and archiving software documentation for VM support.
© Copyright 2015 Hewlett Packard Enterprise Development LP. The information contained herein is subject to change without notice. The only warranties for Hewlett Packard Enterprise products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. Hewlett Packard Enterprise shall not be liable for technical or editorial errors or omissions contained herein.
Citrix is a registered trademark of Citrix Systems, Inc. and/or one or more of its subsidiaries, and may be registered in the United States Patent and Trademark Office and in other countries. Linux is the registered trademark of Linus Torvalds in the U.S. and other countries. McAfee is a trademark or registered trademark of McAfee, Inc. in the United States and other countries. Microsoft, Windows, and Windows Server are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries. Oracle is a registered trademark of Oracle and/or its affiliates. Red Hat is a registered trademark of Red Hat, Inc. in the United States and other countries. SAP HANA is the trademark or registered trademark of SAP SE in Germany and in several other countries. VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions.
4AA5-7983ENW December 2015
Installing backup software and patches
After all components on the SAN are logged in and configured, the system is ready for the installation of any supported backup software. Refer to the installation guide for your particular software package, or contact the vendor for detailed installation procedures and requirements. After installing the backup software, check with the software vendor for the latest updates and patches. If any updates or patches exist for your backup software, install them now.
Learn more at hpe.com/storage/StoreEver
Data protection and archiving software focuses on using an automated Linear Tape-Open (LTO) Ultrium tape library and/or disk-based virtual tape backup solutions. BURA Solutions combine the functionality and management of SANs, data protection and archiving software, and scaling tools to integrate tape and disk storage subsystems in the same SAN environment. Enterprise data protection can be accomplished with different target devices in various configurations, using a variety of transport methods such as the corporate communication network, a server, SCSI/SAS, iSCSI, Fibre Channel over Ethernet (FCoE), or an FC infrastructure.
BURA Solutions typically use a SAN that provides dedicated bandwidth independent of the LAN. This independence allows single or multiple backup or restore jobs to run without the network traffic caused by data protection environments. Management of the data protection and archiving software occurs over the LAN, while the data is sent over the SAN. This achieves faster data transfer speeds and reduces Ethernet traffic. Jobs and devices can be managed and viewed from either the primary server or any server or client connected within BURA Solutions that has supported data protection and archiving software installed. All servers within the BURA Solutions server group can display the same devices.
HPE Data Agile Partner Program
Hewlett Packard Enterprise is dedicated to providing a rich portfolio of Backup, Recovery, and Archive (BURA) Solutions for our customers.
The HPE Data Agile Partner Program offers partners a programmatic framework to self-certify the interoperability of their applications across the entire HPE Storage portfolio of BURA products, including HPE StoreOnce Backup, HPE StoreAll Storage, and HPE StoreEver Tape.
The Data Agile Partner Program enables partners to learn about the HPE BURA portfolio, test and certify their applications in a dedicated Hewlett Packard Enterprise lab environment, and take advantage of unique marketing opportunities. Program members also have access to specialized training and technical assistance.
Provide powerful solutions to your customers and expand market opportunities through a partnership with HPE Storage. Learn more at hpe.com/storage/DataAgile
BURA supported components
Whether you're looking to scale from entry-level workgroups to enterprise-level data centers, the HPE Data Agile BURA Compatibility Matrix has the information you need to design data protection solutions with HPE StoreOnce Backup, HPE StoreEver Tape, and HPE StoreAll Storage. Refer to table 1 for white papers and design guides documenting fully certified data protection and archive solutions built with HPE storage products and market-leading Independent Software Vendor (ISV) applications. Learn more at hpe.com/storage/BURACompatibility
Table 1. HPE Data Agile BURA Solution white papers

Cross platform design guides:
Design Guide for Backup and Archive
Example Configuration Guide for Backup and Archive
Tiered Data Retention for HPE Storage

White papers - Product:
HPE StoreOnce Backup
HPE StoreAll Storage
HPE StoreEver Tape
HPE StoreServ 3PAR File Persona

White papers - Data Protection and Archive Vendors:
AGFA Healthcare IMPAX
Citrix® ShareFile
CommVault Simpana
GE Centricity
Genetec Security Center
HPE Consolidated Archive
HPE Control Point
HPE Data Protector
EMC NetWorker
IBM TSM
iTernity iCAS
McAfee® VirusScan
Milestone XProtect
QStar Archive Manager
Veritas Enterprise Vault
Veritas NetBackup
Veritas Backup Exec
Symantec Protection Engine
Veeam Software

White papers - Databases and Virtual Machines:
Microsoft® Exchange
Microsoft Hyper-V
Microsoft SQL
Oracle
SAP HANA®
VMware®
Supported topologies
The following topologies are supported in a FC SAN, with short-wave SFPs being the only FC connection supported in HPE StoreEver and HPE StoreOnce devices. Any requirement for an extended SAN requires a SAN switch or router to which the StoreEver or StoreOnce devices can attach. Refer to the extended SAN configuration in the HPE Backup and Archive Example Configuration Guide for more details.
Point-to-point
Point-to-point, or Direct-Attach Fibre (DAF), connections are direct FC connections made between two nodes, such as a server and an attached tape library. This configuration requires no switch to implement, and the default for a DAF link is a private loop.¹ The storage devices are dedicated to a server in a point-to-point configuration.
Switched fabric
A switched fabric topology is a network topology in which network nodes connect with each other via one or more network switches. In the FC switched fabric (FC-SW) topology, devices are connected to each other through one or more FC switches, and visibility among devices in a fabric is typically controlled with zoning.
FCoE is a computer network technology that encapsulates FC frames over Ethernet networks. This allows FC to use 10GbE networks (or higher speeds) while preserving the FC protocol; FCoE maps FC directly over Ethernet while being independent of the Ethernet forwarding scheme. HPE Virtual Connect FlexFabric switches are used as an option in C-series blade enclosures to support FCoE connectivity to the switched fabric. For standalone servers, converged network adapters (CNA) are used to connect to the fabric through FCoE fabric switches. A CNA, also called a converged network interface controller (C-NIC), is a computer input/output device that combines the functionality of a host bus adapter (HBA) with a network interface controller (NIC); in other words, it "converges" access to a SAN and a general-purpose computer network.
Installation checklist
Prior to installing data protection and archiving software, review the questions below to ensure that all components are configured properly and logged into the SAN.
• For any Windows 2008 servers, has the Windows feature Removable Storage Manager been removed or disabled?²
• Are all of the following hardware components at the minimum supported firmware revisions specified in the HPE Data Agile BURA Compatibility Matrix: servers, HBAs, FC and/or FCoE switches, tape drives, library robots, and disk-based virtual tape systems?
• Are all recommended operating system patches, service packs, updates, Service Pack for ProLiant, Quality Packs (QPK), or Hardware Enablement (HWE) bundles specified in the HPE Data Agile BURA Compatibility Matrix installed on each host?
• Is the minimum supported HBA driver specified in the HPE Data Agile BURA Compatibility Matrix installed on each host?
• If you are using multi-path configurations with your HPE StoreEver Tape Libraries, have you reviewed the HPE StoreEver Failover overview (including advanced path failover) technical white paper?
• Are the StoreEver Tape Library, StoreOnce Backup System, and/or StoreAll Storage System online?
• If multiple FC switches are cascaded or meshed, are all inter-switch link (ISL) ports correctly logged in?
• Are all of the host HBAs correctly logged into the FC and/or FCoE switch?
• Are all tape library robotic devices and disk-based virtual tape systems zoned, configured, and presented to each host from the FC and/or FCoE switch?
• If using zoning on the FC and/or FCoE switch, has the zone been added to the active switch configuration?
• Do you have the latest supported version of HPE Command View for Tape Libraries Software installed to manage, monitor, and configure all of your HPE StoreEver Tape Libraries?
¹ 16 Gb FC HBAs do not currently support private loop; Brocade FC HBAs only recently began supporting private loop.
² The Windows Removable Storage Manager (RSM) service will claim tape drives and disrupt any installed backup applications. Removable Storage Manager is no longer available as of Windows 7 and Windows 2008 R2.
• Has connectivity been verified using HPE Library and Tape Tools software or operating-system-specific tools (Linux: sg3_utils; HP-UX: System Administration Manager [SAM]; Solaris: cfgadm; AIX: System Management Interface Tool [SMIT]; etc.)?
• Is the minimum patch/service pack level installed for the data protection and archiving software?
• If you're having issues with the FC connections, they might need to be cleaned. Avoid touching any surface that is being used for cleaning. Recommendations for cleaning FC cables and small form-factor pluggable (SFP) connections:
– Air dusters are used to blow loose particles from the optical fiber connector end-face or to dry up solvent (isopropyl alcohol) residue after a wet cleaning. Not all air dusters are the same; an optic-grade duster should be used.
– Use lint-free wipes to gently wipe the ferrule and the end-face surface of the connector. Make sure the pad makes full contact with the end-face surface.
– Use lint-free wipes and isopropyl alcohol. Gently wipe the ferrule and the end-face surface of the connector with an alcohol pad, making sure the pad makes full contact with the end-face surface. Then wipe the end-face surface on a dry, lint-free wipe.
– In-adapter ferrule cleaners, or in-situ cleaning: this semi-automated fiber optic cleaning method is specially designed for fiber optic connectors and SFP ends and can remove contaminants that forced air will not. Used improperly, an in-situ device can make a tape drive FC port worse. Both 1.25 mm and 2.5 mm versions are available.
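Several of the checklist items above ask whether device connectivity has been verified with operating-system tools. As a sketch, the commands involved look like the following; device names such as /dev/sg0 are illustrative and will differ on your host:

```shell
# Linux: list SCSI devices and their generic (sg) handles
lsscsi -g                  # lsscsi package; shows tape drives and media changers
sg_map -st                 # sg3_utils; map sg devices to tape (st) devices
sg_inq /dev/sg0            # sg3_utils; SCSI INQUIRY shows vendor, product, firmware

# HP-UX: scan the I/O tree and list tape-class devices
ioscan -fnC tape

# Solaris: verify attachment points are connected and configured
cfgadm -al

# AIX: list configured tape devices
lsdev -Cc tape
```

If a zoned tape drive or library robot does not appear in this output, recheck the switch zoning and HBA logins before installing any backup software.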
HPE StoreOnce Catalyst
With HPE StoreOnce Catalyst, movement of deduplicated data across the enterprise is even easier. There's no need to deduplicate and rehydrate data at each step: data can be replicated from sites to a central data center or a disaster recovery site in deduplicated form, reducing network bandwidth requirements. All backup and replication jobs may be seamlessly managed by the backup application at the central data center.
Key features of StoreOnce Catalyst:
• Catalyst over Fibre Channel provides all the ISV control and source-side deduplication benefits of current StoreOnce Catalyst, but via a Fibre Channel fabric.
• Federated Catalyst allows Catalyst stores to span nodes, simplifying backup management and optimizing available storage in large environments.
• Catalyst stores allow backup applications to utilize low-bandwidth deduplication (server-side deduplication on³) or high-bandwidth deduplication (server-side deduplication off⁴).
HPE StoreOnce Catalyst delivers a single, integrated, enterprise-wide deduplication algorithm. It allows the seamless movement of deduplicated data across the enterprise to other StoreOnce Catalyst systems.
For more detailed information regarding which Catalyst features are supported by each backup software application, reference the Catalyst Apps Support section under StoreOnce Backup Systems in the HPE Data Agile BURA Compatibility Matrix.
³ By specifying the Primary Transfer Policy as Low-Bandwidth on the Catalyst store defined on the StoreOnce appliance, the backup application's integrated Catalyst plug-in will perform deduplication at the backup server before backup data is sent to the StoreOnce appliance.
⁴ By specifying the Primary Transfer Policy as High-Bandwidth on the Catalyst store defined on the StoreOnce appliance, the backup server does not deduplicate the data; data deduplication occurs at the StoreOnce appliance.
Preparing SAN switches and hosts for failover with HPE StoreEver Tape Libraries
Hewlett Packard Enterprise provides High Availability Failover features for HPE StoreEver ESL G3 Tape Libraries and the HPE StoreEver MSL2024, MSL4048, MSL6480, MSL8048, and MSL8096 Tape Libraries with HPE StoreEver LTO-5 Ultrium and later generation FC tape drives. Failover features are not supported on the HPE StoreEver EML and ESL E-Series Tape Libraries.
Basic failover
• Supported on HPE StoreEver LTO-5 and LTO-6 Ultrium FC tape drives, as data path failover requires a dual-ported drive.
• Supported by a combination of tape drive and library firmware features to create a new FC path to a drive or library if the original path is lost.
• Most applications recognize the new path, and some applications will automatically retry commands after the original path is lost. Some applications might require user intervention to begin using the new path.
• Available for the HPE StoreEver MSL2024, MSL4048, MSL6480, MSL8048, MSL8096, and StoreEver ESL G3 Tape Libraries.
Advanced failover
• Supported only on HPE StoreEver LTO-6 Ultrium FC tape drives.
• Requires host driver support, in addition to tape drive and library firmware features, to manage multiple paths across multiple SANs, present a single drive or library path to applications, and automatically transfer commands to the new path if the original path is lost.
• The transfer to the failover path is invisible to most applications, avoiding the need for user intervention.
• Available for the HPE StoreEver ESL G3 and StoreEver MSL6480 Tape Libraries.
• Not available for the StoreEver 1/8 G2 Tape Autoloader or the StoreEver MSL2024, MSL4048, MSL8048, or MSL8096 Tape Libraries.
LTO-7 failover
• Supported on LTO-7 and later generation FC tape drives.
• Requires host driver support, in addition to tape drive and library firmware features, to manage multiple paths across multiple SANs, present a single drive or library path to applications, and automatically transfer commands to the new path if the original path is lost.
• The transfer to the failover path is invisible to most applications, avoiding the need for user intervention.
• Available only for ESL G3 Tape Libraries; MSL6480 support is expected in 2016.
Prerequisites for using basic data and control path failover
• The library drive ports must be attached to a FC SAN that supports N_Port ID Virtualization (NPIV), and NPIV must be enabled (most recent Brocade switch firmware versions have NPIV enabled by default). Refer to the vendor documentation for your switch regarding commands to verify whether NPIV is enabled.
– To enable or verify NPIV on a Brocade switch running Fabric OS version 6 or newer using the Brocade Web Tools GUI, click on the Port Admin tab and select the FC or FCoE port you want to configure. From the context tabs at the top of the Web Tools GUI, select View, then choose Advanced. For the selected port, under the General tab you should see all of the details for the port, including NPIV Enabled with a value of true. There should also be an NPIV tab with a drop-down list of Enable, Disable, and Max Login; select Enable if NPIV was not already enabled.
– While all current Cisco switches support NPIV, most do not have NPIV enabled by default, and the Cisco MDS 9148 may disable NPIV when power cycled. To enable NPIV on a Cisco switch, use Cisco Device Manager (Admin > Feature Control) or use the Cisco CLI commands to show NPIV status and enable NPIV.
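As a sketch, the NPIV checks above can also be performed from the switch CLI. The port number is illustrative; verify the exact syntax against the command reference for your switch's firmware version:

```shell
# Brocade Fabric OS: per-port settings, including the "NPIV capability" flag
portcfgshow 27

# Cisco NX-OS / MDS: NPIV is a switch-wide feature
show npiv status              # reports whether NPIV is currently enabled
configure terminal
feature npiv                  # enable NPIV
end
copy running-config startup-config   # persist so a power cycle does not disable it
```

Saving the running configuration is what protects against the MDS 9148 power-cycle behavior noted above.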
• StoreEver Ultrium tape drives with an 8 Gb connection need the fill word set to arb(ff); with 4 Gb connections, set the fill word to idle. Refer to the vendor documentation for your switch regarding commands to set the fill word for a single port. For a Brocade switch running Fabric OS version 6 or newer, the following command can be used to verify the configuration, including the fill word, for port 27:
portcfgshow 27
To set the fill word for port 27 to arb(ff) using numeric mode 3,⁵ run the following command:
portcfgfillword 27 3
To set the fill word for port 27 to idle using the numeric mode notation, run the following command:
portcfgfillword 27 0
Refer to the section titled Hardware-specific requirements for basic failover in the HPE StoreEver Tape Libraries Failover User Guide for Brocade and Cisco firmware recommendations.
• The drive port FC topology must be in Fabric mode, and the switch side must be set to F-port or Auto Sense.
• The host FC port must have a physical path to both the first port and the secondary (passive) port on the FC drive.
• For basic data path failover with port zoning, the host FC port and both FC ports on the drive need to be within the same zone for failover to work.
• For basic data path failover with World Wide Port Name (WWPN) zoning, the host FC WWPN and a single FC port on the drive need to be in the zone.
• For basic control path failover with port zoning, the host FC ports and the FC ports on both the active and secondary drives chosen for basic control path failover need to be in the same zone.
• For basic control path failover with WWPN zoning, the host FC WWPN and the basic control path failover WWPN assigned to the library must be in the same zone. The library WWPN is not the same as the WWPN of the drive that is hosting the library.
• Hosts connecting to the library may need to be rebooted if the operating system does not support dynamic device detection.
• Applications on hosts may need to be reconfigured to recognize the new library World Wide Name (WWN).
Prerequisites for using LTO-7 failover or advanced data and control path failover
• For LTO-7 failover, when using two FC host bus adapters in a server, both FC HBAs must be from the same manufacturer. The LTO-7 failover driver does not work correctly if the HBAs are different.
• For advanced data path failover and LTO-7 failover, the host must have a physical path to both the first port and the secondary port on the FC drive. For full failover capabilities, the two drive FC ports should be connected to different switches, and the host FC ports should also be connected to the same two switches.
• All drive ports must be zoned in the respective switches.
For detailed information on using failover with HPE StoreEver Tape, refer to the HPE StoreEver Tape Libraries Failover User Guide.
⁵ Numeric mode 3 attempts hardware arbff-arbff (mode 1) first. If the attempt fails to go into active state, the command executes software idle-arb (mode 2). Mode 3 is preferable to modes 1 and 2, as it covers more cases.
Native backup commands
Native backup commands (see table 2) are limited in their ability to handle complicated backups and restores in multi-host SANs; they are not guaranteed to provide robust error handling or to increase performance throughput. Use of these commands in a user-developed script is not recommended with HPE StoreEver Tape Libraries or with HPE StoreOnce and HPE StoreAll disk-based backup solutions in a shared storage environment.
Caution: Native backup commands do not support SCSI reserve/release; therefore, using backup commands or scripts during backup or restore operations could result in data loss in an environment where the devices used for backups are shared.
Table 2 Supported native commands
SUPPORTED UTILITIES HP-UX SOLARIS AIX LINUX WINDOWS
Tape drive commands
tar Yes Yes Yes Yes No
dd (dump) Yes Yes Yes Yes No
pax Yes Yes Yes Yes No
mt Yes Yes Yes Yes No
make_tape_recovery Yes No No No No
Library and auto-changer commands
mc Yes No No No No
mtx No No No Yes No
Linear Tape File System
The Linear Tape File System (LTFS) makes tape self-describing, file-based, and easy to use, while allowing users to use standard file operations to access, manage, and share files on tape with an interface that behaves like a hard disk. In addition, LTFS provides the ability to share data across platforms, as you would with a USB drive or memory stick. LTFS is currently supported on Windows, Mac, and Linux. HPE StoreOpen Standalone and HPE StoreOpen Automation are a set of utilities that provide easy installation, configuration, and management of a tape drive or library for use with LTFS. To use HPE StoreOpen, simply connect your tape drive or tape library to a supported host, following the information noted in the respective sections within this guide, prior to the installation of the HPE StoreOpen software. Information and download links for native LTFS drivers, source code, HPE StoreOpen Standalone, and HPE StoreOpen Automation can be found at hpe.com/storage/LTFS.
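As an illustration of the disk-like access LTFS provides, a standalone tape drive on Linux can be mounted with the LTFS utilities and then used with ordinary file commands. The device name and mount point below are examples only; check your LTFS package's documentation for the exact options it supports:

```shell
# One-time format of the loaded cartridge for LTFS (destroys existing data)
mkltfs -d /dev/sg3

# Mount the tape as a file system
mkdir -p /mnt/ltfs
ltfs -o devname=/dev/sg3 /mnt/ltfs

# Standard file operations now work against the tape
cp backup.tar /mnt/ltfs/
ls -l /mnt/ltfs

# Always unmount before ejecting so the index is written back to tape
umount /mnt/ltfs
```

The unmount step matters: LTFS writes its index to the cartridge at unmount time, which is what keeps the tape self-describing.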
Software utilities that may disrupt solution connectivity
Software utilities common to SAN environments can interfere with backup and restore operations. These utilities include system management agents, monitoring software, and tape drive and system configuration utilities. A list of known software utilities and the operating systems on which they are found is shown in table 3.
Caution: Use of these software utilities during backup or restore operations could result in data loss.
Table 3 HBA and software utilities

HBA configuration utilities
– Windows: Emulex OneCommand Manager (OCM)/HBAnyware; QLogic QConvergeConsole (QCC); QLogic Host Connectivity Manager (HCM); Broadcom Advanced Control Suite 3 (BACS3)
– Linux: Emulex OCM/HBAnyware; QCC; QLogic HCM; BACS3
– Solaris: Emulex OCM/HBAnyware; QCC

Other software utilities
– Windows: HPE Library and Tape Tools utility; HPE Systems Insight Manager (SIM) management agents; Windows Server® backup; Removable Storage Manager (RSM)⁶
– HP-UX: HPE Library and Tape Tools utility; System Administration Manager (SAM) on HP-UX 11.23; System Management Homepage (SMH) on HP-UX 11.31
– Linux: HPE Library and Tape Tools utility; SCSI Generic (SG) commands
– Solaris: HPE Library and Tape Tools utility
– AIX: System Management Interface Tool (SMIT)
FC/FCoE switch zoning recommendations
Due to the complexities of multi-hosting tape devices on SANs, Hewlett Packard Enterprise strongly recommends using switch zoning tools to keep the backup, restore, and archive environment simple and less susceptible to the effects of "chatty," changing, or problematic SANs. Zoning provides a way for servers, disk arrays, and tape controllers to see only the hosts and targets they need to see and use.
The benefits of zoning include, but are not limited to:
• The potential to greatly reduce target and logical unit number (LUN) shifting
• Reducing stress on backup devices by polling agents
• Reducing the time it takes to debug and resolve anomalies in the backup, restore, and archive environment
• Reducing the potential for conflict with untested third-party products
⁶ Removable Storage Manager is no longer available as of Windows 7 and Windows 2008 R2.
Zoning may not always be required for configurations that are small or simple, i.e., a single switch or a single inter-switch link (ISL). Zoning can be helpful in larger SANs for simplifying device discovery and reducing chatter between devices. Hewlett Packard Enterprise recommends the following for determining how and when to use zoning:
• Use zoning by HBA port. Zoning by HBA port is implemented by creating a specific zone for each server or host by World Wide Port Name (WWPN) and adding only those storage elements to be utilized by that host. Zoning by HBA port prevents a server from detecting any other devices or servers on the SAN, and it simplifies the device discovery process.
• Disk and tape on the same HBAs is supported. For larger SAN environments, it is recommended to also add storage-centric zones, with disk and backup targets separated. This type of zoning is done by adding overlapping zones; see figure 1 and figure 2 below.
• FC zoning can be implemented using physical switch port numbers, WWN IDs, or user-defined switch aliases. It is important to note that physical ports and aliases can change due to recabling or switch configuration restores, but WWN IDs do not. Hewlett Packard Enterprise recommends zoning using WWN IDs.
The figures below represent example configurations but are not exhaustive.
Figure 1 Storage-centric zoning, same HBA port (overlapping zones)
Figure 2 Storage-centric zoning, redundant paths; also applies to dual-port HBAs and tape drives
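On a Brocade switch, zoning by HBA port with WWPN members as recommended above might be configured along these lines. The aliases, WWPNs, and configuration name are illustrative, and this sketch assumes an existing active configuration named prod_cfg (use cfgcreate instead of cfgadd when starting from scratch):

```shell
# Create aliases for the host HBA port and the tape drive port (example WWPNs)
alicreate "host1_hba0", "10:00:00:90:fa:12:34:56"
alicreate "tapelib_drv1", "50:01:10:a0:00:ab:cd:ef"

# Create a zone containing only this host port and the target it needs
zonecreate "z_host1_tape", "host1_hba0; tapelib_drv1"

# Add the zone to the configuration, enable it, and save
cfgadd "prod_cfg", "z_host1_tape"
cfgenable "prod_cfg"
cfgsave
```

Because the zone members are WWPN-based aliases rather than physical port numbers, recabling the host or the drive does not invalidate the zone, which is the point of the WWN ID recommendation above.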
Configuration and operating system details
Windows Server
Windows 2008, Windows 2008 R2, Windows 2012, and Windows 2012 R2
Installing the HBA device driver
All HPE ProLiant server software, firmware, and drivers for Windows servers can be updated using the latest HPE Service Pack for ProLiant (SPP) from the HPE support website:
1. From an Internet browser, go to hpe.com.
2. Click on Support.
3. Under Product Support, click HPE Servers, Storage and Networking.
4. In the Enter product name or number box, enter Service Pack for ProLiant.
5. Click Get drivers, software & firmware.
6. Select the Windows Server operating system version that is installed on the ProLiant system that will be updated.
7. Expand Application (Entitlement Required) - System Management, then select the HPE Service Pack for ProLiant (American, International) hyperlink.
8. Below the details for the software, you should notice several tabs. Select the Release Notes tab, then scroll to verify whether there are any Upgrade Requirements or to view additional information.
9. Select the Installation Instructions tab to verify how to install the SPP. Be sure to copy or save the installation instructions.
Note: A hyperlink to the HPE Service Pack for ProLiant Release Notes is provided within the Release Notes tab. The HPE Service Pack for ProLiant Release Notes provide detailed instructions regarding the SPP, including a summary of changes, compatibility details for migrating from an older version of the SPP, supported operating systems, requirements, component prerequisites, deployment instructions, and known limitations.
10. Click on the Obtain software hyperlink above the various tabs to download the Service Pack for ProLiant to your server.
Note: To download the HPE Service Pack for ProLiant, you must have:
1. An HPE Passport account (a sign-in link is provided)
2. Either a warranty, HPE Care Pack, or support agreement linked to your HPE Support Center profile
Click on the various links that are provided for more information on how warranties, HPE Care Packs, and support agreements enable access to select downloads or site functions.
11. Booting your Windows Server to the SPP (offline mode) will allow you to upgrade firmware for any or all components that are flagged as requiring an upgrade, while booting to the operating system and then running the SPP (online mode) will allow you to install any or all Hewlett Packard Enterprise related drivers and software packages.
Note: Please refer to the HPE Service Pack for ProLiant Release Notes, referenced above, if any issues are encountered when installing the SPP; specifically, review the sections Deployment Instructions and Component Changes.
12. A reboot might be required following the SPP installation.
To manually install the latest HPE-supported Brocade, Emulex, or QLogic driver kit from the HPE support website:
1. From an Internet browser, go to hpe.com.
2. Click on Support.
3. Under Product Support, click HPE Servers, Storage and Networking.
4. In the Enter product name or number box, enter your HBA model (for example, AP770A, AJ763A, AJ764A, CN1000Q, SN1000Q, CN1100E, SN1100E, or NC553m) and click Go.
5. Click Get drivers, software & firmware.
6. Select the Windows Server operating system version that is installed on the ProLiant system in which the HBA is installed.
7. Expand Driver - Storage Fibre Channel and click on the appropriate driver hyperlink (if more than one version of the driver is listed, verify the latest supported version listed in the latest HPE Data Agile BURA Compatibility Matrix).
8. Below the details for the driver, you should notice several tabs. Select the Release Notes tab, then scroll to verify whether there are any Upgrade Requirements or Supported Devices and Features, or to view additional information.
9. Select the Installation Instructions tab to verify how to install the driver. Be sure to copy or save the installation instructions.
10. Click on the Download tab to copy the file to your server.
11. A reboot might be required following the driver installation.
Installing and configuring Microsoft iSCSI Initiator
Microsoft iSCSI Initiator is installed natively on Windows Server 2008, Windows Server 2008 R2, Windows Server 2012, and Windows Server 2012 R2. On these operating systems, no installation steps are required.
To connect to an iSCSI target device by using Quick Connect:
1. Click Start, type iSCSI in Start Search, and then, under Programs, click on iSCSI Initiator.
2. On the User Account Control page, click Continue.
3. If this is the first time that you have launched Microsoft iSCSI Initiator, you receive a prompt that says the Microsoft iSCSI service is not running; you must start the service for Microsoft iSCSI Initiator to run correctly. Click on Yes to start the service. The Microsoft iSCSI Initiator Properties dialog box opens, and the Targets tab is displayed.
4. On the Targets tab, type the name or the IP address of the target device in the Quick Connect text box, and then click Quick Connect. The Quick Connect dialog box is displayed.
5. If multiple targets are available at the target portal that is specified, a list is displayed. Click the desired target, and then click Connect.
6. Click Done.
To connect to an iSCSI target by using advanced settings:
1. Click Start, type iSCSI in Start Search, and then, under Programs, click on iSCSI Initiator.
2. On the User Account Control page, click Continue.
3. If this is the first time that you have launched Microsoft iSCSI Initiator, you receive a prompt that says the Microsoft iSCSI service is not running; you must start the service for Microsoft iSCSI Initiator to run correctly. Click on Yes to start the service. The Microsoft iSCSI Initiator Properties dialog box opens, and the Targets tab is displayed.
4. Click the Discovery tab.
5. To add the target portal, click Discover Portal, and then, in the Discover Portal dialog box, type the IP address or name of the target portal to connect to. If desired, you can also type an alternate TCP port to be used for the connection.
6. Click OK.
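On Windows Server 2012 and later, the same connection can also be scripted with the built-in iSCSI PowerShell cmdlets. This is a minimal sketch; the portal address and target IQN shown are illustrative values, not real devices:

```shell
# Ensure the Microsoft iSCSI Initiator service is running and starts automatically
Set-Service -Name MSiSCSI -StartupType Automatic
Start-Service -Name MSiSCSI

# Register the target portal (example address), then list discovered targets
New-IscsiTargetPortal -TargetPortalAddress "192.168.1.50"
Get-IscsiTarget

# Connect to a discovered target by its IQN; -IsPersistent reconnects after reboot
Connect-IscsiTarget -NodeAddress "iqn.1986-03.com.hp:storage.target01" -IsPersistent $true
```

Scripting the connection this way is useful when many hosts must be configured identically, since the GUI steps above must otherwise be repeated per server.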
Installing the HPE StoreEver Tape drivers
Both the HPE tape and HPE changer drivers for Windows must be installed before the advanced path failover drivers are installed. The tape and changer driver bundle can be downloaded and then installed as follows:
1. Go to hpe.com/storage/tapecompatibility.
2. Under Tape tools, select HPE StoreEver Tape Drivers.
3. A webpage will open with RECOMMENDED HPE StoreEver Tape Drivers for Windows displayed.
4. Below the details for the driver, you should notice several tabs. Select the Release Notes tab, then scroll to verify whether there are any Upgrade Requirements or to view additional information.
5. Select the Installation Instructions tab to verify how to install the driver. Be sure to copy or save the installation instructions.
6. Click on the Download tab, then save the file.
7. Follow the installation instructions from step 5 to install the HPE tape and HPE changer drivers.
8. After installation of the tape and changer drivers, use Windows Device Manager to confirm that all of the configured paths are accessible to the operating system. If the expected number of paths are not available, check the host and SAN configuration. After all of the expected paths are available to the host, the advanced path failover drivers can be installed.
Installing the HPE StoreEver Tape advanced path failover drivers: Windows (2008 R2, 2012, and 2012 R2)
1. Go to hpe.com/support/storage.
2. Select Tape Storage.
3. Click Enterprise Class Tape Libraries (for ESL G3) or Tape Libraries (for MSL6480).
4. Click HPE StoreEver ESL G3 Tape Libraries or HPE StoreEver MSL6480 Tape Library.
5. With the Download options tab selected, click Get drivers, software & firmware.
6. For the ESL G3, select your product. For MSL6480, skip to the next step.
7. Under Operating systems, select OS Independent.
8. Click Driver - Storage Tape.
9. Click Obtain software for the HPE StoreEver High Availability Failover Driver for your Windows operating system.
10. Click Select to continue. An HPE Passport account is required.
11. After logging in using your HPE Passport, complete the required fields, then read and accept the software license agreement. Click Next.
12. On the following page, select the Download tab for the HPE StoreEver High Availability Failover Driver for the version of Windows that is installed on your server.
13. If you saved the file, double-click on the file to launch the installer for the Tape Upper Bus Storage Filter driver.
14. Restart when requested.
15. After the system restarts, the installer will continue installing the Tape Multi-Path Intermediate Class driver. The installation process creates the directory C:\Program Files\Hewlett-Packard\Failover.
For more detailed information related to installing the HPE StoreEver Tape advanced path failover drivers for Windows or the HPE StoreEver Tape drivers for Windows, refer to the HPE StoreEver Tape Libraries Failover User Guide, specifically the section titled Installing and using Windows advanced path failover drivers.
Windows Server best practices
Persistent binding
Target and LUN shifting can occur with Windows operating systems when disk or tape devices are connected or disconnected, a device is busy during discovery, or a device failure occurs, causing that device to no longer be seen on the SAN. Hewlett Packard Enterprise strongly recommends using zoning by HBA port to resolve target and LUN shifting. Refer to the earlier section, FC/FCoE switch zoning recommendations, for more on zoning by HBA port.
The method by which the Windows operating system enumerates devices is the cause of most target and LUN shifting. Windows enumerates devices as they are discovered during a scan sequence; they are enumerated with device handles such as TAPE0, TAPE1, and so on. The Windows device scan sequence goes in the order of bus, target, and LUN:
• Bus is the HBA PCI slot
• Target is representative of a WWN
• LUN is representative of a device behind the WWN
The order of discovery is:
• The lowest bus (bus 0)
• Target 0 on bus 0
• The LUNs on target 0
• Target 1, and so on, until all targets connected to that HBA are discovered
The process then continues on to the next HBA and its targets and LUNs. A common cause of device shifting is a busy tape device. A busy tape device cannot respond in time for Windows to enumerate it; the device is essentially skipped in the enumeration sequence, thus shifting all other device numbers.
Note: The Emulex OneCommand Manager Application Kit and the QLogic QConvergeConsole Utility both have proprietary persistent binding options. Review the appropriate vendor documentation for details.
Data protection and archiving software can also communicate with a tape device by using the Windows device name. As noted, the device name may shift and cause a problem for the data protection and archiving software. Some data protection and archiving software monitors for this condition and will adjust accordingly. Other data protection and archiving software must wait for a server reboot and subsequently scan for devices, or must be manually reconfigured to match the current device list. If your data protection and archiving software requires persistent device mapping, use the software's device configuration wizard to ensure proper configuration.
Red Hat and SUSE Linux Server
RHEL 5 Update 11 (AS/ES/WS), RHEL 6 Update 6 (AS/ES/WS), RHEL 7 (AS/ES/WS), SLES 11 SP3 (x86/x64), and SLES 12
Note: Hewlett Packard Enterprise recommends installing the kernel development option (source code) when installing any Linux server. Availability of the source code ensures the ability to install additional device support software that will be compiled into the kernel.
Installing the HBA drivers
All HPE ProLiant server software, firmware, and drivers can be updated using the latest SPP from the HPE support website:
1. From an Internet browser, go to hpe.com.
2. Click on Support.
3. Under Product Support, click HPE Servers, Storage and Networking.
4. In the Enter product name or number box, enter Service Pack for ProLiant.
5. Click Get drivers, software & firmware.
6. Select the Linux Server operating system version that is installed on the ProLiant system that will be updated.
7. Under Application (Entitlement Required) - System Management, select the HPE Service Pack for ProLiant (American, International) hyperlink.
8. Below the details for the software, you should notice several tabs. Select the Release Notes tab, then scroll to verify whether there are any Upgrade Requirements or to view additional information.
9. Select the Installation Instructions tab to verify how to install the SPP. Be sure to copy or save the installation instructions.
Note: A hyperlink to the HPE Service Pack for ProLiant Release Notes is provided within the Release Notes tab. The HPE Service Pack for ProLiant Release Notes provide detailed instructions regarding the SPP, including a summary of changes, compatibility details for migrating from an older version of the SPP, supported operating systems, requirements, component prerequisites, deployment options, and known limitations.
10. Click on the Obtain software hyperlink above the various tabs to download the Service Pack for ProLiant to your server.
Note: To download the HPE Service Pack for ProLiant, you must have:
1. An HPE Passport account (a sign-in link is provided)
2. Either a warranty, HPE Care Pack, or support agreement linked to your HPE Support Center profile
Click on the various links that are provided for more information on how warranties, HPE Care Packs, and support agreements enable access to select downloads or site functions.
Technical white paper Page 18
11 Booting your server to the SPP (offline mode) allows you to upgrade firmware for any or all components that are flagged as requiring an upgrade. Booting to the OS and then running the SPP (online mode) allows you to install any or all Hewlett Packard Enterprise related drivers and software packages
Note: Refer to the HPE Service Pack for ProLiant Release Notes, referenced above, if any issues are encountered when installing the SPP. Specifically, review the sections Deployment Instructions and Components Changes.
12 A reboot might be required following the SPP installation
To manually install the latest HPE-supported Brocade, Emulex, or QLogic driver kit from the HPE support website:
1 From an Internet browser, go to hpe.com
2 Click on Support
3 Under Product Support, click HPE Servers, Storage and Networking
4 In the Enter product name or number box, enter your HBA model (for example, AP770A, AJ763A, AJ764A, CN1000Q, SN1000Q, CN1100E, SN1100E, or NC553m) and click Go
5 Click Get drivers, software & firmware
6 Select the Linux Server operating system version that is installed on the ProLiant system in which the HBA is installed
7 Expand Driver - Storage Fibre Channel, then click on the appropriate driver hyperlink (if more than one version of the driver is listed, verify the latest supported version listed in the latest HPE Data Agile BURA Compatibility Matrix)
8 Below the details for the driver you should notice several tabs. Select the Release Notes tab, then scroll to verify whether there are any Upgrade Requirements or to view additional information
9 Select the Installation Instructions tab to verify how to install the driver. Be sure to copy or save the installation instructions
10 Click on the Download tab to copy the file to your server
11 A reboot might be required following the driver installation
Installing the Linux OPEN-iSCSI module
You can install and use the iscsi-initiator-utils package (Red Hat®) or the open-iscsi module (SUSE). Download and install either of the packages using your distribution's package manager (yum or YaST, for example). Detailed instructions for iscsiadm can be found in the iscsiadm man pages.
Prior to discovering available iSCSI target devices on an HPE Storage System from a Linux server, the target requires the Linux server iSCSI initiator name. This name is found in the /etc/iscsi/initiatorname.iscsi file.
Once the iSCSI initiator name has been determined, discover available iSCSI target devices on an HPE Storage System by typing the following command, where x.x.x.x is the IP address of the HPE Storage System:
iscsiadm --mode discovery --type sendtargets --portal x.x.x.x
To connect to the HPE Storage System targets, type the following command for each discovered target, where target_name is the name returned by the iscsiadm discovery:
iscsiadm --mode node -T target_name --login --portal x.x.x.x
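The discovery-then-login sequence can be scripted. The sketch below is an illustration only: it assumes sendtargets output of the form `<portal>,<tpgt> <target_iqn>` (verify against your iscsiadm version), and it prints, rather than runs, one login command per target. The function name and sample values are hypothetical.

```shell
# Sketch: turn `iscsiadm --mode discovery` output into login commands.
# Assumes each input line looks like "<portal>,<tpgt> <target_iqn>".
print_login_cmds() {
  while read -r portal target; do
    [ -n "$target" ] || continue
    # Strip the ",<tpgt>" suffix so only the portal address remains.
    printf 'iscsiadm --mode node -T %s --login --portal %s\n' \
      "$target" "${portal%,*}"
  done
}
```

Typical use would be `iscsiadm --mode discovery --type sendtargets --portal x.x.x.x | print_login_cmds`, reviewing the printed commands before piping them to sh.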
Storage HBAs with Linux servers
Hewlett Packard Enterprise supports the Linux in-box driver (the driver supplied with the operating system distribution). However, not all hardware configurations support the in-box driver. To determine if your configuration supports the in-box driver, see the HBAs, CNAs, Flex Fabric Adapters and Server LOMs Support Matrix (Linux, Citrix, VMware, and Windows), which is available on the HPE SPOCK website. You must sign up for an HPE Passport to access SPOCK.
Whether you are using the in-box drivers or the out-of-box drivers, Hewlett Packard Enterprise recommends that you install the HPE Fibre Channel Enablement Kit, because it provides additional libraries and configuration utilities that enable HPE Fibre Channel Storage Arrays to work properly with Linux.
Note: If you are using any HPE management applications, you need the HBA API libraries that come with the HPE-fc-enablement RPM.
Downloading and installing the Fibre Channel Enablement Kit for Linux
Note: There has been a change to the enablement kits released after 29 April 2014; they are now vendor-specific kits. Hewlett Packard Enterprise recommends that you uninstall any previous kits prior to installation of the latest version of the enablement kit.
1 From an Internet browser, go to hpe.com
2 Click on Support
3 Under Product Support, click HPE Servers, Storage and Networking
4 In the Enter product name or number box, enter your HBA model (for example, AP770A, AJ763A, AJ764A, CN1000Q, CN1100E, or NC553m) and click Go
5 Click Get drivers, software & firmware
6 Select the Linux Server operating system version of the ProLiant system in which the HBA is installed
7 Select the Software - Storage Controllers - FC HBA hyperlink and click on the HPE Fibre Channel Enablement Kit for Linux (American, International)
8 Below the details for the driver you should notice several tabs. Select the Release Notes tab, then scroll to verify whether there are any Upgrade Requirements or to view additional information
9 Select the Installation Instructions tab to verify how to install the HPE Fibre Channel Enablement Kit for Linux. Be sure to copy or save the installation instructions
10 Click on the Download tab to copy the RPM file to your server
11 Browse to the directory that you downloaded the RPM to
12 Follow the Installation Instructions that you copied or saved in step 9
13 A reboot is required after the installation for the updates to take effect and for hardware stability to be maintained
14 Verify that the host has successfully discovered all the expected devices - tape drives, library robotic devices, and disk-based backup devices - using one of the following methods:
– Review the devices listed from running the command cat /proc/scsi/scsi
– Review the output from the sg_inq command (which requires that sg3_utils is installed) for any of the /dev/sg devices listed in the output of the sg_map command. See figure 3 as an example.
Figure 3 Verifying devices using sg_map and sg_inq commands
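As a quick sanity check on the discovery methods above, the output of cat /proc/scsi/scsi can be counted by device type. This helper is a sketch; the `Type:` line format assumed here should be verified on your distribution.

```shell
# Sketch: count devices of a given SCSI type in /proc/scsi/scsi output.
# Tape drives report "Sequential-Access"; library robots report "Medium Changer".
count_scsi_type() {
  # $1: type string as printed on the "Type:" line; stdin: /proc/scsi/scsi text
  grep -c "Type:[[:space:]]*$1" || true
}
```

For example, `cat /proc/scsi/scsi | count_scsi_type Sequential-Access` should match the number of tape drives zoned to the host.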
Installing the HPE StoreEver Tape advanced path failover drivers
Red Hat Enterprise Linux Server 6.2 (x86_64), 6.3 (x86_64), 6.4 (x86_64), 6.5 (x86_64), and 6.6 (x86_64)
The advanced path failover drivers for Linux replace the normal SCSI Tape and SCSI Generic drivers. For devices that do not support advanced path failover, the drivers pass all SCSI commands through the same code path that is followed when the standard drivers are loaded; commands for devices that do support failover are routed through the new PFO driver.
1 Go to hpe.com/support/storage
2 Select Tape Storage
3 Click Enterprise Class Tape Libraries (for ESL G3) or Tape Libraries (for MSL6480)
4 Click HPE StoreEver ESL G3 Tape Libraries or HPE StoreEver MSL6480 Tape Library
5 In the Download options tab, click Get drivers, software & firmware
6 For the ESL G3, select your product. For MSL6480, skip to the next step
7 Under Operating systems, select OS Independent
8 Expand Driver - Storage Tape, then select the appropriate driver hyperlink
9 Click Obtain software for the HPE StoreEver High Availability Failover Driver for your operating system
10 Click on Select. An HPE Passport account (a sign-in link is provided) is required
11 After logging in using your HPE Passport, complete the required fields, then read and accept the software license agreement. Click Next
12 On the following page, select the Download tab for the HPE StoreEver High Availability Failover Driver for the version of Red Hat that is installed on your server
13 To install the drivers, run the following command:
rpm -ivh <filename>.rpm
14 In some cases the server will need to be rebooted to complete the installation. Check the instructions provided by the RPM file output and reboot the server if requested
15 The driver revision number indicates the build date of the driver and can be viewed by running:
cat /proc/scsi/sg/version
16 You can view the status of a device that is controlled by the failover driver by reading a file in the /sys file system. For example, to see the path status for /dev/sg3:
cat /sys/class/pfo/pfo3/paths
Enabling advanced path failover on a device while the driver is running
If a device has the advanced path failover feature disabled, then when advanced path failover is enabled the device resets itself, removing the old /dev file. When the device comes back up, it is recognized as an advanced path failover device and then operates normally as one. It may not have the same /dev file name as before the change.
Disabling advanced path failover on a device while the driver is running
Disabling advanced path failover while a device is running is not recommended, because the paths will not be cleanly removed and then reassociated. If advanced path failover is disabled on any device, the Linux server will need to be rebooted. When possible:
1 Power down the Linux server cleanly
2 Disable advanced path failover on the device
3 Boot the Linux server
For more detailed information related to installing the HPE StoreEver Tape advanced path failover drivers for Red Hat Linux Servers, refer to the HPE StoreEver Tape Libraries Failover User Guide, specifically the section titled Installing and using Linux advanced path failover drivers.
Additional SG device files
In most environments the default number of SG device files is sufficient to support all of the required devices. In larger SAN environments, if the default number of SG device files is fewer than the combined total of disk and tape devices being allocated to the server, then additional device files need to be created. SG device files are preferable to the standard SCSI tape (st) device files, because the st driver's SCSI timeout values may not be sufficient in length to support some tape operations.
To create additional SG device files, run the following:
mknod /dev/sgX c 21 X
X signifies the number of a device file that does not already exist. For additional command options, see the mknod man page.
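When several device files are missing, the mknod commands can be generated in a batch. The sketch below emits, rather than runs, the commands so they can be reviewed before running as root; the function name is our own, and major number 21 is the sg character device major as noted above.

```shell
# Sketch: print the mknod commands for /dev/sgN nodes in a numeric range.
# Review the output, then run it as root (e.g. pipe it to sh).
sg_mknod_cmds() {
  first=$1; last=$2
  n=$first
  while [ "$n" -le "$last" ]; do
    echo "mknod /dev/sg$n c 21 $n"
    n=$((n + 1))
  done
}
```

For example, `sg_mknod_cmds 16 31` prints the commands for sixteen additional nodes.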
Red Hat and SUSE Linux Server best practices
Rewind commands being issued by rebooted Linux hosts
Device discovery that occurs as part of a normal Linux server boot operation can cause a SCSI rewind command to be issued to all attached tape drives. If the data protection and archiving software does not employ SCSI reserve/release and the rewind command is received while the tape drive is busy writing, the result is a corrupted tape header and an unusable piece of backup media.
This issue could manifest itself in several ways
bull A failed verify operation
bull A failed restore operation
bull The inability to mount a tape and read the tape header
If a backup verification is not completed the normal backup process might not detect that an issue exists
Tape devices not discovered and configured across server reboots
Tape drives can disappear from Linux servers after the host reboots when using older versions of the HBA drivers. Adding the line "hp_rescan -a" to /etc/rc.d/rc.local resolves the issue. The hp_rescan utility was previously included and installed with older versions of the HPE Linux FCoE/FC Driver Kit.
Note: The latest versions of the Fibre Channel Enablement for Linux kits no longer include the HPE fibreutils package, which contains the hp_rescan utility. The fibreutils package can be manually downloaded and installed by following steps 1-6 from the section Downloading and installing the Fibre Channel Enablement Kit for Linux. For step 5, select Software - Storage Controllers - FC HBA.
This issue, which affects Red Hat installations and, intermittently, some SUSE Linux installations, is understood to be an issue with the mid-layer SCSI driver and its interaction with SCSI-2 tape automation products. The permanent resolution to this issue is to upgrade to the latest FC driver kit.
Enable iSCSI target devices to remain persistent across system reboots
To enable the iSCSI target devices to remain persistent across system reboots, the open-iscsi service must be configured to run at system startup. This can be done by issuing the following commands:
For Red Hat 7 and SUSE 12:
systemctl enable iscsid.service
systemctl restart iscsid.service
For earlier versions of Red Hat:
chkconfig iscsi on
For earlier versions of SUSE:
chkconfig open-iscsi on
To verify that this configuration change has been accepted, run the following command:
For Red Hat 7 and SUSE 12:
systemctl -a | grep iscsi
iscsi-shutdown.service error inactive dead Logout off all iSCSI sessions on shutdown
iscsi.service loaded inactive dead Login and scanning of iSCSI devices
iscsid.service loaded active running Open-iSCSI
iscsiuio.service loaded active running iSCSI UserSpace I/O driver
iscsid.socket loaded active running Open-iSCSI iscsid Socket
iscsiuio.socket loaded active running Open-iSCSI iscsiuio Socket
For earlier versions of Red Hat:
chkconfig --list iscsi
iscsi 0:off 1:off 2:off 3:on 4:off 5:on 6:off
SUSE servers:
chkconfig --list open-iscsi
open-iscsi 0:off 1:off 2:off 3:on 4:off 5:on 6:off
LUNs shifting after reboot
The Linux 2.6 kernel and later enhanced the management of attached devices through the introduction of udev. The udev device manager provides users with a persistent naming process for all devices across reboots. For details on how to configure udev, refer to the appropriate Linux distribution documentation.
If your data protection and archiving software requires persistent device mapping, use the software's device configuration wizard to ensure proper configuration.
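For illustration only, a udev rule of the following shape gives a tape drive a stable alias keyed to its serial number. The rule file name, match keys, and serial value here are hypothetical and must be checked against `udevadm info` output for your device and distribution before use.

```
# /etc/udev/rules.d/61-tape-alias.rules (hypothetical example)
# Create a stable /dev/tape_lib1_drv0 symlink for the matching nst device.
KERNEL=="nst*", ENV{ID_SERIAL}=="HU1234XYZ", SYMLINK+="tape_lib1_drv0"
```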
Recommended changes to queue depth and timeout values
Changes to queue depth and timeout values are recommended when operating HPE StoreEver ESL G3 Tape Libraries and HPE StoreEver MSL Tape Libraries with Linux-based hosts. Recommended changes are as follows:
• Queue Depth
The queue depth when operating the MSL6480 or MCB Version 1 ESL G3 libraries should be set to one (1), as there is only one active robot to complete Move Medium commands. With dual-robot MCB Version 2 ESL G3 libraries, the queue depth should be set to two (2), as the library has two active robots that can complete Move Medium commands.
HPE LTO drives are capable of handling command queues of four or five commands, but if hosts continue to send commands past that amount, the drive (or the library being hosted by that drive) will start to return status messages saying that the queue is full and the host should wait 500 ms. If the host doesn't stop sending commands at this point, the delays in returning command status can be long enough that the drive appears hung. Care should therefore be taken to set the queue depth to the correct value, preferably by using the recommended queue depths provided above.
With Linux-based hosts, this command lets you see what the queue depth is set to for each generic SCSI device:
find /sys/class/scsi_generic/*/device/queue_depth -exec grep -H . {} \;
If required, the sg_inq command can help correlate the SCSI generic mapping to specific devices: sg_inq /dev/sg<n>
Note: <n> is the sg number provided in the output from the previous command.
• Timeouts
Most applications increase the timeout value for motion commands but continue to rely on the default timeout value for Read Element Status commands. This can especially be of concern in larger partitioned libraries, where multiple Read Element Status commands to the different partitions are outstanding and the default timeout value is reached before a response is provided. Depending on whether udev rules are in effect, the default timeout value on Linux-based hosts tends to be either thirty (30) or sixty (60) seconds. Given all of the above, it is recommended that the default timeout value be changed to twenty minutes on all Linux-based hosts operating HPE LTO drives in HPE StoreEver ESL G3 Tape Libraries and HPE StoreEver MSL Tape Libraries, in order to allow multiple commands to complete successfully without hitting the default timeout value.
With Linux-based hosts, this command lets you see what the default timeout value is currently set to:
find /sys/class/scsi_generic/*/device/timeout -exec grep -H . {} \;
If required, the sg_inq command can help correlate the SCSI generic mapping to specific devices: sg_inq /dev/sg<n>
Note: <n> is the sg number provided in the output from the previous command.
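Both recommendations can be applied in one pass over the sg devices. The sketch below is an assumption-laden illustration: the function name and the sysfs-root parameter are our own (on a real host the root is /sys), the defaults reflect the single-robot queue depth of 1 and the twenty-minute (1200-second) timeout discussed above, and the settings written this way do not persist across reboots.

```shell
# Sketch: write the recommended queue depth and timeout for every sg device.
# Arguments: sysfs root (default /sys), queue depth (default 1),
# timeout in seconds (default 1200). Run as root on a real host.
set_sg_tuning() {
  sysfs_root=${1:-/sys}; qdepth=${2:-1}; timeout=${3:-1200}
  for dev in "$sysfs_root"/class/scsi_generic/sg*/device; do
    [ -d "$dev" ] || continue
    echo "$qdepth"  > "$dev/queue_depth"
    echo "$timeout" > "$dev/timeout"
  done
}
```

Use a queue depth of 2 instead of 1 for dual-robot MCB Version 2 ESL G3 libraries, per the recommendation above.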
The detailed procedure for making the recommended changes to the command queuing and default timeout values for Linux-based hosts can be viewed in the following Engineering Advisory: HPE StoreEver ESL G3 Tape Libraries, MSL Tape Libraries, and 1/8 G2 Autoloaders - Recommended Changes to Queue Depth and Timeout Values With Linux-Based Hosts.
HPE-UX Server
Installing HBA drivers in the kernel
HPE-UX 11i v2 (11.23 IA-64)
1 The drivers schgr, sctl, and stape must all be installed in the kernel. To see if these drivers are installed, enter the following command:
/usr/sbin/kcmodule schgr sctl stape
The following example shows output from kcmodule where the stape driver is not installed:
Module   State    Cause
schgr    static   explicit
sctl     static   depend
stape    unused
If one or more of the above drivers is in the unused state, they must be installed in the kernel. If they are all installed (static state), proceed to the section Final host configurations.
2 Use kcmodule to install modules in the kernel. For example, to install the stape module, run the following command:
/usr/sbin/kcmodule stape=static
Enter yes to back up the current kernel configuration file and initiate the new kernel build.
3 Reboot the server to activate the new kernel:
cd /
/usr/bin/shutdown -r now
HPE-UX 11i v3 (11.31 IA-64)
1 The drivers schgr, sctl, stape, eschgr, esctl, and estape must all be installed in the kernel. To see if these drivers are installed, enter the following command:
/usr/sbin/kcmodule sctl esctl schgr eschgr stape estape
The following example shows output from kcmodule where the stape driver is not installed:
Module   State    Cause
esctl    static   best
sctl     static   depend
schgr    static   best
eschgr   static   best
stape    unused
estape   static   best
If one or more of the above drivers is in the unused state, they must be installed in the kernel. If they are all installed (static state), proceed to the section Final host configurations.
2 Use kcmodule to install modules in the kernel. For example, to install the stape module, use the following command:
/usr/sbin/kcmodule stape=static
Enter yes to back up the current kernel configuration file and initiate the new kernel build.
3 Reboot the server to activate the new kernel:
cd /
/usr/bin/shutdown -r now
Installing the HPE-UX iSCSI Software Initiator
The iSCSI Software Initiator is located at the HPE Software Depot:
1 Go to software.hp.com
2 Enter iSCSI Software Initiator in the Search Software Depot box located on the upper right side of the website
3 When the search results show iSCSI Software Initiator, click on Select. An HPE Passport account (a sign-in link is provided) is required
4 After logging in using your HPE Passport, complete the required fields, scroll down, then read and accept the software license agreement for the order. Click Next
5 Under Documentation, click on the Download tab for the Installation Instructions to download instructions for using the Software Distributor tool to install the iSCSI Software Initiator
6 Under Software, click on the Download tab for the iSCSI Software Initiator version that you would like to download
7 After installing the iSCSI Software Initiator and rebooting, you can verify that the installation was successful by running the following command:
swlist iSCSI-00
If the HPE-UX iSCSI Software Initiator is installed correctly, the output will be:
HPE-UX 11.23:
# Initializing...
# Contacting target "localhost"...
# Target: localhost
iSCSI-00            B.11.23.03e  HPE-UX iSCSI Software Initiator
iSCSI-00.ISCSI-SWD  B.11.23.03e  HPE-UX iSCSI Software Initiator
HPE-UX 11.31:
# Initializing...
# Contacting target "localhost"...
# Target: localhost
iSCSI-00            B.11.31.01   HPE-UX iSCSI Software Initiator
iSCSI-00.ISCSI-SWD  B.11.31.01   HPE-UX iSCSI Software Initiator
Final host configurations
1 Run ioscan to verify that the host detects the tape devices:
ioscan
For HPE-UX 11.23 legacy device special files (DSFs) or persistent DSFs, run the following commands:
ioscan -fnkC tape
ioscan -fnkC autoch
2 For HPE-UX 11.31 persistent DSFs, run the following commands:
ioscan -fnNkC tape
ioscan -fnNkC autoch
Note: Some data protection and archiving software might not currently support HPE-UX 11.31 persistent DSFs for tape. Review your data protection and archiving software documentation for more information.
3 To verify that the host detects iSCSI devices, issue the ioscan command as follows for HPE-UX 11.23:
ioscan -fnC iscsi
Issue the ioscan command as follows for HPE-UX 11.31:
ioscan -fnNC iscsi
If the software is installed correctly, the generated output will look similar to this:
Class I H/W Path Driver S/W State H/W Type Description
=====================================================================
iscsi 0 255/0 iscsi CLAIMED VIRTBUS iSCSI Virtual Node
4 If no device files have been installed, enter the following commands:
insf -C tape -e
insf -C autoch -e
The command line tool for configuring the HPE-UX iSCSI Software Initiator is iscsiutil. Detailed instructions for iscsiutil can be found in the iscsiutil man pages. If using iscsiutil to configure the HPE-UX iSCSI Software Initiator, add the path for iscsiutil and other iSCSI executables to the root path:
PATH=$PATH:/opt/iscsi/bin
You should now be able to find the iSCSI initiator node for the HPE-UX host:
iscsiutil -l
To discover available iSCSI target devices on an HPE Storage System, type the following command, where x.x.x.x is the IP address of the HPE Storage System:
iscsiutil -aI x.x.x.x
Installing the HPE StoreEver Tape advanced path failover drivers for HPE-UX 11.31
Advanced path failover for HPE-UX is implemented by updating HPE-UX drivers to support advanced path failover with the LTO-6 tape drives. The drivers function as both failover and non-failover drivers.
The updated drivers are:
• HPE-UX tape driver (estape) - used for data path failover
• HPE-UX media changer driver (eschgr) - used for control path failover
• HPE-UX SCSI stack driver (esctl) - used for data path and control path failover
To download and install the HPE StoreEver Tape advanced path failover drivers for HPE-UX 11.31:
1 Get the latest HPE-UX patches from h20566.www2.hpe.com/portal/site/hpsc/patch/home
Note: To access and download HPE-UX patches, you must have:
1 An HPE Passport account (a sign-in link is provided)
2 An active HPE support agreement linked to your HPE Support Center profile. The active Hewlett Packard Enterprise support agreement must:
– Cover the specific HPE Operating Systems linked to your HPE Support Center user profile
– Include software updates or previous version support privileges
Links are provided to view your current patch access privileges or to contact Hewlett Packard Enterprise.
2 To locate the patches, search for estape, eschgr, and esctl, or the patch number, and then look at the Prepby field to see if there is a superseding patch
3 To install the advanced path failover drivers, use the standard HPE-UX kernel patch installation process to install the following patches on the HPE-UX Servers running HPE-UX 11.31:
– HPE-UX tape driver patch (estape) - PHKL_43680 or superseding patch
– HPE-UX media changer driver patch (eschgr) - PHKL_43681 or superseding patch
– HPE-UX SCSI stack (mass storage stack) driver patch (esctl) - PHKL_43819 or superseding patch
4 The server will automatically reboot as part of the installation process
You can use ioscan to view the tape and library (media changer) devices connected to the HPE-UX Server. The device special file (DSF) is listed as the last item in the description:
ioscan -knNfC tape
ioscan -knNfC autoch
Finding the lockdown path
The load-balance policy used to route data on multiple paths to a tape drive or library is called the "path-lockdown" policy. Use the scsimgr get_info command to see the current lockdown path for a library. For example:
scsimgr get_info -D /dev/rchgr/autoch35
STATUS INFORMATION FOR LUN /dev/rchgr/autoch38
...
LUN path used when policy is path_lockdown = 0/0/9/0/1.0x50014382c6e4f009.0x1000000000000
scsimgr get_attr -D /dev/rtape/tape28_BEST
SCSI ATTRIBUTES FOR LUN /dev/rtape/tape28_BEST
name = lpt_lockdown
current = 0/0/9/0/0.0x100000e00222a6c1.0x2000000000000
default =
saved =
For additional information, see the HPE-UX man pages scsimgr (1M), ioscan (1M), mknod (2), mksf (1M), rmsf (1M).
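The lockdown path can be pulled out of the scsimgr output with a simple filter. This is a sketch only: it assumes the `LUN path used when policy is path_lockdown = ...` line shape shown above, the function name is our own, and the device file in the usage example is illustrative.

```shell
# Sketch: extract the lockdown lunpath from `scsimgr get_info` output.
lockdown_path() {
  # stdin: scsimgr get_info output; prints only the lunpath value
  sed -n 's/.*policy is path_lockdown = //p'
}
```

Typical use: `scsimgr get_info -D /dev/rchgr/autoch35 | lockdown_path`.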
Troubleshooting advanced path failover for HPE-UX 11.31
Advanced path failover errors are logged in the /var/adm/syslog/syslog.log file as part of the default SCSI I/O tracing function of HPE-UX. You can use standard file viewing commands, including cat, vi, dmesg, and tail -f, to view the syslog.log.
Enabling or disabling advanced path failover for HPE-UX 11.31
Advanced path failover is disabled by default. When advanced path failover is disabled, the driver operates as if the device is not capable of using the advanced path failover feature.
When advanced failover is enabled for the library or tape drive, the device resets itself and must be opened using the device special file before the driver will recognize it as an advanced path failover device and use the failover features of the driver. Opening the device is generally done by the host applications.
You can enable or disable advanced path failover using the library web-based interface. For instructions, refer to the following sections of the HPE StoreEver Tape Libraries Failover User Guide:
• Enabling control path failover, under Configuring failover for the HPE StoreEver ESL G3 Tape Libraries
• Enabling data path failover, under Configuring failover for HPE StoreEver MSL6480 Tape Libraries
When advanced path failover is disabled, the passive control paths to the library will go into an error state (NO_HW) in the ioscan (1M) command output. These stale entries do not affect the function of the library. To clear these errors so the device can be accessed using its DSF:
1 On the HPE-UX host, run rmsf -H on the lunpath hardware paths that are in the NO_HW state. For example: rmsf -H 0/4/0/0/0/1.0x50014380023560d4.0x1000000000000
2 Run ioscan -kfNH <HBA path>. For example: ioscan -kfNH 0/4/0/0/0
Hewlett Packard Enterprise recommends only enabling or disabling advanced path failover when the library is not opened by any applications. If advanced path failover is disabled while an application is accessing the library, all of the library's lunpaths will go offline and I/O requests to the library will fail.
For more detailed information related to installing the HPE StoreEver Tape advanced path failover drivers for HPE-UX 11.31 Servers, refer to the HPE StoreEver Tape Libraries Failover User Guide, specifically the section titled Installing and using HPE-UX advanced path failover drivers.
HPE-UX Server best practices
HPE-UX 11.31 can experience poor I/O performance on VxFS file systems due to memory blocking during high system memory usage. The HPE-UX 11.31 kernel subsystems and file I/O data cache can consume up to 90 percent of system memory during normal operation. When a heavy file I/O application, such as data protection and archiving software, starts, memory usage can reach close to 100 percent. Under such conditions, if VxFS attempts to allocate additional memory for inode caching, the result can be memory blocking and subsequent poor file I/O performance. In extreme conditions this scenario can cause data protection and archiving software to time out during file system reads, which could result in backup job failures.
Poor I/O performance resolution
To avoid backup job failures due to memory blocking, modify the kernel tunable parameter vx_ninode. The vx_ninode parameter determines the number of inodes in the inode table, which helps VxFS with caching. By default, the size of the inode cache is decided (auto-tuned) at boot time by VxFS depending on the amount of physical memory in the machine. When modifying the value of vx_ninode, HPE recommends the following:
Table 4 Tuning vx_ninode
PHYSICAL MEMORY OR KERNEL AVAILABLE MEMORY VXFS INODE CACHE (NUMBER OF INODES)
1 GB 16384
2 GB 32768
3 GB 65536
gt 3 GB 131072
To determine the current value of vx_ninode, run the following at the shell prompt:
/usr/sbin/kctune vx_ninode
To set vx_ninode to 32768, run the following command at the shell prompt:
/usr/sbin/kctune vx_ninode=32768
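The mapping in Table 4 can be expressed as a small helper for scripting the tuning step. This is a sketch: the function name is our own, and the thresholds simply mirror the table above.

```shell
# Sketch: map physical memory (in whole GB) to the vx_ninode value
# recommended in Table 4.
recommend_vx_ninode() {
  mem_gb=$1
  if   [ "$mem_gb" -le 1 ]; then echo 16384
  elif [ "$mem_gb" -le 2 ]; then echo 32768
  elif [ "$mem_gb" -le 3 ]; then echo 65536
  else echo 131072
  fi
}
```

On an HPE-UX host this could feed kctune directly, for example `/usr/sbin/kctune vx_ninode=$(recommend_vx_ninode 2)`.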
Note: The kernel tunable parameters filecache_min and filecache_max control the amount of physical memory that can be used for caching file data during system I/O operations. By default, these parameters are automatically determined by the system to better balance memory usage among file system I/O intensive processes and other types of processes. The values of these parameters can be lowered to allow a larger percentage of memory to be used for purposes other than file system I/O caching. Determining whether or not to modify these parameters depends on the nature of the applications running on the system.
HPE-UX 11.23: Disabling rewind-on-close devices with st_san_safe
Turning on the HPE-UX 11.23 kernel tunable parameter st_san_safe disables tape DSFs that are rewind-on-close. This prevents utilities like mt from rewinding a tape that is in use by another utility.
Some applications or utilities require rewind-on-close DSFs (for example, the frecover utility that comes with HPE-UX). In this case, disabling rewind-on-close devices renders the utility unusable. Most data protection and archiving software, such as HPE Data Protector, can be configured to use SCSI reserve/release, which protects them from rogue rewinds by other utilities. The requirements of your data protection and archiving environment should be considered when determining whether or not to enable st_san_safe.
To determine if rewind-on-close devices are currently disabled, enter:
/usr/sbin/kctune st_san_safe
If the value of st_san_safe is 1, then rewind-on-close devices are disabled. If the value is 0, then rewind-on-close devices are enabled. To disable rewind-on-close devices, enter:
/usr/sbin/kctune st_san_safe=1
Oracle Solaris Server
Solaris 10 Update 11 (SPARC), Solaris 10 Update 11 (x86/x64), Solaris 11.2 (SPARC), Solaris 11.2 (x64)
How to enable the iSCSI Software Initiator
1 For Solaris 10 Update 11 (SPARC) and 11.2 (SPARC), enable the iSCSI services using the command:
svcadm enable network/iscsi/initiator
2 Verify the iSCSI services are running:
svcs -a | grep 'iscsi/initiator'
online 10:10:28 svc:/network/iscsi/initiator:default
3 For earlier versions of Solaris, enable the iSCSI services using the command:
svcadm -v enable iscsi_initiator
svc:/network/iscsi_initiator:default enabled
4 Verify the iSCSI services are running:
svcs -a | grep iscsi_initiator
online 10:10:28 svc:/network/iscsi_initiator:default
The command-line tool for configuring the Solaris iSCSI Software Initiator is iscsiadm. Detailed instructions for iscsiadm can be found in its man pages. If using iscsiadm to configure the Solaris iSCSI Software Initiator, run the following command to find the iSCSI initiator node for the Solaris host:
iscsiadm list initiator-node
To discover available iSCSI target devices on an HPE Storage System, enter the following commands, where x.x.x.x is the IP address of the HPE Storage System:
iscsiadm add discovery-address x.x.x.x:3260
iscsiadm list discovery-address
Discovery Address: x.x.x.x:3260
iscsiadm modify discovery -t enable
iscsiadm list discovery
List the configured iSCSI target devices using the following command:
iscsiadm list target
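The discovery sequence above can be assembled for review before it is run. A minimal sketch, assuming a StoreOnce appliance at a placeholder IP address (192.0.2.10 is a documentation address; substitute your own): the commands are printed rather than executed, since iscsiadm exists only on the Solaris host itself.

```shell
# Sketch: assemble the iscsiadm discovery sequence for one appliance IP.
target_ip="192.0.2.10"   # assumption: replace with your HPE Storage System IP
port=3260

cmds=$(printf '%s\n' \
    "iscsiadm add discovery-address ${target_ip}:${port}" \
    "iscsiadm list discovery-address" \
    "iscsiadm modify discovery -t enable" \
    "iscsiadm list target")
echo "$cmds"
```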
Oracle Solaris Server best practices
Troubleshooting with the cfgadm utility
• Getting the status of FC devices using cfgadm:
cfgadm -al
The output of this command shows, for example, a media changer at LUN 0 for the 100000e0022229fa9 WWN, and tape and disk devices at LUN 0 for other WWNs. Devices listed as connected and configured are ready for use.
The cfgadm -al -o show_FCP_dev command can be used to show the devices for all LUNs of each Ap_Id.
• Fixing a device with an "unusable" condition
If the condition field of a device in the cfgadm output is "unusable", the server cannot use the device. This may have been caused by a hardware issue. In this case, do the following to resolve the issue:
– Resolve the hardware issue so the device is available to the server.
– After the hardware issue has been resolved, use the cfgadm utility to verify device status and to mend the status if necessary.
• Use cfgadm to get device status:
cfgadm -al
– For a device that is "unusable", use cfgadm to unconfigure the device and then re-configure the device. For example (this is an example only; your device WWN will be different):
cfgadm -c unconfigure c4::100000e0022286ec
cfgadm -f -c configure c4::100000e0022286ec
– Use cfgadm again to verify that the condition of the device is no longer "unusable":
cfgadm -al
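The mend procedure above can be scripted across all affected devices. A sketch, assuming a captured sample of cfgadm -al output with illustrative Ap_Ids; on a Solaris host you would pipe live cfgadm -al output instead, and the generated commands would then be reviewed and run.

```shell
# Sketch: find "unusable" Ap_Ids in cfgadm output and emit the
# unconfigure/re-configure pair for each one. Sample output stands in
# for the live command; the Ap_Ids below are illustrative only.
sample='Ap_Id                 Type         Receptacle   Occupant     Condition
c4::100000e0022286ec  tape         connected    configured   unusable
c5::100000e0022229fa  med-changer  connected    configured   ok'

fix_cmds=$(printf '%s\n' "$sample" | awk '$5 == "unusable" {
    print "cfgadm -c unconfigure " $1
    print "cfgadm -f -c configure " $1
}')
echo "$fix_cmds"
```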
IBM AIX Server
AIX 6.1 (TL9), AIX 7.1 (TL3)
Installing the HBA device driver
Install the IBM (5729/5735/5273/5758/5759/5773/5774) HBA and restart the server.
1. Ensure that the HBA is recognized. At the shell prompt, type: lsdev -Cc adapter
There is a line in the output similar to the following: fcs0 Available 1D-08 FC Adapter
If the adapter is not recognized, check that the correct HBA fileset (driver) is installed:
6228: lslpp -L | grep devices.pci.df1000f7
6239: lslpp -L | grep devices.pci.df1080f9
5716: lslpp -L | grep devices.pci.df1000fa
5759: lslpp -L | grep devices.pci.df1000fd
5773: lslpp -L | grep devices.pciex.df1000fe
5774: lslpp -L | grep devices.pciex.df1000fe
There are lines in the lslpp output similar to the following for a 6239 HBA:
devices.pci.df1080f9.diag 5.1.0.1 C F PCI-X FC Adapter Device
devices.pci.df1080f9.rte 5.1.0.1 C F PCI-X FC Adapter Device
2. For information about the HBA, such as the WWN, execute the following command: lscfg -vl fcs0
The WWN appears in the output as the Network Address field.
3. To see the version of microcode (firmware) being run by the HBA, use the following command: lsmcode -c -d <device>. Microcode and other updates can be found at ibm.com/support/fixcentral.
4. After the HBA has successfully logged into the SAN fabric and the necessary zoning is configured, configure the HBA and devices within the fabric. At the prompt, type: cfgmgr -l <devicename> -v
Note: Running cfgmgr without a -l argument may generate a "devices.fcp.changer" error. This is a result of the cfgmgr device scan receiving a response from the autochanger device, for which AIX does not have a specific driver. Under these conditions, the error message does not indicate a problem and is for information only.
5. Within the command, <devicename> is the name from the output of the lsdev command in step 1, such as fcs0.
6. To ensure all tape device files are available, at the prompt type: lsdev -HCc tape
7. By default, AIX creates tape devices with a fixed block length. To change the devices to variable block lengths, at the prompt type: chdev -l <tapedevice> -a block_size=0
8. Configuration of the tape devices (where tape devices are rmt0, rmt1, and so on) is complete.
Note: HPE LTO tape drives use the IBM host tape driver. When properly configured, a device listing will show the tape device as follows. For IBM native HBAs: Other FC SCSI Tape Drive. For non-IBM native HBAs: Other SCSI Tape Drive.
9. To configure Fast I/O Failure for Fibre Channel devices after link events in the SAN, change the fast fail parameter as in the example below: chdev -l fscsi# -a fc_err_recov=fast_fail. Within the command, the # in fscsi# is the same number as in the adapter name from the output of the lsdev command in step 1 (e.g., fcs0 would be fscsi0).
IBM AIX Server best practices
Persistent binding
To prevent device shifting after a host reboot, you can enable the dynamic tracking feature in earlier versions of AIX; AIX 7 and above should have this parameter enabled by default.
To enable dynamic tracking of FC devices, set this attribute to dyntrk=yes as shown in the example: chdev -l fscsi# -a dyntrk=yes. Within the command, the # in fscsi# is the same number as in the adapter name from the output of the lsdev command in step 1 (e.g., fcs0 would be fscsi0).
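Both fscsi attribute changes (fast fail in step 9 above, and dynamic tracking here) can be generated for every adapter at once. A sketch, assuming sample lsdev -Cc adapter output stands in for a live AIX host; the emitted chdev commands would then be reviewed and run on the host.

```shell
# Sketch: derive each fscsi instance from the fcs adapters reported by
# `lsdev -Cc adapter` and emit the chdev commands for fast fail and
# dynamic tracking. Sample lsdev output stands in for a live host.
sample='fcs0 Available 1D-08 FC Adapter
fcs1 Available 1D-09 FC Adapter'

cmds=$(printf '%s\n' "$sample" | awk '$1 ~ /^fcs[0-9]+$/ {
    n = substr($1, 4)
    print "chdev -l fscsi" n " -a fc_err_recov=fast_fail"
    print "chdev -l fscsi" n " -a dyntrk=yes"
}')
echo "$cmds"
```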
Note: For an IBM Virtual I/O Server (VIOS) running AIX logical partitions (LPARs), when using N_Port ID Virtualization (NPIV) with AIX LPARs, it is strongly recommended to upgrade VIOS to version 2.2.3.4 or greater.
Virtual machine support
VM software is used for partitioning, consolidating, and managing computing resources, allowing multiple unmodified operating systems and their applications to run in VMs that share physical resources. Each VM represents a complete system with processors, memory, networking, storage, and BIOS. See table 5 for tape and disk support for virtualization products.
Table 5. VM tape/VTL/NAS support
Support columns, in order: StoreEver Direct Attached SCSI; StoreEver Direct Attached SAS; StoreEver FC & FCoE SAN / StoreOnce VTL; StoreOnce iSCSI VTL; StoreOnce Catalyst over Ethernet (CoE); StoreOnce Catalyst over Fibre Channel (CoFC); StoreOnce NAS.
Citrix XenServer Host: No, No. No support statement for tape at this time.
Citrix XenServer Guest VM: No, Yes, Yes, No, Yes. For iSCSI tape devices, the iSCSI Software Initiator must run in the VM operating system. D2D NAS shares must be accessed directly in the VM operating system, not attached through the hypervisor.
HPEVM Host: Yes, No, Yes, Yes, Yes, No, Yes. Tape drive/media changer must not be attached to a guest VM while being used by the host.
HPEVM Guest VM: Yes, No, Yes, Yes, Yes, No, Yes. Tape drive/media changer must only be attached to a single guest VM at a time.
Hyper-V Host: Yes, Yes, Yes, Yes, Yes, No, Yes.
Hyper-V Guest VM: No, No, No, Yes, Yes, No, Yes. For iSCSI tape/media changer devices, the iSCSI Software Initiator must run in the VM operating system. D2D NAS shares must be accessed directly in the VM operating system, not attached through the hypervisor.
VMware ESX Host: Yes, No, No (7), No (7), No, No, No. Direct attached SCSI tape drive is multi-target only (no multi-LUN) and must not be attached to a VM while being used by the host. HPE does not test or validate direct attached SCSI tape on VMware ESX/ESXi hosts and does not provide support for tape drives and tape libraries in such configurations (8).
7. SAN tape devices (FC and iSCSI) are not supported directly by a VMware ESX host. With vStorage API for Data Protection, use a backup server and VM software snapshots to allow FC and iSCSI backups.
8. For ESX 4.1 Server tape support, see the ESX 4.1 Fibre Channel SAN Configuration Guide. For ESXi 5.0 Server tape support, see the ESXi 5.0 vSphere Storage Guide. For ESXi 5.1 Server tape support, see the ESXi 5.1 vSphere Storage Guide. For ESXi 5.5 Server tape support, see the ESXi 5.5 vSphere Storage Guide.
Table 5. VM tape/VTL/NAS support (continued)
VMware Guest VM: Yes, No, No, Yes, Yes, No (9), Yes. Direct attached SCSI tape drive is multi-target only (no multi-LUN) and must only be attached to a single VM at a time. For iSCSI tape/media changer devices, the iSCSI Software Initiator must run in the VM operating system. D2D NAS shares must be accessed directly in the VM operating system, not attached through the hypervisor. HPE does not test or validate direct attached SCSI tape on VMware guest VMs and does not provide support for tape drives and tape libraries in such configurations.
VMware vStorage API for Data Protection: Yes, Yes, Yes, Yes, Yes, No, Yes. FC SANs and shared tape devices are limited to a physical backup server.
Note: Be sure to do the following:
• Refer to your data protection and archiving software documentation for supported VM backup methods.
• Refer to the VM documentation for supported backup devices.
VMware Server
Note: VMware does not support ESXi SAN-attached tape devices. VMware vStorage APIs for Data Protection (VADP) with an off-host backup server can be used to manage SAN devices.
• VADP offloads backup responsibility from ESXi hosts to a dedicated backup server or servers. This reduces the load on ESXi hosts. VADP provides full-image backup and restore capabilities for all VMs, and file-based backups for Microsoft Windows and Linux VMs.
• VMs can also be set up as backup servers with locally mapped NAS shares or iSCSI backup devices. Refer to your data protection and archiving software documentation for VM support.
• VMs can also be set up for LAN backup, the same as a regular client. Refer to your data protection and archiving software documentation for details.
• For recommendations on VMware VM backup and recovery to HPE StoreOnce Backup, go to hpe.com/storage/BURACompatibility, scroll down to Data Agile BURA Solution White Papers, then click on the VMware hyperlink across from "White Papers - Databases and Virtual Machines" to view the associated white papers.
9. Yes when using HPE StoreOnce Recovery Manager Central (RMC) only.
HPE Integrity Virtual Machines
Hewlett Packard Enterprise supports, certifies, and sells HPE Integrity Virtual Machines (HPEVM) virtualization software on HPE Integrity servers.
HPEVM is an application installed on an HPE-UX server that allows multiple unmodified operating systems (HPE-UX, Windows, and Linux) and their applications to run in VMs that share physical resources.
The HPE Virtual Server Environment (VSE) for HPE Integrity provides an automated infrastructure that can adapt in seconds with mission-critical reliability. HPE VSE allows you to optimize server utilization in real time by creating virtual servers that can automatically grow and shrink based on business priorities and service levels.
Note: The HPE Integrity VM host and VMs do support FC SAN connected tape, Virtual Library Systems (VLS) devices, and HPE StoreOnce Backup systems.
• Off-host backups using HPE storage array hardware mirroring or snapshots can be used to shorten backup windows and offload resources required for backup.
• VMs can also be set up for LAN backup, the same as a regular client or media host. Refer to your data protection and archiving software documentation for details.
Microsoft Hyper-V
Note: Hewlett Packard Enterprise does not test or support Hyper-V VMs with SAN or direct attach tape drives. The Hyper-V host or a backup server can be used to manage such devices.
• The Volume Shadow Copy Service (VSS) Hyper-V writer can be used to quiesce Windows VMs and create a snapshot on the Hyper-V host volume. VMs that cannot be quiesced can be placed in the Saved state before snapshot creation. The snapshots are then used for image or file backup of the VMs. If a VM was placed in the Saved state, Hyper-V will return the VM to its original state. Review your data protection and archiving software documentation for details.
• VMs can also be set up for LAN backup, the same as a regular client. Refer to your data protection and archiving software documentation for details.
• VMs can also be set up as backup servers with locally mapped NAS shares or iSCSI backup devices. Refer to your data protection and archiving software documentation for VM support.
© Copyright 2015 Hewlett Packard Enterprise Development LP. The information contained herein is subject to change without notice. The only warranties for Hewlett Packard Enterprise products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. Hewlett Packard Enterprise shall not be liable for technical or editorial errors or omissions contained herein.
Citrix is a registered trademark of Citrix Systems, Inc. and/or one or more of its subsidiaries, and may be registered in the United States Patent and Trademark Office and in other countries. Linux is the registered trademark of Linus Torvalds in the U.S. and other countries. McAfee is a trademark or registered trademark of McAfee, Inc. in the United States and other countries. Microsoft, Windows, and Windows Server are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries. Oracle is a registered trademark of Oracle and/or its affiliates. Red Hat is a registered trademark of Red Hat, Inc. in the United States and other countries. SAP HANA is the trademark or registered trademark of SAP SE in Germany and in several other countries. VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions.
4AA5-7983ENW, December 2015
Installing backup software and patches
After all components on the SAN are logged in and configured, the system is ready for the installation of any supported backup software. Refer to the installation guide for your particular software package, or contact the vendor for detailed installation procedures and requirements. After installing the backup software, check with the software vendor for the latest updates and patches. If any updates or patches exist for your backup software, install them now.
Learn more at hpe.com/storage/StoreEver
Table 1. HPE Data Agile BURA Solution white papers

Cross-platform design guides:
Design Guide for Backup and Archive
Example Configuration Guide for Backup and Archive
Tiered Data Retention for HPE Storage

White papers - Product:
HPE StoreOnce Backup
HPE StoreAll Storage
HPE StoreEver Tape
HPE StoreServ 3PAR File Persona

White papers - Data Protection and Archive Vendors:
AGFA Healthcare IMPAX
Citrix® ShareFile
CommVault Simpana
GE Centricity
Genetec Security Center
HPE Consolidated Archive
HPE Control Point
HPE Data Protector
EMC NetWorker
IBM TSM
iTernity iCAS
McAfee® VirusScan
Milestone XProtect
QStar Archive Manager
Veritas Enterprise Vault
Veritas NetBackup
Veritas Backup Exec
Symantec Protection Engine
Veeam Software

White papers - Databases and Virtual Machines:
Microsoft® Exchange
Microsoft Hyper-V
Microsoft SQL
Oracle
SAP HANA®
VMware®
Supported topologies
The following topologies are supported in a FC SAN, with short-wave SFPs being the only FC connection supported on HPE StoreEver and HPE StoreOnce devices. Any requirement for an extended SAN requires a SAN switch or router to which the StoreEver or StoreOnce devices can attach. Refer to the extended SAN configuration in the HPE Backup and Archive Example Configuration Guide for more details.
Point-to-point
Point-to-point, or Direct-Attach Fibre (DAF), connections are direct FC connections made between two nodes, such as a server and an attached tape library. This configuration requires no switch to implement, and the default for a DAF link is a private loop.1 The storage devices are dedicated to a server in a point-to-point configuration.
Switched fabric
A switched fabric topology is a network topology where network nodes connect with each other via one or more network switches. In the FC switched fabric (FC-SW) topology, devices are connected to each other through one or more FC switches. Visibility among devices in a fabric is typically controlled with zoning.
FCoE is a computer network technology that encapsulates FC frames over Ethernet networks. This allows FC to use 10GbE networks (or higher speeds) while preserving the FC protocol. FCoE maps FC directly over Ethernet while being independent of the Ethernet forwarding scheme. HPE Virtual Connect FlexFabric switches are used as an option in C-series blade enclosures to support FCoE connectivity to the switched fabric. For standalone servers, converged network adapters (CNAs) are used to connect to the fabric through FCoE fabric switches. A CNA, also called a converged network interface controller (C-NIC), is a computer input/output device that combines the functionality of a host bus adapter (HBA) with a network interface controller (NIC); in other words, it "converges" access to a SAN and to a general-purpose computer network.
Installation checklist
Prior to installing data protection and archiving software, review the questions below to ensure that all components are configured properly and logged into the SAN.
• For any Windows 2008 servers, has the Windows feature Removable Storage Manager been removed or disabled?2
• Are all of the following hardware components at the minimum supported firmware revisions specified in the HPE Data Agile BURA Compatibility Matrix: servers, HBAs, FC and/or FCoE switches, tape drives, library robots, and disk-based virtual tape systems?
• Are all recommended operating system patches, service packs, updates, Service Pack for ProLiant Quality Packs (QPK), or Hardware Enablement (HWE) bundles specified in the HPE Data Agile BURA Compatibility Matrix installed on each host?
• Is the minimum supported HBA driver specified in the HPE Data Agile BURA Compatibility Matrix installed on each host?
• If you are using multi-path configurations with your HPE StoreEver Tape Libraries, have you reviewed the technical white paper HPE StoreEver Failover overview, including advanced path failover?
• Are the StoreEver Tape Library, StoreOnce Backup System, and/or StoreAll Storage System online?
• If multiple FC switches are cascaded or meshed, are all inter-switch link (ISL) ports correctly logged in?
• Are all of the host HBAs correctly logged into the FC and/or FCoE switch?
• Are all tape library robotic devices and disk-based virtual tape systems zoned, configured, and presented to each host from the FC and/or FCoE switch?
• If using zoning on the FC and/or FCoE switch, has the zone been added to the active switch configuration?
• Do you have the latest supported version of HPE Command View for Tape Libraries Software installed to manage, monitor, and configure all of your HPE StoreEver Tape Libraries?
1. 16 Gb FC HBAs do not currently support private loop; Brocade FC HBAs only recently began supporting private loop.
2. The Windows Removable Storage Manager (RSM) service will claim tape drives and disrupt any installed backup applications. Removable Storage Manager is no longer available as of Windows 7 and Windows 2008 R2.
• Has connectivity been verified using HPE Library and Tape Tools software or operating-system-specific tools (Linux sg3_utils, HPE-UX System Administration Manager [SAM], Solaris cfgadm, AIX System Management Interface Tool [SMIT], etc.)?
• Is the minimum patch/service pack level installed for the data protection and archiving software?
• If you're having issues with the FC connections, they might need to be cleaned. Avoid touching fingers to any surface that is being used for cleaning. Recommendations for cleaning FC cables and small form-factor pluggable (SFP) connections:
– Air dusters are used to blow loose particles from an optical fiber connector end-face, or to dry up solvent (isopropyl alcohol) residue after a wet cleaning. Not all air dusters are the same; an optic-grade duster should be used.
– Use lint-free wipes to gently wipe the ferrule and the end-face surface of the connector. Make sure the pad makes full contact with the end-face surface.
– Use lint-free wipes and isopropyl alcohol. Gently wipe the ferrule and the end-face surface of the connector with an alcohol pad, making sure the pad makes full contact with the end-face surface. Then wipe the end-face surface on a dry lint-free wipe.
– In-adapter ferrule cleaners, or in-situ cleaning: this semi-automated fiber optic cleaning method is specially designed for fiber optic connectors and SFP ends. It can remove contaminants that forced air will not. An in-situ device can make a tape drive FC port worse. Both 1.25 mm and 2.5 mm versions are available.
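The connectivity check in the first bullet above can be scripted on a Linux host. A sketch, assuming sample output from a device-listing tool such as lsscsi (companion tooling to sg3_utils) stands in for the live command, and the device names are illustrative; on a real host you would pipe lsscsi output instead and compare the counts against the expected device inventory.

```shell
# Sketch: count tape and medium-changer devices visible to a Linux host
# by device type in lsscsi-style output. Sample output stands in here.
sample='[0:0:0:0]  disk     HP  LOGICAL VOLUME  4.68  /dev/sda
[1:0:0:0]  tape     HP  Ultrium 6-SCSI  354W  /dev/st0
[1:0:0:1]  mediumx  HP  MSL G3 Series   930F  /dev/sch0'

tapes=$(printf '%s\n' "$sample" | awk '$2 == "tape"' | wc -l)
changers=$(printf '%s\n' "$sample" | awk '$2 == "mediumx"' | wc -l)
echo "tape drives: $tapes, changers: $changers"
```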
HPE StoreOnce Catalyst
With HPE StoreOnce Catalyst, movement of deduplicated data across the enterprise is even easier. There's no need to deduplicate and rehydrate data at each step; data can be replicated from sites to a central data center or a disaster recovery site in deduplicated form, reducing network bandwidth requirements. All backup and replication jobs may be seamlessly managed by the backup application at the central data center.
Key features of StoreOnce Catalyst:
• Catalyst over Fibre Channel provides all the ISV control and source-side deduplication benefits of current StoreOnce Catalyst, but via Fibre Channel fabric.
• Federated Catalyst allows Catalyst stores to span nodes, simplifying backup management and optimizing available storage in large environments.
• Catalyst stores allow backup applications to utilize low-bandwidth deduplication (server-side deduplication on3) or high-bandwidth deduplication (server-side deduplication off4).
HPE StoreOnce Catalyst delivers a single integrated enterprise-wide deduplication algorithm. It allows the seamless movement of deduplicated data across the enterprise to other StoreOnce Catalyst systems.
For more detailed information regarding which Catalyst features are supported by each backup software or application, see the Catalyst Apps Support section under StoreOnce Backup Systems in the HPE Data Agile BURA Compatibility Matrix.
3. By specifying the Primary Transfer Policy as Low-Bandwidth on the Catalyst store defined on the StoreOnce appliance, the backup application's integrated Catalyst plug-in will perform deduplication at the backup server before backup data is sent to the StoreOnce appliance.
4. By specifying the Primary Transfer Policy as High-Bandwidth on the Catalyst store defined on the StoreOnce appliance, the backup server does not deduplicate the data; data deduplication occurs at the StoreOnce appliance.
Preparing SAN switches and hosts for failover with HPE StoreEver Tape Libraries
Hewlett Packard Enterprise provides high-availability failover features for HPE StoreEver ESL G3 Tape Libraries and the HPE StoreEver MSL2024, MSL4048, MSL6480, MSL8048, and MSL8096 Tape Libraries with HPE StoreEver LTO-5 Ultrium and later generation FC tape drives. Failover features are not supported on the HPE StoreEver EML and ESL E-Series Tape Libraries.
Basic failover
• Supported on HPE StoreEver LTO-5 and LTO-6 Ultrium FC tape drives, as data path failover requires a dual-ported drive.
• Supported by a combination of tape drive and library firmware features to create a new FC path to a drive or library if the original path is lost.
• Most applications recognize the new path, and some applications will automatically retry commands after the original path is lost. Some applications might require user intervention to begin using the new path.
• Available for the HPE StoreEver MSL2024, MSL4048, MSL6480, MSL8048, MSL8096, and StoreEver ESL G3 Tape Libraries.
Advanced failover
• Supported only on HPE StoreEver LTO-6 Ultrium FC tape drives.
• Requires host driver support, in addition to tape drive and library firmware features, to manage multiple paths across multiple SANs, present a single drive or library path to applications, and automatically transfer commands to the new path if the original path is lost.
• The transfer to the failover path is invisible to most applications, avoiding the need for user intervention.
• Available for the HPE StoreEver ESL G3 and StoreEver MSL6480 Tape Libraries.
• Not available for the StoreEver 1/8 G2 Tape Autoloader, nor the StoreEver MSL2024, MSL4048, MSL8048, or MSL8096 Tape Libraries.
LTO-7 failover
• Supported on LTO-7 and later generation FC tape drives.
• Requires host driver support, in addition to tape drive and library firmware features, to manage multiple paths across multiple SANs, present a single drive or library path to applications, and automatically transfer commands to the new path if the original path is lost.
• The transfer to the failover path is invisible to most applications, avoiding the need for user intervention.
• Available only for ESL G3 Tape Libraries; MSL6480 support is expected in 2016.
Prerequisites for using basic data and control path failover
• The library drive ports must be attached to a FC SAN that supports N_Port ID Virtualization (NPIV), and NPIV must be enabled (most recent Brocade switch firmware versions have NPIV enabled by default). Refer to the vendor documentation for your switch regarding commands to verify whether NPIV is enabled.
– To enable or verify NPIV on a Brocade switch running Fabric OS version 6 or newer using the Brocade Web Tools GUI, click on the Port Admin tab and select the FC or FCoE port you want to configure. From the context tabs at the top of the Web Tools GUI, select View, then choose Advanced. For the selected port, under the General tab you should see all of the details for the port, including NPIV Enabled with a value of true. There should also be an NPIV tab with a drop-down list of Enable, Disable, and Max Login; select Enable if NPIV was not already enabled.
– While all current Cisco switches support NPIV, most do not have NPIV enabled by default. The Cisco MDS 9148 may disable NPIV when power cycled. To enable NPIV on a Cisco switch, use Cisco Device Manager > Admin > Feature Control, or use the Cisco CLI commands to show NPIV status and enable NPIV.
• StoreEver Ultrium tape drives with an 8 Gb connection need the fill word set to arb(ff); with 4 Gb connections, set the fill word to idle. Refer to vendor documentation for your switch regarding commands to set the fill word for a single port. For a Brocade switch running Fabric OS version 6 or newer, the following command can be used to verify the configuration, including the fill word, for port 27:
portcfgshow 27
To set the fill word for port 27 to arb(ff) using the numeric mode notation, run the following command:5
portcfgfillword 27 3
To set the fill word for port 27 to idle using the numeric mode notation, run the following command:
portcfgfillword 27 0
Refer to the section titled "Hardware-specific requirements for basic failover" in the HPE StoreEver Tape Libraries Failover User Guide for Brocade and Cisco firmware recommendations.
• The drive port FC topology must be in Fabric mode, and the switch side must be set to F-port or Auto Sense.
• The host FC port must have a physical path to both the first port and the secondary (passive) port on the FC drive.
• For basic data path failover with port zoning, the host FC port and both FC ports on the drive need to be within the same zone for failover to work.
• For basic data path failover with World Wide Port Name (WWPN) zoning, the host FC WWPN and a single FC port on the drive need to be in the zone.
• For basic control path failover with port zoning, the host FC ports and the FC ports on both the active and secondary drives chosen for basic control path failover need to be in the same zone.
• For basic control path failover with WWPN zoning, the host FC WWPN and the basic control path failover WWPN assigned to the library must be in the same zone. The library WWPN is not the same as the WWPN of the drive that is hosting the library.
• Hosts connecting to the library may need to be rebooted if the operating system does not support dynamic device detection.
• Applications on hosts may need to be reconfigured to recognize the new library World Wide Name (WWN).
Prerequisites for using LTO-7 failover or advanced data and control path failover
• For LTO-7 failover, when using two FC host bus adapters in a server, both FC HBAs must be from the same manufacturer. The LTO-7 failover driver does not work correctly if the HBAs are different.
• For advanced data path failover and LTO-7 failover, the host must have a physical path to both the first port and the secondary port on the FC drive. For full failover capabilities, the two drive FC ports should be connected to different switches, and the host FC ports should also be connected to the same two switches.
• All drive ports must be zoned in the respective switches.
For detailed information on using failover with HPE StoreEver Tape, refer to the HPE StoreEver Tape Libraries Failover User Guide.
5. Numeric mode 3 attempts hardware arbff-arbff (mode 1) first. If the attempt fails to go into active state, this command executes software idle-arb (mode 2). Mode 3 is preferable to modes 1 and 2 as it captures more cases.
Native backup commands
Native backup commands (see table 2) are limited in their ability to handle complicated backups and restores in multi-host SANs. They are not guaranteed to provide robust error handling or to increase performance throughput. Use of these commands in a user-developed script is not recommended with HPE StoreEver Tape Libraries or HPE StoreOnce and HPE StoreAll disk-based backup solutions in a shared storage environment.
Caution: Native backup commands do not support SCSI reserve/release; therefore, using these commands or scripts during backup or restore operations could result in data loss in an environment where the devices used for backups are shared.
Table 2. Supported native commands

Supported utilities    HPE-UX  Solaris  AIX  Linux  Windows
Tape drive commands
tar                    Yes     Yes      Yes  Yes    No
dd (dump)              Yes     Yes      Yes  Yes    No
pax                    Yes     Yes      Yes  Yes    No
mt                     Yes     Yes      Yes  Yes    No
make_tape_recovery     Yes     No       No   No     No
Library and auto-changer commands
mc                     Yes     No       No   No     No
mtx                    No      No       No   Yes    No
Linear Tape File System
The Linear Tape File System (LTFS) makes tape self-describing, file-based, and easy to use, while allowing users to use standard file operations to access, manage, and share files on tape with an interface that behaves like a hard disk. In addition, LTFS provides the ability to share data across platforms, as you would with a USB drive or memory stick. LTFS is currently supported on Windows, Mac, and Linux. HPE StoreOpen Standalone and HPE StoreOpen Automation are sets of utilities that provide easy installation, configuration, and management of a tape drive or library for use with LTFS. To use HPE StoreOpen, simply connect your tape drive or tape library to a supported host, following the information noted in the respective sections within this guide, prior to installation of the HPE StoreOpen software. Information and download links for native LTFS drivers, source code, HPE StoreOpen Standalone, and HPE StoreOpen Automation can be found at hpe.com/storage/LTFS.
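A typical standalone LTFS workflow can be sketched as follows. This is a sketch only, assuming a Linux host with HPE StoreOpen/LTFS installed and a drive at the illustrative device path /dev/sg3; the steps are printed for review rather than executed, and the exact device path and options will differ per host.

```shell
# Sketch: format, mount, and use an LTFS cartridge as an ordinary file system.
device=/dev/sg3       # assumption: illustrative tape drive device path
mountpoint=/mnt/ltfs

steps=$(printf '%s\n' \
    "mkltfs -d $device                     # format the cartridge for LTFS" \
    "mkdir -p $mountpoint" \
    "ltfs -o devname=$device $mountpoint   # mount the tape as a file system" \
    "cp archive.tar $mountpoint/           # standard file operations now work" \
    "umount $mountpoint")
echo "$steps"
```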
Software utilities that may disrupt solution connectivity
Software utilities common to SAN environments can interfere with backup and restore operations. These utilities include system management agents, monitoring software, and tape drive and system configuration utilities. A list of known software utilities and the operating systems on which they are found is shown in table 3.
Caution: Use of these software utilities during backup or restore operations could result in data loss.
Table 3. HBA and software utilities

HBA configuration utilities:
• Emulex OneCommand Manager (OCM)/HBAnyware (Windows, Linux, Solaris)
• QLogic QConvergeConsole (QCC) (Windows, Linux, Solaris)
• QLogic Host Connectivity Manager (HCM) (Windows, Linux)
• Broadcom Advanced Control Suite 3 (BACS3) (Windows, Linux)

Other software utilities:
• HPE Library and Tape Tools utility (Windows, HPE-UX, Linux, Solaris)
• HPE Systems Insight Manager (SIM) management agents (Windows)
• System Administration Manager (SAM) (HPE-UX 11.23)
• System Management Homepage (SMH) (HPE-UX 11.31)
• SCSI Generic (SG) commands (Linux)
• System Management Interface Tool (SMIT) (AIX)
• Windows Server® backup (Windows)
• Removable Storage Manager (RSM)6 (Windows)
FC/FCoE switch zoning recommendations
Due to the complexities of multi-hosting tape devices on SANs, Hewlett Packard Enterprise strongly recommends using switch zoning tools to keep the backup, restore, and archive environment simple and less susceptible to the effects of "chatty," changing, or problematic SANs. Zoning provides a way for servers, disk arrays, and tape controllers to see, and use, only the hosts and targets they need.
The benefits of zoning include, but are not limited to:
• The potential to greatly reduce target and logical unit number (LUN) shifting
• Reducing stress on backup devices from polling agents
• Reducing the time it takes to debug and resolve anomalies in the backup, restore, and archive environment
• Reducing the potential for conflict with untested third-party products
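As an illustration only, zoning by HBA port on a Brocade Fabric OS switch might look like the following sketch; the alias names and WWPNs are invented, and the exact syntax varies by switch vendor and firmware release.

```
# Hypothetical Brocade Fabric OS session; WWPNs and names are examples only.
alicreate "host1_hba0", "10:00:00:00:c9:12:34:56"    # server HBA port WWPN
alicreate "tape_drv1", "50:01:10:a0:00:ab:cd:ef"     # tape drive port WWPN
zonecreate "z_host1_tape", "host1_hba0; tape_drv1"   # one zone per HBA port
cfgadd "prod_cfg", "z_host1_tape"                    # add zone to the config
cfgenable "prod_cfg"                                 # activate the new config
```

Because each zone pairs one HBA port with only the targets it uses, device discovery stays small and a change on one host does not disturb the others.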
6 Removable Storage Manager is no longer available as of Windows 7 and Windows Server 2008 R2.
Zoning may not always be required for configurations that are small or simple (i.e., a single switch or single inter-switch link (ISL)). Zoning can be helpful in larger SANs for simplifying device discovery and reducing chatter between devices. Hewlett Packard Enterprise recommends the following for determining how and when to use zoning:
• Use zoning by HBA port. Zoning by HBA port is implemented by creating a specific zone for each server or host by World Wide Port Name (WWPN) and adding only those storage elements to be utilized by that host. Zoning by HBA port prevents a server from detecting any other devices or servers on the SAN, and it simplifies the device discovery process.
• Disk and tape on the same HBAs is supported. For larger SAN environments, it is recommended to also add storage-centric zones for disk and backup targets. This type of zoning is done by adding overlapping zones with disk and backup targets separated. See figure 1 and figure 2 below.
• FC zoning can be implemented using physical switch port numbers, WWN IDs, or user-defined switch aliases. It is important to note that physical ports and aliases can change due to recabling or switch configuration restores, but WWN IDs do not. Hewlett Packard Enterprise recommends zoning using WWN IDs.
The figures below represent example configurations but are not exhaustive.
Figure 1. Storage-centric zoning, same HBA port (overlapping zones)
Figure 2. Storage-centric zoning, redundant paths. Also applies to dual-port HBAs and tape drives.
Configuration and operating system details
Windows Server
Windows Server 2008, Windows Server 2008 R2, Windows Server 2012, and Windows Server 2012 R2
Installing the HBA device driver
All HPE ProLiant server software, firmware, and drivers for Windows servers can be updated using the latest HPE Service Pack for ProLiant (SPP) from the HPE support website:
1. From an Internet browser, go to hpe.com.
2. Click on Support.
3. Under Product Support, click HPE Servers, Storage and Networking.
4. In the Enter product name or number box, enter Service Pack for ProLiant.
5. Click Get drivers, software & firmware.
6. Select the Windows Server operating system version that is installed on the ProLiant system that will be updated.
7. Expand Application (Entitlement Required) – System Management, then select the HPE Service Pack for ProLiant (American, International) hyperlink.
8. Below the details for the software you should notice several tabs. Select the Release Notes tab, then scroll to verify whether there are any Upgrade Requirements or to view additional information.
9. Select the Installation Instructions tab to verify how to install the SPP. Be sure to copy or save the installation instructions.
Note: A hyperlink to the HPE Service Pack for ProLiant Release Notes is provided within the Release Notes tab. The HPE Service Pack for ProLiant Release Notes provide detailed instructions regarding the SPP, including a summary of changes, compatibility details for migrating from an older version of the SPP, supported operating systems, requirements, component prerequisites, deployment instructions, and known limitations.
10. Click on the Obtain software hyperlink above the various tabs to download the Service Pack for ProLiant to your server.
Note: To download the HPE Service Pack for ProLiant, you must have:
1. An HPE Passport account (a sign-in link is provided).
2. Either a warranty, HPE Care Pack, or support agreement linked to your HPE Support Center profile.
Click on the various links that are provided for more information on how warranties, HPE Care Packs, and support agreements enable access to select downloads or site functions.
11. Booting your Windows Server to the SPP (offline mode) will allow you to upgrade firmware for any or all components that are flagged as requiring an upgrade, while booting to the operating system and then running the SPP (online mode) will allow you to install any or all Hewlett Packard Enterprise related drivers and software packages.
Note: Please refer to the HPE Service Pack for ProLiant Release Notes, which are referenced above, if any issues are encountered when installing the SPP. Specifically, review the sections Deployment Instructions and Components Changes.
12. A reboot might be required following the SPP installation.
To manually install the latest HPE-supported Brocade, Emulex, or QLogic driver kit from the HPE support website:
1. From an Internet browser, go to hpe.com.
2. Click on Support.
3. Under Product Support, click HPE Servers, Storage and Networking.
4. In the Enter product name or number box, enter your HBA model (for example, AP770A, AJ763A, AJ764A, CN1000Q, SN1000Q, CN1100E, SN1100E, or NC553m) and click Go.
5. Click Get drivers, software & firmware.
6. Select the Windows Server operating system version that is installed on the ProLiant system in which the HBA is installed.
7. Expand Driver – Storage Fibre Channel and click on the appropriate driver hyperlink (if more than one version of the driver is listed, verify the latest supported version listed in the latest HPE Data Agile BURA Compatibility Matrix).
8. Below the details for the driver you should notice several tabs. Select the Release Notes tab, then scroll to verify whether there are any Upgrade Requirements, Supported Devices and Features, or to view additional information.
9. Select the Installation Instructions tab to verify how to install the driver. Be sure to copy or save the installation instructions.
10. Click on the Download tab to copy the file to your server.
11. A reboot might be required following the driver installation.
Installing and configuring Microsoft iSCSI Initiator
Microsoft iSCSI Initiator is installed natively on Windows Server 2008, Windows Server 2008 R2, Windows Server 2012, and Windows Server 2012 R2. On these operating systems, no installation steps are required.
To connect to an iSCSI target device by using Quick Connect:
1. Click Start, type iSCSI in Start Search, and then, under Programs, click on iSCSI Initiator.
2. On the User Account Control page, click Continue.
3. If this is the first time that you have launched Microsoft iSCSI Initiator, you receive a prompt that says the Microsoft iSCSI service is not running. You must start the service for Microsoft iSCSI Initiator to run correctly. Click on Yes to start the service. The Microsoft iSCSI Initiator Properties dialog box opens, and the Targets tab is displayed.
4. On the Targets tab, type the name or the IP address of the target device in the Quick Connect text box, and then click Quick Connect. The Quick Connect dialog box is displayed.
5. If multiple targets are available at the target portal that is specified, a list is displayed. Click the desired target, and then click Connect.
6. Click Done.
To connect to an iSCSI target by using advanced settings:
1. Click Start, type iSCSI in Start Search, and then, under Programs, click on iSCSI Initiator.
2. On the User Account Control page, click Continue.
3. If this is the first time that you have launched Microsoft iSCSI Initiator, you receive a prompt that says the Microsoft iSCSI service is not running. You must start the service for Microsoft iSCSI Initiator to run correctly. Click on Yes to start the service. The Microsoft iSCSI Initiator Properties dialog box opens, and the Targets tab is displayed.
4. Click the Discovery tab.
5. To add the target portal, click Discover Portal, and then, in the Discover Portal dialog box, type the IP address or name of the target portal to connect to. If desired, you can also type an alternate TCP port to be used for the connection.
6. Click OK.
Installing the HPE StoreEver Tape drivers
Both the HPE tape and HPE changer drivers for Windows must be installed before the advanced path failover drivers are installed. The tape and changer drivers bundle can be downloaded and then installed as follows:
1. Go to hpe.com/storage/tapecompatibility.
2. Under Tape tools, select HPE StoreEver Tape Drivers.
3. A webpage will open with RECOMMENDED HPE StoreEver Tape Drivers for Windows displayed.
4. Below the details for the driver you should notice several tabs. Select the Release Notes tab, then scroll to verify whether there are any Upgrade Requirements or to view additional information.
5. Select the Installation Instructions tab to verify how to install the driver. Be sure to copy or save the installation instructions.
6. Click on the Download tab, then save the file.
7. Follow the installation instructions from step 5 to install the HPE tape and HPE changer drivers.
8. After installation of the tape and changer drivers, use Windows Device Manager to confirm that all of the configured paths are accessible to the operating system. If the expected number of paths are not available, check the host and SAN configuration. After all of the expected paths are available to the host, the advanced path failover drivers can be installed.
Installing the HPE StoreEver Tape advanced path failover drivers
Windows (2008 R2, 2012, and 2012 R2)
1. Go to hpe.com/support/storage.
2. Select Tape Storage.
3. Click Enterprise Class Tape Libraries (for ESL G3) or Tape Libraries (for MSL6480).
4. Click HPE StoreEver ESL G3 Tape Libraries or HPE StoreEver MSL6480 Tape Library.
5. With the Download options tab selected, click Get drivers, software & firmware.
6. For the ESL G3, select your product. For MSL6480, skip to the next step.
7. Under Operating systems, select OS Independent.
8. Click Driver – Storage Tape.
9. Click Obtain software for the HPE StoreEver High Availability Failover Driver for your Windows operating system.
10. Click Select to continue. An HPE Passport account is required.
11. After logging in using your HPE Passport, complete the required fields, then read and accept the software license agreement. Click Next.
12. On the following page, select the Download tab for the HPE StoreEver High Availability Failover Driver for the version of Windows that is installed on your server.
13. If you saved the file, double-click on the file to launch the installer for the Tape Upper Bus Storage Filter driver.
14. Restart when requested.
15. After the system restarts, the installer will continue installing the Tape Multi-Path Intermediate Class driver. The installation process creates the directory C:\Program Files\Hewlett-Packard\Failover.
For more detailed information related to installing the HPE StoreEver Tape advanced path failover drivers for Windows or the HPE StoreEver Tape drivers for Windows, refer to the HPE StoreEver Tape Libraries Failover User Guide, specifically the section titled Installing and using Windows advanced path failover drivers.
Windows Server best practices
Persistent binding
Target and LUN shifting can occur with Windows operating systems when disk or tape devices are connected or disconnected, a device is busy during discovery, or a device failure occurs, causing that device to no longer be seen on the SAN. Hewlett Packard Enterprise strongly recommends using zoning by HBA port to resolve target and LUN shifting. Refer to the earlier section FC/FCoE switch zoning recommendations for more on zoning by HBA port.
The method by which the Windows operating system enumerates devices is the cause of most target and LUN shifting. Windows enumerates devices as they are discovered during a scan sequence. They are enumerated with device handles such as TAPE0, TAPE1, and so on. The Windows device scan sequence goes in the order of bus, target, and LUN:
• Bus is the HBA PCI slot
• Target is representative of a WWN
• LUN is representative of a device behind the WWN
The order of discovery is:
• The lowest bus (bus 0)
• Target 0 on bus 0
• The LUNs on target 0
• Target 1, and so on, until all targets connected to that HBA are discovered
The process then continues on to the next HBA and its targets and LUNs. A common cause for device shifting is a busy tape device. A busy tape device cannot respond in time for Windows to enumerate it. The device is essentially skipped in the enumeration sequence, thus shifting all other device numbers.
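To make the shifting behavior concrete, the toy script below (illustrative only; the bus:target:LUN triples are invented) mimics the bus, target, LUN enumeration order and shows how skipping one busy device renumbers every later handle:

```shell
# Mimic Windows' bus -> target -> LUN enumeration with a made-up device list.
enumerate() {
  skip=$1   # bus:target:lun of a device to treat as busy ("none" to keep all)
  n=0
  for dev in 0:0:0 0:1:0 0:1:1 1:0:0; do   # already in scan order
    if [ "$dev" != "$skip" ]; then
      echo "TAPE$n <- $dev"
      n=$((n+1))
    fi
  done
}
enumerate none     # normal scan: device 0:1:1 is handle TAPE2
echo ---
enumerate 0:1:0    # 0:1:0 busy and skipped: 0:1:1 shifts down to TAPE1
```

The second run shows why data protection software that stored "TAPE2" for a device can silently end up addressing a different drive after a rescan.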
Note Emulex OneCommand Manager Application Kit and QLogic QConvergeConsole Utility both have proprietary persistent binding options Review the appropriate vendor documentation for details
Data protection and archiving software can also communicate with a tape device by using the Windows device name. As noted, the device name may shift and cause a problem for the data protection and archiving software. Some data protection and archiving software monitors for this condition and will adjust accordingly. Other data protection and archiving software must wait for a server reboot and subsequently scan for devices, or alternatively must be manually reconfigured to match the current device list. If your data protection and archiving software requires persistent device mapping, use the software's device configuration wizard to ensure proper configuration.
Red Hat and SUSE Linux Server
RHEL 5 Update 11 (AS/ES/WS), RHEL 6 Update 6 (AS/ES/WS), RHEL 7 (AS/ES/WS), SLES 11 SP3 (x86/x64), SLES 12
Note: Hewlett Packard Enterprise recommends installing the kernel development option (source code) when installing any Linux server. Availability of source code ensures the ability to install additional device support software that will be compiled into the kernel.
Installing the HBA drivers
All HPE ProLiant server software, firmware, and drivers can be updated using the latest SPP from the HPE support website:
1. From an Internet browser, go to hpe.com.
2. Click on Support.
3. Under Product Support, click HPE Servers, Storage and Networking.
4. In the Enter product name or number box, enter Service Pack for ProLiant.
5. Click Get drivers, software & firmware.
6. Select the Linux Server operating system version that is installed on the ProLiant system that will be updated.
7. Under Application (Entitlement Required) – System Management, select the HPE Service Pack for ProLiant (American, International) hyperlink.
8. Below the details for the software you should notice several tabs. Select the Release Notes tab, then scroll to verify whether there are any Upgrade Requirements or to view additional information.
9. Select the Installation Instructions tab to verify how to install the SPP. Be sure to copy or save the installation instructions.
Note: A hyperlink to the HPE Service Pack for ProLiant Release Notes is provided within the Release Notes tab. The HPE Service Pack for ProLiant Release Notes provide detailed instructions regarding the SPP, including a summary of changes, compatibility details for migrating from an older version of the SPP, supported operating systems, requirements, component prerequisites, deployment options, and known limitations.
10. Click on the Obtain software hyperlink above the various tabs to download the Service Pack for ProLiant to your server.
Note: To download the HPE Service Pack for ProLiant, you must have:
1. An HPE Passport account (a sign-in link is provided).
2. Either a warranty, HPE Care Pack, or support agreement linked to your HPE Support Center profile.
Click on the various links that are provided for more information on how warranties, HPE Care Packs, and support agreements enable access to select downloads or site functions.
11. Booting your server to the SPP (offline mode) will allow you to upgrade firmware for any or all components that are flagged as requiring an upgrade, while booting to the OS and then running the SPP (online mode) will allow you to install any or all Hewlett Packard Enterprise related drivers and software packages.
Note: Please refer to the HPE Service Pack for ProLiant Release Notes, which are referenced above, if any issues are encountered when installing the SPP. Specifically, review the sections Deployment Instructions and Components Changes.
12. A reboot might be required following the SPP installation.
To manually install the latest HPE-supported Brocade, Emulex, or QLogic driver kit from the HPE support website:
1. From an Internet browser, go to hpe.com.
2. Click on Support.
3. Under Product Support, click HPE Servers, Storage and Networking.
4. In the Enter product name or number box, enter your HBA model (for example, AP770A, AJ763A, AJ764A, CN1000Q, SN1000Q, CN1100E, SN1100E, or NC553m) and click Go.
5. Click Get drivers, software & firmware.
6. Select the Linux Server operating system version that is installed on the ProLiant system in which the HBA is installed.
7. Expand Driver – Storage Fibre Channel, then click on the appropriate driver hyperlink (if more than one version of the driver is listed, verify the latest supported version listed in the latest HPE Data Agile BURA Compatibility Matrix).
8. Below the details for the driver you should notice several tabs. Select the Release Notes tab, then scroll to verify whether there are any Upgrade Requirements or to view additional information.
9. Select the Installation Instructions tab to verify how to install the driver. Be sure to copy or save the installation instructions.
10. Click on the Download tab to copy the file to your server.
11. A reboot might be required following the driver installation.
Installing the Linux OPEN-iSCSI module
You can install and use the iscsi-initiator-utils package (Red Hat®) or the open-iscsi module (SUSE). Download and install either of the packages using your distribution's package manager (yum or YaST, for example). Detailed instructions for iscsiadm can be found in the iscsiadm man pages.
Prior to discovering available iSCSI target devices on an HPE Storage System from a Linux server, the target requires the Linux server iSCSI initiator name. This name is found in the /etc/iscsi/initiatorname.iscsi file.
Once the iSCSI initiator name has been determined, to discover available iSCSI target devices on an HPE Storage System, type the following command, where x.x.x.x is the IP address of the HPE Storage System:
iscsiadm --mode discovery --type sendtargets --portal x.x.x.x
To connect to the HPE Storage System targets, type the following command for each discovered target, where target_name is the name returned by the iscsiadm discovery:
iscsiadm --mode node -T target_name --login --portal x.x.x.x
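The two commands can be combined into a small dry-run script. The portal address and target IQN below are placeholders, and removing the echo lines' indirection (running the commands instead of printing them) performs the real discovery and login, provided open-iscsi is installed:

```shell
# Dry-run sketch of the iscsiadm discovery/login sequence; values are examples.
PORTAL="192.0.2.10"                                   # HPE Storage System IP
discover_cmd="iscsiadm --mode discovery --type sendtargets --portal $PORTAL"
echo "$discover_cmd"                                  # run this to list targets
# For each target name returned by discovery (placeholder IQN shown):
TARGET="iqn.2015-01.com.example:storage.target1"
login_cmd="iscsiadm --mode node -T $TARGET --login --portal $PORTAL"
echo "$login_cmd"                                     # run this to log in
```

Printing the commands first is a convenient way to review the exact portal and target strings before touching a production initiator.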
Storage HBAs with Linux servers
Hewlett Packard Enterprise supports the Linux in-box driver (the driver supplied with the operating system distribution). However, not all hardware configurations support the in-box driver. To determine if your configuration supports the in-box driver, see the HBAs, CNAs, FlexFabric Adapters and Server LOMs Support Matrix: Linux, Citrix, VMware and Windows, which is available on the HPE SPOCK website. You must sign up for an HPE Passport to access SPOCK.
Whether you are using the in-box drivers or the out-of-box drivers, Hewlett Packard Enterprise recommends that you install the HPE Fibre Channel Enablement Kit because it provides additional libraries and configuration utilities to enable HPE Fibre Channel Storage Arrays to work properly with Linux.
Note: If you are using any HPE management applications, you need the HBA API libraries that come with the HPE-fc-enablement RPM.
Downloading and installing the Fibre Channel Enablement Kit for Linux
Note: There has been a change to the enablement kits released after 29 April 2014: they are now vendor-specific kits. Hewlett Packard Enterprise recommends that you uninstall any previous kits prior to installation of the latest version of the enablement kit.
1. From an Internet browser, go to hpe.com.
2. Click on Support.
3. Under Product Support, click HPE Servers, Storage and Networking.
4. In the Enter product name or number box, enter your HBA model (for example, AP770A, AJ763A, AJ764A, CN1000Q, CN1100E, or NC553m) and click Go.
5. Click Get drivers, software & firmware.
6. Select the Linux Server operating system version of the ProLiant system in which the HBA is installed.
7. Select the Software – Storage Controllers – FC HBA hyperlink and click on the HPE Fibre Channel Enablement Kit for Linux (American, International).
8. Below the details for the driver you should notice several tabs. Select the Release Notes tab, then scroll to verify whether there are any Upgrade Requirements or to view additional information.
9. Select the Installation Instructions tab to verify how to install the HPE Fibre Channel Enablement Kit for Linux. Be sure to copy or save the installation instructions.
10. Click on the Download tab to copy the RPM file to your server.
11. Browse to the directory that you downloaded the RPM to.
12. Follow the Installation Instructions that you copied or saved in step 9.
13. A reboot is required after the installation for the updates to take effect and hardware stability to be maintained.
14. Verify that the host has successfully discovered all the expected devices (tape drives, library robotic devices, and disk-based backup devices) using one of the following methods:
– Review the devices listed from running the command cat /proc/scsi/scsi
– Review the output from the sg_inq command (which requires that sg3_utils is installed) for any of the /dev/sg devices listed in the output of the sg_map command. See figure 3 as an example.
Figure 3. Verifying devices using the sg_map and sg_inq commands
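The /proc/scsi/scsi check can also be scripted. The sketch below counts tape drives and changers in a captured listing; the two devices shown are invented sample data, and in practice the here-document would be replaced by the live output of cat /proc/scsi/scsi:

```shell
# Sample /proc/scsi/scsi listing (hypothetical devices) used as stand-in data.
scsi_listing=$(cat <<'EOF'
Host: scsi0 Channel: 00 Id: 00 Lun: 00
  Vendor: HP       Model: Ultrium 6-SCSI   Rev: 354W
  Type:   Sequential-Access                ANSI  SCSI revision: 06
Host: scsi0 Channel: 00 Id: 00 Lun: 01
  Vendor: HP       Model: MSL6480          Rev: 520G
  Type:   Medium Changer                   ANSI  SCSI revision: 05
EOF
)
# Tape drives report type Sequential-Access; library robots, Medium Changer.
tapes=$(printf '%s\n' "$scsi_listing" | grep -c 'Sequential-Access')
changers=$(printf '%s\n' "$scsi_listing" | grep -c 'Medium Changer')
echo "tape drives: $tapes, changers: $changers"
```

Comparing the counts against the number of drives and robots zoned to the host is a quick sanity check before installing the failover drivers.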
Installing the HPE StoreEver Tape advanced path failover drivers
Red Hat Enterprise Linux Server 6.2 (x86_64), 6.3 (x86_64), 6.4 (x86_64), 6.5 (x86_64), and 6.6 (x86_64)
The advanced path failover drivers for Linux replace the normal SCSI Tape and SCSI Generic drivers. For devices that do not support advanced path failover, the drivers pass all SCSI commands through the same code path that is followed when the standard drivers are loaded; commands for devices that do support failover are routed through the new PFO driver.
1. Go to hpe.com/support/storage.
2. Select Tape Storage.
3. Click Enterprise Class Tape Libraries (for ESL G3) or Tape Libraries (for MSL6480).
4. Click HPE StoreEver ESL G3 Tape Libraries or HPE StoreEver MSL6480 Tape Library.
5. In the Download options tab, click Get drivers, software & firmware.
6. For the ESL G3, select your product. For MSL6480, skip to the next step.
7. Under Operating systems, select OS Independent.
8. Expand Driver – Storage Tape, then select the appropriate driver hyperlink.
9. Click Obtain software for the HPE StoreEver High Availability Failover Driver for your operating system.
10. Click on Select. An HPE Passport account (a sign-in link is provided) is required.
11. After logging in using your HPE Passport, complete the required fields, then read and accept the software license agreement. Click Next.
12. On the following page, select the Download tab for the HPE StoreEver High Availability Failover Driver for the version of Red Hat that is installed on your server.
13. To install the drivers, run the following command:
rpm -ivh <filename>.rpm
14. In some cases the server will need to be rebooted to complete the installation. Check the instructions provided by the RPM file output and reboot the server if requested.
15. The driver revision number indicates the build date of the driver and can be viewed by running:
cat /proc/scsi/sg/version
16. You can view the status of a device that is controlled by the failover driver by reading a file in the /sys file system. For example, to see the path status for /dev/sg3:
cat /sys/class/pfo/pfo3/paths
Enabling advanced path failover on a device while the driver is running
If a device has any advanced path failover feature disabled when advanced path failover is enabled, the device will reset itself, removing the old /dev file. When the device comes back up, it will be recognized as an advanced path failover device and will then operate normally as one. It may not have the same /dev file name as before the change.
Disabling advanced path failover on a device while the driver is running
Disabling advanced path failover while a device is running is not recommended, because the paths will not be cleanly removed and then reassociated. If advanced path failover is disabled on any device, the Linux server will need to be rebooted. When possible:
1. Power down the Linux server cleanly.
2. Disable advanced path failover on the device.
3. Boot the Linux server.
For more detailed information related to installing the HPE StoreEver Tape advanced path failover drivers for Red Hat Linux Servers, refer to the HPE StoreEver Tape Libraries Failover User Guide, specifically the section titled Installing and using Linux advanced path failover drivers.
Additional SG device files
In most environments, the default number of SG device files is sufficient to support all of the required devices. In larger SAN environments, if the default number of SG device files is fewer than the combined total of disk and tape devices being allocated to the server, then additional device files need to be created. SG device files are preferable to the standard SCSI tape (st) device files, because the st driver's SCSI timeout values may not be sufficient in length to support some tape operations.
To create additional SG device files, perform the following:
mknod /dev/sgX c 21 X
X signifies the number of the device file that does not already exist. For additional command options, see the mknod man page.
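The mknod step can be wrapped in a small helper that prints (or, with the echo removed and root privileges, creates) a range of sg nodes; the helper name and the range shown are arbitrary:

```shell
# Hypothetical helper: emit mknod commands for /dev/sgX nodes in a range.
# The SG character-device major number on Linux is 21.
make_sg_nodes() {
  first=$1
  last=$2
  for X in $(seq "$first" "$last"); do
    # Remove the leading "echo" (and run as root) to actually create the nodes.
    echo "mknod /dev/sg$X c 21 $X"
  done
}
make_sg_nodes 16 18   # print commands for /dev/sg16 through /dev/sg18
```

Printing the commands first makes it easy to confirm that the new node numbers do not collide with existing /dev/sg entries before creating them.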
Red Hat and SUSE Linux Server best practices
Rewind commands being issued by rebooted Linux hosts
Device discovery that occurs as part of a normal Linux server boot operation can cause a SCSI rewind command to be issued to all attached tape drives. If the data protection and archiving software does not employ SCSI reserve/release and the rewind command is received while the tape drive is busy writing, the result is a corrupted tape header and an unusable piece of backup media.
This issue could manifest itself in several ways:
• A failed verify operation
• A failed restore operation
• The inability to mount a tape and read the tape header
If a backup verification is not completed, the normal backup process might not detect that an issue exists.
Tape devices not discovered and configured across server reboots
Tape drives can disappear from Linux servers after the host reboots when using older versions of the HBA drivers. Adding the line "hp_rescan -a" to /etc/rc.d/rc.local resolves the issue. The hp_rescan utility was previously included and installed with older versions of the HPE Linux FCoE/FC Driver Kit.
Note: The latest versions of the Fibre Channel Enablement for Linux kits no longer include the HPE fibreutils package, which contains the hp_rescan utility. The fibreutils package can be manually downloaded and installed by following steps 1–6 from the section Downloading and installing the Fibre Channel Enablement Kit for Linux. For step 5, select Software – Storage Controllers – FC HBA.
This issue, which affects Red Hat installations and, intermittently, some SUSE Linux installations, is understood to be an issue with the mid-layer SCSI driver and its interaction with SCSI-2 tape automation products. The permanent resolution to this issue is to upgrade to the latest FC driver kit.
Enable iSCSI target devices to remain persistent across system reboots
To enable iSCSI target devices to remain persistent across system reboots, the open-iscsi service must be configured to run at system startup. This can be done by issuing the following commands:
For Red Hat 7 and SUSE 12:
systemctl enable iscsid.service
systemctl restart iscsid.service
For earlier versions of Red Hat:
chkconfig iscsi on
For earlier versions of SUSE:
chkconfig open-iscsi on
To verify that this configuration change has been accepted, run the following command:
For Red Hat 7 and SUSE 12:
systemctl -a | grep iscsi
iscsi-shutdown.service loaded inactive dead Logout off all iSCSI sessions on shutdown
iscsi.service loaded inactive dead Login and scanning of iSCSI devices
iscsid.service loaded active running Open-iSCSI
iscsiuio.service loaded active running iSCSI UserSpace IO driver
iscsid.socket loaded active running Open-iSCSI iscsid Socket
iscsiuio.socket loaded active running Open-iSCSI iscsiuio Socket
For earlier versions of Red Hat:
chkconfig --list iscsi
iscsi 0:off 1:off 2:off 3:on 4:off 5:on 6:off
For SUSE servers:
chkconfig --list open-iscsi
open-iscsi 0:off 1:off 2:off 3:on 4:off 5:on 6:off
LUNs shifting after reboot
The Linux 2.6 kernel and later enhanced the management of attached devices through the introduction of udev. The udev device manager provides users with a persistent naming process for all devices across reboots. For details on how to configure udev, refer to the appropriate Linux distribution documentation.
If your data protection and archiving software requires persistent device mapping, use the software's device configuration wizard to ensure proper configuration.
Recommended changes to queue depth and timeout values
Changes to queue depth and timeout values are recommended when operating HPE StoreEver ESL G3 Tape Libraries and HPE StoreEver MSL Tape Libraries with Linux-based hosts. Recommended changes are as follows:
• Queue depth
The queue depth when operating the MSL6480 or MCB Version 1 ESL G3 libraries should be set to one (1), as there is only one active robot to complete Move Medium commands. With dual-robot MCB Version 2 ESL G3 libraries, the queue depth should be set to two (2), as the library has two active robots that can complete Move Medium commands.
HPE LTO drives are capable of handling command queues of four or five commands, but if hosts continue to send commands past that amount, the drive (or the library being hosted by that drive) will start to return status messages saying that the queue is full and the host should wait 500 ms. If the host doesn't stop sending commands at this point, the delays in returning status for commands can be long enough that the drive appears hung. As such, care should be taken to ensure that the queue depth is the correct length to avoid this scenario, preferably by using the recommended queue depths provided above.
With Linux-based hosts, this command shows the queue depth setting for each generic SCSI device:
find /sys/class/scsi_generic/*/device/queue_depth -exec grep -H "" {} \;
If required, the sg_inq command can help correlate SCSI generic mapping to specific devices:
sg_inq /dev/sg<#>
Note: <#> is the sg number provided in the output from the previous command.
bull Timeouts
Most applications increase the timeout value for motion commands but will continue to rely on the default timeout value for Read Element Status commands This can especially be of concern in larger partitioned libraries where multiple Read Element Status commands to the different partitions are outstanding and the default timeout value is encountered before a response is provided Depending on whether udev rules are in effect or not the default timeout value on Linux-based hosts tends to be either thirty (30) or sixty (60) seconds Given all of the above it is recommended that the default timeout value be changed to twenty minutes with all Linux-based hosts operating HPE LTO drives in HPE StoreEver ESL G3 Tape Libraries and HPE StoreEver MSL Tape libraries in order to allow for multiple commands to complete successfully without hitting the default timeout value
With Linux-based hosts, this command shows the default timeout value currently set for each generic SCSI device:
find /sys/class/scsi_generic/*/device/timeout -exec grep -H "" {} \;
If required, the sg_inq command can help correlate the SCSI generic mapping to specific devices:
sg_inq /dev/sg<n>
Note: <n> is the sg number provided in the output from the previous command.
The detailed procedure for making the recommended changes to the command queuing and default timeout values for Linux-based hosts can be viewed in the following Engineering Advisory: HPE StoreEver ESL G3 Tape Libraries, MSL Tape Libraries, and 1/8 G2 Autoloaders - Recommended Changes to Queue Depth and Timeout Values With Linux-Based Hosts.
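As an illustrative sketch only (not taken from the advisory), the recommended values can be made persistent on Linux hosts with a udev rule. The filename and the vendor match pattern below are assumptions and should be adjusted to match the inquiry strings your drives actually report:

```
# /etc/udev/rules.d/61-hpe-tape.rules (hypothetical filename)
# Set a 20-minute (1200 s) command timeout on HPE tape drives (SCSI type 1)
ACTION=="add", SUBSYSTEM=="scsi", ATTR{vendor}=="HP*", ATTR{type}=="1", ATTR{timeout}="1200"
# Single-robot media changers (SCSI type 8): queue depth 1 and the same timeout
ACTION=="add", SUBSYSTEM=="scsi", ATTR{vendor}=="HP*", ATTR{type}=="8", ATTR{queue_depth}="1", ATTR{timeout}="1200"
```

After saving the rule, run udevadm control --reload and re-trigger or rediscover the devices, then confirm the values with the find commands shown above.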
HPE-UX Server
Installing HBA drivers in the kernel
HPE-UX 11i v2 (11.23 IA-64)
1. The drivers schgr, sctl, and stape must all be installed in the kernel. To see if these drivers are installed, enter the following command:
/usr/sbin/kcmodule schgr sctl stape
The following example shows output from kcmodule where the stape driver is not installed:
Module State Cause
schgr static explicit
sctl static depend
stape unused
If one or more of the above drivers is in the unused state, they must be installed in the kernel. If they are all installed (static state), proceed to the next section, Final host configurations.
2. Use kcmodule to install modules in the kernel. For example, to install the stape module, run the following command:
/usr/sbin/kcmodule stape=static
Enter yes to back up the current kernel configuration file and initiate the new kernel build.
3. Reboot the server to activate the new kernel:
cd /
/usr/bin/shutdown -r now
HPE-UX 11i v3 (11.31 IA-64)
1. The drivers schgr, sctl, stape, eschgr, esctl, and estape must all be installed in the kernel. To see if these drivers are installed, enter the following command:
/usr/sbin/kcmodule sctl esctl schgr eschgr stape estape
The following example shows output from kcmodule where the stape driver is not installed:
Module State Cause
sctl static best
esctl static depend
schgr static best
eschgr static best
stape unused
estape static best
If one or more of the above drivers is in the unused state, they must be installed in the kernel. If they are all installed (static state), proceed to the next section, Final host configurations.
2. Use kcmodule to install modules in the kernel. For example, to install the stape module, use the following command:
/usr/sbin/kcmodule stape=static
Enter yes to back up the current kernel configuration file and initiate the new kernel build.
3. Reboot the server to activate the new kernel:
cd /
/usr/bin/shutdown -r now
Installing the HPE-UX iSCSI Software Initiator
The iSCSI Software Initiator is located at the HPE Software Depot.
1. Go to software.hp.com.
2. Enter iSCSI Software Initiator in the Search Software Depot box located on the upper right side of the website.
3. When the search results show iSCSI Software Initiator, click Select. An HPE Passport account (a sign-in link is provided) is required.
4. After logging in using your HPE Passport, complete the required fields, scroll down, then read and accept the software license agreement for the order. Click Next.
5. Under Documentation, click the Download tab for the Installation Instructions to download instructions for using the Software Distributor tool to install the iSCSI Software Initiator.
6. Under Software, click the Download tab for the iSCSI Software Initiator version that you would like to download.
7. After installing the iSCSI Software Initiator and rebooting, you can verify that the installation was successful by running the following command:
swlist iSCSI-00
If the HPE-UX iSCSI Software Initiator is installed correctly, the output will be:
HPE-UX 11.23:
Initializing...
Contacting target "localhost"...
Target: localhost
iSCSI-00 B.11.23.03e HPE-UX iSCSI Software Initiator
iSCSI-00.ISCSI-SWD B.11.23.03e HPE-UX iSCSI Software Initiator
HPE-UX 11.31:
Initializing...
Contacting target "localhost"...
Target: localhost
iSCSI-00 B.11.31.01 HPE-UX iSCSI Software Initiator
iSCSI-00.ISCSI-SWD B.11.31.01 HPE-UX iSCSI Software Initiator
Final host configurations
1. Run ioscan to verify that the host detects the tape devices:
ioscan
For HPE-UX 11.23 legacy device special files (DSFs) or persistent DSFs, run the following commands:
ioscan -fnkC tape
ioscan -fnkC autoch
2. For HPE-UX 11.31 persistent DSFs, run the following commands:
ioscan -fnNkC tape
ioscan -fnNkC autoch
Note: Some data protection and archiving software might not currently support HPE-UX 11.31 persistent DSFs for tape. Review your data protection and archiving software documentation for more information.
3. To verify that the host detects iSCSI devices, issue the ioscan command as follows for HPE-UX 11.23:
ioscan -fnC iscsi
Issue the ioscan command as follows for HPE-UX 11.31:
ioscan -fnNC iscsi
If the software is installed correctly, the generated output will look similar to this:
Class I HW Path Driver SW State HW Type Description
=====================================================================
iscsi 0 255/0 iscsi CLAIMED VIRTBUS iSCSI Virtual Node
4. If no device files have been installed, enter the following commands:
insf -C tape -e
insf -C autoch -e
The command line tool for configuring the HPE-UX iSCSI Software Initiator is iscsiutil. Detailed instructions for iscsiutil can be found in the iscsiutil man pages. If using iscsiutil to configure the HPE-UX iSCSI Software Initiator, add the path for iscsiutil and the other iSCSI executables to the root path:
PATH=$PATH:/opt/iscsi/bin
You should now be able to find the iSCSI initiator node for the HPE-UX host:
iscsiutil -l
To discover available iSCSI target devices on an HPE Storage System, type the following command, where x.x.x.x is the IP address of the HPE Storage System:
iscsiutil -aI x.x.x.x
Installing the HPE StoreEver Tape advanced path failover drivers for HPE-UX 11.31
Advanced path failover for HPE-UX is implemented by updating the HPE-UX drivers to support advanced path failover with LTO-6 tape drives. The drivers function as both failover and non-failover drivers.
The updated drivers are:
• HPE-UX tape driver (estape): used for data path failover
• HPE-UX media changer driver (eschgr): used for control path failover
• HPE-UX SCSI stack driver (esctl): used for data path and control path failover
To download and install the HPE StoreEver Tape advanced path failover drivers for HPE-UX 11.31:
1. Get the latest HPE-UX patches from h20566.www2.hp.com/portal/site/hpsc/patch/home.
Note: To access and download HPE-UX patches, you must have:
1. An HPE Passport account (a sign-in link is provided).
2. An active HPE support agreement linked to your HPE Support Center profile. The active Hewlett Packard Enterprise support agreement must:
– Cover the specific HPE operating systems linked to your HPE Support Center user profile
– Include software updates or previous version support privileges
Links are provided to view your current patch access privileges or to contact Hewlett Packard Enterprise.
2. To locate the patches, search for estape, eschgr, and esctl (or the patch number), and then look at the Prepby field to see if there is a superseding patch.
3. To install the advanced path failover drivers, use the standard HPE-UX kernel patch installation process to install the following patches on the HPE-UX Servers running HPE-UX 11.31:
– HPE-UX tape driver patch (estape): PHKL_43680 or superseding patch
– HPE-UX media changer driver patch (eschgr): PHKL_43681 or superseding patch
– HPE-UX SCSI stack (mass storage stack) driver patch (esctl): PHKL_43819 or superseding patch
4. The server will automatically reboot as part of the installation process.
You can use ioscan to view the tape and library (media changer) devices connected to the HPE-UX Server. The device special file (DSF) is listed as the last item in the description:
ioscan -knNfC tape
ioscan -knNfC autoch
Finding the lockdown path
The load-balance policy used to route data on multiple paths to a tape drive or library is called the "path_lockdown" policy. Use the scsimgr get_info command to see the current lockdown path for a library. For example:
scsimgr get_info -D /dev/rchgr/autoch35
STATUS INFORMATION FOR LUN /dev/rchgr/autoch35
...
LUN path used when policy is path_lockdown = 0/0/0/9/0/0/1.0x50014382c6e4f009.0x1000000000000
Use scsimgr get_attr to see the lockdown path attribute for a tape drive:
scsimgr get_attr -D /dev/rtape/tape28_BEST
SCSI ATTRIBUTES FOR LUN /dev/rtape/tape28_BEST
name = lpt_lockdown
current = 0/0/0/9/0/0/0.0x100000e00222a6c1.0x2000000000000
default =
saved =
For additional information, see the HPE-UX man pages: scsimgr(1M), ioscan(1M), mknod(2), mksf(1M), rmsf(1M).
Troubleshooting advanced path failover for HPE-UX 11.31
Advanced path failover errors are logged in the /var/adm/syslog/syslog.log file as part of the default SCSI I/O tracing function of HPE-UX. You can use standard file viewing commands, including cat, vi, dmesg, and tail -f, to view syslog.log.
Enabling or disabling advanced path failover for HPE-UX 11.31
Advanced path failover is disabled by default. When advanced path failover is disabled, the driver operates as if the device is not capable of using the advanced path failover feature.
When advanced failover is enabled for the library or tape drive, the device resets itself and must be opened using the device special file before the driver will recognize it as an advanced path failover device and use the failover features of the driver. Opening the device is generally done by the host applications.
You can enable or disable advanced path failover using the library web-based interface. For instructions, refer to the following sections of the HPE StoreEver Tape Libraries Failover User Guide:
• Enabling control path failover, under Configuring failover for the HPE StoreEver ESL G3 Tape Libraries
• Enabling data path failover, under Configuring failover for HPE StoreEver MSL6480 Tape Libraries
When advanced path failover is disabled, the passive control paths to the library will go into an error state (NO_HW) in the ioscan(1M) command output. These stale entries do not affect the function of the library. To clear these errors so the device can be accessed using its DSF:
1. On the HPE-UX host, run rmsf -H on the lunpath hardware paths that are in NO_HW state. For example:
rmsf -H 0/4/0/0/0/1.0x50014380023560d4.0x1000000000000
2. Run ioscan -kfNH <HBA path>. For example:
ioscan -kfNH 0/4/0/0/0
Hewlett Packard Enterprise recommends enabling or disabling advanced path failover only when the library is not opened by any applications. If advanced path failover is disabled while an application is accessing the library, all of the library's lunpaths will go offline and I/O requests to the library will fail.
For more detailed information about installing the HPE StoreEver Tape advanced path failover drivers for HPE-UX 11.31 Servers, refer to the HPE StoreEver Tape Libraries Failover User Guide, specifically the section titled Installing and using HPE-UX advanced path failover drivers.
HPE-UX Server best practices
HPE-UX 11.31 can experience poor I/O performance on VxFS file systems due to memory blocking during high system memory usage. The HPE-UX 11.31 kernel subsystems and file I/O data cache can consume up to 90 percent of system memory during normal operation. When a heavy file I/O application, such as data protection and archiving software, starts, memory usage can reach close to 100 percent. Under such conditions, if VxFS attempts to allocate additional memory for inode caching, the result can be memory blocking and subsequent poor file I/O performance. In extreme conditions, this scenario can cause data protection and archiving software to time out during file system reads, which could result in backup job failures.
Poor I/O performance resolution
To avoid backup job failures due to memory blocking, modify the kernel tunable parameter vx_ninode. The vx_ninode parameter determines the number of inodes in the inode table used by VxFS for caching. By default, the size of the inode cache is decided (auto-tuned) at boot time by VxFS, depending on the amount of physical memory in the machine. When modifying the value of vx_ninode, HPE recommends the following:
Table 4. Tuning vx_ninode
PHYSICAL MEMORY OR KERNEL AVAILABLE MEMORY    VXFS INODE CACHE (NUMBER OF INODES)
1 GB                                          16384
2 GB                                          32768
3 GB                                          65536
> 3 GB                                        131072
To determine the current value of vx_ninode, run the following at the shell prompt:
/usr/sbin/kctune vx_ninode
To set vx_ninode to 32768, run the following command at the shell prompt:
/usr/sbin/kctune vx_ninode=32768
Note: The kernel tunable parameters filecache_min and filecache_max control the amount of physical memory that can be used for caching file data during system I/O operations. By default, these parameters are automatically determined by the system to better balance memory usage among file system I/O intensive processes and other types of processes. The values of these parameters can be lowered to allow a larger percentage of memory to be used for purposes other than file system I/O caching. Determining whether or not to modify these parameters depends on the nature of the applications running on the system.
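As a sketch, the current bounds can be inspected, and new ones set, with kctune; the sizes below are purely illustrative assumptions, not HPE-recommended values:

```
# Display the current file cache bounds (values are system-dependent)
/usr/sbin/kctune filecache_min filecache_max
# Illustrative only: pin the file cache between 1 GB and 4 GB
/usr/sbin/kctune filecache_min=1073741824
/usr/sbin/kctune filecache_max=4294967296
```

Review the behavior of the applications on the system before committing to fixed values, since fixing both bounds disables the automatic balancing described above.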
HPE-UX 11.23: Disabling rewind-on-close devices with st_san_safe
Turning on the HPE-UX 11.23 kernel tunable parameter st_san_safe disables tape DSFs that are rewind-on-close. This prevents utilities such as mt from rewinding a tape that is in use by another utility.
Some applications or utilities require rewind-on-close DSFs (for example, the frecover utility that comes with HPE-UX). In this case, disabling rewind-on-close devices renders the utility unusable. Most data protection and archiving software, such as HPE Data Protector, can be configured to use SCSI reserve/release, which protects them from rogue rewinds by other utilities. The requirements of your data protection and archiving environment should be considered when determining whether or not to enable st_san_safe.
To determine if rewind-on-close devices are currently disabled, enter:
/usr/sbin/kctune st_san_safe
If the value of st_san_safe is 1, rewind-on-close devices are disabled. If the value is 0, rewind-on-close devices are enabled. To disable rewind-on-close devices, enter:
/usr/sbin/kctune st_san_safe=1
Oracle Solaris Server
Solaris 10 Update 11 (SPARC), Solaris 10 Update 11 (x86/x64), Solaris 11.2 (SPARC), Solaris 11.2 (x64)
How to enable the iSCSI Software Initiator
1. For Solaris 10 Update 11 (SPARC) and Solaris 11.2 (SPARC), enable the iSCSI services using the command:
svcadm enable network/iscsi/initiator
2. Verify the iSCSI services are running:
svcs -a | grep 'iscsi/initiator'
online 10:10:28 svc:/network/iscsi/initiator:default
3. For earlier versions of Solaris, enable the iSCSI services using the command:
svcadm -v enable iscsi_initiator
svc:/network/iscsi_initiator:default enabled
4. Verify the iSCSI services are running:
svcs -a | grep iscsi_initiator
online 10:10:28 svc:/network/iscsi_initiator:default
The command line tool for configuring the Solaris iSCSI Software Initiator is iscsiadm. Detailed instructions for iscsiadm can be found in the iscsiadm man pages. If using iscsiadm to configure the Solaris iSCSI Software Initiator, run the following command to find the iSCSI initiator node for the Solaris host:
iscsiadm list initiator-node
To discover available iSCSI target devices on an HPE Storage System, type the following commands, where x.x.x.x is the IP address of the HPE Storage System:
iscsiadm add discovery-address x.x.x.x:3260
iscsiadm list discovery-address
Discovery Address: x.x.x.x:3260
iscsiadm modify discovery -t enable
iscsiadm list discovery
List the configured iSCSI target devices using the following command:
iscsiadm list target
Oracle Solaris Server best practices
Troubleshooting with the cfgadm utility
• Getting the status of FC devices using cfgadm:
cfgadm -al
Example output for the above command shows a media changer at LUN 0 for the 100000e0022229fa9 WWN, and tape and disk devices at LUN 0 for other WWNs. The devices are connected, have been configured, and are ready for use.
The cfgadm -al -o show_FCP_dev command can be used to show the devices for all LUNs of each Ap_Id.
• Fixing a device with an "unusable" condition
If the condition field of a device in the cfgadm output is "unusable," the device is in a state in which the server cannot use it. This may have been caused by a hardware issue. In this case, do the following to resolve the issue:
– Resolve the hardware issue so the device is available to the server.
– After the hardware issue has been resolved, use the cfgadm utility to verify device status and to mend the status if necessary.
• Use cfgadm to get device status:
cfgadm -al
– For a device that is "unusable," use cfgadm to unconfigure and then re-configure the device. For example (this is an example only; your device WWN will be different):
cfgadm -c unconfigure c4::100000e0022286ec
cfgadm -f -c configure c4::100000e0022286ec
– Use cfgadm again to verify that the condition of the device is no longer "unusable":
cfgadm -al
IBM AIX Server
AIX 6.1 (TL9), AIX 7.1 (TL3)
Installing the HBA device driver
Install the IBM (5729, 5735, 5273, 5758, 5759, 5773, 5774) HBA and restart the server.
1. Ensure that the HBA is recognized. At the shell prompt, type:
lsdev -Cc adapter
There is a line in the output similar to the following:
fcs0 Available 1D-08 FC Adapter
If the adapter is not recognized, check that the correct HBA fileset (driver) is installed:
6228: lslpp -L | grep devices.pci.df1000f7
6239: lslpp -L | grep devices.pci.df1080f9
5716: lslpp -L | grep devices.pci.df1000fa
5759: lslpp -L | grep devices.pci.df1000fd
5773: lslpp -L | grep devices.pciex.df1000fe
5774: lslpp -L | grep devices.pciex.df1000fe
There are lines in the output for lslpp similar to the following for a 6239 HBA:
devices.pci.df1080f9.diag 5.1.0.1 C F PCI-X FC Adapter Device
devices.pci.df1080f9.rte 5.1.0.1 C F PCI-X FC Adapter Device
2. For information about the HBA, such as the WWN, execute the following command:
lscfg -vl fcs0
The output lists the adapter details, including the WWN.
3. To see the version of microcode (firmware) being run by the HBA, use the following command:
lsmcode -c -d <device>
Microcode and other updates can be found at ibm.com/support/fixcentral.
4. After the HBA has successfully logged into the SAN fabric and the necessary zoning is configured, configure the HBA and devices within the fabric. At the prompt, type:
cfgmgr -l <devicename> -v
Note: Running cfgmgr without a -l argument may generate a "devices.fcp.changer" error. This is a result of the cfgmgr device scan receiving a response from the autochanger device, for which AIX does not have a specific driver. Under these conditions, the error message does not indicate a problem and is for information only.
5. Within the command, <devicename> is the name from the output of the lsdev command in step 1, such as fcs0.
6. To ensure all tape device files are available, at the prompt type:
lsdev -HCc tape
7. By default, AIX creates tape devices with a fixed block length. To change the devices to variable block length, at the prompt type:
chdev -l <tapedevice> -a block_size=0
8. Configuration of the tape devices (where tape devices are rmt0, rmt1, and so on) is complete.
Note: HPE LTO tape drives use the IBM host tape driver. When properly configured, a device listing will show the tape device as follows:
For IBM native HBAs: Other FC SCSI Tape Drive
For non-IBM native HBAs: Other SCSI Tape Drive
9. To configure Fast I/O Failure for Fibre Channel devices after link events in the SAN, change the fast fail parameter as in the example below:
chdev -l fscsiN -a fc_err_recov=fast_fail
Within the command, the N in fscsiN is the same number as in the adapter name from the output of the lsdev command in step 1 (e.g., fcs0 would be fscsi0).
IBM AIX Server best practices
Persistent binding
To prevent device shifting after a host reboot, you can enable the dynamic tracking feature in earlier versions of AIX. AIX 7 and above should have this parameter enabled by default.
To enable dynamic tracking of FC devices, set this attribute to dyntrk=yes, as shown in the example:
chdev -l fscsiN -a dyntrk=yes
Within the command, the N in fscsiN is the same number as in the adapter name from the output of the lsdev command in step 1 (e.g., fcs0 would be fscsi0).
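Both attributes can be set and then verified in one pass with chdev and lsattr; this is a sketch that assumes the adapter protocol device is fscsi0:

```
# Set fast fail and dynamic tracking on the protocol device (fscsi0 assumed)
# If the device is busy, add -P to defer the change until the next reboot
chdev -l fscsi0 -a fc_err_recov=fast_fail -a dyntrk=yes
# Verify both attribute values
lsattr -El fscsi0 -a fc_err_recov -a dyntrk
```

Repeat for each fscsiN protocol device that serves tape or library paths.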
Note: For an IBM Virtual I/O Server (VIOS) running AIX logical partitions (LPARs), when using N_Port ID Virtualization (NPIV) with AIX LPARs it is strongly recommended to upgrade VIOS to version 2.2.3.4 or greater.
Virtual machine support
VM software is used for partitioning, consolidating, and managing computing resources, allowing multiple unmodified operating systems and their applications to run in VMs that share physical resources. Each VM represents a complete system with processors, memory, networking, storage, and BIOS. See table 5 for tape and disk support for virtualization products.
Table 5. VM tape/VTL/NAS support
VM Product | StoreEver Direct Attached SCSI | StoreEver Direct Attached SAS | StoreEver FC & FCoE SAN | StoreOnce VTL | StoreOnce iSCSI VTL | StoreOnce Catalyst over Ethernet (CoE) | StoreOnce Catalyst over Fibre Channel (CoFC) | StoreOnce NAS | Support Notes
Citrix XenServer Host | No | No | No support statement for tape at this time.
Citrix XenServer Guest VM | No | Yes | Yes | No | Yes | For iSCSI tape devices, the iSCSI Software Initiator must run in the VM operating system. D2D SAN shares must be accessed directly in the VM operating system, not attached through the hypervisor.
HPEVM Host | Yes | No | Yes | Yes | Yes | No | Yes | Tape drive/media changer must not be attached to a guest VM while being used by the host.
HPEVM Guest VM | Yes | No | Yes | Yes | Yes | No | Yes | Tape drive/media changer must only be attached to a single guest VM at a time.
Hyper-V Host | Yes | Yes | Yes | Yes | Yes | No | Yes
Hyper-V Guest VM | No | No | No | Yes | Yes | No | Yes | For iSCSI tape/media changer devices, the iSCSI Software Initiator must run in the VM operating system. D2D NAS shares must be accessed directly in the VM operating system, not attached through the hypervisor.
VMware ESX Host | Yes | No | No7 | No7 | No | No | No | Direct attached SCSI tape drive is multi-target only (no multi-LUN) and must not be attached to a VM while being used by the host. HPE does not test or validate direct attached SCSI tape on VMware ESX/ESXi hosts and does not provide support for tape drives and tape libraries in such configurations.8
7 SAN tape devices (FC and iSCSI) are not supported directly by the VMware ESX host; vStorage API for Data Protection uses a backup server and VM software snapshots to allow FC and iSCSI backups.
8 For ESX 4.1 Server tape support, see the ESX 4.1 Fibre Channel SAN Configuration Guide. For ESX 5.0 Server tape support, see the ESXi 5.0 vSphere Storage Guide. For ESX 5.1 Server tape support, see the ESXi 5.1 vSphere Storage Guide. For ESX 5.5 Server tape support, see the ESXi 5.5 vSphere Storage Guide.
VMware Guest VM | Yes | No | No | Yes | Yes | No9 | Yes | Direct attached SCSI tape drive is multi-target only (no multi-LUN) and must only be attached to a single VM at a time. For iSCSI tape/media changer devices, the iSCSI Software Initiator must run in the VM operating system. D2D NAS shares must be accessed directly in the VM operating system, not attached through the hypervisor. HPE does not test or validate direct attached SCSI tape on VMware guest VMs and does not provide support for tape drives and tape libraries in such configurations.
VMware vStorage API for Data Protection | Yes | Yes | Yes | Yes | Yes | No | Yes | FC SANs and shared tape devices are limited to a physical backup server.
Note: Be sure to do the following:
• Refer to your data protection and archiving software documentation for supported VM backup methods.
• Refer to the VM documentation for supported backup devices.
VMware Server
Note: VMware does not support ESXi SAN attached tape devices. VMware vStorage APIs for Data Protection (VADP) with an off-host backup server can be used to manage SAN devices.
• VADP offloads backup responsibility from ESXi hosts to a dedicated backup server or servers, reducing the load on ESXi hosts. VADP provides full-image backup and restore capabilities for all VMs, and file-based backups for Microsoft Windows and Linux VMs.
• VMs can also be set up as backup servers with locally mapped NAS shares or iSCSI backup devices. Refer to your data protection and archiving software documentation for VM support.
• VMs can also be set up for LAN backup, the same as a regular client. Refer to your data protection and archiving software documentation for details.
• For recommendations on VMware VM backup and recovery to HPE StoreOnce Backup, go to hpe.com/storage/BURACompatibility, scroll down to Data Agile BURA Solution White Papers, then click the VMware hyperlink across from White Papers: Databases and Virtual Machines to view the associated white papers.
9 Yes, when using HPE StoreOnce Recovery Manager Central (RMC) only.
HPE Integrity Virtual Machines
Hewlett Packard Enterprise supports, certifies, and sells HPE Integrity Virtual Machines (HPEVM) virtualization software on HPE Integrity servers.
HPEVM is an application installed on an HPE-UX Server that allows multiple unmodified operating systems (HPE-UX, Windows, and Linux) and their applications to run in VMs that share physical resources.
The HPE Virtual Server Environment (VSE) for HPE Integrity provides an automated infrastructure that can adapt in seconds with mission-critical reliability. HPE VSE allows you to optimize server utilization in real time by creating virtual servers that can automatically grow and shrink based on business priorities and service levels.
Note: The HPE Integrity VM host and VMs do support FC SAN connected tape, Virtual Library Systems (VLS) devices, and HPE StoreOnce Backup systems.
• Off-host backups using HPE storage array hardware mirroring or snapshots can be used to shorten backup windows and offload resources required for backup.
• VMs can also be set up for LAN backup, the same as a regular client or media host. Refer to your data protection and archiving software documentation for details.
Microsoft Hyper-V
Note: Hewlett Packard Enterprise does not test or support Hyper-V VMs with SAN or direct attach tape drives. The Hyper-V host or a backup server can be used to manage such devices.
• The Volume Shadow Copy Service (VSS) Hyper-V writer can be used to quiesce Windows VMs and create a snapshot on the Hyper-V host volume. VMs that cannot be quiesced can be placed in the Saved state before snapshot creation. The snapshots are then used for image or file backup of the VMs. If a VM was placed in the Saved state, Hyper-V will return the VM to its original state. Review your data protection and archiving software documentation for details.
• VMs can also be set up for LAN backup, the same as a regular client. Refer to your data protection and archiving software documentation for details.
• VMs can also be set up as backup servers with locally mapped NAS shares or iSCSI backup devices. Refer to your data protection and archiving software documentation for VM support.
© Copyright 2015 Hewlett Packard Enterprise Development LP. The information contained herein is subject to change without notice. The only warranties for Hewlett Packard Enterprise products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. Hewlett Packard Enterprise shall not be liable for technical or editorial errors or omissions contained herein.
Citrix is a registered trademark of Citrix Systems, Inc. and/or one or more of its subsidiaries, and may be registered in the United States Patent and Trademark Office and in other countries. Linux is the registered trademark of Linus Torvalds in the U.S. and other countries. McAfee is a trademark or registered trademark of McAfee, Inc. in the United States and other countries. Microsoft, Windows, and Windows Server are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries. Oracle is a registered trademark of Oracle and/or its affiliates. Red Hat is a registered trademark of Red Hat, Inc. in the United States and other countries. SAP HANA is the trademark or registered trademark of SAP SE in Germany and in several other countries. VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions.
4AA5-7983ENW December 2015
Installing backup software and patches
After all components on the SAN are logged in and configured, the system is ready for installation of any supported backup software. Refer to the installation guide for your particular software package, or contact the vendor for detailed installation procedures and requirements. After installing the backup software, check with the software vendor for the latest updates and patches. If any updates or patches exist for your backup software, install them now.
Learn more at hpe.com/storage/StoreEver
Point-to-point
Point-to-point, or Direct-Attach Fibre (DAF), connections are direct FC connections made between two nodes, such as a server and an attached tape library. This configuration requires no switch to implement, and the default for a DAF link is a private loop.1 The storage devices are dedicated to a server in a point-to-point configuration.
Switched fabric
A switched fabric topology is a network topology in which network nodes connect with each other via one or more network switches. In the FC switched fabric (FC-SW) topology, devices are connected to each other through one or more FC switches. Visibility among devices in a fabric is typically controlled with zoning. FCoE is a computer network technology that encapsulates FC frames over Ethernet networks. This allows FC to use 10GbE networks (or higher speeds) while preserving the FC protocol. FCoE maps FC directly over Ethernet while being independent of the Ethernet forwarding scheme. HPE Virtual Connect FlexFabric switches are used as an option in C-series blade enclosures to support FCoE connectivity to the switched fabric. For standalone servers, converged network adapters (CNAs) are used to connect to the fabric through FCoE fabric switches. A CNA, also called a converged network interface controller (C-NIC), is a computer input/output device that combines the functionality of a host bus adapter (HBA) with a network interface controller (NIC). In other words, it "converges" access to, respectively, a SAN and a general-purpose computer network.
Installation checklist
Prior to installing data protection and archiving software, review the questions below to ensure that all components are configured properly and logged into the SAN.
• For any Windows 2008 servers, has the Windows feature Removable Storage Manager been removed or disabled?2
• Are all of the following hardware components at the minimum supported firmware revisions specified in the HPE Data Agile BURA Compatibility Matrix: servers, HBAs, FC and/or FCoE switches, tape drives, library robots, and disk-based virtual tape systems?
• Are all recommended operating system patches, service packs, updates, Service Pack for ProLiant Quality Packs (QPK), or Hardware Enablement (HWE) bundles specified in the HPE Data Agile BURA Compatibility Matrix installed on each host?
• Is the minimum supported HBA driver specified in the HPE Data Agile BURA Compatibility Matrix installed on each host?
• If you are using multi-path configurations with your HPE StoreEver Tape Libraries, have you reviewed the HPE StoreEver Failover Overview Including Advanced Path Failover technical white paper?
• Are the StoreEver Tape Library, StoreOnce Backup System, and/or StoreAll Storage System online?
• If multiple FC switches are cascaded or meshed, are all inter-switch link (ISL) ports correctly logged in?
• Are all of the host HBAs correctly logged into the FC and/or FCoE switch?
• Are all tape library robotic devices and disk-based virtual tape systems zoned, configured, and presented to each host from the FC and/or FCoE switch?
• If using zoning on the FC and/or FCoE switch, has the zone been added to the active switch configuration?
• Do you have the latest supported version of HPE Command View for Tape Libraries Software installed to manage, monitor, and configure all of your HPE StoreEver Tape Libraries?
1 16 Gb FC HBAs do not currently support private loop; Brocade FC HBAs only recently began supporting private loop.
2 The Windows Removable Storage Manager (RSM) service will claim tape drives and disrupt any installed backup applications. Removable Storage Manager is no longer available as of Windows 7 and Windows Server 2008 R2.
• Has connectivity been verified using HPE Library and Tape Tools Software or operating-system-specific tools (Linux: sg3_utils; HPE-UX: System Administration Manager [SAM]; Solaris: cfgadm; AIX: System Management Interface Tool [SMIT]; etc.)?
• Is the minimum patch/service pack level support installed for the data protection and archiving software?
• If you're having issues with the FC connections, they might need to be cleaned. Avoid touching any surface that is being used for cleaning. Recommendations for cleaning FC cables and small form-factor pluggable (SFP) connections:
– Air dusters are used to blow loose particles from the optical fiber connector end face, or to dry up solvent (isopropyl alcohol) residue after a wet cleaning. Not all air dusters are the same; an optic-grade duster should be used.
– Use dry lint-free wipes: gently wipe the ferrule and the end-face surface of the connector with the lint-free pad. Make sure the pad makes full contact with the end-face surface.
– Use lint-free wipes and isopropyl alcohol: gently wipe the ferrule and the end-face surface of the connector with an alcohol pad. Make sure the pad makes full contact with the end-face surface. Then wipe the end-face surface on a dry lint-free wipe.
– In-adapter ferrule cleaner, or in-situ cleaning: this semi-automated fiber optic cleaning method is specially designed for fiber optic connectors and SFP ends. It can remove contaminants that forced air will not. Note that, used incorrectly, an in-situ device can make a tape drive FC port worse. Both 1.25 mm and 2.5 mm versions are available.
HPE StoreOnce Catalyst
With HPE StoreOnce Catalyst, movement of deduplicated data across the enterprise is even easier. There's no need to deduplicate and rehydrate data at each step: data can be replicated from sites to a central data center or a disaster recovery site in deduplicated form, reducing network bandwidth requirements. All backup and replication jobs may be seamlessly managed by the backup application at the central data center.
Key features of StoreOnce Catalyst:
• Catalyst over Fibre Channel provides all the ISV control and source-side deduplication benefits of current StoreOnce Catalyst, but via a Fibre Channel fabric.
• Federated Catalyst allows Catalyst stores to span nodes, simplifying backup management and optimizing available storage in large environments.
• Catalyst stores allow backup applications to utilize low-bandwidth deduplication (server-side deduplication on³) or high-bandwidth deduplication (server-side deduplication off⁴).
HPE StoreOnce Catalyst delivers a single, integrated, enterprise-wide deduplication algorithm. It allows the seamless movement of deduplicated data across the enterprise to other StoreOnce Catalyst systems.
For more detailed information regarding which Catalyst features are supported by each backup software or application, reference the Catalyst Apps Support section under StoreOnce Backup Systems in the HPE Data Agile BURA Compatibility Matrix.
3 By specifying the Primary Transfer Policy as Low-Bandwidth on the Catalyst store defined on the StoreOnce appliance, the backup application's integrated Catalyst plug-in will perform deduplication at the backup server before backup data is sent to the StoreOnce appliance.
4 By specifying the Primary Transfer Policy as High-Bandwidth on the Catalyst store defined on the StoreOnce appliance, the backup server does not deduplicate the data; data deduplication occurs at the StoreOnce appliance.
Preparing SAN switches and hosts for failover with HPE StoreEver Tape Libraries
Hewlett Packard Enterprise provides High Availability Failover features for HPE StoreEver ESL G3 Tape Libraries and the HPE StoreEver MSL2024, MSL4048, MSL6480, MSL8048, and MSL8096 Tape Libraries with HPE StoreEver LTO-5 Ultrium and later generation FC tape drives. Failover features are not supported on the HPE StoreEver EML and ESL E-Series Tape Libraries.
Basic failover
• Supported on HPE StoreEver LTO-5 and LTO-6 Ultrium FC tape drives, as data path failover requires a dual-ported drive.
• Supported by a combination of tape drive and library firmware features to create a new FC path to a drive or library if the original path is lost.
• Most applications recognize the new path, and some applications will automatically retry commands after the original path is lost. Some applications might require user intervention to begin using the new path.
• Is available for the HPE StoreEver MSL2024, MSL4048, MSL6480, MSL8048, MSL8096, and StoreEver ESL G3 Tape Libraries.
Advanced failover
• Supported only on HPE StoreEver LTO-6 Ultrium FC tape drives.
• Requires host driver support, in addition to tape drive and library firmware features, to manage multiple paths across multiple SANs, present a single drive or library path to applications, and automatically transfer commands to the new path if the original path is lost.
• The transfer to the failover path is invisible to most applications, avoiding the need for user intervention.
• Is available for the HPE StoreEver ESL G3 and StoreEver MSL6480 Tape Libraries.
• Is not available for the StoreEver 1/8 G2 Tape Autoloader or the StoreEver MSL2024, MSL4048, MSL8048, or MSL8096 Tape Libraries.
LTO-7 failover
• Supported on LTO-7 and later generation FC tape drives.
• Requires host driver support, in addition to tape drive and library firmware features, to manage multiple paths across multiple SANs, present a single drive or library path to applications, and automatically transfer commands to the new path if the original path is lost.
• The transfer to the failover path is invisible to most applications, avoiding the need for user intervention.
• Is available only for ESL G3 Tape Libraries; MSL6480 support is expected in 2016.
Prerequisites for using basic data and control path failover
• The library drive ports must be attached to an FC SAN that supports N_Port ID Virtualization (NPIV), and NPIV must be enabled (most recent switch firmware versions for Brocade have NPIV enabled by default). Refer to the vendor documentation for your switch regarding commands to verify whether NPIV is enabled.
– To enable or verify NPIV on a Brocade switch running Fabric OS version 6 or newer using the Brocade Web Tools GUI, click on the Port Admin tab and select the FC or FCoE port you want to configure. From the context tabs at the top of the Web Tools GUI, select View, then choose Advanced. For the selected port, under the General tab, you should see all of the details for the port, including NPIV Enabled with a value of true. There should also be an NPIV tab with a drop-down list of Enable, Disable, and Max Login. Select Enable if NPIV was not already enabled.
– While all current Cisco switches support NPIV, most do not have NPIV enabled by default. The Cisco MDS 9148 may disable NPIV when power cycled. To enable NPIV on a Cisco switch, use Cisco Device Manager > Admin > Feature Control, or use the Cisco CLI commands to show NPIV status and enable NPIV.
• StoreEver Ultrium tape drives with an 8 Gb/s connection need the fill word set to arb(ff). With 4 Gb/s connections, set the fill word to idle. Refer to vendor documentation for your switch regarding commands to set the fill word for a single port. For a Brocade switch running Fabric OS version 6 or newer, the following command can be used to verify the configuration, including the fill word, for port 27:
portcfgshow 27
To set the fill word for port 27 to arb(ff) using the numeric mode notation (mode 3⁵), run the following command:
portcfgfillword 27 3
To set the fill word for port 27 to idle using the numeric mode notation, run the following command:
portcfgfillword 27 0
Refer to the section titled Hardware-specific requirements for basic failover in the HPE StoreEver Tape Libraries Failover User Guide for Brocade and Cisco firmware recommendations.
• The drive port FC topology must be in Fabric mode, and the switch side must be set to F-port or Auto Sense.
• The host FC port must have a physical path to both the first port and the secondary (passive) port on the FC drive.
• For basic data path failover with port zoning, the host FC port and both FC ports on the drive need to be within the same zone for failover to work.
• For basic data path failover with World Wide Port Name (WWPN) zoning, the host FC WWPN and a single FC port on the drive need to be in the zone.
• For basic control path failover with port zoning, the host FC ports and the FC ports on both the active and secondary drives chosen for basic control path failover need to be in the same zone.
• For basic control path failover with WWPN zoning, the host FC WWPN and the basic control path failover WWPN assigned to the library must be in the same zone. The library WWPN is not the same as the WWPN of the drive that is hosting the library.
• Hosts connecting to the library may need to be rebooted if the operating system does not support dynamic device detection.
• Applications on hosts may need to be reconfigured to recognize the new library World Wide Name (WWN).
Prerequisites for using LTO-7 failover or advanced data and control path failover
• For LTO-7 failover, when using two FC host bus adapters in a server, both FC HBAs must be from the same manufacturer. The LTO-7 failover driver does not work correctly if the HBAs are different.
• For advanced data path failover and LTO-7 failover, the host must have a physical path to both the first port and the secondary port on the FC drive. For full failover capabilities, the two drive FC ports should be connected to different switches, and the host FC ports should also be connected to the same two switches.
• All drive ports must be zoned in the respective switches.
For detailed information on using failover with HPE StoreEver Tape, refer to the HPE StoreEver Tape Libraries Failover User Guide.
5 Numeric mode 3 attempts hardware arbff-arbff (mode 1) first. If the attempt fails to go into active state, this command executes software idle-arb (mode 2). Mode 3 is preferable to modes 1 and 2, as it captures more cases.
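The failover behavior described above (one logical drive presented to the application, with commands transparently retried on the alternate port after a path is lost) can be sketched as a toy model. Everything here, including the FailoverDrive class and the port callables, is invented for illustration and is not part of the HPE failover driver:

```python
# Toy model of path failover (illustration only, not the HPE driver): the
# host sees one logical drive, and a command that fails on the active port
# is transparently retried on the alternate port.
class PathFailedError(Exception):
    """Raised when an FC path to the drive is lost."""

class FailoverDrive:
    def __init__(self, ports):
        self.ports = list(ports)   # callables standing in for FC paths
        self.active = 0            # index of the path currently in use

    def send(self, command):
        for _ in range(len(self.ports)):
            try:
                return self.ports[self.active](command)
            except PathFailedError:
                # Original path lost: fail over to the next port and retry.
                self.active = (self.active + 1) % len(self.ports)
        raise PathFailedError("all paths to the drive are down")

def broken_port(cmd):
    raise PathFailedError(cmd)     # simulates a lost primary path

def good_port(cmd):
    return f"ok: {cmd}"            # simulates the healthy secondary path

drive = FailoverDrive([broken_port, good_port])
print(drive.send("WRITE"))         # -> ok: WRITE (failover was invisible)
```

This is the sense in which the transfer is "invisible": the caller issues one command and receives one result, never seeing which physical port served it.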
Native backup commands
Native backup commands (see table 2) are limited in their ability to handle complicated backups and restores in multi-host SANs. They are not guaranteed to provide robust error handling or to increase performance throughput. Use of these commands in a user-developed script is not recommended with HPE StoreEver Tape Libraries or HPE StoreOnce and HPE StoreAll disk-based backup solutions in a shared storage environment.
Caution: Native backup commands do not support SCSI reserve/release; therefore, using backup commands or scripts during backup or restore operations could result in data loss in an environment where the devices used for backups are shared.
Table 2 Supported native commands
SUPPORTED UTILITIES HPE-UX SOLARIS AIX LINUX WINDOWS
Tape drive commands
tar Yes Yes Yes Yes No
dd (dump) Yes Yes Yes Yes No
pax Yes Yes Yes Yes No
mt Yes Yes Yes Yes No
make_tape_recovery Yes No No No No
Library and auto-changer commands
mc Yes No No No No
mtx No No No Yes No
Linear Tape File System
The Linear Tape File System (LTFS) makes tape self-describing, file-based, and easy to use, while allowing users to use standard file operations to access, manage, and share files on tape with an interface that behaves like a hard disk. In addition, LTFS provides the ability to share data across platforms, as you would with a USB drive or memory stick. LTFS is currently supported on Windows, Mac, and Linux. HPE StoreOpen Standalone and HPE StoreOpen Automation are a set of utilities that provide easy installation, configuration, and management of a tape drive or library for use with LTFS. To use HPE StoreOpen, simply connect your tape drive or tape library to a supported host, following the information noted in the respective sections within this guide, prior to the installation of the HPE StoreOpen software. Information and download links for native LTFS drivers, source code, HPE StoreOpen Standalone, and HPE StoreOpen Automation can be found at hpe.com/storage/LTFS.
Software utilities that may disrupt solution connectivity
Software utilities common to SAN environments can interfere with backup and restore operations. These utilities include system management agents, monitoring software, and tape drive and system configuration utilities. A list of known software utilities and the operating systems on which they are found is shown in table 3.
Caution: Use of these software utilities during backup or restore operations could result in data loss.
Table 3 HBA and software utilities

HBA configuration utilities
• Windows: Emulex OneCommand Manager (OCM)/HBAnyware; QLogic QConvergeConsole (QCC); QLogic Host Connectivity Manager (HCM); Broadcom Advanced Control Suite 3 (BACS3)
• Linux: Emulex OCM/HBAnyware; QCC; QLogic HCM; BACS3
• Solaris: Emulex OCM/HBAnyware; QCC

Other software utilities
• Windows: HPE Library and Tape Tools utility; HPE Systems Insight Manager (SIM) management agents; Windows Server backup; Removable Storage Manager (RSM)⁶
• HPE-UX: HPE Library and Tape Tools utility; System Administration Manager (SAM, HPE-UX 11.23); System Management Homepage (SMH, HPE-UX 11.31)
• Linux: HPE Library and Tape Tools utility; SCSI Generic (SG) commands
• Solaris: HPE Library and Tape Tools utility
• AIX: System Management Interface Tool (SMIT)
FC/FCoE switch zoning recommendations
Due to the complexities of multi-hosting tape devices on SANs, Hewlett Packard Enterprise strongly recommends using switch zoning tools to keep the backup, restore, and archive environment simple and less susceptible to the effects of "chatty," changing, or problematic SANs. Zoning provides a way for servers, disk arrays, and tape controllers to see only the hosts and targets they need to see and use.
The benefits of zoning include, but are not limited to:
• The potential to greatly reduce target and logical unit number (LUN) shifting
• Reduced stress on backup devices caused by polling agents
• Reduced time to debug and resolve anomalies in the backup, restore, and archive environment
• Reduced potential for conflict with untested third-party products
6 Removable Storage Manager is no longer available as of Windows 7 and Windows Server 2008 R2.
Zoning may not always be required for configurations that are small or simple, i.e., a single switch or a single inter-switch link (ISL). Zoning can be helpful in larger SANs for simplifying device discovery and reducing chatter between devices. Hewlett Packard Enterprise recommends the following for determining how and when to use zoning:
• Use zoning by HBA port. Zoning by HBA port is implemented by creating a specific zone for each server or host by World Wide Port Name (WWPN) and adding only those storage elements to be utilized by that host. Zoning by HBA port prevents a server from detecting any other devices or servers on the SAN, and it simplifies the device discovery process.
• Disk and tape on the same HBA is supported. For larger SAN environments, it is recommended to also add storage-centric zones for disk and backup targets. This type of zoning is done by adding overlapping zones with disk and backup targets separated. See figure 1 and figure 2 below.
• FC zoning can be implemented using physical switch port numbers, WWN IDs, or user-defined switch aliases. It is important to note that physical ports and aliases can change due to recabling or switch configuration restores, but WWN IDs do not. Hewlett Packard Enterprise recommends zoning using WWN IDs.
The figures below represent example configurations, but are not exhaustive.
Figure 1 Storage centric zoning same HBA port (overlapping zones)
Figure 2 Storage centric zoning redundant paths Also applies to dual-port HBAs and tape drives
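The zoning-by-WWPN scheme described above can be modeled as sets of port names, where a port can discover only the members of the zones it belongs to. This is an illustrative sketch only (all WWPNs below are invented), not switch configuration syntax:

```python
# Illustrative model of FC zoning by HBA port (all WWPNs invented): a zone
# is a set of WWPNs, and a port can discover only members of its own zones.
zones = {
    "host1_tape": {"10:00:aa:aa:aa:aa:aa:01",    # host1 HBA port
                   "50:01:bb:bb:bb:bb:bb:01",    # tape drive FC port
                   "50:01:bb:bb:bb:bb:bb:02"},   # library robot
    "host1_disk": {"10:00:aa:aa:aa:aa:aa:01",
                   "50:06:cc:cc:cc:cc:cc:01"},   # disk array port
    "host2_tape": {"10:00:aa:aa:aa:aa:aa:02",    # host2 HBA port
                   "50:01:bb:bb:bb:bb:bb:01"},   # shares the tape drive
}

def visible_targets(wwpn):
    """Return every WWPN that shares at least one zone with `wwpn`."""
    seen = set()
    for members in zones.values():
        if wwpn in members:
            seen |= members - {wwpn}
    return seen

# host2 discovers only the shared tape drive, never host1 or the disk array.
print(sorted(visible_targets("10:00:aa:aa:aa:aa:aa:02")))
```

Note how the overlapping zones keep disk and backup targets separated while still letting each host reach the devices it needs, which is the storage-centric pattern of figures 1 and 2.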
Configuration and operating system details
Windows Server
Windows 2008, Windows 2008 R2, Windows 2012, and Windows 2012 R2
Installing the HBA device driver
All HPE ProLiant server software, firmware, and drivers for Windows servers can be updated using the latest HPE Service Pack for ProLiant (SPP) from the HPE support website:
1. From an Internet browser, go to hpe.com.
2. Click on Support.
3. Under Product Support, click HPE Servers, Storage and Networking.
4. In the Enter product name or number box, enter Service Pack for ProLiant.
5. Click Get drivers, software & firmware.
6. Select the Windows Server operating system version that is installed on the ProLiant system that will be updated.
7. Expand Application (Entitlement Required) - System Management, then select the HPE Service Pack for ProLiant (American, International) hyperlink.
8. Below the details for the software, you should notice several tabs. Select the Release Notes tab, then scroll to verify whether there are any Upgrade Requirements or to view additional information.
9. Select the Installation Instructions tab to verify how to install the SPP. Be sure to copy or save the installation instructions.
Note: A hyperlink to the HPE Service Pack for ProLiant Release Notes is provided within the Release Notes tab. The HPE Service Pack for ProLiant Release Notes provide detailed instructions regarding the SPP, including a summary of changes, compatibility details for migrating from an older version of the SPP, supported operating systems, requirements, component prerequisites, deployment instructions, and known limitations.
10. Click on the Obtain software hyperlink above the various tabs to download the Service Pack for ProLiant to your server.
Note: To download the HPE Service Pack for ProLiant, you must have:
1. An HPE Passport account (a sign-in link is provided).
2. Either a warranty, HPE Care Pack, or support agreement linked to your HPE Support Center profile.
Click on the various links that are provided for more information on how warranties, HPE Care Packs, and support agreements enable access to select downloads or site functions.
11. Booting your Windows Server to the SPP (offline mode) will allow you to upgrade firmware for any or all components that are flagged as requiring an upgrade, while booting to the operating system and then running the SPP (online mode) will allow you to install any or all Hewlett Packard Enterprise related drivers and software packages.
Note: Please refer to the HPE Service Pack for ProLiant Release Notes, which are referenced above, if any issues are encountered when installing the SPP. Specifically, review the sections Deployment Instructions and Components Changes.
12. A reboot might be required following the SPP installation.
To manually install the latest HPE-supported Brocade, Emulex, or QLogic driver kit from the HPE support website:
1. From an Internet browser, go to hpe.com.
2. Click on Support.
3. Under Product Support, click HPE Servers, Storage and Networking.
4. In the Enter product name or number box, enter your HBA model (for example, AP770A, AJ763A, AJ764A, CN1000Q, SN1000Q, CN1100E, SN1100E, or NC553m) and click Go.
5. Click Get drivers, software & firmware.
6. Select the Windows Server operating system version that is installed on the ProLiant system in which the HBA is installed.
7. Expand Driver - Storage Fibre Channel and click on the appropriate driver hyperlink (if more than one version of the driver is listed, verify the latest supported version listed in the latest HPE Data Agile BURA Compatibility Matrix).
8. Below the details for the driver, you should notice several tabs. Select the Release Notes tab, then scroll to verify whether there are any Upgrade Requirements or Supported Devices and Features, or to view additional information.
9. Select the Installation Instructions tab to verify how to install the driver. Be sure to copy or save the installation instructions.
10. Click on the Download tab to copy the file to your server.
11. A reboot might be required following the driver installation.
Installing and configuring Microsoft iSCSI Initiator
Microsoft iSCSI Initiator is installed natively on Windows Server 2008, Windows Server 2008 R2, Windows Server 2012, and Windows Server 2012 R2. On these operating systems, no installation steps are required.
To connect to an iSCSI target device by using Quick Connect:
1. Click Start, type iSCSI in Start Search, and then, under Programs, click on iSCSI Initiator.
2. On the User Account Control page, click Continue.
3. If this is the first time that you have launched Microsoft iSCSI Initiator, you receive a prompt that says the Microsoft iSCSI service is not running. You must start the service for Microsoft iSCSI Initiator to run correctly. Click on Yes to start the service. The Microsoft iSCSI Initiator Properties dialog box opens, and the Targets tab is displayed.
4. On the Targets tab, type the name or the IP address of the target device in the Quick Connect text box, and then click Quick Connect. The Quick Connect dialog box is displayed.
5. If multiple targets are available at the target portal that is specified, a list is displayed. Click the desired target, and then click Connect.
6. Click Done.
To connect to an iSCSI target by using advanced settings:
1. Click Start, type iSCSI in Start Search, and then, under Programs, click on iSCSI Initiator.
2. On the User Account Control page, click Continue.
3. If this is the first time that you have launched Microsoft iSCSI Initiator, you receive a prompt that says the Microsoft iSCSI service is not running. You must start the service for Microsoft iSCSI Initiator to run correctly. Click on Yes to start the service. The Microsoft iSCSI Initiator Properties dialog box opens, and the Targets tab is displayed.
4. Click the Discovery tab.
5. To add the target portal, click Discover Portal, and then, in the Discover Portal dialog box, type the IP address or name of the target portal to connect to. If desired, you can also type an alternate TCP port to be used for the connection.
6. Click OK.
Installing the HPE StoreEver Tape drivers
Both the HPE tape and HPE changer drivers for Windows must be installed before the advanced path failover drivers are installed. The tape and changer drivers bundle can be downloaded and then installed as follows:
1. Go to hpe.com/storage/tapecompatibility.
2. Under Tape tools, select HPE StoreEver Tape Drivers.
3. A webpage will open with RECOMMENDED HPE StoreEver Tape Drivers for Windows displayed.
4. Below the details for the driver, you should notice several tabs. Select the Release Notes tab, then scroll to verify whether there are any Upgrade Requirements or to view additional information.
5. Select the Installation Instructions tab to verify how to install the driver. Be sure to copy or save the installation instructions.
6. Click on the Download tab, then save the file.
7. Follow the installation instructions from step 5 to install the HPE tape and HPE changer drivers.
8. After installation of the tape and changer drivers, use Windows Device Manager to confirm that all of the configured paths are accessible to the operating system. If the expected number of paths are not available, check the host and SAN configuration. After all of the expected paths are available to the host, the advanced path failover drivers can be installed.
Installing the HPE StoreEver Tape advanced path failover drivers, Windows (2008 R2, 2012, and 2012 R2)
1. Go to hpe.com/support/storage.
2. Select Tape Storage.
3. Click Enterprise Class Tape Libraries (for ESL G3) or Tape Libraries (for MSL6480).
4. Click HPE StoreEver ESL G3 Tape Libraries or HPE StoreEver MSL6480 Tape Library.
5. With the Download options tab selected, click Get drivers, software & firmware.
6. For the ESL G3, select your product. For MSL6480, skip to the next step.
7. Under Operating systems, select OS Independent.
8. Click Driver - Storage Tape.
9. Click Obtain software for the HPE StoreEver High Availability Failover Driver for your Windows operating system.
10. Click Select to continue. An HPE Passport account is required.
11. After logging in using your HPE Passport, complete the required fields, then read and accept the software license agreement. Click Next.
12. On the following page, select the Download tab for the HPE StoreEver High Availability Failover Driver for the version of Windows that is installed on your server.
13. If you saved the file, double-click on the file to launch the installer for the Tape Upper Bus Storage Filter driver.
14. Restart when requested.
15. After the system restarts, the installer will continue installing the Tape Multi-Path Intermediate Class driver. The installation process creates a directory, C:\Program Files\Hewlett-Packard\Failover.
For more detailed information related to installing the HPE StoreEver Tape advanced path failover drivers for Windows or the HPE StoreEver Tape drivers for Windows, refer to the HPE StoreEver Tape Libraries Failover User Guide, specifically the section titled Installing and using Windows advanced path failover drivers.
Windows Server best practices
Persistent binding
Target and LUN shifting can occur with Windows operating systems when disk or tape devices are connected or disconnected, a device is busy during discovery, or a device failure occurs, causing that device to no longer be seen on the SAN. Hewlett Packard Enterprise strongly recommends using zoning by HBA port to resolve target and LUN shifting. Refer to the earlier section, FC/FCoE switch zoning recommendations, for more on zoning by HBA port.
The method by which the Windows operating system enumerates devices is the cause of most target and LUN shifting. Windows enumerates devices as they are discovered during a scan sequence. They are enumerated with device handles such as TAPE0, TAPE1, and so on. The Windows device scan sequence goes in the order of bus, target, and LUN:
• Bus is the HBA PCI slot.
• Target is representative of a WWN.
• LUN is representative of a device behind the WWN.
The order of discovery is:
• The lowest bus (bus 0)
• Target 0 on bus 0
• The LUNs on target 0
• Target 1, and so on, until all targets connected to that HBA are discovered
The process then continues on to the next HBA and its targets and LUNs. A common cause of device shifting is a busy tape device. A busy tape device cannot respond in time for Windows to enumerate it. The device is essentially skipped in the enumeration sequence, thus shifting all other device numbers.
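The scan order, and the shift a busy device causes, can be sketched in a few lines. This is a simplified model of the sequence described above, not actual Windows code:

```python
# Simplified model of the Windows scan sequence (bus, then target, then LUN):
# handles are assigned in discovery order, so a busy device that cannot
# respond is skipped and every later device shifts down by one.
def enumerate_tapes(devices, busy=frozenset()):
    """devices: iterable of (bus, target, lun) tuples -> {device: handle}."""
    handles = {}
    n = 0
    for dev in sorted(devices):    # lowest bus first, then target, then LUN
        if dev in busy:            # a busy device is skipped during the scan
            continue
        handles[dev] = f"TAPE{n}"
        n += 1
    return handles

devs = [(0, 0, 0), (0, 0, 1), (0, 1, 0)]
print(enumerate_tapes(devs))                    # stable handles
print(enumerate_tapes(devs, busy={(0, 0, 1)}))  # (0, 1, 0) shifts to TAPE1
```

The second scan shows the problem: the device at (0, 1, 0) was TAPE2 on the first scan but becomes TAPE1 once the busy device is skipped, which is exactly the shift that confuses backup software holding on to the old handle.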
Note: The Emulex OneCommand Manager Application Kit and the QLogic QConvergeConsole Utility both have proprietary persistent binding options. Review the appropriate vendor documentation for details.
Data protection and archiving software can also communicate with a tape device by using the Windows device name. As noted, the device name may shift and cause a problem for the data protection and archiving software. Some data protection and archiving software monitors for this condition and will adjust accordingly. Other data protection and archiving software must wait for a server reboot and subsequently scan for devices. Alternatively, the data protection and archiving software must be manually reconfigured to match the current device list. If your data protection and archiving software requires persistent device mapping, use the software's device configuration wizard to ensure proper configuration.
Red Hat and SUSE Linux Server
RHEL 5 Update 11 (AS/ES/WS), RHEL 6 Update 6 (AS/ES/WS), RHEL 7 (AS/ES/WS), SLES 11 SP3 (x86/x64), SLES 12
Note: Hewlett Packard Enterprise recommends installing the kernel development option (source code) when installing any Linux server. Availability of source code ensures the ability to install additional device support software that will be compiled into the kernel.
Installing the HBA drivers
All HPE ProLiant server software, firmware, and drivers can be updated using the latest SPP from the HPE support website:
1. From an Internet browser, go to hpe.com.
2. Click on Support.
3. Under Product Support, click HPE Servers, Storage and Networking.
4. In the Enter product name or number box, enter Service Pack for ProLiant.
5. Click Get drivers, software & firmware.
6. Select the Linux Server operating system version that is installed on the ProLiant system that will be updated.
7. Under Application (Entitlement Required) - System Management, select the HPE Service Pack for ProLiant (American, International) hyperlink.
8. Below the details for the software, you should notice several tabs. Select the Release Notes tab, then scroll to verify whether there are any Upgrade Requirements or to view additional information.
9. Select the Installation Instructions tab to verify how to install the SPP. Be sure to copy or save the installation instructions.
Note: A hyperlink to the HPE Service Pack for ProLiant Release Notes is provided within the Release Notes tab. The HPE Service Pack for ProLiant Release Notes provide detailed instructions regarding the SPP, including a summary of changes, compatibility details for migrating from an older version of the SPP, supported operating systems, requirements, component prerequisites, deployment options, and known limitations.
10. Click on the Obtain software hyperlink above the various tabs to download the Service Pack for ProLiant to your server.
Note: To download the HPE Service Pack for ProLiant, you must have:
1. An HPE Passport account (a sign-in link is provided).
2. Either a warranty, HPE Care Pack, or support agreement linked to your HPE Support Center profile.
Click on the various links that are provided for more information on how warranties, HPE Care Packs, and support agreements enable access to select downloads or site functions.
11. Booting your server to the SPP (offline mode) will allow you to upgrade firmware for any or all components that are flagged as requiring an upgrade, while booting to the OS and then running the SPP (online mode) will allow you to install any or all Hewlett Packard Enterprise related drivers and software packages.
Note: Please refer to the HPE Service Pack for ProLiant Release Notes, which are referenced above, if any issues are encountered when installing the SPP. Specifically, review the sections Deployment Instructions and Components Changes.
12. A reboot might be required following the SPP installation.
To manually install the latest HPE-supported Brocade, Emulex, or QLogic driver kit from the HPE support website:
1. From an Internet browser, go to hpe.com.
2. Click on Support.
3. Under Product Support, click HPE Servers, Storage and Networking.
4. In the Enter product name or number box, enter your HBA model (for example, AP770A, AJ763A, AJ764A, CN1000Q, SN1000Q, CN1100E, SN1100E, or NC553m) and click Go.
5. Click Get drivers, software & firmware.
6. Select the Linux Server operating system version that is installed on the ProLiant system in which the HBA is installed.
7. Expand Driver - Storage Fibre Channel, then click on the appropriate driver hyperlink (if more than one version of the driver is listed, verify the latest supported version listed in the latest HPE Data Agile BURA Compatibility Matrix).
8. Below the details for the driver, you should notice several tabs. Select the Release Notes tab, then scroll to verify whether there are any Upgrade Requirements or to view additional information.
9. Select the Installation Instructions tab to verify how to install the driver. Be sure to copy or save the installation instructions.
10. Click on the Download tab to copy the file to your server.
11. A reboot might be required following the driver installation.
Installing the Linux OPEN-iSCSI module You can install and use the iscsi-initiator-utils package (Red Hat®) or open-iscsi module (SUSE). Download and install either package using your distribution's package manager (yum or YaST, for example). Detailed instructions for iscsiadm can be found in the iscsiadm man pages.
Before discovering available iSCSI target devices on an HPE Storage System from a Linux server, the target requires the Linux server's iSCSI initiator name. This name is found in the /etc/iscsi/initiatorname.iscsi file.
Once the iSCSI initiator name has been determined, discover the available iSCSI target devices on an HPE Storage System by typing the following command, where x.x.x.x is the IP address of the HPE Storage System:
iscsiadm --mode discovery --type sendtargets --portal x.x.x.x
To connect to the HPE Storage System targets, type the following command for each discovered target, where target_name is the name returned by the iscsiadm discovery:
iscsiadm --mode node -T target_name --login --portal x.x.x.x
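The two iscsiadm commands above can be combined into a small discovery-and-login loop. This is a sketch, assuming the open-iscsi iscsiadm syntax shown above; the PORTAL address is a placeholder, and the parse_targets helper (a name invented here) simply extracts the target IQN field from the sendtargets output.

```shell
#!/bin/sh
# Sketch: discover all targets on a storage-system portal and log in to each.
# PORTAL is an example address -- substitute your HPE Storage System IP.
PORTAL="${PORTAL:-192.168.0.10}"

# sendtargets discovery prints lines of the form "ip:port,tpgt iqn...";
# keep only the target IQN (the second whitespace-separated field).
parse_targets() {
    awk '{print $2}'
}

login_all() {
    iscsiadm --mode discovery --type sendtargets --portal "$PORTAL" |
    parse_targets |
    while read -r target ; do
        iscsiadm --mode node -T "$target" --login --portal "$PORTAL"
    done
}
```

Running login_all on a host with open-iscsi installed performs the same discovery and per-target login as the manual commands above.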
Storage HBAs with Linux servers Hewlett Packard Enterprise supports the Linux in-box driver (the driver supplied with the operating system distribution). However, not all hardware configurations support the in-box driver. To determine if your configuration supports the in-box driver, see the HBAs, CNAs, FlexFabric Adapters and Server LOMs Support Matrix (Linux, Citrix, VMware, and Windows), which is available on the HPE SPOCK website. You must sign up for an HPE Passport to access SPOCK.
Whether you are using the in-box drivers or the out-of-box drivers, Hewlett Packard Enterprise recommends that you install the HPE Fibre Channel Enablement Kit because it provides additional libraries and configuration utilities that enable HPE Fibre Channel Storage Arrays to work properly with Linux.
Note: If you are using any HPE management applications, you need the HBA API libraries that come with the HPE-fc-enablement RPM.
Downloading and installing the Fibre Channel Enablement Kit for Linux
Note: There has been a change to the enablement kits released after 29 April 2014; they are now vendor-specific kits. Hewlett Packard Enterprise recommends that you uninstall any previous kits prior to installation of the latest version of the enablement kit.
1. From an Internet browser, go to hpe.com.
2. Click Support.
3. Under Product Support, click HPE Servers, Storage and Networking.
4. In the Enter product name or number box, enter your HBA model (for example, AP770A, AJ763A, AJ764A, CN1000Q, CN1100E, or NC553m) and click Go.
5. Click Get drivers, software & firmware.
6. Select the Linux Server operating system version of the ProLiant system in which the HBA is installed.
7. Select the Software - Storage Controllers - FC HBA hyperlink and click the HPE Fibre Channel Enablement Kit for Linux (American, International).
8. Below the details for the driver you should notice several tabs. Select the Release Notes tab, then scroll to verify whether there are any Upgrade Requirements or to view additional information.
9. Select the Installation Instructions tab to verify how to install the HPE Fibre Channel Enablement Kit for Linux. Be sure to copy or save the installation instructions.
10. Click the Download tab to copy the RPM file to your server.
11. Browse to the directory that you downloaded the RPM to.
12. Follow the Installation Instructions that you copied or saved in step 9.
13. A reboot is required after the installation for the updates to take effect and for hardware stability to be maintained.
14. Verify that the host has successfully discovered all the expected devices (tape drives, library robotic devices, and disk-based backup devices) using one of the following methods:
– Review the devices listed from running the command cat /proc/scsi/scsi
– Review the output from the sg_inq command (which requires that sg3_utils is installed) for any of the /dev/sg devices listed in the output of the sg_map command. See figure 3 as an example.
Figure 3 Verifying devices using sg_map and sg_inq commands
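As a quick sanity check, the /proc/scsi/scsi listing can be parsed to count the tape drives and changers the host discovered. This is a sketch: the SCSIFILE variable is an overridable convention added here for testability, and the Type: strings are the ones the Linux kernel prints for tape and changer devices.

```shell
#!/bin/sh
# Sketch: count devices of a given SCSI type in a /proc/scsi/scsi listing.
# SCSIFILE can be pointed at a captured copy of the file for offline checks.
SCSIFILE="${SCSIFILE:-/proc/scsi/scsi}"

count_type() {
    # $1 is the SCSI device type as printed by the kernel,
    # e.g. Sequential-Access (tape drive) or "Medium Changer" (library robot)
    grep -c "Type:.*$1" "$SCSIFILE"
}
```

For example, count_type Sequential-Access reports the number of tape drives the host sees, which can be compared against the number of drives zoned to it.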
Installing the HPE StoreEver Tape advanced path failover drivers Red Hat Enterprise Linux Server 6.2 (x86_64), 6.3 (x86_64), 6.4 (x86_64), 6.5 (x86_64), and 6.6 (x86_64) The advanced path failover drivers for Linux replace the normal SCSI Tape and SCSI Generic drivers. The advanced path failover drivers for Linux pass all SCSI commands for devices that do not support advanced path failover through the same code path that is followed when the standard drivers are loaded, and route commands for devices that do support failover through the new PFO driver.
1 Go to hpecomsupportstorage
2 Select Tape Storage
3 Click Enterprise Class Tape Libraries (for ESL G3) or Tape Libraries (for MSL6480)
4 Click HPE StoreEver ESL G3 Tape Libraries or HPE StoreEver MSL6480 Tape Library
5. In the Download options tab, click Get drivers, software & firmware.
6 For the ESL G3 select your product For MSL6480 skip to the next step
7 Under Operating systems select OS Independent
8. Expand Driver - Storage Tape, then select the appropriate driver hyperlink.
9 Click Obtain software for the HPE StoreEver High Availability Failover Driver for your operating system
10. Click Select. An HPE Passport account is required (a sign-in link is provided).
11. After logging in using your HPE Passport, complete the required fields, then read and accept the software license agreement. Click Next.
12 On the following page select the Download tab for the HPE StoreEver High Availability Failover Driver for the version of Red Hat that is installed on your server
13 To install the drivers run the following command
rpm -ivh <filename>.rpm
14 In some cases the server will need to be rebooted to complete the installation Check the instructions provided by the RPM file output and reboot the server if requested
Technical white paper Page 21
15. The driver revision number indicates the build date of the driver and can be viewed by running:
cat /proc/scsi/sg/version
16. You can view the status of a device that is controlled by the failover driver by reading a file in the /sys file system. For example, to see the path status for /dev/sg3:
cat /sys/class/pfo/pfo3/paths
Enabling advanced path failover on a device while the driver is running If a device has the advanced path failover feature disabled, then when advanced path failover is enabled the device will reset itself, removing the old /dev file. When the device comes back up, it will be recognized as an advanced path failover device and will then operate normally as one. It may not have the same /dev file name as before the change.
Disabling advanced path failover on a device while the driver is running Disabling advanced path failover while a device is running is not recommended because the paths will not be cleanly removed and then reassociated. If advanced path failover is disabled on any device, the Linux server will need to be rebooted. When possible:
1 Power down the Linux server cleanly
2 Disable advanced path failover on the device
3 Boot the Linux server
For more detailed information related to installing the HPE StoreEver Tape advanced path failover drivers for Red Hat Linux servers, refer to the HPE StoreEver Tape Libraries Failover User Guide, specifically the section titled Installing and using Linux advanced path failover drivers.
Additional SG device files In most environments, the default number of SG device files is sufficient to support all of the required devices. In larger SAN environments, if the default number of SG device files is fewer than the combined total of disk and tape devices being allocated to the server, then additional device files need to be created. SG device files are preferable to the standard SCSI tape (st) device files because the st driver's SCSI timeout values may not be long enough to support some tape operations.
To create additional SG device files, perform the following: mknod /dev/sgX c 21 X
X signifies the number of the device file that does not already exist. For additional command options, see the mknod man page.
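A loop like the following can generate the mknod commands for any missing SG device files in a given range. The missing_sg_nodes helper is a name invented for this sketch; it echoes the commands for review rather than executing them, since creating device nodes requires root (pipe the output to sh as root to apply it).

```shell
#!/bin/sh
# Sketch: print the mknod commands needed to create /dev/sg nodes in the
# range first..last that do not already exist (SG major number is 21).
missing_sg_nodes() {  # $1=first index  $2=last index  $3=device directory
    dir="${3:-/dev}"
    i="$1"
    while [ "$i" -le "$2" ]; do
        [ -e "$dir/sg$i" ] || echo "mknod $dir/sg$i c 21 $i"
        i=$((i + 1))
    done
}
```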
Red Hat and SUSE Linux Server best practices Rewind commands being issued by rebooted Linux hosts Device discovery that occurs as part of a normal Linux server boot operation can cause a SCSI rewind command to be issued to all attached tape drives. If the data protection and archiving software does not employ SCSI reserve/release and the rewind command is received while the tape drive is busy writing, the result is a corrupted tape header and an unusable piece of backup media.
This issue could manifest itself in several ways
• A failed verify operation
• A failed restore operation
• The inability to mount a tape and read the tape header
If a backup verification is not completed the normal backup process might not detect that an issue exists
Tape devices not discovered and configured across server reboots Tape drives can disappear from Linux servers after the host reboots when using older versions of the HBA drivers. Adding the line "hp_rescan -a" to /etc/rc.d/rc.local resolves the issue. The hp_rescan utility was previously included and installed with older versions of the HPE Linux FCoE/FC Driver Kit.
Note: The latest versions of the Fibre Channel Enablement for Linux kits no longer include the HPE fibreutils package, which contains the hp_rescan utility. The fibreutils package can be manually downloaded and installed by following steps 1-6 from the section Downloading and installing the Fibre Channel Enablement Kit for Linux. For step 5, select Software - Storage Controllers - FC HBA.
This issue, which affects Red Hat installations and intermittently some SUSE Linux installations, is understood to be an issue with the mid-layer SCSI driver and its interaction with SCSI-2 tape automation products. The permanent resolution to this issue is to upgrade to the latest FC driver kit.
Enable iSCSI target devices to remain persistent across system reboots To enable the iSCSI target devices to remain persistent across system reboots, the open-iscsi service must be configured to run at system startup. This can be done by issuing the following commands:
For Red Hat 7 and SUSE 12:
systemctl enable iscsid.service
systemctl restart iscsid.service
For earlier versions of Red Hat: chkconfig iscsi on
For earlier versions of SUSE: chkconfig open-iscsi on
To verify that this configuration change has been accepted, run the following command:
For Red Hat 7 and SUSE 12:
systemctl -a | grep iscsi
iscsi-shutdown.service   error  inactive dead    Logout off all iSCSI sessions on shutdown
iscsi.service            loaded inactive dead    Login and scanning of iSCSI devices
iscsid.service           loaded active   running Open-iSCSI
iscsiuio.service         loaded active   running iSCSI UserSpace I/O driver
iscsid.socket            loaded active   running Open-iSCSI iscsid Socket
iscsiuio.socket          loaded active   running Open-iSCSI iscsiuio Socket
For earlier versions of Red Hat:
chkconfig --list iscsi
iscsi 0:off 1:off 2:off 3:on 4:off 5:on 6:off
For SUSE servers:
chkconfig --list open-iscsi
open-iscsi 0:off 1:off 2:off 3:on 4:off 5:on 6:off
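The per-release commands above can be wrapped in a helper that returns the right persistence command for a given release family. The iscsi_enable_cmd name and the release keywords are conventions invented for this sketch; the commands themselves are the ones listed above.

```shell
#!/bin/sh
# Sketch: emit the service-persistence command for a given release family.
iscsi_enable_cmd() {  # $1 = rhel7 | sles12 | rhel-old | sles-old
    case "$1" in
        rhel7|sles12)  echo "systemctl enable iscsid.service" ;;
        rhel-old)      echo "chkconfig iscsi on" ;;
        sles-old)      echo "chkconfig open-iscsi on" ;;
        *)             echo "unknown release family: $1" >&2 ; return 1 ;;
    esac
}
```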
LUNs shifting after reboot The Linux 2.6 kernel and later enhanced the management of attached devices through the introduction of udev. The udev device manager provides users with a persistent naming process for all devices across reboots. For details on how to configure udev, refer to the appropriate Linux distribution documentation.
If your data protection and archiving software requires persistent device mapping, use the software's device configuration wizard to ensure proper configuration.
Recommended changes to queue depth and timeout values Changes to queue depth and timeout values are recommended when operating HPE StoreEver ESL G3 Tape Libraries and HPE StoreEver MSL Tape Libraries with Linux-based hosts. Recommended changes are as follows:
• Queue Depth
The queue depth when operating the MSL6480 or MCB Version 1 ESL G3 libraries should be set to one (1), as there is only one active robot to complete Move Medium commands. With dual-robot MCB Version 2 ESL G3 libraries, the queue depth should be set to two (2), as the library has two active robots that can complete Move Medium commands.
HPE LTO drives are capable of handling command queues of four or five commands, but if hosts continue to send commands past that amount, the drive or the library being hosted by that drive will start to return status messages saying that the queue is full and the host should wait 500 ms. If the host doesn't stop sending commands at this point, the delays in returning status for commands can be long enough that the drive appears hung. As such, care should be taken to ensure that the queue depth is the correct length to avoid this scenario, preferably by using the recommended queue depths provided above.
With Linux-based hosts, this command can show what the queue depth is set to for each generic SCSI device: find /sys/class/scsi_generic/*/device/queue_depth -exec grep -H "" {} \;
If required, the sg_inq command can help correlate SCSI generic mapping to specific devices: sg_inq /dev/sg<X>
Note: <X> is the sg number provided in the output from the previous command.
• Timeouts
Most applications increase the timeout value for motion commands but will continue to rely on the default timeout value for Read Element Status commands. This can especially be of concern in larger partitioned libraries, where multiple Read Element Status commands to the different partitions are outstanding and the default timeout value is reached before a response is provided. Depending on whether udev rules are in effect, the default timeout value on Linux-based hosts tends to be either thirty (30) or sixty (60) seconds. Given all of the above, it is recommended that the default timeout value be changed to twenty minutes on all Linux-based hosts operating HPE LTO drives in HPE StoreEver ESL G3 Tape Libraries and HPE StoreEver MSL Tape Libraries, in order to allow multiple commands to complete successfully without hitting the default timeout value.
With Linux-based hosts, this command can show what the default timeout value is currently set to: find /sys/class/scsi_generic/*/device/timeout -exec grep -H "" {} \;
If required, the sg_inq command can help correlate SCSI generic mapping to specific devices: sg_inq /dev/sg<X>
Note: <X> is the sg number provided in the output from the previous command.
The detailed procedure for making the recommended changes to the command queuing and default timeout values for Linux-based hosts can be viewed in the following Engineering Advisory: HPE StoreEver ESL G3 Tape Libraries, MSL Tape Libraries, and 1/8 G2 Autoloaders - Recommended Changes to Queue Depth and Timeout Values With Linux-Based Hosts.
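The queue-depth and timeout settings above can be applied in one pass over sysfs. This is a sketch, not an HPE-supplied script: the SYSROOT, QDEPTH, and TIMEOUT variables are conventions added here so the loop can be exercised against a fake sysfs tree, and the 1200-second value reflects the twenty-minute recommendation above (use QDEPTH=2 for dual-robot MCB Version 2 ESL G3 libraries).

```shell
#!/bin/sh
# Sketch: apply the recommended queue depth and a 20-minute (1200 s) timeout
# to every generic SCSI device under SYSROOT/class/scsi_generic.
SYSROOT="${SYSROOT:-/sys}"
QDEPTH="${QDEPTH:-1}"     # 1 for MSL6480 / single-robot ESL G3
TIMEOUT="${TIMEOUT:-1200}" # twenty minutes, in seconds

tune_sg_devices() {
    for dev in "$SYSROOT"/class/scsi_generic/*/device; do
        [ -d "$dev" ] || continue
        echo "$QDEPTH"  > "$dev/queue_depth"
        echo "$TIMEOUT" > "$dev/timeout"
    done
}
```

Note that settings written this way do not persist across reboots; a udev rule or boot script is still needed for permanence, as described in the Engineering Advisory.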
HPE-UX Server Installing HBA drivers in the kernel HPE-UX 11i v2 (11.23 IA-64) 1. The drivers schgr, sctl, and stape must all be installed in the kernel. To see if these drivers are installed, enter the following command:
/usr/sbin/kcmodule schgr sctl stape
The following example shows output from kcmodule where the stape driver is not installed:
Module State Cause
schgr  static explicit
sctl   static depend
stape  unused
If one or more of the above drivers is in the unused state, they must be installed in the kernel. If they are all installed (static state), proceed to the next section, Final host configurations.
2. Use kcmodule to install modules in the kernel. For example, to install the stape module, run the following command:
/usr/sbin/kcmodule stape=static
Enter yes to back up the current kernel configuration file and initiate the new kernel build.
3. Reboot the server to activate the new kernel:
cd /
/usr/bin/shutdown -r now
HPE-UX 11i v3 (11.31 IA-64) 1. The drivers schgr, sctl, stape, eschgr, esctl, and estape must all be installed in the kernel. To see if these drivers are installed, enter the following command:
/usr/sbin/kcmodule sctl esctl schgr eschgr stape estape
The following example shows output from kcmodule where the stape driver is not installed:
Module State Cause
sctl   static best
esctl  static depend
schgr  static best
eschgr static best
stape  unused
estape static best
If one or more of the above drivers is in the unused state, they must be installed in the kernel. If they are all installed (static state), proceed to the next section, Final host configurations.
2. Use kcmodule to install modules in the kernel. For example, to install the stape module, use the following command:
/usr/sbin/kcmodule stape=static
Enter yes to back up the current kernel configuration file and initiate the new kernel build.
3. Reboot the server to activate the new kernel:
cd /
/usr/bin/shutdown -r now
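The kcmodule check in step 1 can be automated by parsing its output for modules still in the unused state. The unused_modules helper is a name invented for this sketch; it reads captured kcmodule output on stdin, assuming the three-column Module/State/Cause layout shown above, so the logic can be exercised on any host.

```shell
#!/bin/sh
# Sketch: given "Module State Cause" lines from kcmodule on stdin,
# print the names of modules that are not yet installed (state "unused").
unused_modules() {
    awk '$2 == "unused" {print $1}'
}
```

On an HPE-UX host this would typically be driven as: /usr/sbin/kcmodule schgr sctl stape | unused_modules, with any printed module then installed via kcmodule <module>=static.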
Installing the HPE-UX iSCSI Software Initiator The iSCSI Software Initiator is located at the HPE Software Depot.
1. Go to software.hpe.com.
2. Enter iSCSI Software Initiator in the Search Software Depot box located on the upper right side of the website.
3. When the search results show iSCSI Software Initiator, click Select. An HPE Passport account is required (a sign-in link is provided).
4. After logging in using your HPE Passport, complete the required fields, scroll down, then read and accept the software license agreement for the order. Click Next.
5. Under Documentation, click the Download tab for the Installation Instructions to download instructions for using the Software Distributor tool to install the iSCSI Software Initiator.
6. Under Software, click the Download tab for the iSCSI Software Initiator version that you would like to download.
7. After installing the iSCSI Software Initiator and rebooting, you can verify that the installation was successful by running the following command:
swlist iSCSI-00
If the HPE-UX iSCSI Software Initiator is installed correctly, the output will be:
HPE-UX 11.23:
# Initializing...
# Contacting target "localhost"...
# Target: localhost
# iSCSI-00 B.11.23.03e HPE-UX iSCSI Software Initiator
iSCSI-00.ISCSI-SWD B.11.23.03e HPE-UX iSCSI Software Initiator
HPE-UX 11.31:
# Initializing...
# Contacting target "localhost"...
# Target: localhost
# iSCSI-00 B.11.31.01 HPE-UX iSCSI Software Initiator
iSCSI-00.ISCSI-SWD B.11.31.01 HPE-UX iSCSI Software Initiator
Final host configurations 1. Run ioscan to verify that the host detects the tape devices:
ioscan
For HPE-UX 11.23 legacy device special files (DSFs) or persistent DSFs, run the following commands:
ioscan -fnkC tape
ioscan -fnkC autoch
2. For HPE-UX 11.31 persistent DSFs, run the following commands:
ioscan -fnNkC tape
ioscan -fnNkC autoch
Note: Some data protection and archiving software might not currently support HPE-UX 11.31 persistent DSFs for tape. Review your data protection and archiving software documentation for more information.
3. To verify that the host detects iSCSI devices, issue the ioscan command as follows for HPE-UX 11.23:
ioscan -fnC iscsi
Issue the ioscan command as follows for HPE-UX 11.31:
ioscan -fnNC iscsi
If the software is installed correctly, the generated output will look similar to this:
Class I HW Path Driver SW State HW Type Description
=====================================================================
iscsi 0 255/0 iscsi CLAIMED VIRTBUS iSCSI Virtual Node
4. If no device files have been installed, enter the following commands:
insf -C tape -e
insf -C autoch -e
The command line tool for configuring the HPE-UX iSCSI Software Initiator is iscsiutil. Detailed instructions for iscsiutil can be found in the iscsiutil man pages. If using iscsiutil to configure the HPE-UX iSCSI Software Initiator, add the path for iscsiutil and other iSCSI executables to the root path: PATH=$PATH:/opt/iscsi/bin
You should now be able to find the iSCSI initiator node for the HPE-UX host: iscsiutil -l
To discover available iSCSI target devices on an HPE Storage System, type the following command, where x.x.x.x is the IP address of the HPE Storage System: iscsiutil -aI x.x.x.x
Installing the HPE StoreEver Tape advanced path failover drivers for HPE-UX 11.31 Advanced path failover for HPE-UX is implemented by updating HPE-UX drivers to support advanced path failover with the LTO-6 tape drives. The drivers function as both failover and non-failover drivers.
The updated drivers are:
• HPE-UX tape driver (estape), used for data path failover
• HPE-UX media changer driver (eschgr), used for control path failover
• HPE-UX SCSI stack driver (esctl), used for data path and control path failover
To download and install the HPE StoreEver Tape advanced path failover drivers for HPE-UX 11.31:
1. Get the latest HPE-UX patches from h20566.www2.hpe.com/portal/site/hpsc/patch/home
Note: To access and download HPE-UX patches you must have:
1. An HPE Passport account (a sign-in link is provided).
2. An active HPE support agreement linked to your HPE Support Center profile. The active Hewlett Packard Enterprise support agreement must:
– Cover the specific HPE operating systems linked to your HPE Support Center user profile
– Include software updates or previous version support privileges
Links are provided to view your current patch access privileges or to contact Hewlett Packard Enterprise.
2. To locate the patches, search for estape, eschgr, and esctl or the patch number, and then check the superseded-by field to see if there is a superseding patch.
3. To install the advanced path failover drivers, use the standard HPE-UX kernel patch installation process to install the following patches on the HPE-UX servers running HPE-UX 11.31:
– HPE-UX tape driver patch (estape): PHKL_43680 or superseding patch
– HPE-UX media changer driver patch (eschgr): PHKL_43681 or superseding patch
– HPE-UX SCSI stack (mass storage stack) driver patch (esctl): PHKL_43819 or superseding patch
4. The server will automatically reboot as part of the installation process.
You can use ioscan to view the tape and library (media changer) devices connected to the HPE-UX server. The device special file (DSF) is listed as the last item in the description:
ioscan -knNfC tape
ioscan -knNfC autoch
Finding the lockdown path The load-balance policy used to route data on multiple paths to a tape drive or library is called the "path_lockdown" policy. Use the scsimgr get_info command to see the current lockdown path for a library. For example:
scsimgr get_info -D /dev/rchgr/autoch35
STATUS INFORMATION FOR LUN /dev/rchgr/autoch38
...
LUN path used when policy is path_lockdown = 0/0/0/9/0/0/1.0x50014382c6e4f009.0x1000000000000
scsimgr get_attr -D /dev/rtape/tape28_BEST
SCSI ATTRIBUTES FOR LUN /dev/rtape/tape28_BEST
name = lpt_lockdown
current = 0/0/0/9/0/0/0.0x100000e00222a6c1.0x2000000000000
default =
saved =
For additional information, see the HPE-UX man pages: scsimgr (1M), ioscan (1M), mknod (2), mksf (1M), rmsf (1M).
Troubleshooting advanced path failover for HPE-UX 11.31 Advanced path failover errors are logged in the /var/adm/syslog/syslog.log file as part of the default SCSI I/O tracing function of HPE-UX. You can use standard file viewing commands, including cat, vi, dmesg, and tail -f, to view syslog.log.
Enabling or disabling advanced path failover for HPE-UX 11.31 Advanced path failover is disabled by default. When advanced path failover is disabled, the driver operates as if the device is not capable of using the advanced path failover feature.
When advanced failover is enabled for the library or tape drive, the device resets itself and must be opened using the device special file before the driver will recognize it as an advanced path failover device and use the failover features of the driver. Opening the device is generally done by the host applications.
You can enable or disable advanced path failover using the library web-based interface For instructions refer to the following sections of the HPE StoreEver Tape Libraries Failover User Guide
• Enabling control path failover under Configuring failover for the HPE StoreEver ESL G3 Tape Libraries
• Enabling data path failover under Configuring failover for HPE StoreEver MSL6480 Tape Libraries
When advanced path failover is disabled, the passive control paths to the library will go into an error state (NO_HW) in the ioscan (1M) command output. These stale entries do not affect the function of the library. To clear these errors so the device can be accessed using its DSF:
1. On the HPE-UX host, run rmsf -H on the lunpath hardware paths that are in the NO_HW state. For example: rmsf -H 0/4/0/0/0/1.0x50014380023560d4.0x1000000000000
2. Run ioscan -kfNH <HBA path>. For example: ioscan -kfNH 0/4/0/0/0
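The NO_HW cleanup in step 1 can be generated mechanically from ioscan output. This sketch assumes the lunpath hardware path is the third whitespace-separated column and the S/W state the fifth, which should be confirmed against your host's ioscan -kfN output before piping the result to sh; the stale_lunpath_cmds name is an invention of this sketch.

```shell
#!/bin/sh
# Sketch: turn ioscan lunpath lines on stdin into the rmsf commands that
# clear stale NO_HW entries. Output is printed for review, not executed.
stale_lunpath_cmds() {
    awk '$5 == "NO_HW" {print "rmsf -H " $3}'
}
```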
Hewlett Packard Enterprise recommends only enabling or disabling advanced path failover when the library is not opened by any applications. If advanced path failover is disabled while an application is accessing the library, all of the library's lunpaths will go offline and I/O requests to the library will fail.
For more detailed information related to installing the HPE StoreEver Tape advanced path failover drivers for HPE-UX 11.31 servers, refer to the HPE StoreEver Tape Libraries Failover User Guide, specifically the section titled Installing and using HPE-UX advanced path failover drivers.
HPE-UX Server best practices HPE-UX 11.31 can experience poor I/O performance on VxFS file systems due to memory blocking during high system memory usage. The HPE-UX 11.31 kernel subsystems and file I/O data cache can consume up to 90 percent of system memory during normal operation. When a heavy file I/O application such as data protection and archiving software starts, the memory usage can reach close to 100 percent. Under such conditions, if VxFS attempts to allocate additional memory for inode caching, this can result in memory blocking and subsequent poor file I/O performance. In extreme conditions, this scenario can cause data protection and archiving software to time out during file system reads, which could result in backup job failures.
Poor I/O performance resolution To avoid backup job failures due to memory blocking, modify the kernel tunable parameter vx_ninode. The vx_ninode parameter determines the number of inodes in the inode table to help VxFS in caching. By default, the size of the inode cache is decided (auto-tuned) at boot time by VxFS depending on the amount of physical memory in the machine. When modifying the value of vx_ninode, HPE recommends the following:
Table 4 Tuning vx_ninode
PHYSICAL MEMORY OR KERNEL AVAILABLE MEMORY VXFS INODE CACHE (NUMBER OF INODES)
1 GB 16384
2 GB 32768
3 GB 65536
> 3 GB 131072
To determine the current value of vx_ninode, run the following at the shell prompt: /usr/sbin/kctune vx_ninode
To set vx_ninode to 32768, run the following command at the shell prompt: /usr/sbin/kctune vx_ninode=32768
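Table 4 can be restated as a small lookup, useful when scripting kctune across many hosts. The vx_ninode_for_mem_gb function name is an invention of this sketch; the thresholds and values come directly from the table above.

```shell
#!/bin/sh
# Sketch: map physical memory (in GB) to the recommended vx_ninode value
# from Table 4 (1 GB -> 16384, 2 GB -> 32768, 3 GB -> 65536, > 3 GB -> 131072).
vx_ninode_for_mem_gb() {
    if   [ "$1" -le 1 ]; then echo 16384
    elif [ "$1" -le 2 ]; then echo 32768
    elif [ "$1" -le 3 ]; then echo 65536
    else                      echo 131072
    fi
}
```

The result can then be applied with, for example, /usr/sbin/kctune vx_ninode=$(vx_ninode_for_mem_gb 2).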
Note: The kernel tunable parameters filecache_min and filecache_max control the amount of physical memory that can be used for caching file data during system I/O operations. By default, these parameters are automatically determined by the system to better balance the memory usage among the file system I/O intensive processes and other types of processes. The values of these parameters can be lowered to allow a larger percentage of memory to be used for purposes other than file system I/O caching. Determining whether or not to modify these parameters depends on the nature of the applications running on the system.
HPE-UX 11.23: Disabling rewind-on-close devices with st_san_safe Turning on the HPE-UX 11.23 kernel tunable parameter st_san_safe disables tape DSFs that are rewind-on-close. This prevents utilities like mt from rewinding a tape that is in use by another utility.
Some applications or utilities require rewind-on-close DSFs (for example, the frecover utility that comes with HPE-UX). In this case, disabling rewind-on-close devices renders the utility unusable. Most data protection and archiving software, such as HPE Data Protector, can be configured to use SCSI reserve/release, which protects them from rogue rewinds by other utilities. The requirements of your data protection and archiving environment should be considered when determining whether or not to enable st_san_safe.
To determine if rewind-on-close devices are currently disabled, enter: /usr/sbin/kctune st_san_safe
If the value of st_san_safe is 1, then rewind-on-close devices are disabled. If the value is 0, then rewind-on-close devices are enabled. To disable rewind-on-close devices, enter: /usr/sbin/kctune st_san_safe=1
Oracle Solaris Server Solaris 10 Update 11 (SPARC), Solaris 10 Update 11 (x86/x64), Solaris 11.2 (SPARC), Solaris 11.2 (x64)
How to enable the iSCSI Software Initiator 1. For Solaris 10 Update 11 (SPARC) and 11.2 (SPARC), enable the iSCSI services using the command:
svcadm enable network/iscsi/initiator
2. Verify the iSCSI services are running:
svcs -a | grep 'iscsi/initiator'
online 10:10:28 svc:/network/iscsi/initiator:default
3. For earlier versions of Solaris, enable the iSCSI services using the command:
svcadm -v enable iscsi_initiator
svc:/network/iscsi_initiator:default enabled
4. Verify the iSCSI services are running:
svcs -a | grep iscsi_initiator
online 10:10:28 svc:/network/iscsi_initiator:default
The command line tool for configuring the Solaris iSCSI Software Initiator is iscsiadm. Detailed instructions for iscsiadm can be found in the iscsiadm man pages. If using iscsiadm to configure the Solaris iSCSI Software Initiator, run the following command to find the iSCSI initiator node for the Solaris host: iscsiadm list initiator-node
To discover available iSCSI target devices on an HPE Storage System, type the following commands, where x.x.x.x is the IP address of the HPE Storage System:
iscsiadm add discovery-address x.x.x.x:3260
iscsiadm list discovery-address
Discovery Address: x.x.x.x:3260
iscsiadm modify discovery -t enable
iscsiadm list discovery
List the configured iSCSI target devices using the following command: iscsiadm list target
Oracle Solaris Server best practices Troubleshooting with the cfgadm utility • Getting the status of FC devices using cfgadm:
cfgadm -al
Example output for above command
This output shows a media changer at LUN 0 for the 100000e0022229fa9 WWN, and tape and disk devices at LUN 0 for other WWNs. The devices are connected, have been configured, and are ready for use.
The cfgadm -al -o show_FCP_dev command can be used to show the devices for all LUNs of each Ap_Id
• Fixing a device with an "unusable" condition
If the condition field of a device in the cfgadm output is "unusable", then the device is in a state in which the server cannot use it. This may have been caused by a hardware issue. In this case, do the following to resolve the issue:
– Resolve the hardware issue so the device is available to the server.
– After the hardware issue has been resolved, use the cfgadm utility to verify device status and to mend the status if necessary.
• Use cfgadm to get device status: cfgadm -al
– For a device that is "unusable", use cfgadm to unconfigure the device and then re-configure the device. For example (this is an example only; your device WWN will be different):
cfgadm -c unconfigure c4::100000e0022286ec
cfgadm -f -c configure c4::100000e0022286ec
– Use cfgadm again to verify that the condition of the device is no longer "unusable": cfgadm -al
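The unconfigure/reconfigure sequence above can be generated from cfgadm -al output. This sketch assumes the usual Ap_Id, Type, Receptacle, Occupant, Condition column order (condition last) and prints the command pairs for review rather than running them; unusable_fix_cmds is a name invented here.

```shell
#!/bin/sh
# Sketch: for each cfgadm line whose condition column is "unusable",
# print the unconfigure/reconfigure command pair for that Ap_Id.
unusable_fix_cmds() {
    awk '$NF == "unusable" {
        print "cfgadm -c unconfigure " $1
        print "cfgadm -f -c configure " $1
    }'
}
```

On a live host this would typically be driven as: cfgadm -al | unusable_fix_cmds, with the output reviewed before being piped to sh.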
IBM AIX Server AIX 6.1 (TL9), AIX 7.1 (TL3)
Installing HBA device driver Install the IBM (5729/5735/5273/5758/5759/5773/5774) HBA and restart the server.
1. Ensure that the HBA is recognized. At the shell prompt, type: lsdev -Cc adapter
There is a line in the output similar to the following: fcs0 Available 1D-08 FC Adapter
If the adapter is not recognized, check that the correct HBA fileset (driver) is installed:
6228 lslpp -L|grep devices.pci.df1000f7
6239 lslpp -L|grep devices.pci.df1080f9
5716 lslpp -L|grep devices.pci.df1000fa
5759 lslpp -L|grep devices.pci.df1000fd
5773 lslpp -L|grep devices.pciex.df1000fe
5774 lslpp -L|grep devices.pciex.df1000fe
There are lines in the output for lslpp similar to the following for a 6239 HBA:
devices.pci.df1080f9.diag 5.1.0.1 C F PCI-X FC Adapter Device
devices.pci.df1080f9.rte 5.1.0.1 C F PCI-X FC Adapter Device
2. For information about the HBA, such as the WWN, execute the following command: lscfg -vl fcs0
The output will look similar to the following:
3. To see the version of microcode (firmware) being run by the HBA, use the following command: lsmcode -c -d <device>
Microcode and other updates can be found at ibm.com/support/fixcentral.
4. After the HBA has successfully logged into the SAN fabric and the necessary zoning is configured, configure the HBA and devices within the fabric. At the prompt, type: cfgmgr -l <devicename> -v
Note: Running cfgmgr without a -l argument may generate a "devices.fcp.changer" error. This is a result of the cfgmgr device scan receiving a response from the auto-changer device, for which AIX does not have a specific driver. Under these conditions, the error message does not indicate a problem and is for information only.
5. Within the command, <devicename> is the name from the output of the lsdev command in step 1, such as fcs0.
6. To ensure all tape device files are available, at the prompt type: lsdev -HCc tape
7. By default, AIX creates tape devices with a fixed block length. To change the devices to have variable block lengths, at the prompt type: chdev -l <tapedevice> -a block_size=0
8. Configuration of the tape devices (where tape devices are rmt0, rmt1, and so on) is complete.
Note: HPE LTO tape drives use the IBM host tape driver. When properly configured, a device listing will show the tape device as follows:
For IBM native HBAs: Other FC SCSI Tape Drive
For non-IBM native HBAs: Other SCSI Tape Drive
9. To configure Fast I/O Failure for Fibre Channel devices after link events in the SAN, change the fast fail parameter as in the example below: chdev -l fscsiN -a fc_err_recov=fast_fail
Within the command, the N in fscsiN is the same number as in the adapter name from the output of the lsdev command in step 1 (e.g., fcs0 would be fscsi0).
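The fscsiN naming rule in step 9 lends itself to a small helper. This is an illustrative sketch, not HPE- or IBM-supplied tooling; the adapter names are assumptions, and the chdev commands are printed rather than executed, since chdev exists only on AIX:

```shell
# Sketch: derive the fscsiN protocol device name from an fcsN adapter name
# (same number N), then print the chdev command that enables Fast I/O
# Failure for it. Adapter names fcs0/fcs1 are illustrative.
fscsi_for() {
  printf '%s\n' "$1" | sed 's/^fcs/fscsi/'
}

for adapter in fcs0 fcs1; do
  echo "chdev -l $(fscsi_for "$adapter") -a fc_err_recov=fast_fail"
done
```

On an AIX host, the printed commands would be reviewed and then run as root.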
IBM AIX Server best practices
Persistent binding
To prevent device shifting after a host reboot, you can enable the dynamic tracking feature in earlier versions of AIX. AIX 7 and above should have this parameter enabled by default.
To enable dynamic tracking of FC devices, set this attribute to dyntrk=yes, as shown in the example: chdev -l fscsiN -a dyntrk=yes
Within the command, the N in fscsiN is the same number as in the adapter name from the output of the lsdev command in step 1 (e.g., fcs0 would be fscsi0).
Note: For an IBM Virtual I/O Server (VIOS) running AIX logical partitions (LPARs), when using N-Port ID Virtualization (NPIV) with AIX LPARs, it is strongly recommended to upgrade VIOS to version 2.2.3.4 or greater.
Virtual machine support
VM software is used for partitioning, consolidating, and managing computing resources, allowing multiple unmodified operating systems and their applications to run in VMs that share physical resources. Each VM represents a complete system with processors, memory, networking, storage, and BIOS. See table 5 for tape and disk support for virtualization products.
Table 5. VM tape/VTL/NAS support
Columns: VM Product | StoreEver Direct Attached SCSI | StoreEver Direct Attached SAS | StoreEver FC & FCoE SAN | StoreOnce VTL | StoreOnce iSCSI VTL | StoreOnce Catalyst over Ethernet (CoE) | StoreOnce Catalyst over Fibre Channel (CoFC) | StoreOnce NAS | Support Notes
• Citrix XenServer Host: No. No support statement for tape at this time.
• Citrix XenServer Guest VM: No, Yes, Yes, No, Yes. For iSCSI tape devices, the iSCSI Software Initiator must run in the VM operating system. D2D SAN shares must be accessed directly in the VM operating system, not attached through the hypervisor.
• HPEVM Host: Yes, No, Yes, Yes, Yes, No, Yes. Tape drive/media changer must not be attached to a guest VM while being used by the host.
• HPEVM Guest VM: Yes, No, Yes, Yes, Yes, No, Yes. Tape drive/media changer must only be attached to a single guest VM at a time.
• Hyper-V Host: Yes, Yes, Yes, Yes, Yes, No, Yes.
• Hyper-V Guest VM: No, No, No, Yes, Yes, No, Yes. For iSCSI tape/media changer devices, the iSCSI Software Initiator must run in the VM operating system. D2D NAS shares must be accessed directly in the VM operating system, not attached through the hypervisor.
• VMware ESX Host: Yes, No, No⁷, No⁷, No, No, No. Direct attached SCSI tape drive is multi-target only (no multi-LUN) and must not be attached to a VM while being used by the host. HPE does not test or validate direct attached SCSI tape on VMware ESX/ESXi hosts and does not provide support for tape drives and tape libraries in such configurations.⁸

7. SAN tape devices (FC and iSCSI) are not supported directly by a VMware ESX host; use the vStorage API for Data Protection with a backup server and VM software snapshots to allow FC and iSCSI backups.
8. For ESX 4.1 Server tape support, see the ESX 4.1 Fibre Channel SAN Configuration Guide. For ESX 5.0 Server tape support, see the ESXi 5.0 vSphere Storage Guide. For ESX 5.1 Server tape support, see the ESXi 5.1 vSphere Storage Guide. For ESX 5.5 Server tape support, see the ESXi 5.5 vSphere Storage Guide.
Table 5. VM tape/VTL/NAS support (continued)
• VMware Guest VM: Yes, No, No, Yes, Yes, No⁹, Yes. Direct attached SCSI tape drive is multi-target only (no multi-LUN) and must only be attached to a single VM at a time. For iSCSI tape/media changer devices, the iSCSI Software Initiator must run in the VM operating system. D2D NAS shares must be accessed directly in the VM operating system, not attached through the hypervisor. HPE does not test or validate direct attached SCSI tape on VMware guest VMs and does not provide support for tape drives and tape libraries in such configurations.
• VMware vStorage API for Data Protection: Yes, Yes, Yes, Yes, Yes, No, Yes. FC SANs and shared tape devices are limited to a physical backup server.
Note: Be sure to do the following:
• Refer to your data protection and archiving software documentation for supported VM backup methods.
• Refer to the VM documentation for supported backup devices.
VMware Server
Note: VMware does not support ESXi SAN attached tape devices. VMware vStorage APIs for Data Protection (VADP) with an off-host backup server can be used to manage SAN devices.
• VADP offloads backup responsibility from ESXi hosts to a dedicated backup server or servers, reducing the load on ESXi hosts. VADP provides full-image backup and restore capabilities for all VMs, and file-based backups for Microsoft Windows and Linux VMs.
• VMs can also be set up as backup servers with locally mapped NAS shares or iSCSI backup devices. Refer to your data protection and archiving software documentation for VM support.
• VMs can also be set up for LAN backup the same as a regular client. Refer to your data protection and archiving software documentation for details.
• For recommendations on VMware VM backup and recovery to HPE StoreOnce Backup, go to hpe.com/storage/BURACompatibility, scroll down to Data Agile BURA Solution White Papers, then click on the VMware hyperlink across from "White Papers - Databases and Virtual Machines" to view the associated white papers.
9. Yes when using HPE StoreOnce Recovery Manager Central (RMC) only.
HPE Integrity Virtual Machines
Hewlett Packard Enterprise supports, certifies, and sells HPE Integrity Virtual Machines (HPEVM) virtualization software on HPE Integrity servers.
HPEVM is an application installed on an HPE-UX Server that allows multiple unmodified operating systems (HPE-UX, Windows, and Linux) and their applications to run in VMs that share physical resources.
The HPE Virtual Server Environment (VSE) for HPE Integrity provides an automated infrastructure that can adapt in seconds with mission-critical reliability. HPE VSE allows you to optimize server utilization in real time by creating virtual servers that can automatically grow and shrink based on business priorities and service levels.
Note: The HPE Integrity VM host and VMs do support FC SAN connected tape, Virtual Library Systems (VLS) devices, and HPE StoreOnce backup systems.
• Off-host backups using HPE storage array hardware mirroring or snapshots can be used to shorten the backup windows and off-load resources required for backup.
• VMs can also be set up for LAN backup the same as a regular client or media host. Refer to your data protection and archiving software documentation for details.
Microsoft Hyper-V
Note: Hewlett Packard Enterprise does not test or support Hyper-V VMs with SAN or direct attach tape drives. The Hyper-V host or a backup server can be used to manage such devices.
• The Volume Shadow Copy Service (VSS) Hyper-V writer can be used to quiesce Windows VMs and create a snapshot on the Hyper-V host volume. VMs that cannot be quiesced can be placed in the Saved state before snapshot creation. The snapshots are then used for image or file backup of the VMs. If a VM was placed in the Saved state, Hyper-V will return the VM to its original state. Review your data protection and archiving software documentation for details.
• VMs can also be set up for LAN backup the same as a regular client. Refer to your data protection and archiving software documentation for details.
• VMs can also be set up as backup servers with locally mapped NAS shares or iSCSI backup devices. Refer to your data protection and archiving software documentation for VM support.
© Copyright 2015 Hewlett Packard Enterprise Development LP. The information contained herein is subject to change without notice. The only warranties for Hewlett Packard Enterprise products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. Hewlett Packard Enterprise shall not be liable for technical or editorial errors or omissions contained herein.
Citrix is a registered trademark of Citrix Systems, Inc. and/or one or more of its subsidiaries, and may be registered in the United States Patent and Trademark Office and in other countries. Linux is the registered trademark of Linus Torvalds in the U.S. and other countries. McAfee is a trademark or registered trademark of McAfee, Inc. in the United States and other countries. Microsoft, Windows, and Windows Server are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries. Oracle is a registered trademark of Oracle and/or its affiliates. Red Hat is a registered trademark of Red Hat, Inc. in the United States and other countries. SAP HANA is the trademark or registered trademark of SAP SE in Germany and in several other countries. VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions.
4AA5-7983ENW, December 2015
Installing backup software and patches
After all components on the SAN are logged in and configured, the system is ready for the installation of any supported backup software. Refer to the installation guide for your particular software package, or contact the vendor for detailed installation procedures and requirements. After installing the backup software, check with the software vendor for the latest updates and patches. If any updates or patches exist for your backup software, install them now.
Learn more at hpe.com/storage/StoreEver
• Has connectivity been verified using HPE Library and Tape Tools software or operating system specific tools (Linux: sg3_utils; HPE-UX: System Administration Manager [SAM]; Solaris: cfgadm; AIX: System Management Interface Tool [SMIT]; etc.)?
• Is the minimum patch/service pack level support installed for the data protection and archiving software?
• If you're having issues with the FC connections, they might need to be cleaned. Avoid touching fingers to any surface that is being used for cleaning. Recommendations for cleaning of FC cables and small form-factor pluggable (SFP) connections:
– Air dusters are used to blow loose particles from the optical fiber connector end-face, or to dry up solvent (isopropyl alcohol) residue after a wet cleaning. Not all air dusters are the same, and optic grade should be used.
– Use lint-free wipes to gently wipe the ferrule and the end-face surface of the connector with the lint-free pad. Make sure the pad makes full contact with the end-face surface.
– Use lint-free wipes and isopropyl alcohol. Gently wipe the ferrule and the end-face surface of the connector with an alcohol pad. Make sure the pad makes full contact with the end-face surface. Then wipe the end-face surface on a dry lint-free wipe.
– In-adapter ferrule cleaner, or in-situ cleaning. This semi-automated fiber optic cleaning method is specially designed for fiber optic connectors and SFP ends. It can remove contaminants that forced air will not. An in-situ device can, however, make a tape drive FC port worse. Both 1.25 mm and 2.5 mm versions are available.
HPE StoreOnce Catalyst
With HPE StoreOnce Catalyst, movement of deduplicated data across the enterprise is even easier. There's no need to deduplicate and rehydrate data at each step; data can be replicated from sites to a central data center or a disaster recovery site in deduplicated form, reducing network bandwidth requirements. All backup and replication jobs may be seamlessly managed by the backup application at the central data center.
Key features of StoreOnce Catalyst:
• Catalyst over Fibre Channel provides all the ISV control and source-side deduplication benefits of current StoreOnce Catalyst, but via Fibre Channel fabric.
• Federated Catalyst allows Catalyst stores to span nodes, simplifying backup management and optimizing available storage in large environments.
• Catalyst stores allow backup applications to utilize low-bandwidth deduplication (server-side deduplication on³) or high-bandwidth deduplication (server-side deduplication off⁴).
HPE StoreOnce Catalyst delivers a single, integrated, enterprise-wide deduplication algorithm. It allows the seamless movement of deduplicated data across the enterprise to other StoreOnce Catalyst systems.
For more detailed information regarding which Catalyst features are supported by each backup software or application, reference the Catalyst Apps Support section under StoreOnce Backup Systems in the HPE Data Agile BURA Compatibility Matrix.
3. By specifying the Primary Transfer Policy as Low-Bandwidth on the Catalyst store defined on the StoreOnce appliance, the backup application's integrated Catalyst plug-in will perform deduplication at the backup server before backup data is sent to the StoreOnce appliance.
4. By specifying the Primary Transfer Policy as High-Bandwidth on the Catalyst store defined on the StoreOnce appliance, the backup server does not deduplicate the data; data deduplication occurs at the StoreOnce appliance.
Preparing SAN switches and hosts for failover with HPE StoreEver Tape Libraries
Hewlett Packard Enterprise provides high availability failover features for HPE StoreEver ESL G3 Tape Libraries and the HPE StoreEver MSL2024, MSL4048, MSL6480, MSL8048, and MSL8096 Tape Libraries with HPE StoreEver LTO-5 Ultrium and later generation FC tape drives. Failover features are not supported on the HPE StoreEver EML and ESL E-Series Tape Libraries.
Basic failover
• Supported on HPE StoreEver LTO-5 and LTO-6 Ultrium FC tape drives, as data path failover requires a dual-ported drive.
• Supported by a combination of tape drive and library firmware features to create a new FC path to a drive or library if the original path is lost.
• Most applications recognize the new path, and some applications will automatically retry commands after the original path is lost. Some applications might require user intervention to begin using the new path.
• Is available for the HPE StoreEver MSL2024, MSL4048, MSL6480, MSL8048, MSL8096, and StoreEver ESL G3 Tape Libraries.
Advanced failover
• Supported only on HPE StoreEver LTO-6 Ultrium FC tape drives.
• Requires host driver support, in addition to tape drive and library firmware features, to manage multiple paths across multiple SANs, present a single drive or library path to applications, and automatically transfer commands to the new path if the original path is lost.
• The transfer to the failover path is invisible to most applications, avoiding the need for user intervention.
• Is available for the HPE StoreEver ESL G3 and StoreEver MSL6480 Tape Libraries.
• Is not available for the StoreEver 1/8 G2 Tape Autoloader, nor the StoreEver MSL2024, MSL4048, MSL8048, or MSL8096 Tape Libraries.
LTO-7 failover
• Supported on LTO-7 and later generation FC tape drives.
• Requires host driver support, in addition to tape drive and library firmware features, to manage multiple paths across multiple SANs, present a single drive or library path to applications, and automatically transfer commands to the new path if the original path is lost.
• The transfer to the failover path is invisible to most applications, avoiding the need for user intervention.
• Is available only for ESL G3 Tape Libraries; MSL6480 support is expected in 2016.
Prerequisites for using basic data and control path failover
• The library drive ports must be attached to a FC SAN that supports N_Port ID Virtualization (NPIV), and NPIV must be enabled (most recent switch firmware versions for Brocade have NPIV enabled by default). Refer to the vendor documentation for your switch regarding commands to verify if NPIV is enabled.
– To enable or verify NPIV on a Brocade switch running Fabric OS version 6 or newer using the Brocade Web Tools GUI, click on the Port Admin tab. Select the FC or FCoE port you want to configure. From the context tabs at the top of the Web Tools GUI, select View, then choose Advanced. For the port selected, under the General tab you should see all of the details for the port, including NPIV Enabled with a value of true. There should also be an NPIV tab with a drop-down list of Enable, Disable, and Max Login. Select Enable if NPIV was not already enabled.
– While all current Cisco switches support NPIV, most do not have NPIV enabled by default. The Cisco MDS 9148 may disable NPIV when power cycled. To enable NPIV on a Cisco switch, use Cisco Device Manager (Admin > Feature Control), or use the Cisco CLI commands to show NPIV status and to enable NPIV.
• StoreEver Ultrium tape drives with an 8 Gb connection need the fill word set to arb(ff). With 4 Gb connections, set the fill word to idle. Refer to vendor documentation for your switch regarding commands to set the fill word for a single port. For a Brocade switch running Fabric OS version 6 or newer, the following command can be used to verify the configuration, including the fill word, for port 27:
portcfgshow 27
To set the fill word for port 27 to arb(ff) using the numeric mode notation⁵, run the following command:
portcfgfillword 27 3
To set the fill word for port 27 to idle using the numeric mode notation, run the following command:
portcfgfillword 27 0
Refer to the section titled "Hardware-specific requirements for basic failover" in the HPE StoreEver Tape Libraries Failover User Guide for Brocade and Cisco firmware recommendations.
• The drive port FC topology must be in Fabric mode, and the switch side must be set to F-port or Auto Sense.
• The host FC port must have a physical path to both the first port and secondary (passive) port on the FC drive.
• For basic data path failover with port zoning, the host FC port and both FC ports on the drive need to be within the same zone for failover to work.
• For basic data path failover with World Wide Port Name (WWPN) zoning, the host FC WWPN and a single FC port on the drive need to be in the zone.
• For basic control path failover with port zoning, the host FC ports and the FC ports on both the active and secondary drive chosen for basic control path failover need to be in the same zone.
• For basic control path failover with WWPN zoning, the host FC WWPN and the basic control path failover WWPN assigned to the library must be in the same zone. The library WWPN is not the same as the WWPN of the drive that is hosting the library.
• Hosts connecting to the library may need to be rebooted if the operating system does not support dynamic device detection.
• Applications on hosts may need to be reconfigured to recognize the new library World Wide Name (WWN).
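The fill word selection described earlier in this list can be expressed as a small helper. This is an illustrative sketch, not a Brocade tool; it only chooses and prints the numeric mode, and would be combined with portcfgfillword on the switch itself:

```shell
# Sketch: pick the Brocade portcfgfillword numeric mode from a StoreEver
# drive port's link speed in Gb (8 Gb needs arb(ff), set here via mode 3;
# 4 Gb uses idle, mode 0). The speed values and port number are assumptions.
fillword_mode() {
  case "$1" in
    8) echo 3 ;;   # arb(ff): mode 3 tries hardware arbff first, then falls back
    4) echo 0 ;;   # idle
    *) echo "unsupported speed: $1" >&2; return 1 ;;
  esac
}

# On the switch you would then run, for example, for port 27:
#   portcfgfillword 27 "$(fillword_mode 8)"
fillword_mode 8
```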
Prerequisites for using LTO-7 failover or advanced data and control path failover
• For LTO-7 failover, when using two FC host bus adapters in a server, both FC HBAs must be of the same manufacturer. The LTO-7 failover driver does not work correctly if the HBAs are different.
• For advanced data path failover and LTO-7 failover, the host must have a physical path to both the first port and secondary port on the FC drive. For full failover capabilities, the two drive FC ports should be connected to different switches, and the host FC ports should also be connected to the same two switches.
• All drive ports must be zoned in the respective switches.
For detailed information on using failover with HPE StoreEver Tape, refer to the HPE StoreEver Tape Libraries Failover User Guide.
5. Numeric mode 3 attempts hardware arbff-arbff (mode 1) first. If the attempt fails to go into active state, this command executes software idle-arb (mode 2). Mode 3 is preferable to modes 1 and 2, as it captures more cases.
Native backup commands
Native backup commands (see table 2) are limited in their ability to handle complicated backups and restores in multi-host SANs. They are not guaranteed to provide robust error handling or increased performance throughput. Use of these commands in a user-developed script is not recommended with HPE StoreEver Tape Libraries or HPE StoreOnce and HPE StoreAll disk-based backup solutions in a shared storage environment.
Caution: Native backup commands do not support SCSI reserve/release; therefore, using backup commands or scripts during backup or restore operations could result in data loss in an environment where the devices used for backups are shared.
Table 2. Supported native commands

Supported utilities | HPE-UX | Solaris | AIX | Linux | Windows
Tape drive commands
tar | Yes | Yes | Yes | Yes | No
dd (dump) | Yes | Yes | Yes | Yes | No
pax | Yes | Yes | Yes | Yes | No
mt | Yes | Yes | Yes | Yes | No
make_tape_recovery | Yes | No | No | No | No
Library and auto-changer commands
mc | Yes | No | No | No | No
mtx | No | No | No | Yes | No
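For illustration only, the tar workflow from table 2 is sketched below against a regular file standing in for a tape device node; the device paths named in the comments are typical examples, and per the caution above, such commands should not be scripted against shared backup devices:

```shell
# Sketch: the native tar backup/list sequence, run against a temporary file
# standing in for a no-rewind tape device node such as /dev/rmt/0n (Solaris)
# or /dev/rmt0.1 (AIX); tar -f accepts either a device node or a file.
workdir=$(mktemp -d)
echo "payload" > "$workdir/demo.txt"

tar -cf "$workdir/standin.tar" -C "$workdir" demo.txt   # "backup" to the stand-in
tar -tf "$workdir/standin.tar"                          # list the archive contents
```

The listing step prints the archived file name, demo.txt.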
Linear Tape File System
The Linear Tape File System (LTFS) makes tape self-describing, file-based, and easy to use, while allowing users to apply standard file operations to access, manage, and share files on tape with an interface that behaves like a hard disk. In addition, LTFS provides the ability to share data across platforms, as you would with a USB drive or memory stick. LTFS is currently supported on Windows, Mac, and Linux. HPE StoreOpen Standalone and HPE StoreOpen Automation are a set of utilities that provide easy installation, configuration, and management of a tape drive or library for use with LTFS. To use HPE StoreOpen, simply connect your tape drive or tape library to a supported host, following the information noted in the respective sections within this guide, prior to the installation of the HPE StoreOpen software. Information and download links for native LTFS drivers, source code, HPE StoreOpen Standalone, and HPE StoreOpen Automation can be found at hpe.com/storage/LTFS.
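A typical LTFS session on a Linux host can be sketched as follows. The commands are printed rather than executed, because the mkltfs/ltfs utilities and the /dev/sg3 device path are assumptions that require a real tape drive and cartridge:

```shell
# Sketch: format a cartridge, mount it as a file system, copy data with
# ordinary file operations, then unmount. Device path /dev/sg3 and mount
# point /mnt/ltfs are illustrative placeholders.
cat <<'EOF'
mkltfs -d /dev/sg3
mkdir -p /mnt/ltfs
ltfs -o devname=/dev/sg3 /mnt/ltfs
cp backup.tar /mnt/ltfs/
umount /mnt/ltfs
EOF
```

Once mounted, the cartridge behaves like any other file system, which is the point of LTFS.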
Software utilities that may disrupt solution connectivity
Software utilities common to SAN environments can interfere with backup and restore operations. These utilities include system management agents, monitoring software, and tape drive and system configuration utilities. A list of known software utilities and the operating systems on which they are found is shown in table 3.
Caution: Use of software utilities during backup or restore operations could result in data loss.
Table 3. HBA and software utilities

HBA configuration utilities:
• Emulex OneCommand Manager (OCM)/HBAnyware
• QLogic QConvergeConsole (QCC)
• QLogic Host Connectivity Manager (HCM)
• Broadcom Advanced Control Suite 3 (BACS3)

Other software utilities:
• Windows: HPE Library and Tape Tools utility; HPE Systems Insight Manager (SIM) management agents; Windows Server backup; Removable Storage Manager (RSM)⁶
• HPE-UX: HPE Library and Tape Tools utility; System Administration Manager (SAM) on HPE-UX 11.23; System Management Homepage (SMH) on HPE-UX 11.31
• Linux: HPE Library and Tape Tools utility; SCSI Generic (SG) commands
• Solaris: HPE Library and Tape Tools utility
• AIX: System Management Interface Tool (SMIT)
FC/FCoE switch zoning recommendations
Due to complexities in multi-hosting tape devices on SANs, Hewlett Packard Enterprise strongly recommends using switch zoning tools to keep the backup, restore, and archive environment simple and less susceptible to the effects of "chatty", changing, or problematic SANs. Zoning provides a way for servers, disk arrays, and tape controllers to see only the hosts and targets they need to see and use.
The benefits of zoning include, but are not limited to:
• The potential to greatly reduce target and logical unit number (LUN) shifting
• Reducing stress on backup devices by polling agents
• Reducing the time it takes to debug and resolve anomalies in the backup, restore, and archive environment
• Reducing the potential for conflict with untested third-party products
6. Removable Storage Manager is no longer available as of Windows 7 and Windows 2008 R2.
Zoning may not always be required for configurations that are small or simple, i.e., a single switch or single inter-switch link (ISL). Zoning can be helpful in larger SANs for simplifying device discovery and reducing chatter between devices. Hewlett Packard Enterprise recommends the following for determining how and when to use zoning:
• Use zoning by HBA port. Zoning by HBA port is implemented by creating a specific zone for each server or host by World Wide Port Name (WWPN) and adding only those storage elements to be utilized by that host. Zoning by HBA port prevents a server from detecting any other devices or servers on the SAN, and it simplifies the device discovery process.
• Disk and tape on the same HBAs is supported. For larger SAN environments, it is recommended to also add storage-centric zones for disk and backup targets. This type of zoning is done by adding overlapping zones with disk and backup targets separated. See figure 1 and figure 2 below.
• FC zoning can be implemented using physical switch port numbers, WWN IDs, or user-defined switch aliases. It is important to note that physical ports and aliases can change due to recabling or switch config restores, but WWN IDs do not. Hewlett Packard Enterprise recommends zoning using WWN IDs.
The figures below represent example configurations but are not exhaustive.
Figure 1. Storage-centric zoning, same HBA port (overlapping zones)
Figure 2. Storage-centric zoning, redundant paths. Also applies to dual-port HBAs and tape drives.
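Zoning by HBA port can be sketched as a small generator of Brocade-style zonecreate commands, one zone per host HBA WWPN. All WWPNs and zone names below are illustrative placeholders, not real devices, and the output would be reviewed before being run on an actual switch:

```shell
# Sketch: emit one zone per host HBA port, pairing each host WWPN with the
# tape target WWPN it should see (zoning by HBA port, as recommended above).
tape_wwpn="50:01:10:a0:00:12:34:56"   # placeholder tape drive WWPN
i=1
for hba_wwpn in 10:00:00:00:c9:aa:bb:01 10:00:00:00:c9:aa:bb:02; do
  echo "zonecreate \"host${i}_tape_zone\", \"${hba_wwpn}; ${tape_wwpn}\""
  i=$((i + 1))
done
```

Because each zone contains exactly one host initiator, no server can discover another server's devices through these zones.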
Configuration and operating system details
Windows Server
Windows 2008, Windows 2008 R2, Windows 2012, and Windows 2012 R2
Installing the HBA device driver
All HPE ProLiant server software, firmware, and drivers for Windows servers can be updated using the latest HPE Service Pack for ProLiant (SPP) from the HPE support website:
1. From an Internet browser, go to hpe.com.
2. Click on Support.
3. Under Product Support, click HPE Servers, Storage and Networking.
4. In the Enter product name or number box, enter Service Pack for ProLiant.
5. Click Get drivers, software & firmware.
6. Select the Windows Server operating system version that is installed on the ProLiant system that will be updated.
7. Expand Application (Entitlement Required) - System Management, then select the HPE Service Pack for ProLiant (American, International) hyperlink.
8. Below the details for the software, you should notice several tabs. Select the Release Notes tab, then scroll to verify if there are any upgrade requirements or to view additional information.
9. Select the Installation Instructions tab to verify how to install the SPP. Be sure to copy or save the installation instructions.
Note: A hyperlink to the HPE Service Pack for ProLiant Release Notes is provided within the Release Notes tab. The HPE Service Pack for ProLiant Release Notes provide detailed instructions regarding the SPP, including a summary of changes, compatibility details for migrating from an older version of the SPP, supported operating systems, requirements, component prerequisites, deployment instructions, and known limitations.
10. Click on the Obtain software hyperlink above the various tabs to download the Service Pack for ProLiant to your server.
Note: To download the HPE Service Pack for ProLiant, you must have:
1. An HPE Passport account (a sign-in link is provided).
2. Either a warranty, HPE Care Pack, or support agreement linked to your HPE Support Center profile.
Click on the various links that are provided for more information on how warranties, HPE Care Packs, and support agreements enable access to select downloads or site functions.
11. Booting your Windows Server to the SPP (offline mode) will allow you to upgrade firmware for any or all components that are flagged as requiring an upgrade, while booting to the operating system and then running the SPP (online mode) will allow you to install any or all Hewlett Packard Enterprise related drivers and software packages.
Note: Please refer to the HPE Service Pack for ProLiant Release Notes, which are referenced above, if any issues are encountered when installing the SPP. Specifically, review the sections Deployment Instructions and Component Changes.
12. A reboot might be required following the SPP installation.
To manually install the latest HPE-supported Brocade, Emulex, or QLogic driver kit from the HPE support website:
1. From an Internet browser, go to hpe.com.
2. Click on Support.
3. Under Product Support, click HPE Servers, Storage and Networking.
4. In the Enter product name or number box, enter your HBA model (for example, AP770A, AJ763A, AJ764A, CN1000Q, SN1000Q, CN1100E, SN1100E, or NC553m) and click Go.
5. Click Get drivers, software & firmware.
6. Select the Windows Server operating system version that is installed on the ProLiant system in which the HBA is installed.
7. Expand Driver - Storage Fibre Channel, and click on the appropriate driver hyperlink (if more than one version of the driver is listed, verify the latest supported version listed in the latest HPE Data Agile BURA Compatibility Matrix).
8. Below the details for the driver, you should notice several tabs. Select the Release Notes tab, then scroll to verify if there are any upgrade requirements, supported devices and features, or to view additional information.
9. Select the Installation Instructions tab to verify how to install the driver. Be sure to copy or save the installation instructions.
10. Click on the Download tab to copy the file to your server.
11. A reboot might be required following the driver installation.
Installing and configuring Microsoft iSCSI Initiator
Microsoft iSCSI Initiator is installed natively on Windows Server 2008, Windows Server 2008 R2, Windows Server 2012, and Windows Server 2012 R2. On these operating systems, no installation steps are required.
To connect to an iSCSI target device by using Quick Connect:
1. Click Start, type iSCSI in Start Search, and then under Programs, click on iSCSI Initiator.
2. On the User Account Control page, click Continue.
3. If this is the first time that you have launched Microsoft iSCSI Initiator, you receive a prompt that says the Microsoft iSCSI service is not running. You must start the service for Microsoft iSCSI Initiator to run correctly. Click Yes to start the service. The Microsoft iSCSI Initiator Properties dialog box opens, and the Targets tab is displayed.
4. On the Targets tab, type the name or the IP address of the target device in the Quick Connect text box, and then click Quick Connect. The Quick Connect dialog box is displayed.
5. If multiple targets are available at the target portal that is specified, a list is displayed. Click the desired target, and then click Connect.
6. Click Done.
To connect to an iSCSI target by using advanced settings
1 Click Start type iSCSI in Start Search and then under Programs click on iSCSI Initiator
2 On the User Account Control page click Continue
3 If this is the first time that you have launched Microsoft iSCSI Initiator you receive a prompt that says the Microsoft iSCSI service is not running You must start the service for Microsoft iSCSI Initiator to run correctly Click on Yes to start the service The Microsoft iSCSI Initiator Properties dialog box opens and the Targets tab is displayed
4 Click the Discovery tab
5 To add the target portal click Discover Portal and then in the Discover Portal dialog box type the IP address or name of the target portal to connect to If desired you can also type an alternate TCP port to be used for the connection
6 Click OK
Installing the HPE StoreEver Tape drivers Both the HPE tape and HPE changer drivers for Windows must be installed before the advanced path failover drivers are installed. The tape and changer drivers bundle can be downloaded and then installed as follows
1 Go to hpe.com/storage/tapecompatibility
2 Under Tape tools select HPE StoreEver Tape Drivers
3 A webpage will open with RECOMMENDED HPE StoreEver Tape Drivers for Windows displayed
4 Below the details for the driver you should notice several tabs Select the Release Notes tab then scroll to verify if there are any Upgrade Requirements or to view additional information
5 Select the Installation Instructions tab to verify how to install the driver Be sure to copy or save the installation instructions
6 Click on the Download tab then save the file
7 Follow the installation instructions from step 5 to install the HPE tape and HPE changer drivers
8 After installation of the tape and changer drivers use Windows Device Manager to confirm that all of the configured paths are accessible to the operating system If the expected number of paths are not available check the host and SAN configuration After all of the expected paths are available to the host the advanced path failover drivers can be installed
Installing the HPE StoreEver Tape advanced path failover drivers Windows (2008 R2, 2012, and 2012 R2) 1 Go to hpe.com/support/storage
2 Select Tape Storage
3 Click Enterprise Class Tape Libraries (for ESL G3) or Tape Libraries (for MSL6480)
4 Click HPE StoreEver ESL G3 Tape Libraries or HPE StoreEver MSL6480 Tape Library
5 With the Download options tab selected click Get drivers, software & firmware
6 For the ESL G3 select your product For MSL6480 skip to the next step
7 Under Operating systems select OS Independent
8 Click Driver - Storage Tape
9 Click Obtain software for the HPE StoreEver High Availability Failover Driver for your Windows operating system
10 Click Select to continue. An HPE Passport account is required
11 After logging in using your HPE Passport complete the required fields then read and accept the software license agreement Click Next
12 On the following page select the Download tab for the HPE StoreEver High Availability Failover Driver for the version of Windows that is installed on your server
13 If you saved the file double-click on the file to launch the installer for the Tape Upper Bus Storage Filter driver
14 Restart when requested
15 After the system restarts the installer will continue installing the Tape Multi-Path Intermediate Class driver. The installation process creates the directory C:\Program Files\Hewlett-Packard\Failover
For more detailed information related to installing the HPE StoreEver Tape advanced path failover drivers for Windows or the HPE StoreEver Tape drivers for Windows refer to the HPE StoreEver Tape Libraries Failover User Guide, specifically the section titled Installing and using Windows advanced path failover drivers
Windows Server best practices Persistent binding Target and LUN shifting can occur with Windows operating systems when disk or tape devices are connected or disconnected, a device is busy during discovery, or a device failure occurs that causes the device to no longer be seen on the SAN. Hewlett Packard Enterprise strongly recommends using zoning by HBA port to resolve target and LUN shifting. Refer to the earlier section FC/FCoE switch zoning recommendations for more on zoning by HBA port.
The method in which the Windows operating system enumerates devices is the cause of most target and LUN shifting. Windows enumerates devices as they are discovered during a scan sequence. They are enumerated with device handles such as TAPE0, TAPE1, and so on. The Windows device scan sequence goes in the order of bus, target, and LUN.
• Bus is the HBA PCI slot
• Target is representative of a WWN
• LUN is representative of a device behind the WWN
The order of discovery is
• The lowest bus (bus 0)
• Target 0 on bus 0
• The LUNs on target 0
• Target 1, and so on until all targets connected to that HBA are discovered
The process continues on to the next HBA and its targets and LUNs. A common cause for device shifting is a busy tape device. A busy tape device cannot respond in time for Windows to enumerate it. The device is essentially skipped in the enumeration sequence, thus shifting all other device numbers.
Note Emulex OneCommand Manager Application Kit and QLogic QConvergeConsole Utility both have proprietary persistent binding options Review the appropriate vendor documentation for details
Data protection and archiving software can also communicate with a tape device by using the Windows device name. As noted, the device name may shift and cause a problem for the data protection and archiving software. Some data protection and archiving software monitors for this condition and will adjust accordingly. Other data protection and archiving software must wait for a server reboot and subsequently scan for devices. Alternatively, the data protection and archiving software must be manually reconfigured to match the current device list. If your data protection and archiving software requires persistent device mapping, use the software's device configuration wizard to ensure proper configuration.
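The enumeration behavior described above can be sketched with a short simulation (a hypothetical helper, not an HPE tool): devices are numbered in bus, target, LUN order, and a device that is busy during the scan is skipped, shifting every later TAPEn handle.

```python
# Simulate Windows tape-device enumeration in bus -> target -> LUN order.
# A device that is busy during the scan cannot respond in time, is skipped,
# and every subsequent TAPEn handle shifts down by one.

def enumerate_tapes(devices, busy=()):
    """devices: list of (bus, target, lun) tuples; busy: tuples to skip."""
    handles = {}
    n = 0
    for dev in sorted(devices):           # bus first, then target, then LUN
        if dev in busy:
            continue                      # busy device is skipped entirely
        handles[dev] = f"TAPE{n}"
        n += 1
    return handles

devices = [(0, 0, 0), (0, 0, 1), (0, 1, 0)]
print(enumerate_tapes(devices))
# {(0, 0, 0): 'TAPE0', (0, 0, 1): 'TAPE1', (0, 1, 0): 'TAPE2'}

# If (0, 0, 1) is busy during the next scan, the last drive's handle shifts:
print(enumerate_tapes(devices, busy={(0, 0, 1)}))
# {(0, 0, 0): 'TAPE0', (0, 1, 0): 'TAPE1'}
```

This is exactly the shift that per-HBA-port zoning and persistent binding are meant to insulate the backup application from.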
Red Hat and SUSE Linux Server RHEL 5 Update 11 (AS/ES/WS), RHEL 6 Update 6 (AS/ES/WS), RHEL 7 (AS/ES/WS), SLES 11 SP3 (x86/x64), SLES 12
Note Hewlett Packard Enterprise recommends installing the kernel development option (source code) when installing any Linux server Availability of source code ensures the ability to install additional device support software that will be compiled into the kernel
Installing the HBA drivers All HPE ProLiant server software firmware and drivers can be updated using the latest SPP from the HPE support website
1 From an Internet browser go to hpe.com
2 Click on Support
3 Under Product Support click HPE Servers Storage and Networking
4 In the Enter product name or number box enter Service Pack for ProLiant
5 Click Get drivers, software & firmware
6 Select the Linux Server operating system version that is installed on the ProLiant system that will be updated
7 Under Application (Entitlement Required) - System Management select the HPE Service Pack for ProLiant (American International) hyperlink
8 Below the details for the software you should notice several tabs Select the Release Notes tab then scroll to verify if there are any Upgrade Requirements or to view additional information
9 Select the Installation Instructions tab to verify how to install the SPP Be sure to copy or save the installation instructions
Note A hyperlink to the HPE Service Pack for ProLiant Release Notes is provided within the Release Notes tab The HPE Service Pack for ProLiant Release Notes provide detailed instructions regarding the SPP including a summary of changes compatibility details for migrating from an older version of the SPP supported operating systems requirements component prerequisites deployment options and known limitations
10 Click on the Obtain software hyperlink above the various tabs to download the Service Pack for ProLiant to your server
Note To download the HPE Service Pack for ProLiant you must have
1 An HPE Passport account (a sign-in link is provided)
2 Either a warranty HPE Care Pack or support agreement linked to your HPE Support Center profile
Click on the various links that are provided for more information on how warranties HPE Care Packs and support enable access to select downloads or site functions
11 Booting your server to the SPP (offline mode) will allow you to upgrade firmware for any or all components that are flagged as requiring an upgrade, while booting to the OS and then running the SPP (online mode) will allow you to install any or all Hewlett Packard Enterprise related drivers and software packages
Note Please refer to the HPE Service Pack for ProLiant Release Notes which are referenced above if any issues are encountered when installing the SPP Specifically review the sections Deployment Instructions and Components Changes
12 A reboot might be required following the SPP installation
To manually install the latest HPE-supported Brocade Emulex or QLogic driver kit from the HPE support website
1 From an Internet browser go to hpe.com
2 Click on Support
3 Under Product Support click HPE Servers Storage and Networking
4 In the Enter product name or number box enter your HBA model (for example AP770A AJ763A AJ764A CN1000Q SN1000Q CN1100E SN1100E or NC553m) and click Go
5 Click Get drivers, software & firmware
6 Select the Linux Server operating system version that is installed on the ProLiant system in which the HBA is installed
7 Expand Driver - Storage Fibre Channel then click on the appropriate driver hyperlink (if more than one version of the driver is listed, verify the latest supported version listed in the latest HPE Data Agile BURA Compatibility Matrix)
8 Below the details for the driver you should notice several tabs Select the Release Notes tab then scroll to verify if there are any Upgrade Requirements or to view additional information
9 Select the Installation Instructions tab to verify how to install the driver Be sure to copy or save the installation instructions
10 Click on the Download tab to copy the file to your server
11 A reboot might be required following the driver installation
Installing the Linux OPEN-iSCSI module You can install and use the iscsi-initiator-utils package (Red Hat) or open-iscsi module (SUSE). Download and install either of the packages using your distribution's package manager (yum or YaST, for example). Detailed instructions for iscsiadm can be found in the iscsiadm man documents
Prior to discovering available iSCSI target devices on an HPE Storage System for a Linux server, the target requires the Linux server iSCSI initiator name. This name is found in the /etc/iscsi/initiatorname.iscsi file
Once the iSCSI initiator name has been determined, discover available iSCSI target devices on an HPE Storage System by typing the following command, where x.x.x.x is the IP address of the HPE Storage System
iscsiadm --mode discovery --type sendtargets --portal x.x.x.x
To connect to the HPE Storage System targets type the following command for each discovered target, where target_name is the name returned by the iscsiadm discovery
iscsiadm --mode node -T target_name --login --portal x.x.x.x
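The per-target login step above can be scripted. As a sketch (the portal address and IQN below are placeholders), the following parses iscsiadm sendtargets output lines of the form `portal,tpgt target_iqn` and builds the matching login commands as argv lists, so they can be reviewed before being executed:

```python
# Build "iscsiadm --mode node ... --login" commands from sendtargets output.
# Commands are returned as argv lists rather than executed, so they can be
# inspected (or fed to subprocess.run) by the caller.

def parse_targets(discovery_output):
    """Each discovery line looks like: '<ip>:<port>,<tpgt> <target_iqn>'."""
    targets = []
    for line in discovery_output.strip().splitlines():
        portal_tag, _, iqn = line.partition(" ")
        portal = portal_tag.split(",")[0]      # drop the target portal group tag
        targets.append((portal, iqn))
    return targets

def login_commands(discovery_output):
    return [["iscsiadm", "--mode", "node", "-T", iqn,
             "--login", "--portal", portal]
            for portal, iqn in parse_targets(discovery_output)]

sample = "192.0.2.10:3260,1 iqn.1986-03.com.hp:storage.example\n"
for cmd in login_commands(sample):
    print(" ".join(cmd))
```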
Storage HBAs with Linux servers Hewlett Packard Enterprise supports the Linux in-box driver (the driver supplied with the operating system distribution). However, not all hardware configurations support the in-box driver. To determine if your configuration supports the in-box driver see the HBAs, CNAs, Flex Fabric Adapters and Server LOMs Support Matrix: Linux, Citrix, VMware and Windows, which is available on the HPE SPOCK website. You must sign up for an HPE Passport to access SPOCK
Whether you are using the in-box drivers or the out-of-box drivers Hewlett Packard Enterprise recommends that you install the HPE Fibre Channel Enablement Kit because it provides additional libraries and configuration utilities to enable HPE Fibre Channel Storage Arrays to work properly with Linux
Note If you are using any HPE management applications you need the HBA API libraries that come with the HPE-fc-enablement RPM
Downloading and installing the Fibre Channel Enablement Kit for Linux
Note There has been a change to the enablement kits released after 29 April 2014. They are now vendor-specific kits. Hewlett Packard Enterprise recommends that you uninstall any previous kits prior to installation of the latest version of the enablement kit
1 From an Internet browser go to hpe.com
2 Click on Support
3 Under Product Support click HPE Servers Storage and Networking
4 In the Enter product name or number box enter your HBA model (for example AP770A AJ763A AJ764A CN1000Q CN1100E or NC553m) and click Go
5 Click Get drivers, software & firmware
6 Select the Linux Server operating system version of the ProLiant system in which the HBA is installed
7 Select the Software - Storage Controllers - FC HBA hyperlink and click on the HPE Fibre Channel Enablement Kit for Linux (American International)
8 Below the details for the driver you should notice several tabs Select the Release Notes tab then scroll to verify if there are any Upgrade Requirements or to view additional information
9 Select the Installation Instructions tab to verify how to install the HPE Fibre Channel Enablement Kit for Linux Be sure to copy or save the installation instructions
10 Click on the Download tab to copy the RPM file to your server
11 Browse to the directory that you downloaded the RPM to
12 Follow the Installation Instructions that you copied or saved in step 9
13 A reboot is required after the installation for the updates to take effect and hardware stability to be maintained
14 Verify that the host has successfully discovered all the expected devices (tape drives, library robotic devices, and disk-based backup devices) using one of the following methods
– Review the devices listed from running the command cat /proc/scsi/scsi
– Review the output from the sg_inq command (which requires that sg3_utils is installed) for any of the /dev/sg devices listed in the output of the sg_map command. See figure 3 as an example
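As a sketch of what this verification step looks for, the following parses /proc/scsi/scsi-style text and tallies device types, so the count of Sequential-Access (tape) and Medium Changer entries can be compared with what the SAN should present. The sample text is illustrative, not output from a real system.

```python
# Tally SCSI device types from /proc/scsi/scsi-style text as a quick check
# that all expected tape and changer paths were discovered.

import re

def count_scsi_types(text):
    counts = {}
    for m in re.finditer(r"Type:\s+([A-Za-z-]+(?: [A-Za-z-]+)*)", text):
        t = m.group(1).strip()
        counts[t] = counts.get(t, 0) + 1
    return counts

sample = """\
Host: scsi0 Channel: 00 Id: 00 Lun: 00
  Vendor: HP       Model: Ultrium 6-SCSI   Rev: 354W
  Type:   Sequential-Access                ANSI  SCSI revision: 06
Host: scsi0 Channel: 00 Id: 00 Lun: 01
  Vendor: HP       Model: MSL6480          Rev: 1.0
  Type:   Medium Changer                   ANSI  SCSI revision: 05
"""
print(count_scsi_types(sample))
# {'Sequential-Access': 1, 'Medium Changer': 1}
```

On a live host the same function can be fed `open("/proc/scsi/scsi").read()`.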
Figure 3 Verifying devices using sg_map and sg_inq commands
Installing the HPE StoreEver Tape advanced path failover drivers Red Hat Enterprise Linux Server 6.2 (x86_64), 6.3 (x86_64), 6.4 (x86_64), 6.5 (x86_64), and 6.6 (x86_64) The advanced path failover drivers for Linux replace the normal SCSI Tape and SCSI Generic drivers. The advanced path failover drivers for Linux pass all SCSI commands for devices that do not support advanced path failover through the same code path that is followed when the standard drivers are loaded, and route commands for devices that do support failover through the new PFO driver
1 Go to hpe.com/support/storage
2 Select Tape Storage
3 Click Enterprise Class Tape Libraries (for ESL G3) or Tape Libraries (for MSL6480)
4 Click HPE StoreEver ESL G3 Tape Libraries or HPE StoreEver MSL6480 Tape Library
5 In the Download options tab click Get drivers, software & firmware
6 For the ESL G3 select your product For MSL6480 skip to the next step
7 Under Operating systems select OS Independent
8 Expand Driver - Storage Tape then select the appropriate driver hyperlink
9 Click Obtain software for the HPE StoreEver High Availability Failover Driver for your operating system
10 Click on Select. An HPE Passport account (a sign-in link is provided) is required
11 After logging in using your HPE Passport complete the required fields then read and accept the software license agreement Click Next
12 On the following page select the Download tab for the HPE StoreEver High Availability Failover Driver for the version of Red Hat that is installed on your server
13 To install the drivers run the following command
rpm -ivh <filename>.rpm
14 In some cases the server will need to be rebooted to complete the installation Check the instructions provided by the RPM file output and reboot the server if requested
15 The driver revision number indicates the build date of the driver and can be viewed by running
cat /proc/scsi/sg/version
16 You can view the status of a device that is controlled by the failover driver by reading a file in the sys file system. For example, to see the path status for /dev/sg3
cat /sys/class/pfo/pfo3/paths
Enabling advanced path failover on a device while the driver is running If a device has any advanced path failover feature disabled when advanced path failover is enabled, the device will reset itself, removing the old /dev file. When the device comes back up it will be recognized as an advanced path failover device. It will then operate normally as an advanced path failover device. It may not have the same /dev file name as before the change
Disabling advanced path failover on a device while the driver is running Disabling advanced path failover while a device is running is not recommended because the paths will not be cleanly removed and then reassociated. If advanced path failover is disabled on any device the Linux server will need to be rebooted. When possible
1 Power down the Linux server cleanly
2 Disable advanced path failover on the device
3 Boot the Linux server
For more detailed information related to installing the HPE StoreEver Tape advanced path failover drivers for Red Hat Linux Servers refer to the HPE StoreEver Tape Libraries Failover User Guide, specifically the section titled Installing and using Linux advanced path failover drivers
Additional SG device files In most environments the default number of SG device files is sufficient to support all of the required devices In larger SAN environments if the default number of SG device files is fewer than the combined total of disk and tape devices being allocated to the server then additional device files need to be created SG device files are preferable to the standard symbolic tape (ST) device files due to SCSI timeout values that may not be sufficient in length to support some tape operations
To create additional SG device files perform the following: mknod /dev/sgX c 21 X
X signifies the number of the device file that does not already exist For additional command options see the mknod man page
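The mknod step can be scripted. This hypothetical helper (not an HPE tool) only prints the commands for the device files that are still missing, so the output can be reviewed and then run as root:

```python
# Print the mknod commands needed to extend the /dev/sgX files up to the
# combined total of disk and tape devices allocated to the server.
# SG devices use character major number 21; the minor number matches X.

def missing_sg_commands(existing_max, required_total):
    """existing_max: highest /dev/sgX already present;
    required_total: total number of SG devices the server needs."""
    return [f"mknod /dev/sg{x} c 21 {x}"
            for x in range(existing_max + 1, required_total)]

for cmd in missing_sg_commands(existing_max=15, required_total=18):
    print(cmd)
# mknod /dev/sg16 c 21 16
# mknod /dev/sg17 c 21 17
```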
Red Hat and SUSE Linux Server best practices Rewind commands being issued by rebooted Linux hosts Device discovery that occurs as part of a normal Linux server boot operation can cause a SCSI rewind command to be issued to all attached tape drives if the data protection and archiving software does not employ SCSI reserve/release and the rewind command is received while the tape drive is busy writing. The result is a corrupted tape header and an unusable piece of backup media
This issue could manifest itself in several ways
• A failed verify operation
• A failed restore operation
• The inability to mount a tape and read the tape header
If a backup verification is not completed the normal backup process might not detect that an issue exists
Tape devices not discovered and configured across server reboots Tape drives can disappear from Linux servers after the host reboots when using older versions of the HBA drivers. Adding the line "hp_rescan -a" to /etc/rc.d/rc.local resolves the issue. The utility hp_rescan was previously included and installed with older versions of the HPE Linux FCoE/FC Driver Kit
Note The latest versions of the Fibre Channel Enablement for Linux kits no longer include the HPE fibreutils package, which contains the hp_rescan utility. The fibreutils package can manually be downloaded and installed by following steps 1 through 6 from the section Downloading and installing the Fibre Channel Enablement Kit for Linux. For step 5 select Software - Storage Controllers - FC HBA
This issue which affects Red Hat installations and intermittently some SUSE Linux installations is understood to be an issue with the mid-layer SCSI driver and interaction with SCSI-2 tape automation products The permanent resolution to this issue is to upgrade to the latest FC driver kit
Enable iSCSI target devices to remain persistent across system reboots To enable the iSCSI target devices to remain persistent across system reboots the open-iscsi service must be configured to run at system startup This can be done by issuing the following command
For Red Hat 7 and SUSE 12:
systemctl enable iscsid.service
systemctl restart iscsid.service
For earlier versions of Red Hat: chkconfig iscsi on
For earlier versions of SUSE: chkconfig open-iscsi on
To verify that this configuration change has been accepted run the following command
For Red Hat 7 and SUSE 12:
systemctl -a | grep iscsi
iscsi-shutdown.service   error    inactive  dead     Logout off all iSCSI sessions on shutdown
iscsi.service            loaded   inactive  dead     Login and scanning of iSCSI devices
iscsid.service           loaded   active    running  Open-iSCSI
iscsiuio.service         loaded   active    running  iSCSI UserSpace IO driver
iscsid.socket            loaded   active    running  Open-iSCSI iscsid Socket
iscsiuio.socket          loaded   active    running  Open-iSCSI iscsiuio Socket
For earlier versions of Red Hat:
chkconfig --list iscsi
iscsi 0:off 1:off 2:off 3:on 4:off 5:on 6:off
SUSE servers:
chkconfig --list open-iscsi
open-iscsi 0:off 1:off 2:off 3:on 4:off 5:on 6:off
LUNs shifting after reboot The Linux 2.6 kernel and later enhanced the management of attached devices through the introduction of udev. The udev device manager provides users with a persistent naming process for all devices across reboots. For details on how to configure udev refer to the appropriate Linux distribution documentation
If your data protection and archiving software requires persistent device mapping use the softwarersquos device configuration wizard to ensure proper configuration
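For illustration, a udev rule along the following lines can pin a stable symlink to a tape drive so that application configurations survive reboots. This is a hedged example, not from the HPE guide: the file name, vendor, and model strings are placeholders and must be taken from `udevadm info` output for the actual drive.

```
# /etc/udev/rules.d/61-persistent-tape.rules  (illustrative example)
# Match the non-rewind tape node by SCSI vendor/model and add a stable name.
KERNEL=="nst*", SUBSYSTEMS=="scsi", ATTRS{vendor}=="HP", \
    ATTRS{model}=="Ultrium 6-SCSI", SYMLINK+="tape/hp_lto6_drive%n"
```

After reloading udev rules, the backup application can be configured against /dev/tape/hp_lto6_drive0 instead of a raw /dev/nstX name that may shift.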
Recommended changes to queue depth and timeout values Changes to queue depth and timeout values are recommended when operating HPE StoreEver ESL G3 Tape Libraries and HPE StoreEver MSL Tape Libraries with Linux-based hosts. Recommended changes are as follows
• Queue Depth
The queue depth when operating the MSL6480 or MCB Version 1 ESL G3 libraries should be set to one (1), as there is only one active robot to complete Move Medium commands. With dual-robot MCB Version 2 ESL G3 libraries the queue depth should be set to two (2), as the library has two active robots that can complete Move Medium commands
HPE LTO drives are capable of handling command queues of four or five commands, but if hosts continue to send commands past that amount the drive (or library being hosted by that drive) will start to return status messages saying that the queue is full and the host should wait 500 ms. If the host doesn't stop sending commands at this point, the delays in returning status for commands can be long enough that the drive appears hung. As such, care should be taken to ensure that the queue depth is the correct length to avoid this scenario, preferably by using the recommended queue depths provided above
With Linux-based hosts this command can let you see what the queue depth is set to for each generic SCSI device:
find /sys/class/scsi_generic/*/device/queue_depth -exec grep -H . {} \;
If required the sg_inq command can help correlate SCSI generic mapping to specific devices: sg_inq /dev/sg<N>
Note <N> is the sg number provided in the output from the previous command
• Timeouts
Most applications increase the timeout value for motion commands but will continue to rely on the default timeout value for Read Element Status commands. This can especially be of concern in larger partitioned libraries, where multiple Read Element Status commands to the different partitions are outstanding and the default timeout value is encountered before a response is provided. Depending on whether udev rules are in effect or not, the default timeout value on Linux-based hosts tends to be either thirty (30) or sixty (60) seconds. Given all of the above, it is recommended that the default timeout value be changed to twenty minutes with all Linux-based hosts operating HPE LTO drives in HPE StoreEver ESL G3 Tape Libraries and HPE StoreEver MSL Tape Libraries in order to allow for multiple commands to complete successfully without hitting the default timeout value
With Linux-based hosts this command can let you see what the default timeout value is currently set to:
find /sys/class/scsi_generic/*/device/timeout -exec grep -H . {} \;
If required the sg_inq command can help correlate SCSI generic mapping to specific devices: sg_inq /dev/sg<N>
Note <N> is the sg number provided in the output from the previous command
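Both recommendations can be audited in one pass. The following sketch (not an HPE tool) walks a sysfs-style tree of /sys/class/scsi_generic/sg*/device/{queue_depth,timeout} files and flags deviations; the thresholds are parameters, since the right queue depth depends on the library model, and 20 minutes is expressed as 1200 seconds.

```python
# Flag scsi_generic devices whose queue_depth or timeout differs from the
# recommended values. The sysfs root is a parameter so the function can be
# exercised against a temporary tree; pass root="/sys" on a real host.

from pathlib import Path

def check_tuning(root="/sys", want_depth=1, want_timeout=1200):
    issues = []
    for dev in sorted(Path(root, "class", "scsi_generic").glob("sg*")):
        depth = int((dev / "device" / "queue_depth").read_text())
        timeout = int((dev / "device" / "timeout").read_text())
        if depth != want_depth:
            issues.append(f"{dev.name}: queue_depth={depth}, want {want_depth}")
        if timeout < want_timeout:
            issues.append(f"{dev.name}: timeout={timeout}, want >= {want_timeout}")
    return issues
```

Calling `check_tuning()` with the defaults on a live host reports every sg device that still needs the queue-depth or timeout change.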
The detailed procedure for making the recommended changes to the command queuing and default timeout values for Linux-based hosts can be viewed in the following Engineering Advisory: HPE StoreEver ESL G3 Tape Libraries, MSL Tape Libraries, and 1/8 G2 Autoloaders - Recommended Changes to Queue Depth and Timeout Values With Linux-Based Hosts
HPE-UX Server Installing HBA drivers in the kernel HPE-UX 11i v2 (11.23 IA-64) 1 The drivers schgr, sctl, and stape must all be installed in the kernel. To see if these drivers are installed enter the following command
/usr/sbin/kcmodule schgr sctl stape
The following example shows output from kcmodule where the stape driver is not installed
Module  State   Cause
schgr   static  explicit
sctl    static  depend
stape   unused
If one or more of the above drivers is in the unused state they must be installed in the kernel If they are all installed (static state) proceed to the next section Final host configurations
2 Use kcmodule to install modules in the kernel For example to install the stape module run the following command
/usr/sbin/kcmodule stape=static
Enter yes to back up the current kernel configuration file and initiate the new kernel build
3 Reboot the server to activate the new kernel
cd /
/usr/bin/shutdown -r now
HPE-UX 11i v3 (11.31 IA-64) 1 The drivers schgr, sctl, stape, eschgr, esctl, and estape must all be installed in the kernel. To see if these drivers are installed enter the
following command
/usr/sbin/kcmodule sctl esctl schgr eschgr stape estape
The following example shows output from kcmodule where the stape driver is not installed
Module  State   Cause
sctl    static  best
esctl   static  depend
schgr   static  best
eschgr  static  best
stape   unused
estape  static  best
If one or more of the above drivers is in the unused state they must be installed in the kernel If they are all installed (static state) proceed to the next section Final host configurations
2 Use kcmodule to install modules in the kernel For example to install the stape module use the following command
/usr/sbin/kcmodule stape=static
Enter yes to back up the current kernel configuration file and initiate the new kernel build
3 Reboot the server to activate the new kernel
cd /
/usr/bin/shutdown -r now
Installing the HPE-UX iSCSI Software Initiator The iSCSI Software Initiator is located at the HPE Software Depot
1 Go to software.hp.com
2 Enter iSCSI Software Initiator in the Search Software Depot box located on the upper right side of the website
3 When the search results show iSCSI Software Initiator click on Select. An HPE Passport account (a sign-in link is provided) is required
4 After logging in using your HPE Passport complete the required fields scroll down then read and accept the software license agreement for the order Click Next
5 Under Documentation click on the Download tab for the Installation Instructions to download instructions for using the Software Distributor tool to install the iSCSI Software Initiator
6 Under Software click on the Download tab for the iSCSI Software Initiator version that you would like to download
7 After installing the iSCSI Software Initiator and rebooting you can verify that the installation was successful by running the following command
swlist iSCSI-00
If the HPE-UX iSCSI Software Initiator is installed correctly the output will be:
HPE-UX 11.23:
Initializing... Contacting target "localhost"... Target: localhost
iSCSI-00            B.11.23.03e  HPE-UX iSCSI Software Initiator
iSCSI-00.ISCSI-SWD  B.11.23.03e  HPE-UX iSCSI Software Initiator
HPE-UX 11.31:
Initializing... Contacting target "localhost"... Target: localhost
iSCSI-00            B.11.31.01   HPE-UX iSCSI Software Initiator
iSCSI-00.ISCSI-SWD  B.11.31.01   HPE-UX iSCSI Software Initiator
Final host configurations 1 Run ioscan to verify that the host detects the tape devices
ioscan
For HPE-UX 11.23 legacy device special files (DSFs) or persistent DSFs run the following commands
ioscan -fnkC tape
ioscan -fnkC autoch
2 For HPE-UX 11.31 persistent DSFs run the following commands
ioscan -fnNkC tape
ioscan -fnNkC autoch
Note Some data protection and archiving software might not currently support HPE-UX 11.31 persistent DSFs for tape. Review your data protection and archiving software documentation for more information
3 To verify that the host detects iSCSI devices issue the ioscan command as follows for HPE-UX 11.23
ioscan -fnC iscsi
Issue the ioscan command as follows for HPE-UX 11.31
ioscan -fnNC iscsi
If the software is installed correctly the generated output will look similar to this:
Class  I  H/W Path  Driver  S/W State  H/W Type  Description
=====================================================================
iscsi  0  255/0     iscsi   CLAIMED    VIRTBUS   iSCSI Virtual Node
4 If no device files have been installed enter the following command
insf -C tape -e
insf -C autoch -e
The command line tool for configuring the HPE-UX iSCSI Software Initiator is iscsiutil. Detailed instructions for iscsiutil can be found in the iscsiutil man documents. If using iscsiutil to configure the HPE-UX iSCSI Software Initiator, add the path for iscsiutil and other iSCSI executables to the root path: PATH=$PATH:/opt/iscsi/bin
You should now be able to find the iSCSI initiator node for the HPE-UX host: iscsiutil -l
To discover available iSCSI target devices on an HPE Storage System type the following command, where x.x.x.x is the IP address of the HPE Storage System: iscsiutil -a -I x.x.x.x
Installing the HPE StoreEver Tape advanced path failover drivers for HPE-UX 11.31 Advanced path failover for HPE-UX is implemented by updating HPE-UX drivers to support advanced path failover with the LTO-6 tape drives. The drivers function as both failover and non-failover drivers
The updated drivers are
• HPE-UX tape driver (estape), used for data path failover
• HPE-UX media changer driver (eschgr), used for control path failover
• HPE-UX SCSI stack driver (esctl), used for data path and control path failover
To download and install the HPE StoreEver Tape advanced path failover drivers for HPE-UX 11.31
1 Get the latest HPE-UX patches from h20566.www2.hpe.com/portal/site/hpsc/patch/home
Note To access and download HPE-UX patches you must have
1 An HPE Passport account (a sign-in link is provided)
2 An active HPE support agreement linked to your HPE Support Center profile The active Hewlett Packard Enterprise support agreement must
– Cover the specific HPE Operating Systems linked to your HPE Support Center user profile
– Include software updates or previous version support privileges
Links are provided to view your current patch access privileges or to contact Hewlett Packard Enterprise
2 To locate the patches search for estape, eschgr, and esctl or the patch number, and then check the patch details to see if there is a superseding patch
3 To install the advanced path failover drivers use the standard HPE-UX kernel patch installation process to install the following patches on the HPE-UX Servers running HPE-UX 11.31
– HPE-UX tape driver patch (estape): PHKL_43680 or superseding patch
– HPE-UX media changer driver patch (eschgr): PHKL_43681 or superseding patch
– HPE-UX SCSI stack (mass storage stack) driver patch (esctl): PHKL_43819 or superseding patch
4 The server will automatically reboot as part of the installation process
You can use ioscan to view the tape and library (media changer) devices connected to the HPE-UX Server. The device special file (DSF) is listed as the last item in the description:
ioscan -knNfC tape
ioscan -knNfC autoch
Finding the lockdown path
The load-balance policy used to route data on multiple paths to a tape drive or library is called the "path_lockdown" policy. Use the scsimgr get_info command to see the current lockdown path for a library. For example:
scsimgr get_info -D /dev/rchgr/autoch35
STATUS INFORMATION FOR LUN /dev/rchgr/autoch35
…
LUN path used when policy is path_lockdown = 0/0/0/9/0/0/1.0x50014382c6e4f009.0x1000000000000
For a tape drive, use scsimgr get_attr:
scsimgr get_attr -D /dev/rtape/tape28_BEST
SCSI ATTRIBUTES FOR LUN /dev/rtape/tape28_BEST
name = lpt_lockdown
current = 0/0/0/9/0/0/0.0x100000e00222a6c1.0x2000000000000
default =
saved =
For additional information, see the HPE-UX man pages: scsimgr (1M), ioscan (1M), mknod (2), mksf (1M), rmsf (1M).
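The lockdown path can also be pulled out of the scsimgr output programmatically. The sketch below is illustrative only (not an HPE tool): the sample text is invented, and on a live HPE-UX host you would instead pipe the real scsimgr get_info output into the function.

```shell
# Sketch: extract the current lockdown path from `scsimgr get_info` output.
# The sample output below is invented for illustration; on a live HPE-UX
# host you would run:  scsimgr get_info -D /dev/rchgr/autoch35 | extract_lockdown
extract_lockdown() {
    # Print only the value after "policy is path_lockdown = "
    sed -n 's/.*policy is path_lockdown = //p'
}

sample_output='STATUS INFORMATION FOR LUN /dev/rchgr/autoch35
LUN path used when policy is path_lockdown = 0/0/0/9/0/0/1.0x50014382c6e4f009.0x1000000000000'

echo "$sample_output" | extract_lockdown
```

This is handy when recording the active path before and after a failover test.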
Troubleshooting advanced path failover for HPE-UX 11.31
Advanced path failover errors are logged in the /var/adm/syslog/syslog.log file as part of the default SCSI I/O tracing function of HPE-UX. You can use standard file viewing commands, including cat, vi, dmesg, and tail -f, to view syslog.log.
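When scanning the syslog for failover activity, filtering on the driver names keeps the output manageable. The sketch below uses invented sample log lines; on a live host you would pipe tail -f /var/adm/syslog/syslog.log into the same filter.

```shell
# Sketch: show only failover-related driver messages (estape/eschgr/esctl)
# from a syslog excerpt. The sample lines are invented for illustration;
# a live host would use:  tail -f /var/adm/syslog/syslog.log | filter_failover
filter_failover() {
    grep -E 'estape|eschgr|esctl'
}

sample_log='Jan 10 10:01:02 host vmunix: estape: path failover initiated
Jan 10 10:01:03 host cron[123]: job started
Jan 10 10:01:04 host vmunix: eschgr: control path restored'

echo "$sample_log" | filter_failover
```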
Technical white paper Page 28
Enabling or disabling advanced path failover for HPE-UX 11.31
Advanced path failover is disabled by default. When advanced path failover is disabled, the driver operates as if the device were not capable of using the advanced path failover feature.
When advanced failover is enabled for the library or tape drive, the device resets itself and must be opened using the device special file before the driver will recognize it as an advanced path failover device and use the failover features of the driver. Opening the device is generally done by the host applications.
You can enable or disable advanced path failover using the library web-based interface. For instructions, refer to the following sections of the HPE StoreEver Tape Libraries Failover User Guide:
• Enabling control path failover, under Configuring failover for the HPE StoreEver ESL G3 Tape Libraries
• Enabling data path failover, under Configuring failover for HPE StoreEver MSL6480 Tape Libraries
When advanced path failover is disabled, the passive control paths to the library will go into an error state (NO_HW) in the ioscan (1M) command output. These stale entries do not affect the function of the library. To clear these errors so the device can be accessed using its DSF:
1. On the HPE-UX host, run rmsf -H on the lunpath hardware paths that are in the NO_HW state. For example:
rmsf -H 0/4/0/0/0/1.0x50014380023560d4.0x1000000000000
2. Run ioscan -kfNH <HBA path>. For example:
ioscan -kfNH 0/4/0/0/0
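The two steps above can be planned in advance by scanning ioscan-style output for NO_HW lunpaths. The sketch below is a dry run against invented sample output; it only prints the rmsf commands it would run, so nothing is removed until an administrator executes them on a live host.

```shell
# Dry-run sketch: find lunpaths in NO_HW state from `ioscan -kfN`-style
# output and print the rmsf command that would clear each one.
# The sample output is invented; a live host would use:  ioscan -kfN | plan_cleanup
plan_cleanup() {
    awk '/NO_HW/ { print "rmsf -H " $3 }'
}

sample_scan='lunpath 12 0/4/0/0/0/1.0x50014380023560d4.0x1000000000000 eslpt CLAIMED LUN_PATH
lunpath 13 0/4/0/0/0/2.0x50014380023560d7.0x1000000000000 eslpt NO_HW LUN_PATH'

echo "$sample_scan" | plan_cleanup
```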
Hewlett Packard Enterprise recommends enabling or disabling advanced path failover only when the library is not opened by any applications. If advanced path failover is disabled while an application is accessing the library, all of the library's lunpaths will go offline and I/O requests to the library will fail.
For more detailed information about installing the HPE StoreEver Tape advanced path failover drivers for HPE-UX 11.31 Servers, refer to the HPE StoreEver Tape Libraries Failover User Guide, specifically the section titled Installing and using HPE-UX advanced path failover drivers.
HPE-UX Server best practices
HPE-UX 11.31 can experience poor I/O performance on VxFS file systems due to memory blocking during high system memory usage. The HPE-UX 11.31 kernel subsystems and file I/O data cache can consume up to 90 percent of system memory during normal operation. When a heavy file I/O application, such as data protection and archiving software, starts, memory usage can reach close to 100 percent. Under such conditions, if VxFS attempts to allocate additional memory for inode caching, memory blocking and subsequent poor file I/O performance can result. In extreme conditions, this scenario can cause data protection and archiving software to time out during file system reads, which could result in backup job failures.
Poor I/O performance resolution
To avoid backup job failures due to memory blocking, modify the kernel tunable parameter vx_ninode. The vx_ninode parameter determines the number of inodes in the inode table that VxFS uses for caching. By default, the size of the inode cache is decided (auto-tuned) at boot time by VxFS, depending on the amount of physical memory in the machine. When modifying the value of vx_ninode, HPE recommends the following:
Table 4 Tuning vx_ninode
PHYSICAL MEMORY OR KERNEL AVAILABLE MEMORY VXFS INODE CACHE (NUMBER OF INODES)
1 GB 16384
2 GB 32768
3 GB 65536
> 3 GB 131072
To determine the current value of vx_ninode, run the following at the shell prompt:
/usr/sbin/kctune vx_ninode
To set vx_ninode to 32768, run the following command at the shell prompt:
/usr/sbin/kctune vx_ninode=32768
Note: The kernel tunable parameters filecache_min and filecache_max control the amount of physical memory that can be used for caching file data during system I/O operations. By default, these parameters are automatically determined by the system to better balance memory usage among file system I/O intensive processes and other types of processes. The values of these parameters can be lowered to allow a larger percentage of memory to be used for purposes other than file system I/O caching. Whether to modify these parameters depends on the nature of the applications running on the system.
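The Table 4 recommendations can be captured in a small helper so the same value is applied consistently across hosts. The function name is illustrative; the thresholds and values come directly from Table 4 above.

```shell
# Sketch: map physical memory (in GB) to the recommended vx_ninode value
# from Table 4. A live HPE-UX host would then apply the result with:
#   /usr/sbin/kctune vx_ninode=$(recommend_vx_ninode <mem_gb>)
recommend_vx_ninode() {
    mem_gb=$1
    if   [ "$mem_gb" -le 1 ]; then echo 16384
    elif [ "$mem_gb" -le 2 ]; then echo 32768
    elif [ "$mem_gb" -le 3 ]; then echo 65536
    else                           echo 131072
    fi
}

recommend_vx_ninode 2    # prints 32768
```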
HPE-UX 11.23: Disabling rewind-on-close devices with st_san_safe
Turning on the HPE-UX 11.23 kernel tunable parameter st_san_safe disables tape DSFs that are rewind-on-close. This prevents utilities like mt from rewinding a tape that is in use by another utility.
Some applications or utilities require rewind-on-close DSFs (for example, the frecover utility that comes with HPE-UX). In this case, disabling rewind-on-close devices renders the utility unusable. Most data protection and archiving software, such as HPE Data Protector, can be configured to use SCSI reserve/release, which protects them from rogue rewinds by other utilities. Consider the requirements of your data protection and archiving environment when determining whether to enable st_san_safe.
To determine if rewind-on-close devices are currently disabled, enter:
/usr/sbin/kctune st_san_safe
If the value of st_san_safe is 1, rewind-on-close devices are disabled. If the value is 0, rewind-on-close devices are enabled. To disable rewind-on-close devices, enter:
/usr/sbin/kctune st_san_safe=1
Oracle Solaris Server
Solaris 10 Update 11 (SPARC), Solaris 10 Update 11 (x86/x64), Solaris 11.2 (SPARC), Solaris 11.2 (x64)
How to enable the iSCSI Software Initiator
1. For Solaris 10 Update 11 (SPARC) and 11.2 (SPARC), enable the iSCSI services using the command:
svcadm enable network/iscsi/initiator
2. Verify the iSCSI services are running:
svcs -a | grep 'iscsi/initiator'
online 10:10:28 svc:/network/iscsi/initiator:default
3. For earlier versions of Solaris, enable the iSCSI services using the command:
svcadm -v enable iscsi_initiator
svc:/network/iscsi_initiator:default enabled
4. Verify the iSCSI services are running:
svcs -a | grep iscsi_initiator
online 10:10:28 svc:/network/iscsi_initiator:default
The command-line tool for configuring the Solaris iSCSI Software Initiator is iscsiadm. Detailed instructions for iscsiadm can be found in the iscsiadm man pages. If using iscsiadm to configure the Solaris iSCSI Software Initiator, run the following command to find the iSCSI initiator node for the Solaris host:
iscsiadm list initiator-node
To discover available iSCSI target devices on an HPE Storage System, type the following commands, where x.x.x.x is the IP address of the HPE Storage System:
iscsiadm add discovery-address x.x.x.x:3260
iscsiadm list discovery-address
Discovery Address: x.x.x.x:3260
iscsiadm modify discovery -t enable
iscsiadm list discovery
List the configured iSCSI target devices using the following command:
iscsiadm list target
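The discovery sequence above can be wrapped in a dry-run script for review before it is executed. In this sketch, RUN=echo prints each command instead of running it (set RUN= on a live Solaris host), and the address 192.0.2.10 is a documentation placeholder, not a real storage system.

```shell
# Dry-run sketch of the iscsiadm discovery sequence described above.
# RUN=echo prints the commands; set RUN= (empty) on a live Solaris host
# to actually execute them. The IP address is a placeholder.
RUN=echo

discover_targets() {
    ip=$1
    $RUN iscsiadm add discovery-address "$ip:3260"
    $RUN iscsiadm modify discovery -t enable
    $RUN iscsiadm list target
}

discover_targets 192.0.2.10
```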
Oracle Solaris Server best practices
Troubleshooting with the cfgadm utility
• Getting the status of FC devices using cfgadm:
cfgadm -al
In the example output for the above command, a media changer appears at LUN 0 for the 100000e0022229fa9 WWN, and tape and disk devices appear at LUN 0 for other WWNs. The devices are connected, have been configured, and are ready for use.
The cfgadm -al -o show_FCP_dev command can be used to show the devices for all LUNs of each Ap_Id
• Fixing a device with an "unusable" condition
If the condition field of a device in the cfgadm output is "unusable", the device is in a state in which the server cannot use it. This may have been caused by a hardware issue. In this case, do the following to resolve the issue:
– Resolve the hardware issue so the device is available to the server.
– After the hardware issue has been resolved, use the cfgadm utility to verify device status and to mend the status if necessary.
• Use cfgadm to get device status:
cfgadm -al
– For a device that is "unusable", use cfgadm to unconfigure the device and then re-configure it. For example (this is an example only; your device WWN will be different):
cfgadm -c unconfigure c4::100000e0022286ec
cfgadm -f -c configure c4::100000e0022286ec
– Use cfgadm again to verify that the condition of the device is no longer "unusable":
cfgadm -al
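The mend procedure above can be planned by scanning cfgadm-style output for "unusable" devices. The sketch below is a dry run: the sample output and Ap_Ids are invented, and the script only prints the unconfigure/configure pairs an administrator would then review and run.

```shell
# Dry-run sketch: scan `cfgadm -al`-style output for devices whose
# condition is "unusable" and print the unconfigure/configure pair that
# would mend each one. Sample output is invented for illustration.
plan_mend() {
    awk '$NF == "unusable" {
        print "cfgadm -c unconfigure " $1
        print "cfgadm -f -c configure " $1
    }'
}

sample='c4::100000e0022286ec tape connected configured unusable
c4::100000e0022229fa tape connected configured ok'

echo "$sample" | plan_mend
```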
IBM AIX Server
AIX 6.1 (TL9), AIX 7.1 (TL3)
Installing the HBA device driver
Install the IBM (5729, 5735, 5273, 5758, 5759, 5773, 5774) HBA and restart the server.
1. Ensure that the HBA is recognized. At the shell prompt, type:
lsdev -Cc adapter
There is a line in the output similar to the following:
fcs0 Available 1D-08 FC Adapter
If the adapter is not recognized, check that the correct HBA fileset (driver) is installed:
6228: lslpp -L | grep devices.pci.df1000f7
6239: lslpp -L | grep devices.pci.df1080f9
5716: lslpp -L | grep devices.pci.df1000fa
5759: lslpp -L | grep devices.pci.df1000fd
5773: lslpp -L | grep devices.pciex.df1000fe
5774: lslpp -L | grep devices.pciex.df1000fe
There are lines in the output for lslpp similar to the following for a 6239 HBA:
devices.pci.df1080f9.diag 5.1.0.1 C F PCI-X FC Adapter Device
devices.pci.df1080f9.rte 5.1.0.1 C F PCI-X FC Adapter Device
2. For information about the HBA, such as the WWN, execute the following command:
lscfg -vl fcs0
The output includes the adapter details, including the Network Address (WWN).
3. To see the version of microcode (firmware) being run by the HBA, use the following command:
lsmcode -c -d <device>
Microcode and other updates can be found at ibm.com/support/fixcentral.
4. After the HBA has successfully logged into the SAN fabric and the necessary zoning is configured, configure the HBA and devices within the fabric. At the prompt, type:
cfgmgr -l <devicename> -v
Note: Running cfgmgr without a -l argument may generate a "devices.fcp.changer" error. This is a result of the cfgmgr device scan receiving a response from the auto-changer device, for which AIX does not have a specific driver. Under these conditions, the error message does not indicate a problem and is informational only.
5. Within the command, <devicename> is the name from the output of the lsdev command in step 1, such as fcs0.
6. To ensure all tape device files are available, at the prompt type:
lsdev -HCc tape
7. By default, AIX creates tape devices with a fixed block length. To change the devices to variable block length, at the prompt type:
chdev -l <tapedevice> -a block_size=0
8. Configuration of the tape devices (where tape devices are rmt0, rmt1, and so on) is complete.
Note: HPE LTO tape drives use the IBM host tape driver. When properly configured, a device listing will show the tape device as follows:
For IBM native HBAs: Other FC SCSI Tape Drive
For non-IBM native HBAs: Other SCSI Tape Drive
9. To configure Fast I/O Failure for Fibre Channel devices after link events in the SAN, change the fast fail parameter as in the example below:
chdev -l fscsiX -a fc_err_recov=fast_fail
Within the command, the X in fscsiX is the same number as in the adapter name from the output of the lsdev command in step 1 (for example, fcs0 corresponds to fscsi0).
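Step 7's per-device chdev can be planned for every rmt device in one pass. The sketch below is a dry run against invented lsdev-style output; it only prints the chdev commands, which an administrator would review before running on a live AIX host.

```shell
# Dry-run sketch: for each rmt device in `lsdev -HCc tape`-style output,
# print the chdev command that switches it to variable block length.
# The sample output is invented; on a live AIX host you would use:
#   lsdev -HCc tape | plan_block_size
plan_block_size() {
    awk '$1 ~ /^rmt/ { print "chdev -l " $1 " -a block_size=0" }'
}

sample='rmt0 Available 1D-08-02 Other FC SCSI Tape Drive
rmt1 Available 1D-08-03 Other FC SCSI Tape Drive'

echo "$sample" | plan_block_size
```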
IBM AIX Server best practices
Persistent binding
To prevent device shifting after a host reboot, you can enable the dynamic tracking feature in earlier versions of AIX. AIX 7 and later should have this parameter enabled by default.
To enable dynamic tracking of FC devices, set this attribute to dyntrk=yes as shown in the example:
chdev -l fscsiX -a dyntrk=yes
Within the command, the X in fscsiX is the same number as in the adapter name from the output of the lsdev command in step 1 (for example, fcs0 corresponds to fscsi0).
Note: For an IBM Virtual I/O Server (VIOS) running AIX logical partitions (LPARs), when using N-Port ID Virtualization (NPIV) with AIX LPARs it is strongly recommended to upgrade VIOS to version 2.2.3.4 or greater.
Virtual machine support
VM software is used for partitioning, consolidating, and managing computing resources, allowing multiple unmodified operating systems and their applications to run in VMs that share physical resources. Each VM represents a complete system with processors, memory, networking, storage, and BIOS. See table 5 for tape and disk support for virtualization products.
Table 5. VM tape/VTL/NAS support
VM PRODUCT | STOREEVER DIRECT ATTACHED SCSI | STOREEVER DIRECT ATTACHED SAS | STOREEVER FC & FCOE SAN | STOREONCE VTL | STOREONCE ISCSI VTL | STOREONCE CATALYST OVER ETHERNET (COE) | STOREONCE CATALYST OVER FIBRE CHANNEL (COFC) | STOREONCE NAS | SUPPORT NOTES
Citrix XenServer Host No No No support statement for tape at this time
Citrix XenServer Guest VM
No Yes Yes No Yes For iSCSI tape devices the iSCSI Software Initiator must run in the VM operating system
D2D SAN shares must be accessed directly in the VM operating system not attached through the hypervisor
HPEVM Host Yes No Yes Yes Yes No Yes Tape drivemedia changer must not be attached to a guest VM while being used by the host
HPEVM Guest VM Yes No Yes Yes Yes No Yes Tape drivemedia changer must only be attached to a single guest VM at a time
Hyper-V Host Yes Yes Yes Yes Yes No Yes
Hyper-V Guest VM No No No Yes Yes No Yes For iSCSI tapemedia changer devices the iSCSI Software Initiator must run in the VM operating system
D2D NAS shares must be accessed directly in the VM operating system not attached through the hypervisor
VMware ESX Host Yes No No7 No7 No No No Direct attached SCSI tape drive is multi-target only (no multi-LUN) and must not be attached to a VM while being used by the host HPE does not test or validate direct attached SCSI tape on VMware ESXESXi hosts and does not provide support for tape drives and tape libraries in such configurations8
7. SAN tape devices (FC and iSCSI) are not supported directly by the VMware ESX host. With vStorage API for Data Protection, use a backup server and VM software snapshots to allow FC and iSCSI backups.
8. For ESX 4.1 Server tape support, see the ESX 4.1 Fibre Channel SAN Configuration Guide. For ESX 5.0 Server tape support, see the ESXi 5.0 vSphere Storage Guide. For ESX 5.1 Server tape support, see the ESXi 5.1 vSphere Storage Guide. For ESX 5.5 Server tape support, see the ESXi 5.5 vSphere Storage Guide.
VMware Guest VM Yes No No Yes Yes No9 Yes Direct attached SCSI tape drive is multi-target only (no multi-LUN) and must only be attached to a single VM at a time
For iSCSI tapemedia changer devices the iSCSI Software Initiator must run in the VM operating system
D2D NAS shares must be accessed directly in the VM operating system not attached through the hypervisor
HPE does not test or validate direct attached SCSI tape on VMware guest VMs and does not provide support for tape drives and tape libraries in such configurations
VMware vStorage API for Data Protection
Yes Yes Yes Yes Yes No Yes FC SANs and shared tape devices are limited to a physical backup server
Note: Be sure to do the following:
• Refer to your data protection and archiving software documentation for supported VM backup methods
• Refer to the VM documentation for supported backup devices
VMware Server

Note: VMware does not support ESXi SAN-attached tape devices. VMware vStorage APIs for Data Protection (VADP) with an off-host backup server can be used to manage SAN devices.
• VADP offloads backup responsibility from ESXi hosts to one or more dedicated backup servers, reducing the load on ESXi hosts. VADP provides full-image backup and restore capabilities for all VMs, and file-based backups for Microsoft Windows and Linux VMs.
• VMs can also be set up as backup servers with locally mapped NAS shares or iSCSI backup devices. Refer to your data protection and archiving software documentation for VM support.
• VMs can also be set up for LAN backup the same as a regular client. Refer to your data protection and archiving software documentation for details.
• For recommendations on VMware VM backup and recovery to HPE StoreOnce Backup, go to hpe.com/storage/BURACompatibility, scroll down to Data Agile BURA Solution White Papers, then click the VMware hyperlink across from "White Papers: Databases and Virtual Machines" to view the associated white papers.
9. Yes, when using HPE StoreOnce Recovery Manager Central (RMC) only.
HPE Integrity Virtual Machines
Hewlett Packard Enterprise supports, certifies, and sells HPE Integrity Virtual Machines (HPEVM) virtualization software on HPE Integrity servers.
HPEVM is an application installed on an HPE-UX Server that allows multiple unmodified operating systems (HPE-UX, Windows, and Linux) and their applications to run in VMs that share physical resources.
The HPE Virtual Server Environment (VSE) for HPE Integrity provides an automated infrastructure that can adapt in seconds with mission-critical reliability. HPE VSE allows you to optimize server utilization in real time by creating virtual servers that can automatically grow and shrink based on business priorities and service levels.
Note: The HPE Integrity VM host and VMs do support FC SAN connected tape, Virtual Library Systems (VLS) devices, and HPE StoreOnce backup systems.
• Off-host backups using HPE storage array hardware mirroring or snapshots can be used to shorten backup windows and off-load resources required for backup.
• VMs can also be set up for LAN backup the same as a regular client or media host. Refer to your data protection and archiving software documentation for details.
Microsoft Hyper-V
Note: Hewlett Packard Enterprise does not test or support Hyper-V VMs with SAN or direct-attach tape drives. The Hyper-V host or a backup server can be used to manage such devices.
• The Volume Shadow Copy Service (VSS) Hyper-V writer can be used to quiesce Windows VMs and create a snapshot on the Hyper-V host volume. VMs that cannot be quiesced can be placed in the Saved state before snapshot creation. The snapshots are then used for image or file backup of the VMs. If a VM was placed in the Saved state, Hyper-V will return the VM to its original state. Review your data protection and archiving software documentation for details.
• VMs can also be set up for LAN backup the same as a regular client. Refer to your data protection and archiving software documentation for details.
• VMs can also be set up as backup servers with locally mapped NAS shares or iSCSI backup devices. Refer to your data protection and archiving software documentation for VM support.
copy Copyright 2015 Hewlett Packard Enterprise Development LP The information contained herein is subject to change without notice The only warranties for Hewlett Packard Enterprise products and services are set forth in the express warranty statements accompanying such products and services Nothing herein should be construed as constituting an additional warranty Hewlett Packard Enterprise shall not be liable for technical or editorial errors or omissions contained herein
Citrix is a registered trademark of Citrix Systems, Inc. and/or one or more of its subsidiaries, and may be registered in the United States Patent and Trademark Office and in other countries. Linux is the registered trademark of Linus Torvalds in the U.S. and other countries. McAfee is a trademark or registered trademark of McAfee, Inc. in the United States and other countries. Microsoft, Windows, and Windows Server are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries. Oracle is a registered trademark of Oracle and/or its affiliates. Red Hat is a registered trademark of Red Hat, Inc. in the United States and other countries. SAP HANA is the trademark or registered trademark of SAP SE in Germany and in several other countries. VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions.
4AA5-7983ENW December 2015
Installing backup software and patches
After all components on the SAN are logged in and configured, the system is ready for the installation of any supported backup software. Refer to the installation guide for your particular software package, or contact the vendor for detailed installation procedures and requirements. After installing the backup software, check with the software vendor for the latest updates and patches. If any updates or patches exist for your backup software, install them now.
Learn more at hpe.com/storage/StoreEver
Preparing SAN switches and hosts for failover with HPE StoreEver Tape Libraries
Hewlett Packard Enterprise provides high-availability failover features for HPE StoreEver ESL G3 Tape Libraries and the HPE StoreEver MSL2024, MSL4048, MSL6480, MSL8048, and MSL8096 Tape Libraries with HPE StoreEver LTO-5 Ultrium and later generation FC tape drives. Failover features are not supported on the HPE StoreEver EML and ESL E-Series Tape Libraries.
Basic failover
• Supported on HPE StoreEver LTO-5 and LTO-6 Ultrium FC tape drives, as data path failover requires a dual-ported drive
• Supported by a combination of tape drive and library firmware features that create a new FC path to a drive or library if the original path is lost
• Most applications recognize the new path, and some applications will automatically retry commands after the original path is lost. Some applications might require user intervention to begin using the new path
• Is available for the HPE StoreEver MSL2024, MSL4048, MSL6480, MSL8048, and MSL8096 and StoreEver ESL G3 Tape Libraries
Advanced failover
• Supported only on HPE StoreEver LTO-6 Ultrium FC tape drives
• Requires host driver support, in addition to tape drive and library firmware features, to manage multiple paths across multiple SANs, present a single drive or library path to applications, and automatically transfer commands to the new path if the original path is lost
• The transfer to the failover path is invisible to most applications, avoiding the need for user intervention
• Is available for the HPE StoreEver ESL G3 and StoreEver MSL6480 Tape Libraries
• Is not available for the StoreEver 1/8 G2 Tape Autoloader nor the StoreEver MSL2024, MSL4048, MSL8048, or MSL8096 Tape Libraries
LTO-7 failover
• Supported on LTO-7 and later generation FC tape drives
• Requires host driver support, in addition to tape drive and library firmware features, to manage multiple paths across multiple SANs, present a single drive or library path to applications, and automatically transfer commands to the new path if the original path is lost
• The transfer to the failover path is invisible to most applications, avoiding the need for user intervention
• Is available only for ESL G3 Tape Libraries; MSL6480 support is expected in 2016
Prerequisites for using basic data and control path failover
• The library drive ports must be attached to an FC SAN that supports N_Port ID Virtualization (NPIV), and NPIV must be enabled (most recent Brocade switch firmware versions have NPIV enabled by default). Refer to the vendor documentation for your switch regarding commands to verify whether NPIV is enabled.
– To enable or verify NPIV on a Brocade switch running Fabric OS version 6 or newer using the Brocade Web Tools GUI, click the Port Admin tab and select the FC or FCoE port you want to configure. From the context tabs at the top of the Web Tools GUI, select View, then choose Advanced. For the selected port, under the General tab you should see all of the details for the port, including NPIV Enabled with a value of true. There should also be an NPIV tab with a drop-down list of Enable, Disable, and Max Login. Select Enable if NPIV was not already enabled.
– While all current Cisco switches support NPIV, most do not have NPIV enabled by default. The Cisco MDS 9148 may disable NPIV when power cycled. To enable NPIV on a Cisco switch, use Cisco Device Manager > Admin > Feature Control, or use the Cisco CLI commands to show NPIV status and to enable NPIV.
• StoreEver Ultrium tape drives with an 8 Gb connection need the fill word set to arb(ff). With 4 Gb connections, set the fill word to idle. Refer to the vendor documentation for your switch regarding commands to set the fill word for a single port. For a Brocade switch running Fabric OS version 6 or newer, the following command can be used to verify the configuration, including the fill word, for port 27:
portcfgshow 27
To set the fill word for port 27 to arb(ff) using numeric mode 3 (see note 5), run the following command:
portcfgfillword 27 3
To set the fill word for port 27 to idle using the numeric mode notation, run the following command:
portcfgfillword 27 0
Refer to the section titled Hardware-specific requirements for basic failover in the HPE StoreEver Tape Libraries Failover User Guide for Brocade and Cisco firmware recommendations
• The drive port FC topology must be in Fabric mode, and the switch side must be set to F-port or Auto Sense
• The host FC port must have a physical path to both the first port and secondary (passive) port on the FC drive
• For basic data path failover with port zoning, the host FC port and both FC ports on the drive need to be within the same zone for failover to work
• For basic data path failover with World Wide Port Name (WWPN) zoning, the host FC WWPN and a single FC port on the drive need to be in the zone
• For basic control path failover with port zoning, the host FC ports and the FC ports on both the active and secondary drive chosen for basic control path failover need to be in the same zone
• For basic control path failover with WWPN zoning, the host FC WWPN and the basic control path failover WWPN assigned to the library must be in the same zone. The library WWPN is not the same as the WWPN of the drive that is hosting the library
• Hosts connecting to the library may need to be rebooted if the operating system does not support dynamic device detection
• Applications on hosts may need to be reconfigured to recognize the new library World Wide Name (WWN)
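The NPIV prerequisite above can be checked from captured switch output. The sketch below parses a portcfgshow-style fragment; the sample text is invented for illustration, and on a real Brocade switch you would pipe the actual portcfgshow output into the function (exact field labels can vary by Fabric OS release).

```shell
# Sketch: check whether NPIV is enabled for a port from `portcfgshow`-style
# output. The sample fragment is invented; on a real Brocade switch you
# would pipe the output of `portcfgshow 27` into npiv_enabled.
npiv_enabled() {
    grep -q 'NPIV capability.*ON'
}

sample='Area Number: 27
Octet Speed Combo: 1(16G|8G|4G)
NPIV capability .. ON
Fill Word(Current) 3(A-A)'

if echo "$sample" | npiv_enabled; then echo "NPIV enabled"; fi
```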
Prerequisites for using LTO-7 failover or advanced data and control path failover
• For LTO-7 failover when using two FC host bus adapters in a server, both FC HBAs must be from the same manufacturer. The LTO-7 failover driver does not work correctly if the HBAs are different.
• For advanced data path failover and LTO-7 failover, the host must have a physical path to both the first port and secondary port on the FC drive. For full failover capabilities, the two drive FC ports should be connected to different switches, and the host FC ports should also be connected to the same two switches.
• All drive ports must be zoned in the respective switches
For detailed information on using failover with HPE StoreEver Tape refer to the HPE StoreEver Tape Libraries Failover User Guide
5. Numeric mode 3 attempts hardware arbff-arbff (mode 1) first. If the attempt fails to go into active state, this command executes software idle-arb (mode 2). Mode 3 is preferable to modes 1 and 2 as it captures more cases.
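The speed-to-fill-word mapping described above can be captured in a small helper so that ports are configured consistently. This is an illustrative sketch only: the function name is invented, and the mode numbers (arb(ff) = mode 3, idle = mode 0) follow the text and note 5 above.

```shell
# Sketch: map drive link speed (Gb/s) to the portcfgfillword numeric mode
# described in the text: 8 Gb links use arb(ff) (mode 3), 4 Gb links use
# idle (mode 0). Other speeds are rejected.
fillword_mode() {
    case $1 in
        8) echo 3 ;;
        4) echo 0 ;;
        *) echo "unsupported speed: $1" >&2; return 1 ;;
    esac
}

fillword_mode 8    # prints 3
# On the switch, an administrator would then run, e.g.:
#   portcfgfillword 27 "$(fillword_mode 8)"
```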
Native backup commands
Native backup commands (see table 2) are limited in their ability to handle complicated backups and restores in multi-host SANs. They are not guaranteed to provide robust error handling or to increase performance throughput. Use of these commands in a user-developed script is not recommended with HPE StoreEver Tape Libraries or HPE StoreOnce and HPE StoreAll disk-based backup solutions in a shared storage environment.
Caution: Native backup commands do not support SCSI reserve/release; therefore, using backup commands or scripts during backup or restore operations could result in data loss in an environment where the devices used for backups are shared.
Table 2 Supported native commands
SUPPORTED UTILITIES HPE-UX SOLARIS AIX LINUX WINDOWS
Tape drive commands
tar Yes Yes Yes Yes No
dd (dump) Yes Yes Yes Yes No
pax Yes Yes Yes Yes No
mt Yes Yes Yes Yes No
make_tape_recovery Yes No No No No
Library and auto-changer commands
mc Yes No No No No
mtx No No No Yes No
Linear Tape File System
The Linear Tape File System (LTFS) makes tape self-describing, file-based, and easy to use, while allowing users to employ standard file operations to access, manage, and share files on tape with an interface that behaves like a hard disk. In addition, LTFS provides the ability to share data across platforms, as you would with a USB drive or memory stick. LTFS is currently supported on Windows, Mac, and Linux. HPE StoreOpen Standalone and HPE StoreOpen Automation are a set of utilities that provide easy installation, configuration, and management of a tape drive or library for use with LTFS. To use HPE StoreOpen, simply connect your tape drive or tape library to a supported host, following the information in the respective sections within this guide, prior to installing the HPE StoreOpen software. Information and download links for native LTFS drivers, source code, HPE StoreOpen Standalone, and HPE StoreOpen Automation can be found at hpe.com/storage/LTFS.
Software utilities that may disrupt solution connectivity
Software utilities common to SAN environments can interfere with backup and restore operations. These utilities include system management agents, monitoring software, and tape drive and system configuration utilities. A list of known software utilities and the operating systems on which they are found is shown in table 3.
Caution: Use of these software utilities during backup or restore operations could result in data loss.
Table 3 HBA and software utilities
WINDOWS HPE-UX LINUX SOLARIS AIX
HBA configuration utilities
Emulex OneCommand Manager (OCM) HBAnyware
Emulex OCM HBAnyware Emulex OCM HBAnyware
QLogic QConvergeConsole (QCC)
QCC QCC
QLogic Host Connectivity Manager (HCM)
QLogic HCM
Broadcom Advanced Control Suite 3 (BACS3)
BACS3
Other software utilities
HPE Library and Tape Tools utility
HPE Library and Tape Tools utility
HPE Library and Tape Tools utility
HPE Library and Tape Tools utility
HPE Systems Insight Manager (SIM) management agents
HPE-UX 11.23
System Administration Manager (SAM)
HPE-UX 11.31
System Management Homepage (SMH)
SCSI Generic (SG) commands System Management Interface Tool (SMIT)
Windows Server backup
Removable Storage Manager (RSM)6
FC/FCoE switch zoning recommendations
Due to the complexities of multi-hosting tape devices on SANs, Hewlett Packard Enterprise strongly recommends using switch zoning tools to keep the backup, restore, and archive environment simple and less susceptible to the effects of "chatty", changing, or problematic SANs. Zoning provides a way for servers, disk arrays, and tape controllers to see only the hosts and targets they need to see and use.
The benefits of zoning include, but are not limited to:
• The potential to greatly reduce target and logical unit number (LUN) shifting
• Reducing stress on backup devices by polling agents
• Reducing the time it takes to debug and resolve anomalies in the backup, restore, and archive environment
• Reducing the potential for conflict with untested third-party products
6 Removable Storage Manager is no longer available as of Windows 7 and Windows 2008 R2
Zoning may not always be required for configurations that are small or simple (i.e., a single switch or single inter-switch link (ISL)). Zoning can be helpful in larger SANs for simplifying device discovery and reducing chatter between devices. Hewlett Packard Enterprise recommends the following for determining how and when to use zoning:
• Use zoning by HBA port. Zoning by HBA port is implemented by creating a specific zone for each server or host by World Wide Port Name (WWPN) and adding only those storage elements to be utilized by that host. Zoning by HBA port prevents a server from detecting any other devices or servers on the SAN, and it simplifies the device discovery process.
• Disk and tape on the same HBAs is supported. For larger SAN environments, it is recommended to also add storage-centric zones for disk and backup targets. This type of zoning is done by adding overlapping zones, with disk and backup targets separated. See figure 1 and figure 2 below.
• FC zoning can be implemented using physical switch port numbers, WWN IDs, or user-defined switch aliases. It is important to note that physical ports and aliases can change due to recabling or switch configuration restores, but WWN IDs do not. Hewlett Packard Enterprise recommends zoning using WWN IDs.
The figures below represent example configurations but are not exhaustive
Figure 1 Storage centric zoning same HBA port (overlapping zones)
Figure 2 Storage centric zoning redundant paths Also applies to dual-port HBAs and tape drives
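As a concrete sketch of the WWN-based, zone-per-HBA-port approach recommended above, the commands on a Brocade Fabric OS switch might look like the following (all alias, zone, and configuration names and both WWPNs are hypothetical, and other switch vendors use different syntax):

```
alicreate "host1_hba0", "10:00:00:90:fa:12:34:56"
alicreate "tape_drv1", "50:01:10:a0:00:9b:2e:74"
zonecreate "host1_tape_z", "host1_hba0; tape_drv1"
cfgadd "san_cfg", "host1_tape_z"
cfgenable "san_cfg"
```

Each host HBA port gets its own zone containing only the targets that host should use, so device discovery on that host sees nothing else.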
Configuration and operating system details
Windows Server
Windows 2008, Windows 2008 R2, Windows 2012, and Windows 2012 R2
Installing the HBA device driver
All HPE ProLiant server software, firmware, and drivers for Windows servers can be updated using the latest HPE Service Pack for ProLiant (SPP) from the HPE support website.
1 From an Internet browser, go to hpe.com
2 Click on Support
3 Under Product Support click HPE Servers Storage and Networking
4 In the Enter product name or number box enter Service Pack for ProLiant
5 Click Get drivers software amp firmware
6 Select the Windows Server operating system version that is installed on the ProLiant system that will be updated
7 Expand Application (Entitlement Required) - System Management, then select the HPE Service Pack for ProLiant (American, International) hyperlink
8 Below the details for the software, you should notice several tabs. Select the Release Notes tab, then scroll to verify if there are any Upgrade Requirements or to view additional information.
9 Select the Installation Instructions tab to verify how to install the SPP. Be sure to copy or save the installation instructions.
Note: A hyperlink to the HPE Service Pack for ProLiant Release Notes is provided within the Release Notes tab. The HPE Service Pack for ProLiant Release Notes provide detailed instructions regarding the SPP, including a summary of changes, compatibility details for migrating from an older version of the SPP, supported operating systems, requirements, component prerequisites, deployment instructions, and known limitations.
10 Click on the Obtain software hyperlink above the various tabs to download the Service Pack for ProLiant to your server
Note: To download the HPE Service Pack for ProLiant, you must have:
1 An HPE Passport account (a sign-in link is provided)
2 Either a warranty, HPE Care Pack, or support agreement linked to your HPE Support Center profile
Click on the various links that are provided for more information on how warranties, HPE Care Packs, and support agreements enable access to select downloads or site functions.
11 Booting your Windows Server to the SPP (offline mode) allows you to upgrade firmware for any or all components that are flagged as requiring an upgrade. Booting to the operating system and then running the SPP (online mode) allows you to install any or all Hewlett Packard Enterprise related drivers and software packages.
Note Please refer to the HPE Service Pack for ProLiant Release Notes which are referenced above if any issues are encountered when installing the SPP Specifically review the sections Deployment Instructions and Components Changes
12 A reboot might be required following the SPP installation
To manually install the latest HPE-supported Brocade Emulex or QLogic driver kit from the HPE support website
1 From an Internet browser, go to hpe.com
2 Click on Support
3 Under Product Support click HPE Servers Storage and Networking
4 In the Enter product name or number box, enter your HBA model (for example, AP770A, AJ763A, AJ764A, CN1000Q, SN1000Q, CN1100E, SN1100E, or NC553m) and click Go
5 Click Get drivers software amp firmware
6 Select the Windows Server operating system version that is installed on the ProLiant system in which the HBA is installed
7 Expand Driver - Storage Fibre Channel and click on the appropriate driver hyperlink (if more than one version of the driver is listed, verify the latest supported version listed in the latest HPE Data Agile BURA Compatibility Matrix)
8 Below the details for the driver, you should notice several tabs. Select the Release Notes tab, then scroll to verify if there are any Upgrade Requirements, Supported Devices and Features, or to view additional information.
9 Select the Installation Instructions tab to verify how to install the driver. Be sure to copy or save the installation instructions.
10 Click on the Download tab to copy the file to your server
11 A reboot might be required following the driver installation
Installing and configuring Microsoft iSCSI Initiator
Microsoft iSCSI Initiator is installed natively on Windows Server 2008, Windows Server 2008 R2, Windows Server 2012, and Windows Server 2012 R2. On these operating systems, no installation steps are required.
To connect to an iSCSI target device by using Quick Connect
1 Click Start type iSCSI in Start Search and then under Programs click on iSCSI Initiator
2 On the User Account Control page click Continue
3 If this is the first time that you have launched Microsoft iSCSI Initiator, you receive a prompt that says the Microsoft iSCSI service is not running. You must start the service for Microsoft iSCSI Initiator to run correctly. Click on Yes to start the service. The Microsoft iSCSI Initiator Properties dialog box opens, and the Targets tab is displayed.
4 On the Targets tab, type the name or the IP address of the target device in the Quick Connect text box, and then click Quick Connect. The Quick Connect dialog box is displayed.
5 If multiple targets are available at the target portal that is specified, a list is displayed. Click the desired target, and then click Connect.
6 Click Done
To connect to an iSCSI target by using advanced settings
1 Click Start type iSCSI in Start Search and then under Programs click on iSCSI Initiator
2 On the User Account Control page click Continue
3 If this is the first time that you have launched Microsoft iSCSI Initiator, you receive a prompt that says the Microsoft iSCSI service is not running. You must start the service for Microsoft iSCSI Initiator to run correctly. Click on Yes to start the service. The Microsoft iSCSI Initiator Properties dialog box opens, and the Targets tab is displayed.
4 Click the Discovery tab
5 To add the target portal, click Discover Portal, and then in the Discover Portal dialog box, type the IP address or name of the target portal to connect to. If desired, you can also type an alternate TCP port to be used for the connection.
6 Click OK
Installing the HPE StoreEver Tape drivers
Both the HPE tape and HPE changer drivers for Windows must be installed before the advanced path failover drivers are installed. The tape and changer drivers bundle can be downloaded and then installed as follows:
1 Go to hpe.com/storage/tapecompatibility
2 Under Tape tools select HPE StoreEver Tape Drivers
3 A webpage will open with RECOMMENDED HPE StoreEver Tape Drivers for Windows displayed
4 Below the details for the driver, you should notice several tabs. Select the Release Notes tab, then scroll to verify if there are any Upgrade Requirements or to view additional information.
5 Select the Installation Instructions tab to verify how to install the driver. Be sure to copy or save the installation instructions.
6 Click on the Download tab then save the file
7 Follow the installation instructions from step 5 to install the HPE tape and HPE changer drivers
8 After installation of the tape and changer drivers, use Windows Device Manager to confirm that all of the configured paths are accessible to the operating system. If the expected number of paths is not available, check the host and SAN configuration. After all of the expected paths are available to the host, the advanced path failover drivers can be installed.
Installing the HPE StoreEver Tape advanced path failover drivers
Windows (2008 R2, 2012, and 2012 R2)
1 Go to hpe.com/support/storage
2 Select Tape Storage
3 Click Enterprise Class Tape Libraries (for ESL G3) or Tape Libraries (for MSL6480)
4 Click HPE StoreEver ESL G3 Tape Libraries or HPE StoreEver MSL6480 Tape Library
5 With the Download options tab selected click Get drivers software amp firmware
6 For the ESL G3, select your product. For the MSL6480, skip to the next step.
7 Under Operating systems select OS Independent
8 Click DrivermdashStorage Tape
9 Click Obtain software for the HPE StoreEver High Availability Failover Driver for your Windows operating system
10 Click Select to continue. An HPE Passport account is required.
11 After logging in using your HPE Passport, complete the required fields, then read and accept the software license agreement. Click Next.
12 On the following page select the Download tab for the HPE StoreEver High Availability Failover Driver for the version of Windows that is installed on your server
13 If you saved the file, double-click on the file to launch the installer for the Tape Upper Bus Storage Filter driver.
14 Restart when requested
15 After the system restarts, the installer will continue installing the Tape Multi-Path Intermediate Class driver. The installation process creates the directory C:\Program Files\Hewlett-Packard\Failover.
For more detailed information related to installing the HPE StoreEver Tape advanced path failover drivers for Windows or the HPE StoreEver Tape drivers for Windows, refer to the HPE StoreEver Tape Libraries Failover User Guide, specifically the section titled Installing and using Windows advanced path failover drivers.
Windows Server best practices
Persistent binding
Target and LUN shifting can occur with Windows operating systems when disk or tape devices are connected or disconnected, a device is busy during discovery, or a device failure occurs, causing that device to no longer be seen on the SAN. Hewlett Packard Enterprise strongly recommends using zoning by HBA port to resolve target and LUN shifting. Refer to the earlier section FC/FCoE switch zoning recommendations for more on zoning by HBA port.
The method in which the Windows operating system enumerates devices is the cause of most target and LUN shifting. Windows enumerates devices as they are discovered during a scan sequence. They are enumerated with device handles such as TAPE0, TAPE1, and so on. The Windows device scan sequence goes in the order of bus, target, and LUN:
• Bus is the HBA PCI slot
• Target is representative of a WWN
• LUN is representative of a device behind the WWN
The order of discovery is:
• The lowest bus (bus 0)
• Target 0 on bus 0
• The LUNs on target 0
• Target 1, and so on, until all targets connected to that HBA are discovered
The process continues on to the next HBA and its targets and LUNs. A common cause for device shifting is a busy tape device. A busy tape device cannot respond in time for Windows to enumerate it. The device is essentially skipped in the enumeration sequence, thus shifting all other device numbers.
Note: Emulex OneCommand Manager Application Kit and QLogic QConvergeConsole Utility both have proprietary persistent binding options. Review the appropriate vendor documentation for details.
Data protection and archiving software can also communicate with a tape device by using the Windows device name. As noted, the device name may shift and cause a problem for the data protection and archiving software. Some data protection and archiving software monitors for this condition and will adjust accordingly. Other data protection and archiving software must wait for a server reboot and subsequently scan for devices, or must be manually reconfigured to match the current device list. If your data protection and archiving software requires persistent device mapping, use the software's device configuration wizard to ensure proper configuration.
Red Hat and SUSE Linux Server
RHEL 5 Update 11 (AS/ES/WS), RHEL 6 Update 6 (AS/ES/WS), RHEL 7 (AS/ES/WS), SLES 11 SP3 (x86/x64), SLES 12
Note: Hewlett Packard Enterprise recommends installing the kernel development option (source code) when installing any Linux server. Availability of source code ensures the ability to install additional device support software that will be compiled into the kernel.
Installing the HBA drivers
All HPE ProLiant server software, firmware, and drivers can be updated using the latest SPP from the HPE support website.
1 From an Internet browser, go to hpe.com
2 Click on Support
3 Under Product Support click HPE Servers Storage and Networking
4 In the Enter product name or number box enter Service Pack for ProLiant
5 Click Get drivers software amp firmware
6 Select the Linux Server operating system version that is installed on the ProLiant system that will be updated
7 Under Application (Entitlement Required) - System Management, select the HPE Service Pack for ProLiant (American, International) hyperlink
8 Below the details for the software, you should notice several tabs. Select the Release Notes tab, then scroll to verify if there are any Upgrade Requirements or to view additional information.
9 Select the Installation Instructions tab to verify how to install the SPP. Be sure to copy or save the installation instructions.
Note: A hyperlink to the HPE Service Pack for ProLiant Release Notes is provided within the Release Notes tab. The HPE Service Pack for ProLiant Release Notes provide detailed instructions regarding the SPP, including a summary of changes, compatibility details for migrating from an older version of the SPP, supported operating systems, requirements, component prerequisites, deployment options, and known limitations.
10 Click on the Obtain software hyperlink above the various tabs to download the Service Pack for ProLiant to your server
Note: To download the HPE Service Pack for ProLiant, you must have:
1 An HPE Passport account (a sign-in link is provided)
2 Either a warranty, HPE Care Pack, or support agreement linked to your HPE Support Center profile
Click on the various links that are provided for more information on how warranties, HPE Care Packs, and support agreements enable access to select downloads or site functions.
11 Booting your server to the SPP (offline mode) allows you to upgrade firmware for any or all components that are flagged as requiring an upgrade. Booting to the OS and then running the SPP (online mode) allows you to install any or all Hewlett Packard Enterprise related drivers and software packages.
Note Please refer to the HPE Service Pack for ProLiant Release Notes which are referenced above if any issues are encountered when installing the SPP Specifically review the sections Deployment Instructions and Components Changes
12 A reboot might be required following the SPP installation
To manually install the latest HPE-supported Brocade Emulex or QLogic driver kit from the HPE support website
1 From an Internet browser, go to hpe.com
2 Click on Support
3 Under Product Support click HPE Servers Storage and Networking
4 In the Enter product name or number box, enter your HBA model (for example, AP770A, AJ763A, AJ764A, CN1000Q, SN1000Q, CN1100E, SN1100E, or NC553m) and click Go
5 Click Get drivers software amp firmware
6 Select the Linux Server operating system version that is installed on the ProLiant system in which the HBA is installed
7 Expand Driver - Storage Fibre Channel, then click on the appropriate driver hyperlink (if more than one version of the driver is listed, verify the latest supported version listed in the latest HPE Data Agile BURA Compatibility Matrix)
8 Below the details for the driver, you should notice several tabs. Select the Release Notes tab, then scroll to verify if there are any Upgrade Requirements or to view additional information.
9 Select the Installation Instructions tab to verify how to install the driver. Be sure to copy or save the installation instructions.
10 Click on the Download tab to copy the file to your server
11 A reboot might be required following the driver installation
Installing the Linux OPEN-iSCSI module
You can install and use the iscsi-initiator-utils package (Red Hat®) or open-iscsi module (SUSE). Download and install either of the packages using your distribution's package manager (yum or YaST, for example). Detailed instructions for iscsiadm can be found in the iscsiadm man documents.
Prior to discovering available iSCSI target devices on an HPE Storage System from a Linux server, the target requires the Linux server iSCSI initiator name. This name is found in the /etc/iscsi/initiatorname.iscsi file.
Once the iSCSI initiator name has been determined, to discover available iSCSI target devices on an HPE Storage System, type the following command, where x.x.x.x is the IP address of the HPE Storage System:
iscsiadm --mode discovery --type sendtargets --portal x.x.x.x
To connect to the HPE Storage System targets, type the following command for each discovered target, where target_name is the name returned by the iscsiadm discovery:
iscsiadm --mode node -T target_name --login --portal x.x.x.x
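Putting the two commands together, a discovery-and-login session might look like the following sketch, where the portal address 192.0.2.10 and the target IQN are purely illustrative:

```shell
# Discover the targets presented by the portal (illustrative address)
iscsiadm --mode discovery --type sendtargets --portal 192.0.2.10

# Log in to one of the discovered targets (illustrative IQN)
iscsiadm --mode node -T iqn.2015-01.com.example:storage1 --login --portal 192.0.2.10

# Confirm that the session is active
iscsiadm --mode session
```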
Storage HBAs with Linux servers
Hewlett Packard Enterprise supports the Linux in-box driver (the driver supplied with the operating system distribution). However, not all hardware configurations support the in-box driver. To determine if your configuration supports the in-box driver, see the HBAs, CNAs, Flex Fabric Adapters, and Server LOMs Support Matrix (Linux, Citrix, VMware, and Windows), which is available on the HPE SPOCK website. You must sign up for an HPE Passport to access SPOCK.
Whether you are using the in-box drivers or the out-of-box drivers, Hewlett Packard Enterprise recommends that you install the HPE Fibre Channel Enablement Kit, because it provides additional libraries and configuration utilities to enable HPE Fibre Channel Storage Arrays to work properly with Linux.
Note: If you are using any HPE management applications, you need the HBA API libraries that come with the HPE-fc-enablement RPM.
Downloading and installing the Fibre Channel Enablement Kit for Linux
Note: There has been a change to the enablement kits released after 29 April 2014; they are now vendor-specific kits. Hewlett Packard Enterprise recommends that you uninstall any previous kits prior to installation of the latest version of the enablement kit.
1 From an Internet browser, go to hpe.com
2 Click on Support
3 Under Product Support click HPE Servers Storage and Networking
4 In the Enter product name or number box, enter your HBA model (for example, AP770A, AJ763A, AJ764A, CN1000Q, CN1100E, or NC553m) and click Go
5 Click Get drivers software amp firmware
6 Select the Linux Server operating system version of the ProLiant system in which the HBA is installed
7 Select the Software - Storage Controllers - FC HBA hyperlink and click on the HPE Fibre Channel Enablement Kit for Linux (American, International)
8 Below the details for the driver, you should notice several tabs. Select the Release Notes tab, then scroll to verify if there are any Upgrade Requirements or to view additional information.
9 Select the Installation Instructions tab to verify how to install the HPE Fibre Channel Enablement Kit for Linux. Be sure to copy or save the installation instructions.
10 Click on the Download tab to copy the RPM file to your server
11 Browse to the directory that you downloaded the RPM to
12 Follow the Installation Instructions that you copied or saved in step 9.
13 A reboot is required after the installation for the updates to take effect and hardware stability to be maintained.
14 Verify that the host has successfully discovered all the expected devices (tape drives, library robotic devices, and disk-based backup devices) using one of the following methods:
– Review the devices listed from running the command cat /proc/scsi/scsi
– Review the output from the sg_inq command (which requires that sg3_utils is installed) for any of the /dev/sg devices listed in the output of the sg_map command. See figure 3 as an example.
Figure 3 Verifying devices using sg_map and sg_inq commands
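The check shown in figure 3 can also be scripted. This sketch, which assumes sg3_utils is installed and at least one SG device exists, prints the vendor and product identification strings for every device reported by sg_map:

```shell
# For each /dev/sg device listed by sg_map, show the vendor and
# product identification strings from its SCSI inquiry data
for dev in $(sg_map | awk '{print $1}'); do
    echo "== $dev"
    sg_inq "$dev" | grep -E 'Vendor identification|Product identification'
done
```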
Installing the HPE StoreEver Tape advanced path failover drivers
Red Hat Enterprise Linux Server 6.2 (x86_64), 6.3 (x86_64), 6.4 (x86_64), 6.5 (x86_64), and 6.6 (x86_64)
The advanced path failover drivers for Linux replace the normal SCSI Tape and SCSI Generic drivers. The advanced path failover drivers for Linux pass all SCSI commands for devices that do not support advanced path failover through the same code path that is followed when the standard drivers are loaded, and route commands for devices that do support failover through the new PFO driver.
1 Go to hpe.com/support/storage
2 Select Tape Storage
3 Click Enterprise Class Tape Libraries (for ESL G3) or Tape Libraries (for MSL6480)
4 Click HPE StoreEver ESL G3 Tape Libraries or HPE StoreEver MSL6480 Tape Library
5 In the Download options tab click Get drivers software amp firmware
6 For the ESL G3, select your product. For the MSL6480, skip to the next step.
7 Under Operating systems select OS Independent
8 Expand Driver - Storage Tape, then select the appropriate driver hyperlink
9 Click Obtain software for the HPE StoreEver High Availability Failover Driver for your operating system
10 Click on Select. An HPE Passport account (a sign-in link is provided) is required.
11 After logging in using your HPE Passport, complete the required fields, then read and accept the software license agreement. Click Next.
12 On the following page select the Download tab for the HPE StoreEver High Availability Failover Driver for the version of Red Hat that is installed on your server
13 To install the drivers run the following command
rpm -ivh <filename>.rpm
14 In some cases, the server will need to be rebooted to complete the installation. Check the instructions provided by the RPM file output, and reboot the server if requested.
15 The driver revision number indicates the build date of the driver and can be viewed by running:
cat /proc/scsi/sg/version
16 You can view the status of a device that is controlled by the failover driver by reading a file in the sys file system. For example, to see the path status for /dev/sg3:
cat /sys/class/pfo/pfo3/paths
Enabling advanced path failover on a device while the driver is running
If a device has any advanced path failover feature disabled when advanced path failover is enabled, the device will reset itself, removing the old /dev file. When the device comes back up, it will be recognized as an advanced path failover device and will then operate normally as one. It may not have the same /dev file name as before the change.
Disabling advanced path failover on a device while the driver is running
Disabling advanced path failover while a device is running is not recommended, because the paths will not be cleanly removed and then reassociated. If advanced path failover is disabled on any device, the Linux server will need to be rebooted. When possible:
1 Power down the Linux server cleanly
2 Disable advanced path failover on the device
3 Boot the Linux server
For more detailed information related to installing the HPE StoreEver Tape advanced path failover drivers for Red Hat Linux Servers, refer to the HPE StoreEver Tape Libraries Failover User Guide, specifically the section titled Installing and using Linux advanced path failover drivers.
Additional SG device files
In most environments, the default number of SG device files is sufficient to support all of the required devices. In larger SAN environments, if the default number of SG device files is fewer than the combined total of disk and tape devices being allocated to the server, then additional device files need to be created. SG device files are preferable to the standard SCSI tape (ST) device files, due to SCSI timeout values that may not be sufficient in length to support some tape operations.
To create additional SG device files, perform the following:
mknod /dev/sgX c 21 X
X signifies the number of a device file that does not already exist. For additional command options, see the mknod man page.
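The mknod step can be scripted as a dry run that only prints the commands for the device files that are missing; the range 16-31 below is purely an example:

```shell
#!/bin/sh
# Print the mknod commands needed to create any missing SG device files
# in a given range (the SG character device major number is 21).
# Review the output, then run the printed commands as root.
print_missing_sg_nodes() {
    dir=$1; first=$2; last=$3
    X=$first
    while [ "$X" -le "$last" ]; do
        [ -e "$dir/sg$X" ] || echo "mknod $dir/sg$X c 21 $X"
        X=$((X + 1))
    done
}

print_missing_sg_nodes /dev 16 31
```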
Red Hat and SUSE Linux Server best practices
Rewind commands being issued by rebooted Linux hosts
Device discovery that occurs as part of a normal Linux server boot operation can cause a SCSI rewind command to be issued to all attached tape drives if the data protection and archiving software does not employ SCSI reserve/release and the rewind command is received while the tape drive is busy writing. The result is a corrupted tape header and an unusable piece of backup media.
This issue could manifest itself in several ways:
• A failed verify operation
• A failed restore operation
• The inability to mount a tape and read the tape header
If a backup verification is not completed, the normal backup process might not detect that an issue exists.
Tape devices not discovered and configured across server reboots
Tape drives can disappear from Linux servers after the host reboots when using older versions of the HBA drivers. Adding the line "hp_rescan -a" to /etc/rc.d/rc.local resolves the issue. The hp_rescan utility was previously included and installed with older versions of the HPE Linux FC/FCoE Driver Kit.
Note: The latest versions of the Fibre Channel Enablement for Linux kits no longer include the HPE fibreutils package, which contains the hp_rescan utility. The fibreutils package can manually be downloaded and installed by following steps 1-6 from the section Downloading and installing the Fibre Channel Enablement Kit for Linux. For step 5, select Software - Storage Controllers - FC HBA.
This issue, which affects Red Hat installations and, intermittently, some SUSE Linux installations, is understood to be an issue with the mid-layer SCSI driver and its interaction with SCSI-2 tape automation products. The permanent resolution to this issue is to upgrade to the latest FC driver kit.
Enable iSCSI target devices to remain persistent across system reboots
To enable the iSCSI target devices to remain persistent across system reboots, the open-iscsi service must be configured to run at system startup. This can be done by issuing the following command:
For Red Hat 7 and SUSE 12:
systemctl enable iscsid.service
systemctl restart iscsid.service
For earlier versions of Red Hat:
chkconfig iscsi on
For earlier versions of SUSE:
chkconfig open-iscsi on
To verify that this configuration change has been accepted, run the following command:
For Red Hat 7 and SUSE 12: systemctl -a | grep iscsi
iscsi-shutdown.service error inactive dead Logout off all iSCSI sessions on shutdown
iscsi.service loaded inactive dead Login and scanning of iSCSI devices
iscsid.service loaded active running Open-iSCSI
iscsiuio.service loaded active running iSCSI UserSpace I/O driver
iscsid.socket loaded active running Open-iSCSI iscsid Socket
iscsiuio.socket loaded active running Open-iSCSI iscsiuio Socket
For earlier versions of Red Hat: chkconfig --list iscsi
iscsi 0:off 1:off 2:off 3:on 4:off 5:on 6:off
For earlier versions of SUSE: chkconfig --list open-iscsi
open-iscsi 0:off 1:off 2:off 3:on 4:off 5:on 6:off
LUNs shifting after reboot
The Linux 2.6 kernel and later enhanced the management of attached devices through the introduction of udev. The udev device manager provides users with a persistent naming process for all devices across reboots. For details on how to configure udev, refer to the appropriate Linux distribution documentation.
If your data protection and archiving software requires persistent device mapping, use the software's device configuration wizard to ensure proper configuration.
Recommended changes to queue depth and timeout values
Changes to queue depth and timeout values are recommended when operating HPE StoreEver ESL G3 Tape Libraries and HPE StoreEver MSL Tape Libraries with Linux-based hosts. Recommended changes are as follows:
• Queue depth
The queue depth when operating the MSL6480 or MCB Version 1 ESL G3 libraries should be set to one (1), as there is only one active robot to complete Move Medium commands. With dual-robot MCB Version 2 ESL G3 libraries, the queue depth should be set to two (2), as the library has two active robots that can complete Move Medium commands.
HPE LTO drives are capable of handling command queues of four or five commands, but if hosts continue to send commands past that amount, the drive (or the library being hosted by that drive) will start to return status messages saying that the queue is full and the host should wait 500 ms. If the host doesn't stop sending commands at this point, the delays in returning status for commands can be long enough that the drive appears hung. As such, care should be taken to ensure that the queue depth is the correct length to avoid this scenario, preferably by using the recommended queue depths provided above.
With Linux-based hosts, this command lets you see what the queue depth is set to for each generic SCSI device:
find /sys/class/scsi_generic/*/device/queue_depth -exec grep -H . {} \;
If required, the sg_inq command can help correlate SCSI generic mapping to specific devices: sg_inq /dev/sg<n>
Note: <n> is the sg number provided in the output from the previous command.
• Timeouts
Most applications increase the timeout value for motion commands but will continue to rely on the default timeout value for Read Element Status commands. This can especially be of concern in larger partitioned libraries, where multiple Read Element Status commands to the different partitions are outstanding and the default timeout value is encountered before a response is provided. Depending on whether udev rules are in effect or not, the default timeout value on Linux-based hosts tends to be either thirty (30) or sixty (60) seconds. Given all of the above, it is recommended that the default timeout value be changed to twenty minutes for all Linux-based hosts operating HPE LTO drives in HPE StoreEver ESL G3 Tape Libraries and HPE StoreEver MSL Tape Libraries, in order to allow multiple commands to complete successfully without hitting the default timeout value.
With Linux-based hosts, this command lets you see what the default timeout value is currently set to:
find /sys/class/scsi_generic/*/device/timeout -exec grep -H . {} \;
If required, the sg_inq command can help correlate SCSI generic mapping to specific devices: sg_inq /dev/sg<n>
Note: <n> is the sg number provided in the output from the previous command.
The detailed procedure for making the recommended changes to the command queuing and default timeout values for Linux-based hosts can be viewed in the following Engineering Advisory: HPE StoreEver ESL G3 Tape Libraries, MSL Tape Libraries, and 1/8 G2 Autoloaders - Recommended Changes to Queue Depth and Timeout Values With Linux-Based Hosts.
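A minimal sketch of applying both recommendations with a shell function follows. The sysfs root is a parameter so the function can be exercised against a test directory tree; on a real host you would pass /sys/class/scsi_generic (as root) and should first narrow the glob to the sg devices that are actually tape drives. A queue depth of 1 suits the MSL6480 and single-robot ESL G3 (use 2 for a dual-robot ESL G3), and 1200 seconds is the recommended twenty-minute timeout:

```shell
#!/bin/sh
# Write the given queue depth and timeout to every sg device below the
# given sysfs root (root and values are parameters, for testability)
set_tape_tuning() {
    sysfs=$1; qdepth=$2; timeout=$3
    for dev in "$sysfs"/sg*/device; do
        [ -d "$dev" ] || continue
        echo "$qdepth" > "$dev/queue_depth"
        echo "$timeout" > "$dev/timeout"
    done
}

# Typical invocation, as root:
#   set_tape_tuning /sys/class/scsi_generic 1 1200
```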
HP-UX Server
Installing HBA drivers in the kernel
HP-UX 11i v2 (11.23, IA-64)
1 The drivers schgr, sctl, and stape must all be installed in the kernel. To see if these drivers are installed, enter the following command:
/usr/sbin/kcmodule schgr sctl stape
The following example shows output from kcmodule where the stape driver is not installed
Module State Cause schgr static explicit sctl static depend stape unused
If one or more of the above drivers is in the unused state they must be installed in the kernel If they are all installed (static state) proceed to the next section Final host configurations
2 Use kcmodule to install modules in the kernel For example to install the stape module run the following command
usrsbinkcmodule stape=static
Enter yes to back up the current kernel configuration file and initiate the new kernel build
3 Reboot the server to activate the new kernel
cd
usrbinshutdown -r now
HPE-UX 11i v3 (11.31, IA-64)
1. The drivers schgr, sctl, stape, eschgr, esctl, and estape must all be installed in the kernel. To see if these drivers are installed, enter the following command:
/usr/sbin/kcmodule sctl esctl schgr eschgr stape estape
The following example shows output from kcmodule where the stape driver is not installed:
Module State  Cause
esctl  static best
sctl   static depend
schgr  static best
eschgr static best
stape  unused
estape static best
If one or more of the above drivers is in the unused state, they must be installed in the kernel. If they are all installed (static state), proceed to the next section, Final host configurations.
2. Use kcmodule to install modules in the kernel. For example, to install the stape module, use the following command:
/usr/sbin/kcmodule stape=static
Enter yes to back up the current kernel configuration file and initiate the new kernel build.
3. Reboot the server to activate the new kernel:
cd /
/usr/bin/shutdown -r now
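The kcmodule check in step 1 can be automated. The helper below is hypothetical (not an HPE utility): it parses kcmodule-style output and prints any modules still in the unused state, i.e. those that need kcmodule <module>=static.

```shell
# Sketch: list modules reported as "unused" by kcmodule.
# Reads kcmodule output ("Module State Cause" columns) on stdin and
# prints the names of modules that still need to be installed.
unused_modules() {
    awk 'NR > 1 && $2 == "unused" { print $1 }'
}
```

Example: `/usr/sbin/kcmodule schgr sctl stape | unused_modules` would print only the drivers requiring installation.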
Installing the HPE-UX iSCSI Software Initiator
The iSCSI Software Initiator is located at the HPE Software Depot.
1. Go to software.hpe.com.
2. Enter iSCSI Software Initiator in the Search Software Depot box located on the upper right side of the website.
3. When the search results show iSCSI Software Initiator, click Select. An HPE Passport account (a sign-in link is provided) is required.
4. After logging in using your HPE Passport, complete the required fields, scroll down, then read and accept the software license agreement for the order. Click Next.
5. Under Documentation, click the Download tab for the Installation Instructions to download instructions for using the Software Distributor tool to install the iSCSI Software Initiator.
6. Under Software, click the Download tab for the iSCSI Software Initiator version that you would like to download.
7. After installing the iSCSI Software Initiator and rebooting, you can verify that the installation was successful by running the following command:
swlist iSCSI-00
If the HPE-UX iSCSI Software Initiator is installed correctly, the output will be similar to the following.
HPE-UX 11.23:
# Initializing...
# Contacting target "localhost"...
# Target: localhost
iSCSI-00 B.11.23.03e HPE-UX iSCSI Software Initiator
iSCSI-00.ISCSI-SWD B.11.23.03e HPE-UX iSCSI Software Initiator
HPE-UX 11.31:
# Initializing...
# Contacting target "localhost"...
# Target: localhost
iSCSI-00 B.11.31.01 HPE-UX iSCSI Software Initiator
iSCSI-00.ISCSI-SWD B.11.31.01 HPE-UX iSCSI Software Initiator
Final host configurations
1. Run ioscan to verify that the host detects the tape devices:
ioscan
For HPE-UX 11.23 legacy device special files (DSFs) or persistent DSFs, run the following commands:
ioscan -fnkC tape
ioscan -fnkC autoch
2. For HPE-UX 11.31 persistent DSFs, run the following commands:
ioscan -fnNkC tape
ioscan -fnNkC autoch
Note: Some data protection and archiving software might not currently support HPE-UX 11.31 persistent DSFs for tape. Review your data protection and archiving software documentation for more information.
3. To verify that the host detects iSCSI devices, issue the ioscan command as follows for HPE-UX 11.23:
ioscan -fnC iscsi
Issue the ioscan command as follows for HPE-UX 11.31:
ioscan -fnNC iscsi
If the software is installed correctly, the generated output will look similar to this:
Class I  HW Path  Driver  SW State  HW Type  Description
=====================================================================
iscsi 0  255/0    iscsi   CLAIMED   VIRTBUS  iSCSI Virtual Node
4. If no device files have been installed, enter the following commands:
insf -C tape -e
insf -C autoch -e
The command-line tool for configuring the HPE-UX iSCSI Software Initiator is iscsiutil. Detailed instructions for iscsiutil can be found in the iscsiutil man pages. If using iscsiutil to configure the HPE-UX iSCSI Software Initiator, add the path for iscsiutil and other iSCSI executables to the root path:
PATH=$PATH:/opt/iscsi/bin
You should now be able to find the iSCSI initiator node for the HPE-UX host:
iscsiutil -l
To discover available iSCSI target devices on an HPE Storage System, type the following command, where xxxx is the IP address of the HPE Storage System:
iscsiutil -a -I xxxx
Installing the HPE StoreEver Tape advanced path failover drivers for HPE-UX 11.31
Advanced path failover for HPE-UX is implemented by updating HPE-UX drivers to support advanced path failover with the LTO-6 tape drives. The drivers function as both failover and non-failover drivers.
The updated drivers are:
• HPE-UX tape driver (estape), used for data path failover
• HPE-UX media changer driver (eschgr), used for control path failover
• HPE-UX SCSI stack driver (esctl), used for data path and control path failover
To download and install the HPE StoreEver Tape advanced path failover drivers for HPE-UX 11.31:
1. Get the latest HPE-UX patches from h20566.www2.hpe.com/portal/site/hpsc/patch/home.
Note: To access and download HPE-UX patches you must have:
1. An HPE Passport account (a sign-in link is provided).
2. An active HPE support agreement linked to your HPE Support Center profile. The active Hewlett Packard Enterprise support agreement must:
– Cover the specific HPE Operating Systems linked to your HPE Support Center user profile
– Include software updates or previous version support privileges
Links are provided to view your current patch access privileges or to contact Hewlett Packard Enterprise.
2. To locate the patches, search for estape, eschgr, and esctl, or the patch number, and then check whether a superseding patch is listed.
3. To install the advanced path failover drivers, use the standard HPE-UX kernel patch installation process to install the following patches on HPE-UX Servers running HPE-UX 11.31:
– HPE-UX tape driver patch (estape): PHKL_43680 or superseding patch
– HPE-UX media changer driver patch (eschgr): PHKL_43681 or superseding patch
– HPE-UX SCSI stack (mass storage stack) driver patch (esctl): PHKL_43819 or superseding patch
4. The server will automatically reboot as part of the installation process.
You can use ioscan to view the tape and library (media changer) devices connected to the HPE-UX Server. The device special file (DSF) is listed as the last item in the description, as shown in bold type:
ioscan -knNfC tape
ioscan -knNfC autoch
Finding the lockdown path
The load-balance policy used to route data on multiple paths to a tape drive or library is called the "path-lockdown" policy. Use the scsimgr get_info command to see the current lockdown path for a library. For example:
scsimgr get_info -D /dev/rchgr/autoch35
STATUS INFORMATION FOR LUN /dev/rchgr/autoch38
...
LUN path used when policy is path_lockdown = 0/0/0/9/0/0/1.0x50014382c6e4f009.0x1000000000000
scsimgr get_attr -D /dev/rtape/tape28_BEST
SCSI ATTRIBUTES FOR LUN /dev/rtape/tape28_BEST
name = lpt_lockdown
current = 0/0/0/9/0/0/0.0x100000e00222a6c1.0x2000000000000
default =
saved =
For additional information, see the HPE-UX man pages: scsimgr(1M), ioscan(1M), mknod(2), mksf(1M), rmsf(1M).
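If the lockdown path needs to be captured in a script, it can be pulled from the scsimgr get_info output. A minimal sketch, assuming the output format shown above:

```shell
# Sketch: extract the current lockdown path from `scsimgr get_info`
# output, i.e. the value after "LUN path used when policy is path_lockdown =".
# Reads the scsimgr output on stdin.
lockdown_path() {
    sed -n 's/.*path_lockdown = //p'
}
```

Example: `scsimgr get_info -D /dev/rchgr/autoch35 | lockdown_path`.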
Troubleshooting advanced path failover for HPE-UX 11.31
Advanced path failover errors are logged in the /var/adm/syslog/syslog.log file as part of the default SCSI I/O tracing function of HPE-UX. You can use standard file viewing commands, including cat, vi, dmesg, and tail -f, to view syslog.log.
Enabling or disabling advanced path failover for HPE-UX 11.31
Advanced path failover is disabled by default. When advanced path failover is disabled, the driver operates as if the device is not capable of using the advanced path failover feature.
When advanced failover is enabled for the library or tape drive the device resets itself and must be opened using the device special file before the driver will recognize it as an advanced path failover device and use the failover features of the driver Opening the device is generally done by the host applications
You can enable or disable advanced path failover using the library web-based interface For instructions refer to the following sections of the HPE StoreEver Tape Libraries Failover User Guide
• Enabling control path failover, under Configuring failover for the HPE StoreEver ESL G3 Tape Libraries
• Enabling data path failover, under Configuring failover for HPE StoreEver MSL6480 Tape Libraries
When advanced path failover is disabled, the passive control paths to the library will go into an error state (NO_HW) in the ioscan (1M) command output. These stale entries do not affect the function of the library. To clear these errors so the device can be accessed using its DSF:
1. On the HPE-UX host, run rmsf -H on the lunpath hardware paths that are in the NO_HW state. For example:
rmsf -H 0/4/0/0/0/1.0x50014380023560d4.0x1000000000000
2. Run ioscan -kfNH <HBA path>. For example:
ioscan -kfNH 0/4/0/0/0
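The two-step cleanup can be sketched as a dry run that scans ioscan output for stale lunpaths and prints the corresponding rmsf commands. This is an illustration: column positions assume the usual Class/Instance/HW-Path/Driver/SW-State field order, and nothing is executed.

```shell
# Sketch: from `ioscan -fnN` style output, emit an `rmsf -H` command for
# every lunpath whose S/W State is NO_HW. Dry run: commands are printed,
# not executed. Reads the ioscan output on stdin.
stale_lunpath_cleanup() {
    awk '$1 == "lunpath" && $5 == "NO_HW" { print "rmsf -H " $3 }'
}
```

Review the printed commands before running them as root.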
Hewlett Packard Enterprise recommends enabling or disabling advanced path failover only when the library is not opened by any applications. If advanced path failover is disabled while an application is accessing the library, all of the library's lunpaths will go offline and I/O requests to the library will fail.
For more detailed information related to installing the HPE StoreEver Tape advanced path failover drivers for HPE-UX 11.31 Servers, refer to the HPE StoreEver Tape Libraries Failover User Guide, specifically the section titled Installing and using HPE-UX advanced path failover drivers.
HPE-UX Server best practices
HPE-UX 11.31 can experience poor I/O performance on VxFS file systems due to memory blocking during high system memory usage. The HPE-UX 11.31 kernel subsystems and file I/O data cache can consume up to 90 percent of system memory during normal operation. When a heavy file I/O application, such as data protection and archiving software, starts, memory usage can approach 100 percent. Under such conditions, if VxFS attempts to allocate additional memory for inode caching, the result can be memory blocking and subsequent poor file I/O performance. In extreme conditions, this scenario can cause data protection and archiving software to time out during file system reads, which could result in backup job failures.
Poor I/O performance resolution
To avoid backup job failures due to memory blocking, modify the kernel tunable parameter vx_ninode. The vx_ninode parameter determines the number of inodes in the inode table, which VxFS uses for caching. By default, the size of the inode cache is decided (auto-tuned) at boot time by VxFS depending on the amount of physical memory in the machine. When modifying the value of vx_ninode, HPE recommends the following:
Table 4 Tuning vx_ninode
PHYSICAL MEMORY OR KERNEL AVAILABLE MEMORY VXFS INODE CACHE (NUMBER OF INODES)
1 GB 16384
2 GB 32768
3 GB 65536
> 3 GB 131072
To determine the current value of vx_ninode, run the following at the shell prompt:
/usr/sbin/kctune vx_ninode
To set vx_ninode to 32768, run the following command at the shell prompt:
/usr/sbin/kctune vx_ninode=32768
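Table 4's mapping can be expressed as a small helper that prints the recommended kctune command for a given amount of physical memory. A sketch only; the thresholds mirror Table 4:

```shell
# Sketch: print the recommended vx_ninode kctune command for a given
# amount of physical memory in GB, per Table 4.
vx_ninode_for() {
    mem_gb=$1
    if   [ "$mem_gb" -le 1 ]; then inodes=16384
    elif [ "$mem_gb" -le 2 ]; then inodes=32768
    elif [ "$mem_gb" -le 3 ]; then inodes=65536
    else                           inodes=131072
    fi
    echo "/usr/sbin/kctune vx_ninode=$inodes"
}
```

For example, `vx_ninode_for 2` prints the command for a 2 GB host; run the printed command as root on the HPE-UX Server.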
Note: The kernel tunable parameters filecache_min and filecache_max control the amount of physical memory that can be used for caching file data during system I/O operations. By default, these parameters are automatically determined by the system to better balance memory usage among file system I/O intensive processes and other types of processes. The values of these parameters can be lowered to allow a larger percentage of memory to be used for purposes other than file system I/O caching. Whether to modify these parameters depends on the nature of the applications running on the system.
HPE-UX 11.23: Disabling rewind-on-close devices with st_san_safe
Turning on the HPE-UX 11.23 kernel tunable parameter st_san_safe disables tape DSFs that are rewind-on-close. This prevents utilities such as mt from rewinding a tape that is in use by another utility.
Some applications or utilities require rewind-on-close DSFs (for example, the frecover utility that comes with HPE-UX). In this case, disabling rewind-on-close devices renders the utility unusable. Most data protection and archiving software, such as HPE Data Protector, can be configured to use SCSI reserve/release, which protects them from rogue rewinds by other utilities. The requirements of your data protection and archiving environment should be considered when determining whether or not to enable st_san_safe.
To determine if rewind-on-close devices are currently disabled, enter:
/usr/sbin/kctune st_san_safe
If the value of st_san_safe is 1, then rewind-on-close devices are disabled. If the value is 0, then rewind-on-close devices are enabled. To disable rewind-on-close devices, enter:
/usr/sbin/kctune st_san_safe=1
Oracle Solaris Server
Solaris 10 Update 11 (SPARC), Solaris 10 Update 11 (x86/x64), Solaris 11.2 (SPARC), Solaris 11.2 (x64)
How to enable the iSCSI Software Initiator
1. For Solaris 10 Update 11 (SPARC) and 11.2 (SPARC), enable the iSCSI services using the command:
svcadm enable network/iscsi/initiator
2. Verify the iSCSI services are running:
svcs -a | grep 'iscsi/initiator'
online 10:10:28 svc:/network/iscsi/initiator:default
3. For earlier versions of Solaris, enable the iSCSI services using the command:
svcadm -v enable iscsi_initiator
svc:/network/iscsi_initiator:default enabled.
4. Verify the iSCSI services are running:
svcs -a | grep iscsi_initiator
online 10:10:28 svc:/network/iscsi_initiator:default
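The verify step can be folded into a helper that only suggests enabling the service when it is not already online. A dry-run sketch: it reads svcs output on stdin and prints the svcadm command if needed; the service FMRI matches the Solaris 11 naming shown above.

```shell
# Sketch: if the iSCSI initiator service is not online, print the svcadm
# command that would enable it. Dry run: nothing is executed.
# Reads `svcs -a` output on stdin.
iscsi_enable_cmd() {
    if ! grep 'network/iscsi/initiator' | grep -q '^online'; then
        echo "svcadm enable network/iscsi/initiator"
    fi
}
```

Example: `svcs -a | iscsi_enable_cmd` prints nothing when the service is already running.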
The command-line tool for configuring the Solaris iSCSI Software Initiator is iscsiadm. Detailed instructions for iscsiadm can be found in the iscsiadm man pages. If using iscsiadm to configure the Solaris iSCSI Software Initiator, run the following command to find the iSCSI initiator node for the Solaris host:
iscsiadm list initiator-node
To discover available iSCSI target devices on an HPE Storage System, type the following commands, where xxxx is the IP address of the HPE Storage System:
iscsiadm add discovery-address xxxx:3260
iscsiadm list discovery-address
Discovery Address: xxxx:3260
iscsiadm modify discovery -t enable
iscsiadm list discovery
List the configured iSCSI target devices using the following command:
iscsiadm list target
Oracle Solaris Server best practices
Troubleshooting with the cfgadm utility
• Getting the status of FC devices using cfgadm:
cfgadm -al
Example output for the above command:
This output shows a media changer at LUN 0 for the 100000e0022229fa9 WWN, and tape and disk devices at LUN 0 for other WWNs. The devices are connected, have been configured, and are ready for use.
The cfgadm -al -o show_FCP_dev command can be used to show the devices for all LUNs of each Ap_Id.
• Fixing a device with an "unusable" condition
If the condition field of a device in the cfgadm output is "unusable", then the device is in a state in which the server cannot use it. This may have been caused by a hardware issue. In this case, do the following to resolve the issue:
– Resolve the hardware issue so the device is available to the server.
– After the hardware issue has been resolved, use the cfgadm utility to verify device status and to mend the status if necessary.
• Use cfgadm to get device status:
cfgadm -al
– For a device that is "unusable", use cfgadm to unconfigure the device and then re-configure the device. For example (this is an example only; your device WWN will be different):
cfgadm -c unconfigure c4::100000e0022286ec
cfgadm -f -c configure c4::100000e0022286ec
– Use cfgadm again to verify that the condition of the device is no longer "unusable":
cfgadm -al
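Finding every "unusable" device in a large fabric by eye is tedious. The helper below is hypothetical (not an Oracle tool): it filters cfgadm -al output down to the Ap_Ids that need the unconfigure/configure cycle.

```shell
# Sketch: from `cfgadm -al` output, list the Ap_Ids whose condition
# column (the last field) reads "unusable". Reads the output on stdin.
unusable_apids() {
    awk '$NF == "unusable" { print $1 }'
}
```

Example: `cfgadm -al | unusable_apids` prints one Ap_Id per line, ready to feed to cfgadm -c unconfigure.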
IBM AIX Server
AIX 6.1 (TL9), AIX 7.1 (TL3)
Installing HBA device driver
Install the IBM (5729, 5735, 5273, 5758, 5759, 5773, 5774) HBA and restart the server.
1. Ensure that the HBA is recognized. At the shell prompt, type:
lsdev -Cc adapter
There is a line in the output similar to the following:
fcs0 Available 1D-08 FC Adapter
If the adapter is not recognized, check that the correct HBA fileset (driver) is installed:
6228: lslpp -L | grep devices.pci.df1000f7
6239: lslpp -L | grep devices.pci.df1080f9
5716: lslpp -L | grep devices.pci.df1000fa
5759: lslpp -L | grep devices.pci.df1000fd
5773: lslpp -L | grep devices.pciex.df1000fe
5774: lslpp -L | grep devices.pciex.df1000fe
There are lines in the output from lslpp similar to the following for a 6239 HBA:
devices.pci.df1080f9.diag 5.1.0.1 C F PCI-X FC Adapter Device
devices.pci.df1080f9.rte 5.1.0.1 C F PCI-X FC Adapter Device
2. For information about the HBA, such as the WWN, execute the following command:
lscfg -vl fcs0
The output will look similar to the following:
3. To see the version of microcode (firmware) being run by the HBA, use the following command:
lsmcode -c -d <device>
Microcode and other updates can be found at ibm.com/support/fixcentral.
4. After the HBA has successfully logged into the SAN fabric and the necessary zoning is configured, configure the HBA and devices within the fabric. At the prompt, type:
cfgmgr -l <devicename> -v
Note: Running cfgmgr without a -l argument may generate a "devices.fcp.changer" error. This is a result of the cfgmgr device scan receiving a response from the auto-changer device, for which AIX does not have a specific driver. Under these conditions, the error message does not indicate a problem and is for information only.
5. Within the command, <devicename> is the name from the output of the lsdev command in step 1, such as fcs0.
6. To ensure all tape device files are available, at the prompt type:
lsdev -HCc tape
7. By default, AIX creates tape devices with a fixed block length. To change the devices to have variable block lengths, at the prompt type:
chdev -l <tapedevice> -a block_size=0
8. Configuration of the tape devices (where tape devices are rmt0, rmt1, and so on) is complete.
Note: HPE LTO tape drives use the IBM host tape driver. When properly configured, a device listing will show the tape device as follows.
For IBM native HBAs: Other FC SCSI Tape Drive
For non-IBM native HBAs: Other SCSI Tape Drive
9. To configure Fast I/O Failure for Fibre Channel devices after link events in the SAN, change the fast fail parameter as in the example below:
chdev -l fscsiN -a fc_err_recov=fast_fail
Within the command, the N in fscsiN is the same number as in the name from the output of the lsdev command in step 1 (e.g., fcs0 would be fscsi0).
IBM AIX Server best practices
Persistent binding
To prevent device shifting after a host reboot, you can enable the dynamic tracking feature in earlier versions of AIX; AIX 7 and above should have this parameter enabled by default.
To enable dynamic tracking of FC devices, set this attribute to dyntrk=yes, as shown in the example:
chdev -l fscsiN -a dyntrk=yes
Within the command, the N in fscsiN is the same number as in the name from the output of the lsdev command in step 1 (e.g., fcs0 would be fscsi0).
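Steps 7 and 9 above, together with dynamic tracking, are commonly applied to every tape and fscsi device on a host. A dry-run sketch that prints the chdev commands it would issue; device names are passed in rather than discovered with lsdev, so the logic can be exercised on any system, and nothing is executed:

```shell
# Sketch: print the chdev commands that would configure variable block
# length on each tape device, and fast_fail + dyntrk on each fscsi device.
# Dry run only; pass in names normally discovered with lsdev.
aix_tape_chdev_plan() {
    for rmt in "$@"; do
        echo "chdev -l $rmt -a block_size=0"
    done
}
aix_fscsi_chdev_plan() {
    for fscsi in "$@"; do
        echo "chdev -l $fscsi -a fc_err_recov=fast_fail -a dyntrk=yes"
    done
}
```

Example: `aix_tape_chdev_plan rmt0 rmt1` and `aix_fscsi_chdev_plan fscsi0` print the commands to review before running them on the AIX host.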
Note: For an IBM Virtual I/O Server (VIOS) running AIX logical partitions (LPARs), when using N-Port ID Virtualization (NPIV) with AIX LPARs, it is strongly recommended to upgrade VIOS to version 2.2.3.4 or greater.
Virtual machine support
VM software is used for partitioning, consolidating, and managing computing resources, allowing multiple unmodified operating systems and their applications to run in VMs that share physical resources. Each VM represents a complete system with processors, memory, networking, storage, and BIOS. See table 5 for tape and disk support for virtualization products.
Table 5. VM tape/VTL/NAS support
VM Product StoreEver Direct Attached SCSI
StoreEver Direct Attached SAS
StoreEver FC & FCoE SAN StoreOnce VTL
StoreOnce iSCSI VTL
StoreOnce Catalyst over Ethernet (CoE)
StoreOnce Catalyst over Fibre Channel (CoFC)
StoreOnce NAS
Support Notes
Citrix XenServer Host No No No support statement for tape at this time
Citrix XenServer Guest VM
No Yes Yes No Yes For iSCSI tape devices the iSCSI Software Initiator must run in the VM operating system
D2D SAN shares must be accessed directly in the VM operating system not attached through the hypervisor
HPEVM Host Yes No Yes Yes Yes No Yes Tape drivemedia changer must not be attached to a guest VM while being used by the host
HPEVM Guest VM Yes No Yes Yes Yes No Yes Tape drivemedia changer must only be attached to a single guest VM at a time
Hyper-V Host Yes Yes Yes Yes Yes No Yes
Hyper-V Guest VM No No No Yes Yes No Yes For iSCSI tapemedia changer devices the iSCSI Software Initiator must run in the VM operating system
D2D NAS shares must be accessed directly in the VM operating system not attached through the hypervisor
VMware ESX Host Yes No No7 No7 No No No Direct attached SCSI tape drive is multi-target only (no multi-LUN) and must not be attached to a VM while being used by the host HPE does not test or validate direct attached SCSI tape on VMware ESXESXi hosts and does not provide support for tape drives and tape libraries in such configurations8
7. SAN tape devices (FC and iSCSI) are not supported directly by the VMware ESX host. Use vStorage API for Data Protection with a backup server and VM software snapshots to allow FC and iSCSI backups.
8. For ESX 4.1 Server tape support, see the ESX 4.1 Fibre Channel SAN Configuration Guide. For ESX 5.0 Server tape support, see the ESXi 5.0 vSphere Storage Guide. For ESX 5.1 Server tape support, see the ESXi 5.1 vSphere Storage Guide. For ESX 5.5 Server tape support, see the ESXi 5.5 vSphere Storage Guide.
Table 5. VM tape/VTL/NAS support (continued)
VMware Guest VM Yes No No Yes Yes No9 Yes Direct attached SCSI tape drive is multi-target only (no multi-LUN) and must only be attached to a single VM at a time
For iSCSI tapemedia changer devices the iSCSI Software Initiator must run in the VM operating system
D2D NAS shares must be accessed directly in the VM operating system not attached through the hypervisor
HPE does not test or validate direct attached SCSI tape on VMware guest VMs and does not provide support for tape drives and tape libraries in such configurations
VMware vStorage API for Data Protection
Yes Yes Yes Yes Yes No Yes FC SANs and shared tape devices are limited to a physical backup server
Note: Be sure to do the following:
• Refer to your data protection and archiving software documentation for supported VM backup methods.
• Refer to the VM documentation for supported backup devices.
VMware Server
Note: VMware does not support ESXi SAN-attached tape devices. VMware vStorage APIs for Data Protection (VADP) with an off-host backup server can be used to manage SAN devices.
• VADP offloads backup responsibility from ESXi hosts to a dedicated backup server or servers, reducing the load on ESXi hosts. VADP provides full-image backup and restore capabilities for all VMs, and file-based backups for Microsoft Windows and Linux VMs.
• VMs can also be set up as backup servers with locally mapped NAS shares or iSCSI backup devices. Refer to your data protection and archiving software documentation for VM support.
• VMs can also be set up for LAN backup, the same as a regular client. Refer to your data protection and archiving software documentation for details.
• For recommendations on VMware VM backup and recovery to HPE StoreOnce Backup, go to hpe.com/storage/BURACompatibility, scroll down to Data Agile BURA Solution White Papers, then click on the VMware hyperlink across from White Papers - Databases and Virtual Machines to view the associated white papers.
9 Yes when using HPE StoreOnce Recovery Manager Central (RMC) only
HPE Integrity Virtual Machines
Hewlett Packard Enterprise supports, certifies, and sells HPE Integrity Virtual Machines (HPEVM) virtualization software on HPE Integrity servers.
HPEVM is an application installed on an HPE-UX Server that allows multiple unmodified operating systems (HPE-UX, Windows, and Linux) and their applications to run in VMs that share physical resources.
The HPE Virtual Server Environment (VSE) for HPE Integrity provides an automated infrastructure that can adapt in seconds with mission-critical reliability. HPE VSE allows you to optimize server utilization in real time by creating virtual servers that can automatically grow and shrink based on business priorities and service.
Note: The HPE Integrity VM host and VMs do support FC SAN-connected tape, Virtual Library Systems (VLS) devices, and HPE StoreOnce backup systems.
• Off-host backups using HPE storage array hardware mirroring or snapshots can be used to shorten backup windows and off-load resources required for backup.
• VMs can also be set up for LAN backup, the same as a regular client or media host. Refer to your data protection and archiving software documentation for details.
Microsoft Hyper-V
Note: Hewlett Packard Enterprise does not test or support Hyper-V VMs with SAN or direct-attach tape drives. The Hyper-V host or a backup server can be used to manage such devices.
• The volume shadow copy service (VSS) Hyper-V writer can be used to quiesce Windows VMs and create a snapshot on the Hyper-V host volume. VMs that cannot be quiesced can be placed in the Saved state before snapshot creation. The snapshots are then used for image or file backup of the VMs. If a VM was placed in the Saved state, Hyper-V will return the VM to its original state. Review your data protection and archiving software documentation for details.
• VMs can also be set up for LAN backup, the same as a regular client. Refer to your data protection and archiving software documentation for details.
• VMs can also be set up as backup servers with locally mapped NAS shares or iSCSI backup devices. Refer to your data protection and archiving software documentation for VM support.
copy Copyright 2015 Hewlett Packard Enterprise Development LP The information contained herein is subject to change without notice The only warranties for Hewlett Packard Enterprise products and services are set forth in the express warranty statements accompanying such products and services Nothing herein should be construed as constituting an additional warranty Hewlett Packard Enterprise shall not be liable for technical or editorial errors or omissions contained herein
Citrix is a registered trademark of Citrix Systems Inc andor one more of its subsidiaries and may be registered in the United States Patent and Trademark Office and in other countries Linux is the registered trademark of Linus Torvalds in the US and other countries McAfee is a trademark or registered trademark of McAfee Inc in the United States and other countries Microsoft Windows and Windows Server are either registered trademarks or trademarks of Microsoft Corporation in the United States andor other countries Oracle is a registered trademark of Oracle andor its affiliates Red Hat is a registered trademark of Red Hat Inc in the United States and other countries SAP HANA is the trademark or registered trademark of SAP SE in Germany and in several other countries VMware is a registered trademark or trademark of VMware Inc in the United States andor other jurisdictions
4AA5-7983ENW December 2015
Installing backup software and patches
After all components on the SAN are logged in and configured, the system is ready for the installation of any supported backup software. Refer to the installation guide for your particular software package, or contact the vendor for detailed installation procedures and requirements. After installing the backup software, check with the software vendor for the latest updates and patches. If any updates or patches exist for your backup software, install them now.
Learn more at hpe.com/storage/StoreEver
• StoreEver Ultrium tape drives with an 8 Gb connection need the fill word set to arb(ff). With 4 Gb connections, set the fill word to idle. Refer to vendor documentation for your switch regarding commands to set the fill word for a single port. For a Brocade switch running Fabric OS version 6 or newer, the following command can be used to verify the configuration, including the fill word, for port 27:
portcfgshow 27
To set the fill word for port 27 to arb(ff) using the numeric mode notation, run the following command:
portcfgfillword 27 3
To set the fill word for port 27 to idle using the numeric mode notation run the following command
portcfgfillword 27 0
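The link-speed rule can be captured in one helper that prints the appropriate portcfgfillword command. A sketch only: 8 Gb links get numeric mode 3 per the footnote below, 4 Gb links get idle (mode 0), and the command is printed rather than run on a switch.

```shell
# Sketch: print the portcfgfillword command for a port based on link
# speed. 8 Gb links get arb(ff)-style mode 3; 4 Gb links get idle (mode 0).
# Dry run: the command is printed, not executed on the switch.
fillword_cmd() {
    port=$1
    speed_gb=$2
    if [ "$speed_gb" -ge 8 ]; then mode=3; else mode=0; fi
    echo "portcfgfillword $port $mode"
}
```

Example: `fillword_cmd 27 8` prints the command to paste into the Brocade switch CLI.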
Refer to the section titled Hardware-specific requirements for basic failover in the HPE StoreEver Tape Libraries Failover User Guide for Brocade and Cisco firmware recommendations
• The drive port FC topology must be in Fabric mode, and the switch side must be set to F-port or Auto Sense.
• The host FC port must have a physical path to both the first port and secondary (passive) port on the FC drive.
• For basic data path failover with port zoning, the host FC port and both FC ports on the drive need to be within the same zone for failover to work.
• For basic data path failover with World Wide Port Name (WWPN) zoning, the host FC WWPN and a single FC port on the drive need to be in the zone.
• For basic control path failover with port zoning, the host FC ports and the FC ports on both the active and secondary drive chosen for basic control path failover need to be in the same zone.
• For basic control path failover with WWPN zoning, the host FC WWPN and the basic control path failover WWPN assigned to the library must be in the same zone. The library WWPN is not the same as the WWPN of the drive that is hosting the library.
• Hosts connecting to the library may need to be rebooted if the operating system does not support dynamic device detection.
• Applications on hosts may need to be reconfigured to recognize the new library World Wide Name (WWN).
Prerequisites for using LTO-7 failover or advanced data and control path failover
• For LTO-7 failover when using two FC Host Bus Adapters in a server, both FC HBAs must be of the same manufacturer. The LTO-7 failover driver does not work correctly if the HBAs are different.
• For advanced data path failover and LTO-7 failover, the host must have a physical path to both the first port and secondary port on the FC drive. For full failover capabilities, the two drive FC ports should be connected to different switches, and the host FC ports should also be connected to the same two switches.
• All drive ports must be zoned in the respective switches.
For detailed information on using failover with HPE StoreEver Tape refer to the HPE StoreEver Tape Libraries Failover User Guide
5. Numeric mode 3 attempts hardware arbff-arbff (mode 1) first. If the attempt fails to go into an active state, this command executes software idle-arb (mode 2). Mode 3 is preferable to modes 1 and 2, as it captures more cases.
Native backup commands
Native backup commands (see table 2) are limited in their ability to handle complicated backups and restores in multi-host SANs. They are not guaranteed to provide robust error handling or to increase performance throughput. Use of these commands in a user-developed script is not recommended with HPE StoreEver Tape Libraries, or with HPE StoreOnce and HPE StoreAll disk-based backup solutions, in a shared storage environment.
Caution: Native backup commands do not support SCSI reserve/release; therefore, using backup commands or scripts during backup or restore operations could result in data loss in an environment where the devices used for backups are shared.
Table 2. Supported native commands

SUPPORTED UTILITIES    HPE-UX   SOLARIS   AIX   LINUX   WINDOWS
Tape drive commands
tar                    Yes      Yes       Yes   Yes     No
dd (dump)              Yes      Yes       Yes   Yes     No
pax                    Yes      Yes       Yes   Yes     No
mt                     Yes      Yes       Yes   Yes     No
make_tape_recovery     Yes      No        No    No      No
Library and auto-changer commands
mc                     Yes      No        No    No      No
mtx                    No       No        No    Yes     No
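As a hedged illustration of why these commands are discouraged on shared devices, the sketch below shows a minimal tar write/verify cycle. The device name /dev/st0 and all paths are examples; here the TAPE variable defaults to a plain file so the commands can be exercised without a tape drive attached.

```shell
# Minimal native tar backup and verify cycle (illustrative paths).
# On a real, unshared tape drive, set TAPE=/dev/st0 (or /dev/nst0).
TAPE=${TAPE:-/tmp/native_demo.tar}
mkdir -p /tmp/native_demo_data
echo "payload" > /tmp/native_demo_data/file1

# On a real drive, position the tape first (tape devices only):
# mt -f /dev/st0 rewind

tar -cf "$TAPE" -C /tmp native_demo_data   # write the archive
tar -tf "$TAPE"                            # verify by listing contents
```

Because there is no reserve/release protection here, nothing stops another host from issuing a rewind mid-write; this is exactly the data-loss scenario the caution above describes.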
Linear Tape File System
The Linear Tape File System (LTFS) makes tape self-describing, file-based, and easy to use, allowing users to apply standard file operations to access, manage, and share files on tape through an interface that behaves like a hard disk. In addition, LTFS provides the ability to share data across platforms, as you would with a USB drive or memory stick. LTFS is currently supported on Windows, Mac, and Linux. HPE StoreOpen Standalone and HPE StoreOpen Automation are a set of utilities that provide easy installation, configuration, and management of a tape drive or library for use with LTFS. To use HPE StoreOpen, simply connect your tape drive or tape library to a supported host, following the information noted in the respective sections within this guide, prior to installing the HPE StoreOpen software. Information and download links for native LTFS drivers, source code, HPE StoreOpen Standalone, and HPE StoreOpen Automation can be found at hpe.com/storage/LTFS.
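To make the "behaves like a hard disk" point concrete, a typical Linux LTFS session looks roughly like the following sketch. It requires an attached LTO drive; the device name and mount point are examples, and command options may vary between the LTFS reference implementation and HPE StoreOpen.

```
mkltfs -d /dev/st0                  # format the loaded cartridge for LTFS
mkdir -p /mnt/ltfs
ltfs -o devname=/dev/st0 /mnt/ltfs  # mount the cartridge as a file system
cp report.pdf /mnt/ltfs/            # ordinary file operations now apply
ls /mnt/ltfs
umount /mnt/ltfs                    # unmount before ejecting the cartridge
```

Once mounted, the cartridge can be browsed, copied to, and read from with any standard tool, which is what enables the cross-platform sharing described above.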
Software utilities that may disrupt solution connectivity
Software utilities common to SAN environments can interfere with backup and restore operations. These utilities include system management agents, monitoring software, and tape drive and system configuration utilities. A list of known software utilities and the operating systems on which they are found is shown in table 3.
Caution: Use of these software utilities during backup or restore operations could result in data loss.
Table 3. HBA and software utilities

HBA configuration utilities
• Windows: Emulex OneCommand Manager (OCM)/HBAnyware; QLogic QConvergeConsole (QCC); QLogic Host Connectivity Manager (HCM); Broadcom Advanced Control Suite 3 (BACS3)
• Linux: Emulex OCM/HBAnyware; QCC; QLogic HCM; BACS3
• Solaris: Emulex OCM/HBAnyware; QCC

Other software utilities
• Windows: HPE Library and Tape Tools utility; HPE Systems Insight Manager (SIM) management agents; Windows Server backup; Removable Storage Manager (RSM)6
• HPE-UX: HPE Library and Tape Tools utility; System Administration Manager (SAM) on HPE-UX 11.23; System Management Homepage (SMH) on HPE-UX 11.31
• Linux: HPE Library and Tape Tools utility; SCSI Generic (SG) commands
• Solaris: HPE Library and Tape Tools utility
• AIX: System Management Interface Tool (SMIT)
FC/FCoE switch zoning recommendations
Due to the complexities of multi-hosting tape devices on SANs, Hewlett Packard Enterprise strongly recommends using switch zoning tools to keep the backup, restore, and archive environment simple and less susceptible to the effects of "chatty," changing, or problematic SANs. Zoning provides a way for servers, disk arrays, and tape controllers to see only the hosts and targets they need to see and use.
The benefits of zoning include, but are not limited to:
• The potential to greatly reduce target and logical unit number (LUN) shifting
• Reduced stress placed on backup devices by polling agents
• Reduced time to debug and resolve anomalies in the backup, restore, and archive environment
• Reduced potential for conflict with untested third-party products
6 Removable Storage Manager is no longer available as of Windows 7 and Windows Server 2008 R2.
Zoning may not always be required for configurations that are small or simple, i.e., a single switch or a single inter-switch link (ISL). Zoning can be helpful in larger SANs by simplifying device discovery and reducing chatter between devices. Hewlett Packard Enterprise recommends the following for determining how and when to use zoning:
• Use zoning by HBA port. Zoning by HBA port is implemented by creating a specific zone for each server or host by World Wide Port Name (WWPN) and adding only those storage elements to be utilized by that host. Zoning by HBA port prevents a server from detecting any other devices or servers on the SAN, and it simplifies the device discovery process.
• Disk and tape on the same HBA is supported. For larger SAN environments, it is recommended to also add storage-centric zones for disk and backup targets. This type of zoning is done by adding overlapping zones with disk and backup targets separated. See figure 1 and figure 2 below.
• FC zoning can be implemented using physical switch port numbers, WWN IDs, or user-defined switch aliases. It is important to note that physical ports and aliases can change due to recabling or switch configuration restores, but WWN IDs do not. Hewlett Packard Enterprise recommends zoning using WWN IDs.
The figures below represent example configurations but are not exhaustive
Figure 1 Storage centric zoning same HBA port (overlapping zones)
Figure 2 Storage centric zoning redundant paths Also applies to dual-port HBAs and tape drives
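As a sketch of the zoning-by-HBA-port recommendation, the transcript below shows how this might look on a Brocade Fabric OS switch: one zone per host HBA port, containing only that host's WWPN plus the tape drive WWPNs it uses. All aliases, WWPNs, and the configuration name are invented examples.

```
alicreate "host1_hba0", "10:00:00:90:fa:12:34:56"
alicreate "tapelib_drv1", "50:01:10:a0:00:12:34:58"
zonecreate "z_host1_tape", "host1_hba0; tapelib_drv1"
cfgadd "prod_cfg", "z_host1_tape"
cfgenable "prod_cfg"
```

Because the zone members are WWN-based aliases rather than physical port numbers, recabling the host or the drive does not invalidate the zone, which is why WWN-ID zoning is recommended above.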
Configuration and operating system details
Windows Server
Windows Server 2008, Windows Server 2008 R2, Windows Server 2012, and Windows Server 2012 R2
Installing the HBA device driver
All HPE ProLiant server software, firmware, and drivers for Windows servers can be updated using the latest HPE Service Pack for ProLiant (SPP) from the HPE support website:
1 From an Internet browser, go to hpe.com
2 Click on Support
3 Under Product Support click HPE Servers Storage and Networking
4 In the Enter product name or number box enter Service Pack for ProLiant
6 Click Get drivers, software & firmware
6 Select the Windows Server operating system version that is installed on the ProLiant system that will be updated
7 Expand Application (Entitlement Required) - System Management, then select the HPE Service Pack for ProLiant (American, International) hyperlink
8 Below the details for the software, you should notice several tabs. Select the Release Notes tab, then scroll to verify whether there are any Upgrade Requirements or to view additional information
9 Select the Installation Instructions tab to verify how to install the SPP. Be sure to copy or save the installation instructions
Note: A hyperlink to the HPE Service Pack for ProLiant Release Notes is provided within the Release Notes tab. The HPE Service Pack for ProLiant Release Notes provide detailed instructions regarding the SPP, including a summary of changes, compatibility details for migrating from an older version of the SPP, supported operating systems, requirements, component prerequisites, deployment instructions, and known limitations.
10 Click on the Obtain software hyperlink above the various tabs to download the Service Pack for ProLiant to your server
Note To download the HPE Service Pack for ProLiant you must have
1 An HPE Passport account (a sign-in link is provided)
2 Either a warranty HPE Care Pack or support agreement linked to your HPE Support Center profile
Click on the various links that are provided for more information on how warranties HPE Care Packs and support agreements enable access to select downloads or site functions
11 Booting your Windows Server to the SPP (offline mode) will allow you to upgrade firmware for any or all components that are flagged as requiring an upgrade, while booting to the operating system and then running the SPP (online mode) will allow you to install any or all Hewlett Packard Enterprise related drivers and software packages
Note: Please refer to the HPE Service Pack for ProLiant Release Notes, which are referenced above, if any issues are encountered when installing the SPP. Specifically, review the sections Deployment Instructions and Components Changes.
12 A reboot might be required following the SPP installation
To manually install the latest HPE-supported Brocade Emulex or QLogic driver kit from the HPE support website
1 From an Internet browser, go to hpe.com
2 Click on Support
3 Under Product Support click HPE Servers Storage and Networking
4 In the Enter product name or number box enter your HBA model (for example AP770A AJ763A AJ764A CN1000Q SN1000Q CN1100E SN1100E or NC553m) and click Go
5 Click Get drivers, software & firmware
6 Select the Windows Server operating system version that is installed on the ProLiant system in which the HBA is installed
7 Expand Driver - Storage Fibre Channel and click on the appropriate driver hyperlink (if more than one version of the driver is listed, verify the latest supported version listed in the latest HPE Data Agile BURA Compatibility Matrix)
8 Below the details for the driver, you should notice several tabs. Select the Release Notes tab, then scroll to verify whether there are any Upgrade Requirements, Supported Devices and Features, or to view additional information
9 Select the Installation Instructions tab to verify how to install the driver. Be sure to copy or save the installation instructions
10 Click on the Download tab to copy the file to your server
11 A reboot might be required following the driver installation
Installing and configuring Microsoft iSCSI Initiator
Microsoft iSCSI Initiator is installed natively on Windows Server 2008, Windows Server 2008 R2, Windows Server 2012, and Windows Server 2012 R2. On these operating systems, no installation steps are required.
To connect to an iSCSI target device by using Quick Connect
1 Click Start type iSCSI in Start Search and then under Programs click on iSCSI Initiator
2 On the User Account Control page click Continue
3 If this is the first time that you have launched Microsoft iSCSI Initiator, you receive a prompt that says the Microsoft iSCSI service is not running. You must start the service for Microsoft iSCSI Initiator to run correctly. Click Yes to start the service. The Microsoft iSCSI Initiator Properties dialog box opens, and the Targets tab is displayed.
4 On the Targets tab type the name or the IP address of the target device in the Quick Connect text box and then click Quick Connect The Quick Connect dialog box is displayed
5 If multiple targets are available at the target portal that is specified a list is displayed Click the desired target and then click Connect
6 Click Done
To connect to an iSCSI target by using advanced settings
1 Click Start type iSCSI in Start Search and then under Programs click on iSCSI Initiator
2 On the User Account Control page click Continue
3 If this is the first time that you have launched Microsoft iSCSI Initiator, you receive a prompt that says the Microsoft iSCSI service is not running. You must start the service for Microsoft iSCSI Initiator to run correctly. Click Yes to start the service. The Microsoft iSCSI Initiator Properties dialog box opens, and the Targets tab is displayed.
4 Click the Discovery tab
5 To add the target portal click Discover Portal and then in the Discover Portal dialog box type the IP address or name of the target portal to connect to If desired you can also type an alternate TCP port to be used for the connection
6 Click OK
Installing the HPE StoreEver Tape drivers
Both the HPE tape and HPE changer drivers for Windows must be installed before the advanced path failover drivers are installed. The tape and changer driver bundle can be downloaded and then installed as follows:
1 Go to hpe.com/storage/tapecompatibility
2 Under Tape tools select HPE StoreEver Tape Drivers
3 A webpage will open with RECOMMENDED HPE StoreEver Tape Drivers for Windows displayed
4 Below the details for the driver, you should notice several tabs. Select the Release Notes tab, then scroll to verify whether there are any Upgrade Requirements or to view additional information
5 Select the Installation Instructions tab to verify how to install the driver. Be sure to copy or save the installation instructions
6 Click on the Download tab then save the file
7 Follow the installation instructions from step 5 to install the HPE tape and HPE changer drivers
8 After installation of the tape and changer drivers, use Windows Device Manager to confirm that all of the configured paths are accessible to the operating system. If the expected number of paths are not available, check the host and SAN configuration. After all of the expected paths are available to the host, the advanced path failover drivers can be installed.
Installing the HPE StoreEver Tape advanced path failover drivers
Windows (2008 R2, 2012, and 2012 R2)
1 Go to hpe.com/support/storage
2 Select Tape Storage
3 Click Enterprise Class Tape Libraries (for ESL G3) or Tape Libraries (for MSL6480)
4 Click HPE StoreEver ESL G3 Tape Libraries or HPE StoreEver MSL6480 Tape Library
5 With the Download options tab selected, click Get drivers, software & firmware
6 For the ESL G3 select your product For MSL6480 skip to the next step
7 Under Operating systems select OS Independent
8 Click Driver - Storage Tape
9 Click Obtain software for the HPE StoreEver High Availability Failover Driver for your Windows operating system
10 Click Select to continue An HPE Passport account is required
11 After logging in using your HPE Passport complete the required fields then read and accept the software license agreement Click Next
12 On the following page select the Download tab for the HPE StoreEver High Availability Failover Driver for the version of Windows that is installed on your server
13 If you saved the file double-click on the file to launch the installer for the Tape Upper Bus Storage Filter driver
14 Restart when requested
15 After the system restarts, the installer will continue installing the Tape Multi-Path Intermediate Class driver. The installation process creates the directory C:\Program Files\Hewlett-Packard\Failover
For more detailed information related to installing the HPE StoreEver Tape advanced path failover drivers for Windows or the HPE StoreEver Tape drivers for Windows, refer to the HPE StoreEver Tape Libraries Failover User Guide - specifically the section titled Installing and using Windows advanced path failover drivers.
Windows Server best practices
Persistent binding
Target and LUN shifting can occur with Windows operating systems when disk or tape devices are connected or disconnected, a device is busy during discovery, or a device failure occurs, causing that device to no longer be seen on the SAN. Hewlett Packard Enterprise strongly recommends using zoning by HBA port to resolve target and LUN shifting. Refer to the earlier section FC/FCoE switch zoning recommendations for more on zoning by HBA port.
The method by which the Windows operating system enumerates devices is the cause of most target and LUN shifting. Windows enumerates devices as they are discovered during a scan sequence, assigning device handles such as TAPE0, TAPE1, and so on. The Windows device scan sequence goes in the order of bus, target, and LUN:
• Bus is the HBA PCI slot
• Target is representative of a WWN
• LUN is representative of a device behind the WWN
The order of discovery is:
• The lowest bus (bus 0)
• Target 0 on bus 0
• The LUNs on target 0
• Target 1, and so on, until all targets connected to that HBA are discovered
The process then continues on to the next HBA and its targets and LUNs. A common cause of device shifting is a busy tape device. A busy tape device cannot respond in time for Windows to enumerate it; the device is essentially skipped in the enumeration sequence, thus shifting all other device numbers.
Note: Emulex OneCommand Manager Application Kit and QLogic QConvergeConsole Utility both have proprietary persistent binding options. Review the appropriate vendor documentation for details.
Data protection and archiving software can also communicate with a tape device by using the Windows device name. As noted, the device name may shift and cause a problem for the data protection and archiving software. Some data protection and archiving software monitors for this condition and will adjust accordingly. Other data protection and archiving software must wait for a server reboot and subsequently scan for devices, or must be manually reconfigured to match the current device list. If your data protection and archiving software requires persistent device mapping, use the software's device configuration wizard to ensure proper configuration.
Red Hat and SUSE Linux Server
RHEL 5 Update 11 (AS/ES/WS), RHEL 6 Update 6 (AS/ES/WS), RHEL 7 (AS/ES/WS), SLES 11 SP3 (x86/x64), SLES 12
Note Hewlett Packard Enterprise recommends installing the kernel development option (source code) when installing any Linux server Availability of source code ensures the ability to install additional device support software that will be compiled into the kernel
Installing the HBA drivers All HPE ProLiant server software firmware and drivers can be updated using the latest SPP from the HPE support website
1 From an Internet browser, go to hpe.com
2 Click on Support
3 Under Product Support click HPE Servers Storage and Networking
4 In the Enter product name or number box enter Service Pack for ProLiant
5 Click Get drivers, software & firmware
6 Select the Linux Server operating system version that is installed on the ProLiant system that will be updated
7 Under Application (Entitlement Required) - System Management, select the HPE Service Pack for ProLiant (American, International) hyperlink
8 Below the details for the software, you should notice several tabs. Select the Release Notes tab, then scroll to verify whether there are any Upgrade Requirements or to view additional information
9 Select the Installation Instructions tab to verify how to install the SPP. Be sure to copy or save the installation instructions
Note: A hyperlink to the HPE Service Pack for ProLiant Release Notes is provided within the Release Notes tab. The HPE Service Pack for ProLiant Release Notes provide detailed instructions regarding the SPP, including a summary of changes, compatibility details for migrating from an older version of the SPP, supported operating systems, requirements, component prerequisites, deployment options, and known limitations.
10 Click on the Obtain software hyperlink above the various tabs to download the Service Pack for ProLiant to your server
Note To download the HPE Service Pack for ProLiant you must have
1 An HPE Passport account (a sign-in link is provided)
2 Either a warranty HPE Care Pack or support agreement linked to your HPE Support Center profile
Click on the various links that are provided for more information on how warranties HPE Care Packs and support enable access to select downloads or site functions
11 Booting your server to the SPP (offline mode) will allow you to upgrade firmware for any or all components that are flagged as requiring an upgrade, while booting to the OS and then running the SPP (online mode) will allow you to install any or all Hewlett Packard Enterprise related drivers and software packages
Note: Please refer to the HPE Service Pack for ProLiant Release Notes, which are referenced above, if any issues are encountered when installing the SPP. Specifically, review the sections Deployment Instructions and Components Changes.
12 A reboot might be required following the SPP installation
To manually install the latest HPE-supported Brocade Emulex or QLogic driver kit from the HPE support website
1 From an Internet browser, go to hpe.com
2 Click on Support
3 Under Product Support click HPE Servers Storage and Networking
4 In the Enter product name or number box enter your HBA model (for example AP770A AJ763A AJ764A CN1000Q SN1000Q CN1100E SN1100E or NC553m) and click Go
5 Click Get drivers, software & firmware
6 Select the Linux Server operating system version that is installed on the ProLiant system in which the HBA is installed
7 Expand Driver - Storage Fibre Channel, then click on the appropriate driver hyperlink (if more than one version of the driver is listed, verify the latest supported version listed in the latest HPE Data Agile BURA Compatibility Matrix)
8 Below the details for the driver, you should notice several tabs. Select the Release Notes tab, then scroll to verify whether there are any Upgrade Requirements or to view additional information
9 Select the Installation Instructions tab to verify how to install the driver. Be sure to copy or save the installation instructions
10 Click on the Download tab to copy the file to your server
11 A reboot might be required following the driver installation
Installing the Linux OPEN-iSCSI module
You can install and use the iscsi-initiator-utils package (Red Hat®) or the open-iscsi module (SUSE). Download and install either of the packages using your distribution's package manager (yum or YaST, for example). Detailed instructions for iscsiadm can be found in the iscsiadm man pages.
Prior to discovering available iSCSI target devices on an HPE Storage System from a Linux server, the target requires the Linux server iSCSI initiator name. This name is found in the /etc/iscsi/initiatorname.iscsi file.
Once the iSCSI initiator name has been determined, discover the available iSCSI target devices on an HPE Storage System by typing the following command, where x.x.x.x is the IP address of the HPE Storage System:
iscsiadm --mode discovery --type sendtargets --portal x.x.x.x
To connect to the HPE Storage System targets, type the following command for each discovered target, where target_name is the name returned by the iscsiadm discovery:
iscsiadm --mode node -T target_name --login --portal x.x.x.x
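The two steps can be combined into a small sketch that discovers every target at a portal and then logs in. The portal address and target name are placeholders, and the RUN=echo guard makes this a dry run that only prints the commands; set RUN to empty on a real initiator host.

```shell
#!/bin/sh
# Dry-run sketch: discover iSCSI targets at a portal, then log in
# (open-iscsi). PORTAL and the target IQN are example values.
PORTAL=${PORTAL:-192.0.2.10}
RUN=${RUN:-echo}   # RUN=echo prints commands; set RUN= to execute them

$RUN iscsiadm --mode discovery --type sendtargets --portal "$PORTAL"
# Discovery output lines look like "192.0.2.10:3260,1 iqn....:target0".
# Log in to a discovered target (name shown is an example):
$RUN iscsiadm --mode node -T iqn.2015-01.com.example:target0 --login --portal "$PORTAL"
```

The echo-guard pattern lets the sequence be reviewed safely before running it against a live storage system.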
Storage HBAs with Linux servers
Hewlett Packard Enterprise supports the Linux in-box driver (the driver supplied with the operating system distribution). However, not all hardware configurations support the in-box driver. To determine if your configuration supports the in-box driver, see the HBAs, CNAs, Flex Fabric Adapters and Server LOMs Support Matrix (Linux, Citrix, VMware, and Windows), which is available on the HPE SPOCK website. You must sign up for an HPE Passport to access SPOCK.
Whether you are using the in-box drivers or the out-of-box drivers Hewlett Packard Enterprise recommends that you install the HPE Fibre Channel Enablement Kit because it provides additional libraries and configuration utilities to enable HPE Fibre Channel Storage Arrays to work properly with Linux
Note If you are using any HPE management applications you need the HBA API libraries that come with the HPE-fc-enablement RPM
Downloading and installing the Fibre Channel Enablement Kit for Linux
Note: There has been a change to the enablement kits released after 29 April 2014; they are now vendor-specific kits. Hewlett Packard Enterprise recommends that you uninstall any previous kits prior to installing the latest version of the enablement kit.
1 From an Internet browser, go to hpe.com
2 Click on Support
3 Under Product Support click HPE Servers Storage and Networking
4 In the Enter product name or number box enter your HBA model (for example AP770A AJ763A AJ764A CN1000Q CN1100E or NC553m) and click Go
5 Click Get drivers, software & firmware
6 Select the Linux Server operating system version of the ProLiant system in which the HBA is installed
7 Select the Software - Storage Controllers - FC HBA hyperlink and click on the HPE Fibre Channel Enablement Kit for Linux (American, International)
8 Below the details for the driver, you should notice several tabs. Select the Release Notes tab, then scroll to verify whether there are any Upgrade Requirements or to view additional information
9 Select the Installation Instructions tab to verify how to install the HPE Fibre Channel Enablement Kit for Linux. Be sure to copy or save the installation instructions
10 Click on the Download tab to copy the RPM file to your server
11 Browse to the directory that you downloaded the RPM to
12 Follow the Installation Instructions that you copied or saved in step 9
13 A reboot is required after the installation for the updates to take effect and for hardware stability to be maintained
14 Verify that the host has successfully discovered all of the expected devices (tape drives, library robotic devices, and disk-based backup devices) using one of the following methods:
– Review the devices listed by running the command cat /proc/scsi/scsi
– Review the output from the sg_inq command (which requires that sg3_utils is installed) for any of the /dev/sg devices listed in the output of the sg_map command. See figure 3 for an example.
Figure 3 Verifying devices using sg_map and sg_inq commands
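The verification step can be scripted as a loop over the SCSI generic nodes; this is a sketch assuming sg3_utils is installed, and it prints a notice if no /dev/sg nodes exist (for example, on a host with no SAN attach).

```shell
#!/bin/sh
# Walk the /dev/sg* nodes and report vendor/product via sg_inq (sg3_utils).
found=0
for dev in /dev/sg*; do
    [ -e "$dev" ] || continue
    found=1
    printf '%s: ' "$dev"
    # Show only the identification lines from the inquiry response.
    sg_inq "$dev" 2>/dev/null | grep -E 'Vendor|Product' | tr '\n' ' '
    echo
done
[ "$found" -eq 1 ] || echo "no /dev/sg devices present"
```

Tape drives, library robots, and disk-based backup targets should each appear with the expected vendor and product strings; a missing entry points to a host or SAN configuration problem.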
Installing the HPE StoreEver Tape advanced path failover drivers
Red Hat Enterprise Linux Server 6.2 (x86_64), 6.3 (x86_64), 6.4 (x86_64), 6.5 (x86_64), and 6.6 (x86_64)
The advanced path failover drivers for Linux replace the normal SCSI tape and SCSI generic drivers. The advanced path failover drivers for Linux pass all SCSI commands for devices that do not support advanced path failover through the same code path that is followed when the standard drivers are loaded, and route commands for devices that do support failover through the new PFO driver.
1 Go to hpe.com/support/storage
2 Select Tape Storage
3 Click Enterprise Class Tape Libraries (for ESL G3) or Tape Libraries (for MSL6480)
4 Click HPE StoreEver ESL G3 Tape Libraries or HPE StoreEver MSL6480 Tape Library
5 In the Download options tab, click Get drivers, software & firmware
6 For the ESL G3 select your product For MSL6480 skip to the next step
7 Under Operating systems select OS Independent
8 Expand Driver - Storage Tape, then select the appropriate driver hyperlink
9 Click Obtain software for the HPE StoreEver High Availability Failover Driver for your operating system
10 Click on Select An HPE Passport account (a sign-in link is provided) is required
11 After logging in using your HPE Passport complete the required fields then read and accept the software license agreement Click Next
12 On the following page select the Download tab for the HPE StoreEver High Availability Failover Driver for the version of Red Hat that is installed on your server
13 To install the drivers run the following command
rpm -ivh <filename>.rpm
14 In some cases the server will need to be rebooted to complete the installation Check the instructions provided by the RPM file output and reboot the server if requested
15 The driver revision number indicates the build date of the driver and can be viewed by running:
cat /proc/scsi/sg/version
16 You can view the status of a device that is controlled by the failover driver by reading a file in the /sys file system. For example, to see the path status for /dev/sg3:
cat /sys/class/pfo/pfo3/paths
Enabling advanced path failover on a device while the driver is running
If a device has the advanced path failover feature disabled, then when advanced path failover is enabled, the device will reset itself, removing the old /dev file. When the device comes back up, it will be recognized as an advanced path failover device and will then operate normally as one. It may not have the same /dev file name as before the change.
Disabling advanced path failover on a device while the driver is running
Disabling advanced path failover while a device is running is not recommended because the paths will not be cleanly removed and then reassociated. If advanced path failover is disabled on any device, the Linux server will need to be rebooted. When possible:
1 Power down the Linux server cleanly
2 Disable advanced path failover on the device
3 Boot the Linux server
For more detailed information related to installing the HPE StoreEver Tape advanced path failover drivers for Red Hat Linux servers, refer to the HPE StoreEver Tape Libraries Failover User Guide - specifically the section titled Installing and using Linux advanced path failover drivers.
Additional SG device files
In most environments, the default number of SG device files is sufficient to support all of the required devices. In larger SAN environments, if the default number of SG device files is fewer than the combined total of disk and tape devices being allocated to the server, then additional device files need to be created. SG device files are preferable to the standard SCSI tape (st) device files because the st driver's SCSI timeout values may not be sufficient in length to support some tape operations.
To create additional SG device files, run: mknod /dev/sgX c 21 X
X signifies the number of a device file that does not already exist. For additional command options, see the mknod man page.
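Creating a batch of device files can be scripted; the sketch below prints the mknod commands for an example range (sg32 through sg39) rather than executing them, since creating character special nodes requires root. Remove the echo and run as root to actually create the files.

```shell
#!/bin/sh
# Print mknod commands for additional SG device files (char major 21,
# minor = device number). The range 32..39 is an illustrative example.
SG_MAJOR=21
for i in $(seq 32 39); do
    echo mknod /dev/sg$i c $SG_MAJOR $i
done
```

Each printed line matches the single-device form given above, with X substituted for each number in the range.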
Red Hat and SUSE Linux Server best practices
Rewind commands being issued by rebooted Linux hosts
Device discovery that occurs as part of a normal Linux server boot operation can cause a SCSI rewind command to be issued to all attached tape drives if the data protection and archiving software does not employ SCSI reserve/release and the rewind command is received while the tape drive is busy writing. The result is a corrupted tape header and an unusable piece of backup media.
This issue could manifest itself in several ways:
• A failed verify operation
• A failed restore operation
• The inability to mount a tape and read the tape header
If a backup verification is not completed the normal backup process might not detect that an issue exists
Tape devices not discovered and configured across server reboots
Tape drives can disappear from Linux servers after the host reboots when using older versions of the HBA drivers. Adding the line "hp_rescan -a" to /etc/rc.d/rc.local resolves the issue. The hp_rescan utility was previously included and installed with older versions of the HPE Linux FCoE/FC Driver Kit.
Note: The latest versions of the Fibre Channel Enablement for Linux kits no longer include the HPE fibreutils package, which contains the hp_rescan utility. The fibreutils package can be manually downloaded and installed by following steps 1-6 from the section Downloading and installing the Fibre Channel Enablement Kit for Linux. For step 5, select Software - Storage Controllers - FC HBA.
This issue which affects Red Hat installations and intermittently some SUSE Linux installations is understood to be an issue with the mid-layer SCSI driver and interaction with SCSI-2 tape automation products The permanent resolution to this issue is to upgrade to the latest FC driver kit
Enable iSCSI target devices to remain persistent across system reboots
To enable the iSCSI target devices to remain persistent across system reboots, the open-iscsi service must be configured to run at system startup. This can be done by issuing the following commands:
For Red Hat 7 and SUSE 12:
systemctl enable iscsid.service
systemctl restart iscsid.service
For earlier versions of Red Hat chkconfig iscsi on
For earlier versions of SUSE chkconfig open-iscsi on
To verify that this configuration change has been accepted run the following command
For Red Hat 7 and SUSE 12: systemctl -a | grep iscsi
iscsi-shutdown.service error inactive dead Logout off all iSCSI sessions on shutdown
iscsi.service loaded inactive dead Login and scanning of iSCSI devices
iscsid.service loaded active running Open-iSCSI
iscsiuio.service loaded active running iSCSI UserSpace IO driver
iscsid.socket loaded active running Open-iSCSI iscsid Socket
iscsiuio.socket loaded active running Open-iSCSI iscsiuio Socket
For earlier versions of Red Hat: chkconfig --list iscsi
iscsi 0:off 1:off 2:off 3:on 4:off 5:on 6:off
For earlier versions of SUSE: chkconfig --list open-iscsi
open-iscsi 0:off 1:off 2:off 3:on 4:off 5:on 6:off
LUNs shifting after reboot
The Linux 2.6 kernel and later enhanced the management of attached devices through the introduction of udev. The udev device manager provides users with a persistent naming process for all devices across reboots. For details on how to configure udev, refer to the appropriate Linux distribution documentation.
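As an illustration of the idea, a udev rule along the following lines gives each SCSI tape device a stable symlink keyed on its serial number. The file name, match keys, and the sg_inq-based import are examples only and may differ from your distribution's stock rules; consult the distribution documentation for the authoritative form.

```
# /etc/udev/rules.d/61-persistent-tape-example.rules (illustrative)
# Import identification variables from the device, then create a symlink
# that survives reboots regardless of which /dev/stN the drive receives.
KERNEL=="st*[0-9]", SUBSYSTEMS=="scsi", \
  IMPORT{program}="/usr/bin/sg_inq --export /dev/%k", \
  SYMLINK+="tape/by-serial/$env{SCSI_IDENT_SERIAL}"
```

Backup software can then be configured against /dev/tape/by-serial/<serial> instead of a raw /dev/stN name that may shift.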
If your data protection and archiving software requires persistent device mapping, use the software's device configuration wizard to ensure proper configuration.
Recommended changes to queue depth and timeout values. Changes to queue depth and timeout values are recommended when operating HPE StoreEver ESL G3 Tape Libraries and HPE StoreEver MSL Tape Libraries with Linux-based hosts. Recommended changes are as follows:
• Queue Depth
The queue depth when operating MSL6480 or MCB Version 1 ESL G3 libraries should be set to one (1), as there is only one active robot to complete Move Medium commands. With dual-robot MCB Version 2 ESL G3 libraries, the queue depth should be set to two (2), as the library has two active robots that can complete Move Medium commands.
HPE LTO drives can handle command queues of four or five commands, but if hosts continue to send commands past that amount, the drive (or the library hosted by that drive) will start to return status messages saying that the queue is full and that the host should wait 500 ms. If the host does not stop sending commands at this point, the delays in returning command status can be long enough that the drive appears hung. Take care to ensure that the queue depth is the correct length to avoid this scenario, preferably by using the recommended queue depths provided above.
With Linux-based hosts, this command shows the queue depth set for each generic SCSI device:
find /sys/class/scsi_generic/*/device/queue_depth -exec grep -H . {} \;
If required, the sg_inq command can help correlate SCSI generic mappings to specific devices:
sg_inq /dev/sg<N>
Note: <N> is the sg number provided in the output from the previous command.
• Timeouts
Most applications increase the timeout value for motion commands but continue to rely on the default timeout value for Read Element Status commands. This is especially a concern in larger partitioned libraries, where multiple Read Element Status commands to different partitions can be outstanding and the default timeout is reached before a response is provided. Depending on whether udev rules are in effect, the default timeout value on Linux-based hosts tends to be either thirty (30) or sixty (60) seconds. Given all of the above, it is recommended that the default timeout value be changed to twenty minutes on all Linux-based hosts operating HPE LTO drives in HPE StoreEver ESL G3 Tape Libraries and HPE StoreEver MSL Tape Libraries, to allow multiple commands to complete successfully without hitting the default timeout value.
With Linux-based hosts, this command shows the current default timeout value:
find /sys/class/scsi_generic/*/device/timeout -exec grep -H . {} \;
If required, the sg_inq command can help correlate SCSI generic mappings to specific devices:
sg_inq /dev/sg<N>
Note: <N> is the sg number provided in the output from the previous command.
The detailed procedure for making the recommended changes to command queuing and default timeout values on Linux-based hosts can be viewed in the Engineering Advisory HPE StoreEver ESL G3 Tape Libraries, MSL Tape Libraries, and 1/8 G2 Autoloaders—Recommended Changes to Queue Depth and Timeout Values With Linux-Based Hosts.
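The two recommendations above can be sketched as a small shell helper. The model labels (`MSL6480`, `ESL-G3-MCB1`, `ESL-G3-MCB2`) are illustrative names invented here to stand for the cases described in the text, and the sysfs writes are shown commented out because they require a live device and root privileges.

```shell
#!/bin/sh
# Sketch of the recommended Linux host settings for HPE StoreEver libraries:
# queue depth per robot count, and the 20-minute timeout expressed in
# seconds (the unit sysfs expects).
recommended_queue_depth() {
    case "$1" in
        MSL6480|ESL-G3-MCB1) echo 1 ;;  # single active robot
        ESL-G3-MCB2)         echo 2 ;;  # dual-robot library
        *)                   echo 1 ;;  # conservative default
    esac
}

recommended_timeout_secs() {
    echo $(( $1 * 60 ))   # minutes -> seconds
}

recommended_queue_depth ESL-G3-MCB2
recommended_timeout_secs 20

# Applying the values to a real device (run as root; sg3 is a placeholder,
# use the sg number correlated with sg_inq):
# echo 2    > /sys/class/scsi_generic/sg3/device/queue_depth
# echo 1200 > /sys/class/scsi_generic/sg3/device/timeout
```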
HPE-UX Server. Installing HBA drivers in the kernel. HPE-UX 11i v2 (11.23, IA-64): 1. The drivers schgr, sctl, and stape must all be installed in the kernel. To see if these drivers are installed, enter the following command:
/usr/sbin/kcmodule schgr sctl stape
The following example shows output from kcmodule where the stape driver is not installed
Module State Cause
schgr static explicit
sctl static depend
stape unused
If one or more of the above drivers is in the unused state, it must be installed in the kernel. If they are all installed (static state), proceed to the next section, Final host configurations.
2. Use kcmodule to install modules in the kernel. For example, to install the stape module, run the following command:
/usr/sbin/kcmodule stape=static
Enter yes to back up the current kernel configuration file and initiate the new kernel build
3 Reboot the server to activate the new kernel
cd /
/usr/bin/shutdown -r now
HPE-UX 11i v3 (11.31, IA-64): 1. The drivers schgr, sctl, stape, eschgr, esctl, and estape must all be installed in the kernel. To see if these drivers are installed, enter the following command:
/usr/sbin/kcmodule sctl esctl schgr eschgr stape estape
The following example shows output from kcmodule where the stape driver is not installed
Module State Cause
sctl static best
esctl static depend
schgr static best
eschgr static best
stape unused
estape static best
If one or more of the above drivers is in the unused state, it must be installed in the kernel. If they are all installed (static state), proceed to the next section, Final host configurations.
2 Use kcmodule to install modules in the kernel For example to install the stape module use the following command
/usr/sbin/kcmodule stape=static
Enter yes to back up the current kernel configuration file and initiate the new kernel build
3 Reboot the server to activate the new kernel
cd /
/usr/bin/shutdown -r now
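The check-and-install step above can be sketched as a filter over kcmodule output. This is a hypothetical helper, shown against a canned sample so it runs anywhere; on a real HPE-UX host you would pipe the output of `/usr/sbin/kcmodule sctl esctl schgr eschgr stape estape` into it instead.

```shell
#!/bin/sh
# Sketch: scan kcmodule-style "module state cause" lines and print the
# kcmodule command needed for any driver still in the "unused" state.
print_missing_driver_cmds() {
    awk '$2 == "unused" { printf "/usr/sbin/kcmodule %s=static\n", $1 }'
}

# Canned sample mirroring the example output above.
print_missing_driver_cmds <<'EOF'
sctl static best
esctl static depend
schgr static best
eschgr static best
stape unused
estape static best
EOF
```

Remember that after installing a module with kcmodule the server must be rebooted, as described in step 3.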
Installing the HPE-UX iSCSI Software Initiator The iSCSI Software Initiator is located at the HPE Software Depot
1. Go to software.hp.com
2 Enter iSCSI Software Initiator in the Search Software Depot box located on the upper right side of the website
3. When the search results show iSCSI Software Initiator, click Select. An HPE Passport account (a sign-in link is provided) is required.
4 After logging in using your HPE Passport complete the required fields scroll down then read and accept the software license agreement for the order Click Next
5 Under Documentation click on the Download tab for the Installation Instructions to download instructions for using the Software Distributor tool to install the iSCSI Software Initiator
6 Under Software click on the Download tab for the iSCSI Software Initiator version that you would like to download
7 After installing the iSCSI Software Initiator and rebooting you can verify that the installation was successful by running the following command
swlist iSCSI-00
If the HPE-UX iSCSI Software Initiator is installed correctly, the output will be:
HPE-UX 11.23:
Initializing... Contacting target "localhost"...
Target: localhost
iSCSI-00 B.11.23.03e HPE-UX iSCSI Software Initiator
iSCSI-00.ISCSI-SWD B.11.23.03e HPE-UX iSCSI Software Initiator
HPE-UX 11.31:
Initializing... Contacting target "localhost"...
Target: localhost
iSCSI-00 B.11.31.01 HPE-UX iSCSI Software Initiator
iSCSI-00.ISCSI-SWD B.11.31.01 HPE-UX iSCSI Software Initiator
Final host configurations 1 Run ioscan to verify that the host detects the tape devices
ioscan
For HPE-UX 11.23 legacy device special files (DSFs) or persistent DSFs, run the following commands:
ioscan -fnkC tape
ioscan -fnkC autoch
2. For HPE-UX 11.31 persistent DSFs, run the following commands:
ioscan -fnNkC tape
ioscan -fnNkC autoch
Note: Some data protection and archiving software might not currently support HPE-UX 11.31 persistent DSFs for tape. Review your data protection and archiving software documentation for more information.
3. To verify that the host detects iSCSI devices, issue the ioscan command as follows for HPE-UX 11.23:
ioscan -fnC iscsi
Issue the ioscan command as follows for HPE-UX 11.31:
ioscan -fnNC iscsi
If the software is installed correctly the generated output will look similar to this Class I HW Path Driver SW State HW Type Description
=====================================================================
iscsi 0 255/0 iscsi CLAIMED VIRTBUS iSCSI Virtual Node
4. If no device files have been installed, enter the following commands:
insf -C tape -e
insf -C autoch -e
The command-line tool for configuring the HPE-UX iSCSI Software Initiator is iscsiutil. Detailed instructions for iscsiutil can be found in the iscsiutil man pages. If using iscsiutil to configure the HPE-UX iSCSI Software Initiator, add the path for iscsiutil and other iSCSI executables to the root path:
PATH=$PATH:/opt/iscsi/bin
You should now be able to find the iSCSI initiator node for the HPE-UX host:
iscsiutil -l
To discover available iSCSI target devices on an HPE Storage System, type the following command, where xxxx is the IP address of the HPE Storage System:
iscsiutil -a -I xxxx
Installing the HPE StoreEver Tape advanced path failover drivers for HPE-UX 11.31. Advanced path failover for HPE-UX is implemented by updating HPE-UX drivers to support advanced path failover with LTO-6 tape drives. The drivers function as both failover and non-failover drivers.
The updated drivers are
• HPE-UX tape driver (estape)—used for data path failover
• HPE-UX media changer driver (eschgr)—used for control path failover
• HPE-UX SCSI stack driver (esctl)—used for data path and control path failover
To download and install the HPE StoreEver Tape advanced path failover drivers for HPE-UX 11.31:
1. Get the latest HPE-UX patches from h20566.www2.hp.com/portal/site/hpsc/patch/home
Note To access and download HPE-UX patches you must have
1 An HPE Passport account (a sign-in link is provided)
2 An active HPE support agreement linked to your HPE Support Center profile The active Hewlett Packard Enterprise support agreement must
ndash Cover the specific HPE Operating Systems linked to your HPE Support Center user profile
ndash Include software updates or previous version support privileges
Links are provided to view your current patch access privileges or to contact Hewlett Packard Enterprise
2. To locate the patches, search for estape, eschgr, and esctl (or the patch number), and check the superseded-by field to see if there is a superseding patch.
3. To install the advanced path failover drivers, use the standard HPE-UX kernel patch installation process to install the following patches on HPE-UX Servers running HPE-UX 11.31:
– HPE-UX tape driver patch (estape): PHKL_43680 or superseding patch
– HPE-UX media changer driver patch (eschgr): PHKL_43681 or superseding patch
– HPE-UX SCSI stack (mass storage stack) driver patch (esctl): PHKL_43819 or superseding patch
4 The server will automatically reboot as part of the installation process
You can use ioscan to view the tape and library (media changer) devices connected to the HPE-UX Server. The device special file (DSF) is listed as the last item in the description:
ioscan -knNfC tape
ioscan -knNfC autoch
Finding the lockdown path. The load-balance policy used to route data on multiple paths to a tape drive or library is called the "path_lockdown" policy. Use the scsimgr get_info command to see the current lockdown path for a library. For example:
scsimgr get_info -D /dev/rchgr/autoch35
STATUS INFORMATION FOR LUN /dev/rchgr/autoch38
...
LUN path used when policy is path_lockdown = 0/0/0/9/0/0/1.0x50014382c6e4f009.0x1000000000000
scsimgr get_attr -D /dev/rtape/tape28_BEST
SCSI ATTRIBUTES FOR LUN /dev/rtape/tape28_BEST
name = lpt_lockdown
current = 0/0/0/9/0/0/0.0x100000e00222a6c1.0x2000000000000
default =
saved =
For additional information, see the HPE-UX man pages scsimgr(1M), ioscan(1M), mknod(2), mksf(1M), and rmsf(1M).
Troubleshooting advanced path failover for HPE-UX 11.31. Advanced path failover errors are logged in the /var/adm/syslog/syslog.log file as part of the default SCSI I/O tracing function of HPE-UX. You can use standard file-viewing commands, including cat, vi, dmesg, and tail -f, to view syslog.log.
Enabling or disabling advanced path failover for HPE-UX 11.31. Advanced path failover is disabled by default. When advanced path failover is disabled, the driver operates as if the device is not capable of using the advanced path failover feature.
When advanced failover is enabled for the library or tape drive the device resets itself and must be opened using the device special file before the driver will recognize it as an advanced path failover device and use the failover features of the driver Opening the device is generally done by the host applications
You can enable or disable advanced path failover using the library web-based interface For instructions refer to the following sections of the HPE StoreEver Tape Libraries Failover User Guide
bull Enabling control path failover under Configuring failover for the HPE StoreEver ESL G3 Tape Libraries
bull Enabling data path failover under Configuring failover for HPE StoreEver MSL6480 Tape Libraries
When advanced path failover is disabled, the passive control paths to the library will go into an error state (NO_HW) in the ioscan(1M) command output. These stale entries do not affect the function of the library. To clear these errors so the device can be accessed using its DSF:
1. On the HPE-UX host, run rmsf -H on the lunpath hardware paths that are in the NO_HW state. For example:
rmsf -H 0/4/0/0/0/1.0x50014380023560d4.0x1000000000000
2. Run ioscan -kfNH <HBA path>. For example:
ioscan -kfNH 0/4/0/0/0
Hewlett Packard Enterprise recommends enabling or disabling advanced path failover only when the library is not opened by any applications. If advanced path failover is disabled while an application is accessing the library, all of the library's lunpaths will go offline and I/O requests to the library will fail.
For more detailed information on installing the HPE StoreEver Tape advanced path failover drivers for HPE-UX 11.31 Servers, refer to the HPE StoreEver Tape Libraries Failover User Guide, specifically the section titled Installing and using HPE-UX advanced path failover drivers.
HPE-UX Server best practices. HPE-UX 11.31 can experience poor I/O performance on VxFS file systems due to memory blocking during high system memory usage. The HPE-UX 11.31 kernel subsystems and file I/O data cache can consume up to 90 percent of system memory during normal operation. When a heavy file I/O application such as data protection and archiving software starts, memory usage can approach 100 percent. Under such conditions, if VxFS attempts to allocate additional memory for inode caching, memory blocking and subsequently poor file I/O performance can result. In extreme conditions this scenario can cause data protection and archiving software to time out during file system reads, which could result in backup job failures.
Poor I/O performance resolution. To avoid backup job failures due to memory blocking, modify the kernel tunable parameter vx_ninode. The vx_ninode parameter determines the number of inodes in the inode table used by VxFS for caching. By default, the size of the inode cache is decided (auto-tuned) at boot time by VxFS, depending on the amount of physical memory in the machine. When modifying the value of vx_ninode, HPE recommends the values in table 4.
Table 4 Tuning vx_ninode
PHYSICAL MEMORY OR KERNEL AVAILABLE MEMORY VXFS INODE CACHE (NUMBER OF INODES)
1 GB 16384
2 GB 32768
3 GB 65536
> 3 GB 131072
To determine the current value of vx_ninode, run the following at the shell prompt:
/usr/sbin/kctune vx_ninode
To set vx_ninode to 32768, run the following command at the shell prompt:
/usr/sbin/kctune vx_ninode=32768
Note: The kernel tunable parameters filecache_min and filecache_max control the amount of physical memory that can be used for caching file data during system I/O operations. By default, these parameters are automatically determined by the system to balance memory usage between file system I/O-intensive processes and other types of processes. The values of these parameters can be lowered to allow a larger percentage of memory to be used for purposes other than file system I/O caching. Determining whether to modify these parameters depends on the nature of the applications running on the system.
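Table 4 can be encoded as a small helper that returns the recommended vx_ninode value for a given amount of physical memory. This is an illustrative sketch of the table, not an HPE utility; the chosen value would then be applied with kctune as shown above.

```shell
#!/bin/sh
# Sketch encoding Table 4: recommended VxFS inode cache size (vx_ninode)
# for a given amount of physical memory, in whole GB.
recommended_vx_ninode() {
    mem_gb="$1"
    if   [ "$mem_gb" -le 1 ]; then echo 16384
    elif [ "$mem_gb" -le 2 ]; then echo 32768
    elif [ "$mem_gb" -le 3 ]; then echo 65536
    else                           echo 131072   # > 3 GB
    fi
}

recommended_vx_ninode 2
recommended_vx_ninode 8

# On the HPE-UX host, apply the chosen value, for example:
# /usr/sbin/kctune vx_ninode=32768
```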
HPE-UX 11.23: Disabling rewind-on-close devices with st_san_safe. Turning on the HPE-UX 11.23 kernel tunable parameter st_san_safe disables tape DSFs that are rewind-on-close. This prevents utilities like mt from rewinding a tape that is in use by another utility.
Some applications or utilities require rewind-on-close DSFs (for example, the frecover utility that comes with HPE-UX); in this case, disabling rewind-on-close devices renders the utility unusable. Most data protection and archiving software, such as HPE Data Protector, can be configured to use SCSI reserve/release, which protects them from rogue rewinds by other utilities. Consider the requirements of your data protection and archiving environment when determining whether to enable st_san_safe.
To determine if rewind-on-close devices are currently disabled, enter:
/usr/sbin/kctune st_san_safe
If the value of st_san_safe is 1, rewind-on-close devices are disabled. If the value is 0, they are enabled. To disable rewind-on-close devices, enter:
/usr/sbin/kctune st_san_safe=1
Oracle Solaris Server. Solaris 10 Update 11 (SPARC), Solaris 10 Update 11 (x86/x64), Solaris 11.2 (SPARC), Solaris 11.2 (x64).
How to enable the iSCSI Software Initiator
1. For Solaris 10 Update 11 (SPARC) and Solaris 11.2 (SPARC), enable the iSCSI services using the command:
svcadm enable network/iscsi/initiator
2. Verify the iSCSI services are running:
svcs -a | grep iscsi/initiator
online 10:10:28 svc:/network/iscsi/initiator:default
3. For earlier versions of Solaris, enable the iSCSI services using the command:
svcadm -v enable iscsi_initiator
svc:/network/iscsi_initiator:default enabled
4. Verify the iSCSI services are running:
svcs -a | grep iscsi_initiator
online 10:10:28 svc:/network/iscsi_initiator:default
The command-line tool for configuring the Solaris iSCSI Software Initiator is iscsiadm. Detailed instructions for iscsiadm can be found in the iscsiadm man pages. If using iscsiadm to configure the Solaris iSCSI Software Initiator, run the following command to find the iSCSI initiator node for the Solaris host:
iscsiadm list initiator-node
To discover available iSCSI target devices on an HPE Storage System, type the following commands, where xxxx is the IP address of the HPE Storage System:
iscsiadm add discovery-address xxxx:3260
iscsiadm list discovery-address
Discovery Address: xxxx:3260
iscsiadm modify discovery -t enable
iscsiadm list discovery
List the configured iSCSI target devices using the following command:
iscsiadm list target
Oracle Solaris Server best practices. Troubleshooting with the cfgadm utility.
• Getting the status of FC devices using cfgadm:
cfgadm -al
Example output for above command
This output shows a media changer at LUN 0 for the 100000e0022229fa9 WWN, and tape and disk devices at LUN 0 for other WWNs. The devices are connected, have been configured, and are ready for use.
The cfgadm -al -o show_FCP_dev command can be used to show the devices for all LUNs of each Ap_Id
• Fixing a device with an "unusable" condition
If the condition field of a device in the cfgadm output is "unusable," the device is in a state in which the server cannot use it. This may have been caused by a hardware issue. In this case, do the following to resolve the issue:
ndash Resolve the hardware issue so the device is available to the server
ndash After the hardware issue has been resolved use the cfgadm utility to verify device status and to mend the status if necessary
bull Use cfgadm to get device status cfgadm -al
– For a device that is "unusable," use cfgadm to unconfigure the device and then reconfigure it. For example (this is an example only; your device WWN will be different):
cfgadm -c unconfigure c4::100000e0022286ec
cfgadm -f -c configure c4::100000e0022286ec
– Use cfgadm again to verify that the condition of the device is no longer "unusable": cfgadm -al
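The unconfigure/reconfigure sequence can be sketched as a dry-run helper that prints the command pair for a given Ap_Id. This is a sketch only; the Ap_Id below is an example (yours will carry your device's WWN), and on a live Solaris host you would run the printed commands as root after fixing the underlying hardware issue.

```shell
#!/bin/sh
# Sketch: print the cfgadm recovery command pair for an "unusable" Ap_Id.
recover_unusable_apid() {
    apid="$1"
    echo "cfgadm -c unconfigure $apid"
    echo "cfgadm -f -c configure $apid"
}

recover_unusable_apid "c4::100000e0022286ec"
```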
IBM AIX Server. AIX 6.1 (TL9), AIX 7.1 (TL3).
Installing the HBA device driver. Install the IBM (5729, 5735, 5273, 5758, 5759, 5773, 5774) HBA and restart the server.
1. Ensure that the HBA is recognized. At the shell prompt, type: lsdev -Cc adapter
There will be a line in the output similar to the following: fcs0 Available 1D-08 FC Adapter
If the adapter is not recognized, check that the correct HBA fileset (driver) is installed:
6228: lslpp -L | grep devices.pci.df1000f7
6239: lslpp -L | grep devices.pci.df1080f9
5716: lslpp -L | grep devices.pci.df1000fa
5759: lslpp -L | grep devices.pci.df1000fd
5773: lslpp -L | grep devices.pciex.df1000fe
5774: lslpp -L | grep devices.pciex.df1000fe
There will be lines in the lslpp output similar to the following for a 6239 HBA:
devices.pci.df1080f9.diag 5.1.0.1 C F PCI-X FC Adapter Device
devices.pci.df1080f9.rte 5.1.0.1 C F PCI-X FC Adapter Device
2. For information about the HBA, such as the WWN, execute the following command: lscfg -vl fcs0
The output will look similar to the following
3. To see the version of microcode (firmware) being run by the HBA, use the following command: lsmcode -c -d <device>. Microcode and other updates can be found at ibm.com/support/fixcentral.
4. After the HBA has successfully logged into the SAN fabric and the necessary zoning is configured, configure the HBA and devices within the fabric. At the prompt, type: cfgmgr -l <devicename> -v
Note: Running cfgmgr without a -l argument may generate a "devices.fcp.changer" error. This is a result of the cfgmgr device scan receiving a response from the autochanger device, for which AIX does not have a specific driver. Under these conditions the error message does not indicate a problem and is informational only.
5. Within the command, <devicename> is the name from the output of the lsdev command in step 1, such as fcs0.
6. To ensure all tape device files are available, at the prompt type: lsdev -HCc tape
7. By default, AIX creates tape devices with a fixed block length. To change the devices to variable block length, at the prompt type: chdev -l <tapedevice> -a block_size=0
8. Configuration of the tape devices (where tape devices are rmt0, rmt1, and so on) is complete.
Note: HPE LTO tape drives use the IBM host tape driver. When properly configured, a device listing will show the tape device as follows. For IBM native HBAs: Other FC SCSI Tape Drive. For non-IBM native HBAs: Other SCSI Tape Drive.
9. To configure Fast I/O Failure for Fibre Channel devices after link events in the SAN, change the fast fail parameter as in the example below: chdev -l fscsi<N> -a fc_err_recov=fast_fail. Within the command, the <N> in fscsi<N> is the same number as in the adapter name from the lsdev output in step 1 (e.g., fcs0 corresponds to fscsi0).
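Steps 7 and 9 can be sketched as a dry-run helper that prints the appropriate chdev command for each device name passed in. The device names below are examples only; on a live AIX host you would take them from the lsdev output and run the printed commands as root.

```shell
#!/bin/sh
# Sketch: emit the chdev tuning command for each AIX device name given:
# variable block length for tape devices (rmt*), fast I/O failure for
# FC SCSI protocol devices (fscsi*).
aix_tape_tuning_cmds() {
    for dev in "$@"; do
        case "$dev" in
            rmt*)   echo "chdev -l $dev -a block_size=0" ;;
            fscsi*) echo "chdev -l $dev -a fc_err_recov=fast_fail" ;;
        esac
    done
}

aix_tape_tuning_cmds rmt0 rmt1 fscsi0
```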
IBM AIX Server best practices. Persistent binding. To prevent device shifting after a host reboot, you can enable the dynamic tracking feature in earlier versions of AIX; AIX 7 and above should have this parameter enabled by default.
To enable dynamic tracking of FC devices, set the attribute dyntrk=yes, as shown in the example: chdev -l fscsi<N> -a dyntrk=yes. Within the command, the <N> in fscsi<N> is the same number as in the adapter name from the lsdev output in step 1 (e.g., fcs0 corresponds to fscsi0).
Note For an IBM Virtual IO Server (VIOS) running AIX logical partitions (LPARs) when using N-Port ID Virtualization (NPIV) with AIX LPARs it is strongly recommended to upgrade VIOS to version 2234 or greater
Virtual machine support. VM software is used for partitioning, consolidating, and managing computing resources, allowing multiple unmodified operating systems and their applications to run in VMs that share physical resources. Each VM represents a complete system with processors, memory, networking, storage, and BIOS. See table 5 for tape and disk support for virtualization products.
Table 5 VM tapeVTLNAS support
VM Product StoreEver Direct Attached SCSI
StoreEver Direct Attached SAS
StoreEver FC amp FCoE SAN StoreOnce VTL
StoreOnce iSCSI VTL
StoreOnce Catalyst over Ethernet (CoE)
StoreOnce Catalyst over Fibre Channel (CoFC)
StoreOnce NAS
Support Notes
Citrix XenServer Host No No No support statement for tape at this time
Citrix XenServer Guest VM
No Yes Yes No Yes For iSCSI tape devices the iSCSI Software Initiator must run in the VM operating system
D2D SAN shares must be accessed directly in the VM operating system not attached through the hypervisor
HPEVM Host Yes No Yes Yes Yes No Yes Tape drivemedia changer must not be attached to a guest VM while being used by the host
HPEVM Guest VM Yes No Yes Yes Yes No Yes Tape drivemedia changer must only be attached to a single guest VM at a time
Hyper-V Host Yes Yes Yes Yes Yes No Yes
Hyper-V Guest VM No No No Yes Yes No Yes For iSCSI tapemedia changer devices the iSCSI Software Initiator must run in the VM operating system
D2D NAS shares must be accessed directly in the VM operating system not attached through the hypervisor
VMware ESX Host Yes No No7 No7 No No No Direct attached SCSI tape drive is multi-target only (no multi-LUN) and must not be attached to a VM while being used by the host HPE does not test or validate direct attached SCSI tape on VMware ESXESXi hosts and does not provide support for tape drives and tape libraries in such configurations8
7 SAN tape devices (FC and iSCSI) are not supported directly by the VMware ESX host; vStorage API for Data Protection uses a backup server and VM software snapshots to allow FC and iSCSI backups.
8 For ESX 4.1 Server tape support, see the ESX 4.1 Fibre Channel SAN Configuration Guide. For ESX 5.0 Server tape support, see the ESXi 5.0 vSphere Storage Guide. For ESX 5.1 Server tape support, see the ESXi 5.1 vSphere Storage Guide. For ESX 5.5 Server tape support, see the ESXi 5.5 vSphere Storage Guide.
Table 5 VM tapeVTLNAS support (continued)
VMware Guest VM Yes No No Yes Yes No9 Yes Direct attached SCSI tape drive is multi-target only (no multi-LUN) and must only be attached to a single VM at a time
For iSCSI tapemedia changer devices the iSCSI Software Initiator must run in the VM operating system
D2D NAS shares must be accessed directly in the VM operating system not attached through the hypervisor
HPE does not test or validate direct attached SCSI tape on VMware guest VMs and does not provide support for tape drives and tape libraries in such configurations
VMware vStorage API for Data Protection
Yes Yes Yes Yes Yes No Yes FC SANs and shared tape devices are limited to a physical backup server
Note Be sure to do the following
• Refer to your data protection and archiving software documentation for supported VM backup methods.
• Refer to the VM documentation for supported backup devices.
VMware Server
Note: VMware does not support SAN-attached tape devices on ESXi. VMware vStorage APIs for Data Protection (VADP) with an off-host backup server can be used to manage SAN devices.
• VADP offloads backup responsibility from ESXi hosts to one or more dedicated backup servers, reducing the load on ESXi hosts. VADP provides full-image backup and restore capabilities for all VMs, and file-based backups for Microsoft Windows and Linux VMs.
• VMs can also be set up as backup servers with locally mapped NAS shares or iSCSI backup devices. Refer to your data protection and archiving software documentation for VM support.
• VMs can also be set up for LAN backup, the same as a regular client. Refer to your data protection and archiving software documentation for details.
• For recommendations on VMware VM backup and recovery to HPE StoreOnce Backup, go to hpe.com/storage/BURACompatibility, scroll down to Data Agile BURA Solution White Papers, then click on the VMware hyperlink across from White Papers—Databases and Virtual Machines to view the associated white papers.
9 Yes when using HPE StoreOnce Recovery Manager Central (RMC) only
HPE Integrity Virtual Machines Hewlett Packard Enterprise supports certifies and sells HPE Integrity Virtual Machines (HPEVM) Virtualization software on HPE Integrity servers
HPEVM is an application installed on an HPE-UX Server and allows multiple unmodified operating systems (HPE-UX Windows and Linux) and their applications to run in VMs that share physical resources
The HPE Virtual Server Environment (VSE) for HPE Integrity provides an automated infrastructure that can adapt in seconds with mission-critical reliability HPE VSE allows you to optimize server utilization in real time by creating virtual servers that can automatically grow and shrink based on business priorities and service
Note The HPE Integrity VM host and VMs do support FC SAN connected tape Virtual Library Systems (VLS) devices and HPE StoreOnce backup systems
• Off-host backups using HPE storage array hardware mirroring or snapshots can be used to shorten backup windows and offload resources required for backup.
• VMs can also be set up for LAN backup, the same as a regular client or media host. Refer to your data protection and archiving software documentation for details.
Microsoft Hyper-V
Note Hewlett Packard Enterprise does not test or support Hyper-V VMs with SAN or direct attach tape drives The Hyper-V host or a backup server can be used to manage such devices
• The Volume Shadow Copy Service (VSS) Hyper-V writer can be used to quiesce Windows VMs and create a snapshot on the Hyper-V host volume. VMs that cannot be quiesced can be placed in the Saved state before snapshot creation. The snapshots are then used for image or file backup of the VMs. If a VM was placed in the Saved state, Hyper-V will return the VM to its original state. Review your data protection and archiving software documentation for details.
• VMs can also be set up for LAN backup, the same as a regular client. Refer to your data protection and archiving software documentation for details.
• VMs can also be set up as backup servers with locally mapped NAS shares or iSCSI backup devices. Refer to your data protection and archiving software documentation for VM support.
copy Copyright 2015 Hewlett Packard Enterprise Development LP The information contained herein is subject to change without notice The only warranties for Hewlett Packard Enterprise products and services are set forth in the express warranty statements accompanying such products and services Nothing herein should be construed as constituting an additional warranty Hewlett Packard Enterprise shall not be liable for technical or editorial errors or omissions contained herein
Citrix is a registered trademark of Citrix Systems Inc andor one more of its subsidiaries and may be registered in the United States Patent and Trademark Office and in other countries Linux is the registered trademark of Linus Torvalds in the US and other countries McAfee is a trademark or registered trademark of McAfee Inc in the United States and other countries Microsoft Windows and Windows Server are either registered trademarks or trademarks of Microsoft Corporation in the United States andor other countries Oracle is a registered trademark of Oracle andor its affiliates Red Hat is a registered trademark of Red Hat Inc in the United States and other countries SAP HANA is the trademark or registered trademark of SAP SE in Germany and in several other countries VMware is a registered trademark or trademark of VMware Inc in the United States andor other jurisdictions
4AA5-7983ENW December 2015
Installing backup software and patches. After all components on the SAN are logged in and configured, the system is ready for installation of any supported backup software. Refer to the installation guide for your particular software package, or contact the vendor for detailed installation procedures and requirements. After installing the backup software, check with the software vendor for the latest updates and patches; if any exist for your backup software, install them now.
Learn more at hpe.com/storage/StoreEver
Native backup commands. Native backup commands (see table 2) are limited in their ability to handle complicated backups and restores in multi-host SANs. They are not guaranteed to provide robust error handling or increased performance throughput. Use of these commands in user-developed scripts is not recommended with HPE StoreEver Tape Libraries or HPE StoreOnce and HPE StoreAll disk-based backup solutions in shared storage environments.
Caution Native backup commands do not support SCSI reserverelease therefore using backup commands or scripts during backup or restore operations could result in data loss in an environment where the devices used for backups are shared
Table 2. Supported native commands

SUPPORTED UTILITIES    HPE-UX   SOLARIS   AIX   LINUX   WINDOWS
Tape drive commands
tar                    Yes      Yes       Yes   Yes     No
dd (dump)              Yes      Yes       Yes   Yes     No
pax                    Yes      Yes       Yes   Yes     No
mt                     Yes      Yes       Yes   Yes     No
make_tape_recovery     Yes      No        No    No      No
Library and auto-changer commands
mc                     Yes      No        No    No      No
mtx                    No       No        No    Yes     No
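For completeness, a minimal sketch of what such a native-command backup looks like on Linux follows; /dev/st0 is an assumed standalone tape device, and this pattern should not be used on shared SAN devices for the reasons above:

```shell
# Illustrative only: tar backup to a directly attached tape drive.
# /dev/st0 is an assumed device name; the block is skipped if no drive exists.
TAPE=${TAPE:-/dev/st0}
if [ -e "$TAPE" ]; then
  mt -f "$TAPE" rewind          # position the tape at the beginning
  tar -cvf "$TAPE" /etc/hosts   # write a small archive to tape
  mt -f "$TAPE" rewind
  tar -tvf "$TAPE"              # read back the archive listing to verify
fi
```

Because there is no SCSI reserve/release here, a second host issuing the same commands against the same drive could overwrite the tape mid-backup.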
Linear Tape File System
The Linear Tape File System (LTFS) makes tape self-describing, file-based, and easy to use, while allowing users to use standard file operations to access, manage, and share files on tape with an interface that behaves like a hard disk. In addition, LTFS provides the ability to share data across platforms, as you would with a USB drive or memory stick. LTFS is currently supported on Windows, Mac, and Linux. HPE StoreOpen Standalone and HPE StoreOpen Automation are a set of utilities that provide easy installation, configuration, and management of a tape drive or library for use with LTFS. To use HPE StoreOpen, simply connect your tape drive or tape library to a supported host, following the information noted in the respective sections within this guide, prior to the installation of the HPE StoreOpen software. Information and download links for native LTFS drivers, source code, HPE StoreOpen Standalone, and HPE StoreOpen Automation can be found at hpe.com/storage/LTFS
Software utilities that may disrupt solution connectivity
Software utilities common to SAN environments can interfere with backup and restore operations. These utilities include system management agents, monitoring software, and tape drive and system configuration utilities. A list of known software utilities, and the operating systems on which they are found, is shown in table 3.
Caution: Use of these software utilities during backup or restore operations could result in data loss.
Table 3. HBA and software utilities

HBA configuration utilities
Windows:  Emulex OneCommand Manager (OCM), HBAnyware; QLogic QConvergeConsole (QCC); QLogic Host Connectivity Manager (HCM); Broadcom Advanced Control Suite 3 (BACS3)
Linux:    Emulex OCM, HBAnyware; QCC; QLogic HCM; BACS3
Solaris:  Emulex OCM, HBAnyware; QCC

Other software utilities
Windows:  HPE Library and Tape Tools utility; HPE Systems Insight Manager (SIM) management agents; Windows Server® backup; Removable Storage Manager (RSM)6
HPE-UX:   HPE Library and Tape Tools utility; System Administration Manager (SAM) (HPE-UX 11.23); System Management Homepage (SMH) (HPE-UX 11.31)
Linux:    HPE Library and Tape Tools utility; SCSI Generic (SG) commands
Solaris:  HPE Library and Tape Tools utility
AIX:      System Management Interface Tool (SMIT)
FC/FCoE switch zoning recommendations
Due to the complexities of multi-hosting tape devices on SANs, Hewlett Packard Enterprise strongly recommends using switch zoning tools to keep the backup, restore, and archive environment simple and less susceptible to the effects of "chatty", changing, or problematic SANs. Zoning provides a way for servers, disk arrays, and tape controllers to see only the hosts and targets they need to see and use.
The benefits of zoning include, but are not limited to:
• The potential to greatly reduce target and logical unit number (LUN) shifting
• Reducing stress on backup devices by polling agents
• Reducing the time it takes to debug and resolve anomalies in the backup, restore, and archive environment
• Reducing the potential for conflict with untested third-party products
6 Removable Storage Manager is no longer available as of Windows 7 and Windows 2008 R2
Zoning may not always be required for configurations that are small or simple (i.e., a single switch or a single inter-switch link (ISL)). Zoning can be helpful in larger SANs for simplifying device discovery and reducing chatter between devices. Hewlett Packard Enterprise recommends the following for determining how and when to use zoning:
• Use zoning by HBA port. Zoning by HBA port is implemented by creating a specific zone for each server or host by World Wide Port Name (WWPN) and adding only those storage elements to be utilized by that host. Zoning by HBA port prevents a server from detecting any other devices or servers on the SAN, and it simplifies the device discovery process.
• Disk and tape on the same HBA is supported. For larger SAN environments, it is recommended to also add storage-centric zones, with disk and backup targets separated into overlapping zones. See figure 1 and figure 2 below.
• FC zoning can be implemented using physical switch port numbers, WWN IDs, or user-defined switch aliases. It is important to note that physical ports and aliases can change due to recabling or switch configuration restores, but WWN IDs do not. Hewlett Packard Enterprise recommends zoning using WWN IDs.
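As an illustration of WWN-based zoning by HBA port, a sketch using Brocade FOS-style commands follows; the zone name, configuration name, and WWPNs are all hypothetical:

```
zonecreate "host1_tape_z", "10:00:00:00:c9:aa:bb:cc; 50:01:10:a0:00:12:34:56"
cfgcreate "backup_cfg", "host1_tape_z"
cfgsave
cfgenable "backup_cfg"
```

The first WWPN represents the host HBA port and the second the tape drive port; other switch vendors provide equivalent WWN-based zoning commands.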
The figures below represent example configurations but are not exhaustive
Figure 1. Storage-centric zoning, same HBA port (overlapping zones)
Figure 2. Storage-centric zoning, redundant paths (also applies to dual-port HBAs and tape drives)
Configuration and operating system details
Windows Server
Windows 2008, Windows 2008 R2, Windows 2012, and Windows 2012 R2
Installing the HBA device driver
All HPE ProLiant server software, firmware, and drivers for Windows servers can be updated using the latest HPE Service Pack for ProLiant (SPP) from the HPE support website:
1 From an Internet browser go to hpe.com
2 Click on Support
3 Under Product Support click HPE Servers Storage and Networking
4 In the Enter product name or number box enter Service Pack for ProLiant
5 Click Get drivers software amp firmware
6 Select the Windows Server operating system version that is installed on the ProLiant system that will be updated
7 Expand Application (Entitlement Required)—System Management then select the HPE Service Pack for ProLiant (American, International) hyperlink
8 Below the details for the software you should notice several tabs Select the Release Notes tab then scroll to verify if there are any Upgrade Requirements or to view additional information
9 Select the Installation Instructions tab to verify how to install the SPP Be sure to copy or save the installation instructions
Note A hyperlink to the HPE Service Pack for ProLiant Release Notes is provided within the Release Notes tab The HPE Service Pack for ProLiant Release Notes provide detailed instructions regarding the SPP including a summary of changes compatibility details for migrating from an older version of the SPP supported operating systems requirements component prerequisites deployment instructions and known limitations
10 Click on the Obtain software hyperlink above the various tabs to download the Service Pack for ProLiant to your server
Note To download the HPE Service Pack for ProLiant you must have
1 An HPE Passport account (a sign-in link is provided)
2 Either a warranty HPE Care Pack or support agreement linked to your HPE Support Center profile
Click on the various links that are provided for more information on how warranties HPE Care Packs and support agreements enable access to select downloads or site functions
11 Booting your Windows Server to the SPP (offline mode) will allow you to upgrade firmware for any or all components that are flagged as requiring an upgrade, while booting to the operating system and then running the SPP (online mode) will allow you to install any or all Hewlett Packard Enterprise related drivers and software packages
Note Please refer to the HPE Service Pack for ProLiant Release Notes which are referenced above if any issues are encountered when installing the SPP Specifically review the sections Deployment Instructions and Components Changes
12 A reboot might be required following the SPP installation
To manually install the latest HPE-supported Brocade Emulex or QLogic driver kit from the HPE support website
1 From an Internet browser go to hpe.com
2 Click on Support
3 Under Product Support click HPE Servers Storage and Networking
4 In the Enter product name or number box enter your HBA model (for example AP770A AJ763A AJ764A CN1000Q SN1000Q CN1100E SN1100E or NC553m) and click Go
5 Click Get drivers software amp firmware
6 Select the Windows Server operating system version that is installed on the ProLiant system in which the HBA is installed
7 Expand Driver—Storage Fibre Channel and click on the appropriate driver hyperlink (if more than one version of the driver is listed, verify the latest supported version listed in the latest HPE Data Agile BURA Compatibility Matrix)
8 Below the details for the driver you should notice several tabs Select the Release Notes tab then scroll to verify if there are any Upgrade Requirements Supported Devices and Features or to view additional information
9 Select the Installation Instructions tab to verify how to install the driver Be sure to copy or save the installation instructions
10 Click on the Download tab to copy the file to your server
11 A reboot might be required following the driver installation
Installing and configuring Microsoft iSCSI Initiator Microsoft iSCSI Initiator is installed natively on Windows Server 2008 Windows Server 2008 R2 Windows Server 2012 and Windows Server 2012 R2 On these operating systems no installation steps are required
To connect to an iSCSI target device by using Quick Connect
1 Click Start type iSCSI in Start Search and then under Programs click on iSCSI Initiator
2 On the User Account Control page click Continue
3 If this is the first time that you have launched Microsoft iSCSI Initiator you receive a prompt that says the Microsoft iSCSI service is not running You must start the service for Microsoft iSCSI Initiator to run correctly Click on Yes to start the service The Microsoft iSCSI Initiator Properties dialog box opens and the Targets tab is displayed
4 On the Targets tab type the name or the IP address of the target device in the Quick Connect text box and then click Quick Connect The Quick Connect dialog box is displayed
5 If multiple targets are available at the target portal that is specified a list is displayed Click the desired target and then click Connect
6 Click Done
To connect to an iSCSI target by using advanced settings
1 Click Start type iSCSI in Start Search and then under Programs click on iSCSI Initiator
2 On the User Account Control page click Continue
3 If this is the first time that you have launched Microsoft iSCSI Initiator you receive a prompt that says the Microsoft iSCSI service is not running You must start the service for Microsoft iSCSI Initiator to run correctly Click on Yes to start the service The Microsoft iSCSI Initiator Properties dialog box opens and the Targets tab is displayed
4 Click the Discovery tab
5 To add the target portal click Discover Portal and then in the Discover Portal dialog box type the IP address or name of the target portal to connect to If desired you can also type an alternate TCP port to be used for the connection
6 Click OK
Installing the HPE StoreEver Tape drivers
Both the HPE tape and HPE changer drivers for Windows must be installed before the advanced path failover drivers are installed. The tape and changer drivers bundle can be downloaded and then installed as follows:
1 Go to hpe.com/storage/tapecompatibility
2 Under Tape tools select HPE StoreEver Tape Drivers
3 A webpage will open with RECOMMENDED HPE StoreEver Tape Drivers for Windows displayed
4 Below the details for the driver you should notice several tabs Select the Release Notes tab then scroll to verify if there are any Upgrade Requirements or to view additional information
5 Select the Installation Instructions tab to verify how to install the driver Be sure to copy or save the installation instructions
6 Click on the Download tab then save the file
7 Follow the installation instructions from step 5 to install the HPE tape and HPE changer drivers
8 After installation of the tape and changer drivers use Windows Device Manager to confirm that all of the configured paths are accessible to the operating system If the expected number of paths are not available check the host and SAN configuration After all of the expected paths are available to the host the advanced path failover drivers can be installed
Installing the HPE StoreEver Tape advanced path failover drivers
Windows (2008 R2, 2012, and 2012 R2)
1 Go to hpe.com/support/storage
2 Select Tape Storage
3 Click Enterprise Class Tape Libraries (for ESL G3) or Tape Libraries (for MSL6480)
4 Click HPE StoreEver ESL G3 Tape Libraries or HPE StoreEver MSL6480 Tape Library
5 With the Download options tab selected click Get drivers software amp firmware
6 For the ESL G3 select your product For MSL6480 skip to the next step
7 Under Operating systems select OS Independent
8 Click Driver—Storage Tape
9 Click Obtain software for the HPE StoreEver High Availability Failover Driver for your Windows operating system
10 Click Select to continue An HPE Passport account is required
11 After logging in using your HPE Passport complete the required fields then read and accept the software license agreement Click Next
12 On the following page select the Download tab for the HPE StoreEver High Availability Failover Driver for the version of Windows that is installed on your server
13 If you saved the file double-click on the file to launch the installer for the Tape Upper Bus Storage Filter driver
14 Restart when requested
15 After the system restarts the installer will continue installing the Tape Multi-Path Intermediate Class driver The installation process creates a directory C:\Program Files\Hewlett-Packard\Failover
For more detailed information related to installing the HPE StoreEver Tape advanced path failover drivers for Windows or the HPE StoreEver Tape drivers for Windows refer to the HPE StoreEver Tape Libraries Failover User Guide—specifically the section titled Installing and using Windows advanced path failover drivers
Windows Server best practices
Persistent binding
Target and LUN shifting can occur with Windows operating systems when disk or tape devices are connected or disconnected, a device is busy during discovery, or a device failure occurs, causing that device to no longer be seen on the SAN. Hewlett Packard Enterprise strongly recommends using zoning by HBA port to resolve target and LUN shifting. Refer to the earlier section FC/FCoE switch zoning recommendations for more on zoning by HBA port.
The method by which the Windows operating system enumerates devices is the cause of most target and LUN shifting. Windows enumerates devices as they are discovered during a scan sequence. They are enumerated with device handles such as TAPE0, TAPE1, and so on. The Windows device scan sequence goes in the order of bus, target, and LUN:
• Bus is the HBA PCI slot
• Target is representative of a WWN
• LUN is representative of a device behind the WWN
The order of discovery is:
• The lowest bus (bus 0)
• Target 0 on bus 0
• The LUNs on target 0
• Target 1, and so on, until all targets connected to that HBA are discovered
The process continues on to the next HBA and its targets and LUNs. A common cause of device shifting is a busy tape device. A busy tape device cannot respond in time for Windows to enumerate it. The device is essentially skipped in the enumeration sequence, thus shifting all other device numbers.
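The scan order described above can be sketched as a simple nested loop; the two-value ranges are arbitrary placeholders for however many buses, targets, and LUNs are actually present:

```shell
# Sketch of the Windows enumeration order: every LUN on a target is
# enumerated before the next target, and every target before the next bus.
for bus in 0 1; do
  for target in 0 1; do
    for lun in 0 1; do
      echo "bus=$bus target=$target lun=$lun"
    done
  done
done
```

A device that fails to respond at, say, target 0 LUN 1 drops out of this sequence and every later device handle shifts up by one, which is exactly the busy-tape-drive case described above.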
Note Emulex OneCommand Manager Application Kit and QLogic QConvergeConsole Utility both have proprietary persistent binding options Review the appropriate vendor documentation for details
Data protection and archiving software can also communicate with a tape device by using the Windows device name As noted the device name may shift and cause a problem for the data protection and archiving software Some data protection and archiving software monitors for this condition and will adjust accordingly Other data protection and archiving software must wait for a server reboot and subsequently scan for devices Alternatively the data protection and archiving software must be manually reconfigured to match the current device list If your data protection and archiving software requires persistent device mapping use the softwarersquos device configuration wizard to ensure proper configuration
Red Hat and SUSE Linux Server
RHEL 5 Update 11 (AS/ES/WS), RHEL 6 Update 6 (AS/ES/WS), RHEL 7 (AS/ES/WS), SLES 11 SP3 (x86/x64), SLES 12
Note Hewlett Packard Enterprise recommends installing the kernel development option (source code) when installing any Linux server Availability of source code ensures the ability to install additional device support software that will be compiled into the kernel
Installing the HBA drivers All HPE ProLiant server software firmware and drivers can be updated using the latest SPP from the HPE support website
1 From an Internet browser go to hpe.com
2 Click on Support
3 Under Product Support click HPE Servers Storage and Networking
4 In the Enter product name or number box enter Service Pack for ProLiant
5 Click Get drivers software amp firmware
6 Select the Linux Server operating system version that is installed on the ProLiant system that will be updated
7 Under Application (Entitlement Required)—System Management select the HPE Service Pack for ProLiant (American, International) hyperlink
8 Below the details for the software you should notice several tabs Select the Release Notes tab then scroll to verify if there are any Upgrade Requirements or to view additional information
9 Select the Installation Instructions tab to verify how to install the SPP Be sure to copy or save the installation instructions
Note A hyperlink to the HPE Service Pack for ProLiant Release Notes is provided within the Release Notes tab The HPE Service Pack for ProLiant Release Notes provide detailed instructions regarding the SPP including a summary of changes compatibility details for migrating from an older version of the SPP supported operating systems requirements component prerequisites deployment options and known limitations
10 Click on the Obtain software hyperlink above the various tabs to download the Service Pack for ProLiant to your server
Note To download the HPE Service Pack for ProLiant you must have
1 An HPE Passport account (a sign-in link is provided)
2 Either a warranty HPE Care Pack or support agreement linked to your HPE Support Center profile
Click on the various links that are provided for more information on how warranties HPE Care Packs and support enable access to select downloads or site functions
11 Booting your server to the SPP (offline mode) will allow you to upgrade firmware for any or all components that are flagged as requiring an upgrade while booting to the OS then running the SPP (online mode) will allow you to install any or all Hewlett Packard Enterprise related drivers and software packages
Note Please refer to the HPE Service Pack for ProLiant Release Notes which are referenced above if any issues are encountered when installing the SPP Specifically review the sections Deployment Instructions and Components Changes
12 A reboot might be required following the SPP installation
To manually install the latest HPE-supported Brocade Emulex or QLogic driver kit from the HPE support website
1 From an Internet browser go to hpe.com
2 Click on Support
3 Under Product Support click HPE Servers Storage and Networking
4 In the Enter product name or number box enter your HBA model (for example AP770A AJ763A AJ764A CN1000Q SN1000Q CN1100E SN1100E or NC553m) and click Go
5 Click Get drivers software amp firmware
6 Select the Linux Server operating system version that is installed on the ProLiant system in which the HBA is installed
7 Expand Driver—Storage Fibre Channel then click on the appropriate driver hyperlink (if more than one version of the driver is listed, verify the latest supported version listed in the latest HPE Data Agile BURA Compatibility Matrix)
8 Below the details for the driver you should notice several tabs Select the Release Notes tab then scroll to verify if there are any Upgrade Requirements or to view additional information
9 Select the Installation Instructions tab to verify how to install the driver Be sure to copy or save the installation instructions
10 Click on the Download tab to copy the file to your server
11 A reboot might be required following the driver installation
Installing the Linux OPEN-iSCSI module
You can install and use the iscsi-initiator-utils package (Red Hat®) or the open-iscsi module (SUSE). Download and install either of the packages using your distribution's package manager (yum or YaST, for example). Detailed instructions for iscsiadm can be found in the iscsiadm man pages.
Prior to discovering available iSCSI target devices on an HPE Storage System from a Linux server, the target requires the Linux server iSCSI initiator name. This name is found in the /etc/iscsi/initiatorname.iscsi file.
Once the iSCSI initiator name has been determined, discover the available iSCSI target devices on an HPE Storage System by typing the following command, where x.x.x.x is the IP address of the HPE Storage System:
iscsiadm --mode discovery --type sendtargets --portal x.x.x.x
To connect to the HPE Storage System targets, type the following command for each discovered target, where target_name is the name returned by the iscsiadm discovery:
iscsiadm --mode node -T target_name --login --portal x.x.x.x
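The two commands can be combined into a small script that logs in to every discovered target; the portal address is a hypothetical placeholder, the parsing assumes the usual "address:port,tpgt target-iqn" discovery output format, and the iscsiadm calls are guarded behind RUN_ISCSI so the sketch only touches the network when explicitly requested:

```shell
# parse_targets: extract the target IQN (second field) from discovery output
# lines of the form "10.0.0.5:3260,1 iqn.2015-01.com.example:tgt1".
parse_targets() { awk '{print $2}'; }

PORTAL=${PORTAL:-10.0.0.5}   # hypothetical HPE Storage System IP address
if [ -n "${RUN_ISCSI:-}" ] && command -v iscsiadm >/dev/null 2>&1; then
  iscsiadm --mode discovery --type sendtargets --portal "$PORTAL" |
    parse_targets |
    while read -r target; do
      iscsiadm --mode node -T "$target" --login --portal "$PORTAL"
    done
fi
```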
Storage HBAs with Linux servers
Hewlett Packard Enterprise supports the Linux in-box driver (the driver supplied with the operating system distribution). However, not all hardware configurations support the in-box driver. To determine if your configuration supports the in-box driver, see the HBAs, CNAs, FlexFabric Adapters, and Server LOMs Support Matrix (Linux, Citrix, VMware, and Windows), which is available on the HPE SPOCK website. You must sign up for an HPE Passport to access SPOCK.
Whether you are using the in-box drivers or the out-of-box drivers Hewlett Packard Enterprise recommends that you install the HPE Fibre Channel Enablement Kit because it provides additional libraries and configuration utilities to enable HPE Fibre Channel Storage Arrays to work properly with Linux
Note If you are using any HPE management applications you need the HBA API libraries that come with the HPE-fc-enablement RPM
Downloading and installing the Fibre Channel Enablement Kit for Linux
Note There has been a change to the enablement kits released after 29 April 2014: they are now vendor-specific kits. Hewlett Packard Enterprise recommends that you uninstall any previous kits prior to installing the latest version of the enablement kit.
1 From an Internet browser go to hpecom
2 Click on Support
3 Under Product Support click HPE Servers Storage and Networking
4 In the Enter product name or number box enter your HBA model (for example AP770A AJ763A AJ764A CN1000Q CN1100E or NC553m) and click Go
5 Click Get drivers software amp firmware
6 Select the Linux Server operating system version of the ProLiant system in which the HBA is installed
7 Select the Software—Storage Controllers—FC HBA hyperlink and click on the HPE Fibre Channel Enablement Kit for Linux (American, International)
8 Below the details for the driver you should notice several tabs Select the Release Notes tab then scroll to verify if there are any Upgrade Requirements or to view additional information
9 Select the Installation Instructions tab to verify how to install the HPE Fibre Channel Enablement Kit for Linux Be sure to copy or save the installation instructions
10 Click on the Download tab to copy the RPM file to your server
11 Browse to the directory that you downloaded the RPM to
12 Follow the installation instructions that you copied or saved in step 9
13 A reboot is required after the installation for the updates to take effect and for hardware stability to be maintained
14 Verify that the host has successfully discovered all the expected devices—tape drives, library robotic devices, and disk-based backup devices—using one of the following methods:
– Review the devices listed from running the command cat /proc/scsi/scsi
– Review the output from the sg_inq command (which requires that sg3_utils is installed) for any of the /dev/sg devices listed in the output of the sg_map command. See figure 3 for an example.
Figure 3 Verifying devices using sg_map and sg_inq commands
Installing the HPE StoreEver Tape advanced path failover drivers
Red Hat Enterprise Linux Server 6.2, 6.3, 6.4, 6.5, and 6.6 (x86_64)
The advanced path failover drivers for Linux replace the normal SCSI Tape and SCSI Generic drivers. For devices that do not support advanced path failover, the drivers pass all SCSI commands through the same code path that is followed when the standard drivers are loaded; for devices that do support failover, commands are routed through the new PFO driver.
1 Go to hpecomsupportstorage
2 Select Tape Storage
3 Click Enterprise Class Tape Libraries (for ESL G3) or Tape Libraries (for MSL6480)
4 Click HPE StoreEver ESL G3 Tape Libraries or HPE StoreEver MSL6480 Tape Library
5 In the Download options tab click Get drivers software amp firmware
6 For the ESL G3 select your product For MSL6480 skip to the next step
7 Under Operating systems select OS Independent
8 Expand Driver—Storage Tape then select the appropriate driver hyperlink
9 Click Obtain software for the HPE StoreEver High Availability Failover Driver for your operating system
10 Click on Select An HPE Passport account (a sign-in link is provided) is required
11 After logging in using your HPE Passport complete the required fields then read and accept the software license agreement Click Next
12 On the following page select the Download tab for the HPE StoreEver High Availability Failover Driver for the version of Red Hat that is installed on your server
13 To install the drivers run the following command
rpm -ivh <filename>.rpm
14 In some cases the server will need to be rebooted to complete the installation Check the instructions provided by the RPM file output and reboot the server if requested
15 The driver revision number indicates the build date of the driver and can be viewed by running:
cat /proc/scsi/sg/version
16 You can view the status of a device that is controlled by the failover driver by reading a file in the /sys file system For example to see the path status for /dev/sg3:
cat /sys/class/pfo/pfo3/paths
Enabling advanced path failover on a device while the driver is running
If a device has the advanced path failover feature disabled when advanced path failover is enabled, the device will reset itself, removing the old /dev file. When the device comes back up, it will be recognized as an advanced path failover device and will then operate normally as one. It may not have the same /dev file name as before the change.
Disabling advanced path failover on a device while the driver is running
Disabling advanced path failover while a device is running is not recommended, because the paths will not be cleanly removed and then reassociated. If advanced path failover is disabled on any device, the Linux server will need to be rebooted. When possible:
1 Power down the Linux server cleanly
2 Disable advanced path failover on the device
3 Boot the Linux server
For more detailed information related to installing the HPE StoreEver Tape advanced path failover drivers for Red Hat Linux Servers refer to the HPE StoreEver Tape Libraries Failover User Guide—specifically the section titled Installing and using Linux advanced path failover drivers
Additional SG device files
In most environments the default number of SG device files is sufficient to support all of the required devices. In larger SAN environments, if the default number of SG device files is fewer than the combined total of disk and tape devices being allocated to the server, then additional device files need to be created. SG device files are preferable to the standard tape (st) device files, whose SCSI timeout values may not be long enough for some tape operations.
To create additional SG device files, run the following: mknod /dev/sgX c 21 X
X signifies the number of the device file that does not already exist. For additional command options, see the mknod man page.
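A small helper (hypothetical, shown only for illustration) can find the lowest unused sg number before running mknod; the directory argument exists purely so the logic can be exercised outside /dev:

```shell
# next_sg: print the lowest N for which $dir/sgN does not yet exist.
next_sg() {
  dir=${1:-/dev}
  n=0
  while [ -e "$dir/sg$n" ]; do n=$((n+1)); done
  echo "$n"
}

# As root, the corresponding device node would then be created with:
#   N=$(next_sg) && mknod "/dev/sg$N" c 21 "$N"
next_sg /dev >/dev/null
```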
Red Hat and SUSE Linux Server best practices
Rewind commands being issued by rebooted Linux hosts
Device discovery that occurs as part of a normal Linux server boot operation can cause a SCSI rewind command to be issued to all attached tape drives. If the data protection and archiving software does not employ SCSI reserve/release and the rewind command is received while the tape drive is busy writing, the result is a corrupted tape header and an unusable piece of backup media.
This issue could manifest itself in several ways
• A failed verify operation
• A failed restore operation
• The inability to mount a tape and read the tape header
If a backup verification is not completed the normal backup process might not detect that an issue exists
Tape devices not discovered and configured across server reboots
Tape drives can disappear from Linux servers after the host reboots when using older versions of the HBA drivers. Adding the line "hp_rescan -a" to /etc/rc.d/rc.local resolves the issue. The hp_rescan utility was previously included and installed with older versions of the HPE Linux FCoE/FC Driver Kit.
Note The latest versions of the Fibre Channel Enablement for Linux kits no longer include the HPE fibreutils package, which contains the hp_rescan utility. The fibreutils package can be manually downloaded and installed by following steps 1–6 from the section Downloading and installing the Fibre Channel Enablement Kit for Linux. For step 5, select Software—Storage Controllers—FC HBA.
This issue, which affects Red Hat installations and intermittently some SUSE Linux installations, is understood to be an issue with the mid-layer SCSI driver and its interaction with SCSI-2 tape automation products. The permanent resolution is to upgrade to the latest FC driver kit.
Enable iSCSI target devices to remain persistent across system reboots
To enable the iSCSI target devices to remain persistent across system reboots, the open-iscsi service must be configured to run at system startup. This can be done by issuing the following commands:
For Red Hat 7 and SUSE 12:
systemctl enable iscsid.service
systemctl restart iscsid.service
For earlier versions of Red Hat: chkconfig iscsi on
For earlier versions of SUSE: chkconfig open-iscsi on
To verify that this configuration change has been accepted, run the following command:
For Red Hat 7 and SUSE 12:
systemctl -a | grep iscsi
iscsi-shutdown.service   error    inactive   dead      Logout off all iSCSI sessions on shutdown
iscsi.service            loaded   inactive   dead      Login and scanning of iSCSI devices
iscsid.service           loaded   active     running   Open-iSCSI
iscsiuio.service         loaded   active     running   iSCSI UserSpace IO driver
iscsid.socket            loaded   active     running   Open-iSCSI iscsid Socket
iscsiuio.socket          loaded   active     running   Open-iSCSI iscsiuio Socket
For earlier versions of Red Hat:
chkconfig --list iscsi
iscsi 0:off 1:off 2:off 3:on 4:off 5:on 6:off
For SUSE servers:
chkconfig --list open-iscsi
open-iscsi 0:off 1:off 2:off 3:on 4:off 5:on 6:off
LUNs shifting after reboot
The Linux 2.6 kernel and later enhanced the management of attached devices through the introduction of udev. The udev device manager provides users with a persistent naming process for all devices across reboots. For details on how to configure udev, refer to the appropriate Linux distribution documentation.
If your data protection and archiving software requires persistent device mapping use the softwarersquos device configuration wizard to ensure proper configuration
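Most current distributions already ship a udev rule (60-persistent-storage-tape.rules) that creates stable /dev/tape/by-id symlinks, which is usually the simplest persistent path to hand to backup software. If a custom name is preferred, a small rule can be added. The sketch below is illustrative only: the serial string is a placeholder to be replaced with the value reported by udevadm on your system.

```
# /etc/udev/rules.d/61-tape-backup.rules (illustrative sketch)
# "XYZ123" is a placeholder; find the real value with:
#   udevadm info --query=property --name=/dev/st0 | grep ID_SERIAL
KERNEL=="st*", ENV{ID_SERIAL}=="XYZ123", SYMLINK+="tape_backup0"
```

After reloading the rules (udevadm control --reload; udevadm trigger), /dev/tape_backup0 points at the same drive across reboots, regardless of which /dev/st number the kernel assigns.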
Technical white paper Page 23
Recommended changes to queue depth and timeout values
Changes to queue depth and timeout values are recommended when operating HPE StoreEver ESL G3 Tape Libraries and HPE StoreEver MSL Tape Libraries with Linux-based hosts. Recommended changes are as follows:
• Queue depth
The queue depth when operating the MSL6480 or MCB Version 1 ESL G3 libraries should be set to one (1), as there is only one active robot to complete Move Medium commands. With dual-robot MCB Version 2 ESL G3 libraries, the queue depth should be set to two (2), as the library has two active robots that can complete Move Medium commands.
HPE LTO drives can handle command queues of four or five commands, but if hosts continue to send commands past that amount, the drive (or the library hosted by that drive) will start to return status messages saying that the queue is full and that the host should wait 500 ms. If the host does not stop sending commands at this point, the delays in returning command status can become long enough that the drive appears hung. Take care to set the queue depth correctly to avoid this scenario, preferably by using the recommended queue depths provided above.
With Linux-based hosts, this command shows the queue depth currently set for each SCSI generic device:
find /sys/class/scsi_generic/*/device/queue_depth -exec grep -H "" {} \;
If required, the sg_inq command can help correlate the SCSI generic mapping to specific devices:
sg_inq /dev/sg<N>
Note: <N> is the sg number shown in the output from the previous command.
• Timeouts
Most applications increase the timeout value for motion commands but continue to rely on the default timeout value for Read Element Status commands. This is of particular concern in larger partitioned libraries, where multiple Read Element Status commands to the different partitions may be outstanding and the default timeout value is reached before a response is provided. Depending on whether udev rules are in effect, the default timeout value on Linux-based hosts tends to be either thirty (30) or sixty (60) seconds. Given all of the above, it is recommended that the default timeout value be changed to twenty minutes on all Linux-based hosts operating HPE LTO drives in HPE StoreEver ESL G3 Tape Libraries and HPE StoreEver MSL Tape Libraries, to allow multiple commands to complete successfully without hitting the default timeout value.
With Linux-based hosts, this command shows the default timeout value currently set:
find /sys/class/scsi_generic/*/device/timeout -exec grep -H "" {} \;
If required, the sg_inq command can help correlate the SCSI generic mapping to specific devices:
sg_inq /dev/sg<N>
Note: <N> is the sg number shown in the output from the previous command.
The detailed procedure for making the recommended changes to the command queuing and default timeout values for Linux-based hosts can be viewed in the following Engineering Advisory: HPE StoreEver ESL G3 Tape Libraries, MSL Tape Libraries, and 1/8 G2 Autoloaders - Recommended Changes to Queue Depth and Timeout Values With Linux-Based Hosts.
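The two tunings above can be applied in one pass over sysfs. The loop below is a minimal sketch, assuming the standard /sys/class/scsi_generic layout; the sysfs root is a parameter only so the function can be exercised against a test directory tree, and the queue-depth argument should be 1 or 2 per the robot-count guidance above.

```shell
#!/bin/sh
# Sketch: write the recommended queue depth and the 20-minute (1200 s)
# timeout to every SCSI generic device under the given sysfs root.
# Usage: set_tape_tuning <sysfs-root> <queue-depth> <timeout-seconds>
set_tape_tuning() {
  root=$1 qdepth=$2 timeout=$3
  for dev in "$root"/class/scsi_generic/sg*/device; do
    [ -d "$dev" ] || continue
    echo "$qdepth"  > "$dev/queue_depth"   # 1 single-robot, 2 dual-robot ESL G3
    echo "$timeout" > "$dev/timeout"       # default command timeout, in seconds
  done
}
# On a live host, run as root: set_tape_tuning /sys 1 1200
```

Afterwards, the find/grep commands shown above can confirm the new values.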
HP-UX Server
Installing HBA drivers in the kernel
HP-UX 11i v2 (11.23, IA-64)
1. The drivers schgr, sctl, and stape must all be installed in the kernel. To see if these drivers are installed, enter the following command:
/usr/sbin/kcmodule schgr sctl stape
The following example shows output from kcmodule where the stape driver is not installed:
Module   State    Cause
schgr    static   explicit
sctl     static   depend
stape    unused
If one or more of the above drivers is in the unused state, they must be installed in the kernel. If they are all installed (static state), proceed to the next section, Final host configurations.
2. Use kcmodule to install modules in the kernel. For example, to install the stape module, run the following command:
/usr/sbin/kcmodule stape=static
Enter yes to back up the current kernel configuration file and initiate the new kernel build.
3. Reboot the server to activate the new kernel:
cd /
/usr/bin/shutdown -r now
HP-UX 11i v3 (11.31, IA-64)
1. The drivers schgr, sctl, stape, eschgr, esctl, and estape must all be installed in the kernel. To see if these drivers are installed, enter the following command:
/usr/sbin/kcmodule sctl esctl schgr eschgr stape estape
The following example shows output from kcmodule where the stape driver is not installed:
Module   State    Cause
sctl     static   best
esctl    static   depend
schgr    static   best
eschgr   static   best
stape    unused
estape   static   best
If one or more of the above drivers is in the unused state, they must be installed in the kernel. If they are all installed (static state), proceed to the next section, Final host configurations.
2. Use kcmodule to install modules in the kernel. For example, to install the stape module, run the following command:
/usr/sbin/kcmodule stape=static
Enter yes to back up the current kernel configuration file and initiate the new kernel build.
3. Reboot the server to activate the new kernel:
cd /
/usr/bin/shutdown -r now
Installing the HP-UX iSCSI Software Initiator
The iSCSI Software Initiator is located at the HPE Software Depot.
1. Go to software.hp.com.
2. Enter iSCSI Software Initiator in the Search Software Depot box located on the upper right side of the website.
3. When the search results show iSCSI Software Initiator, click Select. An HPE Passport account (a sign-in link is provided) is required.
4. After logging in using your HPE Passport, complete the required fields, scroll down, then read and accept the software license agreement for the order. Click Next.
5. Under Documentation, click the Download tab for the Installation Instructions to download instructions for using the Software Distributor tool to install the iSCSI Software Initiator.
6. Under Software, click the Download tab for the iSCSI Software Initiator version that you would like to download.
7. After installing the iSCSI Software Initiator and rebooting, you can verify that the installation was successful by running the following command:
swlist iSCSI-00
If the HP-UX iSCSI Software Initiator is installed correctly, the output will be:
For HP-UX 11.23:
# Initializing...
# Contacting target "localhost"...
# Target: localhost
  iSCSI-00            B.11.23.03e  HP-UX iSCSI Software Initiator
  iSCSI-00.ISCSI-SWD  B.11.23.03e  HP-UX iSCSI Software Initiator
For HP-UX 11.31:
# Initializing...
# Contacting target "localhost"...
# Target: localhost
  iSCSI-00            B.11.31.01  HP-UX iSCSI Software Initiator
  iSCSI-00.ISCSI-SWD  B.11.31.01  HP-UX iSCSI Software Initiator
Final host configurations
1. Run ioscan to verify that the host detects the tape devices:
ioscan
For HP-UX 11.23 legacy device special files (DSFs) or persistent DSFs, run the following commands:
ioscan -fnkC tape
ioscan -fnkC autoch
2. For HP-UX 11.31 persistent DSFs, run the following commands:
ioscan -fnNkC tape
ioscan -fnNkC autoch
Note: Some data protection and archiving software might not currently support HP-UX 11.31 persistent DSFs for tape. Review your data protection and archiving software documentation for more information.
3. To verify that the host detects iSCSI devices, issue the ioscan command as follows for HP-UX 11.23:
ioscan -fnC iscsi
Issue the ioscan command as follows for HP-UX 11.31:
ioscan -fnNC iscsi
If the software is installed correctly, the generated output will look similar to this:
Class  I  H/W Path  Driver  S/W State  H/W Type  Description
=====================================================================
iscsi  0  255/0     iscsi   CLAIMED    VIRTBUS   iSCSI Virtual Node
4. If no device files have been installed, enter the following commands:
insf -C tape -e
insf -C autoch -e
The command-line tool for configuring the HP-UX iSCSI Software Initiator is iscsiutil. Detailed instructions for iscsiutil can be found in its man pages. If using iscsiutil to configure the HP-UX iSCSI Software Initiator, add the path for iscsiutil and other iSCSI executables to the root path:
PATH=$PATH:/opt/iscsi/bin
You should now be able to find the iSCSI initiator node for the HP-UX host:
iscsiutil -l
To discover available iSCSI target devices on an HPE Storage System, type the following command, where x.x.x.x is the IP address of the HPE Storage System:
iscsiutil -a -I x.x.x.x
Installing the HPE StoreEver Tape advanced path failover drivers for HP-UX 11.31
Advanced path failover for HP-UX is implemented by updating HP-UX drivers to support advanced path failover with LTO-6 tape drives. The drivers function as both failover and non-failover drivers.
The updated drivers are:
• HP-UX tape driver (estape): used for data path failover
• HP-UX media changer driver (eschgr): used for control path failover
• HP-UX SCSI stack driver (esctl): used for data path and control path failover
To download and install the HPE StoreEver Tape advanced path failover drivers for HP-UX 11.31:
1. Get the latest HP-UX patches from h20566.www2.hp.com/portal/site/hpsc/patch/home.
Note: To access and download HP-UX patches, you must have:
1. An HPE Passport account (a sign-in link is provided).
2. An active HPE support agreement linked to your HPE Support Center profile. The active Hewlett Packard Enterprise support agreement must:
- Cover the specific HPE Operating Systems linked to your HPE Support Center user profile
- Include software updates or previous version support privileges
Links are provided to view your current patch access privileges or to contact Hewlett Packard Enterprise.
2. To locate the patches, search for estape, eschgr, and esctl (or the patch number), and then check whether a superseding patch is listed.
3. To install the advanced path failover drivers, use the standard HP-UX kernel patch installation process to install the following patches on HP-UX Servers running HP-UX 11.31:
- HP-UX tape driver patch (estape): PHKL_43680 or superseding patch
- HP-UX media changer driver patch (eschgr): PHKL_43681 or superseding patch
- HP-UX SCSI stack (mass storage stack) driver patch (esctl): PHKL_43819 or superseding patch
4. The server will automatically reboot as part of the installation process.
You can use ioscan to view the tape and library (media changer) devices connected to the HP-UX Server. The device special file (DSF) is listed as the last item in the description:
ioscan -knNfC tape
ioscan -knNfC autoch
Finding the lockdown path
The load-balance policy used to route data on multiple paths to a tape drive or library is called the "path_lockdown" policy. Use the scsimgr get_info command to see the current lockdown path for a library. For example:
scsimgr get_info -D /dev/rchgr/autoch35
STATUS INFORMATION FOR LUN : /dev/rchgr/autoch35
...
LUN path used when policy is path_lockdown = 0/0/0/9/0/0/1.0x50014382c6e4f009.0x1000000000000
scsimgr get_attr -D /dev/rtape/tape28_BEST
SCSI ATTRIBUTES FOR LUN : /dev/rtape/tape28_BEST
name = lpt_lockdown
current = 0/0/0/9/0/0/0.0x100000e00222a6c1.0x2000000000000
default =
saved =
For additional information, see the HP-UX man pages: scsimgr(1M), ioscan(1M), mknod(2), mksf(1M), rmsf(1M).
Troubleshooting advanced path failover for HP-UX 11.31
Advanced path failover errors are logged in the /var/adm/syslog/syslog.log file as part of the default SCSI I/O tracing function of HP-UX. You can use standard file-viewing commands, including cat, vi, dmesg, and tail -f, to view syslog.log.
Enabling or disabling advanced path failover for HP-UX 11.31
Advanced path failover is disabled by default. When advanced path failover is disabled, the driver operates as if the device is not capable of using the advanced path failover feature.
When advanced failover is enabled for the library or tape drive, the device resets itself and must be opened using the device special file before the driver will recognize it as an advanced path failover device and use the failover features of the driver. Opening the device is generally done by the host applications.
You can enable or disable advanced path failover using the library web-based interface For instructions refer to the following sections of the HPE StoreEver Tape Libraries Failover User Guide
• Enabling control path failover, under Configuring failover for the HPE StoreEver ESL G3 Tape Libraries
• Enabling data path failover, under Configuring failover for HPE StoreEver MSL6480 Tape Libraries
When advanced path failover is disabled, the passive control paths to the library will go into an error state (NO_HW) in the ioscan(1M) command output. These stale entries do not affect the function of the library. To clear these errors so the device can be accessed using its DSF:
1. On the HP-UX host, run rmsf -H on the lunpath hardware paths that are in the NO_HW state. For example:
rmsf -H 0/4/0/0/0/1.0x50014380023560d4.0x1000000000000
2. Run ioscan -kfNH <HBA path>. For example:
ioscan -kfNH 0/4/0/0/0
Hewlett Packard Enterprise recommends enabling or disabling advanced path failover only when the library is not opened by any applications. If advanced path failover is disabled while an application is accessing the library, all of the library's lunpaths will go offline and I/O requests to the library will fail.
For more detailed information about installing the HPE StoreEver Tape advanced path failover drivers for HP-UX 11.31 Servers, refer to the HPE StoreEver Tape Libraries Failover User Guide, specifically the section titled Installing and using HP-UX advanced path failover drivers.
HP-UX Server best practices
HP-UX 11.31 can experience poor I/O performance on VxFS file systems due to memory blocking during high system memory usage. The HP-UX 11.31 kernel subsystems and file I/O data cache can consume up to 90 percent of system memory during normal operation. When a heavy file I/O application, such as data protection and archiving software, starts, memory usage can reach close to 100 percent. Under such conditions, if VxFS attempts to allocate additional memory for inode caching, the result can be memory blocking and subsequent poor file I/O performance. In extreme conditions, this scenario can cause data protection and archiving software to time out during file system reads, which could result in backup job failures.
Poor I/O performance resolution
To avoid backup job failures due to memory blocking, modify the kernel tunable parameter vx_ninode. The vx_ninode parameter determines the number of inodes in the inode table that VxFS uses for caching. By default, the size of the inode cache is decided (auto-tuned) at boot time by VxFS, depending on the amount of physical memory in the machine. When modifying the value of vx_ninode, HPE recommends the following:
Table 4. Tuning vx_ninode
PHYSICAL MEMORY OR KERNEL AVAILABLE MEMORY    VXFS INODE CACHE (NUMBER OF INODES)
1 GB                                          16384
2 GB                                          32768
3 GB                                          65536
> 3 GB                                        131072
To determine the current value of vx_ninode, run the following at the shell prompt:
/usr/sbin/kctune vx_ninode
To set vx_ninode to 32768, run the following command at the shell prompt:
/usr/sbin/kctune vx_ninode=32768
Note: The kernel tunable parameters filecache_min and filecache_max control the amount of physical memory that can be used for caching file data during system I/O operations. By default, these parameters are automatically determined by the system to better balance memory usage among file system I/O-intensive processes and other types of processes. The values of these parameters can be lowered to allow a larger percentage of memory to be used for purposes other than file system I/O caching. Whether to modify these parameters depends on the nature of the applications running on the system.
HP-UX 11.23: Disabling rewind-on-close devices with st_san_safe
Turning on the HP-UX 11.23 kernel tunable parameter st_san_safe disables tape DSFs that are rewind-on-close. This prevents utilities like mt from rewinding a tape that is in use by another utility.
Some applications or utilities require rewind-on-close DSFs (for example, the frecover utility that comes with HP-UX). In this case, disabling rewind-on-close devices renders the utility unusable. Most data protection and archiving software, such as HPE Data Protector, can be configured to use SCSI reserve/release, which protects them from rogue rewinds by other utilities. Consider the requirements of your data protection and archiving environment when determining whether to enable st_san_safe.
To determine if rewind-on-close devices are currently disabled, enter:
/usr/sbin/kctune st_san_safe
If the value of st_san_safe is 1, rewind-on-close devices are disabled. If the value is 0, rewind-on-close devices are enabled. To disable rewind-on-close devices, enter:
/usr/sbin/kctune st_san_safe=1
Oracle Solaris Server
Solaris 10 Update 11 (SPARC), Solaris 10 Update 11 (x86/x64), Solaris 11.2 (SPARC), Solaris 11.2 (x64)
How to enable the iSCSI Software Initiator
1. For Solaris 10 Update 11 (SPARC) and 11.2 (SPARC), enable the iSCSI services using the command:
svcadm enable network/iscsi/initiator
2. Verify the iSCSI services are running:
svcs -a | grep 'iscsi/initiator'
online 10:10:28 svc:/network/iscsi/initiator:default
3. For earlier versions of Solaris, enable the iSCSI services using the command:
svcadm -v enable iscsi_initiator
svc:/network/iscsi_initiator:default enabled.
4. Verify the iSCSI services are running:
svcs -a | grep iscsi_initiator
online 10:10:28 svc:/network/iscsi_initiator:default
The command-line tool for configuring the Solaris iSCSI Software Initiator is iscsiadm. Detailed instructions for iscsiadm can be found in its man pages. If using iscsiadm to configure the Solaris iSCSI Software Initiator, run the following command to find the iSCSI initiator node for the Solaris host:
iscsiadm list initiator-node
To discover available iSCSI target devices on an HPE Storage System, type the following commands, where x.x.x.x is the IP address of the HPE Storage System:
iscsiadm add discovery-address x.x.x.x:3260
iscsiadm list discovery-address
Discovery Address: x.x.x.x:3260
iscsiadm modify discovery -t enable
iscsiadm list discovery
List the configured iSCSI target devices using the following command:
iscsiadm list target
Oracle Solaris Server best practices
Troubleshooting with the cfgadm utility
• Getting the status of FC devices using cfgadm:
cfgadm -al
Example output for the above command shows a media changer at LUN 0 for the 100000e0022229fa9 WWN, and tape and disk devices at LUN 0 for other WWNs. The devices are connected, have been configured, and are ready for use.
The cfgadm -al -o show_FCP_dev command can be used to show the devices for all LUNs of each Ap_Id.
• Fixing a device with an "unusable" condition
If the condition field of a device in the cfgadm output is "unusable," the device is in a state such that the server cannot use it. This may have been caused by a hardware issue. In this case, do the following to resolve the issue:
- Resolve the hardware issue so the device is available to the server.
- After the hardware issue has been resolved, use the cfgadm utility to verify device status and to mend the status if necessary.
• Use cfgadm to get device status:
cfgadm -al
- For a device that is "unusable," use cfgadm to unconfigure and then re-configure the device. For example (this is an example only; your device WWN will be different):
cfgadm -c unconfigure c4::100000e0022286ec
cfgadm -f -c configure c4::100000e0022286ec
- Use cfgadm again to verify that the condition of the device is no longer "unusable":
cfgadm -al
IBM AIX Server
AIX 6.1 (TL9), AIX 7.1 (TL3)
Installing the HBA device driver
Install the IBM (5729, 5735, 5273, 5758, 5759, 5773, 5774) HBA and restart the server.
1. Ensure that the HBA is recognized. At the shell prompt, type:
lsdev -Cc adapter
There is a line in the output similar to the following:
fcs0 Available 1D-08 FC Adapter
If the adapter is not recognized, check that the correct HBA fileset (driver) is installed:
6228: lslpp -L | grep devices.pci.df1000f7
6239: lslpp -L | grep devices.pci.df1080f9
5716: lslpp -L | grep devices.pci.df1000fa
5759: lslpp -L | grep devices.pci.df1000fd
5773: lslpp -L | grep devices.pciex.df1000fe
5774: lslpp -L | grep devices.pciex.df1000fe
There are lines in the lslpp output similar to the following for a 6239 HBA:
devices.pci.df1080f9.diag  5.1.0.1  C  F  PCI-X FC Adapter Device
devices.pci.df1080f9.rte   5.1.0.1  C  F  PCI-X FC Adapter Device
2. For information about the HBA, such as the WWN, execute the following command:
lscfg -vl fcs0
The output will look similar to the following
3. To see the version of microcode (firmware) being run by the HBA, use the following command:
lsmcode -c -d <device>
Microcode and other updates can be found at ibm.com/support/fixcentral.
4. After the HBA has successfully logged into the SAN fabric and the necessary zoning is configured, configure the HBA and devices within the fabric. At the prompt, type:
cfgmgr -l <devicename> -v
Note: Running cfgmgr without a -l argument may generate a "devices.fcp.changer" error. This is a result of the cfgmgr device scan receiving a response from the autochanger device, for which AIX does not have a specific driver. Under these conditions, the error message does not indicate a problem and is for information only.
5. Within the command, <devicename> is the name from the output of the lsdev command in step 1, such as fcs0.
6. To ensure all tape device files are available, at the prompt type:
lsdev -HCc tape
7. By default, AIX creates tape devices with a fixed block length. To change the devices to variable block lengths, at the prompt type:
chdev -l <tapedevice> -a block_size=0
8. Configuration of the tape devices (where tape devices are rmt0, rmt1, and so on) is complete.
Note: HPE LTO tape drives use the IBM host tape driver. When properly configured, a device listing will show the tape device as follows:
For IBM native HBAs: Other FC SCSI Tape Drive
For non-IBM native HBAs: Other SCSI Tape Drive
9. To configure Fast I/O Failure for Fibre Channel devices after link events in the SAN, change the fast fail parameter as in the example below:
chdev -l fscsi<N> -a fc_err_recov=fast_fail
Within the command, the <N> in fscsi<N> is the same number as in the adapter name from the output of the lsdev command in step 1 (e.g., fcs0 corresponds to fscsi0).
IBM AIX Server best practices
Persistent binding
To prevent device shifting after a host reboot, you can enable the dynamic tracking feature in earlier versions of AIX. AIX 7 and above should have this parameter enabled by default.
To enable dynamic tracking of FC devices, set this attribute to dyntrk=yes, as shown in the example:
chdev -l fscsi<N> -a dyntrk=yes
Within the command, the <N> in fscsi<N> is the same number as in the adapter name from the output of the lsdev command in step 1 (e.g., fcs0 corresponds to fscsi0).
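For hosts with several adapters, the per-adapter chdev calls can be scripted. The function below is an illustrative sketch only: it derives each fscsi<N> protocol device from the fcs<N> adapters that lsdev reports (the usual AIX pairing, assumed here) and applies both dyntrk and fast_fail in one pass.

```shell
#!/bin/sh
# Sketch: enable dynamic tracking and fast I/O failure on every FC adapter.
# Assumes the conventional AIX pairing of fcsN adapters with fscsiN devices.
enable_fc_tracking() {
  lsdev -Cc adapter | awk '$1 ~ /^fcs[0-9]+$/ {print $1}' |
  while read -r adapter; do
    n=${adapter#fcs}    # fcs0 -> 0
    chdev -l "fscsi$n" -a dyntrk=yes -a fc_err_recov=fast_fail
  done
}
# On a live AIX host, run as root: enable_fc_tracking
```

Because chdev changes ODM attributes, devices may need to be in the Defined state (or the -P flag used with a reboot) for the change to take effect; consult the chdev man page on your AIX level.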
Note: For an IBM Virtual I/O Server (VIOS) running AIX logical partitions (LPARs), when using N_Port ID Virtualization (NPIV) with AIX LPARs it is strongly recommended to upgrade VIOS to version 2.2.3.4 or greater.
Virtual machine support
VM software is used for partitioning, consolidating, and managing computing resources, allowing multiple unmodified operating systems and their applications to run in VMs that share physical resources. Each VM represents a complete system with processors, memory, networking, storage, and BIOS. See table 5 for tape and disk support for virtualization products.
Table 5. VM tape/VTL/NAS support
VM Product StoreEver Direct Attached SCSI
StoreEver Direct Attached SAS
StoreEver FC & FCoE SAN StoreOnce VTL
StoreOnce iSCSI VTL
StoreOnce Catalyst over Ethernet (CoE)
StoreOnce Catalyst over Fibre Channel (CoFC)
StoreOnce NAS
Support Notes
Citrix XenServer Host No No No support statement for tape at this time
Citrix XenServer Guest VM
No Yes Yes No Yes For iSCSI tape devices the iSCSI Software Initiator must run in the VM operating system
D2D SAN shares must be accessed directly in the VM operating system not attached through the hypervisor
HPEVM Host Yes No Yes Yes Yes No Yes Tape drivemedia changer must not be attached to a guest VM while being used by the host
HPEVM Guest VM Yes No Yes Yes Yes No Yes Tape drivemedia changer must only be attached to a single guest VM at a time
Hyper-V Host Yes Yes Yes Yes Yes No Yes
Hyper-V Guest VM No No No Yes Yes No Yes For iSCSI tapemedia changer devices the iSCSI Software Initiator must run in the VM operating system
D2D NAS shares must be accessed directly in the VM operating system not attached through the hypervisor
VMware ESX Host Yes No No(7) No(7) No No No Direct attached SCSI tape drive is multi-target only (no multi-LUN) and must not be attached to a VM while being used by the host. HPE does not test or validate direct attached SCSI tape on VMware ESX/ESXi hosts and does not provide support for tape drives and tape libraries in such configurations.(8)
7. SAN tape devices (FC and iSCSI) are not supported directly by the VMware ESX host; vStorage API for Data Protection uses a backup server and VM software snapshots to allow FC and iSCSI backups.
8. For ESX 4.1 Server tape support, see the ESX 4.1 Fibre Channel SAN Configuration Guide. For ESX 5.0 Server tape support, see the ESXi 5.0 vSphere Storage Guide. For ESX 5.1 Server tape support, see the ESXi 5.1 vSphere Storage Guide. For ESX 5.5 Server tape support, see the ESXi 5.5 vSphere Storage Guide.
Table 5. VM tape/VTL/NAS support (continued)
VMware Guest VM Yes No No Yes Yes No(9) Yes Direct attached SCSI tape drive is multi-target only (no multi-LUN) and must only be attached to a single VM at a time.
For iSCSI tapemedia changer devices the iSCSI Software Initiator must run in the VM operating system
D2D NAS shares must be accessed directly in the VM operating system not attached through the hypervisor
HPE does not test or validate direct attached SCSI tape on VMware guest VMs and does not provide support for tape drives and tape libraries in such configurations
VMware vStorage API for Data Protection
Yes Yes Yes Yes Yes No Yes FC SANs and shared tape devices are limited to a physical backup server
Note: Be sure to do the following:
• Refer to your data protection and archiving software documentation for supported VM backup methods.
• Refer to the VM documentation for supported backup devices.
VMware Server
Note: VMware does not support ESXi SAN-attached tape devices. VMware vStorage APIs for Data Protection (VADP) with an off-host backup server can be used to manage SAN devices.
• VADP offloads backup responsibility from ESXi hosts to a dedicated backup server or servers, reducing the load on the ESXi hosts. VADP provides full-image backup and restore capabilities for all VMs, and file-based backups for Microsoft Windows and Linux VMs.
• VMs can also be set up as backup servers with locally mapped NAS shares or iSCSI backup devices. Refer to your data protection and archiving software documentation for VM support.
• VMs can also be set up for LAN backup, the same as a regular client. Refer to your data protection and archiving software documentation for details.
• For recommendations on VMware VM backup and recovery to HPE StoreOnce Backup, go to hpe.com/storage/BURACompatibility, scroll down to Data Agile BURA Solution White Papers, then click the VMware hyperlink across from White Papers - Databases and Virtual Machines to view the associated white papers.
9 Yes when using HPE StoreOnce Recovery Manager Central (RMC) only
HPE Integrity Virtual Machines
Hewlett Packard Enterprise supports, certifies, and sells HPE Integrity Virtual Machines (HPEVM) virtualization software on HPE Integrity servers.
HPEVM is an application installed on an HP-UX Server that allows multiple unmodified operating systems (HP-UX, Windows, and Linux) and their applications to run in VMs that share physical resources.
The HPE Virtual Server Environment (VSE) for HPE Integrity provides an automated infrastructure that can adapt in seconds with mission-critical reliability. HPE VSE allows you to optimize server utilization in real time by creating virtual servers that can automatically grow and shrink based on business priorities and service levels.
Note: The HPE Integrity VM host and VMs do support FC SAN-connected tape, Virtual Library Systems (VLS) devices, and HPE StoreOnce Backup systems.
• Off-host backups using HPE storage array hardware mirroring or snapshots can be used to shorten backup windows and off-load resources required for backup.
• VMs can also be set up for LAN backup, the same as a regular client or media host. Refer to your data protection and archiving software documentation for details.
Microsoft Hyper-V
Note: Hewlett Packard Enterprise does not test or support Hyper-V VMs with SAN or direct attached tape drives. The Hyper-V host or a backup server can be used to manage such devices.
• The Volume Shadow Copy Service (VSS) Hyper-V writer can be used to quiesce Windows VMs and create a snapshot on the Hyper-V host volume. VMs that cannot be quiesced can be placed in the Saved state before snapshot creation. The snapshots are then used for image or file backup of the VMs. If a VM was placed in the Saved state, Hyper-V returns the VM to its original state. Review your data protection and archiving software documentation for details.
• VMs can also be set up for LAN backup, the same as a regular client. Refer to your data protection and archiving software documentation for details.
• VMs can also be set up as backup servers with locally mapped NAS shares or iSCSI backup devices. Refer to your data protection and archiving software documentation for VM support.
© Copyright 2015 Hewlett Packard Enterprise Development LP. The information contained herein is subject to change without notice. The only warranties for Hewlett Packard Enterprise products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. Hewlett Packard Enterprise shall not be liable for technical or editorial errors or omissions contained herein.
Citrix is a registered trademark of Citrix Systems, Inc. and/or one or more of its subsidiaries, and may be registered in the United States Patent and Trademark Office and in other countries. Linux is the registered trademark of Linus Torvalds in the U.S. and other countries. McAfee is a trademark or registered trademark of McAfee, Inc. in the United States and other countries. Microsoft, Windows, and Windows Server are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries. Oracle is a registered trademark of Oracle and/or its affiliates. Red Hat is a registered trademark of Red Hat, Inc. in the United States and other countries. SAP HANA is the trademark or registered trademark of SAP SE in Germany and in several other countries. VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions.
4AA5-7983ENW, December 2015
Installing backup software and patches
After all components on the SAN are logged in and configured, the system is ready for the installation of any supported backup software. Refer to the installation guide for your particular software package, or contact the vendor for detailed installation procedures and requirements. After installing the backup software, check with the software vendor for the latest updates and patches. If any updates or patches exist for your backup software, install them now.
Learn more at hpe.com/storage/StoreEver
Table 3. HBA and software utilities
WINDOWS  HP-UX  LINUX  SOLARIS  AIX
HBA configuration utilities
Emulex OneCommand Manager (OCM) HBAnyware
Emulex OCM HBAnyware Emulex OCM HBAnyware
QLogic QConvergeConsole (QCC)
QCC QCC
QLogic Host Connectivity Manager (HCM)
QLogic HCM
Broadcom Advanced Control Suite 3 (BACS3)
BACS3
Other software utilities
HPE Library and Tape Tools utility
HPE Library and Tape Tools utility
HPE Library and Tape Tools utility
HPE Library and Tape Tools utility
HPE Systems Insight Manager (SIM) management agents
HPE-UX 1123
System Administration Manager (SAM)
HPE-UX 1131
System Management Homepage (SMH)
SCSI Generic (SG) commands System Management Interface Tool (SMIT)
Windows Serverreg backup
Removable Storage Manager (RSM)6
FC/FCoE switch zoning recommendations
Due to the complexities of multi-hosting tape devices on SANs, Hewlett Packard Enterprise strongly recommends using switch zoning tools to keep the backup, restore, and archive environment simple and less susceptible to the effects of "chatty", changing, or problematic SANs. Zoning provides a way for servers, disk arrays, and tape controllers to see only the hosts and targets they need to see and use.
The benefits of zoning include, but are not limited to:
• The potential to greatly reduce target and logical unit number (LUN) shifting
• Reduced stress on backup devices caused by polling agents
• Reduced time to debug and resolve anomalies in the backup, restore, and archive environment
• Reduced potential for conflict with untested third-party products
6 Removable Storage Manager is no longer available as of Windows 7 and Windows Server 2008 R2.
Zoning may not always be required for configurations that are small or simple (i.e., a single switch or a single inter-switch link (ISL)). Zoning can be helpful in larger SANs for simplifying device discovery and reducing chatter between devices. Hewlett Packard Enterprise recommends the following for determining how and when to use zoning:
• Use zoning by HBA port. Zoning by HBA port is implemented by creating a specific zone for each server or host by World Wide Port Name (WWPN) and adding only those storage elements to be utilized by that host. Zoning by HBA port prevents a server from detecting any other devices or servers on the SAN, and it simplifies the device discovery process.
• Disk and tape on the same HBA is supported. For larger SAN environments, it is recommended to also add storage-centric zones for disk and backup targets. This type of zoning is done by adding overlapping zones, with disk and backup targets separated. See figure 1 and figure 2 below.
• FC zoning can be implemented using physical switch port numbers, WWN IDs, or user-defined switch aliases. It is important to note that physical ports and aliases can change due to recabling or switch configuration restores, but WWN IDs do not. Hewlett Packard Enterprise recommends zoning using WWN IDs.
The figures below represent example configurations but are not exhaustive.
Figure 1. Storage-centric zoning, same HBA port (overlapping zones)
Figure 2. Storage-centric zoning, redundant paths. Also applies to dual-port HBAs and tape drives.
Configuration and operating system details
Windows Server
Windows Server 2008, Windows Server 2008 R2, Windows Server 2012, and Windows Server 2012 R2

Installing the HBA device driver
All HPE ProLiant server software, firmware, and drivers for Windows servers can be updated using the latest HPE Service Pack for ProLiant (SPP) from the HPE support website:
1. From an Internet browser, go to hpe.com.
2. Click on Support.
3. Under Product Support, click HPE Servers, Storage and Networking.
4. In the Enter product name or number box, enter Service Pack for ProLiant.
5. Click Get drivers, software & firmware.
6. Select the Windows Server operating system version that is installed on the ProLiant system that will be updated.
7. Expand Application (Entitlement Required) - System Management, then select the HPE Service Pack for ProLiant (American, International) hyperlink.
8. Below the details for the software you should notice several tabs. Select the Release Notes tab, then scroll to verify whether there are any Upgrade Requirements or to view additional information.
9. Select the Installation Instructions tab to verify how to install the SPP. Be sure to copy or save the installation instructions.

Note
A hyperlink to the HPE Service Pack for ProLiant Release Notes is provided within the Release Notes tab. The HPE Service Pack for ProLiant Release Notes provide detailed instructions regarding the SPP, including a summary of changes, compatibility details for migrating from an older version of the SPP, supported operating systems, requirements, component prerequisites, deployment instructions, and known limitations.

10. Click on the Obtain software hyperlink above the various tabs to download the Service Pack for ProLiant to your server.

Note
To download the HPE Service Pack for ProLiant, you must have:
1. An HPE Passport account (a sign-in link is provided).
2. Either a warranty, HPE Care Pack, or support agreement linked to your HPE Support Center profile.
Click on the various links that are provided for more information on how warranties, HPE Care Packs, and support agreements enable access to select downloads or site functions.

11. Booting your Windows Server to the SPP (offline mode) allows you to upgrade firmware for any or all components that are flagged as requiring an upgrade, while booting to the operating system and then running the SPP (online mode) allows you to install any or all Hewlett Packard Enterprise related drivers and software packages.
Note
Please refer to the HPE Service Pack for ProLiant Release Notes, referenced above, if any issues are encountered when installing the SPP. Specifically, review the sections Deployment Instructions and Components Changes.

12. A reboot might be required following the SPP installation.
To manually install the latest HPE-supported Brocade, Emulex, or QLogic driver kit from the HPE support website:
1. From an Internet browser, go to hpe.com.
2. Click on Support.
3. Under Product Support, click HPE Servers, Storage and Networking.
4. In the Enter product name or number box, enter your HBA model (for example, AP770A, AJ763A, AJ764A, CN1000Q, SN1000Q, CN1100E, SN1100E, or NC553m) and click Go.
5. Click Get drivers, software & firmware.
6. Select the Windows Server operating system version that is installed on the ProLiant system in which the HBA is installed.
7. Expand Driver - Storage Fibre Channel and click on the appropriate driver hyperlink (if more than one version of the driver is listed, verify the latest supported version listed in the latest HPE Data Agile BURA Compatibility Matrix).
8. Below the details for the driver you should notice several tabs. Select the Release Notes tab, then scroll to verify whether there are any Upgrade Requirements or Supported Devices and Features, or to view additional information.
9. Select the Installation Instructions tab to verify how to install the driver. Be sure to copy or save the installation instructions.
10. Click on the Download tab to copy the file to your server.
11. A reboot might be required following the driver installation.
Installing and configuring Microsoft iSCSI Initiator
Microsoft iSCSI Initiator is installed natively on Windows Server 2008, Windows Server 2008 R2, Windows Server 2012, and Windows Server 2012 R2. On these operating systems, no installation steps are required.
To connect to an iSCSI target device by using Quick Connect:
1. Click Start, type iSCSI in Start Search, and then, under Programs, click on iSCSI Initiator.
2. On the User Account Control page, click Continue.
3. If this is the first time that you have launched Microsoft iSCSI Initiator, you receive a prompt that says the Microsoft iSCSI service is not running. You must start the service for Microsoft iSCSI Initiator to run correctly. Click on Yes to start the service. The Microsoft iSCSI Initiator Properties dialog box opens, and the Targets tab is displayed.
4. On the Targets tab, type the name or the IP address of the target device in the Quick Connect text box, and then click Quick Connect. The Quick Connect dialog box is displayed.
5. If multiple targets are available at the specified target portal, a list is displayed. Click the desired target, and then click Connect.
6. Click Done.
To connect to an iSCSI target by using advanced settings:
1. Click Start, type iSCSI in Start Search, and then, under Programs, click on iSCSI Initiator.
2. On the User Account Control page, click Continue.
3. If this is the first time that you have launched Microsoft iSCSI Initiator, you receive a prompt that says the Microsoft iSCSI service is not running. You must start the service for Microsoft iSCSI Initiator to run correctly. Click on Yes to start the service. The Microsoft iSCSI Initiator Properties dialog box opens, and the Targets tab is displayed.
4. Click the Discovery tab.
5. To add the target portal, click Discover Portal, and then, in the Discover Portal dialog box, type the IP address or name of the target portal to connect to. If desired, you can also type an alternate TCP port to be used for the connection.
6. Click OK.
Installing the HPE StoreEver Tape drivers
Both the HPE tape and HPE changer drivers for Windows must be installed before the advanced path failover drivers are installed. The tape and changer drivers bundle can be downloaded and then installed as follows:
1. Go to hpe.com/storage/tapecompatibility.
2. Under Tape tools, select HPE StoreEver Tape Drivers.
3. A webpage will open with RECOMMENDED HPE StoreEver Tape Drivers for Windows displayed.
4. Below the details for the driver you should notice several tabs. Select the Release Notes tab, then scroll to verify whether there are any Upgrade Requirements or to view additional information.
5. Select the Installation Instructions tab to verify how to install the driver. Be sure to copy or save the installation instructions.
6. Click on the Download tab, then save the file.
7. Follow the installation instructions from step 5 to install the HPE tape and HPE changer drivers.
8. After installation of the tape and changer drivers, use Windows Device Manager to confirm that all of the configured paths are accessible to the operating system. If the expected number of paths are not available, check the host and SAN configuration. After all of the expected paths are available to the host, the advanced path failover drivers can be installed.
Installing the HPE StoreEver Tape advanced path failover drivers
Windows Server 2008 R2, Windows Server 2012, and Windows Server 2012 R2
1. Go to hpe.com/support/storage.
2. Select Tape Storage.
3. Click Enterprise Class Tape Libraries (for ESL G3) or Tape Libraries (for MSL6480).
4. Click HPE StoreEver ESL G3 Tape Libraries or HPE StoreEver MSL6480 Tape Library.
5. With the Download options tab selected, click Get drivers, software & firmware.
6. For the ESL G3, select your product. For the MSL6480, skip to the next step.
7. Under Operating systems, select OS Independent.
8. Click Driver - Storage Tape.
9. Click Obtain software for the HPE StoreEver High Availability Failover Driver for your Windows operating system.
10. Click Select to continue. An HPE Passport account is required.
11. After logging in using your HPE Passport, complete the required fields, then read and accept the software license agreement. Click Next.
12. On the following page, select the Download tab for the HPE StoreEver High Availability Failover Driver for the version of Windows that is installed on your server.
13. If you saved the file, double-click on the file to launch the installer for the Tape Upper Bus Storage Filter driver.
14. Restart when requested.
15. After the system restarts, the installer will continue installing the Tape Multi-Path Intermediate Class driver. The installation process creates the directory C:\Program Files\Hewlett-Packard\Failover.
For more detailed information related to installing the HPE StoreEver Tape advanced path failover drivers for Windows or the HPE StoreEver Tape drivers for Windows, refer to the HPE StoreEver Tape Libraries Failover User Guide, specifically the section titled Installing and using Windows advanced path failover drivers.
Windows Server best practices
Persistent binding
Target and LUN shifting can occur with Windows operating systems when disk or tape devices are connected or disconnected, a device is busy during discovery, or a device failure occurs, causing that device to no longer be seen on the SAN. Hewlett Packard Enterprise strongly recommends using zoning by HBA port to resolve target and LUN shifting. Refer to the earlier section FC/FCoE switch zoning recommendations for more on zoning by HBA port.
The method by which the Windows operating system enumerates devices is the cause of most target and LUN shifting. Windows enumerates devices as they are discovered during a scan sequence. They are enumerated with device handles such as TAPE0, TAPE1, and so on. The Windows device scan sequence goes in the order of bus, target, and LUN:
• Bus is the HBA PCI slot.
• Target is representative of a WWN.
• LUN is representative of a device behind the WWN.
The order of discovery is:
• The lowest bus (bus 0)
• Target 0 on bus 0
• The LUNs on target 0
• Target 1, and so on, until all targets connected to that HBA are discovered
The process continues on to the next HBA and its targets and LUNs. A common cause of device shifting is a busy tape device. A busy tape device cannot respond in time for Windows to enumerate it. The device is essentially skipped in the enumeration sequence, thus shifting all other device numbers.
Note
Emulex OneCommand Manager Application Kit and QLogic QConvergeConsole Utility both have proprietary persistent binding options. Review the appropriate vendor documentation for details.

Data protection and archiving software can also communicate with a tape device by using the Windows device name. As noted, the device name may shift and cause a problem for the data protection and archiving software. Some data protection and archiving software monitors for this condition and will adjust accordingly. Other data protection and archiving software must wait for a server reboot and subsequently scan for devices. Alternatively, the data protection and archiving software must be manually reconfigured to match the current device list. If your data protection and archiving software requires persistent device mapping, use the software's device configuration wizard to ensure proper configuration.
Red Hat and SUSE Linux Server
RHEL 5 Update 11 (AS/ES/WS), RHEL 6 Update 6 (AS/ES/WS), RHEL 7 (AS/ES/WS), SLES 11 SP3 (x86/x64), SLES 12

Note
Hewlett Packard Enterprise recommends installing the kernel development option (source code) when installing any Linux server. Availability of source code ensures the ability to install additional device support software that will be compiled into the kernel.
Installing the HBA drivers
All HPE ProLiant server software, firmware, and drivers can be updated using the latest SPP from the HPE support website:
1. From an Internet browser, go to hpe.com.
2. Click on Support.
3. Under Product Support, click HPE Servers, Storage and Networking.
4. In the Enter product name or number box, enter Service Pack for ProLiant.
5. Click Get drivers, software & firmware.
6. Select the Linux Server operating system version that is installed on the ProLiant system that will be updated.
7. Under Application (Entitlement Required) - System Management, select the HPE Service Pack for ProLiant (American, International) hyperlink.
8. Below the details for the software you should notice several tabs. Select the Release Notes tab, then scroll to verify whether there are any Upgrade Requirements or to view additional information.
9. Select the Installation Instructions tab to verify how to install the SPP. Be sure to copy or save the installation instructions.

Note
A hyperlink to the HPE Service Pack for ProLiant Release Notes is provided within the Release Notes tab. The HPE Service Pack for ProLiant Release Notes provide detailed instructions regarding the SPP, including a summary of changes, compatibility details for migrating from an older version of the SPP, supported operating systems, requirements, component prerequisites, deployment options, and known limitations.

10. Click on the Obtain software hyperlink above the various tabs to download the Service Pack for ProLiant to your server.

Note
To download the HPE Service Pack for ProLiant, you must have:
1. An HPE Passport account (a sign-in link is provided).
2. Either a warranty, HPE Care Pack, or support agreement linked to your HPE Support Center profile.
Click on the various links that are provided for more information on how warranties, HPE Care Packs, and support agreements enable access to select downloads or site functions.
11. Booting your server to the SPP (offline mode) allows you to upgrade firmware for any or all components that are flagged as requiring an upgrade, while booting to the OS and then running the SPP (online mode) allows you to install any or all Hewlett Packard Enterprise related drivers and software packages.

Note
Please refer to the HPE Service Pack for ProLiant Release Notes, referenced above, if any issues are encountered when installing the SPP. Specifically, review the sections Deployment Instructions and Components Changes.

12. A reboot might be required following the SPP installation.
To manually install the latest HPE-supported Brocade, Emulex, or QLogic driver kit from the HPE support website:
1. From an Internet browser, go to hpe.com.
2. Click on Support.
3. Under Product Support, click HPE Servers, Storage and Networking.
4. In the Enter product name or number box, enter your HBA model (for example, AP770A, AJ763A, AJ764A, CN1000Q, SN1000Q, CN1100E, SN1100E, or NC553m) and click Go.
5. Click Get drivers, software & firmware.
6. Select the Linux Server operating system version that is installed on the ProLiant system in which the HBA is installed.
7. Expand Driver - Storage Fibre Channel, then click on the appropriate driver hyperlink (if more than one version of the driver is listed, verify the latest supported version listed in the latest HPE Data Agile BURA Compatibility Matrix).
8. Below the details for the driver you should notice several tabs. Select the Release Notes tab, then scroll to verify whether there are any Upgrade Requirements or to view additional information.
9. Select the Installation Instructions tab to verify how to install the driver. Be sure to copy or save the installation instructions.
10. Click on the Download tab to copy the file to your server.
11. A reboot might be required following the driver installation.
Installing the Linux OPEN-iSCSI module
You can install and use the iscsi-initiator-utils package (Red Hat) or the open-iscsi module (SUSE). Download and install either of the packages using your distribution's package manager (yum or YaST, for example). Detailed instructions for iscsiadm can be found in the iscsiadm man pages.
Prior to discovering available iSCSI target devices on an HPE Storage System from a Linux server, the target requires the Linux server's iSCSI initiator name. This name is found in the /etc/iscsi/initiatorname.iscsi file.
Once the iSCSI initiator name has been determined, to discover available iSCSI target devices on an HPE Storage System, type the following command, where x.x.x.x is the IP address of the HPE Storage System:
iscsiadm --mode discovery --type sendtargets --portal x.x.x.x
To connect to the HPE Storage System targets, type the following command for each discovered target, where target_name is the name returned by the iscsiadm discovery:
iscsiadm --mode node -T target_name --login --portal x.x.x.x
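The discovery and login steps above can be combined into a short script. This is a hedged sketch, not an HPE-documented procedure: the helper names are ours, and the initiator-name parser assumes the standard InitiatorName= format of /etc/iscsi/initiatorname.iscsi.

```shell
# Print the iSCSI initiator name (IQN) that must be registered on the
# HPE Storage System; defaults to the standard open-iscsi location.
initiator_name() {
    sed -n 's/^InitiatorName=//p' "${1:-/etc/iscsi/initiatorname.iscsi}"
}

# Discover every target the portal advertises, then log in to each.
# sendtargets output lines look like "x.x.x.x:3260,1 iqn.2015-10...",
# so the target name is the second whitespace-separated field.
iscsi_login_all() {
    portal="$1"
    iscsiadm --mode discovery --type sendtargets --portal "$portal" |
    awk '{print $2}' |
    while read -r target; do
        iscsiadm --mode node -T "$target" --login --portal "$portal"
    done
}
```

Register the name printed by initiator_name on the storage system first, then run iscsi_login_all with the real portal address.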
Storage HBAs with Linux servers
Hewlett Packard Enterprise supports the Linux in-box driver (the driver supplied with the operating system distribution). However, not all hardware configurations support the in-box driver. To determine whether your configuration supports the in-box driver, see the HBAs, CNAs, FlexFabric Adapters, and Server LOMs Support Matrix (Linux, Citrix, VMware, and Windows), which is available on the HPE SPOCK website. You must sign up for an HPE Passport to access SPOCK.
Whether you are using the in-box drivers or the out-of-box drivers, Hewlett Packard Enterprise recommends that you install the HPE Fibre Channel Enablement Kit, because it provides additional libraries and configuration utilities to enable HPE Fibre Channel Storage Arrays to work properly with Linux.

Note
If you are using any HPE management applications, you need the HBA API libraries that come with the HPE-fc-enablement RPM.
Downloading and installing the Fibre Channel Enablement Kit for Linux

Note
There has been a change to the enablement kits released after 29 April 2014: they are now vendor-specific kits. Hewlett Packard Enterprise recommends that you uninstall any previous kits prior to installation of the latest version of the enablement kit.

1. From an Internet browser, go to hpe.com.
2. Click on Support.
3. Under Product Support, click HPE Servers, Storage and Networking.
4. In the Enter product name or number box, enter your HBA model (for example, AP770A, AJ763A, AJ764A, CN1000Q, CN1100E, or NC553m) and click Go.
5. Click Get drivers, software & firmware.
6. Select the Linux Server operating system version of the ProLiant system in which the HBA is installed.
7. Select the Software - Storage Controllers - FC HBA hyperlink and click on the HPE Fibre Channel Enablement Kit for Linux (American, International).
8. Below the details for the driver you should notice several tabs. Select the Release Notes tab, then scroll to verify whether there are any Upgrade Requirements or to view additional information.
9. Select the Installation Instructions tab to verify how to install the HPE Fibre Channel Enablement Kit for Linux. Be sure to copy or save the installation instructions.
10. Click on the Download tab to copy the RPM file to your server.
11. Browse to the directory that you downloaded the RPM to.
12. Follow the Installation Instructions that you copied or saved in step 9.
13. A reboot is required after the installation for the updates to take effect and hardware stability to be maintained.
14. Verify that the host has successfully discovered all the expected devices (tape drives, library robotic devices, and disk-based backup devices) using one of the following methods:
– Review the devices listed from running the command cat /proc/scsi/scsi.
– Review the output from the sg_inq command (which requires that sg3_utils is installed) for any of the /dev/sg devices listed in the output of the sg_map command. See figure 3 for an example.
Figure 3. Verifying devices using the sg_map and sg_inq commands
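The /proc/scsi/scsi check in step 14 can be screened mechanically. A small sketch (the helper names are ours, not from this guide): it counts the kernel's "Sequential-Access" (tape drive) and "Medium Changer" (library robot) entries in a /proc/scsi/scsi-style listing, so the totals can be compared against the number of device paths zoned to the host.

```shell
# Count tape-drive entries (Type: Sequential-Access) in a
# /proc/scsi/scsi-style listing given as the file argument.
count_tape_devices() {
    grep -c 'Type:[[:space:]]*Sequential-Access' "$1"
}

# Count library robot entries (Type: Medium Changer) in the same listing.
count_changer_devices() {
    grep -c 'Type:[[:space:]]*Medium Changer' "$1"
}
```

For example, count_tape_devices /proc/scsi/scsi should match the number of tape-drive paths presented to the host.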
Installing the HPE StoreEver Tape advanced path failover drivers
Red Hat Enterprise Linux Server 6.2 (x86_64), 6.3 (x86_64), 6.4 (x86_64), 6.5 (x86_64), and 6.6 (x86_64)
The advanced path failover drivers for Linux replace the normal SCSI Tape and SCSI Generic drivers. For devices that do not support advanced path failover, the drivers pass all SCSI commands through the same code path that is followed when the standard drivers are loaded; commands for devices that do support failover are routed through the new PFO driver.
1. Go to hpe.com/support/storage.
2. Select Tape Storage.
3. Click Enterprise Class Tape Libraries (for ESL G3) or Tape Libraries (for MSL6480).
4. Click HPE StoreEver ESL G3 Tape Libraries or HPE StoreEver MSL6480 Tape Library.
5. In the Download options tab, click Get drivers, software & firmware.
6. For the ESL G3, select your product. For the MSL6480, skip to the next step.
7. Under Operating systems, select OS Independent.
8. Expand Driver - Storage Tape, then select the appropriate driver hyperlink.
9. Click Obtain software for the HPE StoreEver High Availability Failover Driver for your operating system.
10. Click on Select. An HPE Passport account (a sign-in link is provided) is required.
11. After logging in using your HPE Passport, complete the required fields, then read and accept the software license agreement. Click Next.
12. On the following page, select the Download tab for the HPE StoreEver High Availability Failover Driver for the version of Red Hat that is installed on your server.
13. To install the drivers, run the following command:
rpm -ivh <filename>.rpm
14. In some cases, the server will need to be rebooted to complete the installation. Check the instructions provided by the RPM file output and reboot the server if requested.
15. The driver revision number indicates the build date of the driver and can be viewed by running:
cat /proc/scsi/sg/version
16. You can view the status of a device that is controlled by the failover driver by reading a file in the /sys file system. For example, to see the path status for /dev/sg3:
cat /sys/class/pfo/pfo3/paths
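To check every failover-controlled device at once, the per-device read above can be looped. A sketch under the assumption (based on the /sys/class/pfo/pfo3/paths example) that each pfo device exposes a paths file under /sys/class/pfo; the root is a parameter so the loop can be exercised against a scratch directory.

```shell
# Print the path status of every advanced-path-failover device found
# under the given sysfs root (default /sys/class/pfo).
pfo_path_status() {
    root="${1:-/sys/class/pfo}"
    for f in "$root"/pfo*/paths; do
        [ -e "$f" ] || continue     # glob did not match anything
        printf '%s:\n' "$f"         # which pfo device this is
        cat "$f"                    # the paths and their states
    done
}
```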
Enabling advanced path failover on a device while the driver is running
If a device has any advanced path failover feature disabled, then when advanced path failover is enabled, the device will reset itself, removing the old /dev file. When the device comes back up, it will be recognized as an advanced path failover device and will then operate normally as one. It may not have the same /dev file name as before the change.

Disabling advanced path failover on a device while the driver is running
Disabling advanced path failover while a device is running is not recommended, because the paths will not be cleanly removed and then reassociated. If advanced path failover is disabled on any device, the Linux server will need to be rebooted. When possible:
1. Power down the Linux server cleanly.
2. Disable advanced path failover on the device.
3. Boot the Linux server.
For more detailed information related to installing the HPE StoreEver Tape advanced path failover drivers for Red Hat Linux servers, refer to the HPE StoreEver Tape Libraries Failover User Guide, specifically the section titled Installing and using Linux advanced path failover drivers.
Additional SG device files
In most environments, the default number of SG device files is sufficient to support all of the required devices. In larger SAN environments, if the default number of SG device files is fewer than the combined total of disk and tape devices being allocated to the server, then additional device files need to be created. SG device files are preferable to the standard SCSI tape (ST) device files, because the ST driver's SCSI timeout values may not be long enough for some tape operations.
To create additional SG device files, run the following command:
mknod /dev/sgX c 21 X
X signifies the number of the device file that does not already exist. For additional command options, see the mknod man page.
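Creating device files one mknod call at a time is tedious when many are missing; the loop below wraps the same mknod /dev/sgX c 21 X command for a range of X. It is a sketch, not an HPE utility: run it as root against /dev, or point it at a scratch directory for a dry run (existing entries are left untouched).

```shell
# Create missing SG character device nodes sg<first>..sg<last> in the
# given directory, using the SCSI generic major number 21 and minor = X.
make_sg_nodes() {
    dir="$1"; first="$2"; last="$3"
    x="$first"
    while [ "$x" -le "$last" ]; do
        # skip nodes that already exist so re-running is harmless
        [ -e "$dir/sg$x" ] || mknod "$dir/sg$x" c 21 "$x"
        x=$((x + 1))
    done
}
```

For example, make_sg_nodes /dev 16 31 fills in /dev/sg16 through /dev/sg31.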
Red Hat and SUSE Linux Server best practices
Rewind commands issued by rebooted Linux hosts
Device discovery that occurs as part of a normal Linux server boot operation can cause a SCSI rewind command to be issued to all attached tape drives. If the data protection and archiving software does not employ SCSI reserve/release and the rewind command is received while the tape drive is busy writing, the result is a corrupted tape header and an unusable piece of backup media.
This issue can manifest itself in several ways:
• A failed verify operation
• A failed restore operation
• The inability to mount a tape and read the tape header
If a backup verification is not completed, the normal backup process might not detect that an issue exists.
Tape devices not discovered and configured across server reboots
Tape drives can disappear from Linux servers after the host reboots when using older versions of the HBA drivers. Adding the line "hp_rescan -a" to /etc/rc.d/rc.local resolves the issue. The hp_rescan utility was previously included and installed with older versions of the HPE Linux FCoE/FC Driver Kit.

Note
The latest versions of the Fibre Channel Enablement for Linux kits no longer include the HPE fibreutils package, which contains the hp_rescan utility. The fibreutils package can be manually downloaded and installed by following steps 1-6 from the section Downloading and installing the Fibre Channel Enablement Kit for Linux. For step 5, select Software - Storage Controllers - FC HBA.

This issue, which affects Red Hat installations and, intermittently, some SUSE Linux installations, is understood to be an issue with the mid-layer SCSI driver and its interaction with SCSI-2 tape automation products. The permanent resolution is to upgrade to the latest FC driver kit.
Enable iSCSI target devices to remain persistent across system reboots
To enable the iSCSI target devices to remain persistent across system reboots, the open-iscsi service must be configured to run at system startup. This can be done by issuing the following commands:
For Red Hat 7 and SUSE 12:
systemctl enable iscsid.service
systemctl restart iscsid.service
For earlier versions of Red Hat:
chkconfig iscsi on
For earlier versions of SUSE:
chkconfig open-iscsi on
To verify that this configuration change has been accepted, run the following command:
For Red Hat 7 and SUSE 12:
systemctl -a | grep iscsi
iscsi-shutdown.service error inactive dead Logout off all iSCSI sessions on shutdown
iscsi.service loaded inactive dead Login and scanning of iSCSI devices
iscsid.service loaded active running Open-iSCSI
iscsiuio.service loaded active running iSCSI UserSpace IO driver
iscsid.socket loaded active running Open-iSCSI iscsid Socket
iscsiuio.socket loaded active running Open-iSCSI iscsiuio Socket
For earlier versions of Red Hat:
chkconfig --list iscsi
iscsi 0:off 1:off 2:off 3:on 4:off 5:on 6:off
For SUSE servers:
chkconfig --list open-iscsi
open-iscsi 0:off 1:off 2:off 3:on 4:off 5:on 6:off
LUNs shifting after reboot
The Linux 2.6 kernel and later enhanced the management of attached devices through the introduction of udev. The udev device manager provides users with a persistent naming process for all devices across reboots. For details on how to configure udev, refer to the appropriate Linux distribution documentation.
If your data protection and archiving software requires persistent device mapping, use the software's device configuration wizard to ensure proper configuration.
Recommended changes to queue depth and timeout values Changes to queue depth and timeout values are recommend when operating HPE StoreEver ESL G3 Tape Libraries and HPE StoreEver MSL Tape Libraries with Linux-based hosts Recommended changes are as follows
bull Queue Depth
The queue depth when operating the MSL6480 or MCB Version 1 ESL G3 libraries should be set to one (1) as there is only one active robot to complete Move Medium commands With dual-robot MCB Version 2 ESL G3 libraries the queue depth should be set to two (2) as the library has two active robots that can complete Move Medium commands
HPE LTO drives are capable of handling command queues of four or five commands but if hosts continue to send commands past that amount the drive or library being hosted by that drive will start to return status messages saying that the queue is full and the host should wait 500 ms If the host doesnrsquot stop sending commands at this point the delays in returning status for commands can be long enough that the drive appears hung As such care should be taken to ensure that the queue depth is the correct length to avoid this scenario preferably by using the recommend queue depths provided above
With Linux-based hosts this command can let you see what the queue depth is set to for each generic SCSI device find sysclassscsi_genericdevicequeue_depth -exec grep -H
If required the sg_inq command can help correlate scsi generic mapping to specific devices sg_inq devsg
Note is the sg provided in the output from the previous command
bull Timeouts
Most applications increase the timeout value for motion commands but continue to rely on the default timeout value for Read Element Status commands. This is especially a concern in larger partitioned libraries, where multiple Read Element Status commands to the different partitions can be outstanding and the default timeout value is reached before a response is provided. Depending on whether udev rules are in effect, the default timeout value on Linux-based hosts tends to be either thirty (30) or sixty (60) seconds. Given all of the above, it is recommended that the default timeout value be changed to twenty minutes on all Linux-based hosts operating HPE LTO drives in HPE StoreEver ESL G3 Tape Libraries and HPE StoreEver MSL Tape Libraries, in order to allow multiple commands to complete successfully without hitting the default timeout value.
With Linux-based hosts, this command shows the default timeout value currently set:
find /sys/class/scsi_generic/*/device/timeout -exec grep -H '' {} \;
If required, the sg_inq command can help correlate the SCSI generic mapping to specific devices:
sg_inq /dev/sg<N>
Note: <N> is the sg number provided in the output from the previous command.
The detailed procedure for making the recommended changes to the command queuing and default timeout values on Linux-based hosts can be viewed in the Engineering Advisory HPE StoreEver ESL G3 Tape Libraries, MSL Tape Libraries, and 1/8 G2 Autoloaders - Recommended Changes to Queue Depth and Timeout Values With Linux-Based Hosts.
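The two sysfs settings above can also be applied in one pass. The sketch below is our own illustration, not part of the advisory: the helper name and the sysfs-root parameter are assumptions made so the logic can be exercised safely, and it must be run as root to affect real devices.

```shell
#!/bin/sh
# Sketch: write the recommended queue depth and timeout to every sg device
# that is writable. The sysfs root is a parameter (hypothetical convenience)
# so the same logic can be tested against a scratch directory tree.
apply_sg_tuning() {
    sysfs="$1"; depth="$2"; timeout="$3"
    for dev in "$sysfs"/class/scsi_generic/*/device; do
        [ -d "$dev" ] || continue
        [ -w "$dev/queue_depth" ] && echo "$depth" > "$dev/queue_depth"
        [ -w "$dev/timeout" ] && echo "$timeout" > "$dev/timeout"
    done
    return 0
}

# Example (as root): queue depth 1 for a single-robot library,
# 20-minute (1200 s) default timeout, per the recommendations above:
# apply_sg_tuning /sys 1 1200
```

Note that a udev rule is the more durable way to persist these values across reboots; the loop above only changes the running system.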
HPE-UX Server
Installing HBA drivers in the kernel
HPE-UX 11i v2 (11.23, IA-64)
1. The drivers schgr, sctl, and stape must all be installed in the kernel. To see if these drivers are installed, enter the following command:
/usr/sbin/kcmodule schgr sctl stape
The following example shows output from kcmodule where the stape driver is not installed:
Module   State    Cause
schgr    static   explicit
sctl     static   depend
stape    unused
If one or more of the above drivers is in the unused state, they must be installed in the kernel. If they are all installed (static state), proceed to the section Final host configurations.
2. Use kcmodule to install modules in the kernel. For example, to install the stape module, run the following command:
/usr/sbin/kcmodule stape=static
Enter yes to back up the current kernel configuration file and initiate the new kernel build.
3. Reboot the server to activate the new kernel:
cd /
/usr/bin/shutdown -r now
HPE-UX 11i v3 (11.31, IA-64)
1. The drivers schgr, sctl, stape, eschgr, esctl, and estape must all be installed in the kernel. To see if these drivers are installed, enter the following command:
/usr/sbin/kcmodule sctl esctl schgr eschgr stape estape
The following example shows output from kcmodule where the stape driver is not installed:
Module   State    Cause
esctl    static   best
sctl     static   depend
schgr    static   best
eschgr   static   best
stape    unused
estape   static   best
If one or more of the above drivers is in the unused state, they must be installed in the kernel. If they are all installed (static state), proceed to the section Final host configurations.
2. Use kcmodule to install modules in the kernel. For example, to install the stape module, use the following command:
/usr/sbin/kcmodule stape=static
Enter yes to back up the current kernel configuration file and initiate the new kernel build.
3. Reboot the server to activate the new kernel:
cd /
/usr/bin/shutdown -r now
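The check in step 1 can be scripted by parsing the kcmodule output shown above for drivers still in the unused state. The helper below is our own sketch, not an HPE tool; it assumes the Module/State/Cause column layout shown in the examples.

```shell
#!/bin/sh
# Sketch: print the names of drivers that kcmodule reports as "unused".
# Usage: /usr/sbin/kcmodule sctl esctl schgr eschgr stape estape | unused_drivers
unused_drivers() {
    # Skip the header row; the second column is the module state.
    awk 'NR > 1 && $2 == "unused" { print $1 }'
}
```

Any names printed are the modules that still need kcmodule <name>=static before the reboot in step 3.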
Installing the HPE-UX iSCSI Software Initiator
The iSCSI Software Initiator is located at the HPE Software Depot.
1. Go to software.hp.com.
2. Enter iSCSI Software Initiator in the Search Software Depot box located on the upper right side of the website.
3. When the search results show iSCSI Software Initiator, click Select. An HPE Passport account (a sign-in link is provided) is required.
4. After logging in using your HPE Passport, complete the required fields, scroll down, then read and accept the software license agreement for the order. Click Next.
5. Under Documentation, click the Download tab for the Installation Instructions to download instructions for using the Software Distributor tool to install the iSCSI Software Initiator.
6. Under Software, click the Download tab for the iSCSI Software Initiator version that you would like to download.
7. After installing the iSCSI Software Initiator and rebooting, you can verify that the installation was successful by running the following command:
swlist iSCSI-00
If the HPE-UX iSCSI Software Initiator is installed correctly, the output will be similar to the following.
HPE-UX 11.23:
# Initializing...
# Contacting target "localhost"...
# Target:  localhost:/
  iSCSI-00             B.11.23.03e    HPE-UX iSCSI Software Initiator
  iSCSI-00.ISCSI-SWD   B.11.23.03e    HPE-UX iSCSI Software Initiator
HPE-UX 11.31:
# Initializing...
# Contacting target "localhost"...
# Target:  localhost:/
  iSCSI-00             B.11.31.01     HPE-UX iSCSI Software Initiator
  iSCSI-00.ISCSI-SWD   B.11.31.01     HPE-UX iSCSI Software Initiator
Final host configurations
1. Run ioscan to verify that the host detects the tape devices:
ioscan
For HPE-UX 11.23 legacy device special files (DSFs) or persistent DSFs, run the following commands:
ioscan -fnkC tape
ioscan -fnkC autoch
2. For HPE-UX 11.31 persistent DSFs, run the following commands:
ioscan -fnNkC tape
ioscan -fnNkC autoch
Note: Some data protection and archiving software might not currently support HPE-UX 11.31 persistent DSFs for tape. Review your data protection and archiving software documentation for more information.
3. To verify that the host detects iSCSI devices, issue the ioscan command as follows for HPE-UX 11.23:
ioscan -fnC iscsi
For HPE-UX 11.31, issue the ioscan command as follows:
ioscan -fnNC iscsi
If the software is installed correctly, the generated output will look similar to this:
Class     I  HW Path  Driver  SW State  HW Type  Description
=====================================================================
iscsi     0  255/0    iscsi   CLAIMED   VIRTBUS  iSCSI Virtual Node
4. If no device files have been installed, enter the following commands:
insf -C tape -e
insf -C autoch -e
The command-line tool for configuring the HPE-UX iSCSI Software Initiator is iscsiutil. Detailed instructions for iscsiutil can be found in the iscsiutil man pages. If using iscsiutil to configure the HPE-UX iSCSI Software Initiator, add the path for iscsiutil and other iSCSI executables to the root path:
PATH=$PATH:/opt/iscsi/bin
You should now be able to find the iSCSI initiator node for the HPE-UX host:
iscsiutil -l
To discover available iSCSI target devices on an HPE Storage System, type the following command, where x.x.x.x is the IP address of the HPE Storage System:
iscsiutil -a -I x.x.x.x
Installing the HPE StoreEver Tape advanced path failover drivers for HPE-UX 11.31
Advanced path failover for HPE-UX is implemented by updating HPE-UX drivers to support advanced path failover with the LTO-6 tape drives. The drivers function as both failover and non-failover drivers.
The updated drivers are:
• HPE-UX tape driver (estape): used for data path failover
• HPE-UX media changer driver (eschgr): used for control path failover
• HPE-UX SCSI stack driver (esctl): used for data path and control path failover
To download and install the HPE StoreEver Tape advanced path failover drivers for HPE-UX 11.31:
1. Get the latest HPE-UX patches from h20566.www2.hpe.com/portal/site/hpsc/patch/home.
Note: To access and download HPE-UX patches, you must have:
1. An HPE Passport account (a sign-in link is provided).
2. An active HPE support agreement linked to your HPE Support Center profile. The active Hewlett Packard Enterprise support agreement must:
– Cover the specific HPE Operating Systems linked to your HPE Support Center user profile
– Include software updates or previous version support privileges
Links are provided to view your current patch access privileges or to contact Hewlett Packard Enterprise.
2. To locate the patches, search for estape, eschgr, and esctl (or the patch number), and check whether a superseding patch is listed.
3. To install the advanced path failover drivers, use the standard HPE-UX kernel patch installation process to install the following patches on HPE-UX Servers running HPE-UX 11.31:
– HPE-UX tape driver patch (estape): PHKL_43680 or superseding patch
– HPE-UX media changer driver patch (eschgr): PHKL_43681 or superseding patch
– HPE-UX SCSI stack (mass storage stack) driver patch (esctl): PHKL_43819 or superseding patch
4. The server will automatically reboot as part of the installation process.
You can use ioscan to view the tape and library (media changer) devices connected to the HPE-UX Server. The device special file (DSF) is listed as the last item in the description.
ioscan -knNfC tape
ioscan -knNfC autoch
Finding the lockdown path
The load-balance policy used to route data on multiple paths to a tape drive or library is called the "path_lockdown" policy. Use the scsimgr get_info command to see the current lockdown path for a library. For example:
scsimgr get_info -D /dev/rchgr/autoch35
STATUS INFORMATION FOR LUN /dev/rchgr/autoch35
...
LUN path used when policy is path_lockdown = 0/0/0/9/0/0/1.0x50014382c6e4f009.0x1000000000000
scsimgr get_attr -D /dev/rtape/tape28_BEST
SCSI ATTRIBUTES FOR LUN /dev/rtape/tape28_BEST
name = lpt_lockdown
current = 0/0/0/9/0/0/0.0x100000e00222a6c1.0x2000000000000
default =
saved =
For additional information, see the HPE-UX man pages: scsimgr (1M), ioscan (1M), mknod (2), mksf (1M), rmsf (1M).
Troubleshooting advanced path failover for HPE-UX 11.31
Advanced path failover errors are logged in the /var/adm/syslog/syslog.log file as part of the default SCSI I/O tracing function of HPE-UX. You can use standard file viewing commands, including cat, vi, dmesg, and tail -f, to view the syslog.log.
Enabling or disabling advanced path failover for HPE-UX 11.31
Advanced path failover is disabled by default. When advanced path failover is disabled, the driver operates as if the device is not capable of using the advanced path failover feature.
When advanced failover is enabled for the library or tape drive, the device resets itself and must be opened using the device special file before the driver will recognize it as an advanced path failover device and use the failover features of the driver. Opening the device is generally done by the host applications.
You can enable or disable advanced path failover using the library web-based interface. For instructions, refer to the following sections of the HPE StoreEver Tape Libraries Failover User Guide:
• Enabling control path failover, under Configuring failover for the HPE StoreEver ESL G3 Tape Libraries
• Enabling data path failover, under Configuring failover for HPE StoreEver MSL6480 Tape Libraries
When advanced path failover is disabled, the passive control paths to the library will go into an error state (NO_HW) in the ioscan (1M) command output. These stale entries do not affect the function of the library. To clear these errors so the device can be accessed using its DSF:
1. On the HPE-UX host, run rmsf -H on the lunpath hardware paths that are in the NO_HW state. For example:
rmsf -H 0/4/0/0/0/1.0x50014380023560d4.0x1000000000000
2. Run ioscan -kfNH <HBA path>. For example:
ioscan -kfNH 0/4/0/0/0
Hewlett Packard Enterprise recommends enabling or disabling advanced path failover only when the library is not opened by any applications. If advanced path failover is disabled while an application is accessing the library, all of the library's lunpaths will go offline and I/O requests to the library will fail.
For more detailed information about installing the HPE StoreEver Tape advanced path failover drivers for HPE-UX 11.31 Servers, refer to the HPE StoreEver Tape Libraries Failover User Guide, specifically the section titled Installing and using HPE-UX advanced path failover drivers.
HPE-UX Server best practices
HPE-UX 11.31 can experience poor I/O performance on VxFS file systems due to memory blocking during high system memory usage. The HPE-UX 11.31 kernel subsystems and file I/O data cache can consume up to 90 percent of system memory during normal operation. When a heavy file I/O application, such as data protection and archiving software, starts, memory usage can reach close to 100 percent. Under such conditions, if VxFS attempts to allocate additional memory for inode caching, the result can be memory blocking and subsequent poor file I/O performance. In extreme conditions, this scenario can cause data protection and archiving software to time out during file system reads, which could result in backup job failures.
Poor I/O performance resolution
To avoid backup job failures due to memory blocking, modify the kernel tunable parameter vx_ninode. The vx_ninode parameter determines the number of inodes in the inode table that VxFS uses for caching. By default, the size of the inode cache is decided (auto-tuned) at boot time by VxFS, depending on the amount of physical memory in the machine. When modifying the value of vx_ninode, HPE recommends the following:
Table 4. Tuning vx_ninode
Physical memory or kernel available memory    VxFS inode cache (number of inodes)
1 GB 16384
2 GB 32768
3 GB 65536
> 3 GB 131072
To determine the current value of vx_ninode, run the following at the shell prompt:
/usr/sbin/kctune vx_ninode
To set vx_ninode to 32768, run the following command at the shell prompt:
/usr/sbin/kctune vx_ninode=32768
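The mapping in Table 4 can be wrapped in a small helper that prints the value to pass to kctune. The function name and GB-granularity thresholds below are our own simplification of the table, not an HPE utility.

```shell
#!/bin/sh
# Sketch: recommended vx_ninode value for a given physical memory size
# (whole GB), following Table 4 above.
vx_ninode_for_gb() {
    gb="$1"
    if   [ "$gb" -le 1 ]; then echo 16384
    elif [ "$gb" -le 2 ]; then echo 32768
    elif [ "$gb" -le 3 ]; then echo 65536
    else                       echo 131072
    fi
}

# Example usage on the HPE-UX host:
# /usr/sbin/kctune vx_ninode=$(vx_ninode_for_gb 2)
```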
Note: The kernel tunable parameters filecache_min and filecache_max control the amount of physical memory that can be used for caching file data during system I/O operations. By default, these parameters are automatically determined by the system to better balance memory usage among file system I/O intensive processes and other types of processes. The values of these parameters can be lowered to allow a larger percentage of memory to be used for purposes other than file system I/O caching. Whether to modify these parameters depends on the nature of the applications running on the system.
HPE-UX 11.23: Disabling rewind-on-close devices with st_san_safe
Turning on the HPE-UX 11.23 kernel tunable parameter st_san_safe disables tape DSFs that are rewind-on-close. This prevents utilities like mt from rewinding a tape that is in use by another utility.
Some applications or utilities require rewind-on-close DSFs (for example, the frecover utility that comes with HPE-UX). In this case, disabling rewind-on-close devices renders the utility unusable. Most data protection and archiving software, such as HPE Data Protector, can be configured to use SCSI reserve/release, which protects them from rogue rewinds by other utilities. Consider the requirements of your data protection and archiving environment when determining whether or not to enable st_san_safe.
To determine if rewind-on-close devices are currently disabled, enter:
/usr/sbin/kctune st_san_safe
If the value of st_san_safe is 1, then rewind-on-close devices are disabled. If the value is 0, then rewind-on-close devices are enabled. To disable rewind-on-close devices, enter:
/usr/sbin/kctune st_san_safe=1
Oracle Solaris Server
Solaris 10 Update 11 (SPARC), Solaris 10 Update 11 (x86/x64), Solaris 11.2 (SPARC), Solaris 11.2 (x64)
How to enable the iSCSI Software Initiator
1. For Solaris 10 Update 11 (SPARC) and Solaris 11.2 (SPARC), enable the iSCSI services using the command:
svcadm enable network/iscsi/initiator
2. Verify the iSCSI services are running:
svcs -a | grep 'iscsi/initiator'
online 10:10:28 svc:/network/iscsi/initiator:default
3. For earlier versions of Solaris, enable the iSCSI services using the command:
svcadm -v enable iscsi_initiator
svc:/network/iscsi_initiator:default enabled.
4. Verify the iSCSI services are running:
svcs -a | grep iscsi_initiator
online 10:10:28 svc:/network/iscsi_initiator:default
The command-line tool for configuring the Solaris iSCSI Software Initiator is iscsiadm. Detailed instructions for iscsiadm can be found in the iscsiadm man pages. If using iscsiadm to configure the Solaris iSCSI Software Initiator, run the following command to find the iSCSI initiator node for the Solaris host:
iscsiadm list initiator-node
To discover available iSCSI target devices on an HPE Storage System, type the following commands, where x.x.x.x is the IP address of the HPE Storage System:
iscsiadm add discovery-address x.x.x.x:3260
iscsiadm list discovery-address
Discovery Address: x.x.x.x:3260
iscsiadm modify discovery -t enable
iscsiadm list discovery
List the configured iSCSI target devices using the following command:
iscsiadm list target
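If a script needs just the discovered target names, the `iscsiadm list target` output can be filtered. The helper below is our own sketch; it assumes the "Target: <IQN>" line format that iscsiadm prints for each target.

```shell
#!/bin/sh
# Sketch: print only the IQNs from `iscsiadm list target` output.
# Usage: iscsiadm list target | target_iqns
target_iqns() {
    awk '$1 == "Target:" { print $2 }'
}
```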
Oracle Solaris Server best practices
Troubleshooting with the cfgadm utility
• Getting the status of FC devices using cfgadm:
cfgadm -al
Example output for the above command:
This output shows a media changer at LUN 0 for the 100000e0022229fa9 WWN, and tape and disk devices at LUN 0 for other WWNs. The devices are connected, have been configured, and are ready for use.
The cfgadm -al -o show_FCP_dev command can be used to show the devices for all LUNs of each Ap_Id.
• Fixing a device with an "unusable" condition
If the condition field of a device in the cfgadm output is "unusable", then the device is in a state in which the server cannot use it. This may have been caused by a hardware issue. In this case, do the following to resolve the issue:
– Resolve the hardware issue so the device is available to the server.
– After the hardware issue has been resolved, use the cfgadm utility to verify device status and to mend the status if necessary.
• Use cfgadm to get device status:
cfgadm -al
– For a device that is "unusable", use cfgadm to unconfigure the device and then re-configure the device. For example (this is an example only; your device WWN will be different):
cfgadm -c unconfigure c4::100000e0022286ec
cfgadm -f -c configure c4::100000e0022286ec
– Use cfgadm again to verify that the condition of the device is no longer "unusable":
cfgadm -al
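To find every Ap_Id in the "unusable" condition before running the unconfigure/configure cycle above, the cfgadm listing can be filtered on its Condition column. The helper below is our own sketch of that filter; it assumes the condition is the last field of each row, as in the example output.

```shell
#!/bin/sh
# Sketch: print Ap_Ids whose Condition column (last field) is "unusable".
# Usage: cfgadm -al | unusable_apids
unusable_apids() {
    awk 'NR > 1 && $NF == "unusable" { print $1 }'
}
```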
IBM AIX Server
AIX 6.1 (TL9), AIX 7.1 (TL3)
Installing the HBA device driver
Install the IBM (5729/5735/5273/5758/5759/5773/5774) HBA and restart the server.
1. Ensure that the HBA is recognized. At the shell prompt, type:
lsdev -Cc adapter
There is a line in the output similar to the following:
fcs0 Available 1D-08 FC Adapter
If the adapter is not recognized, check that the correct HBA fileset (driver) is installed:
6228: lslpp -L | grep devices.pci.df1000f7
6239: lslpp -L | grep devices.pci.df1080f9
5716: lslpp -L | grep devices.pci.df1000fa
5759: lslpp -L | grep devices.pci.df1000fd
5773: lslpp -L | grep devices.pciex.df1000fe
5774: lslpp -L | grep devices.pciex.df1000fe
There are lines in the output for lslpp similar to the following for a 6239 HBA:
devices.pci.df1080f9.diag  5.1.0.1  C  F  PCI-X FC Adapter Device
devices.pci.df1080f9.rte   5.1.0.1  C  F  PCI-X FC Adapter Device
2. For information about the HBA, such as the WWN, execute the following command:
lscfg -vl fcs0
The output will look similar to the following:
3. To see the version of microcode (firmware) being run by the HBA, use the following command:
lsmcode -c -d <device>
Microcode and other updates can be found at ibm.com/support/fixcentral.
4. After the HBA has successfully logged into the SAN fabric and the necessary zoning is configured, configure the HBA and devices within the fabric. At the prompt, type:
cfgmgr -l <devicename> -v
Note: Running cfgmgr without a -l argument may generate a "devices.fcp.changer" error. This is a result of the cfgmgr device scan receiving a response from the autochanger device, for which AIX does not have a specific driver. Under these conditions, the error message does not indicate a problem and is for information only.
5. Within the command, <devicename> is the name from the output of the lsdev command in step 1, such as fcs0.
6. To ensure all tape device files are available, at the prompt type:
lsdev -HCc tape
7. By default, AIX creates tape devices with a fixed block length. To change the devices to have variable block lengths, at the prompt type:
chdev -l <tapedevice> -a block_size=0
8. Configuration of the tape devices (where tape devices are rmt0, rmt1, and so on) is complete.
Note: HPE LTO tape drives use the IBM host tape driver. When properly configured, a device listing will show the tape device as follows.
For IBM native HBAs: Other FC SCSI Tape Drive
For non-IBM native HBAs: Other SCSI Tape Drive
9. To configure Fast I/O Failure for Fibre Channel devices after link events in the SAN, change the fast fail parameter as in the example below:
chdev -l fscsiN -a fc_err_recov=fast_fail
Within the command, the N in fscsiN is the same number as in the name from the output of the lsdev command in step 1 (e.g., fcs0 would be fscsi0).
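Steps 6 and 7 can be combined in a small loop. The parsing helper below is our own sketch (it assumes the rmtN device naming noted in step 8); the chdev line in the usage comment is the command from step 7.

```shell
#!/bin/sh
# Sketch: extract rmtN tape device names from `lsdev -HCc tape` output.
# Usage: lsdev -HCc tape | tape_devices
tape_devices() {
    awk '$1 ~ /^rmt[0-9]+$/ { print $1 }'
}

# Example: switch every tape device to variable block length (step 7):
# for t in $(lsdev -HCc tape | tape_devices); do
#     chdev -l "$t" -a block_size=0
# done
```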
IBM AIX Server best practices
Persistent binding
To prevent device shifting after a host reboot, you can enable the dynamic tracking feature in earlier versions of AIX. AIX 7 and above should have this parameter enabled by default.
To enable dynamic tracking of FC devices, set this attribute to dyntrk=yes, as shown in the example:
chdev -l fscsiN -a dyntrk=yes
Within the command, the N in fscsiN is the same number as in the name from the output of the lsdev command in step 1 (e.g., fcs0 would be fscsi0).
Note: For an IBM Virtual I/O Server (VIOS) running AIX logical partitions (LPARs), when using N_Port ID Virtualization (NPIV) with AIX LPARs it is strongly recommended to upgrade VIOS to version 2.2.3.4 or greater.
Virtual machine support
VM software is used for partitioning, consolidating, and managing computing resources, allowing multiple unmodified operating systems and their applications to run in VMs that share physical resources. Each VM represents a complete system with processors, memory, networking, storage, and BIOS. See Table 5 for tape and disk support for virtualization products.
Table 5. VM tape/VTL/NAS support
VM Product | StoreEver Direct Attached SCSI | StoreEver Direct Attached SAS | StoreEver FC & FCoE SAN | StoreOnce VTL | StoreOnce iSCSI VTL | StoreOnce Catalyst over Ethernet (CoE) | StoreOnce Catalyst over Fibre Channel (CoFC) | StoreOnce NAS | Support Notes
Citrix XenServer Host No No No support statement for tape at this time
Citrix XenServer Guest VM
No Yes Yes No Yes For iSCSI tape devices the iSCSI Software Initiator must run in the VM operating system
D2D SAN shares must be accessed directly in the VM operating system not attached through the hypervisor
HPEVM Host Yes No Yes Yes Yes No Yes Tape drivemedia changer must not be attached to a guest VM while being used by the host
HPEVM Guest VM Yes No Yes Yes Yes No Yes Tape drivemedia changer must only be attached to a single guest VM at a time
Hyper-V Host Yes Yes Yes Yes Yes No Yes
Hyper-V Guest VM No No No Yes Yes No Yes For iSCSI tapemedia changer devices the iSCSI Software Initiator must run in the VM operating system
D2D NAS shares must be accessed directly in the VM operating system not attached through the hypervisor
VMware ESX Host Yes No No7 No7 No No No Direct attached SCSI tape drive is multi-target only (no multi-LUN) and must not be attached to a VM while being used by the host HPE does not test or validate direct attached SCSI tape on VMware ESXESXi hosts and does not provide support for tape drives and tape libraries in such configurations8
7. SAN tape devices (FC and iSCSI) are not supported directly by the VMware ESX host. Use vStorage API for Data Protection with a backup server and VM software snapshots to allow FC and iSCSI backups.
8. For ESX 4.1 Server tape support, see the ESX 4.1 Fibre Channel SAN Configuration Guide. For ESXi 5.0 Server tape support, see the ESXi 5.0 vSphere Storage Guide. For ESXi 5.1 Server tape support, see the ESXi 5.1 vSphere Storage Guide. For ESXi 5.5 Server tape support, see the ESXi 5.5 vSphere Storage Guide.
Table 5. VM tape/VTL/NAS support (continued)
VMware Guest VM Yes No No Yes Yes No9 Yes Direct attached SCSI tape drive is multi-target only (no multi-LUN) and must only be attached to a single VM at a time
For iSCSI tapemedia changer devices the iSCSI Software Initiator must run in the VM operating system
D2D NAS shares must be accessed directly in the VM operating system not attached through the hypervisor
HPE does not test or validate direct attached SCSI tape on VMware guest VMs and does not provide support for tape drives and tape libraries in such configurations
VMware vStorage API for Data Protection
Yes Yes Yes Yes Yes No Yes FC SANs and shared tape devices are limited to a physical backup server
Note: Be sure to do the following:
• Refer to your data protection and archiving software documentation for supported VM backup methods.
• Refer to the VM documentation for supported backup devices.
VMware Server
Note: VMware does not support ESXi SAN attached tape devices. VMware vStorage APIs for Data Protection (VADP) with an off-host backup server can be used to manage SAN devices.
• VADP offloads backup responsibility from ESXi hosts to a dedicated backup server or servers, which reduces the load on ESXi hosts. VADP provides full-image backup and restore capabilities for all VMs, and file-based backups for Microsoft Windows and Linux VMs.
• VMs can also be set up as backup servers with locally mapped NAS shares or iSCSI backup devices. Refer to your data protection and archiving software documentation for VM support.
• VMs can also be set up for LAN backup, the same as a regular client. Refer to your data protection and archiving software documentation for details.
• For recommendations on VMware VM backup and recovery to HPE StoreOnce Backup, go to hpe.com/storage/BURACompatibility, scroll down to Data Agile BURA Solution White Papers, then click the VMware hyperlink across from White Papers - Databases and Virtual Machines to view the associated white papers.
9. Yes when using HPE StoreOnce Recovery Manager Central (RMC) only.
HPE Integrity Virtual Machines
Hewlett Packard Enterprise supports, certifies, and sells HPE Integrity Virtual Machines (HPEVM) virtualization software on HPE Integrity servers.
HPEVM is an application installed on an HPE-UX Server that allows multiple unmodified operating systems (HPE-UX, Windows, and Linux) and their applications to run in VMs that share physical resources.
The HPE Virtual Server Environment (VSE) for HPE Integrity provides an automated infrastructure that can adapt in seconds with mission-critical reliability. HPE VSE allows you to optimize server utilization in real time by creating virtual servers that can automatically grow and shrink based on business priorities and service levels.
Note: The HPE Integrity VM host and VMs do support FC SAN connected tape, Virtual Library Systems (VLS) devices, and HPE StoreOnce Backup systems.
• Off-host backups using HPE storage array hardware mirroring or snapshots can be used to shorten backup windows and off-load resources required for backup.
• VMs can also be set up for LAN backup, the same as a regular client or media host. Refer to your data protection and archiving software documentation for details.
Microsoft Hyper-V
Note: Hewlett Packard Enterprise does not test or support Hyper-V VMs with SAN or direct attach tape drives. The Hyper-V host or a backup server can be used to manage such devices.
• The Volume Shadow Copy Service (VSS) Hyper-V writer can be used to quiesce Windows VMs and create a snapshot on the Hyper-V host volume. VMs that cannot be quiesced can be placed in the Saved state before snapshot creation. The snapshots are then used for image or file backup of the VMs. If a VM was placed in the Saved state, Hyper-V will return the VM to its original state. Review your data protection and archiving software documentation for details.
• VMs can also be set up for LAN backup, the same as a regular client. Refer to your data protection and archiving software documentation for details.
• VMs can also be set up as backup servers with locally mapped NAS shares or iSCSI backup devices. Refer to your data protection and archiving software documentation for VM support.
copy Copyright 2015 Hewlett Packard Enterprise Development LP The information contained herein is subject to change without notice The only warranties for Hewlett Packard Enterprise products and services are set forth in the express warranty statements accompanying such products and services Nothing herein should be construed as constituting an additional warranty Hewlett Packard Enterprise shall not be liable for technical or editorial errors or omissions contained herein
Citrix is a registered trademark of Citrix Systems, Inc. and/or one or more of its subsidiaries, and may be registered in the United States Patent and Trademark Office and in other countries. Linux is the registered trademark of Linus Torvalds in the U.S. and other countries. McAfee is a trademark or registered trademark of McAfee, Inc. in the United States and other countries. Microsoft, Windows, and Windows Server are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries. Oracle is a registered trademark of Oracle and/or its affiliates. Red Hat is a registered trademark of Red Hat, Inc. in the United States and other countries. SAP HANA is the trademark or registered trademark of SAP SE in Germany and in several other countries. VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions.
4AA5-7983ENW December 2015
Installing backup software and patches
After all components on the SAN are logged in and configured, the system is ready for the installation of any supported backup software. Refer to the installation guide for your particular software package, or contact the vendor for detailed installation procedures and requirements. After installing the backup software, check with the software vendor for the latest updates and patches. If any updates or patches exist for your backup software, install them now.
Learn more at hpe.com/storage/StoreEver
Zoning may not always be required for configurations that are small or simple, i.e., a single switch or single inter-switch link (ISL). Zoning can be helpful in larger SANs for simplifying device discovery and reducing chatter between devices. Hewlett Packard Enterprise recommends the following for determining how and when to use zoning:
• Use zoning by HBA port. Zoning by HBA port is implemented by creating a specific zone for each server or host by World Wide Port Name (WWPN) and adding only those storage elements to be utilized by that host. Zoning by HBA port prevents a server from detecting any other devices or servers on the SAN, and it simplifies the device discovery process.
• Disk and tape on the same HBAs is supported. For larger SAN environments, it is recommended to also add storage-centric zones for disk and backup targets. This type of zoning is done by adding overlapping zones with disk and backup targets separated. See figure 1 and figure 2 below.
• FC zoning can be implemented using physical switch port numbers, WWN IDs, or user-defined switch aliases. It is important to note that physical ports and aliases can change due to recabling or switch configuration restores, but WWN IDs do not. Hewlett Packard Enterprise recommends zoning using WWN IDs.
The figures below represent example configurations but are not exhaustive.
Figure 1. Storage-centric zoning, same HBA port (overlapping zones)
Figure 2. Storage-centric zoning, redundant paths. Also applies to dual-port HBAs and tape drives.
Configuration and operating system details
Windows Server
Windows 2008, Windows 2008 R2, Windows 2012, and Windows 2012 R2
Installing the HBA device driver
All HPE ProLiant server software, firmware, and drivers for Windows servers can be updated using the latest HPE Service Pack for ProLiant (SPP) from the HPE support website.
1. From an Internet browser, go to hpe.com.
2. Click Support.
3. Under Product Support, click HPE Servers, Storage and Networking.
4. In the Enter product name or number box, enter Service Pack for ProLiant.
5. Click Get drivers, software & firmware.
6. Select the Windows Server operating system version that is installed on the ProLiant system that will be updated.
7. Expand Application (Entitlement Required) - System Management, then select the HPE Service Pack for ProLiant (American, International) hyperlink.
8. Below the details for the software, you should notice several tabs. Select the Release Notes tab, then scroll to verify whether there are any Upgrade Requirements or to view additional information.
9. Select the Installation Instructions tab to verify how to install the SPP. Be sure to copy or save the installation instructions.
Note: A hyperlink to the HPE Service Pack for ProLiant Release Notes is provided within the Release Notes tab. The HPE Service Pack for ProLiant Release Notes provide detailed instructions regarding the SPP, including a summary of changes, compatibility details for migrating from an older version of the SPP, supported operating systems, requirements, component prerequisites, deployment instructions, and known limitations.
10. Click the Obtain software hyperlink above the various tabs to download the Service Pack for ProLiant to your server.
Note: To download the HPE Service Pack for ProLiant, you must have:
1. An HPE Passport account (a sign-in link is provided).
2. Either a warranty, HPE Care Pack, or support agreement linked to your HPE Support Center profile.
Click on the various links that are provided for more information on how warranties, HPE Care Packs, and support agreements enable access to select downloads or site functions.
11. Booting your Windows Server to the SPP (offline mode) allows you to upgrade firmware for any or all components that are flagged as requiring an upgrade. Booting to the operating system and then running the SPP (online mode) allows you to install any or all Hewlett Packard Enterprise related drivers and software packages.
Note: Please refer to the HPE Service Pack for ProLiant Release Notes, referenced above, if any issues are encountered when installing the SPP. Specifically review the sections Deployment Instructions and Components Changes.
12. A reboot might be required following the SPP installation.
To manually install the latest HPE-supported Brocade, Emulex, or QLogic driver kit from the HPE support website:
1. From an Internet browser, go to hpe.com.
2. Click on Support.
3. Under Product Support, click HPE Servers, Storage and Networking.
4. In the Enter product name or number box, enter your HBA model (for example, AP770A, AJ763A, AJ764A, CN1000Q, SN1000Q, CN1100E, SN1100E, or NC553m) and click Go.
5. Click Get drivers, software & firmware.
6. Select the Windows Server operating system version that is installed on the ProLiant system in which the HBA is installed.
7. Expand Driver - Storage Fibre Channel and click on the appropriate driver hyperlink (if more than one version of the driver is listed, verify the latest supported version listed in the latest HPE Data Agile BURA Compatibility Matrix).
8. Below the details for the driver, you should notice several tabs. Select the Release Notes tab, then scroll to verify if there are any Upgrade Requirements, Supported Devices and Features, or to view additional information.
9. Select the Installation Instructions tab to verify how to install the driver. Be sure to copy or save the installation instructions.
10. Click on the Download tab to copy the file to your server.
11. A reboot might be required following the driver installation.
Installing and configuring Microsoft iSCSI Initiator
Microsoft iSCSI Initiator is installed natively on Windows Server 2008, Windows Server 2008 R2, Windows Server 2012, and Windows Server 2012 R2. On these operating systems, no installation steps are required.
To connect to an iSCSI target device by using Quick Connect:
1. Click Start, type iSCSI in Start Search, and then under Programs, click on iSCSI Initiator.
2. On the User Account Control page, click Continue.
3. If this is the first time that you have launched Microsoft iSCSI Initiator, you receive a prompt that says the Microsoft iSCSI service is not running. You must start the service for Microsoft iSCSI Initiator to run correctly. Click on Yes to start the service. The Microsoft iSCSI Initiator Properties dialog box opens, and the Targets tab is displayed.
4. On the Targets tab, type the name or the IP address of the target device in the Quick Connect text box, and then click Quick Connect. The Quick Connect dialog box is displayed.
5. If multiple targets are available at the target portal that is specified, a list is displayed. Click the desired target, and then click Connect.
6. Click Done.
To connect to an iSCSI target by using advanced settings:
1. Click Start, type iSCSI in Start Search, and then under Programs, click on iSCSI Initiator.
2. On the User Account Control page, click Continue.
3. If this is the first time that you have launched Microsoft iSCSI Initiator, you receive a prompt that says the Microsoft iSCSI service is not running. You must start the service for Microsoft iSCSI Initiator to run correctly. Click on Yes to start the service. The Microsoft iSCSI Initiator Properties dialog box opens, and the Targets tab is displayed.
4. Click the Discovery tab.
5. To add the target portal, click Discover Portal, and then in the Discover Portal dialog box, type the IP address or name of the target portal to connect to. If desired, you can also type an alternate TCP port to be used for the connection.
6. Click OK.
Installing the HPE StoreEver Tape drivers
Both the HPE tape and HPE changer drivers for Windows must be installed before the advanced path failover drivers are installed. The tape and changer drivers bundle can be downloaded and installed as follows:
1. Go to hpe.com/storage/tapecompatibility.
2. Under Tape tools, select HPE StoreEver Tape Drivers.
3. A webpage will open with RECOMMENDED HPE StoreEver Tape Drivers for Windows displayed.
4. Below the details for the driver, you should notice several tabs. Select the Release Notes tab, then scroll to verify if there are any Upgrade Requirements or to view additional information.
5. Select the Installation Instructions tab to verify how to install the driver. Be sure to copy or save the installation instructions.
6. Click on the Download tab, then save the file.
7. Follow the installation instructions from step 5 to install the HPE tape and HPE changer drivers.
8. After installation of the tape and changer drivers, use Windows Device Manager to confirm that all of the configured paths are accessible to the operating system. If the expected number of paths are not available, check the host and SAN configuration. After all of the expected paths are available to the host, the advanced path failover drivers can be installed.
Installing the HPE StoreEver Tape advanced path failover drivers
Windows (2008 R2, 2012, and 2012 R2)
1. Go to hpe.com/support/storage.
2. Select Tape Storage.
3. Click Enterprise Class Tape Libraries (for ESL G3) or Tape Libraries (for MSL6480).
4. Click HPE StoreEver ESL G3 Tape Libraries or HPE StoreEver MSL6480 Tape Library.
5. With the Download options tab selected, click Get drivers, software & firmware.
6. For the ESL G3, select your product. For MSL6480, skip to the next step.
7. Under Operating systems, select OS Independent.
8. Click Driver - Storage Tape.
9. Click Obtain software for the HPE StoreEver High Availability Failover Driver for your Windows operating system.
10. Click Select to continue. An HPE Passport account is required.
11. After logging in using your HPE Passport, complete the required fields, then read and accept the software license agreement. Click Next.
12. On the following page, select the Download tab for the HPE StoreEver High Availability Failover Driver for the version of Windows that is installed on your server.
13. If you saved the file, double-click on the file to launch the installer for the Tape Upper Bus Storage Filter driver.
14. Restart when requested.
15. After the system restarts, the installer will continue installing the Tape Multi-Path Intermediate Class driver. The installation process creates a directory: C:\Program Files\Hewlett-Packard\Failover.
For more detailed information related to installing the HPE StoreEver Tape advanced path failover drivers for Windows or the HPE StoreEver Tape drivers for Windows, refer to the HPE StoreEver Tape Libraries Failover User Guide, specifically the section titled Installing and using Windows advanced path failover drivers.
Windows Server best practices
Persistent binding
Target and LUN shifting can occur with Windows operating systems when disk or tape devices are connected or disconnected, a device is busy during discovery, or a device failure occurs, causing that device to no longer be seen on the SAN. Hewlett Packard Enterprise strongly recommends using zoning by HBA port to resolve target and LUN shifting. Refer to the earlier section FC/FCoE switch zoning recommendations for more on zoning by HBA port.
The method in which the Windows operating system enumerates devices is the cause of most target and LUN shifting. Windows enumerates devices as they are discovered during a scan sequence. They are enumerated with device handles such as TAPE0, TAPE1, and so on. The Windows device scan sequence goes in the order of bus, target, and LUN.
• Bus is the HBA PCI slot.
• Target is representative of a WWN.
• LUN is representative of a device behind the WWN.
The order of discovery is:
• The lowest bus (bus 0)
• Target 0 on bus 0
• The LUNs on target 0
• Target 1, and so on, until all targets connected to that HBA are discovered
The process then continues on to the next HBA and its targets and LUNs. A common cause for device shifting is a busy tape device. A busy tape device cannot respond in time for Windows to enumerate it. The device is essentially skipped in the enumeration sequence, thus shifting all other device numbers.
Note: Emulex OneCommand Manager Application Kit and QLogic QConvergeConsole Utility both have proprietary persistent binding options. Review the appropriate vendor documentation for details.
Data protection and archiving software can also communicate with a tape device by using the Windows device name. As noted, the device name may shift and cause a problem for the data protection and archiving software. Some data protection and archiving software monitors for this condition and will adjust accordingly. Other data protection and archiving software must wait for a server reboot and subsequently scan for devices. Alternatively, the data protection and archiving software must be manually reconfigured to match the current device list. If your data protection and archiving software requires persistent device mapping, use the software's device configuration wizard to ensure proper configuration.
Red Hat and SUSE Linux Server
RHEL 5 Update 11 (AS/ES/WS), RHEL 6 Update 6 (AS/ES/WS), RHEL 7 (AS/ES/WS), SLES 11 SP3 (x86/x64), SLES 12
Note: Hewlett Packard Enterprise recommends installing the kernel development option (source code) when installing any Linux server. Availability of source code ensures the ability to install additional device support software that will be compiled into the kernel.
Installing the HBA drivers
All HPE ProLiant server software, firmware, and drivers can be updated using the latest SPP from the HPE support website.
1. From an Internet browser, go to hpe.com.
2. Click on Support.
3. Under Product Support, click HPE Servers, Storage and Networking.
4. In the Enter product name or number box, enter Service Pack for ProLiant.
5. Click Get drivers, software & firmware.
6. Select the Linux Server operating system version that is installed on the ProLiant system that will be updated.
7. Under Application (Entitlement Required) - System Management, select the HPE Service Pack for ProLiant (American International) hyperlink.
8. Below the details for the software, you should notice several tabs. Select the Release Notes tab, then scroll to verify if there are any Upgrade Requirements or to view additional information.
9. Select the Installation Instructions tab to verify how to install the SPP. Be sure to copy or save the installation instructions.
Note: A hyperlink to the HPE Service Pack for ProLiant Release Notes is provided within the Release Notes tab. The HPE Service Pack for ProLiant Release Notes provide detailed instructions regarding the SPP, including a summary of changes, compatibility details for migrating from an older version of the SPP, supported operating systems, requirements, component prerequisites, deployment options, and known limitations.
10. Click on the Obtain software hyperlink above the various tabs to download the Service Pack for ProLiant to your server.
Note: To download the HPE Service Pack for ProLiant, you must have:
1. An HPE Passport account (a sign-in link is provided).
2. Either a warranty, HPE Care Pack, or support agreement linked to your HPE Support Center profile.
Click on the various links that are provided for more information on how warranties, HPE Care Packs, and support agreements enable access to select downloads or site functions.
11. Booting your server to the SPP (offline mode) allows you to upgrade firmware for any or all components that are flagged as requiring an upgrade. Booting to the OS and then running the SPP (online mode) allows you to install any or all Hewlett Packard Enterprise related drivers and software packages.
Note: Please refer to the HPE Service Pack for ProLiant Release Notes, referenced above, if any issues are encountered when installing the SPP. Specifically review the sections Deployment Instructions and Components Changes.
12. A reboot might be required following the SPP installation.
To manually install the latest HPE-supported Brocade, Emulex, or QLogic driver kit from the HPE support website:
1. From an Internet browser, go to hpe.com.
2. Click on Support.
3. Under Product Support, click HPE Servers, Storage and Networking.
4. In the Enter product name or number box, enter your HBA model (for example, AP770A, AJ763A, AJ764A, CN1000Q, SN1000Q, CN1100E, SN1100E, or NC553m) and click Go.
5. Click Get drivers, software & firmware.
6. Select the Linux Server operating system version that is installed on the ProLiant system in which the HBA is installed.
7. Expand Driver - Storage Fibre Channel, then click on the appropriate driver hyperlink (if more than one version of the driver is listed, verify the latest supported version listed in the latest HPE Data Agile BURA Compatibility Matrix).
8. Below the details for the driver, you should notice several tabs. Select the Release Notes tab, then scroll to verify if there are any Upgrade Requirements or to view additional information.
9. Select the Installation Instructions tab to verify how to install the driver. Be sure to copy or save the installation instructions.
10. Click on the Download tab to copy the file to your server.
11. A reboot might be required following the driver installation.
Installing the Linux OPEN-iSCSI module
You can install and use the iscsi-initiator-utils package (Red Hat) or open-iscsi module (SUSE). Download and install either of the packages using your distribution's package manager (yum or YaST, for example). Detailed instructions for iscsiadm can be found in the iscsiadm man documents.
Prior to discovering available iSCSI target devices on an HPE Storage System from a Linux server, the target requires the Linux server iSCSI initiator name. This name is found in the /etc/iscsi/initiatorname.iscsi file.
Once the iSCSI initiator name has been determined, discover available iSCSI target devices on an HPE Storage System by typing the following command, where x.x.x.x is the IP address of the HPE Storage System:
iscsiadm --mode discovery --type sendtargets --portal x.x.x.x
To connect to the HPE Storage System targets, type the following command for each discovered target, where target_name is the name returned by the iscsiadm discovery:
iscsiadm --mode node -T target_name --login --portal x.x.x.x
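The discovery and login commands above can be combined into a small script. This is a minimal sketch, assuming the standard sendtargets output format "<ip>:<port>,<tpgt> <target_iqn>"; parse_targets and login_all_targets are illustrative helper names, not part of open-iscsi.

```shell
#!/bin/sh
# Extract the target IQN (second field) from each sendtargets output
# line read on stdin.
parse_targets() {
    awk '{print $2}'
}

# Discover every target behind one portal and log in to each of them.
login_all_targets() {
    portal="$1"
    iscsiadm --mode discovery --type sendtargets --portal "$portal" |
        parse_targets |
        while read -r target; do
            iscsiadm --mode node -T "$target" --login --portal "$portal"
        done
}

# Example: login_all_targets 192.168.0.10
```

After logging in, the new block devices appear in the output of the verification commands described later in this section.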
Storage HBAs with Linux servers
Hewlett Packard Enterprise supports the Linux in-box driver (the driver supplied with the operating system distribution). However, not all hardware configurations support the in-box driver. To determine if your configuration supports the in-box driver, see the HBAs, CNAs, Flex Fabric Adapters, and Server LOMs Support Matrix (Linux, Citrix, VMware, and Windows), which is available on the HPE SPOCK website. You must sign up for an HPE Passport to access SPOCK.
Whether you are using the in-box drivers or the out-of-box drivers, Hewlett Packard Enterprise recommends that you install the HPE Fibre Channel Enablement Kit because it provides additional libraries and configuration utilities to enable HPE Fibre Channel Storage Arrays to work properly with Linux.
Note: If you are using any HPE management applications, you need the HBA API libraries that come with the HPE-fc-enablement RPM.
Downloading and installing the Fibre Channel Enablement Kit for Linux
Note: There has been a change to the enablement kits released after 29 April 2014. They are now vendor-specific kits. Hewlett Packard Enterprise recommends that you uninstall any previous kits prior to installation of the latest version of the enablement kit.
1. From an Internet browser, go to hpe.com.
2. Click on Support.
3. Under Product Support, click HPE Servers, Storage and Networking.
4. In the Enter product name or number box, enter your HBA model (for example, AP770A, AJ763A, AJ764A, CN1000Q, CN1100E, or NC553m) and click Go.
5. Click Get drivers, software & firmware.
6. Select the Linux Server operating system version of the ProLiant system in which the HBA is installed.
7. Select the Software - Storage Controllers - FC HBA hyperlink and click on the HPE Fibre Channel Enablement Kit for Linux (American International).
8. Below the details for the driver, you should notice several tabs. Select the Release Notes tab, then scroll to verify if there are any Upgrade Requirements or to view additional information.
9. Select the Installation Instructions tab to verify how to install the HPE Fibre Channel Enablement Kit for Linux. Be sure to copy or save the installation instructions.
10. Click on the Download tab to copy the RPM file to your server.
11. Browse to the directory that you downloaded the RPM to.
12. Follow the installation instructions that you copied or saved in step 9.
13. A reboot is required after the installation for the updates to take effect and hardware stability to be maintained.
14. Verify that the host has successfully discovered all the expected devices (tape drives, library robotic devices, and disk-based backup devices) using one of the following methods:
– Review the devices listed from running the command cat /proc/scsi/scsi
– Review the output from the sg_inq command (which requires that sg3_utils is installed) for any of the /dev/sg devices listed in the output of the sg_map command. See figure 3 as an example.
Figure 3 Verifying devices using sg_map and sg_inq commands
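The sg_map/sg_inq verification in step 14 can be scripted. A minimal sketch, assuming sg_map and sg_inq from sg3_utils are installed; first_field is an illustrative helper that pulls the /dev/sg node out of the first column of sg_map output.

```shell
#!/bin/sh
# Print the first whitespace-separated field of each line on stdin
# (sg_map prints the /dev/sg node in its first column).
first_field() {
    awk '{print $1}'
}

# Inquire every generic SCSI device that sg_map reports, so tape drives,
# library robotics, and disk-based backup devices can be identified.
if command -v sg_map >/dev/null 2>&1; then
    for sg in $(sg_map | first_field); do
        echo "== $sg =="
        sg_inq "$sg"
    done
fi
```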
Installing the HPE StoreEver Tape advanced path failover drivers
Red Hat Enterprise Linux Server 6.2 (x86_64), 6.3 (x86_64), 6.4 (x86_64), 6.5 (x86_64), and 6.6 (x86_64)
The advanced path failover drivers for Linux replace the normal SCSI Tape and SCSI Generic drivers. They pass all SCSI commands for devices that do not support advanced path failover through the same code path that is followed when the standard drivers are loaded, and route commands for devices that do support failover through the new PFO driver.
1. Go to hpe.com/support/storage.
2. Select Tape Storage.
3. Click Enterprise Class Tape Libraries (for ESL G3) or Tape Libraries (for MSL6480).
4. Click HPE StoreEver ESL G3 Tape Libraries or HPE StoreEver MSL6480 Tape Library.
5. In the Download options tab, click Get drivers, software & firmware.
6. For the ESL G3, select your product. For MSL6480, skip to the next step.
7. Under Operating systems, select OS Independent.
8. Expand Driver - Storage Tape, then select the appropriate driver hyperlink.
9. Click Obtain software for the HPE StoreEver High Availability Failover Driver for your operating system.
10. Click on Select. An HPE Passport account (a sign-in link is provided) is required.
11. After logging in using your HPE Passport, complete the required fields, then read and accept the software license agreement. Click Next.
12. On the following page, select the Download tab for the HPE StoreEver High Availability Failover Driver for the version of Red Hat that is installed on your server.
13. To install the drivers, run the following command:
rpm -ivh <filename>.rpm
14. In some cases, the server will need to be rebooted to complete the installation. Check the instructions provided by the RPM file output and reboot the server if requested.
15. The driver revision number indicates the build date of the driver and can be viewed by running:
cat /proc/scsi/sg/version
16. You can view the status of a device that is controlled by the failover driver by reading a file in the /sys file system. For example, to see the path status for /dev/sg3:
cat /sys/class/pfo/pfo3/paths
Enabling advanced path failover on a device while the driver is running
If advanced path failover is enabled on a device that previously had it disabled, the device resets itself, removing the old /dev file. When the device comes back up, it is recognized as an advanced path failover device and then operates normally as one. It may not have the same /dev file name as before the change.
Disabling advanced path failover on a device while the driver is running
Disabling advanced path failover while a device is running is not recommended because the paths will not be cleanly removed and then reassociated. If advanced path failover is disabled on any device, the Linux server will need to be rebooted. When possible:
1. Power down the Linux server cleanly.
2. Disable advanced path failover on the device.
3. Boot the Linux server.
For more detailed information related to installing the HPE StoreEver Tape advanced path failover drivers for Red Hat Linux Servers, refer to the HPE StoreEver Tape Libraries Failover User Guide, specifically the section titled Installing and using Linux advanced path failover drivers.
Additional SG device files
In most environments, the default number of SG device files is sufficient to support all of the required devices. In larger SAN environments, if the default number of SG device files is fewer than the combined total of disk and tape devices being allocated to the server, additional device files need to be created. SG device files are preferable to the standard SCSI tape (ST) device files because the ST driver's SCSI timeout values may not be long enough for some tape operations.
To create additional SG device files, perform the following:
mknod /dev/sgX c 21 X
X signifies the number of a device file that does not already exist. For additional command options, see the mknod man page.
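The mknod invocation above can be wrapped in a small helper. A minimal sketch: next_free_sg is an illustrative function (not a standard utility) that finds the first /dev/sgX node that does not yet exist; major number 21 is the Linux sg driver's character major, matching the command in the text. Creating the nodes requires root.

```shell
#!/bin/sh
# Find the lowest sgX number that is not yet present in the given
# directory (defaults to /dev) and print it.
next_free_sg() {
    dir="${1:-/dev}"
    n=0
    while [ -e "$dir/sg$n" ]; do
        n=$((n + 1))
    done
    echo "$n"
}

# Example (as root): create SG device files up to /dev/sg63.
# i=$(next_free_sg /dev)
# while [ "$i" -le 63 ]; do
#     mknod "/dev/sg$i" c 21 "$i"
#     i=$((i + 1))
# done
```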
Red Hat and SUSE Linux Server best practices
Rewind commands being issued by rebooted Linux hosts
Device discovery that occurs as part of a normal Linux server boot operation can cause a SCSI rewind command to be issued to all attached tape drives if the data protection and archiving software does not employ SCSI reserve/release and the rewind command is received while the tape drive is busy writing. The result is a corrupted tape header and an unusable piece of backup media.
This issue could manifest itself in several ways:
• A failed verify operation
• A failed restore operation
• The inability to mount a tape and read the tape header
If a backup verification is not completed, the normal backup process might not detect that an issue exists.
Tape devices not discovered and configured across server reboots
Tape drives can disappear from Linux servers after the host reboots when using older versions of the HBA drivers. Adding the line "hp_rescan -a" to /etc/rc.d/rc.local resolves the issue. The hp_rescan utility was previously included and installed with older versions of the HPE Linux FCoE/FC Driver Kit.
Note: The latest versions of the Fibre Channel Enablement for Linux kits no longer include the HPE fibreutils package, which contains the hp_rescan utility. The fibreutils package can be manually downloaded and installed by following steps 1-6 from the section Downloading and installing the Fibre Channel Enablement Kit for Linux. For step 5, select Software - Storage Controllers - FC HBA.
This issue, which affects Red Hat installations and, intermittently, some SUSE Linux installations, is understood to be an issue with the mid-layer SCSI driver and its interaction with SCSI-2 tape automation products. The permanent resolution to this issue is to upgrade to the latest FC driver kit.
Enable iSCSI target devices to remain persistent across system reboots
To enable the iSCSI target devices to remain persistent across system reboots, the open-iscsi service must be configured to run at system startup. This can be done by issuing the following commands:
For Red Hat 7 and SUSE 12:
systemctl enable iscsid.service
systemctl restart iscsid.service
For earlier versions of Red Hat:
chkconfig iscsi on
For earlier versions of SUSE:
chkconfig open-iscsi on
To verify that this configuration change has been accepted, run the following command:
For Red Hat 7 and SUSE 12:
systemctl -a | grep iscsi
iscsi-shutdown.service   error    inactive  dead     Logout off all iSCSI sessions on shutdown
iscsi.service            loaded   inactive  dead     Login and scanning of iSCSI devices
iscsid.service           loaded   active    running  Open-iSCSI
iscsiuio.service         loaded   active    running  iSCSI UserSpace IO driver
iscsid.socket            loaded   active    running  Open-iSCSI iscsid Socket
iscsiuio.socket          loaded   active    running  Open-iSCSI iscsiuio Socket
For earlier versions of Red Hat:
chkconfig --list iscsi
iscsi    0:off 1:off 2:off 3:on 4:off 5:on 6:off
SUSE servers:
chkconfig --list open-iscsi
open-iscsi    0:off 1:off 2:off 3:on 4:off 5:on 6:off
LUNs shifting after reboot
The Linux 2.6 kernel and later enhanced the management of attached devices through the introduction of udev. The udev device manager provides users with a persistent naming process for all devices across reboots. For details on how to configure udev, refer to the appropriate Linux distribution documentation.
If your data protection and archiving software requires persistent device mapping, use the software's device configuration wizard to ensure proper configuration.
Recommended changes to queue depth and timeout values
Changes to queue depth and timeout values are recommended when operating HPE StoreEver ESL G3 Tape Libraries and HPE StoreEver MSL Tape Libraries with Linux-based hosts. Recommended changes are as follows:
• Queue depth
The queue depth when operating the MSL6480 or MCB Version 1 ESL G3 libraries should be set to one (1), as there is only one active robot to complete Move Medium commands. With dual-robot MCB Version 2 ESL G3 libraries, the queue depth should be set to two (2), as the library has two active robots that can complete Move Medium commands.
HPE LTO drives are capable of handling command queues of four or five commands, but if hosts continue to send commands past that amount, the drive (or the library being hosted by that drive) will start to return status messages saying that the queue is full and the host should wait 500 ms. If the host doesn't stop sending commands at this point, the delays in returning status for commands can be long enough that the drive appears hung. As such, take care to ensure that the queue depth is set correctly to avoid this scenario, preferably by using the recommended queue depths provided above.
With Linux-based hosts, this command shows the queue depth that is currently set for each generic SCSI device:
find /sys/class/scsi_generic/*/device/queue_depth -exec grep -H . {} \;
If required, the sg_inq command can help correlate the SCSI generic mapping to specific devices:
sg_inq /dev/sg<N>
Note: <N> is the sg number provided in the output from the previous command.
• Timeouts
Most applications increase the timeout value for motion commands but continue to rely on the default timeout value for Read Element Status commands. This can especially be of concern in larger partitioned libraries, where multiple Read Element Status commands to the different partitions are outstanding and the default timeout value is reached before a response is provided. Depending on whether udev rules are in effect, the default timeout value on Linux-based hosts tends to be either thirty (30) or sixty (60) seconds. Given all of the above, it is recommended that the default timeout value be changed to twenty minutes on all Linux-based hosts operating HPE LTO drives in HPE StoreEver ESL G3 Tape Libraries and HPE StoreEver MSL Tape Libraries, in order to allow multiple commands to complete successfully without hitting the default timeout value.
With Linux-based hosts, this command shows the default timeout value that is currently set for each generic SCSI device:
find /sys/class/scsi_generic/*/device/timeout -exec grep -H . {} \;
If required, the sg_inq command can help correlate the SCSI generic mapping to specific devices:
sg_inq /dev/sg<N>
Note: <N> is the sg number provided in the output from the previous command.
The detailed procedure for making the recommended changes to the command queuing and default timeout values for Linux-based hosts can be viewed in the following Engineering Advisory: HPE StoreEver ESL G3 Tape Libraries, MSL Tape Libraries, and 1/8 G2 Autoloaders - Recommended Changes to Queue Depth and Timeout Values With Linux-Based Hosts.
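The recommended values can be applied through the same sysfs files that the find commands above read. This is a minimal sketch, not the Engineering Advisory's procedure: tune_scsi_generic is an illustrative helper that writes one queue depth and one timeout to every generic SCSI device (in practice you may want to target only the tape and changer devices), and the sysfs root is a parameter so the logic can be exercised against a mock tree. The values shown follow the guide: queue depth 1 (single-robot MSL6480/MCB Version 1 ESL G3) and a twenty-minute (1200-second) timeout.

```shell
#!/bin/sh
# Write the given queue depth and timeout (seconds) to every
# /sys/class/scsi_generic/*/device under the given sysfs root.
tune_scsi_generic() {
    sysfs_root="$1"
    queue_depth="$2"
    timeout_secs="$3"
    for dev in "$sysfs_root"/class/scsi_generic/*/device; do
        [ -d "$dev" ] || continue
        if [ -w "$dev/queue_depth" ]; then
            echo "$queue_depth" > "$dev/queue_depth"
        fi
        if [ -w "$dev/timeout" ]; then
            echo "$timeout_secs" > "$dev/timeout"
        fi
    done
}

# Example (as root): tune_scsi_generic /sys 1 1200
```

Note that these sysfs writes do not persist across reboots; re-apply them from a boot script or a udev rule.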
HPE-UX Server
Installing HBA drivers in the kernel
HPE-UX 11i v2 (11.23, IA-64)
1. The drivers schgr, sctl, and stape must all be installed in the kernel. To see if these drivers are installed, enter the following command:
/usr/sbin/kcmodule schgr sctl stape
The following example shows output from kcmodule where the stape driver is not installed:
Module   State    Cause
schgr    static   explicit
sctl     static   depend
stape    unused
If one or more of the above drivers is in the unused state, they must be installed in the kernel. If they are all installed (static state), proceed to the next section, Final host configurations.
2. Use kcmodule to install modules in the kernel. For example, to install the stape module, run the following command:
/usr/sbin/kcmodule stape=static
Enter yes to back up the current kernel configuration file and initiate the new kernel build.
3. Reboot the server to activate the new kernel:
cd /
/usr/bin/shutdown -r now
HPE-UX 11i v3 (11.31, IA-64)
1. The drivers schgr, sctl, stape, eschgr, esctl, and estape must all be installed in the kernel. To see if these drivers are installed, enter the following command:
/usr/sbin/kcmodule sctl esctl schgr eschgr stape estape
The following example shows output from kcmodule where the stape driver is not installed:
Module   State    Cause
sctl     static   best
esctl    static   depend
schgr    static   best
eschgr   static   best
stape    unused
estape   static   best
If one or more of the above drivers is in the unused state, they must be installed in the kernel. If they are all installed (static state), proceed to the next section, Final host configurations.
2. Use kcmodule to install modules in the kernel. For example, to install the stape module, use the following command:
/usr/sbin/kcmodule stape=static
Enter yes to back up the current kernel configuration file and initiate the new kernel build.
3. Reboot the server to activate the new kernel:
cd /
/usr/bin/shutdown -r now
Installing the HPE-UX iSCSI Software Initiator
The iSCSI Software Initiator is located at the HPE Software Depot.
1. Go to software.hp.com.
2. Enter iSCSI Software Initiator in the Search Software Depot box located on the upper right side of the website.
3. When the search results show iSCSI Software Initiator, click on Select. An HPE Passport account (a sign-in link is provided) is required.
4. After logging in using your HPE Passport, complete the required fields, scroll down, then read and accept the software license agreement for the order. Click Next.
5. Under Documentation, click on the Download tab for the Installation Instructions to download instructions for using the Software Distributor tool to install the iSCSI Software Initiator.
6. Under Software, click on the Download tab for the iSCSI Software Initiator version that you would like to download.
7. After installing the iSCSI Software Initiator and rebooting, you can verify that the installation was successful by running the following command:
swlist iSCSI-00
If the HPE-UX iSCSI Software Initiator is installed correctly, the output will be:
HPE-UX 11.23:
# Initializing...
# Contacting target "localhost"...
# Target: localhost
iSCSI-00             B.11.23.03e   HPE-UX iSCSI Software Initiator
iSCSI-00.ISCSI-SWD   B.11.23.03e   HPE-UX iSCSI Software Initiator
HPE-UX 11.31:
# Initializing...
# Contacting target "localhost"...
# Target: localhost
iSCSI-00             B.11.31.01    HPE-UX iSCSI Software Initiator
iSCSI-00.ISCSI-SWD   B.11.31.01    HPE-UX iSCSI Software Initiator
Final host configurations
1. Run ioscan to verify that the host detects the tape devices:
ioscan
For HPE-UX 11.23 legacy device special files (DSFs) or persistent DSFs, run the following commands:
ioscan -fnkC tape
ioscan -fnkC autoch
2. For HPE-UX 11.31 persistent DSFs, run the following commands:
ioscan -fnNkC tape
ioscan -fnNkC autoch
Note: Some data protection and archiving software might not currently support HPE-UX 11.31 persistent DSFs for tape. Review your data protection and archiving software documentation for more information.
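To check that every expected drive is present, the ioscan output from step 2 can be filtered for devices in the CLAIMED state. This sketch uses captured sample output in a simplified column layout (device names and hardware paths are illustrative); on a live host you would pipe `ioscan -fnNkC tape` into the same filter.

```shell
# Count CLAIMED tape devices in ioscan-style output.
# Captured sample output (illustrative paths) stands in for a live scan.
ioscan_output='tape  3  64000/0xfa00/0x0  estape  CLAIMED  DEVICE  HP  Ultrium 6-SCSI
tape  4  64000/0xfa00/0x1  estape  NO_HW    DEVICE  HP  Ultrium 6-SCSI'

printf '%s\n' "$ioscan_output" | awk '$5 == "CLAIMED" { n++ } END { print n+0 }'
# prints: 1
```

If the count is lower than the number of drives zoned to the host, recheck the SAN zoning and rerun ioscan before proceeding.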
3. To verify that the host detects iSCSI devices, issue the ioscan command as follows for HPE-UX 11.23:
ioscan -fnC iscsi
Issue the ioscan command as follows for HPE-UX 11.31:
ioscan -fnNC iscsi
If the software is installed correctly, the generated output will look similar to this:
Class  I  HW Path  Driver  SW State  HW Type  Description
=====================================================================
iscsi  0  255/0    iscsi   CLAIMED   VIRTBUS  iSCSI Virtual Node
4. If no device files have been installed, enter the following commands:
insf -C tape -e
insf -C autoch -e
The command-line tool for configuring the HPE-UX iSCSI Software Initiator is iscsiutil. Detailed instructions for iscsiutil can be found in the iscsiutil man pages. If using iscsiutil to configure the HPE-UX iSCSI Software Initiator, add the path for iscsiutil and other iSCSI executables to the root path:
PATH=$PATH:/opt/iscsi/bin
You should now be able to find the iSCSI initiator node for the HPE-UX host:
iscsiutil -l
To discover available iSCSI target devices on an HPE Storage System, type the following command, where xxxx is the IP address of the HPE Storage System:
iscsiutil -a -I xxxx
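The two iscsiutil steps above can be wrapped in a small dry-run helper that prints the commands before you run them on the HPE-UX host. This is a sketch: the function name is my own, the IP address is a documentation placeholder (192.0.2.10), and the commands are echoed rather than executed so the sequence can be reviewed first.

```shell
# Dry-run sketch of the iSCSI target discovery sequence.
# Echoes the iscsiutil commands instead of running them, so the sequence
# can be reviewed before executing it as root with /opt/iscsi/bin on PATH.
discover_iscsi_targets() {
  target_ip=$1
  echo "iscsiutil -l"                 # show this host's initiator node name
  echo "iscsiutil -a -I $target_ip"   # add the discovery target address
}

discover_iscsi_targets 192.0.2.10
```

Replace the placeholder address with the IP address of your HPE Storage System, then run the printed commands (or drop the echo wrappers) on the host itself.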
Installing the HPE StoreEver Tape advanced path failover drivers for HPE-UX 11.31
Advanced path failover for HPE-UX is implemented by updating HPE-UX drivers to support advanced path failover with the LTO-6 tape drives. The drivers function as both failover and non-failover drivers.
The updated drivers are:
• HPE-UX tape driver (estape), used for data path failover
• HPE-UX media changer driver (eschgr), used for control path failover
• HPE-UX SCSI stack driver (esctl), used for data path and control path failover
To download and install the HPE StoreEver Tape advanced path failover drivers for HPE-UX 11.31:
1. Get the latest HPE-UX patches from h20566.www2.hpe.com/portal/site/hpsc/patch/home
Note: To access and download HPE-UX patches, you must have:
1. An HPE Passport account (a sign-in link is provided)
2. An active HPE support agreement linked to your HPE Support Center profile. The active Hewlett Packard Enterprise support agreement must:
– Cover the specific HPE operating systems linked to your HPE Support Center user profile
– Include software updates or previous-version support privileges
Links are provided to view your current patch access privileges or to contact Hewlett Packard Enterprise.
2. To locate the patches, search for estape, eschgr, and esctl or the patch number, and then check the superseded-by field to see if there is a superseding patch.
3. To install the advanced path failover drivers, use the standard HPE-UX kernel patch installation process to install the following patches on the HPE-UX Servers running HPE-UX 11.31:
– HPE-UX tape driver patch (estape): PHKL_43680 or superseding patch
– HPE-UX media changer driver patch (eschgr): PHKL_43681 or superseding patch
– HPE-UX SCSI stack (mass storage stack) driver patch (esctl): PHKL_43819 or superseding patch
4 The server will automatically reboot as part of the installation process
You can use ioscan to view the tape and library (media changer) devices connected to the HPE-UX Server. The device special file (DSF) is listed as the last item in the description:
ioscan -knNfC tape
ioscan -knNfC autoch
Finding the lockdown path
The load-balance policy used to route data on multiple paths to a tape drive or library is called the "path_lockdown" policy. Use the scsimgr get_info command to see the current lockdown path for a library. For example:
scsimgr get_info -D /dev/rchgr/autoch35
STATUS INFORMATION FOR LUN /dev/rchgr/autoch35
...
LUN path used when policy is path_lockdown = 0/0/9/0/0/1.0x50014382c6e4f009.0x1000000000000
For a tape drive, use scsimgr get_attr:
scsimgr get_attr -D /dev/rtape/tape28_BEST
SCSI ATTRIBUTES FOR LUN /dev/rtape/tape28_BEST
name = lpt_lockdown
current = 0/0/9/0/0/0.0x100000e00222a6c1.0x2000000000000
default =
saved =
For additional information, see the HPE-UX man pages: scsimgr(1M), ioscan(1M), mknod(2), mksf(1M), rmsf(1M).
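The lockdown path can be pulled out of the scsimgr output programmatically. A minimal sketch, run here against captured sample output (the device file and WWN values are illustrative); on a live host you would pipe `scsimgr get_info -D <device>` into the same filter.

```shell
# Extract the current lockdown path from scsimgr get_info output.
# Captured sample output stands in for a live query.
scsimgr_output='STATUS INFORMATION FOR LUN /dev/rchgr/autoch35
LUN path used when policy is path_lockdown = 0/0/9/0/0/1.0x50014382c6e4f009.0x1000000000000'

printf '%s\n' "$scsimgr_output" | sed -n 's/.*path_lockdown = //p'
# prints: 0/0/9/0/0/1.0x50014382c6e4f009.0x1000000000000
```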
Troubleshooting advanced path failover for HPE-UX 11.31
Advanced path failover errors are logged in the /var/adm/syslog/syslog.log file as part of the default SCSI I/O tracing function of HPE-UX. You can use standard file-viewing commands, including cat, vi, dmesg, and tail -f, to view syslog.log.
Enabling or disabling advanced path failover for HPE-UX 11.31
Advanced path failover is disabled by default. When advanced path failover is disabled, the driver operates as if the device is not capable of using the advanced path failover feature.
When advanced failover is enabled for the library or tape drive, the device resets itself and must be opened using the device special file before the driver will recognize it as an advanced path failover device and use the failover features of the driver. Opening the device is generally done by the host applications.
You can enable or disable advanced path failover using the library web-based interface. For instructions, refer to the following sections of the HPE StoreEver Tape Libraries Failover User Guide:
• Enabling control path failover, under Configuring failover for the HPE StoreEver ESL G3 Tape Libraries
• Enabling data path failover, under Configuring failover for HPE StoreEver MSL6480 Tape Libraries
When advanced path failover is disabled, the passive control paths to the library will go into an error state (NO_HW) in the ioscan(1M) command output. These stale entries do not affect the function of the library. To clear these errors so the device can be accessed using its DSF:
1. On the HPE-UX host, run rmsf -H on the lunpath hardware paths that are in the NO_HW state. For example:
rmsf -H 0/4/0/0/0/1.0x50014380023560d4.0x1000000000000
2. Run ioscan -kfNH <HBA path>. For example:
ioscan -kfNH 0/4/0/0/0
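The stale-path cleanup in step 1 can be generated rather than typed by hand. This sketch filters ioscan-style lunpath output for the NO_HW state and prints the matching rmsf commands; the sample output (paths and WWNs are illustrative) stands in for a live `ioscan -kfN` run, and the generated commands should be reviewed before executing them as root.

```shell
# Generate rmsf commands for lunpaths stuck in the NO_HW state (dry run).
# Captured sample ioscan-style output stands in for a live scan.
ioscan_output='lunpath 12 0/4/0/0/0/1.0x50014380023560d4.0x1000000000000 eslpt CLAIMED LUN_PATH
lunpath 13 0/4/0/0/0/2.0x50014380023560d7.0x1000000000000 eslpt NO_HW   LUN_PATH'

printf '%s\n' "$ioscan_output" | awk '$5 == "NO_HW" { print "rmsf -H " $3 }'
# prints: rmsf -H 0/4/0/0/0/2.0x50014380023560d7.0x1000000000000
```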
Hewlett Packard Enterprise recommends enabling or disabling advanced path failover only when the library is not opened by any applications. If advanced path failover is disabled while an application is accessing the library, all of the library's lunpaths will go offline and I/O requests to the library will fail.
For more detailed information related to installing the HPE StoreEver Tape advanced path failover drivers for HPE-UX 11.31 Servers, refer to the HPE StoreEver Tape Libraries Failover User Guide, specifically the section titled Installing and using HPE-UX advanced path failover drivers.
HPE-UX Server best practices
HPE-UX 11.31 can experience poor I/O performance on VxFS file systems due to memory blocking during high system memory usage. The HPE-UX 11.31 kernel subsystems and file I/O data cache can consume up to 90 percent of system memory during normal operation. When a heavy file I/O application, such as data protection and archiving software, starts, memory usage can reach close to 100 percent. Under such conditions, if VxFS attempts to allocate additional memory for inode caching, this can result in memory blocking and subsequent poor file I/O performance. In extreme conditions, this scenario can cause data protection and archiving software to time out during file system reads, which could result in backup job failures.
Poor I/O performance resolution
To avoid backup job failures due to memory blocking, modify the kernel tunable parameter vx_ninode. The vx_ninode parameter determines the number of inodes in the inode table to help VxFS in caching. By default, the size of the inode cache is decided (auto-tuned) at boot time by VxFS depending on the amount of physical memory in the machine. When modifying the value of vx_ninode, HPE recommends the following.
Table 4. Tuning vx_ninode
Physical memory or kernel available memory | VxFS inode cache (number of inodes)
1 GB    | 16384
2 GB    | 32768
3 GB    | 65536
> 3 GB  | 131072
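The Table 4 lookup can be expressed directly in shell. A sketch (the function name is my own) that maps physical memory in GB to the recommended vx_ninode value and prints the kctune command to apply it:

```shell
# Pick a vx_ninode value from physical memory in GB, following Table 4.
vx_ninode_for_mem() {
  mem_gb=$1
  if   [ "$mem_gb" -le 1 ]; then echo 16384
  elif [ "$mem_gb" -le 2 ]; then echo 32768
  elif [ "$mem_gb" -le 3 ]; then echo 65536
  else                           echo 131072
  fi
}

vx_ninode_for_mem 2                                          # prints 32768
echo "/usr/sbin/kctune vx_ninode=$(vx_ninode_for_mem 2)"     # command to apply it
```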
To determine the current value of vx_ninode, run the following command at the shell prompt:
/usr/sbin/kctune vx_ninode
To set vx_ninode to 32768, run the following command at the shell prompt:
/usr/sbin/kctune vx_ninode=32768
Note: The kernel tunable parameters filecache_min and filecache_max control the amount of physical memory that can be used for caching file data during system I/O operations. By default, these parameters are automatically determined by the system to better balance memory usage among file system I/O intensive processes and other types of processes. The values of these parameters can be lowered to allow a larger percentage of memory to be used for purposes other than file system I/O caching. Determining whether or not to modify these parameters depends on the nature of the applications running on the system.
HPE-UX 11.23: Disabling rewind-on-close devices with st_san_safe
Turning on the HPE-UX 11.23 kernel tunable parameter st_san_safe disables tape DSFs that are rewind-on-close. This prevents utilities like mt from rewinding a tape that is in use by another utility.
Some applications or utilities require rewind-on-close DSFs (for example, the frecover utility that comes with HPE-UX). In this case, disabling rewind-on-close devices renders the utility unusable. Most data protection and archiving software, such as HPE Data Protector, can be configured to use SCSI reserve/release, which protects them from rogue rewinds by other utilities. The requirements of your data protection and archiving environment should be considered when determining whether or not to enable st_san_safe.
To determine if rewind-on-close devices are currently disabled, enter:
/usr/sbin/kctune st_san_safe
If the value of st_san_safe is 1, then rewind-on-close devices are disabled. If the value is 0, then rewind-on-close devices are enabled. To disable rewind-on-close devices, enter:
/usr/sbin/kctune st_san_safe=1
Oracle Solaris Server
Solaris 10 Update 11 (SPARC), Solaris 10 Update 11 (x86/x64), Solaris 11.2 (SPARC), Solaris 11.2 (x64)
How to enable the iSCSI Software Initiator
1. For Solaris 10 Update 11 (SPARC) and Solaris 11.2 (SPARC), enable the iSCSI services using the command:
svcadm enable network/iscsi/initiator
2. Verify the iSCSI services are running:
svcs -a | grep 'iscsi/initiator'
online  10:10:28  svc:/network/iscsi/initiator:default
3. For earlier versions of Solaris, enable the iSCSI services using the command:
svcadm -v enable iscsi_initiator
svc:/network/iscsi_initiator:default enabled
4. Verify the iSCSI services are running:
svcs -a | grep iscsi_initiator
online  10:10:28  svc:/network/iscsi_initiator:default
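The verification in steps 2 and 4 can be scripted by checking the service state column of the svcs output. A sketch run against captured sample output (the timestamp is illustrative); the pattern matches both the iscsi/initiator and iscsi_initiator service names, so it works for either Solaris release.

```shell
# Check that the Solaris iSCSI initiator service is online.
# Captured sample svcs output stands in for a live `svcs -a` query.
svcs_output='online  10:10:28  svc:/network/iscsi/initiator:default'

state=$(printf '%s\n' "$svcs_output" | awk '/iscsi.initiator/ { print $1 }')
[ "$state" = "online" ] && echo "iSCSI initiator service is online"
# prints: iSCSI initiator service is online
```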
The command-line tool for configuring the Solaris iSCSI Software Initiator is iscsiadm. Detailed instructions for iscsiadm can be found in the iscsiadm man pages. If using iscsiadm to configure the Solaris iSCSI Software Initiator, run the following command to find the iSCSI initiator node for the Solaris host:
iscsiadm list initiator-node
To discover available iSCSI target devices on an HPE Storage System, type the following commands, where xxxx is the IP address of the HPE Storage System:
iscsiadm add discovery-address xxxx:3260
iscsiadm list discovery-address
Discovery Address: xxxx:3260
iscsiadm modify discovery -t enable
iscsiadm list discovery
List the configured iSCSI target devices using the following command:
iscsiadm list target
Oracle Solaris Server best practices
Troubleshooting with the cfgadm utility
• Getting the status of FC devices using cfgadm:
cfgadm -al
Example output for the above command shows a media changer at LUN 0 for the 100000e0022229fa9 WWN and tape and disk devices at LUN 0 for other WWNs. The devices are connected, have been configured, and are ready for use.
The cfgadm -al -o show_FCP_dev command can be used to show the devices for all LUNs of each Ap_Id
• Fixing a device with an "unusable" condition
If the condition field of a device in the cfgadm output is "unusable", then the device is in a state such that the server cannot use it. This may have been caused by a hardware issue. In this case, do the following to resolve the issue:
– Resolve the hardware issue so the device is available to the server
– After the hardware issue has been resolved, use the cfgadm utility to verify device status and to mend the status if necessary
– Use cfgadm to get device status: cfgadm -al
– For a device that is "unusable", use cfgadm to unconfigure the device and then re-configure the device. For example (this is an example only; your device WWN will be different):
cfgadm -c unconfigure c4::100000e0022286ec
cfgadm -f -c configure c4::100000e0022286ec
– Use cfgadm again to verify that the condition of the device is no longer "unusable": cfgadm -al
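The unconfigure/configure pair above can be generated from the cfgadm output itself. This sketch scans a captured sample of `cfgadm -al` output (the Ap_Ids and WWNs are illustrative, and the column layout is simplified) and prints the command pair for each device whose condition is "unusable"; review the printed commands before running them on the Solaris host.

```shell
# Emit unconfigure/configure command pairs for "unusable" FC devices (dry run).
# Captured, simplified sample of cfgadm -al output stands in for a live query.
cfgadm_output='c4::100000e0022286ec tape connected configured unusable
c5::100000e0022229fa9 med-changer connected configured ok'

printf '%s\n' "$cfgadm_output" | awk '$5 == "unusable" {
  print "cfgadm -c unconfigure " $1
  print "cfgadm -f -c configure " $1
}'
```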
IBM AIX Server
AIX 6.1 (TL9), AIX 7.1 (TL3)
Installing the HBA device driver
Install the IBM (5729, 5735, 5273, 5758, 5759, 5773, 5774) HBA and restart the server.
1. Ensure that the HBA is recognized. At the shell prompt, type:
lsdev -Cc adapter
There is a line in the output similar to the following:
fcs0 Available 1D-08 FC Adapter
If the adapter is not recognized, check that the correct HBA fileset (driver) is installed:
6228: lslpp -L | grep devices.pci.df1000f7
6239: lslpp -L | grep devices.pci.df1080f9
5716: lslpp -L | grep devices.pci.df1000fa
5759: lslpp -L | grep devices.pci.df1000fd
5773: lslpp -L | grep devices.pciex.df1000fe
5774: lslpp -L | grep devices.pciex.df1000fe
There are lines in the output for lslpp similar to the following for a 6239 HBA:
devices.pci.df1080f9.diag  5.1.0.1  C  F  PCI-X FC Adapter Device
devices.pci.df1080f9.rte   5.1.0.1  C  F  PCI-X FC Adapter Device
2. For information about the HBA, such as the WWN, execute the following command:
lscfg -vl fcs0
The output will look similar to the following
3. To see the version of microcode (firmware) being run by the HBA, use the following command:
lsmcode -c -d <device>
Microcode and other updates can be found at ibm.com/support/fixcentral
4. After the HBA has successfully logged into the SAN fabric and the necessary zoning is configured, configure the HBA and devices within the fabric. At the prompt, type:
cfgmgr -l <devicename> -v
Note: Running cfgmgr without a -l argument may generate a "devices.fcp.changer" error. This is a result of the cfgmgr device scan receiving a response from the auto-changer device, for which AIX does not have a specific driver. Under these conditions, the error message does not indicate a problem and is for information only.
5. Within the command, <devicename> is the name from the output of the lsdev command in step 1, such as fcs0.
6. To ensure all tape device files are available, at the prompt type:
lsdev -HCc tape
7. By default, AIX creates tape devices with a fixed block length. To change the devices to have variable block lengths, at the prompt type:
chdev -l <tapedevice> -a block_size=0
8. Configuration of the tape devices (where tape devices are rmt0, rmt1, and so on) is complete.
Note: HPE LTO tape drives use the IBM host tape driver. When properly configured, a device listing will show the tape device as follows.
For IBM native HBAs: Other FC SCSI Tape Drive
For non-IBM native HBAs: Other SCSI Tape Drive
9. To configure Fast I/O Failure for Fibre Channel devices after link events in the SAN, change the fast fail parameter as in the example below:
chdev -l fscsi# -a fc_err_recov=fast_fail
Within the command, the # in fscsi# is the same number as in the adapter name from the output of the lsdev command in step 1 (for example, fcs0 would be fscsi0).
IBM AIX Server best practices
Persistent binding
To prevent device shifting after a host reboot, you can enable the dynamic tracking feature in earlier versions of AIX. AIX 7 and above should have this parameter enabled by default.
To enable dynamic tracking of FC devices, set this attribute to dyntrk=yes, as shown in the example:
chdev -l fscsi# -a dyntrk=yes
Within the command, the # in fscsi# is the same number as in the adapter name from the output of the lsdev command in step 1 (for example, fcs0 would be fscsi0).
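The fscsi# naming convention in steps 9 and above can be scripted so that the fast_fail and dyntrk chdev commands are generated for every adapter reported by lsdev. This is a dry-run sketch: the adapter list is captured sample output (not a live query), and the commands are printed for review rather than executed.

```shell
# Print the chdev commands for fast_fail and dynamic tracking on each
# fcs adapter's matching fscsi protocol device (dry run).
# Captured sample lsdev output stands in for `lsdev -Cc adapter`.
lsdev_output='fcs0 Available 1D-08 FC Adapter
fcs1 Available 1D-09 FC Adapter'

printf '%s\n' "$lsdev_output" | awk '{
  n = substr($1, 4)   # adapter number, e.g. 0 from fcs0
  print "chdev -l fscsi" n " -a fc_err_recov=fast_fail -a dyntrk=yes"
}'
```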
Note: For an IBM Virtual I/O Server (VIOS) running AIX logical partitions (LPARs), when using N_Port ID Virtualization (NPIV) with AIX LPARs, it is strongly recommended to upgrade VIOS to version 2.2.3.4 or greater.
Virtual machine support
VM software is used for partitioning, consolidating, and managing computing resources, allowing multiple unmodified operating systems and their applications to run in VMs that share physical resources. Each VM represents a complete system with processors, memory, networking, storage, and BIOS. See Table 5 for tape and disk support for virtualization products.
Table 5. VM tape/VTL/NAS support
VM Product | StoreEver Direct Attached SCSI | StoreEver Direct Attached SAS | StoreEver FC & FCoE SAN, StoreOnce VTL | StoreOnce iSCSI VTL | StoreOnce Catalyst over Ethernet (CoE) | StoreOnce Catalyst over Fibre Channel (CoFC) | StoreOnce NAS | Support notes
Citrix XenServer Host | No | No | | | | | | No support statement for tape at this time
Citrix XenServer Guest VM | No | No | No | Yes | Yes | No | Yes | For iSCSI tape devices, the iSCSI Software Initiator must run in the VM operating system. D2D SAN shares must be accessed directly in the VM operating system, not attached through the hypervisor
HPEVM Host | Yes | No | Yes | Yes | Yes | No | Yes | Tape drive/media changer must not be attached to a guest VM while being used by the host
HPEVM Guest VM | Yes | No | Yes | Yes | Yes | No | Yes | Tape drive/media changer must only be attached to a single guest VM at a time
Hyper-V Host | Yes | Yes | Yes | Yes | Yes | No | Yes |
Hyper-V Guest VM | No | No | No | Yes | Yes | No | Yes | For iSCSI tape/media changer devices, the iSCSI Software Initiator must run in the VM operating system. D2D NAS shares must be accessed directly in the VM operating system, not attached through the hypervisor
VMware ESX Host | Yes | No | No(7) | No(7) | No | No | No | Direct attached SCSI tape drive is multi-target only (no multi-LUN) and must not be attached to a VM while being used by the host. HPE does not test or validate direct attached SCSI tape on VMware ESX/ESXi hosts and does not provide support for tape drives and tape libraries in such configurations(8)
VMware Guest VM | Yes | No | No | Yes | Yes | No(9) | Yes | Direct attached SCSI tape drive is multi-target only (no multi-LUN) and must only be attached to a single VM at a time. For iSCSI tape/media changer devices, the iSCSI Software Initiator must run in the VM operating system. D2D NAS shares must be accessed directly in the VM operating system, not attached through the hypervisor. HPE does not test or validate direct attached SCSI tape on VMware guest VMs and does not provide support for tape drives and tape libraries in such configurations
VMware vStorage API for Data Protection | Yes | Yes | Yes | Yes | Yes | No | Yes | FC SANs and shared tape devices are limited to a physical backup server
7. SAN tape devices (FC and iSCSI) are not supported directly by a VMware ESX host; use vStorage API for Data Protection with a backup server and VM software snapshots to allow FC and iSCSI backups
8. For ESX 4.1 Server tape support, see the ESX 4.1 Fibre Channel SAN Configuration Guide. For ESX 5.0, see the ESXi 5.0 vSphere Storage Guide. For ESX 5.1, see the ESXi 5.1 vSphere Storage Guide. For ESX 5.5, see the ESXi 5.5 vSphere Storage Guide
Note: Be sure to do the following:
• Refer to your data protection and archiving software documentation for supported VM backup methods
• Refer to the VM documentation for supported backup devices
VMware Server

Note: VMware does not support ESXi SAN-attached tape devices. VMware vStorage APIs for Data Protection (VADP) with an off-host backup server can be used to manage SAN devices.
• VADP offloads backup responsibility from ESXi hosts to one or more dedicated backup servers. This reduces the load on ESXi hosts. VADP provides full-image backup and restore capabilities for all VMs and file-based backups for Microsoft Windows and Linux VMs
• VMs can also be set up as backup servers with locally mapped NAS shares or iSCSI backup devices. Refer to your data protection and archiving software documentation for VM support
• VMs can also be set up for LAN backup, the same as a regular client. Refer to your data protection and archiving software documentation for details
• For recommendations on VMware VM backup and recovery to HPE StoreOnce Backup, go to hpe.com/storage/BURACompatibility, scroll down to Data Agile BURA Solution White Papers, then click on the VMware hyperlink across from White Papers - Databases and Virtual Machines to view the associated white papers
9. Yes when using HPE StoreOnce Recovery Manager Central (RMC) only
HPE Integrity Virtual Machines
Hewlett Packard Enterprise supports, certifies, and sells HPE Integrity Virtual Machines (HPEVM) virtualization software on HPE Integrity servers.
HPEVM is an application installed on an HPE-UX Server that allows multiple unmodified operating systems (HPE-UX, Windows, and Linux) and their applications to run in VMs that share physical resources.
The HPE Virtual Server Environment (VSE) for HPE Integrity provides an automated infrastructure that can adapt in seconds with mission-critical reliability. HPE VSE allows you to optimize server utilization in real time by creating virtual servers that can automatically grow and shrink based on business priorities and service levels.
Note: The HPE Integrity VM host and VMs do support FC SAN-connected tape, Virtual Library System (VLS) devices, and HPE StoreOnce Backup systems.
• Off-host backups using HPE storage array hardware mirroring or snapshots can be used to shorten backup windows and off-load resources required for backup
• VMs can also be set up for LAN backup, the same as a regular client or media host. Refer to your data protection and archiving software documentation for details
Microsoft Hyper-V

Note: Hewlett Packard Enterprise does not test or support Hyper-V VMs with SAN or direct attach tape drives. The Hyper-V host or a backup server can be used to manage such devices.
• The Volume Shadow Copy Service (VSS) Hyper-V writer can be used to quiesce Windows VMs and create a snapshot on the Hyper-V host volume. VMs that cannot be quiesced can be placed in the Saved state before snapshot creation. The snapshots are then used for image or file backup of the VMs. If a VM was placed in the Saved state, Hyper-V will return the VM to its original state. Review your data protection and archiving software documentation for details
• VMs can also be set up for LAN backup, the same as a regular client. Refer to your data protection and archiving software documentation for details
• VMs can also be set up as backup servers with locally mapped NAS shares or iSCSI backup devices. Refer to your data protection and archiving software documentation for VM support
© Copyright 2015 Hewlett Packard Enterprise Development LP. The information contained herein is subject to change without notice. The only warranties for Hewlett Packard Enterprise products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. Hewlett Packard Enterprise shall not be liable for technical or editorial errors or omissions contained herein.
Citrix is a registered trademark of Citrix Systems, Inc. and/or one or more of its subsidiaries, and may be registered in the United States Patent and Trademark Office and in other countries. Linux is the registered trademark of Linus Torvalds in the U.S. and other countries. McAfee is a trademark or registered trademark of McAfee, Inc. in the United States and other countries. Microsoft, Windows, and Windows Server are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries. Oracle is a registered trademark of Oracle and/or its affiliates. Red Hat is a registered trademark of Red Hat, Inc. in the United States and other countries. SAP HANA is the trademark or registered trademark of SAP SE in Germany and in several other countries. VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions.
4AA5-7983ENW December 2015
Installing backup software and patches
After all components on the SAN are logged in and configured, the system is ready for the installation of any supported backup software. Refer to the installation guide for your particular software package, or contact the vendor for detailed installation procedures and requirements. After installing the backup software, check with the software vendor for the latest updates and patches. If any updates or patches exist for your backup software, install them now.
Learn more at hpe.com/storage/StoreEver
Configuration and operating system details
Windows Server
Windows 2008, Windows 2008 R2, Windows 2012, and Windows 2012 R2
Installing the HBA device driver
All HPE ProLiant server software, firmware, and drivers for Windows servers can be updated using the latest HPE Service Pack for ProLiant (SPP) from the HPE support website.
1. From an Internet browser, go to hpe.com.
2. Click on Support.
3. Under Product Support, click HPE Servers, Storage and Networking.
4. In the Enter product name or number box, enter Service Pack for ProLiant.
5. Click Get drivers, software & firmware.
6 Select the Windows Server operating system version that is installed on the ProLiant system that will be updated
7. Expand Application (Entitlement Required) - System Management, then select the HPE Service Pack for ProLiant (American, International) hyperlink.
8 Below the details for the software you should notice several tabs Select the Release Notes tab then scroll to verify if there are any Upgrade Requirements or to view additional information
9 Select the Installation Instructions tab to verify how to install the SPP Be sure to copy or save the installation instructions
Note A hyperlink to the HPE Service Pack for ProLiant Release Notes is provided within the Release Notes tab The HPE Service Pack for ProLiant Release Notes provide detailed instructions regarding the SPP including a summary of changes compatibility details for migrating from an older version of the SPP supported operating systems requirements component prerequisites deployment instructions and known limitations
10 Click on the Obtain software hyperlink above the various tabs to download the Service Pack for ProLiant to your server
Note To download the HPE Service Pack for ProLiant you must have
1 An HPE Passport account (a sign-in link is provided)
2 Either a warranty HPE Care Pack or support agreement linked to your HPE Support Center profile
Click on the various links that are provided for more information on how warranties HPE Care Packs and support agreements enable access to select downloads or site functions
11. Booting your Windows Server to the SPP (offline mode) will allow you to upgrade firmware for any or all components that are flagged as requiring an upgrade, while booting to the operating system and then running the SPP (online mode) will allow you to install any or all Hewlett Packard Enterprise related drivers and software packages.
Note Please refer to the HPE Service Pack for ProLiant Release Notes which are referenced above if any issues are encountered when installing the SPP Specifically review the sections Deployment Instructions and Components Changes
12 A reboot might be required following the SPP installation
To manually install the latest HPE-supported Brocade, Emulex, or QLogic driver kit from the HPE support website:
1. From an Internet browser, go to hpe.com.
2. Click on Support.
3. Under Product Support, click HPE Servers, Storage and Networking.
4. In the Enter product name or number box, enter your HBA model (for example, AP770A, AJ763A, AJ764A, CN1000Q, SN1000Q, CN1100E, SN1100E, or NC553m) and click Go.
5. Click Get drivers, software & firmware.
6 Select the Windows Server operating system version that is installed on the ProLiant system in which the HBA is installed
7. Expand Driver - Storage Fibre Channel and click on the appropriate driver hyperlink (if more than one version of the driver is listed, verify the latest supported version listed in the latest HPE Data Agile BURA Compatibility Matrix).
8 Below the details for the driver you should notice several tabs Select the Release Notes tab then scroll to verify if there are any Upgrade Requirements Supported Devices and Features or to view additional information
9 Select the Installation Instructions tab to verify how to install the driver Be sure to copy or save the installation instructions
10 Click on the Download tab to copy the file to your server
11 A reboot might be required following the driver installation
Installing and configuring Microsoft iSCSI Initiator Microsoft iSCSI Initiator is installed natively on Windows Server 2008 Windows Server 2008 R2 Windows Server 2012 and Windows Server 2012 R2 On these operating systems no installation steps are required
To connect to an iSCSI target device by using Quick Connect
1 Click Start type iSCSI in Start Search and then under Programs click on iSCSI Initiator
2 On the User Account Control page click Continue
3 If this is the first time that you have launched Microsoft iSCSI Initiator you receive a prompt that says the Microsoft iSCSI service is not running You must start the service for Microsoft iSCSI Initiator to run correctly Click on Yes to start the service The Microsoft iSCSI Initiator Properties dialog box opens and the Targets tab is displayed
4 On the Targets tab type the name or the IP address of the target device in the Quick Connect text box and then click Quick Connect The Quick Connect dialog box is displayed
5 If multiple targets are available at the target portal that is specified a list is displayed Click the desired target and then click Connect
6 Click Done
To connect to an iSCSI target by using advanced settings
1 Click Start type iSCSI in Start Search and then under Programs click on iSCSI Initiator
2 On the User Account Control page click Continue
3 If this is the first time that you have launched Microsoft iSCSI Initiator you receive a prompt that says the Microsoft iSCSI service is not running You must start the service for Microsoft iSCSI Initiator to run correctly Click on Yes to start the service The Microsoft iSCSI Initiator Properties dialog box opens and the Targets tab is displayed
4 Click the Discovery tab
5 To add the target portal click Discover Portal and then in the Discover Portal dialog box type the IP address or name of the target portal to connect to If desired you can also type an alternate TCP port to be used for the connection
6 Click OK
Installing the HPE StoreEver Tape drivers
Both the HPE tape and HPE changer drivers for Windows must be installed before the advanced path failover drivers are installed. The tape and changer drivers bundle can be downloaded and then installed as follows:
1. Go to hpe.com/storage/tapecompatibility.
2. Under Tape tools, select HPE StoreEver Tape Drivers.
3 A webpage will open with RECOMMENDED HPE StoreEver Tape Drivers for Windows displayed
4 Below the details for the driver you should notice several tabs Select the Release Notes tab then scroll to verify if there are any Upgrade Requirements or to view additional information
5 Select the Installation Instructions tab to verify how to install the driver Be sure to copy or save the installation instructions
6 Click on the Download tab then save the file
7 Follow the installation instructions from step 5 to install the HPE tape and HPE changer drivers
8 After installation of the tape and changer drivers use Windows Device Manager to confirm that all of the configured paths are accessible to the operating system If the expected number of paths are not available check the host and SAN configuration After all of the expected paths are available to the host the advanced path failover drivers can be installed
Installing the HPE StoreEver Tape advanced path failover drivers
Windows (2008 R2, 2012, and 2012 R2)
1. Go to hpe.com/support/storage.
2. Select Tape Storage.
3. Click Enterprise Class Tape Libraries (for ESL G3) or Tape Libraries (for MSL6480).
4. Click HPE StoreEver ESL G3 Tape Libraries or HPE StoreEver MSL6480 Tape Library.
5. With the Download options tab selected, click Get drivers, software & firmware.
6. For the ESL G3, select your product. For MSL6480, skip to the next step.
7. Under Operating systems, select OS Independent.
8. Click Driver - Storage Tape.
9. Click Obtain software for the HPE StoreEver High Availability Failover Driver for your Windows operating system.
10. Click Select to continue. An HPE Passport account is required.
11. After logging in using your HPE Passport, complete the required fields, then read and accept the software license agreement. Click Next.
12. On the following page, select the Download tab for the HPE StoreEver High Availability Failover Driver for the version of Windows that is installed on your server.
13. If you saved the file, double-click the file to launch the installer for the Tape Upper Bus Storage Filter driver.
14. Restart when requested.
15. After the system restarts, the installer will continue installing the Tape Multi-Path Intermediate Class driver. The installation process creates the directory C:\Program Files\Hewlett-Packard\Failover.
For more detailed information related to installing the HPE StoreEver Tape advanced path failover drivers for Windows or the HPE StoreEver Tape drivers for Windows, refer to the HPE StoreEver Tape Libraries Failover User Guide, specifically the section titled Installing and using Windows advanced path failover drivers.
Windows Server best practices
Persistent binding
Target and LUN shifting can occur with Windows operating systems when disk or tape devices are connected or disconnected, a device is busy during discovery, or a device failure occurs, causing that device to no longer be seen on the SAN. Hewlett Packard Enterprise strongly recommends using zoning by HBA port to resolve target and LUN shifting. Refer to the earlier section FC/FCoE switch zoning recommendations for more on zoning by HBA port.
The method in which the Windows operating system enumerates devices is the cause of most target and LUN shifting. Windows enumerates devices as they are discovered during a scan sequence. They are enumerated with device handles such as TAPE0, TAPE1, and so on. The Windows device scan sequence goes in the order of bus, target, and LUN:
• Bus is the HBA PCI slot
• Target is representative of a WWN
• LUN is representative of a device behind the WWN
The order of discovery is:
• The lowest bus (bus 0)
• Target 0 on bus 0
• The LUNs on target 0
• Target 1, and so on, until all targets connected to that HBA are discovered
The process continues on to the next HBA and its targets and LUNs. A common cause for device shifting is a busy tape device. A busy tape device cannot respond in time for Windows to enumerate it. The device is essentially skipped in the enumeration sequence, thus shifting all other device numbers.
Note: Emulex OneCommand Manager Application Kit and QLogic QConvergeConsole Utility both have proprietary persistent binding options. Review the appropriate vendor documentation for details.
Data protection and archiving software can also communicate with a tape device by using the Windows device name. As noted, the device name may shift and cause a problem for the data protection and archiving software. Some data protection and archiving software monitors for this condition and will adjust accordingly. Other data protection and archiving software must wait for a server reboot and subsequently scan for devices. Alternatively, the data protection and archiving software must be manually reconfigured to match the current device list. If your data protection and archiving software requires persistent device mapping, use the software's device configuration wizard to ensure proper configuration.
Red Hat and SUSE Linux Server
RHEL 5 Update 11 (AS/ES/WS), RHEL 6 Update 6 (AS/ES/WS), RHEL 7 (AS/ES/WS), SLES 11 SP3 (x86/x64), SLES 12
Note: Hewlett Packard Enterprise recommends installing the kernel development option (source code) when installing any Linux server. Availability of source code ensures the ability to install additional device support software that will be compiled into the kernel.
Installing the HBA drivers
All HPE ProLiant server software, firmware, and drivers can be updated using the latest SPP from the HPE support website.
1. From an Internet browser, go to hpe.com.
2. Click Support.
3. Under Product Support, click HPE Servers, Storage and Networking.
4. In the Enter product name or number box, enter Service Pack for ProLiant.
5. Click Get drivers, software & firmware.
6. Select the Linux Server operating system version that is installed on the ProLiant system that will be updated.
7. Under Application (Entitlement Required) - System Management, select the HPE Service Pack for ProLiant (American, International) hyperlink.
8. Below the details for the software, you should notice several tabs. Select the Release Notes tab, then scroll to verify if there are any Upgrade Requirements or to view additional information.
9. Select the Installation Instructions tab to verify how to install the SPP. Be sure to copy or save the installation instructions.
Note: A hyperlink to the HPE Service Pack for ProLiant Release Notes is provided within the Release Notes tab. The HPE Service Pack for ProLiant Release Notes provide detailed instructions regarding the SPP, including a summary of changes, compatibility details for migrating from an older version of the SPP, supported operating systems, requirements, component prerequisites, deployment options, and known limitations.
10. Click the Obtain software hyperlink above the various tabs to download the Service Pack for ProLiant to your server.
Note: To download the HPE Service Pack for ProLiant, you must have:
1. An HPE Passport account (a sign-in link is provided)
2. Either a warranty, HPE Care Pack, or support agreement linked to your HPE Support Center profile
Click the various links that are provided for more information on how warranties, HPE Care Packs, and support agreements enable access to select downloads or site functions.
11. Booting your server to the SPP (offline mode) will allow you to upgrade firmware for any or all components that are flagged as requiring an upgrade, while booting to the OS and then running the SPP (online mode) will allow you to install any or all Hewlett Packard Enterprise related drivers and software packages.
Note: Please refer to the HPE Service Pack for ProLiant Release Notes, which are referenced above, if any issues are encountered when installing the SPP. Specifically, review the sections Deployment Instructions and Component Changes.
12. A reboot might be required following the SPP installation.
To manually install the latest HPE-supported Brocade, Emulex, or QLogic driver kit from the HPE support website:
1. From an Internet browser, go to hpe.com.
2. Click Support.
3. Under Product Support, click HPE Servers, Storage and Networking.
4. In the Enter product name or number box, enter your HBA model (for example, AP770A, AJ763A, AJ764A, CN1000Q, SN1000Q, CN1100E, SN1100E, or NC553m) and click Go.
5. Click Get drivers, software & firmware.
6. Select the Linux Server operating system version that is installed on the ProLiant system in which the HBA is installed.
7. Expand Driver - Storage Fibre Channel, then click the appropriate driver hyperlink (if more than one version of the driver is listed, verify the latest supported version listed in the latest HPE Data Agile BURA Compatibility Matrix).
8. Below the details for the driver, you should notice several tabs. Select the Release Notes tab, then scroll to verify if there are any Upgrade Requirements or to view additional information.
9. Select the Installation Instructions tab to verify how to install the driver. Be sure to copy or save the installation instructions.
10. Click the Download tab to copy the file to your server.
11. A reboot might be required following the driver installation.
Installing the Linux OPEN-iSCSI module
You can install and use the iscsi-initiator-utils package (Red Hat®) or open-iscsi module (SUSE). Download and install either of the packages using your distribution's package manager (yum or YaST, for example). Detailed instructions for iscsiadm can be found in the iscsiadm man documents.
Prior to discovering available iSCSI target devices on an HPE Storage System for a Linux server, the target requires the Linux server iSCSI initiator name. This name is found in the /etc/iscsi/initiatorname.iscsi file.
Once the iSCSI initiator name has been determined, discover available iSCSI target devices on an HPE Storage System by typing the following command, where x.x.x.x is the IP address of the HPE Storage System:
iscsiadm --mode discovery --type sendtargets --portal x.x.x.x
To connect to the HPE Storage System targets, type the following command for each discovered target, where target_name is the name returned by the iscsiadm discovery:
iscsiadm --mode node -T target_name --login --portal x.x.x.x
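The discovery and login sequence above can be collected into a small script. This is a dry-run sketch, not part of the guide: the portal address 192.0.2.10 is a placeholder, and the run wrapper only prints each command so the sequence can be reviewed before it is executed for real.

```shell
#!/bin/sh
# Dry-run sketch of the open-iscsi discovery/login sequence described above.
# PORTAL is a placeholder (documentation address); substitute your HPE
# Storage System IP. run() only echoes; change it to run() { "$@"; } to execute.
PORTAL="192.0.2.10"
run() { echo "+ $*"; }

# 1. Show the initiator name the target expects to see
run cat /etc/iscsi/initiatorname.iscsi

# 2. Discover the targets exposed by the portal
run iscsiadm --mode discovery --type sendtargets --portal "$PORTAL"

# 3. Log in to each discovered target on that portal
run iscsiadm --mode node --portal "$PORTAL" --login
```

Running the script prints the three commands in order; once the portal address is correct and the host has the initiator utilities installed, switching the wrapper to execute mode performs the actual discovery and login.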
Storage HBAs with Linux servers
Hewlett Packard Enterprise supports the Linux in-box driver (the driver supplied with the operating system distribution). However, not all hardware configurations support the in-box driver. To determine if your configuration supports the in-box driver, see the HBAs, CNAs, FlexFabric Adapters, and Server LOMs Support Matrix (Linux, Citrix, VMware, and Windows), which is available on the HPE SPOCK website. You must sign up for an HPE Passport to access SPOCK.
Whether you are using the in-box drivers or the out-of-box drivers, Hewlett Packard Enterprise recommends that you install the HPE Fibre Channel Enablement Kit because it provides additional libraries and configuration utilities to enable HPE Fibre Channel Storage Arrays to work properly with Linux.
Note: If you are using any HPE management applications, you need the HBA API libraries that come with the HPE-fc-enablement RPM.
Downloading and installing the Fibre Channel Enablement Kit for Linux
Note: There has been a change to the enablement kits released after 29 April 2014. They are now vendor-specific kits. Hewlett Packard Enterprise recommends that you uninstall any previous kits prior to installation of the latest version of the enablement kit.
1. From an Internet browser, go to hpe.com.
2. Click Support.
3. Under Product Support, click HPE Servers, Storage and Networking.
4. In the Enter product name or number box, enter your HBA model (for example, AP770A, AJ763A, AJ764A, CN1000Q, CN1100E, or NC553m) and click Go.
5. Click Get drivers, software & firmware.
6. Select the Linux Server operating system version of the ProLiant system in which the HBA is installed.
7. Select the Software - Storage Controllers - FC HBA hyperlink and click the HPE Fibre Channel Enablement Kit for Linux (American, International).
8. Below the details for the driver, you should notice several tabs. Select the Release Notes tab, then scroll to verify if there are any Upgrade Requirements or to view additional information.
9. Select the Installation Instructions tab to verify how to install the HPE Fibre Channel Enablement Kit for Linux. Be sure to copy or save the installation instructions.
10. Click the Download tab to copy the RPM file to your server.
11. Browse to the directory that you downloaded the RPM to.
12. Follow the Installation Instructions that you copied or saved in step 9.
13. A reboot is required after the installation for the updates to take effect and to maintain hardware stability.
14. Verify that the host has successfully discovered all the expected devices (tape drives, library robotic devices, and disk-based backup devices) using one of the following methods:
– Review the devices listed from running the command cat /proc/scsi/scsi
– Review the output from the sg_inq command (which requires that sg3_utils is installed) for any of the /dev/sg devices listed in the output of the sg_map command. See figure 3 for an example.
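As a quick check, the `cat /proc/scsi/scsi` output can be summarized by device type, so the counts can be compared against the expected number of tape drives and changers. This is a sketch; the vendor and model strings in the mock data below are illustrative, not taken from the guide.

```shell
#!/bin/sh
# Summarize /proc/scsi/scsi-style output (read from stdin) by SCSI device type.
count_types() {
    awk -F'Type:' '/Type:/ {
        split($2, a, "ANSI")                 # keep the text between Type: and ANSI
        gsub(/^[ \t]+|[ \t]+$/, "", a[1])    # trim surrounding whitespace
        print a[1]
    }' | sort | uniq -c
}

# Demo against mock data; on a live host use: count_types < /proc/scsi/scsi
count_types <<'EOF'
Host: scsi0 Channel: 00 Id: 00 Lun: 00
  Vendor: HP       Model: Ultrium 6-SCSI   Rev: 354W
  Type:   Sequential-Access                ANSI  SCSI revision: 06
Host: scsi0 Channel: 00 Id: 00 Lun: 01
  Vendor: HP       Model: MSL6480          Rev: 4.10
  Type:   Medium Changer                   ANSI  SCSI revision: 05
EOF
```

A Sequential-Access count below the number of configured tape drives, or a missing Medium Changer entry, points back to the host and SAN configuration checks described above.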
Figure 3 Verifying devices using sg_map and sg_inq commands
Installing the HPE StoreEver Tape advanced path failover drivers
Red Hat Enterprise Linux Server 6.2 (x86_64), 6.3 (x86_64), 6.4 (x86_64), 6.5 (x86_64), and 6.6 (x86_64)
The advanced path failover drivers for Linux replace the normal SCSI Tape and SCSI Generic drivers. The advanced path failover drivers for Linux pass all SCSI commands for devices that do not support advanced path failover through the same code path that is followed when the standard drivers are loaded, and route commands for devices that do support failover through the new PFO driver.
1. Go to hpe.com/support/storage.
2. Select Tape Storage.
3. Click Enterprise Class Tape Libraries (for ESL G3) or Tape Libraries (for MSL6480).
4. Click HPE StoreEver ESL G3 Tape Libraries or HPE StoreEver MSL6480 Tape Library.
5. In the Download options tab, click Get drivers, software & firmware.
6. For the ESL G3, select your product. For MSL6480, skip to the next step.
7. Under Operating systems, select OS Independent.
8. Expand Driver - Storage Tape, then select the appropriate driver hyperlink.
9. Click Obtain software for the HPE StoreEver High Availability Failover Driver for your operating system.
10. Click Select. An HPE Passport account (a sign-in link is provided) is required.
11. After logging in using your HPE Passport, complete the required fields, then read and accept the software license agreement. Click Next.
12. On the following page, select the Download tab for the HPE StoreEver High Availability Failover Driver for the version of Red Hat that is installed on your server.
13. To install the drivers, run the following command:
rpm -ivh <filename>.rpm
14. In some cases, the server will need to be rebooted to complete the installation. Check the instructions provided by the RPM file output and reboot the server if requested.
15. The driver revision number indicates the build date of the driver and can be viewed by running:
cat /proc/scsi/sg/version
16. You can view the status of a device that is controlled by the failover driver by reading a file in the /sys file system. For example, to see the path status for /dev/sg3:
cat /sys/class/pfo/pfo3/paths
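The per-device check above extends naturally to every device the failover driver has claimed. This sketch assumes only the `/sys/class/pfo/pfo*/paths` layout named in the text; the optional root prefix exists so the loop can be exercised against a mock tree (as in the demo), and the mock file content is illustrative rather than the driver's real format.

```shell
#!/bin/sh
# Print the path status for every device claimed by the pfo failover driver.
# An optional root prefix lets the loop run against a mock sysfs tree.
list_pfo_paths() {
    root="${1:-}"
    for f in "$root"/sys/class/pfo/pfo*/paths; do
        [ -e "$f" ] || continue    # skip if no pfo devices are present
        echo "== $f =="
        cat "$f"
    done
}

# Demo against a mock tree; on a live host simply call: list_pfo_paths
mock=$(mktemp -d)
mkdir -p "$mock/sys/class/pfo/pfo3"
printf 'path0 active\npath1 standby\n' > "$mock/sys/class/pfo/pfo3/paths"
list_pfo_paths "$mock"
rm -rf "$mock"
```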
Enabling advanced path failover on a device while the driver is running
If advanced path failover is enabled on a device that previously had the feature disabled, the device will reset itself, removing the old /dev file. When the device comes back up, it will be recognized as an advanced path failover device and will then operate normally as one. It may not have the same /dev file name as before the change.
Disabling advanced path failover on a device while the driver is running
Disabling advanced path failover while a device is running is not recommended because the paths will not be cleanly removed and then reassociated. If advanced path failover is disabled on any device, the Linux server will need to be rebooted. When possible:
1. Power down the Linux server cleanly.
2. Disable advanced path failover on the device.
3. Boot the Linux server.
For more detailed information related to installing the HPE StoreEver Tape advanced path failover drivers for Red Hat Linux Servers, refer to the HPE StoreEver Tape Libraries Failover User Guide, specifically the section titled Installing and using Linux advanced path failover drivers.
Additional SG device files
In most environments, the default number of SG device files is sufficient to support all of the required devices. In larger SAN environments, if the default number of SG device files is fewer than the combined total of disk and tape devices being allocated to the server, then additional device files need to be created. SG device files are preferable to the standard SCSI tape (st) device files due to SCSI timeout values that may not be sufficient in length to support some tape operations.
To create additional SG device files, run the following:
mknod /dev/sgX c 21 X
X signifies the number of the device file that does not already exist. For additional command options, see the mknod man page.
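When many nodes are missing, the mknod calls can be generated instead of typed one by one. This sketch prints the commands (character major 21, minor X, as above) rather than executing them, so they can be reviewed and then piped to sh as root; the optional root prefix is only there so the demo can use a mock /dev.

```shell
#!/bin/sh
# Emit the mknod commands needed to bring the SG device files up to a desired
# total (sg0 .. sg[total-1]), skipping any node that already exists.
make_sg_nodes() {
    total=$1 root="${2:-}" n=0
    while [ "$n" -lt "$total" ]; do
        [ -e "$root/dev/sg$n" ] || echo "mknod /dev/sg$n c 21 $n"
        n=$((n + 1))
    done
}

# Demo: the mock /dev already holds sg0, so commands are emitted for sg1..sg3.
# On a live host: make_sg_nodes 64 | sh   (run as root)
mock=$(mktemp -d)
mkdir -p "$mock/dev"
touch "$mock/dev/sg0"
make_sg_nodes 4 "$mock"
rm -rf "$mock"
```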
Red Hat and SUSE Linux Server best practices
Rewind commands being issued by rebooted Linux hosts
Device discovery that occurs as part of a normal Linux server boot operation can cause a SCSI rewind command to be issued to all attached tape drives. If the data protection and archiving software does not employ SCSI reserve/release and the rewind command is received while the tape drive is busy writing, the result is a corrupted tape header and an unusable piece of backup media.
This issue could manifest itself in several ways:
• A failed verify operation
• A failed restore operation
• The inability to mount a tape and read the tape header
If a backup verification is not completed, the normal backup process might not detect that an issue exists.
Tape devices not discovered and configured across server reboots
Tape drives can disappear from Linux servers after the host reboots when using older versions of the HBA drivers. Adding the line "hp_rescan -a" to /etc/rc.d/rc.local resolves the issue. The hp_rescan utility was previously included and installed with older versions of the HPE Linux FCoE/FC Driver Kit.
Note: The latest versions of the Fibre Channel Enablement for Linux kits no longer include the HPE fibreutils package, which contains the hp_rescan utility. The fibreutils package can be manually downloaded and installed by following steps 1-6 from the section Downloading and installing the Fibre Channel Enablement Kit for Linux. For step 5, select Software - Storage Controllers - FC HBA.
This issue, which affects Red Hat installations and, intermittently, some SUSE Linux installations, is understood to be an issue with the mid-layer SCSI driver and its interaction with SCSI-2 tape automation products. The permanent resolution to this issue is to upgrade to the latest FC driver kit.
Enable iSCSI target devices to remain persistent across system reboots
To enable the iSCSI target devices to remain persistent across system reboots, the open-iscsi service must be configured to run at system startup. This can be done by issuing the following commands.
For Red Hat 7 and SUSE 12:
systemctl enable iscsid.service
systemctl restart iscsid.service
For earlier versions of Red Hat:
chkconfig iscsi on
For earlier versions of SUSE:
chkconfig open-iscsi on
To verify that this configuration change has been accepted, run the following command.
For Red Hat 7 and SUSE 12:
systemctl -a | grep iscsi
iscsi-shutdown.service  error    inactive  dead     Logout off all iSCSI sessions on shutdown
iscsi.service           loaded   inactive  dead     Login and scanning of iSCSI devices
iscsid.service          loaded   active    running  Open-iSCSI
iscsiuio.service        loaded   active    running  iSCSI UserSpace IO driver
iscsid.socket           loaded   active    running  Open-iSCSI iscsid Socket
iscsiuio.socket         loaded   active    running  Open-iSCSI iscsiuio Socket
For earlier versions of Red Hat:
chkconfig --list iscsi
iscsi         0:off  1:off  2:off  3:on  4:off  5:on  6:off
For earlier versions of SUSE:
chkconfig --list open-iscsi
open-iscsi    0:off  1:off  2:off  3:on  4:off  5:on  6:off
LUNs shifting after reboot
The Linux 2.6 kernel and later enhanced the management of attached devices through the introduction of udev. The udev device manager provides users with a persistent naming process for all devices across reboots. For details on how to configure udev, refer to the appropriate Linux distribution documentation.
If your data protection and archiving software requires persistent device mapping, use the software's device configuration wizard to ensure proper configuration.
Recommended changes to queue depth and timeout values
Changes to queue depth and timeout values are recommended when operating HPE StoreEver ESL G3 Tape Libraries and HPE StoreEver MSL Tape Libraries with Linux-based hosts. Recommended changes are as follows:
• Queue depth
The queue depth when operating the MSL6480 or MCB Version 1 ESL G3 libraries should be set to one (1), as there is only one active robot to complete Move Medium commands. With dual-robot MCB Version 2 ESL G3 libraries, the queue depth should be set to two (2), as the library has two active robots that can complete Move Medium commands.
HPE LTO drives are capable of handling command queues of four or five commands, but if hosts continue to send commands past that amount, the drive (or the library being hosted by that drive) will start to return status messages saying that the queue is full and the host should wait 500 ms. If the host doesn't stop sending commands at this point, the delays in returning status for commands can be long enough that the drive appears hung. As such, care should be taken to ensure that the queue depth is the correct length to avoid this scenario, preferably by using the recommended queue depths provided above.
With Linux-based hosts, this command can let you see what the queue depth is set to for each generic SCSI device:
find /sys/class/scsi_generic/*/device/queue_depth -exec grep -H . {} \;
If required, the sg_inq command can help correlate SCSI generic mapping to specific devices:
sg_inq /dev/sg<n>
Note: <n> is the sg number provided in the output from the previous command.
• Timeouts
Most applications increase the timeout value for motion commands but will continue to rely on the default timeout value for Read Element Status commands. This can especially be of concern in larger partitioned libraries where multiple Read Element Status commands to the different partitions are outstanding and the default timeout value is encountered before a response is provided. Depending on whether udev rules are in effect or not, the default timeout value on Linux-based hosts tends to be either thirty (30) or sixty (60) seconds. Given all of the above, it is recommended that the default timeout value be changed to twenty minutes on all Linux-based hosts operating HPE LTO drives in HPE StoreEver ESL G3 Tape Libraries and HPE StoreEver MSL Tape Libraries, in order to allow for multiple commands to complete successfully without hitting the default timeout value.
With Linux-based hosts, this command can let you see what the default timeout value is currently set to:
find /sys/class/scsi_generic/*/device/timeout -exec grep -H . {} \;
If required, the sg_inq command can help correlate SCSI generic mapping to specific devices:
sg_inq /dev/sg<n>
Note: <n> is the sg number provided in the output from the previous command.
The detailed procedure for making the recommended changes to the command queuing and default timeout values for Linux-based hosts can be viewed in the following Engineering Advisory: HPE StoreEver ESL G3 Tape Libraries, MSL Tape Libraries, and 1/8 G2 Autoloaders - Recommended Changes to Queue Depth and Timeout Values With Linux-Based Hosts.
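The recommended values can be applied by writing to the same sysfs attributes that the find commands above read. This is a sketch, not the Engineering Advisory procedure: on a real host, restrict the device list to the sg nodes of tape drives and changers (identified with sg_inq) rather than every sg device, since disks share the scsi_generic class. The optional root prefix exists only so the function can be exercised on a mock tree, as in the demo.

```shell
#!/bin/sh
# Write a queue depth and timeout (in seconds) into the sysfs attributes
# discussed above for every sg device under the given root. Requires root
# privileges on a live system.
tune_sg() {
    root="${1:-}" depth="$2" timeout="$3"
    for dev in "$root"/sys/class/scsi_generic/sg*; do
        [ -d "$dev/device" ] || continue
        echo "$depth"   > "$dev/device/queue_depth"
        echo "$timeout" > "$dev/device/timeout"
    done
}

# Demo: queue depth 1 (single-robot library) and a 20-minute (1200 s) timeout,
# applied to a mock sysfs tree
mock=$(mktemp -d)
mkdir -p "$mock/sys/class/scsi_generic/sg3/device"
tune_sg "$mock" 1 1200
cat "$mock/sys/class/scsi_generic/sg3/device/timeout"
rm -rf "$mock"
```

Values written this way do not persist across reboots; a udev rule or boot script is needed to reapply them, as the advisory and distribution documentation describe.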
HPE-UX Server
Installing HBA drivers in the kernel
HPE-UX 11i v2 (11.23 IA-64)
1. The drivers schgr, sctl, and stape must all be installed in the kernel. To see if these drivers are installed, enter the following command:
/usr/sbin/kcmodule schgr sctl stape
The following example shows output from kcmodule where the stape driver is not installed:
Module   State    Cause
schgr    static   explicit
sctl     static   depend
stape    unused
If one or more of the above drivers is in the unused state, they must be installed in the kernel. If they are all installed (static state), proceed to the next section, Final host configurations.
2. Use kcmodule to install modules in the kernel. For example, to install the stape module, run the following command:
/usr/sbin/kcmodule stape=static
Enter yes to back up the current kernel configuration file and initiate the new kernel build.
3. Reboot the server to activate the new kernel:
cd /
/usr/bin/shutdown -r now
HPE-UX 11i v3 (11.31 IA-64)
1. The drivers schgr, sctl, stape, eschgr, esctl, and estape must all be installed in the kernel. To see if these drivers are installed, enter the following command:
/usr/sbin/kcmodule sctl esctl schgr eschgr stape estape
The following example shows output from kcmodule where the stape driver is not installed:
Module   State    Cause
sctl     static   best
esctl    static   depend
schgr    static   best
eschgr   static   best
stape    unused
estape   static   best
If one or more of the above drivers is in the unused state, they must be installed in the kernel. If they are all installed (static state), proceed to the next section, Final host configurations.
2. Use kcmodule to install modules in the kernel. For example, to install the stape module, use the following command:
/usr/sbin/kcmodule stape=static
Enter yes to back up the current kernel configuration file and initiate the new kernel build.
3. Reboot the server to activate the new kernel:
cd /
/usr/bin/shutdown -r now
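The kcmodule state check above can be scripted: filter the output for modules still in the unused state and print the corresponding install command. kcmodule exists only on HPE-UX, so this sketch feeds the function sample output through a here-doc; on a real host the kcmodule output would be piped in directly.

```shell
#!/bin/sh
# Turn kcmodule output (as in the 11i v3 example above) into the install
# commands for any module still "unused". On an HPE-UX host, run:
#   /usr/sbin/kcmodule sctl esctl schgr eschgr stape estape | suggest_installs
suggest_installs() {
    awk '$2 == "unused" { printf "/usr/sbin/kcmodule %s=static\n", $1 }'
}

# Demo using the sample output shown above
suggest_installs <<'EOF'
Module   State    Cause
sctl     static   best
esctl    static   depend
schgr    static   best
eschgr   static   best
stape    unused
estape   static   best
EOF
```

For the sample input, only stape is unused, so a single kcmodule install command is printed; an empty result means all drivers are already static and you can proceed to Final host configurations.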
Installing the HPE-UX iSCSI Software Initiator
The iSCSI Software Initiator is located at the HPE Software Depot.
1. Go to software.hp.com.
2. Enter iSCSI Software Initiator in the Search Software Depot box located on the upper right side of the website.
3. When the search results show iSCSI Software Initiator, click Select. An HPE Passport account (a sign-in link is provided) is required.
4. After logging in using your HPE Passport, complete the required fields, scroll down, then read and accept the software license agreement for the order. Click Next.
5. Under Documentation, click the Download tab for the Installation Instructions to download instructions for using the Software Distributor tool to install the iSCSI Software Initiator.
6. Under Software, click the Download tab for the iSCSI Software Initiator version that you would like to download.
7. After installing the iSCSI Software Initiator and rebooting, you can verify that the installation was successful by running the following command:
swlist iSCSI-00
If the HPE-UX iSCSI Software Initiator is installed correctly, the output will be:
HPE-UX 11.23:
# Initializing...
# Contacting target "localhost"...
# Target: localhost
  iSCSI-00            B.11.23.03e  HPE-UX iSCSI Software Initiator
  iSCSI-00.ISCSI-SWD  B.11.23.03e  HPE-UX iSCSI Software Initiator
HPE-UX 11.31:
# Initializing...
# Contacting target "localhost"...
# Target: localhost
  iSCSI-00            B.11.31.01   HPE-UX iSCSI Software Initiator
  iSCSI-00.ISCSI-SWD  B.11.31.01   HPE-UX iSCSI Software Initiator
Final host configurations
1. Run ioscan to verify that the host detects the tape devices:
ioscan
For HPE-UX 11.23 legacy device special files (DSFs) or persistent DSFs, run the following commands:
ioscan -fnkC tape
ioscan -fnkC autoch
2. For HPE-UX 11.31 persistent DSFs, run the following commands:
ioscan -fnNkC tape
ioscan -fnNkC autoch
Note: Some data protection and archiving software might not currently support HPE-UX 11.31 persistent DSFs for tape. Review your data protection and archiving software documentation for more information.
3. To verify that the host detects iSCSI devices, issue the ioscan command as follows for HPE-UX 11.23:
ioscan -fnC iscsi
Issue the ioscan command as follows for HPE-UX 11.31:
ioscan -fnNC iscsi
If the software is installed correctly, the generated output will look similar to this:
Class I HW Path Driver SW State HW Type Description
=====================================================================
iscsi 0 255/0 iscsi CLAIMED VIRTBUS iSCSI Virtual Node
4. If no device files have been installed, enter the following commands:
insf -C tape -e
insf -C autoch -e
The command-line tool for configuring the HPE-UX iSCSI Software Initiator is iscsiutil. Detailed instructions for iscsiutil can be found in the iscsiutil man documents. If using iscsiutil to configure the HPE-UX iSCSI Software Initiator, add the path for iscsiutil and other iSCSI executables to the root path:
PATH=$PATH:/opt/iscsi/bin
You should now be able to find the iSCSI initiator node for the HPE-UX host:
iscsiutil -l
To discover available iSCSI target devices on an HPE Storage System, type the following command, where x.x.x.x is the IP address of the HPE Storage System:
iscsiutil -a -I x.x.x.x
Installing the HPE StoreEver Tape advanced path failover drivers for HPE-UX 11.31
Advanced path failover for HPE-UX is implemented by updating HPE-UX drivers to support advanced path failover with the LTO-6 tape drives. The drivers function as both failover and non-failover drivers.
The updated drivers are:
• HPE-UX tape driver (estape) - used for data path failover
• HPE-UX media changer driver (eschgr) - used for control path failover
• HPE-UX SCSI stack driver (esctl) - used for data path and control path failover
To download and install the HPE StoreEver Tape advanced path failover drivers for HPE-UX 11.31:
1. Get the latest HPE-UX patches from h20566.www2.hp.com/portal/site/hpsc/patch/home.
Note: To access and download HPE-UX patches, you must have:
1. An HPE Passport account (a sign-in link is provided)
2. An active HPE support agreement linked to your HPE Support Center profile. The active Hewlett Packard Enterprise support agreement must:
– Cover the specific HPE Operating Systems linked to your HPE Support Center user profile
– Include software updates or previous version support privileges
Links are provided to view your current patch access privileges or to contact Hewlett Packard Enterprise.
2. To locate the patches, search for estape, eschgr, and esctl, or the patch number, and then check whether there is a superseding patch.
3. To install the advanced path failover drivers, use the standard HPE-UX kernel patch installation process to install the following patches on the HPE-UX Servers running HPE-UX 11.31:
– HPE-UX tape driver patch (estape): PHKL_43680 or superseding patch
– HPE-UX media changer driver patch (eschgr): PHKL_43681 or superseding patch
– HPE-UX SCSI stack (mass storage stack) driver patch (esctl): PHKL_43819 or superseding patch
4. The server will automatically reboot as part of the installation process.
You can use ioscan to view the tape and library (media changer) devices connected to the HPE-UX Server. The device special file (DSF) is listed as the last item in the description:
ioscan -knNfC tape
ioscan -knNfC autoch
Finding the lockdown path
The load-balance policy used to route data on multiple paths to a tape drive or library is called the "path-lockdown" policy. Use the scsimgr get_info command to see the current lockdown path for a library. For example:
scsimgr get_info -D /dev/rchgr/autoch35
STATUS INFORMATION FOR LUN /dev/rchgr/autoch38
...
LUN path used when policy is path_lockdown = 0/0/0/9/0/0/1.0x50014382c6e4f009.0x1000000000000
scsimgr get_attr -D /dev/rtape/tape28_BEST
SCSI ATTRIBUTES FOR LUN /dev/rtape/tape28_BEST
name = lpt_lockdown
current = 0/0/0/9/0/0/0.0x100000e00222a6c1.0x2000000000000
default =
saved =
For additional information, see the HPE-UX man pages: scsimgr(1M), ioscan(1M), mknod(2), mksf(1M), rmsf(1M).
Troubleshooting advanced path failover for HPE-UX 11.31
Advanced path failover errors are logged in the /var/adm/syslog/syslog.log file as part of the default SCSI I/O tracing function of HPE-UX. You can use standard file viewing commands, including cat, vi, dmesg, and tail -f, to view the syslog.log.
Enabling or disabling advanced path failover for HPE-UX 1131 Advanced path failover is disabled by default When advanced path failover is disabled the driver operates as if the device is not capable of using the advanced path failover feature
When advanced failover is enabled for the library or tape drive the device resets itself and must be opened using the device special file before the driver will recognize it as an advanced path failover device and use the failover features of the driver Opening the device is generally done by the host applications
You can enable or disable advanced path failover using the library web-based interface For instructions refer to the following sections of the HPE StoreEver Tape Libraries Failover User Guide
• Enabling control path failover, under Configuring failover for the HPE StoreEver ESL G3 Tape Libraries
• Enabling data path failover, under Configuring failover for HPE StoreEver MSL6480 Tape Libraries
When advanced path failover is disabled, the passive control paths to the library will go into an error state (NO_HW) in the ioscan(1M) command output. These stale entries do not affect the function of the library. To clear these errors so the device can be accessed using its DSF:
1. On the HPE-UX host, run rmsf -H on the lunpath hardware paths that are in the NO_HW state. For example: rmsf -H 0/4/0/0/0/1.0x50014380023560d4.0x1000000000000
2. Run ioscan -kfNH <HBA path>. For example: ioscan -kfNH 0/4/0/0/0
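For hosts with many stale lunpaths, the cleanup above can be scripted. This is a sketch only: the helper name is invented, and it assumes the `ioscan -kfN` column layout (class, instance, H/W path, driver, S/W state); verify the columns on your own system before using it.

```shell
# List lunpath hardware paths whose S/W state is NO_HW, so they can be
# passed to `rmsf -H`.  Assumed column layout from `ioscan -kfN`:
#   class  instance  H/W path  driver  S/W state  ...
stale_lunpaths() {
  awk '$1 == "lunpath" && $5 == "NO_HW" { print $3 }'
}

# On a live HPE-UX host (illustrative; HBA path will differ):
#   ioscan -kfN | stale_lunpaths | while read -r hw; do rmsf -H "$hw"; done
#   ioscan -kfNH 0/4/0/0/0
```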
Hewlett Packard Enterprise recommends only enabling or disabling advanced path failover when the library is not opened by any applications. If advanced path failover is disabled while an application is accessing the library, all of the library's lunpaths will go offline and I/O requests to the library will fail.
For more detailed information related to installing the HPE StoreEver Tape advanced path failover drivers for HPE-UX 11.31 Servers, refer to the HPE StoreEver Tape Libraries Failover User Guide, specifically the section titled Installing and using HPE-UX advanced path failover drivers.
HPE-UX Server best practices
HPE-UX 11.31 can experience poor I/O performance on VxFS file systems due to memory blocking during high system memory usage. The HPE-UX 11.31 kernel subsystems and file I/O data cache can consume up to 90 percent of system memory during normal operation. When a heavy file I/O application, such as data protection and archiving software, starts, memory usage can reach close to 100 percent. Under such conditions, if VxFS attempts to allocate additional memory for inode caching, this can result in memory blocking and subsequent poor file I/O performance. In extreme conditions, this scenario can cause data protection and archiving software to time out during file system reads, which could result in backup job failures.
Poor I/O performance resolution
To avoid backup job failures due to memory blocking, modify the kernel tunable parameter vx_ninode. The vx_ninode parameter determines the number of inodes in the inode table, to help VxFS in caching. By default, the size of the inode cache is decided (auto-tuned) at boot time by VxFS, depending on the amount of physical memory in the machine. When modifying the value of vx_ninode, HPE recommends the following:
Table 4. Tuning vx_ninode
Physical memory or kernel available memory: VxFS inode cache (number of inodes)
1 GB: 16384
2 GB: 32768
3 GB: 65536
> 3 GB: 131072
Technical white paper Page 29
To determine the current value of vx_ninode, run the following at the shell prompt: /usr/sbin/kctune vx_ninode
To set vx_ninode to 32768, run the following command at the shell prompt: /usr/sbin/kctune vx_ninode=32768
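The recommendations in Table 4 can be expressed as a small helper that maps physical memory (in GB) to the suggested vx_ninode value. The function name is illustrative, not part of HPE-UX:

```shell
# Suggested vx_ninode value for a given amount of physical memory in GB,
# following the recommendations in Table 4.
suggest_vx_ninode() {
  mem_gb=$1
  if [ "$mem_gb" -le 1 ]; then echo 16384
  elif [ "$mem_gb" -le 2 ]; then echo 32768
  elif [ "$mem_gb" -le 3 ]; then echo 65536
  else echo 131072
  fi
}

# The chosen value would then be applied with kctune, for example:
#   /usr/sbin/kctune vx_ninode=$(suggest_vx_ninode 2)
```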
Note: The kernel tunable parameters filecache_min and filecache_max control the amount of physical memory that can be used for caching file data during system I/O operations. By default, these parameters are automatically determined by the system to better balance memory usage among file system I/O intensive processes and other types of processes. The values of these parameters can be lowered to allow a larger percentage of memory to be used for purposes other than file system I/O caching. Determining whether or not to modify these parameters depends on the nature of the applications running on the system.
HPE-UX 11.23: Disabling rewind-on-close devices with st_san_safe
Turning on the HPE-UX 11.23 kernel tunable parameter st_san_safe disables tape DSFs that are rewind-on-close. This will prevent utilities like mt from rewinding a tape that is in use by another utility.
Some applications or utilities require rewind-on-close DSFs (for example, the frecover utility that comes with HPE-UX). In this case, disabling rewind-on-close devices renders the utility unusable. Most data protection and archiving software, such as HPE Data Protector, can be configured to use SCSI reserve/release, which protects them from rogue rewinds by other utilities. The requirements of your data protection and archiving environment should be considered when determining whether or not to enable st_san_safe.
To determine if rewind-on-close devices are currently disabled, enter: /usr/sbin/kctune st_san_safe
If the value of st_san_safe is 1, rewind-on-close devices are disabled. If the value is 0, rewind-on-close devices are enabled. To disable rewind-on-close devices, enter: /usr/sbin/kctune st_san_safe=1
Oracle Solaris Server
Solaris 10 Update 11 (SPARC), Solaris 10 Update 11 (x86/x64), Solaris 11.2 (SPARC), Solaris 11.2 (x64)
How to enable the iSCSI Software Initiator
1. For Solaris 10 Update 11 (SPARC) and Solaris 11.2 (SPARC), enable the iSCSI services using the command:
svcadm enable network/iscsi/initiator
2. Verify the iSCSI services are running:
svcs -a | grep 'iscsi/initiator'
online 10:10:28 svc:/network/iscsi/initiator:default
3. For earlier versions of Solaris, enable the iSCSI services using the command:
svcadm -v enable iscsi_initiator
svc:/network/iscsi_initiator:default enabled
4. Verify the iSCSI services are running:
svcs -a | grep iscsi_initiator
online 10:10:28 svc:/network/iscsi_initiator:default
Technical white paper Page 30
The command line tool for configuring the Solaris iSCSI Software Initiator is iscsiadm. Detailed instructions for iscsiadm can be found in the iscsiadm man pages. If using iscsiadm to configure the Solaris iSCSI Software Initiator, run the following command to find the iSCSI initiator node for the Solaris host: iscsiadm list initiator-node
To discover available iSCSI target devices on an HPE Storage System, type the following commands, where xxxx is the IP address of the HPE Storage System:
iscsiadm add discovery-address xxxx:3260
iscsiadm list discovery-address
Discovery Address: xxxx:3260
iscsiadm modify discovery -t enable
iscsiadm list discovery
List the configured iSCSI target devices using the following command:
iscsiadm list target
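When several targets are configured, the target names can be extracted from the listing for use in scripts. This is a sketch only: the helper name is invented, and it assumes `iscsiadm list target` prints lines beginning with "Target: <iqn>", which should be verified against your Solaris release.

```shell
# Pull just the target IQNs out of `iscsiadm list target` output.
# Assumed output format: lines beginning with "Target: <iqn>".
list_target_iqns() {
  awk '$1 == "Target:" { print $2 }'
}

# On a live Solaris host:
#   iscsiadm list target | list_target_iqns
```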
Oracle Solaris Server best practices
Troubleshooting with the cfgadm utility
• Getting the status of FC devices using cfgadm:
cfgadm -al
Example output for the above command:
This output shows a media changer at LUN 0 for the 100000e0022229fa9 WWN, and tape and disk devices at LUN 0 for other WWNs. The devices are connected, have been configured, and are ready for use.
The cfgadm -al -o show_FCP_dev command can be used to show the devices for all LUNs of each Ap_Id.
• Fixing a device with an "unusable" condition
If the condition field of a device in the cfgadm output is "unusable", the device is in a state in which the server cannot use it. This may have been caused by a hardware issue. In this case, do the following to resolve the issue:
– Resolve the hardware issue so the device is available to the server.
– After the hardware issue has been resolved, use the cfgadm utility to verify device status and to mend the status if necessary.
• Use cfgadm to get device status: cfgadm -al
– For a device that is "unusable", use cfgadm to unconfigure and then re-configure the device. For example (this is an example only; your device WWN will be different):
cfgadm -c unconfigure c4::100000e0022286ec
cfgadm -f -c configure c4::100000e0022286ec
– Use cfgadm again to verify that the condition of the device is no longer "unusable": cfgadm -al
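The unconfigure/re-configure cycle above can be applied to every unusable attachment point at once. This is a sketch only: the helper name is invented, and it assumes the default `cfgadm -al` column layout (Ap_Id, Type, Receptacle, Occupant, Condition); confirm the columns on your system first.

```shell
# Find attachment points whose condition is "unusable" in `cfgadm -al`
# output.  Assumed columns: Ap_Id Type Receptacle Occupant Condition.
unusable_apids() {
  awk '$5 == "unusable" { print $1 }'
}

# On a live Solaris host (illustrative; your Ap_Ids will differ):
#   cfgadm -al | unusable_apids | while read -r ap; do
#     cfgadm -c unconfigure "$ap" && cfgadm -f -c configure "$ap"
#   done
```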
IBM AIX Server
AIX 6.1 (TL9), AIX 7.1 (TL3)
Installing the HBA device driver
Install the IBM (5729, 5735, 5273, 5758, 5759, 5773, 5774) HBA and restart the server.
1. Ensure that the HBA is recognized. At the shell prompt, type: lsdev -Cc adapter
There is a line in the output similar to the following: fcs0 Available 1D-08 FC Adapter
If the adapter is not recognized, check that the correct HBA fileset (driver) is installed:
6228: lslpp -L | grep devices.pci.df1000f7
6239: lslpp -L | grep devices.pci.df1080f9
5716: lslpp -L | grep devices.pci.df1000fa
5759: lslpp -L | grep devices.pci.df1000fd
5773: lslpp -L | grep devices.pciex.df1000fe
5774: lslpp -L | grep devices.pciex.df1000fe
There are lines in the lslpp output similar to the following for a 6239 HBA:
devices.pci.df1080f9.diag 5.1.0.1 C F PCI-X FC Adapter Device
devices.pci.df1080f9.rte 5.1.0.1 C F PCI-X FC Adapter Device
2. For information about the HBA, such as the WWN, execute the following command: lscfg -vl fcs0
The output will look similar to the following:
3. To see the version of microcode (firmware) being run by the HBA, use the following command: lsmcode -c -d <device>
Microcode and other updates can be found at ibm.com/support/fixcentral.
4. After the HBA has successfully logged into the SAN fabric and the necessary zoning is configured, configure the HBA and devices within the fabric. At the prompt, type: cfgmgr -l <devicename> -v
Note: Running cfgmgr without a -l argument may generate a "devices.fcp.changer" error. This is a result of the cfgmgr device scan receiving a response from the auto-changer device, for which AIX does not have a specific driver. Under these conditions, the error message does not indicate a problem and is for information only.
5. Within the command, <devicename> is the name from the output of the lsdev command in step 1, such as fcs0.
6. To ensure all tape device files are available, at the prompt type: lsdev -HCc tape
7. By default, AIX creates tape devices with a fixed block length. To change the devices to have variable block lengths, at the prompt type: chdev -l <tapedevice> -a block_size=0
8. Configuration of the tape devices (where tape devices are rmt0, rmt1, and so on) is complete.
Note: HPE LTO tape drives use the IBM host tape driver. When properly configured, a device listing will show the tape device as follows:
For IBM native HBAs: Other FC SCSI Tape Drive
For non-IBM native HBAs: Other SCSI Tape Drive
9. To configure Fast I/O Failure for Fibre Channel devices after link events in the SAN, change the fast fail parameter as in the example below: chdev -l fscsi# -a fc_err_recov=fast_fail
Within the command, the # in fscsi# is the same number as in the name from the output of the lsdev command in step 1 (e.g., fcs0 would be fscsi0).
IBM AIX Server best practices
Persistent binding
To prevent device shifting after a host reboot, you can enable the dynamic tracking feature in earlier versions of AIX. AIX 7 and above should have this parameter enabled by default.
To enable dynamic tracking of FC devices, set this attribute to dyntrk=yes, as shown in the example: chdev -l fscsi# -a dyntrk=yes
Within the command, the # in fscsi# is the same number as in the name from the output of the lsdev command in step 1 (e.g., fcs0 would be fscsi0).
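The per-adapter fast fail and dynamic tracking settings above can be applied in a loop. The helper below, whose name is invented for this sketch, derives the protocol device name (fscsi#) from the adapter name (fcs#); the chdev loop in the comment is illustrative and would run on the AIX host itself.

```shell
# Map an FC adapter name (fcsN) to its protocol device name (fscsiN).
fcs_to_fscsi() {
  printf '%s\n' "$1" | sed 's/^fcs/fscsi/'
}

# Illustrative batch configuration on an AIX host:
#   for a in $(lsdev -Cc adapter | awk '/^fcs/ { print $1 }'); do
#     chdev -l "$(fcs_to_fscsi "$a")" -a fc_err_recov=fast_fail -a dyntrk=yes
#   done
```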
Note: For an IBM Virtual I/O Server (VIOS) running AIX logical partitions (LPARs), when using N-Port ID Virtualization (NPIV) with AIX LPARs, it is strongly recommended to upgrade VIOS to version 2.2.3.4 or greater.
Virtual machine support
VM software is used for partitioning, consolidating, and managing computing resources, allowing multiple unmodified operating systems and their applications to run in VMs that share physical resources. Each VM represents a complete system with processors, memory, networking, storage, and BIOS. See Table 5 for tape and disk support for virtualization products.
Table 5. VM tape/VTL/NAS support
Columns: VM Product | StoreEver Direct Attached SCSI | StoreEver Direct Attached SAS | StoreEver FC & FCoE SAN | StoreOnce VTL | StoreOnce iSCSI VTL | StoreOnce Catalyst over Ethernet (CoE) | StoreOnce Catalyst over Fibre Channel (CoFC) | StoreOnce NAS | Support Notes
Citrix XenServer Host No No No support statement for tape at this time
Citrix XenServer Guest VM
No Yes Yes No Yes For iSCSI tape devices, the iSCSI Software Initiator must run in the VM operating system.
D2D SAN shares must be accessed directly in the VM operating system, not attached through the hypervisor.
HPEVM Host Yes No Yes Yes Yes No Yes Tape drive/media changer must not be attached to a guest VM while being used by the host.
HPEVM Guest VM Yes No Yes Yes Yes No Yes Tape drive/media changer must only be attached to a single guest VM at a time.
Hyper-V Host Yes Yes Yes Yes Yes No Yes
Hyper-V Guest VM No No No Yes Yes No Yes For iSCSI tape/media changer devices, the iSCSI Software Initiator must run in the VM operating system.
D2D NAS shares must be accessed directly in the VM operating system, not attached through the hypervisor.
VMware ESX Host Yes No No(7) No(7) No No No Direct attached SCSI tape drive is multi-target only (no multi-LUN) and must not be attached to a VM while being used by the host. HPE does not test or validate direct attached SCSI tape on VMware ESX/ESXi hosts and does not provide support for tape drives and tape libraries in such configurations.(8)
7. SAN tape devices (FC and iSCSI) are not supported directly by a VMware ESX host. With vStorage API for Data Protection, use a backup server and VM software snapshots to allow FC and iSCSI backups.
8. For ESX 4.1 Server tape support, see the ESX 4.1 Fibre Channel SAN Configuration Guide. For ESX 5.0 Server tape support, see the ESXi 5.0 vSphere Storage Guide. For ESX 5.1 Server tape support, see the ESXi 5.1 vSphere Storage Guide. For ESX 5.5 Server tape support, see the ESXi 5.5 vSphere Storage Guide.
Table 5. VM tape/VTL/NAS support (continued)
VMware Guest VM Yes No No Yes Yes No(9) Yes Direct attached SCSI tape drive is multi-target only (no multi-LUN) and must only be attached to a single VM at a time.
For iSCSI tape/media changer devices, the iSCSI Software Initiator must run in the VM operating system.
D2D NAS shares must be accessed directly in the VM operating system, not attached through the hypervisor.
HPE does not test or validate direct attached SCSI tape on VMware guest VMs and does not provide support for tape drives and tape libraries in such configurations.
VMware vStorage API for Data Protection
Yes Yes Yes Yes Yes No Yes FC SANs and shared tape devices are limited to a physical backup server.
Note: Be sure to do the following:
• Refer to your data protection and archiving software documentation for supported VM backup methods.
• Refer to the VM documentation for supported backup devices.
VMware Server
Note: VMware does not support ESXi SAN attached tape devices. VMware vStorage APIs for Data Protection (VADP) with an off-host backup server can be used to manage SAN devices.
• VADP offloads backup responsibility from ESXi hosts to a dedicated backup server or servers. This reduces the load on ESXi hosts. VADP provides full-image backup and restore capabilities for all VMs, and file-based backups for Microsoft Windows and Linux VMs.
• VMs can also be set up as backup servers with locally mapped NAS shares or iSCSI backup devices. Refer to your data protection and archiving software documentation for VM support.
• VMs can also be set up for LAN backup, the same as a regular client. Refer to your data protection and archiving software documentation for details.
• For recommendations on VMware VM backup and recovery to HPE StoreOnce Backup, go to hpe.com/storage/BURACompatibility, scroll down to Data Agile BURA Solution White Papers, then click on the VMware hyperlink across from White Papers - Databases and Virtual Machines to view the associated white papers.
9. Yes when using HPE StoreOnce Recovery Manager Central (RMC) only.
HPE Integrity Virtual Machines
Hewlett Packard Enterprise supports, certifies, and sells HPE Integrity Virtual Machines (HPEVM) virtualization software on HPE Integrity servers.
HPEVM is an application installed on an HPE-UX Server that allows multiple unmodified operating systems (HPE-UX, Windows, and Linux) and their applications to run in VMs that share physical resources.
The HPE Virtual Server Environment (VSE) for HPE Integrity provides an automated infrastructure that can adapt in seconds with mission-critical reliability. HPE VSE allows you to optimize server utilization in real time by creating virtual servers that can automatically grow and shrink based on business priorities and service levels.
Note: The HPE Integrity VM host and VMs do support FC SAN connected tape, Virtual Library Systems (VLS) devices, and HPE StoreOnce Backup systems.
• Off-host backups using HPE storage array hardware mirroring or snapshots can be used to shorten backup windows and off-load resources required for backup.
• VMs can also be set up for LAN backup, the same as a regular client or media host. Refer to your data protection and archiving software documentation for details.
Microsoft Hyper-V
Note: Hewlett Packard Enterprise does not test or support Hyper-V VMs with SAN or direct attach tape drives. The Hyper-V host or a backup server can be used to manage such devices.
• The Volume Shadow Copy Service (VSS) Hyper-V writer can be used to quiesce Windows VMs and create a snapshot on the Hyper-V host volume. VMs that cannot be quiesced can be placed in the Saved state before snapshot creation. The snapshots are then used for image or file backup of the VMs. If a VM was placed in the Saved state, Hyper-V will return the VM to its original state. Review your data protection and archiving software documentation for details.
• VMs can also be set up for LAN backup, the same as a regular client. Refer to your data protection and archiving software documentation for details.
• VMs can also be set up as backup servers with locally mapped NAS shares or iSCSI backup devices. Refer to your data protection and archiving software documentation for VM support.
© Copyright 2015 Hewlett Packard Enterprise Development LP. The information contained herein is subject to change without notice. The only warranties for Hewlett Packard Enterprise products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. Hewlett Packard Enterprise shall not be liable for technical or editorial errors or omissions contained herein.
Citrix is a registered trademark of Citrix Systems, Inc. and/or one or more of its subsidiaries, and may be registered in the United States Patent and Trademark Office and in other countries. Linux is the registered trademark of Linus Torvalds in the U.S. and other countries. McAfee is a trademark or registered trademark of McAfee Inc. in the United States and other countries. Microsoft, Windows, and Windows Server are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries. Oracle is a registered trademark of Oracle and/or its affiliates. Red Hat is a registered trademark of Red Hat, Inc. in the United States and other countries. SAP HANA is the trademark or registered trademark of SAP SE in Germany and in several other countries. VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions.
4AA5-7983ENW December 2015
Installing backup software and patches
After all components on the SAN are logged in and configured, the system is ready for the installation of any supported backup software. Refer to the installation guide for your particular software package, or contact the vendor for detailed installation procedures and requirements. After installing the backup software, check with the software vendor for the latest updates and patches. If any updates or patches exist for your backup software, install them now.
Learn more at hpe.com/storage/StoreEver
Note: Please refer to the HPE Service Pack for ProLiant Release Notes, which are referenced above, if any issues are encountered when installing the SPP. Specifically, review the sections Deployment Instructions and Components Changes.
12. A reboot might be required following the SPP installation.
To manually install the latest HPE-supported Brocade, Emulex, or QLogic driver kit from the HPE support website:
1. From an Internet browser, go to hpe.com.
2. Click on Support.
3. Under Product Support, click HPE Servers, Storage and Networking.
4. In the Enter product name or number box, enter your HBA model (for example, AP770A, AJ763A, AJ764A, CN1000Q, SN1000Q, CN1100E, SN1100E, or NC553m) and click Go.
5. Click Get drivers, software & firmware.
6. Select the Windows Server operating system version that is installed on the ProLiant system in which the HBA is installed.
7. Expand Driver - Storage Fibre Channel and click on the appropriate driver hyperlink (if more than one version of the driver is listed, verify the latest supported version listed in the latest HPE Data Agile BURA Compatibility Matrix).
8. Below the details for the driver you should notice several tabs. Select the Release Notes tab, then scroll to verify if there are any Upgrade Requirements or Supported Devices and Features, or to view additional information.
9. Select the Installation Instructions tab to verify how to install the driver. Be sure to copy or save the installation instructions.
10. Click on the Download tab to copy the file to your server.
11. A reboot might be required following the driver installation.
Installing and configuring Microsoft iSCSI Initiator
Microsoft iSCSI Initiator is installed natively on Windows Server 2008, Windows Server 2008 R2, Windows Server 2012, and Windows Server 2012 R2. On these operating systems, no installation steps are required.
To connect to an iSCSI target device by using Quick Connect:
1. Click Start, type iSCSI in Start Search, and then, under Programs, click on iSCSI Initiator.
2. On the User Account Control page, click Continue.
3. If this is the first time that you have launched Microsoft iSCSI Initiator, you receive a prompt that says the Microsoft iSCSI service is not running. You must start the service for Microsoft iSCSI Initiator to run correctly. Click Yes to start the service. The Microsoft iSCSI Initiator Properties dialog box opens, and the Targets tab is displayed.
4. On the Targets tab, type the name or the IP address of the target device in the Quick Connect text box, and then click Quick Connect. The Quick Connect dialog box is displayed.
5. If multiple targets are available at the target portal that is specified, a list is displayed. Click the desired target, and then click Connect.
6. Click Done.
To connect to an iSCSI target by using advanced settings:
1. Click Start, type iSCSI in Start Search, and then, under Programs, click on iSCSI Initiator.
2. On the User Account Control page, click Continue.
3. If this is the first time that you have launched Microsoft iSCSI Initiator, you receive a prompt that says the Microsoft iSCSI service is not running. You must start the service for Microsoft iSCSI Initiator to run correctly. Click Yes to start the service. The Microsoft iSCSI Initiator Properties dialog box opens, and the Targets tab is displayed.
4. Click the Discovery tab.
5. To add the target portal, click Discover Portal, and then, in the Discover Portal dialog box, type the IP address or name of the target portal to connect to. If desired, you can also type an alternate TCP port to be used for the connection.
6. Click OK.
Installing the HPE StoreEver Tape drivers
Both the HPE tape and HPE changer drivers for Windows must be installed before the advanced path failover drivers are installed. The tape and changer drivers bundle can be downloaded and then installed as follows:
1. Go to hpe.com/storage/tapecompatibility.
2. Under Tape tools, select HPE StoreEver Tape Drivers.
3. A webpage will open with RECOMMENDED HPE StoreEver Tape Drivers for Windows displayed.
4. Below the details for the driver you should notice several tabs. Select the Release Notes tab, then scroll to verify if there are any Upgrade Requirements or to view additional information.
5. Select the Installation Instructions tab to verify how to install the driver. Be sure to copy or save the installation instructions.
6. Click on the Download tab, then save the file.
7. Follow the installation instructions from step 5 to install the HPE tape and HPE changer drivers.
8. After installation of the tape and changer drivers, use Windows Device Manager to confirm that all of the configured paths are accessible to the operating system. If the expected number of paths are not available, check the host and SAN configuration. After all of the expected paths are available to the host, the advanced path failover drivers can be installed.
Installing the HPE StoreEver Tape advanced path failover drivers: Windows (2008 R2, 2012, and 2012 R2)
1. Go to hpe.com/support/storage.
2. Select Tape Storage.
3. Click Enterprise Class Tape Libraries (for ESL G3) or Tape Libraries (for MSL6480).
4. Click HPE StoreEver ESL G3 Tape Libraries or HPE StoreEver MSL6480 Tape Library.
5. With the Download options tab selected, click Get drivers, software & firmware.
6. For the ESL G3, select your product. For the MSL6480, skip to the next step.
7. Under Operating systems, select OS Independent.
8. Click Driver - Storage Tape.
9. Click Obtain software for the HPE StoreEver High Availability Failover Driver for your Windows operating system.
10. Click Select to continue. An HPE Passport account is required.
11. After logging in using your HPE Passport, complete the required fields, then read and accept the software license agreement. Click Next.
12. On the following page, select the Download tab for the HPE StoreEver High Availability Failover Driver for the version of Windows that is installed on your server.
13. If you saved the file, double-click on the file to launch the installer for the Tape Upper Bus Storage Filter driver.
14. Restart when requested.
15. After the system restarts, the installer will continue installing the Tape Multi-Path Intermediate Class driver. The installation process creates the directory C:\Program Files\Hewlett-Packard\Failover
For more detailed information related to installing the HPE StoreEver Tape advanced path failover drivers for Windows or the HPE StoreEver Tape drivers for Windows, refer to the HPE StoreEver Tape Libraries Failover User Guide, specifically the section titled Installing and using Windows advanced path failover drivers.
Windows Server best practices
Persistent binding
Target and LUN shifting can occur with Windows operating systems when disk or tape devices are connected or disconnected, a device is busy during discovery, or a device failure occurs, causing that device to no longer be seen on the SAN. Hewlett Packard Enterprise strongly recommends using zoning by HBA port to resolve target and LUN shifting. Refer to the earlier section FC/FCoE switch zoning recommendations for more on zoning by HBA port.
The method in which the Windows operating system enumerates devices is the cause of most target and LUN shifting. Windows enumerates devices as they are discovered during a scan sequence. They are enumerated with device handles such as TAPE0, TAPE1, and so on. The Windows device scan sequence goes in the order of bus, target, and LUN:
• Bus is the HBA PCI slot.
• Target is representative of a WWN.
• LUN is representative of a device behind the WWN.
The order of discovery is:
• The lowest bus (bus 0)
• Target 0 on bus 0
• The LUNs on target 0
• Target 1, and so on, until all targets connected to that HBA are discovered
The process continues on to the next HBA and its targets and LUNs. A common cause of device shifting is a busy tape device. A busy tape device cannot respond in time for Windows to enumerate it. The device is essentially skipped in the enumeration sequence, thus shifting all other device numbers.
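The ordering described above can be modeled with a small sketch. This is not Windows code, only an illustration of the enumeration rule: devices sorted by bus, then target, then LUN receive handles in that order, so any device skipped during the scan shifts every later handle. The helper name and the input triples are invented for the example.

```shell
# Model of Windows tape-device enumeration: sort (bus, target, LUN)
# triples numerically, then hand out TAPE0, TAPE1, ... in that order.
enumerate_tapes() {
  sort -k1,1n -k2,2n -k3,3n |
    awk '{ print "TAPE" NR-1, "bus=" $1, "target=" $2, "lun=" $3 }'
}

# Example: feed "bus target lun" triples in any order; removing one
# triple (a "busy" device) renumbers every handle after it.
#   printf '0 1 0\n0 0 0\n1 0 0\n' | enumerate_tapes
```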
Note: The Emulex OneCommand Manager Application Kit and the QLogic QConvergeConsole Utility both have proprietary persistent binding options. Review the appropriate vendor documentation for details.
Data protection and archiving software can also communicate with a tape device by using the Windows device name. As noted, the device name may shift and cause a problem for the data protection and archiving software. Some data protection and archiving software monitors for this condition and will adjust accordingly. Other data protection and archiving software must wait for a server reboot and subsequently scan for devices. Alternatively, the data protection and archiving software must be manually reconfigured to match the current device list. If your data protection and archiving software requires persistent device mapping, use the software's device configuration wizard to ensure proper configuration.
Red Hat and SUSE Linux Server
RHEL 5 Update 11 (AS/ES/WS), RHEL 6 Update 6 (AS/ES/WS), RHEL 7 (AS/ES/WS), SLES 11 SP3 (x86/x64), SLES 12
Note: Hewlett Packard Enterprise recommends installing the kernel development option (source code) when installing any Linux server. Availability of source code ensures the ability to install additional device support software that will be compiled into the kernel.
Installing the HBA drivers
All HPE ProLiant server software, firmware, and drivers can be updated using the latest SPP from the HPE support website:
1. From an Internet browser, go to hpe.com.
2. Click on Support.
3. Under Product Support, click HPE Servers, Storage and Networking.
4. In the Enter product name or number box, enter Service Pack for ProLiant.
5. Click Get drivers, software & firmware.
6. Select the Linux Server operating system version that is installed on the ProLiant system that will be updated.
7. Under Application (Entitlement Required) - System Management, select the HPE Service Pack for ProLiant (American, International) hyperlink.
8. Below the details for the software you should notice several tabs. Select the Release Notes tab, then scroll to verify if there are any Upgrade Requirements or to view additional information.
9. Select the Installation Instructions tab to verify how to install the SPP. Be sure to copy or save the installation instructions.
Note: A hyperlink to the HPE Service Pack for ProLiant Release Notes is provided within the Release Notes tab. The HPE Service Pack for ProLiant Release Notes provide detailed instructions regarding the SPP, including a summary of changes, compatibility details for migrating from an older version of the SPP, supported operating systems, requirements, component prerequisites, deployment options, and known limitations.
10. Click on the Obtain software hyperlink above the various tabs to download the Service Pack for ProLiant to your server.
Note: To download the HPE Service Pack for ProLiant, you must have:
1. An HPE Passport account (a sign-in link is provided).
2. Either a warranty, HPE Care Pack, or support agreement linked to your HPE Support Center profile.
Click on the various links that are provided for more information on how warranties, HPE Care Packs, and support agreements enable access to select downloads or site functions.
11. Booting your server to the SPP (offline mode) will allow you to upgrade firmware for any or all components that are flagged as requiring an upgrade, while booting to the OS and then running the SPP (online mode) will allow you to install any or all Hewlett Packard Enterprise related drivers and software packages.
Note: Please refer to the HPE Service Pack for ProLiant Release Notes, which are referenced above, if any issues are encountered when installing the SPP. Specifically, review the sections Deployment Instructions and Components Changes.
12. A reboot might be required following the SPP installation.
To manually install the latest HPE-supported Brocade, Emulex, or QLogic driver kit from the HPE support website:
1. From an Internet browser, go to hpe.com.
2. Click on Support.
3. Under Product Support, click HPE Servers, Storage and Networking.
4. In the Enter product name or number box, enter your HBA model (for example, AP770A, AJ763A, AJ764A, CN1000Q, SN1000Q, CN1100E, SN1100E, or NC553m) and click Go.
5. Click Get drivers, software & firmware.
6. Select the Linux Server operating system version that is installed on the ProLiant system in which the HBA is installed.
7. Expand Driver - Storage Fibre Channel, then click on the appropriate driver hyperlink (if more than one version of the driver is listed, verify the latest supported version listed in the latest HPE Data Agile BURA Compatibility Matrix).
8. Below the details for the driver you should notice several tabs. Select the Release Notes tab, then scroll to verify if there are any Upgrade Requirements or to view additional information.
9. Select the Installation Instructions tab to verify how to install the driver. Be sure to copy or save the installation instructions.
10. Click on the Download tab to copy the file to your server.
11. A reboot might be required following the driver installation.
Installing the Linux Open-iSCSI module
You can install and use the iscsi-initiator-utils package (Red Hat®) or the open-iscsi module (SUSE). Download and install either package using your distribution's package manager (yum or YaST, for example). Detailed instructions for iscsiadm can be found in the iscsiadm man pages.
Prior to discovering available iSCSI target devices on an HPE Storage System for a Linux server, the target requires the Linux server's iSCSI initiator name. This name is found in the /etc/iscsi/initiatorname.iscsi file.
Once the iSCSI initiator name has been determined, to discover available iSCSI target devices on an HPE Storage System, type the following command, where x.x.x.x is the IP address of the HPE Storage System:
iscsiadm --mode discovery --type sendtargets --portal x.x.x.x
To connect to the HPE Storage System targets, type the following command for each discovered target, where target_name is the name returned by the iscsiadm discovery:
iscsiadm --mode node -T target_name --login --portal x.x.x.x
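For hosts that connect to several portals or targets, the two commands above can be wrapped in a small script. This is a minimal sketch, not an HPE-supplied tool; the portal address and target IQN are illustrative placeholders, and the DRY_RUN guard prints each iscsiadm command for review instead of executing it:

```shell
#!/bin/sh
# Sketch: run the iscsiadm discovery/login sequence for one portal.
# DRY_RUN=1 (the default here) prints each command instead of running
# it, because a live run needs root privileges and a reachable target.
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" = "1" ]; then echo "$@"; else "$@"; fi
}

# $1 = portal IP address; remaining args = target names reported by
# the discovery step.
discover_and_login() {
    portal=$1; shift
    run iscsiadm --mode discovery --type sendtargets --portal "$portal"
    for target in "$@"; do
        run iscsiadm --mode node -T "$target" --login --portal "$portal"
    done
}

# The address and IQN below are illustrative placeholders.
discover_and_login 192.0.2.10 iqn.2015-01.com.example:storeonce-tgt1
```

Setting DRY_RUN=0 executes the same sequence for real.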
Storage HBAs with Linux servers
Hewlett Packard Enterprise supports the Linux in-box driver (the driver supplied with the operating system distribution). However, not all hardware configurations support the in-box driver. To determine whether your configuration supports the in-box driver, see the HBAs, CNAs, Flex Fabric Adapters and Server LOMs Support Matrix (Linux, Citrix, VMware, and Windows), which is available on the HPE SPOCK website. You must sign up for an HPE Passport to access SPOCK.
Whether you are using the in-box drivers or the out-of-box drivers, Hewlett Packard Enterprise recommends that you install the HPE Fibre Channel Enablement Kit, because it provides additional libraries and configuration utilities that enable HPE Fibre Channel Storage Arrays to work properly with Linux.
Note: If you are using any HPE management applications, you need the HBA API libraries that come with the HPE-fc-enablement RPM.
Downloading and installing the Fibre Channel Enablement Kit for Linux
Note: The enablement kits released after 29 April 2014 have changed; they are now vendor-specific kits. Hewlett Packard Enterprise recommends that you uninstall any previous kits prior to installing the latest version of the enablement kit.
1. From an Internet browser, go to hpe.com.
2. Click on Support.
3. Under Product Support, click HPE Servers, Storage and Networking.
4. In the Enter product name or number box, enter your HBA model (for example, AP770A, AJ763A, AJ764A, CN1000Q, CN1100E, or NC553m) and click Go.
5. Click Get drivers, software & firmware.
6. Select the Linux Server operating system version of the ProLiant system in which the HBA is installed.
7. Select the Software—Storage Controllers—FC HBA hyperlink and click on the HPE Fibre Channel Enablement Kit for Linux (American, International).
8. Below the details for the driver you should notice several tabs. Select the Release Notes tab, then scroll to verify whether there are any Upgrade Requirements or to view additional information.
9. Select the Installation Instructions tab to verify how to install the HPE Fibre Channel Enablement Kit for Linux. Be sure to copy or save the installation instructions.
10. Click on the Download tab to copy the RPM file to your server.
11. Browse to the directory to which you downloaded the RPM.
12. Follow the Installation Instructions that you copied or saved in step 9.
13. A reboot is required after the installation for the updates to take effect and for hardware stability to be maintained.
14. Verify that the host has successfully discovered all the expected devices—tape drives, library robotic devices, and disk-based backup devices—using one of the following methods:
– Review the devices listed from running the command cat /proc/scsi/scsi.
– Review the output from the sg_inq command (which requires that sg3_utils is installed) for any of the /dev/sg devices listed in the output of the sg_map command. See figure 3 for an example.
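As an illustration of the first method, the /proc/scsi/scsi listing can be tallied with a short script. The count_type helper below is a hypothetical convenience, not an HPE tool, and the demo runs against a captured listing rather than the live /proc file so it also works on hosts with no tape hardware attached:

```shell
#!/bin/sh
# Count entries of a given SCSI device type in /proc/scsi/scsi-style
# output (Sequential-Access = tape drive, Medium Changer = robotics).
count_type() {
    grep -c "Type:[[:space:]]*$1" "$2"
}

# Demo against a captured listing instead of the live /proc file.
listing=$(mktemp)
cat > "$listing" <<'EOF'
Host: scsi2 Channel: 00 Id: 00 Lun: 00
  Vendor: HP       Model: Ultrium 6-SCSI   Rev: 354W
  Type:   Sequential-Access                ANSI  SCSI revision: 06
Host: scsi2 Channel: 00 Id: 01 Lun: 00
  Vendor: HP       Model: MSL G3 Series    Rev: 8.70
  Type:   Medium Changer                   ANSI  SCSI revision: 05
EOF
echo "tape drives: $(count_type 'Sequential-Access' "$listing")"
echo "robotics:    $(count_type 'Medium Changer' "$listing")"
rm -f "$listing"
```

On a live host, point count_type at /proc/scsi/scsi itself and compare the counts against the expected device totals.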
Figure 3. Verifying devices using the sg_map and sg_inq commands
Installing the HPE StoreEver Tape advanced path failover drivers
Red Hat Enterprise Linux Server 6.2 (x86_64), 6.3 (x86_64), 6.4 (x86_64), 6.5 (x86_64), and 6.6 (x86_64)
The advanced path failover drivers for Linux replace the normal SCSI tape and SCSI generic drivers. For devices that do not support advanced path failover, the drivers pass all SCSI commands through the same code path that is followed when the standard drivers are loaded; commands for devices that do support failover are routed through the new PFO driver.
1. Go to hpe.com/support/storage.
2. Select Tape Storage.
3. Click Enterprise Class Tape Libraries (for ESL G3) or Tape Libraries (for MSL6480).
4. Click HPE StoreEver ESL G3 Tape Libraries or HPE StoreEver MSL6480 Tape Library.
5. In the Download options tab, click Get drivers, software & firmware.
6. For the ESL G3, select your product. For the MSL6480, skip to the next step.
7. Under Operating systems, select OS Independent.
8. Expand Driver—Storage Tape, then select the appropriate driver hyperlink.
9. Click Obtain software for the HPE StoreEver High Availability Failover Driver for your operating system.
10. Click on Select. An HPE Passport account (a sign-in link is provided) is required.
11. After logging in using your HPE Passport, complete the required fields, then read and accept the software license agreement. Click Next.
12. On the following page, select the Download tab for the HPE StoreEver High Availability Failover Driver for the version of Red Hat that is installed on your server.
13. To install the drivers, run the following command:
rpm -ivh <filename>.rpm
14. In some cases, the server will need to be rebooted to complete the installation. Check the instructions provided by the RPM file output and reboot the server if requested.
15. The driver revision number indicates the build date of the driver and can be viewed by running:
cat /proc/scsi/sg/version
16. You can view the status of a device that is controlled by the failover driver by reading a file in the /sys file system. For example, to see the path status for /dev/sg3:
cat /sys/class/pfo/pfo3/paths
Enabling advanced path failover on a device while the driver is running
If a device has the advanced path failover feature disabled when advanced path failover is enabled, the device will reset itself, removing the old /dev file. When the device comes back up, it will be recognized as an advanced path failover device and will then operate normally as one. It may not have the same /dev file name as before the change.
Disabling advanced path failover on a device while the driver is running
Disabling advanced path failover while a device is running is not recommended, because the paths will not be cleanly removed and then reassociated. If advanced path failover is disabled on any device, the Linux server will need to be rebooted. When possible:
1. Power down the Linux server cleanly.
2. Disable advanced path failover on the device.
3. Boot the Linux server.
For more detailed information about installing the HPE StoreEver Tape advanced path failover drivers for Red Hat Linux Servers, refer to the HPE StoreEver Tape Libraries Failover User Guide—specifically the section titled Installing and using Linux advanced path failover drivers.
Additional SG device files
In most environments, the default number of SG device files is sufficient to support all of the required devices. In larger SAN environments, if the default number of SG device files is fewer than the combined total of disk and tape devices being allocated to the server, then additional device files need to be created. SG device files are preferable to the standard SCSI tape (st) device files because the st driver's SCSI timeout values may not be long enough to support some tape operations.
To create additional SG device files, run the following:
mknod /dev/sgX c 21 X
where X signifies the number of a device file that does not already exist. For additional command options, see the mknod man page.
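Creating a batch of these device files can be scripted. The sketch below is illustrative only: it prints the mknod commands for any missing sg nodes up to a chosen number rather than executing them (mknod itself requires root), and the scratch-directory demo stands in for /dev:

```shell
#!/bin/sh
# Print the mknod commands for any /dev/sgX files that do not exist
# yet (character device, major number 21, minor number = X).
sg_mknod_cmds() {
    devdir=$1   # normally /dev; parameterized so the demo is harmless
    max=$2      # highest sg number needed (disks + tapes + robotics)
    i=0
    while [ "$i" -le "$max" ]; do
        [ -e "$devdir/sg$i" ] || echo "mknod $devdir/sg$i c 21 $i"
        i=$((i + 1))
    done
}

# Demo against an empty scratch directory.
scratch=$(mktemp -d)
sg_mknod_cmds "$scratch" 2
rmdir "$scratch"
```

Piping the printed commands to sh as root would create the missing nodes for real.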
Red Hat and SUSE Linux Server best practices
Rewind commands issued by rebooted Linux hosts
Device discovery during a normal Linux server boot can cause a SCSI rewind command to be issued to all attached tape drives. If the data protection and archiving software does not employ SCSI reserve/release and the rewind command is received while the tape drive is busy writing, the result is a corrupted tape header and an unusable piece of backup media.
This issue could manifest itself in several ways:
• A failed verify operation
• A failed restore operation
• The inability to mount a tape and read the tape header
If a backup verification is not completed, the normal backup process might not detect that an issue exists.
Tape devices not discovered and configured across server reboots
Tape drives can disappear from Linux servers after the host reboots when using older versions of the HBA drivers. Adding the line "hp_rescan -a" to /etc/rc.d/rc.local resolves the issue. The hp_rescan utility was previously included and installed with older versions of the HPE Linux FCoE/FC Driver Kit.
Note: The latest versions of the Fibre Channel Enablement for Linux kits no longer include the HPE fibreutils package, which contains the hp_rescan utility. The fibreutils package can be manually downloaded and installed by following steps 1–6 from the section Downloading and installing the Fibre Channel Enablement Kit for Linux. For step 5, select Software—Storage Controllers—FC HBA.
This issue, which affects Red Hat installations and, intermittently, some SUSE Linux installations, is understood to be an issue with the mid-layer SCSI driver and its interaction with SCSI-2 tape automation products. The permanent resolution is to upgrade to the latest FC driver kit.
Enable iSCSI target devices to remain persistent across system reboots
To enable the iSCSI target devices to remain persistent across system reboots, the open-iscsi service must be configured to run at system startup. This can be done by issuing the following commands.
For Red Hat 7 and SUSE 12:
systemctl enable iscsid.service
systemctl restart iscsid.service
For earlier versions of Red Hat:
chkconfig iscsi on
For earlier versions of SUSE:
chkconfig open-iscsi on
To verify that this configuration change has been accepted, run the following command.
For Red Hat 7 and SUSE 12:
systemctl -a | grep iscsi
iscsi-shutdown.service  error  inactive dead     Logout off all iSCSI sessions on shutdown
iscsi.service           loaded inactive dead     Login and scanning of iSCSI devices
iscsid.service          loaded active   running  Open-iSCSI
iscsiuio.service        loaded active   running  iSCSI UserSpace I/O driver
iscsid.socket           loaded active   running  Open-iSCSI iscsid Socket
iscsiuio.socket         loaded active   running  Open-iSCSI iscsiuio Socket
For earlier versions of Red Hat:
chkconfig --list iscsi
iscsi 0:off 1:off 2:off 3:on 4:off 5:on 6:off
For SUSE servers:
chkconfig --list open-iscsi
open-iscsi 0:off 1:off 2:off 3:on 4:off 5:on 6:off
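The runlevel check can also be scripted. A small, illustrative helper that tests whether a chkconfig-style output line shows a given runlevel as on:

```shell
#!/bin/sh
# Succeed if a chkconfig-style line on stdin shows the given runlevel
# as "on" (e.g. "3:on").
runlevel_on() {
    grep -q "$1:on"
}

line="iscsi 0:off 1:off 2:off 3:on 4:off 5:on 6:off"
if echo "$line" | runlevel_on 3 && echo "$line" | runlevel_on 5; then
    echo "iscsi will start in runlevels 3 and 5"
fi
```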
LUNs shifting after reboot
The Linux 2.6 kernel and later enhanced the management of attached devices through the introduction of udev. The udev device manager provides users with a persistent naming process for all devices across reboots. For details on how to configure udev, refer to the appropriate Linux distribution documentation.
If your data protection and archiving software requires persistent device mapping, use the software's device configuration wizard to ensure proper configuration.
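As one illustration of a udev-based persistent name, a rule file can pin a symlink to a tape drive by its serial number. The file path, match keys, serial number, and symlink name below are all placeholders; consult your distribution's udev documentation for the match keys it actually populates for tape devices:

```
# /etc/udev/rules.d/61-persistent-tape.rules (illustrative path)
# Give the no-rewind tape node whose serial number matches a stable
# symlink, e.g. /dev/tape_msl_drv0, regardless of discovery order.
KERNEL=="nst*", ENV{ID_SERIAL}=="HU1234ABCD", SYMLINK+="tape_msl_drv0"
```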
Recommended changes to queue depth and timeout values
Changes to queue depth and timeout values are recommended when operating HPE StoreEver ESL G3 Tape Libraries and HPE StoreEver MSL Tape Libraries with Linux-based hosts. Recommended changes are as follows:
• Queue depth
The queue depth when operating the MSL6480 or MCB Version 1 ESL G3 libraries should be set to one (1), as there is only one active robot to complete Move Medium commands. With dual-robot MCB Version 2 ESL G3 libraries, the queue depth should be set to two (2), as the library has two active robots that can complete Move Medium commands.
HPE LTO drives are capable of handling command queues of four or five commands, but if hosts continue to send commands past that amount, the drive (or the library being hosted by that drive) will start to return status messages saying that the queue is full and the host should wait 500 ms. If the host doesn't stop sending commands at this point, the delays in returning status for commands can be long enough that the drive appears hung. As such, care should be taken to ensure that the queue depth is set correctly to avoid this scenario, preferably by using the recommended queue depths provided above.
With Linux-based hosts, this command lets you see what the queue depth is set to for each generic SCSI device:
find /sys/class/scsi_generic/*/device/queue_depth -exec grep -H "" {} \;
If required, the sg_inq command can help correlate the SCSI generic mapping to specific devices:
sg_inq /dev/sgN
Note: N is the sg number shown in the output of the previous command.
• Timeouts
Most applications increase the timeout value for motion commands but continue to rely on the default timeout value for Read Element Status commands. This can especially be of concern in larger partitioned libraries, where multiple Read Element Status commands to the different partitions are outstanding and the default timeout value is reached before a response is provided. Depending on whether udev rules are in effect, the default timeout value on Linux-based hosts tends to be either thirty (30) or sixty (60) seconds. Given all of the above, it is recommended that the default timeout value be changed to twenty minutes on all Linux-based hosts operating HPE LTO drives in HPE StoreEver ESL G3 Tape Libraries and HPE StoreEver MSL Tape Libraries, to allow multiple commands to complete successfully without hitting the default timeout value.
With Linux-based hosts, this command lets you see what the default timeout value is currently set to:
find /sys/class/scsi_generic/*/device/timeout -exec grep -H "" {} \;
If required, the sg_inq command can help correlate the SCSI generic mapping to specific devices:
sg_inq /dev/sgN
Note: N is the sg number shown in the output of the previous command.
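Both recommended values are applied by writing to per-device sysfs attributes. The sketch below is parameterized on the sysfs root so it can be demonstrated safely against a scratch directory; on a live host you would pass /sys and run as root, and values set this way do not persist across reboots (see the Engineering Advisory referenced below for the persistent procedure):

```shell
#!/bin/sh
# Sketch: apply a value to a per-device sysfs attribute for every sg
# device under a given sysfs tree.
set_sg_attr() {
    sysroot=$1   # /sys on a live host; a scratch tree in the demo
    attr=$2      # queue_depth or timeout
    value=$3
    for f in "$sysroot"/class/scsi_generic/*/device/"$attr"; do
        [ -e "$f" ] || continue
        echo "$value" > "$f"
    done
}

# Demo against a scratch tree standing in for one sg device.
root=$(mktemp -d)
mkdir -p "$root/class/scsi_generic/sg0/device"
echo 32 > "$root/class/scsi_generic/sg0/device/queue_depth"
echo 60 > "$root/class/scsi_generic/sg0/device/timeout"
set_sg_attr "$root" queue_depth 1    # single-robot library value
set_sg_attr "$root" timeout 1200     # twenty minutes, in seconds
cat "$root/class/scsi_generic/sg0/device/queue_depth" \
    "$root/class/scsi_generic/sg0/device/timeout"
rm -r "$root"
```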
The detailed procedure for making the recommended changes to the command queuing and default timeout values for Linux-based hosts can be viewed in the following Engineering Advisory: HPE StoreEver ESL G3 Tape Libraries, MSL Tape Libraries, and 1/8 G2 Autoloaders—Recommended Changes to Queue Depth and Timeout Values With Linux-Based Hosts.
HPE-UX Server
Installing HBA drivers in the kernel
HPE-UX 11i v2 (11.23 IA-64)
1. The drivers schgr, sctl, and stape must all be installed in the kernel. To see if these drivers are installed, enter the following command:
/usr/sbin/kcmodule schgr sctl stape
The following example shows output from kcmodule where the stape driver is not installed:
Module  State   Cause
schgr   static  explicit
sctl    static  depend
stape   unused
If one or more of the above drivers is in the unused state, they must be installed in the kernel. If they are all installed (static state), proceed to the next section, Final host configurations.
2. Use kcmodule to install modules in the kernel. For example, to install the stape module, run the following command:
/usr/sbin/kcmodule stape=static
Enter yes to back up the current kernel configuration file and initiate the new kernel build.
3. Reboot the server to activate the new kernel:
cd /
/usr/bin/shutdown -r now
HPE-UX 11i v3 (11.31 IA-64)
1. The drivers schgr, sctl, stape, eschgr, esctl, and estape must all be installed in the kernel. To see if these drivers are installed, enter the following command:
/usr/sbin/kcmodule sctl esctl schgr eschgr stape estape
The following example shows output from kcmodule where the stape driver is not installed:
Module  State   Cause
sctl    static  best
esctl   static  depend
schgr   static  best
eschgr  static  best
stape   unused
estape  static  best
If one or more of the above drivers is in the unused state, they must be installed in the kernel. If they are all installed (static state), proceed to the next section, Final host configurations.
2. Use kcmodule to install modules in the kernel. For example, to install the stape module, use the following command:
/usr/sbin/kcmodule stape=static
Enter yes to back up the current kernel configuration file and initiate the new kernel build.
3. Reboot the server to activate the new kernel:
cd /
/usr/bin/shutdown -r now
Installing the HPE-UX iSCSI Software Initiator
The iSCSI Software Initiator is located at the HPE Software Depot:
1. Go to software.hp.com.
2. Enter iSCSI Software Initiator in the Search Software Depot box located on the upper right side of the website.
3. When the search results show iSCSI Software Initiator, click on Select. An HPE Passport account (a sign-in link is provided) is required.
4. After logging in using your HPE Passport, complete the required fields, scroll down, then read and accept the software license agreement for the order. Click Next.
5. Under Documentation, click on the Download tab for the Installation Instructions to download instructions for using the Software Distributor tool to install the iSCSI Software Initiator.
6. Under Software, click on the Download tab for the iSCSI Software Initiator version that you would like to download.
7. After installing the iSCSI Software Initiator and rebooting, you can verify that the installation was successful by running the following command:
swlist iSCSI-00
If the HPE-UX iSCSI Software Initiator is installed correctly, the output will be similar to the following.
HPE-UX 11.23:
Initializing...
Contacting target "localhost"...
Target: localhost
iSCSI-00            B.11.23.03e  HPE-UX iSCSI Software Initiator
iSCSI-00.ISCSI-SWD  B.11.23.03e  HPE-UX iSCSI Software Initiator
HPE-UX 11.31:
Initializing...
Contacting target "localhost"...
Target: localhost
iSCSI-00            B.11.31.01  HPE-UX iSCSI Software Initiator
iSCSI-00.ISCSI-SWD  B.11.31.01  HPE-UX iSCSI Software Initiator
Final host configurations
1. Run ioscan to verify that the host detects the tape devices:
ioscan
For HPE-UX 11.23 legacy device special files (DSFs) or persistent DSFs, run the following commands:
ioscan -fnkC tape
ioscan -fnkC autoch
2. For HPE-UX 11.31 persistent DSFs, run the following commands:
ioscan -fnNkC tape
ioscan -fnNkC autoch
Note: Some data protection and archiving software might not currently support HPE-UX 11.31 persistent DSFs for tape. Review your data protection and archiving software documentation for more information.
3. To verify that the host detects iSCSI devices, issue the ioscan command as follows for HPE-UX 11.23:
ioscan -fnC iscsi
Issue the ioscan command as follows for HPE-UX 11.31:
ioscan -fnNC iscsi
If the software is installed correctly, the generated output will look similar to this:
Class  I  HW Path  Driver  SW State  HW Type   Description
=====================================================================
iscsi  0  255/0    iscsi   CLAIMED   VIRTBUS   iSCSI Virtual Node
4. If no device files have been installed, enter the following commands:
insf -C tape -e
insf -C autoch -e
The command-line tool for configuring the HPE-UX iSCSI Software Initiator is iscsiutil. Detailed instructions for iscsiutil can be found in the iscsiutil man pages. If using iscsiutil to configure the HPE-UX iSCSI Software Initiator, add the path for iscsiutil and other iSCSI executables to the root path:
PATH=$PATH:/opt/iscsi/bin
You should now be able to find the iSCSI initiator node for the HPE-UX host:
iscsiutil -l
To discover available iSCSI target devices on an HPE Storage System, type the following command, where x.x.x.x is the IP address of the HPE Storage System:
iscsiutil -a -I x.x.x.x
Installing the HPE StoreEver Tape advanced path failover drivers for HPE-UX 11.31
Advanced path failover for HPE-UX is implemented by updating HPE-UX drivers to support advanced path failover with the LTO-6 tape drives. The drivers function as both failover and non-failover drivers.
The updated drivers are:
• HPE-UX tape driver (estape)—used for data path failover
• HPE-UX media changer driver (eschgr)—used for control path failover
• HPE-UX SCSI stack driver (esctl)—used for data path and control path failover
To download and install the HPE StoreEver Tape advanced path failover drivers for HPE-UX 11.31:
1. Get the latest HPE-UX patches from h20566.www2.hpe.com/portal/site/hpsc/patch/home.
Note: To access and download HPE-UX patches, you must have:
1. An HPE Passport account (a sign-in link is provided).
2. An active HPE support agreement linked to your HPE Support Center profile. The active Hewlett Packard Enterprise support agreement must:
– Cover the specific HPE operating systems linked to your HPE Support Center user profile
– Include software updates or previous-version support privileges
Links are provided to view your current patch access privileges or to contact Hewlett Packard Enterprise.
2. To locate the patches, search for estape, eschgr, and esctl or the patch number, and then check the patch details to see if there is a superseding patch.
3. To install the advanced path failover drivers, use the standard HPE-UX kernel patch installation process to install the following patches on the HPE-UX Servers running HPE-UX 11.31:
– HPE-UX tape driver patch (estape)—PHKL_43680 or superseding patch
– HPE-UX media changer driver patch (eschgr)—PHKL_43681 or superseding patch
– HPE-UX SCSI stack (mass storage stack) driver patch (esctl)—PHKL_43819 or superseding patch
4. The server will automatically reboot as part of the installation process.
You can use ioscan to view the tape and library (media changer) devices connected to the HPE-UX Server. The device special file (DSF) is listed as the last item in the description:
ioscan -knNfC tape
ioscan -knNfC autoch
Finding the lockdown path
The load-balance policy used to route data on multiple paths to a tape drive or library is called the "path_lockdown" policy. Use the scsimgr get_info command to see the current lockdown path for a library. For example:
scsimgr get_info -D /dev/rchgr/autoch35
STATUS INFORMATION FOR LUN /dev/rchgr/autoch35
...
LUN path used when policy is path_lockdown = 0/0/0/9/0/0/1.0x50014382c6e4f009.0x1000000000000
scsimgr get_attr -D /dev/rtape/tape28_BEST
SCSI ATTRIBUTES FOR LUN /dev/rtape/tape28_BEST
name = lpt_lockdown
current = 0/0/0/9/0/0/0.0x100000e00222a6c1.0x2000000000000
default =
saved =
For additional information, see the HPE-UX man pages scsimgr (1M), ioscan (1M), mknod (2), mksf (1M), and rmsf (1M).
Troubleshooting advanced path failover for HPE-UX 11.31
Advanced path failover errors are logged in the /var/adm/syslog/syslog.log file as part of the default SCSI I/O tracing function of HPE-UX. You can use standard file viewing commands, including cat, vi, dmesg, and tail -f, to view syslog.log.
Enabling or disabling advanced path failover for HPE-UX 11.31
Advanced path failover is disabled by default. When advanced path failover is disabled, the driver operates as if the device is not capable of using the advanced path failover feature.
When advanced path failover is enabled for the library or tape drive, the device resets itself and must be opened using the device special file before the driver will recognize it as an advanced path failover device and use the failover features of the driver. Opening the device is generally done by the host applications.
You can enable or disable advanced path failover using the library web-based interface. For instructions, refer to the following sections of the HPE StoreEver Tape Libraries Failover User Guide:
• Enabling control path failover, under Configuring failover for the HPE StoreEver ESL G3 Tape Libraries
• Enabling data path failover, under Configuring failover for HPE StoreEver MSL6480 Tape Libraries
When advanced path failover is disabled, the passive control paths to the library will go into an error state (NO_HW) in the ioscan (1M) command output. These stale entries do not affect the function of the library. To clear these errors so the device can be accessed using its DSF:
1. On the HPE-UX host, run rmsf -H on the lunpath hardware paths that are in the NO_HW state. For example:
rmsf -H 0/4/0/0/0/1.0x50014380023560d4.0x1000000000000
2. Run ioscan -kfNH <HBA path>. For example:
ioscan -kfNH 0/4/0/0/0
Hewlett Packard Enterprise recommends enabling or disabling advanced path failover only when the library is not opened by any applications. If advanced path failover is disabled while an application is accessing the library, all of the library's lunpaths will go offline and I/O requests to the library will fail.
For more detailed information about installing the HPE StoreEver Tape advanced path failover drivers for HPE-UX 11.31 Servers, refer to the HPE StoreEver Tape Libraries Failover User Guide—specifically the section titled Installing and using HPE-UX advanced path failover drivers.
HPE-UX Server best practices
HPE-UX 11.31 can experience poor I/O performance on VxFS file systems due to memory blocking during high system memory usage. The HPE-UX 11.31 kernel subsystems and file I/O data cache can consume up to 90 percent of system memory during normal operation. When a heavy file I/O application such as data protection and archiving software starts, memory usage can reach close to 100 percent. Under such conditions, if VxFS attempts to allocate additional memory for inode caching, this can result in memory blocking and subsequently poor file I/O performance. In extreme conditions, this scenario can cause data protection and archiving software to time out during file system reads, which could result in backup job failures.
Poor I/O performance resolution
To avoid backup job failures due to memory blocking, modify the kernel tunable parameter vx_ninode. The vx_ninode parameter determines the number of inodes in the inode table, which helps VxFS with caching. By default, the size of the inode cache is decided (auto-tuned) at boot time by VxFS depending on the amount of physical memory in the machine. When modifying the value of vx_ninode, HPE recommends the following:
Table 4. Tuning vx_ninode
PHYSICAL MEMORY OR KERNEL AVAILABLE MEMORY    VXFS INODE CACHE (NUMBER OF INODES)
1 GB                                          16384
2 GB                                          32768
3 GB                                          65536
> 3 GB                                        131072
To determine the current value of vx_ninode, run the following at the shell prompt:
/usr/sbin/kctune vx_ninode
To set vx_ninode to 32768, run the following command at the shell prompt:
/usr/sbin/kctune vx_ninode=32768
Note: The kernel tunable parameters filecache_min and filecache_max control the amount of physical memory that can be used for caching file data during system I/O operations. By default, these parameters are automatically determined by the system to better balance memory usage among file system I/O-intensive processes and other types of processes. The values of these parameters can be lowered to allow a larger percentage of memory to be used for purposes other than file system I/O caching. Determining whether or not to modify these parameters depends on the nature of the applications running on the system.
HPE-UX 11.23: Disabling rewind-on-close devices with st_san_safe
Turning on the HPE-UX 11.23 kernel tunable parameter st_san_safe disables tape DSFs that are rewind-on-close. This prevents utilities like mt from rewinding a tape that is in use by another utility.
Some applications or utilities require rewind-on-close DSFs (for example, the frecover utility that comes with HPE-UX). In this case, disabling rewind-on-close devices renders the utility unusable. Most data protection and archiving software, such as HPE Data Protector, can be configured to use SCSI reserve/release, which protects them from rogue rewinds by other utilities. Consider the requirements of your data protection and archiving environment when determining whether or not to enable st_san_safe.
To determine if rewind-on-close devices are currently disabled, enter:
/usr/sbin/kctune st_san_safe
If the value of st_san_safe is 1, then rewind-on-close devices are disabled. If the value is 0, then rewind-on-close devices are enabled. To disable rewind-on-close devices, enter:
/usr/sbin/kctune st_san_safe=1
Oracle Solaris Server
Solaris 10 Update 11 (SPARC), Solaris 10 Update 11 (x86/x64), Solaris 11.2 (SPARC), Solaris 11.2 (x64)
How to enable the iSCSI Software Initiator
1. For Solaris 10 Update 11 (SPARC) and 11.2 (SPARC), enable the iSCSI services using the command:
svcadm enable network/iscsi/initiator
2. Verify the iSCSI services are running:
svcs -a | grep 'iscsi/initiator'
online 10:10:28 svc:/network/iscsi/initiator:default
3. For earlier versions of Solaris, enable the iSCSI services using the command:
svcadm -v enable iscsi_initiator
svc:/network/iscsi_initiator:default enabled
4. Verify the iSCSI services are running:
svcs -a | grep iscsi_initiator
online 10:10:28 svc:/network/iscsi_initiator:default
The command-line tool for configuring the Solaris iSCSI Software Initiator is iscsiadm. Detailed instructions for iscsiadm can be found in the iscsiadm man pages. If using iscsiadm to configure the Solaris iSCSI Software Initiator, run the following command to find the iSCSI initiator node for the Solaris host:
iscsiadm list initiator-node
To discover available iSCSI target devices on an HPE Storage System, type the following commands, where x.x.x.x is the IP address of the HPE Storage System:
iscsiadm add discovery-address x.x.x.x:3260
iscsiadm list discovery-address
Discovery Address: x.x.x.x:3260
iscsiadm modify discovery -t enable
iscsiadm list discovery
List the configured iSCSI target devices using the following command:
iscsiadm list target
Oracle Solaris Server best practices
Troubleshooting with the cfgadm utility
• Getting the status of FC devices using cfgadm:
cfgadm -al
Example output for the above command:
This output shows a media changer at LUN 0 for the 100000e0022229fa9 WWN, and tape and disk devices at LUN 0 for other WWNs. The devices are connected, have been configured, and are ready for use.
The cfgadm -al -o show_FCP_dev command can be used to show the devices for all LUNs of each Ap_Id.
• Fixing a device with an "unusable" condition
If the condition field of a device in the cfgadm output is "unusable", then the device is in a state such that the server cannot use the device. This may have been caused by a hardware issue. In this case, do the following to resolve the issue:
– Resolve the hardware issue so the device is available to the server.
– After the hardware issue has been resolved, use the cfgadm utility to verify device status and to mend the status if necessary.
• Use cfgadm to get device status:
cfgadm -al
– For a device that is "unusable", use cfgadm to unconfigure the device and then re-configure the device. For example (this is an example only; your device WWN will be different):
cfgadm -c unconfigure c4::100000e0022286ec
cfgadm -f -c configure c4::100000e0022286ec
– Use cfgadm again to verify that the condition of the device is no longer "unusable":
cfgadm -al
IBM AIX Server
AIX 6.1 (TL9), AIX 7.1 (TL3)
Installing the HBA device driver
Install the IBM (5729, 5735, 5273, 5758, 5759, 5773, 5774) HBA and restart the server.
1. Ensure that the HBA is recognized. At the shell prompt, type:
lsdev -Cc adapter
There is a line in the output similar to the following:
fcs0 Available 1D-08 FC Adapter
If the adapter is not recognized, check that the correct HBA fileset (driver) is installed:
6228: lslpp -L | grep devices.pci.df1000f7
6239: lslpp -L | grep devices.pci.df1080f9
5716: lslpp -L | grep devices.pci.df1000fa
5759: lslpp -L | grep devices.pci.df1000fd
5773: lslpp -L | grep devices.pciex.df1000fe
5774: lslpp -L | grep devices.pciex.df1000fe
There are lines in the output for lslpp similar to the following for a 6239 HBA:
devices.pci.df1080f9.diag  5.1.0.1  C  F  PCI-X FC Adapter Device
devices.pci.df1080f9.rte   5.1.0.1  C  F  PCI-X FC Adapter Device
2. For information about the HBA, such as the WWN, execute the following command:
lscfg -vl fcs0
The output will look similar to the following:
3. To see the version of microcode (firmware) being run by the HBA, use the following command: lsmcode -c -d <device> Microcode and other updates can be found at ibm.com/support/fixcentral.
4. After the HBA has successfully logged into the SAN fabric and the necessary zoning is configured, configure the HBA and devices within the fabric. At the prompt, type: cfgmgr -l <devicename> -v
Note: Running cfgmgr without a -l argument may generate a "devices.fcp.changer" error. This is a result of the cfgmgr device scan receiving a response from the auto-changer device, for which AIX does not have a specific driver. Under these conditions the error message does not indicate a problem and is for information only.
5. Within the command, <devicename> is the name from the output of the lsdev command in step 1, such as fcs0.
6. To ensure all tape device files are available, at the prompt type: lsdev -HCc tape
7. By default, AIX creates tape devices with a fixed block length. To change the devices to have variable block lengths, at the prompt type: chdev -l <tapedevice> -a block_size=0
8. Configuration of the tape devices (where tape devices are rmt0, rmt1, and so on) is complete.
Note: HPE LTO tape drives use the IBM host tape driver. When properly configured, a device listing will show the tape device as follows. For IBM native HBAs: Other FC SCSI Tape Drive
For non-IBM native HBAs: Other SCSI Tape Drive
9. To configure Fast I/O Failure for Fibre Channel devices after link events in the SAN, change the fast fail parameter as in the example below: chdev -l fscsiN -a fc_err_recov=fast_fail Within the command, the N in fscsiN is the same number from the name from the output of the lsdev command in step 1 (e.g., fcs0 would be fscsi0).
IBM AIX Server best practices
Persistent binding
To prevent device shifting after a host reboot, you can enable the dynamic tracking feature in earlier versions of AIX. AIX 7 and above should have this parameter enabled by default.
To enable dynamic tracking of FC devices, set this attribute to dyntrk=yes as shown in the example: chdev -l fscsiN -a dyntrk=yes Within the command, the N in fscsiN is the same number from the name from the output of the lsdev command in step 1 (e.g., fcs0 would be fscsi0).
Note: For an IBM Virtual I/O Server (VIOS) running AIX logical partitions (LPARs), when using N-Port ID Virtualization (NPIV) with AIX LPARs it is strongly recommended to upgrade VIOS to version 2.2.3.4 or greater.
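The per-device chdev calls above (variable block length, fast fail, and dynamic tracking) can be applied across all tape and fscsi devices with a loop. This is a sketch under the assumption that the AIX commands behave as described; a busy device may additionally need chdev's -P flag and a reboot.

```shell
#!/bin/sh
# Sketch only: set variable block length on every rmt device, and
# fast_fail plus dynamic tracking on every fscsi protocol device.
BLOCK_ATTR="block_size=0"
RECOV_ATTR="fc_err_recov=fast_fail"

if command -v lsdev >/dev/null 2>&1; then
    for tape in $(lsdev -Cc tape -F name); do
        chdev -l "$tape" -a "$BLOCK_ATTR"
    done
    for proto in $(lsdev -C | awk '/^fscsi/ {print $1}'); do
        chdev -l "$proto" -a "$RECOV_ATTR" -a dyntrk=yes
    done
else
    echo "lsdev not found; run this on the AIX server"
fi
```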
Virtual machine support
VM software is used for partitioning, consolidating, and managing computing resources, allowing multiple unmodified operating systems and their applications to run in VMs that share physical resources. Each VM represents a complete system with processors, memory, networking, storage, and BIOS. See Table 5 for tape and disk support for virtualization products.
Table 5. VM tape/VTL/NAS support (column headings): VM Product, StoreEver Direct Attached SCSI, StoreEver Direct Attached SAS, StoreEver FC & FCoE SAN, StoreOnce VTL, StoreOnce iSCSI VTL, StoreOnce Catalyst over Ethernet (CoE), StoreOnce Catalyst over Fibre Channel (CoFC), StoreOnce NAS, Support Notes
Citrix XenServer Host: No, No. No support statement for tape at this time.
Citrix XenServer Guest VM: No, Yes, Yes, No, Yes. For iSCSI tape devices, the iSCSI Software Initiator must run in the VM operating system.
D2D SAN shares must be accessed directly in the VM operating system, not attached through the hypervisor.
HPEVM Host: Yes, No, Yes, Yes, Yes, No, Yes. Tape drive/media changer must not be attached to a guest VM while being used by the host.
HPEVM Guest VM: Yes, No, Yes, Yes, Yes, No, Yes. Tape drive/media changer must only be attached to a single guest VM at a time.
Hyper-V Host: Yes, Yes, Yes, Yes, Yes, No, Yes.
Hyper-V Guest VM: No, No, No, Yes, Yes, No, Yes. For iSCSI tape/media changer devices, the iSCSI Software Initiator must run in the VM operating system.
D2D NAS shares must be accessed directly in the VM operating system, not attached through the hypervisor.
VMware ESX Host: Yes, No, No (7), No (7), No, No, No. Direct attached SCSI tape drive is multi-target only (no multi-LUN) and must not be attached to a VM while being used by the host. HPE does not test or validate direct attached SCSI tape on VMware ESX/ESXi hosts and does not provide support for tape drives and tape libraries in such configurations (8).
7. SAN tape devices (FC and iSCSI) are not supported directly by the VMware ESX host; vStorage API for Data Protection uses a backup server and VM software snapshots to allow FC and iSCSI backups.
8. For ESX 4.1 Server tape support, see the ESX 4.1 Fibre Channel SAN Configuration Guide. For ESX 5.0 Server tape support, see the ESXi 5.0 vSphere Storage Guide. For ESX 5.1 Server tape support, see the ESXi 5.1 vSphere Storage Guide. For ESX 5.5 Server tape support, see the ESXi 5.5 vSphere Storage Guide.
Table 5. VM tape/VTL/NAS support (continued)
VMware Guest VM: Yes, No, No, Yes, Yes, No (9), Yes. Direct attached SCSI tape drive is multi-target only (no multi-LUN) and must only be attached to a single VM at a time.
For iSCSI tape/media changer devices, the iSCSI Software Initiator must run in the VM operating system.
D2D NAS shares must be accessed directly in the VM operating system, not attached through the hypervisor.
HPE does not test or validate direct attached SCSI tape on VMware guest VMs and does not provide support for tape drives and tape libraries in such configurations.
VMware vStorage API for Data Protection: Yes, Yes, Yes, Yes, Yes, No, Yes. FC SANs and shared tape devices are limited to a physical backup server.
Note: Be sure to do the following:
• Refer to your data protection and archiving software documentation for supported VM backup methods.
• Refer to the VM documentation for supported backup devices.
VMware Server
Note: VMware does not support ESXi SAN attached tape devices. VMware vStorage APIs for Data Protection (VADP) with an off-host backup server can be used to manage SAN devices.
• VADP offloads backup responsibility from ESXi hosts to a dedicated backup server or servers. This reduces the load on ESXi hosts. VADP provides full-image backup and restore capabilities for all VMs, and file-based backups for Microsoft Windows and Linux VMs.
• VMs can also be set up as backup servers with locally mapped NAS shares or iSCSI backup devices. Refer to your data protection and archiving software documentation for VM support.
• VMs can also be set up for LAN backup, the same as a regular client. Refer to your data protection and archiving software documentation for details.
• For recommendations on VMware VM backup and recovery to HPE StoreOnce Backup, go to hpe.com/storage/BURACompatibility, scroll down to Data Agile BURA Solution White Papers, then click on the VMware hyperlink across from White Papers - Databases and Virtual Machines to view the associated white papers.
9. Yes when using HPE StoreOnce Recovery Manager Central (RMC) only.
HPE Integrity Virtual Machines
Hewlett Packard Enterprise supports, certifies, and sells HPE Integrity Virtual Machines (HPEVM) virtualization software on HPE Integrity servers.
HPEVM is an application installed on an HPE-UX Server that allows multiple unmodified operating systems (HPE-UX, Windows, and Linux) and their applications to run in VMs that share physical resources.
The HPE Virtual Server Environment (VSE) for HPE Integrity provides an automated infrastructure that can adapt in seconds with mission-critical reliability. HPE VSE allows you to optimize server utilization in real time by creating virtual servers that can automatically grow and shrink based on business priorities and service levels.
Note: The HPE Integrity VM host and VMs do support FC SAN connected tape, Virtual Library Systems (VLS) devices, and HPE StoreOnce backup systems.
• Off-host backups using HPE storage array hardware mirroring or snapshots can be used to shorten the backup windows and off-load resources required for backup.
• VMs can also be set up for LAN backup, the same as a regular client or media host. Refer to your data protection and archiving software documentation for details.
Microsoft Hyper-V
Note: Hewlett Packard Enterprise does not test or support Hyper-V VMs with SAN or direct attach tape drives. The Hyper-V host or a backup server can be used to manage such devices.
• The Volume Shadow Copy Service (VSS) Hyper-V writer can be used to quiesce Windows VMs and create a snapshot on the Hyper-V host volume. VMs that cannot be quiesced can be placed in the Saved state before snapshot creation. The snapshots are then used for image or file backup of the VMs. If a VM was placed in the Saved state, Hyper-V will return the VM to its original state. Review your data protection and archiving software documentation for details.
• VMs can also be set up for LAN backup, the same as a regular client. Refer to your data protection and archiving software documentation for details.
• VMs can also be set up as backup servers with locally mapped NAS shares or iSCSI backup devices. Refer to your data protection and archiving software documentation for VM support.
© Copyright 2015 Hewlett Packard Enterprise Development LP. The information contained herein is subject to change without notice. The only warranties for Hewlett Packard Enterprise products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. Hewlett Packard Enterprise shall not be liable for technical or editorial errors or omissions contained herein.
Citrix is a registered trademark of Citrix Systems, Inc. and/or one or more of its subsidiaries, and may be registered in the United States Patent and Trademark Office and in other countries. Linux is the registered trademark of Linus Torvalds in the U.S. and other countries. McAfee is a trademark or registered trademark of McAfee, Inc. in the United States and other countries. Microsoft, Windows, and Windows Server are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries. Oracle is a registered trademark of Oracle and/or its affiliates. Red Hat is a registered trademark of Red Hat, Inc. in the United States and other countries. SAP HANA is the trademark or registered trademark of SAP SE in Germany and in several other countries. VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions.
4AA5-7983ENW December 2015
Installing backup software and patches
After all components on the SAN are logged in and configured, the system is ready for the installation of any supported backup software. Refer to the installation guide for your particular software package, or contact the vendor for detailed installation procedures and requirements. After installing the backup software, check with the software vendor for the latest updates and patches. If any updates or patches exist for your backup software, install them now.
Learn more at hpe.com/storage/StoreEver
To connect to an iSCSI target by using advanced settings:
1. Click Start, type iSCSI in Start Search, and then under Programs, click iSCSI Initiator.
2. On the User Account Control page, click Continue.
3. If this is the first time that you have launched Microsoft iSCSI Initiator, you receive a prompt that says the Microsoft iSCSI service is not running. You must start the service for Microsoft iSCSI Initiator to run correctly. Click Yes to start the service. The Microsoft iSCSI Initiator Properties dialog box opens, and the Targets tab is displayed.
4. Click the Discovery tab.
5. To add the target portal, click Discover Portal, and then in the Discover Portal dialog box, type the IP address or name of the target portal to connect to. If desired, you can also type an alternate TCP port to be used for the connection.
6. Click OK.
Installing the HPE StoreEver Tape drivers
Both the HPE tape and HPE changer drivers for Windows must be installed before the advanced path failover drivers are installed. The tape and changer drivers bundle can be downloaded and then installed as follows:
1. Go to hpe.com/storage/tapecompatibility.
2. Under Tape tools, select HPE StoreEver Tape Drivers.
3. A webpage will open with RECOMMENDED HPE StoreEver Tape Drivers for Windows displayed.
4. Below the details for the driver you should notice several tabs. Select the Release Notes tab, then scroll to verify if there are any Upgrade Requirements or to view additional information.
5. Select the Installation Instructions tab to verify how to install the driver. Be sure to copy or save the installation instructions.
6. Click on the Download tab, then save the file.
7. Follow the installation instructions from step 5 to install the HPE tape and HPE changer drivers.
8. After installation of the tape and changer drivers, use Windows Device Manager to confirm that all of the configured paths are accessible to the operating system. If the expected number of paths are not available, check the host and SAN configuration. After all of the expected paths are available to the host, the advanced path failover drivers can be installed.
Installing the HPE StoreEver Tape advanced path failover drivers
Windows (2008 R2, 2012, and 2012 R2)
1. Go to hpe.com/support/storage.
2. Select Tape Storage.
3. Click Enterprise Class Tape Libraries (for ESL G3) or Tape Libraries (for MSL6480).
4. Click HPE StoreEver ESL G3 Tape Libraries or HPE StoreEver MSL6480 Tape Library.
5. With the Download options tab selected, click Get drivers, software & firmware.
6. For the ESL G3, select your product. For MSL6480, skip to the next step.
7. Under Operating systems, select OS Independent.
8. Click Driver - Storage Tape.
9. Click Obtain software for the HPE StoreEver High Availability Failover Driver for your Windows operating system.
10. Click Select to continue. An HPE Passport account is required.
11. After logging in using your HPE Passport, complete the required fields, then read and accept the software license agreement. Click Next.
12. On the following page, select the Download tab for the HPE StoreEver High Availability Failover Driver for the version of Windows that is installed on your server.
13. If you saved the file, double-click on the file to launch the installer for the Tape Upper Bus Storage Filter driver.
14. Restart when requested.
15. After the system restarts, the installer will continue installing the Tape Multi-Path Intermediate Class driver. The installation process creates the directory C:\Program Files\Hewlett-Packard\Failover.
For more detailed information related to installing the HPE StoreEver Tape advanced path failover drivers for Windows or the HPE StoreEver Tape drivers for Windows, refer to the HPE StoreEver Tape Libraries Failover User Guide, specifically the section titled Installing and using Windows advanced path failover drivers.
Windows Server best practices
Persistent binding
Target and LUN shifting can occur with Windows operating systems when disk or tape devices are connected or disconnected, a device is busy during discovery, or a device failure occurs causing that device to no longer be seen on the SAN. Hewlett Packard Enterprise strongly recommends using zoning by HBA port to resolve target and LUN shifting. Refer to the earlier section FC/FCoE switch zoning recommendations for more on zoning by HBA port.
The method by which the Windows operating system enumerates devices is the cause of most target and LUN shifting. Windows enumerates devices as they are discovered during a scan sequence. They are enumerated with device handles such as TAPE0, TAPE1, and so on. The Windows device scan sequence goes in the order of bus, target, and LUN:
• Bus is the HBA PCI slot.
• Target is representative of a WWN.
• LUN is representative of a device behind the WWN.
The order of discovery is:
• The lowest bus (bus 0)
• Target 0 on bus 0
• The LUNs on target 0
• Target 1, and so on, until all targets connected to that HBA are discovered
The process continues on to the next HBA and its targets and LUNs. A common cause for device shifting is a busy tape device. A busy tape device cannot respond in time for Windows to enumerate it. The device is essentially skipped in the enumeration sequence, thus shifting all other device numbers.
Note: Emulex OneCommand Manager Application Kit and QLogic QConvergeConsole Utility both have proprietary persistent binding options. Review the appropriate vendor documentation for details.
Data protection and archiving software can also communicate with a tape device by using the Windows device name. As noted, the device name may shift and cause a problem for the data protection and archiving software. Some data protection and archiving software monitors for this condition and will adjust accordingly. Other data protection and archiving software must wait for a server reboot and subsequently scan for devices. Alternatively, the data protection and archiving software must be manually reconfigured to match the current device list. If your data protection and archiving software requires persistent device mapping, use the software's device configuration wizard to ensure proper configuration.
Red Hat and SUSE Linux Server: RHEL 5 Update 11 (AS/ES/WS), RHEL 6 Update 6 (AS/ES/WS), RHEL 7 (AS/ES/WS), SLES 11 SP3 (x86/x64), SLES 12
Note: Hewlett Packard Enterprise recommends installing the kernel development option (source code) when installing any Linux server. Availability of source code ensures the ability to install additional device support software that will be compiled into the kernel.
Installing the HBA drivers
All HPE ProLiant server software, firmware, and drivers can be updated using the latest SPP from the HPE support website:
1. From an Internet browser, go to hpe.com.
2. Click on Support.
3. Under Product Support, click HPE Servers, Storage and Networking.
4. In the Enter product name or number box, enter Service Pack for ProLiant.
5. Click Get drivers, software & firmware.
6. Select the Linux Server operating system version that is installed on the ProLiant system that will be updated.
7. Under Application (Entitlement Required) - System Management, select the HPE Service Pack for ProLiant (American, International) hyperlink.
8. Below the details for the software you should notice several tabs. Select the Release Notes tab, then scroll to verify if there are any Upgrade Requirements or to view additional information.
9. Select the Installation Instructions tab to verify how to install the SPP. Be sure to copy or save the installation instructions.
Note: A hyperlink to the HPE Service Pack for ProLiant Release Notes is provided within the Release Notes tab. The HPE Service Pack for ProLiant Release Notes provide detailed instructions regarding the SPP, including a summary of changes, compatibility details for migrating from an older version of the SPP, supported operating systems, requirements, component prerequisites, deployment options, and known limitations.
10. Click on the Obtain software hyperlink above the various tabs to download the Service Pack for ProLiant to your server.
Note: To download the HPE Service Pack for ProLiant, you must have:
1. An HPE Passport account (a sign-in link is provided).
2. Either a warranty, HPE Care Pack, or support agreement linked to your HPE Support Center profile.
Click on the various links that are provided for more information on how warranties, HPE Care Packs, and support agreements enable access to select downloads or site functions.
11. Booting your server to the SPP (offline mode) will allow you to upgrade firmware for any or all components that are flagged as requiring an upgrade, while booting to the OS and then running the SPP (online mode) will allow you to install any or all Hewlett Packard Enterprise related drivers and software packages.
Note: Please refer to the HPE Service Pack for ProLiant Release Notes, which are referenced above, if any issues are encountered when installing the SPP. Specifically, review the sections Deployment Instructions and Components Changes.
12. A reboot might be required following the SPP installation.
To manually install the latest HPE-supported Brocade, Emulex, or QLogic driver kit from the HPE support website:
1. From an Internet browser, go to hpe.com.
2. Click on Support.
3. Under Product Support, click HPE Servers, Storage and Networking.
4. In the Enter product name or number box, enter your HBA model (for example, AP770A, AJ763A, AJ764A, CN1000Q, SN1000Q, CN1100E, SN1100E, or NC553m) and click Go.
5. Click Get drivers, software & firmware.
6. Select the Linux Server operating system version that is installed on the ProLiant system in which the HBA is installed.
7. Expand Driver - Storage Fibre Channel, then click on the appropriate driver hyperlink (if more than one version of the driver is listed, verify the latest supported version listed in the latest HPE Data Agile BURA Compatibility Matrix).
8. Below the details for the driver you should notice several tabs. Select the Release Notes tab, then scroll to verify if there are any Upgrade Requirements or to view additional information.
9. Select the Installation Instructions tab to verify how to install the driver. Be sure to copy or save the installation instructions.
10. Click on the Download tab to copy the file to your server.
11. A reboot might be required following the driver installation.
Installing the Linux OPEN-iSCSI module
You can install and use the iscsi-initiator-utils package (Red Hat®) or open-iscsi module (SUSE). Download and install either of the packages using your distribution's package manager (yum or YaST, for example). Detailed instructions for iscsiadm can be found in the iscsiadm man pages.
Prior to discovering available iSCSI target devices on an HPE Storage System for a Linux server, the target requires the Linux server iSCSI initiator name. This name is found in the /etc/iscsi/initiatorname.iscsi file.
Once the iSCSI initiator name has been determined, to discover available iSCSI target devices on an HPE Storage System, type the following command, where x.x.x.x is the IP address of the HPE Storage System:
iscsiadm --mode discovery --type sendtargets --portal x.x.x.x
To connect to the HPE Storage System targets, type the following command for each discovered target, where target_name is the name returned by the iscsiadm discovery:
iscsiadm --mode node -T target_name --login --portal x.x.x.x
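The discovery and per-target login steps can be combined in one loop. This sketch assumes open-iscsi's iscsiadm as shown above; the portal address is a documentation-range placeholder to replace with your HPE Storage System's IP.

```shell
#!/bin/sh
# Sketch only: discover all targets on one portal, then log in to each.
# PORTAL is a placeholder address; substitute your storage system's IP.
PORTAL="192.0.2.10"

if command -v iscsiadm >/dev/null 2>&1; then
    # Discovery output lines look like "ip:port,tpgt iqn...", so the
    # second whitespace-separated field is the target name (IQN).
    iscsiadm --mode discovery --type sendtargets --portal "$PORTAL" |
        awk '{print $2}' |
        while read -r target; do
            iscsiadm --mode node -T "$target" --login --portal "$PORTAL"
        done
else
    echo "iscsiadm not found; install iscsi-initiator-utils or open-iscsi"
fi
```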
Storage HBAs with Linux servers
Hewlett Packard Enterprise supports the Linux in-box driver (the driver supplied with the operating system distribution). However, not all hardware configurations support the in-box driver. To determine if your configuration supports the in-box driver, see the HBAs, CNAs, Flex Fabric Adapters and Server LOMs Support Matrix: Linux, Citrix, VMware and Windows, which is available on the HPE SPOCK website. You must sign up for an HPE Passport to access SPOCK.
Whether you are using the in-box drivers or the out-of-box drivers, Hewlett Packard Enterprise recommends that you install the HPE Fibre Channel Enablement Kit because it provides additional libraries and configuration utilities to enable HPE Fibre Channel Storage Arrays to work properly with Linux.
Note: If you are using any HPE management applications, you need the HBA API libraries that come with the HPE-fc-enablement RPM.
Downloading and installing the Fibre Channel Enablement Kit for Linux
Note: There has been a change to the enablement kits released after 29 April 2014. They are now vendor-specific kits. Hewlett Packard Enterprise recommends that you uninstall any previous kits prior to installation of the latest version of the enablement kit.
1. From an Internet browser, go to hpe.com.
2. Click on Support.
3. Under Product Support, click HPE Servers, Storage and Networking.
4. In the Enter product name or number box, enter your HBA model (for example, AP770A, AJ763A, AJ764A, CN1000Q, CN1100E, or NC553m) and click Go.
5. Click Get drivers, software & firmware.
6. Select the Linux Server operating system version of the ProLiant system in which the HBA is installed.
7. Select the Software - Storage Controllers - FC HBA hyperlink and click on the HPE Fibre Channel Enablement Kit for Linux (American, International).
8. Below the details for the driver you should notice several tabs. Select the Release Notes tab, then scroll to verify if there are any Upgrade Requirements or to view additional information.
9. Select the Installation Instructions tab to verify how to install the HPE Fibre Channel Enablement Kit for Linux. Be sure to copy or save the installation instructions.
10. Click on the Download tab to copy the RPM file to your server.
11. Browse to the directory that you downloaded the RPM to.
12. Follow the Installation Instructions that you copied or saved in step 9.
13. A reboot is required after the installation for the updates to take effect and hardware stability to be maintained.
14. Verify that the host has successfully discovered all the expected devices (tape drives, library robotic devices, and disk-based backup devices) using one of the following methods:
– Review the devices listed from running the command cat /proc/scsi/scsi
– Review the output from the sg_inq command (which requires that sg3_utils is installed) for any of the /dev/sg devices listed in the output of the sg_map command. See figure 3 as an example.
Figure 3 Verifying devices using sg_map and sg_inq commands
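The two verification methods above can be scripted together. This is a sketch assuming sg3_utils is installed; it falls back to the kernel's device list when it is not.

```shell
#!/bin/sh
# Sketch only: print inquiry data for every /dev/sg device reported by
# sg_map; fall back to /proc/scsi/scsi if sg3_utils is not installed.
FALLBACK="/proc/scsi/scsi"
SG_DEVS=""

if command -v sg_map >/dev/null 2>&1; then
    SG_DEVS=$(sg_map | awk '{print $1}')
fi

if [ -n "$SG_DEVS" ]; then
    for sg in $SG_DEVS; do
        echo "== $sg =="
        sg_inq "$sg"
    done
else
    cat "$FALLBACK" 2>/dev/null || echo "no sg_map output and no $FALLBACK"
fi
```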
Installing the HPE StoreEver Tape advanced path failover drivers
Red Hat Enterprise Linux Server 6.2 (x86_64), 6.3 (x86_64), 6.4 (x86_64), 6.5 (x86_64), and 6.6 (x86_64)
The advanced path failover drivers for Linux replace the normal SCSI Tape and SCSI Generic drivers. The advanced path failover drivers for Linux pass all SCSI commands for devices that do not support advanced path failover through the same code path that is followed when the standard drivers are loaded, and route commands for devices that do support failover through the new PFO driver.
1. Go to hpe.com/support/storage.
2. Select Tape Storage.
3. Click Enterprise Class Tape Libraries (for ESL G3) or Tape Libraries (for MSL6480).
4. Click HPE StoreEver ESL G3 Tape Libraries or HPE StoreEver MSL6480 Tape Library.
5. In the Download options tab, click Get drivers, software & firmware.
6. For the ESL G3, select your product. For MSL6480, skip to the next step.
7. Under Operating systems, select OS Independent.
8. Expand Driver - Storage Tape, then select the appropriate driver hyperlink.
9. Click Obtain software for the HPE StoreEver High Availability Failover Driver for your operating system.
10. Click on Select. An HPE Passport account (a sign-in link is provided) is required.
11. After logging in using your HPE Passport, complete the required fields, then read and accept the software license agreement. Click Next.
12. On the following page, select the Download tab for the HPE StoreEver High Availability Failover Driver for the version of Red Hat that is installed on your server.
13. To install the drivers, run the following command:
rpm -ivh <filename>.rpm
14. In some cases the server will need to be rebooted to complete the installation. Check the instructions provided by the RPM file output and reboot the server if requested.
15. The driver revision number indicates the build date of the driver and can be viewed by running:
cat /proc/scsi/sg/version
16. You can view the status of a device that is controlled by the failover driver by reading a file in the /sys file system. For example, to see the path status for /dev/sg3:
cat /sys/class/pfo/pfo3/paths
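To check path status for every device the failover driver controls, the per-device read can be looped. This is a sketch assuming the /sys/class/pfo layout shown above.

```shell
#!/bin/sh
# Sketch only: show the path status file for each pfo-controlled device.
PFO_DIR="/sys/class/pfo"

for paths in "$PFO_DIR"/pfo*/paths; do
    [ -e "$paths" ] || continue   # glob did not match: driver not loaded
    echo "== $paths =="
    cat "$paths"
done
```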
Enabling advanced path failover on a device while the driver is running
If a device has any advanced path failover feature disabled when advanced path failover is enabled, the device will reset itself, removing the old /dev file. When the device comes back up, it will be recognized as an advanced path failover device and will then operate normally as one. It may not have the same /dev file name as before the change.
Disabling advanced path failover on a device while the driver is running
Disabling advanced path failover while a device is running is not recommended because the paths will not be cleanly removed and then reassociated. If advanced path failover is disabled on any device, the Linux server will need to be rebooted. When possible:
1. Power down the Linux server cleanly.
2. Disable advanced path failover on the device.
3. Boot the Linux server.
For more detailed information related to installing the HPE StoreEver Tape advanced path failover drivers for Red Hat Linux Servers, refer to the HPE StoreEver Tape Libraries Failover User Guide, specifically the section titled Installing and using Linux advanced path failover drivers.
Additional SG device files
In most environments, the default number of SG device files is sufficient to support all of the required devices. In larger SAN environments, if the default number of SG device files is fewer than the combined total of disk and tape devices being allocated to the server, then additional device files need to be created. SG device files are preferable to the standard SCSI tape (ST) device files due to SCSI timeout values that may not be sufficient in length to support some tape operations.
To create additional SG device files, perform the following: mknod /dev/sgX c 21 X
X signifies the number of a device file that does not already exist. For additional command options, see the mknod man page.
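Creating a run of additional SG device files can be scripted. This sketch makes /dev/sg16 through /dev/sg31 if they do not already exist; the range is an arbitrary example, and root privileges are required for mknod to succeed.

```shell
#!/bin/sh
# Sketch only: create char device files sg16..sg31 (major 21, minor = X).
X=16
while [ "$X" -le 31 ]; do
    if [ ! -e "/dev/sg$X" ]; then
        mknod "/dev/sg$X" c 21 "$X" || echo "could not create /dev/sg$X (root required)"
    fi
    X=$((X + 1))
done
```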
Red Hat and SUSE Linux Server best practices
Rewind commands being issued by rebooted Linux hosts
Device discovery that occurs as part of a normal Linux server boot operation can cause a SCSI rewind command to be issued to all attached tape drives. If the data protection and archiving software does not employ SCSI reserve/release and the rewind command is received while the tape drive is busy writing, the result is a corrupted tape header and an unusable piece of backup media.
This issue could manifest itself in several ways:
• A failed verify operation
• A failed restore operation
• The inability to mount a tape and read the tape header
If a backup verification is not completed, the normal backup process might not detect that an issue exists.
Tape devices not discovered and configured across server reboots
Tape drives can disappear from Linux servers after the host reboots when using older versions of the HBA drivers. Adding the line "hp_rescan -a" to /etc/rc.d/rc.local resolves the issue. The hp_rescan utility was previously included and installed with older versions of the HPE Linux FCoE/FC Driver Kit.
Note: The latest versions of the Fibre Channel Enablement for Linux kits no longer include the HPE fibreutils package, which contains the hp_rescan utility. The fibreutils package can manually be downloaded and installed by following steps 1-6 from the section Downloading and installing the Fibre Channel Enablement Kit for Linux. For step 5, select Software - Storage Controllers - FC HBA.
This issue, which affects Red Hat installations and, intermittently, some SUSE Linux installations, is understood to be an issue with the mid-layer SCSI driver and its interaction with SCSI-2 tape automation products. The permanent resolution to this issue is to upgrade to the latest FC driver kit.
Enable iSCSI target devices to remain persistent across system reboots To enable the iSCSI target devices to remain persistent across system reboots the open-iscsi service must be configured to run at system startup This can be done by issuing the following command
For Red Hat 7 and SUSE 12 systemctl enable iscsidservice systemctl restart iscsidservice
For earlier versions of Red Hat chkconfig iscsi on
For earlier versions of SUSE chkconfig open-iscsi on
To verify that this configuration change has been accepted, run the following command:
For Red Hat 7 and SUSE 12:
systemctl -a | grep iscsi
iscsi-shutdown.service error  inactive dead    Logout off all iSCSI sessions on shutdown
iscsi.service          loaded inactive dead    Login and scanning of iSCSI devices
iscsid.service         loaded active   running Open-iSCSI
iscsiuio.service       loaded active   running iSCSI UserSpace I/O driver
iscsid.socket          loaded active   running Open-iSCSI iscsid Socket
iscsiuio.socket        loaded active   running Open-iSCSI iscsiuio Socket
For earlier versions of Red Hat:
chkconfig --list iscsi
iscsi 0:off 1:off 2:off 3:on 4:off 5:on 6:off
For SUSE servers:
chkconfig --list open-iscsi
open-iscsi 0:off 1:off 2:off 3:on 4:off 5:on 6:off
LUNs shifting after reboot
The Linux 2.6 kernel and later enhanced the management of attached devices through the introduction of udev. The udev device manager provides a persistent naming scheme for all devices across reboots. For details on how to configure udev, refer to the appropriate Linux distribution documentation.
If your data protection and archiving software requires persistent device mapping, use the software's device configuration wizard to ensure proper configuration.
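As an illustration of what udev's persistent naming provides, the following read-only sketch lists the symlinks udev conventionally maintains under /dev/tape/by-id (the directory only exists when tape devices are attached, so the sketch falls back to a message otherwise):

```shell
# List udev-maintained persistent tape names, if any are present.
dir=/dev/tape/by-id            # conventional udev symlink directory
if [ -d "$dir" ]; then
  out=$(ls -l "$dir")
else
  out="no persistent tape names found under $dir"
fi
echo "$out"
```

These by-id names stay stable across reboots even when the underlying /dev/st* numbering shifts.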
Recommended changes to queue depth and timeout values
Changes to queue depth and timeout values are recommended when operating HPE StoreEver ESL G3 Tape Libraries and HPE StoreEver MSL Tape Libraries with Linux-based hosts. Recommended changes are as follows:
• Queue Depth
The queue depth when operating the MSL6480 or MCB Version 1 ESL G3 libraries should be set to one (1), as there is only one active robot to complete Move Medium commands. With dual-robot MCB Version 2 ESL G3 libraries, the queue depth should be set to two (2), as the library has two active robots that can complete Move Medium commands.
HPE LTO drives are capable of handling command queues of four or five commands, but if hosts continue to send commands past that amount, the drive or the library being hosted by that drive will start to return status messages saying that the queue is full and the host should wait 500 ms. If the host doesn't stop sending commands at this point, the delays in returning command status can be long enough that the drive appears hung. Care should therefore be taken to ensure that the queue depth is set correctly, preferably by using the recommended queue depths provided above.
With Linux-based hosts, the following command shows the queue depth currently set for each generic SCSI device:
find /sys/class/scsi_generic/*/device/queue_depth -exec grep -H . {} \;
If required, the sg_inq command can help correlate SCSI generic mapping to specific devices:
sg_inq /dev/sg<n>
Note: <n> is the sg number provided in the output from the previous command.
• Timeouts
Most applications increase the timeout value for motion commands but continue to rely on the default timeout value for Read Element Status commands. This can especially be of concern in larger partitioned libraries, where multiple Read Element Status commands to the different partitions are outstanding and the default timeout value is reached before a response is provided. Depending on whether udev rules are in effect, the default timeout value on Linux-based hosts tends to be either thirty (30) or sixty (60) seconds. Given all of the above, it is recommended that the default timeout value be changed to twenty minutes on all Linux-based hosts operating HPE LTO drives in HPE StoreEver ESL G3 Tape Libraries and HPE StoreEver MSL Tape Libraries, in order to allow multiple commands to complete successfully without hitting the default timeout value.
With Linux-based hosts, the following command shows the default timeout value currently set:
find /sys/class/scsi_generic/*/device/timeout -exec grep -H . {} \;
If required, the sg_inq command can help correlate SCSI generic mapping to specific devices:
sg_inq /dev/sg<n>
Note: <n> is the sg number provided in the output from the previous command.
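The two recommendations can be summarized in a small sketch. The sysfs writes are shown as comments because they require root privileges and attached tape hardware; the values themselves are the ones recommended above:

```shell
QUEUE_DEPTH=1               # 1 for MSL6480 / MCB v1 ESL G3; 2 for dual-robot MCB v2 ESL G3
TIMEOUT_SECS=$((20 * 60))   # twenty minutes, expressed in seconds
echo "queue_depth=$QUEUE_DEPTH timeout=$TIMEOUT_SECS"
# On a live host, for each tape sg device the values would be applied with:
#   echo "$QUEUE_DEPTH" > /sys/class/scsi_generic/sgN/device/queue_depth
#   echo "$TIMEOUT_SECS" > /sys/class/scsi_generic/sgN/device/timeout
```

Sysfs settings do not survive a reboot, so on a production host they are normally applied from a udev rule or boot script.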
The detailed procedure for making the recommended changes to the command queuing and default timeout values for Linux-based hosts can be viewed in the following Engineering Advisory: HPE StoreEver ESL G3 Tape Libraries, MSL Tape Libraries, and 1/8 G2 Autoloaders - Recommended Changes to Queue Depth and Timeout Values With Linux-Based Hosts.
HPE-UX Server
Installing HBA drivers in the kernel
HPE-UX 11i v2 (11.23 IA-64)
1. The drivers schgr, sctl, and stape must all be installed in the kernel. To see if these drivers are installed, enter the following command:
/usr/sbin/kcmodule schgr sctl stape
The following example shows output from kcmodule where the stape driver is not installed:
Module State Cause
schgr static explicit
sctl static depend
stape unused
If one or more of the above drivers is in the unused state, it must be installed in the kernel. If they are all installed (static state), proceed to the next section, Final host configurations.
2. Use kcmodule to install modules in the kernel. For example, to install the stape module, run the following command:
/usr/sbin/kcmodule stape=static
Enter yes to back up the current kernel configuration file and initiate the new kernel build
3. Reboot the server to activate the new kernel:
cd /
/usr/bin/shutdown -r now
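The state check in step 1 can be scripted. This sketch parses a captured kcmodule listing (the sample output is embedded for illustration; on a live system, substitute the output of /usr/sbin/kcmodule schgr sctl stape):

```shell
# Find drivers reported as "unused" so they can then be installed with
# "kcmodule <name>=static".
kcmodule_output='Module State Cause
schgr static explicit
sctl static depend
stape unused'
unused=$(printf '%s\n' "$kcmodule_output" | awk 'NR > 1 && $2 == "unused" {print $1}')
echo "Modules to install: ${unused:-none}"
```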
HPE-UX 11i v3 (11.31 IA-64)
1. The drivers schgr, sctl, stape, eschgr, esctl, and estape must all be installed in the kernel. To see if these drivers are installed, enter the following command:
/usr/sbin/kcmodule sctl esctl schgr eschgr stape estape
The following example shows output from kcmodule where the stape driver is not installed:
Module State Cause
esctl static best
sctl static depend
schgr static best
eschgr static best
stape unused
estape static best
If one or more of the above drivers is in the unused state, it must be installed in the kernel. If they are all installed (static state), proceed to the next section, Final host configurations.
2. Use kcmodule to install modules in the kernel. For example, to install the stape module, use the following command:
/usr/sbin/kcmodule stape=static
Enter yes to back up the current kernel configuration file and initiate the new kernel build
3. Reboot the server to activate the new kernel:
cd /
/usr/bin/shutdown -r now
Installing the HPE-UX iSCSI Software Initiator
The iSCSI Software Initiator is located at the HPE Software Depot.
1. Go to software.hp.com.
2. Enter iSCSI Software Initiator in the Search Software Depot box located on the upper right side of the website.
3. When the search results show iSCSI Software Initiator, click Select. An HPE Passport account is required (a sign-in link is provided).
4. After logging in using your HPE Passport, complete the required fields, scroll down, then read and accept the software license agreement for the order. Click Next.
5. Under Documentation, click on the Download tab for the Installation Instructions to download instructions for using the Software Distributor tool to install the iSCSI Software Initiator.
6. Under Software, click on the Download tab for the iSCSI Software Initiator version that you would like to download.
7. After installing the iSCSI Software Initiator and rebooting, you can verify that the installation was successful by running the following command:
swlist iSCSI-00
If the HPE-UX iSCSI Software Initiator is installed correctly, the output will be:
HPE-UX 11.23:
# Initializing...
# Contacting target "localhost"...
# Target: localhost
iSCSI-00 B.11.23.03e HPE-UX iSCSI Software Initiator
iSCSI-00.ISCSI-SWD B.11.23.03e HPE-UX iSCSI Software Initiator
HPE-UX 11.31:
# Initializing...
# Contacting target "localhost"...
# Target: localhost
iSCSI-00 B.11.31.01 HPE-UX iSCSI Software Initiator
iSCSI-00.ISCSI-SWD B.11.31.01 HPE-UX iSCSI Software Initiator
Final host configurations
1. Run ioscan to verify that the host detects the tape devices:
ioscan
For HPE-UX 11.23 legacy device special files (DSFs) or persistent DSFs, run the following commands:
ioscan -fnkC tape
ioscan -fnkC autoch
2. For HPE-UX 11.31 persistent DSFs, run the following commands:
ioscan -fnNkC tape
ioscan -fnNkC autoch
Note: Some data protection and archiving software might not currently support HPE-UX 11.31 persistent DSFs for tape. Review your data protection and archiving software documentation for more information.
3. To verify that the host detects iSCSI devices, issue the ioscan command as follows for HPE-UX 11.23:
ioscan -fnC iscsi
Issue the ioscan command as follows for HPE-UX 11.31:
ioscan -fnNC iscsi
If the software is installed correctly, the generated output will look similar to this:
Class I H/W Path Driver S/W State H/W Type Description
=====================================================================
iscsi 0 255/0 iscsi CLAIMED VIRTBUS iSCSI Virtual Node
4. If no device files have been installed, enter the following commands:
insf -C tape -e
insf -C autoch -e
The command line tool for configuring the HPE-UX iSCSI Software Initiator is iscsiutil. Detailed instructions for iscsiutil can be found in the iscsiutil man pages. If using iscsiutil to configure the HPE-UX iSCSI Software Initiator, add the path for iscsiutil and other iSCSI executables to the root path:
PATH=$PATH:/opt/iscsi/bin
You should now be able to find the iSCSI initiator node for the HPE-UX host:
iscsiutil -l
To discover available iSCSI target devices on an HPE Storage System, type the following command, where x.x.x.x is the IP address of the HPE Storage System:
iscsiutil -aI x.x.x.x
Installing the HPE StoreEver Tape advanced path failover drivers for HPE-UX 11.31
Advanced path failover for HPE-UX is implemented by updating HPE-UX drivers to support advanced path failover with LTO-6 tape drives. The drivers function as both failover and non-failover drivers.
The updated drivers are:
• HPE-UX tape driver (estape), used for data path failover
• HPE-UX media changer driver (eschgr), used for control path failover
• HPE-UX SCSI stack driver (esctl), used for data path and control path failover
To download and install the HPE StoreEver Tape advanced path failover drivers for HPE-UX 11.31:
1. Get the latest HPE-UX patches from h20566.www2.hp.com/portal/site/hpsc/patch/home
Note: To access and download HPE-UX patches, you must have:
1. An HPE Passport account (a sign-in link is provided)
2. An active HPE support agreement linked to your HPE Support Center profile. The active Hewlett Packard Enterprise support agreement must:
– Cover the specific HPE Operating Systems linked to your HPE Support Center user profile
– Include software updates or previous version support privileges
Links are provided to view your current patch access privileges or to contact Hewlett Packard Enterprise.
2. To locate the patches, search for estape, eschgr, and esctl (or the patch number), and then check the patch details to see if there is a superseding patch.
3. To install the advanced path failover drivers, use the standard HPE-UX kernel patch installation process to install the following patches on HPE-UX Servers running HPE-UX 11.31:
– HPE-UX tape driver patch (estape): PHKL_43680 or superseding patch
– HPE-UX media changer driver patch (eschgr): PHKL_43681 or superseding patch
– HPE-UX SCSI stack (mass storage stack) driver patch (esctl): PHKL_43819 or superseding patch
4. The server will automatically reboot as part of the installation process.
You can use ioscan to view the tape and library (media changer) devices connected to the HPE-UX Server. The device special file (DSF) is listed as the last item in the description, as shown in bold type:
ioscan -knNfC tape
ioscan -knNfC autoch
Finding the lockdown path
The load-balance policy used to route data on multiple paths to a tape drive or library is called the "path_lockdown" policy. Use the scsimgr get_info command to see the current lockdown path for a library. For example:
scsimgr get_info -D /dev/rchgr/autoch35
STATUS INFORMATION FOR LUN /dev/rchgr/autoch35
...
LUN path used when policy is path_lockdown = 0/0/0/9/0/0/1.0x50014382c6e4f009.0x1000000000000
scsimgr get_attr -D /dev/rtape/tape28_BEST
SCSI ATTRIBUTES FOR LUN /dev/rtape/tape28_BEST
name = lpt_lockdown
current = 0/0/0/9/0/0/0.0x100000e00222a6c1.0x2000000000000
default =
saved =
For additional information, see the HPE-UX man pages scsimgr(1M), ioscan(1M), mknod(2), mksf(1M), and rmsf(1M).
Troubleshooting advanced path failover for HPE-UX 11.31
Advanced path failover errors are logged in the /var/adm/syslog/syslog.log file as part of the default SCSI I/O tracing function of HPE-UX. You can use standard file viewing commands, including cat, vi, dmesg, and tail -f, to view syslog.log.
Enabling or disabling advanced path failover for HPE-UX 11.31
Advanced path failover is disabled by default. When advanced path failover is disabled, the driver operates as if the device is not capable of using the advanced path failover feature.
When advanced failover is enabled for the library or tape drive, the device resets itself and must be opened using the device special file before the driver will recognize it as an advanced path failover device and use the failover features of the driver. Opening the device is generally done by the host applications.
You can enable or disable advanced path failover using the library web-based interface. For instructions, refer to the following sections of the HPE StoreEver Tape Libraries Failover User Guide:
• Enabling control path failover, under Configuring failover for the HPE StoreEver ESL G3 Tape Libraries
• Enabling data path failover, under Configuring failover for HPE StoreEver MSL6480 Tape Libraries
When advanced path failover is disabled, the passive control paths to the library will go into an error state (NO_HW) in the ioscan(1M) command output. These stale entries do not affect the function of the library. To clear these errors so the device can be accessed using its DSF:
1. On the HPE-UX host, run rmsf -H on the lunpath hardware paths that are in the NO_HW state. For example:
rmsf -H 0/4/0/0/0/1.0x50014380023560d4.0x1000000000000
2. Run ioscan -kfNH <HBA path>. For example:
ioscan -kfNH 0/4/0/0/0
Hewlett Packard Enterprise recommends only enabling or disabling advanced path failover when the library is not opened by any applications. If advanced path failover is disabled while an application is accessing the library, all of the library's lunpaths will go offline and I/O requests to the library will fail.
For more detailed information related to installing the HPE StoreEver Tape advanced path failover drivers for HPE-UX 11.31 Servers, refer to the HPE StoreEver Tape Libraries Failover User Guide, specifically the section titled Installing and using HPE-UX advanced path failover drivers.
HPE-UX Server best practices
HPE-UX 11.31 can experience poor I/O performance on VxFS file systems due to memory blocking during high system memory usage. The HPE-UX 11.31 kernel subsystems and file I/O data cache can consume up to 90 percent of system memory during normal operation. When a heavy file I/O application, such as data protection and archiving software, starts, memory usage can reach close to 100 percent. Under such conditions, if VxFS attempts to allocate additional memory for inode caching, the result can be memory blocking and subsequently poor file I/O performance. In extreme conditions, this scenario can cause data protection and archiving software to time out during file system reads, which could result in backup job failures.
Poor I/O performance resolution
To avoid backup job failures due to memory blocking, modify the kernel tunable parameter vx_ninode. The vx_ninode parameter determines the number of inodes in the inode table used by VxFS for caching. By default, the size of the inode cache is decided (auto-tuned) at boot time by VxFS, depending on the amount of physical memory in the machine. When modifying the value of vx_ninode, HPE recommends the following:
Table 4. Tuning vx_ninode
PHYSICAL MEMORY OR KERNEL AVAILABLE MEMORY    VxFS INODE CACHE (NUMBER OF INODES)
1 GB      16384
2 GB      32768
3 GB      65536
> 3 GB    131072
To determine the current value of vx_ninode, run the following at the shell prompt:
/usr/sbin/kctune vx_ninode
To set vx_ninode to 32768, run the following command at the shell prompt:
/usr/sbin/kctune vx_ninode=32768
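Table 4 can be applied mechanically; this sketch picks the recommended vx_ninode value for a given memory size (mem_gb is a placeholder input, not an HPE-UX variable):

```shell
mem_gb=2    # physical or kernel-available memory in GB (placeholder)
if   [ "$mem_gb" -le 1 ]; then vx_ninode=16384
elif [ "$mem_gb" -le 2 ]; then vx_ninode=32768
elif [ "$mem_gb" -le 3 ]; then vx_ninode=65536
else                           vx_ninode=131072
fi
echo "/usr/sbin/kctune vx_ninode=$vx_ninode"   # the command to run on the HPE-UX host
```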
Note: The kernel tunable parameters filecache_min and filecache_max control the amount of physical memory that can be used for caching file data during system I/O operations. By default, these parameters are automatically determined by the system to better balance memory usage among file system I/O intensive processes and other types of processes. The values of these parameters can be lowered to allow a larger percentage of memory to be used for purposes other than file system I/O caching. Whether or not to modify these parameters depends on the nature of the applications running on the system.
HPE-UX 11.23: Disabling rewind-on-close devices with st_san_safe
Turning on the HPE-UX 11.23 kernel tunable parameter st_san_safe disables tape DSFs that are rewind-on-close. This prevents utilities like mt from rewinding a tape that is in use by another utility.
Some applications or utilities require rewind-on-close DSFs (for example, the frecover utility that comes with HPE-UX). In this case, disabling rewind-on-close devices renders the utility unusable. Most data protection and archiving software, such as HPE Data Protector, can be configured to use SCSI reserve/release, which protects them from rogue rewinds by other utilities. The requirements of your data protection and archiving environment should be considered when determining whether or not to enable st_san_safe.
To determine if rewind-on-close devices are currently disabled, enter:
/usr/sbin/kctune st_san_safe
If the value of st_san_safe is 1, then rewind-on-close devices are disabled. If the value is 0, then rewind-on-close devices are enabled. To disable rewind-on-close devices, enter:
/usr/sbin/kctune st_san_safe=1
Oracle Solaris Server
Solaris 10 Update 11 (SPARC), Solaris 10 Update 11 (x86/x64), Solaris 11.2 (SPARC), Solaris 11.2 (x64)
How to enable the iSCSI Software Initiator
1. For Solaris 10 Update 11 (SPARC) and Solaris 11.2 (SPARC), enable the iSCSI services using the command:
svcadm enable network/iscsi/initiator
2. Verify the iSCSI services are running:
svcs -a | grep 'iscsi/initiator'
online 10:10:28 svc:/network/iscsi/initiator:default
3. For earlier versions of Solaris, enable the iSCSI services using the command:
svcadm -v enable iscsi_initiator
svc:/network/iscsi_initiator:default enabled
4. Verify the iSCSI services are running:
svcs -a | grep iscsi_initiator
online 10:10:28 svc:/network/iscsi_initiator:default
The command line tool for configuring the Solaris iSCSI Software Initiator is iscsiadm. Detailed instructions for iscsiadm can be found in the iscsiadm man pages. If using iscsiadm to configure the Solaris iSCSI Software Initiator, run the following command to find the iSCSI initiator node for the Solaris host:
iscsiadm list initiator-node
To discover available iSCSI target devices on an HPE Storage System, type the following commands, where x.x.x.x is the IP address of the HPE Storage System:
iscsiadm add discovery-address x.x.x.x:3260
iscsiadm list discovery-address
Discovery Address: x.x.x.x:3260
iscsiadm modify discovery -t enable
iscsiadm list discovery
List the configured iSCSI target devices using the following command:
iscsiadm list target
Oracle Solaris Server best practices
Troubleshooting with the cfgadm utility
• Getting the status of FC devices using cfgadm:
cfgadm -al
Example output for above command
This output shows a media changer at LUN 0 for the 100000e0022229fa9 WWN and tape and disk devices at LUN 0 for other WWNs The devices are connected have been configured and are ready for use
The cfgadm -al -o show_FCP_dev command can be used to show the devices for all LUNs of each Ap_Id
• Fixing a device with an "unusable" condition
If the condition field of a device in the cfgadm output is "unusable", then the device is in a state such that the server cannot use it. This may have been caused by a hardware issue. In this case, do the following to resolve the issue:
– Resolve the hardware issue so the device is available to the server.
– After the hardware issue has been resolved, use the cfgadm utility to verify device status and to mend the status if necessary.
• Use cfgadm to get device status:
cfgadm -al
– For a device that is "unusable", use cfgadm to unconfigure the device and then re-configure it. For example (this is an example only; your device WWN will be different):
cfgadm -c unconfigure c4::100000e0022286ec
cfgadm -f -c configure c4::100000e0022286ec
– Use cfgadm again to verify that the condition of the device is no longer "unusable":
cfgadm -al
IBM AIX Server
AIX 6.1 (TL9), AIX 7.1 (TL3)
Installing the HBA device driver
Install the IBM (5729/5735/5273/5758/5759/5773/5774) HBA and restart the server.
1. Ensure that the HBA is recognized. At the shell prompt, type:
lsdev -Cc adapter
There is a line in the output similar to the following:
fcs0 Available 1D-08 FC Adapter
If the adapter is not recognized, check that the correct HBA fileset (driver) is installed:
6228: lslpp -L | grep devices.pci.df1000f7
6239: lslpp -L | grep devices.pci.df1080f9
5716: lslpp -L | grep devices.pci.df1000fa
5759: lslpp -L | grep devices.pci.df1000fd
5773: lslpp -L | grep devices.pciex.df1000fe
5774: lslpp -L | grep devices.pciex.df1000fe
There are lines in the output for lslpp similar to the following for a 6239 HBA:
devices.pci.df1080f9.diag 5.1.0.1 C F PCI-X FC Adapter Device
devices.pci.df1080f9.rte 5.1.0.1 C F PCI-X FC Adapter Device
2. For information about the HBA, such as the WWN, execute the following command:
lscfg -vl fcs0
The output will look similar to the following
3. To see the version of microcode (firmware) being run by the HBA, use the following command:
lsmcode -c -d <device>
Microcode and other updates can be found at ibm.com/support/fixcentral.
4. After the HBA has successfully logged into the SAN fabric and the necessary zoning is configured, configure the HBA and devices within the fabric. At the prompt, type:
cfgmgr -l <devicename> -v
Note: Running cfgmgr without a -l argument may generate a "devices.fcp.changer" error. This is a result of the cfgmgr device scan receiving a response from the autochanger device, for which AIX does not have a specific driver. Under these conditions, the error message does not indicate a problem and is for information only.
5. Within the command, <devicename> is the name from the output of the lsdev command in step 1, such as fcs0.
6. To ensure all tape device files are available, at the prompt type:
lsdev -HCc tape
7. By default, AIX creates tape devices with a fixed block length. To change the devices to have variable block lengths, at the prompt type:
chdev -l <tapedevice> -a block_size=0
8. Configuration of the tape devices (where tape devices are rmt0, rmt1, and so on) is complete.
Note: HPE LTO tape drives use the IBM host tape driver. When properly configured, a device listing will show the tape device as follows:
For IBM native HBAs: Other FC SCSI Tape Drive
For non-IBM native HBAs: Other SCSI Tape Drive
9. To configure Fast I/O Failure for Fibre Channel devices after link events in the SAN, change the fast fail parameter as in the example below:
chdev -l fscsi<n> -a fc_err_recov=fast_fail
Within the command, the <n> in fscsi<n> is the same number from the name in the output of the lsdev command in step 1 (e.g., fcs0 would be fscsi0).
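Steps 6 and 7 can be combined into a small loop. Because lsdev and chdev exist only on AIX, this sketch embeds sample lsdev output and prints the chdev commands it would run:

```shell
# Build one "chdev ... block_size=0" command per rmt tape device found.
lsdev_output='rmt0 Available 1D-08-02 Other FC SCSI Tape Drive
rmt1 Available 1D-08-02 Other FC SCSI Tape Drive'
cmds=$(printf '%s\n' "$lsdev_output" | awk '{print "chdev -l " $1 " -a block_size=0"}')
printf '%s\n' "$cmds"
```

On a live AIX host, the sample text would be replaced by `lsdev -Cc tape` output and the printed commands executed.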
IBM AIX Server best practices
Persistent binding
To prevent device shifting after a host reboot, you can enable the dynamic tracking feature in earlier versions of AIX. AIX 7 and above should have this parameter enabled by default.
To enable dynamic tracking of FC devices, set the dyntrk attribute to yes, as shown in the example:
chdev -l fscsi<n> -a dyntrk=yes
Within the command, the <n> in fscsi<n> is the same number from the name in the output of the lsdev command in step 1 (e.g., fcs0 would be fscsi0).
Note: For an IBM Virtual I/O Server (VIOS) running AIX logical partitions (LPARs), when using N-Port ID Virtualization (NPIV) with AIX LPARs, it is strongly recommended to upgrade VIOS to version 2.2.3.4 or greater.
Virtual machine support
VM software is used for partitioning, consolidating, and managing computing resources, allowing multiple unmodified operating systems and their applications to run in VMs that share physical resources. Each VM represents a complete system with processors, memory, networking, storage, and BIOS. See Table 5 for tape and disk support for virtualization products.
Table 5 VM tapeVTLNAS support
VM Product StoreEver Direct Attached SCSI
StoreEver Direct Attached SAS
StoreEver FC & FCoE SAN StoreOnce VTL
StoreOnce iSCSI VTL
StoreOnce Catalyst over Ethernet (CoE)
StoreOnce Catalyst over Fibre Channel (CoFC)
StoreOnce NAS
Support Notes
Citrix XenServer Host No No No support statement for tape at this time
Citrix XenServer Guest VM
No Yes Yes No Yes For iSCSI tape devices the iSCSI Software Initiator must run in the VM operating system
D2D SAN shares must be accessed directly in the VM operating system not attached through the hypervisor
HPEVM Host Yes No Yes Yes Yes No Yes Tape drive/media changer must not be attached to a guest VM while being used by the host
HPEVM Guest VM Yes No Yes Yes Yes No Yes Tape drive/media changer must only be attached to a single guest VM at a time
Hyper-V Host Yes Yes Yes Yes Yes No Yes
Hyper-V Guest VM No No No Yes Yes No Yes For iSCSI tape/media changer devices, the iSCSI Software Initiator must run in the VM operating system
D2D NAS shares must be accessed directly in the VM operating system not attached through the hypervisor
VMware ESX Host Yes No No7 No7 No No No Direct attached SCSI tape drive is multi-target only (no multi-LUN) and must not be attached to a VM while being used by the host. HPE does not test or validate direct attached SCSI tape on VMware ESX/ESXi hosts and does not provide support for tape drives and tape libraries in such configurations.8
7 SAN tape devices (FC and iSCSI) are not supported directly by the VMware ESX host. For vStorage API for Data Protection, use a backup server and VM software snapshots to allow FC and iSCSI backups.
8 For ESX 4.1 Server tape support, see the ESX 4.1 Fibre Channel SAN Configuration Guide. For ESX 5.0 Server tape support, see the ESXi 5.0 vSphere Storage Guide. For ESX 5.1 Server tape support, see the ESXi 5.1 vSphere Storage Guide. For ESX 5.5 Server tape support, see the ESXi 5.5 vSphere Storage Guide.
Table 5. VM tape/VTL/NAS support (continued)
VMware Guest VM Yes No No Yes Yes No9 Yes Direct attached SCSI tape drive is multi-target only (no multi-LUN) and must only be attached to a single VM at a time
For iSCSI tape/media changer devices, the iSCSI Software Initiator must run in the VM operating system.
D2D NAS shares must be accessed directly in the VM operating system not attached through the hypervisor
HPE does not test or validate direct attached SCSI tape on VMware guest VMs and does not provide support for tape drives and tape libraries in such configurations
VMware vStorage API for Data Protection
Yes Yes Yes Yes Yes No Yes FC SANs and shared tape devices are limited to a physical backup server
Note: Be sure to do the following:
• Refer to your data protection and archiving software documentation for supported VM backup methods
• Refer to the VM documentation for supported backup devices
VMware Server
Note: VMware does not support ESXi SAN-attached tape devices. VMware vStorage APIs for Data Protection (VADP) with an off-host backup server can be used to manage SAN devices.
• VADP offloads backup responsibility from ESXi hosts to a dedicated backup server or servers. This reduces the load on ESXi hosts. VADP provides full-image backup and restore capabilities for all VMs, and file-based backups for Microsoft Windows and Linux VMs.
• VMs can also be set up as backup servers with locally mapped NAS shares or iSCSI backup devices. Refer to your data protection and archiving software documentation for VM support.
• VMs can also be set up for LAN backup the same as a regular client. Refer to your data protection and archiving software documentation for details.
• For recommendations on VMware VM backup and recovery to HPE StoreOnce Backup, go to hpe.com/storage/BURACompatibility, scroll down to Data Agile BURA Solution White Papers, then click on the VMware hyperlink across from White Papers - Databases and Virtual Machines to view the associated white papers.
9 Yes when using HPE StoreOnce Recovery Manager Central (RMC) only
HPE Integrity Virtual Machines
Hewlett Packard Enterprise supports, certifies, and sells HPE Integrity Virtual Machines (HPEVM) virtualization software on HPE Integrity servers.
HPEVM is an application installed on an HPE-UX Server that allows multiple unmodified operating systems (HPE-UX, Windows, and Linux) and their applications to run in VMs that share physical resources.
The HPE Virtual Server Environment (VSE) for HPE Integrity provides an automated infrastructure that can adapt in seconds with mission-critical reliability. HPE VSE allows you to optimize server utilization in real time by creating virtual servers that can automatically grow and shrink based on business priorities and service levels.
Note: The HPE Integrity VM host and VMs do support FC SAN connected tape devices, Virtual Library Systems (VLS), and HPE StoreOnce Backup systems.
• Off-host backups using HPE storage array hardware mirroring or snapshots can be used to shorten backup windows and off-load resources required for backup.
• VMs can also be set up for LAN backup the same as a regular client or media host. Refer to your data protection and archiving software documentation for details.
Microsoft Hyper-V
Note: Hewlett Packard Enterprise does not test or support Hyper-V VMs with SAN or direct attach tape drives. The Hyper-V host or a backup server can be used to manage such devices.
• The Volume Shadow Copy Service (VSS) Hyper-V writer can be used to quiesce Windows VMs and create a snapshot on the Hyper-V host volume. VMs that cannot be quiesced can be placed in the Saved state before snapshot creation. The snapshots are then used for image or file backup of the VMs. If a VM was placed in the Saved state, Hyper-V will return the VM to its original state. Review your data protection and archiving software documentation for details.
• VMs can also be set up for LAN backup the same as a regular client. Refer to your data protection and archiving software documentation for details.
• VMs can also be set up as backup servers with locally mapped NAS shares or iSCSI backup devices. Refer to your data protection and archiving software documentation for VM support.
copy Copyright 2015 Hewlett Packard Enterprise Development LP The information contained herein is subject to change without notice The only warranties for Hewlett Packard Enterprise products and services are set forth in the express warranty statements accompanying such products and services Nothing herein should be construed as constituting an additional warranty Hewlett Packard Enterprise shall not be liable for technical or editorial errors or omissions contained herein
Citrix is a registered trademark of Citrix Systems Inc andor one more of its subsidiaries and may be registered in the United States Patent and Trademark Office and in other countries Linux is the registered trademark of Linus Torvalds in the US and other countries McAfee is a trademark or registered trademark of McAfee Inc in the United States and other countries Microsoft Windows and Windows Server are either registered trademarks or trademarks of Microsoft Corporation in the United States andor other countries Oracle is a registered trademark of Oracle andor its affiliates Red Hat is a registered trademark of Red Hat Inc in the United States and other countries SAP HANA is the trademark or registered trademark of SAP SE in Germany and in several other countries VMware is a registered trademark or trademark of VMware Inc in the United States andor other jurisdictions
4AA5-7983ENW December 2015
Installing backup software and patches
After all components on the SAN are logged in and configured, the system is ready for the installation of any supported backup software. Refer to the installation guide for your particular software package, or contact the vendor for detailed installation procedures and requirements. After installing the backup software, check with the software vendor for the latest updates and patches. If any updates or patches exist for your backup software, install them now.
Learn more at hpe.com/storage/StoreEver
12. On the following page, select the Download tab for the HPE StoreEver High Availability Failover Driver for the version of Windows that is installed on your server.
13. If you saved the file, double-click on the file to launch the installer for the Tape Upper Bus Storage Filter driver.
14. Restart when requested.
15. After the system restarts, the installer will continue installing the Tape Multi-Path Intermediate Class driver. The installation process creates the directory C:\Program Files\Hewlett-Packard\Failover.
For more detailed information related to installing the HPE StoreEver Tape advanced path failover drivers for Windows, or the HPE StoreEver Tape drivers for Windows, refer to the HPE StoreEver Tape Libraries Failover User Guide, specifically the section titled "Installing and using Windows advanced path failover drivers."
Windows Server best practices
Persistent binding
Target and LUN shifting can occur with Windows operating systems when disk or tape devices are connected or disconnected, a device is busy during discovery, or a device failure occurs causing that device to no longer be seen on the SAN. Hewlett Packard Enterprise strongly recommends using zoning by HBA port to resolve target and LUN shifting. Refer to the earlier section "FC/FCoE switch zoning recommendations" for more on zoning by HBA port.
The method in which the Windows operating system enumerates devices is the cause of most target and LUN shifting. Windows enumerates devices as they are discovered during a scan sequence. They are enumerated with device handles such as TAPE0, TAPE1, and so on. The Windows device scan sequence goes in the order of bus, target, and LUN:
• Bus is the HBA PCI slot.
• Target is representative of a WWN.
• LUN is representative of a device behind the WWN.
The order of discovery is:
• The lowest bus (bus 0)
• Target 0 on bus 0
• The LUNs on target 0
• Target 1, and so on, until all targets connected to that HBA are discovered
The process continues on to the next HBA and its targets and LUNs. A common cause for device shifting is a busy tape device. A busy tape device cannot respond in time for Windows to enumerate it. The device is essentially skipped in the enumeration sequence, thus shifting all other device numbers.
Note: The Emulex OneCommand Manager Application Kit and the QLogic QConvergeConsole Utility both have proprietary persistent binding options. Review the appropriate vendor documentation for details.
Data protection and archiving software can also communicate with a tape device by using the Windows device name. As noted, the device name may shift and cause a problem for the data protection and archiving software. Some data protection and archiving software monitors for this condition and will adjust accordingly. Other data protection and archiving software must wait for a server reboot and subsequently scan for devices. Alternatively, the data protection and archiving software must be manually reconfigured to match the current device list. If your data protection and archiving software requires persistent device mapping, use the software's device configuration wizard to ensure proper configuration.
Red Hat and SUSE Linux Server
RHEL 5 Update 11 (AS/ES/WS), RHEL 6 Update 6 (AS/ES/WS), RHEL 7 (AS/ES/WS), SLES 11 SP3 (x86/x64), SLES 12
Note: Hewlett Packard Enterprise recommends installing the kernel development option (source code) when installing any Linux server. Availability of source code ensures the ability to install additional device support software that will be compiled into the kernel.
Installing the HBA drivers
All HPE ProLiant server software, firmware, and drivers can be updated using the latest SPP from the HPE support website:
1. From an Internet browser, go to hpe.com.
2. Click on Support.
3. Under Product Support, click HPE Servers, Storage and Networking.
4. In the Enter product name or number box, enter Service Pack for ProLiant.
5. Click Get drivers, software & firmware.
6. Select the Linux Server operating system version that is installed on the ProLiant system that will be updated.
7. Under Application (Entitlement Required)—System Management, select the HPE Service Pack for ProLiant (American, International) hyperlink.
8. Below the details for the software, you should notice several tabs. Select the Release Notes tab, then scroll to verify if there are any Upgrade Requirements or to view additional information.
9. Select the Installation Instructions tab to verify how to install the SPP. Be sure to copy or save the installation instructions.
Note: A hyperlink to the HPE Service Pack for ProLiant Release Notes is provided within the Release Notes tab. The HPE Service Pack for ProLiant Release Notes provide detailed instructions regarding the SPP, including a summary of changes, compatibility details for migrating from an older version of the SPP, supported operating systems, requirements, component prerequisites, deployment options, and known limitations.
10. Click on the Obtain software hyperlink above the various tabs to download the Service Pack for ProLiant to your server.
Note: To download the HPE Service Pack for ProLiant, you must have:
1. An HPE Passport account (a sign-in link is provided)
2. Either a warranty, HPE Care Pack, or support agreement linked to your HPE Support Center profile
Click on the various links that are provided for more information on how warranties, HPE Care Packs, and support agreements enable access to select downloads or site functions.
11. Booting your server to the SPP (offline mode) allows you to upgrade firmware for any or all components that are flagged as requiring an upgrade. Booting to the OS and then running the SPP (online mode) allows you to install any or all Hewlett Packard Enterprise related drivers and software packages.
Note: Please refer to the HPE Service Pack for ProLiant Release Notes, referenced above, if any issues are encountered when installing the SPP. Specifically, review the sections "Deployment Instructions" and "Components Changes."
12. A reboot might be required following the SPP installation.
To manually install the latest HPE-supported Brocade, Emulex, or QLogic driver kit from the HPE support website:
1. From an Internet browser, go to hpe.com.
2. Click on Support.
3. Under Product Support, click HPE Servers, Storage and Networking.
4. In the Enter product name or number box, enter your HBA model (for example, AP770A, AJ763A, AJ764A, CN1000Q, SN1000Q, CN1100E, SN1100E, or NC553m) and click Go.
5. Click Get drivers, software & firmware.
6. Select the Linux Server operating system version that is installed on the ProLiant system in which the HBA is installed.
7. Expand Driver—Storage Fibre Channel, then click on the appropriate driver hyperlink (if more than one version of the driver is listed, verify the latest supported version listed in the latest HPE Data Agile BURA Compatibility Matrix).
8. Below the details for the driver, you should notice several tabs. Select the Release Notes tab, then scroll to verify if there are any Upgrade Requirements or to view additional information.
9. Select the Installation Instructions tab to verify how to install the driver. Be sure to copy or save the installation instructions.
10. Click on the Download tab to copy the file to your server.
11. A reboot might be required following the driver installation.
Installing the Linux OPEN-iSCSI module
You can install and use the iscsi-initiator-utils package (Red Hat) or the open-iscsi module (SUSE). Download and install either of the packages using your distribution's package manager (yum or YaST, for example). Detailed instructions for iscsiadm can be found in the iscsiadm man documents.
Before a Linux server can discover available iSCSI target devices on an HPE Storage System, the target requires the Linux server's iSCSI initiator name. This name is found in the /etc/iscsi/initiatorname.iscsi file.
Once the iSCSI initiator name has been determined, discover available iSCSI target devices on an HPE Storage System by typing the following command, where x.x.x.x is the IP address of the HPE Storage System:
iscsiadm --mode discovery --type sendtargets --portal x.x.x.x
To connect to the HPE Storage System targets, type the following command for each discovered target, where target_name is the name returned by the iscsiadm discovery:
iscsiadm --mode node -T target_name --login --portal x.x.x.x
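As an illustration, the discovery and login steps above can be combined in a small shell sketch. The portal address and the canned discovery line below are placeholders, and the actual iscsiadm calls are left commented out because they require a running iscsid daemon and a reachable target:

```shell
# Placeholder portal address; replace with your HPE Storage System's IP.
PORTAL="10.0.0.50"

# iscsiadm discovery prints lines of the form "ip:port,tpgt iqn...";
# the target name is the second whitespace-separated field.
targets_from_discovery() {
    awk '{print $2}'
}

# Real usage (commented out so the sketch runs without an iscsid daemon):
# iscsiadm --mode discovery --type sendtargets --portal "$PORTAL" |
#     targets_from_discovery |
#     while read -r t; do
#         iscsiadm --mode node -T "$t" --login --portal "$PORTAL"
#     done

# Demonstration with a canned discovery line:
printf '%s\n' "10.0.0.50:3260,1 iqn.1986-03.com.example:storage.lun1" |
    targets_from_discovery
```

Looping over the discovered names this way avoids retyping the login command for each target.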
Storage HBAs with Linux servers
Hewlett Packard Enterprise supports the Linux in-box driver (the driver supplied with the operating system distribution). However, not all hardware configurations support the in-box driver. To determine if your configuration supports the in-box driver, see the HBAs, CNAs, FlexFabric Adapters and Server LOMs Support Matrix (Linux, Citrix, VMware, and Windows), which is available on the HPE SPOCK website. You must sign up for an HPE Passport to access SPOCK.
Whether you are using the in-box drivers or the out-of-box drivers, Hewlett Packard Enterprise recommends that you install the HPE Fibre Channel Enablement Kit because it provides additional libraries and configuration utilities to enable HPE Fibre Channel Storage Arrays to work properly with Linux.
Note: If you are using any HPE management applications, you need the HBA API libraries that come with the HPE-fc-enablement RPM.
Downloading and installing the Fibre Channel Enablement Kit for Linux
Note: There has been a change to the enablement kits released after 29 April 2014. They are now vendor-specific kits. Hewlett Packard Enterprise recommends that you uninstall any previous kits prior to installation of the latest version of the enablement kit.
1. From an Internet browser, go to hpe.com.
2. Click on Support.
3. Under Product Support, click HPE Servers, Storage and Networking.
4. In the Enter product name or number box, enter your HBA model (for example, AP770A, AJ763A, AJ764A, CN1000Q, CN1100E, or NC553m) and click Go.
5. Click Get drivers, software & firmware.
6. Select the Linux Server operating system version of the ProLiant system in which the HBA is installed.
7. Select the Software—Storage Controllers—FC HBA hyperlink and click on the HPE Fibre Channel Enablement Kit for Linux (American, International).
8. Below the details for the driver, you should notice several tabs. Select the Release Notes tab, then scroll to verify if there are any Upgrade Requirements or to view additional information.
9. Select the Installation Instructions tab to verify how to install the HPE Fibre Channel Enablement Kit for Linux. Be sure to copy or save the installation instructions.
10. Click on the Download tab to copy the RPM file to your server.
11. Browse to the directory that you downloaded the RPM to.
12. Follow the Installation Instructions that you copied or saved in step 9.
13. A reboot is required after the installation for the updates to take effect and hardware stability to be maintained.
14. Verify that the host has successfully discovered all the expected devices—tape drives, library robotic devices, and disk-based backup devices—using one of the following methods:
– Review the devices listed from running the command cat /proc/scsi/scsi
– Review the output from the sg_inq command (which requires that sg3_utils is installed) for any of the /dev/sg devices listed in the output of the sg_map command. See figure 3 for an example.
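One way to sanity-check the discovery is to count the device types the SCSI layer reports. The sketch below demonstrates the idea against a canned /proc/scsi/scsi excerpt (the vendor and model strings are illustrative); run the same function against the real /proc/scsi/scsi to count live devices:

```shell
# Count SCSI devices of a given type from /proc/scsi/scsi-style output.
# $1 = type string ("Sequential-Access" for tape drives, "Medium Changer"
#      for library robotics); $2 = input file (default /proc/scsi/scsi).
count_scsi_type() {
    grep -c "Type:[[:space:]]*$1" "${2:-/proc/scsi/scsi}"
}

# Demonstration with an illustrative excerpt:
cat > /tmp/scsi.sample <<'EOF'
Host: scsi0 Channel: 00 Id: 00 Lun: 00
  Vendor: HP       Model: Ultrium 6-SCSI   Rev: 354W
  Type:   Sequential-Access              ANSI  SCSI revision: 06
Host: scsi0 Channel: 00 Id: 00 Lun: 01
  Vendor: HP       Model: MSL6480          Rev: 100a
  Type:   Medium Changer                 ANSI  SCSI revision: 05
EOF
count_scsi_type "Sequential-Access" /tmp/scsi.sample   # tape drives found
count_scsi_type "Medium Changer" /tmp/scsi.sample      # robotics found
```

If the counts do not match the number of drives and robotics zoned to the host, rescan or recheck zoning before installing backup software.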
Figure 3. Verifying devices using the sg_map and sg_inq commands
Installing the HPE StoreEver Tape advanced path failover drivers
Red Hat Enterprise Linux Server 6.2 (x86_64), 6.3 (x86_64), 6.4 (x86_64), 6.5 (x86_64), and 6.6 (x86_64)
The advanced path failover drivers for Linux replace the normal SCSI Tape and SCSI Generic drivers. The advanced path failover drivers for Linux pass all SCSI commands for devices that do not support advanced path failover through the same code path that is followed when the standard drivers are loaded, and route commands for devices that do support failover through the new PFO driver.
1. Go to hpe.com/support/storage.
2. Select Tape Storage.
3. Click Enterprise Class Tape Libraries (for ESL G3) or Tape Libraries (for MSL6480).
4. Click HPE StoreEver ESL G3 Tape Libraries or HPE StoreEver MSL6480 Tape Library.
5. In the Download options tab, click Get drivers, software & firmware.
6. For the ESL G3, select your product. For MSL6480, skip to the next step.
7. Under Operating systems, select OS Independent.
8. Expand Driver—Storage Tape, then select the appropriate driver hyperlink.
9. Click Obtain software for the HPE StoreEver High Availability Failover Driver for your operating system.
10. Click on Select. An HPE Passport account (a sign-in link is provided) is required.
11. After logging in using your HPE Passport, complete the required fields, then read and accept the software license agreement. Click Next.
12. On the following page, select the Download tab for the HPE StoreEver High Availability Failover Driver for the version of Red Hat that is installed on your server.
13. To install the drivers, run the following command:
rpm -ivh <filename>.rpm
14. In some cases, the server will need to be rebooted to complete the installation. Check the instructions provided by the RPM file output and reboot the server if requested.
15. The driver revision number indicates the build date of the driver and can be viewed by running:
cat /proc/scsi/sg/version
16. You can view the status of a device that is controlled by the failover driver by reading a file in the /sys file system. For example, to see the path status for /dev/sg3:
cat /sys/class/pfo/pfo3/paths
Enabling advanced path failover on a device while the driver is running
If a device has any advanced path failover feature disabled when advanced path failover is enabled, the device will reset itself, removing the old /dev file. When the device comes back up, it will be recognized as an advanced path failover device and will then operate normally as one. It may not have the same /dev file name as before the change.
Disabling advanced path failover on a device while the driver is running
Disabling advanced path failover while a device is running is not recommended because the paths will not be cleanly removed and then reassociated. If advanced path failover is disabled on any device, the Linux server will need to be rebooted. When possible:
1. Power down the Linux server cleanly.
2. Disable advanced path failover on the device.
3. Boot the Linux server.
For more detailed information related to installing the HPE StoreEver Tape advanced path failover drivers for Red Hat Linux Servers, refer to the HPE StoreEver Tape Libraries Failover User Guide, specifically the section titled "Installing and using Linux advanced path failover drivers."
Additional SG device files
In most environments, the default number of SG device files is sufficient to support all of the required devices. In larger SAN environments, if the default number of SG device files is fewer than the combined total of disk and tape devices being allocated to the server, then additional device files need to be created. SG device files are preferable to the standard symbolic tape (ST) device files due to SCSI timeout values that may not be sufficient in length to support some tape operations.
To create additional SG device files, perform the following:
mknod /dev/sgX c 21 X
X signifies the number of the device file that does not already exist. For additional command options, see the mknod man page.
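The check-then-create logic can be scripted. The sketch below only prints the mknod commands for nodes missing from a range (the index range and scratch directory are arbitrary examples), so the output can be reviewed before piping it to a root shell:

```shell
# Print the mknod commands needed for any sgX device files missing in a range.
# $1 = first index, $2 = last index, $3 = device directory (default /dev).
missing_sg_nodes() {
    local i dir="${3:-/dev}"
    for i in $(seq "$1" "$2"); do
        # SG devices are character devices with major number 21, minor X.
        [ -e "$dir/sg$i" ] || echo "mknod $dir/sg$i c 21 $i"
    done
}

# Demonstration against an empty scratch directory:
mkdir -p /tmp/devtest
missing_sg_nodes 0 2 /tmp/devtest
# To apply for real (as root): missing_sg_nodes 16 31 | sh
```

Printing first keeps the destructive step explicit and auditable.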
Red Hat and SUSE Linux Server best practices
Rewind commands being issued by rebooted Linux hosts
Device discovery that occurs as part of a normal Linux server boot operation can cause a SCSI rewind command to be issued to all attached tape drives. If the data protection and archiving software does not employ SCSI reserve/release and the rewind command is received while the tape drive is busy writing, the result is a corrupted tape header and an unusable piece of backup media.
This issue could manifest itself in several ways:
• A failed verify operation
• A failed restore operation
• The inability to mount a tape and read the tape header
If a backup verification is not completed, the normal backup process might not detect that an issue exists.
Tape devices not discovered and configured across server reboots
Tape drives can disappear from Linux servers after the host reboots when using older versions of the HBA drivers. Adding the line "hp_rescan -a" to /etc/rc.d/rc.local resolves the issue. The hp_rescan utility was previously included and installed with older versions of the HPE Linux FCoE/FC Driver Kit.
Note: The latest versions of the Fibre Channel Enablement for Linux kits no longer include the HPE fibreutils package, which contains the hp_rescan utility. The fibreutils package can be manually downloaded and installed by following steps 1–6 from the section "Downloading and installing the Fibre Channel Enablement Kit for Linux." For step 5, select Software—Storage Controllers—FC HBA.
This issue, which affects Red Hat installations and, intermittently, some SUSE Linux installations, is understood to be an issue with the mid-layer SCSI driver and its interaction with SCSI-2 tape automation products. The permanent resolution to this issue is to upgrade to the latest FC driver kit.
Enable iSCSI target devices to remain persistent across system reboots
To enable the iSCSI target devices to remain persistent across system reboots, the open-iscsi service must be configured to run at system startup. This can be done by issuing the following commands:
For Red Hat 7 and SUSE 12:
systemctl enable iscsid.service
systemctl restart iscsid.service
For earlier versions of Red Hat:
chkconfig iscsi on
For earlier versions of SUSE:
chkconfig open-iscsi on
To verify that this configuration change has been accepted, run the following command:
For Red Hat 7 and SUSE 12:
systemctl -a | grep iscsi
iscsi-shutdown.service  error  inactive dead    Logout off all iSCSI sessions on shutdown
iscsi.service           loaded inactive dead    Login and scanning of iSCSI devices
iscsid.service          loaded active   running Open-iSCSI
iscsiuio.service        loaded active   running iSCSI UserSpace I/O driver
iscsid.socket           loaded active   running Open-iSCSI iscsid Socket
iscsiuio.socket         loaded active   running Open-iSCSI iscsiuio Socket
For earlier versions of Red Hat:
chkconfig --list iscsi
iscsi 0:off 1:off 2:off 3:on 4:off 5:on 6:off
SUSE servers:
chkconfig --list open-iscsi
open-iscsi 0:off 1:off 2:off 3:on 4:off 5:on 6:off
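The chkconfig listing can also be checked programmatically. The sketch below parses the runlevel listing format shown above; the service line it feeds in is a canned copy of that sample output:

```shell
# Extract the runlevels in which a service is "on" from chkconfig-style
# output ("name 0:off 1:off ... 6:off").
runlevels_on() {
    tr -s ' \t' '\n' | grep ':on$' | cut -d: -f1
}

# Demonstration with the sample output; on a live system, pipe
# "chkconfig --list iscsi" (or open-iscsi) into runlevels_on instead.
printf '%s\n' "iscsi 0:off 1:off 2:off 3:on 4:off 5:on 6:off" | runlevels_on
```

If the multi-user runlevels (typically 3 and 5) are not listed, the service will not start at boot and the targets will not persist.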
LUNs shifting after reboot
The Linux 2.6 kernel and later enhanced the management of attached devices through the introduction of udev. The udev device manager provides users with a persistent naming process for all devices across reboots. For details on how to configure udev, refer to the appropriate Linux distribution documentation.
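As a sketch of the udev approach, a rule file along the following lines can pin a stable alias to a tape device. The file name, serial value, and symlink name here are all hypothetical, and the matchable attributes vary by distribution (confirm them on your system with udevadm info -a -n /dev/nst0):

```
# /etc/udev/rules.d/61-persistent-tape.rules  (hypothetical example)
# Give the no-rewind tape device with a known serial number a stable alias,
# so backup software can open /dev/tape/lib1_drive0 across reboots.
KERNEL=="nst[0-9]*", ATTRS{serial}=="XYZZY12345", SYMLINK+="tape/lib1_drive0"
```

Many distributions already ship rules that populate /dev/tape/by-id, which may be sufficient without a custom rule.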
If your data protection and archiving software requires persistent device mapping, use the software's device configuration wizard to ensure proper configuration.
Recommended changes to queue depth and timeout values
Changes to queue depth and timeout values are recommended when operating HPE StoreEver ESL G3 Tape Libraries and HPE StoreEver MSL Tape Libraries with Linux-based hosts. Recommended changes are as follows:
• Queue depth
The queue depth when operating the MSL6480 or MCB Version 1 ESL G3 libraries should be set to one (1), as there is only one active robot to complete Move Medium commands. With dual-robot MCB Version 2 ESL G3 libraries, the queue depth should be set to two (2), as the library has two active robots that can complete Move Medium commands.
HPE LTO drives are capable of handling command queues of four or five commands, but if hosts continue to send commands past that amount, the drive (or the library being hosted by that drive) will start to return status messages saying that the queue is full and the host should wait 500 ms. If the host doesn't stop sending commands at this point, the delays in returning status for commands can be long enough that the drive appears hung. As such, care should be taken to ensure that the queue depth is the correct length to avoid this scenario, preferably by using the recommended queue depths provided above.
With Linux-based hosts, this command can let you see what the queue depth is set to for each generic SCSI device:
find /sys/class/scsi_generic/*/device/queue_depth -exec grep -H "" {} \;
If required, the sg_inq command can help correlate SCSI generic mapping to specific devices:
sg_inq /dev/sg<N>
Note: <N> is the sg number provided in the output from the previous command.
• Timeouts
Most applications increase the timeout value for motion commands but continue to rely on the default timeout value for Read Element Status commands. This can especially be of concern in larger partitioned libraries, where multiple Read Element Status commands to the different partitions are outstanding and the default timeout value is reached before a response is provided. Depending on whether udev rules are in effect, the default timeout value on Linux-based hosts tends to be either thirty (30) or sixty (60) seconds. Given all of the above, it is recommended that the default timeout value be changed to twenty minutes on all Linux-based hosts operating HPE LTO drives in HPE StoreEver ESL G3 Tape Libraries and HPE StoreEver MSL Tape Libraries, in order to allow multiple commands to complete successfully without hitting the default timeout value.
With Linux-based hosts, this command can let you see what the default timeout value is currently set to:
find /sys/class/scsi_generic/*/device/timeout -exec grep -H "" {} \;
If required, the sg_inq command can help correlate SCSI generic mapping to specific devices:
sg_inq /dev/sg<N>
Note: <N> is the sg number provided in the output from the previous command.
The detailed procedure for making the recommended changes to the command queuing and default timeout values for Linux-based hosts can be viewed in the following Engineering Advisory: HPE StoreEver ESL G3 Tape Libraries, MSL Tape Libraries, and 1/8 G2 Autoloaders—Recommended Changes to Queue Depth and Timeout Values With Linux-Based Hosts.
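The recommended values can also be applied with a short script. The sketch below parameterizes the sysfs root so the logic can be demonstrated against a scratch directory; the mock tree is hypothetical, the values shown (queue depth 1, 20-minute timeout) follow the MSL6480 recommendations above, and the script must be run as root against /sys to affect real devices. The Engineering Advisory remains the authoritative procedure:

```shell
# Apply a queue depth and a timeout (in seconds) to every generic SCSI
# device under the given sysfs root.
apply_tape_tuning() {
    # $1 = sysfs root, $2 = queue depth, $3 = timeout in seconds
    local f
    for f in "$1"/class/scsi_generic/*/device/queue_depth; do
        [ -e "$f" ] && echo "$2" > "$f"
    done
    for f in "$1"/class/scsi_generic/*/device/timeout; do
        [ -e "$f" ] && echo "$3" > "$f"
    done
    return 0
}

# Demonstration against a mock sysfs tree (real use: apply_tape_tuning /sys 1 1200):
mkdir -p /tmp/mocksys/class/scsi_generic/sg0/device
echo 32 > /tmp/mocksys/class/scsi_generic/sg0/device/queue_depth
echo 30 > /tmp/mocksys/class/scsi_generic/sg0/device/timeout
apply_tape_tuning /tmp/mocksys 1 1200
```

Note that sysfs changes do not persist across reboots; a udev rule or boot script is needed to reapply them.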
HP-UX Server
Installing HBA drivers in the kernel
HP-UX 11i v2 (11.23, IA-64)
1. The drivers schgr, sctl, and stape must all be installed in the kernel. To see if these drivers are installed, enter the following command:
/usr/sbin/kcmodule schgr sctl stape
The following example shows output from kcmodule where the stape driver is not installed:
Module   State    Cause
schgr    static   explicit
sctl     static   depend
stape    unused
If one or more of the above drivers is in the unused state, they must be installed in the kernel. If they are all installed (static state), proceed to the next section, "Final host configurations."
2. Use kcmodule to install modules in the kernel. For example, to install the stape module, run the following command:
/usr/sbin/kcmodule stape=static
Enter yes to back up the current kernel configuration file and initiate the new kernel build.
3. Reboot the server to activate the new kernel:
cd /
/usr/bin/shutdown -r now
HP-UX 11i v3 (11.31, IA-64)
1. The drivers schgr, sctl, stape, eschgr, esctl, and estape must all be installed in the kernel. To see if these drivers are installed, enter the following command:
/usr/sbin/kcmodule sctl esctl schgr eschgr stape estape
The following example shows output from kcmodule where the stape driver is not installed:
Module   State    Cause
sctl     static   best
esctl    static   depend
schgr    static   best
eschgr   static   best
stape    unused
estape   static   best
If one or more of the above drivers is in the unused state, they must be installed in the kernel. If they are all installed (static state), proceed to the next section, "Final host configurations."
2. Use kcmodule to install modules in the kernel. For example, to install the stape module, use the following command:
/usr/sbin/kcmodule stape=static
Enter yes to back up the current kernel configuration file and initiate the new kernel build.
3. Reboot the server to activate the new kernel:
cd /
/usr/bin/shutdown -r now
Installing the HP-UX iSCSI Software Initiator
The iSCSI Software Initiator is located at the HPE Software Depot:
1. Go to software.hp.com.
2. Enter iSCSI Software Initiator in the Search Software Depot box located on the upper right side of the website.
3. When the search results show iSCSI Software Initiator, click on Select. An HPE Passport account (a sign-in link is provided) is required.
4. After logging in using your HPE Passport, complete the required fields, scroll down, then read and accept the software license agreement for the order. Click Next.
5. Under Documentation, click on the Download tab for the Installation Instructions to download instructions for using the Software Distributor tool to install the iSCSI Software Initiator.
6. Under Software, click on the Download tab for the iSCSI Software Initiator version that you would like to download.
7. After installing the iSCSI Software Initiator and rebooting, you can verify that the installation was successful by running the following command:
swlist iSCSI-00
If the HP-UX iSCSI Software Initiator is installed correctly, the output will be:
HP-UX 11.23:
# Initializing...
# Contacting target "localhost"...
# Target:  localhost
  iSCSI-00            B.11.23.03e   HP-UX iSCSI Software Initiator
  iSCSI-00.ISCSI-SWD  B.11.23.03e   HP-UX iSCSI Software Initiator
HP-UX 11.31:
# Initializing...
# Contacting target "localhost"...
# Target:  localhost
  iSCSI-00            B.11.31.01    HP-UX iSCSI Software Initiator
  iSCSI-00.ISCSI-SWD  B.11.31.01    HP-UX iSCSI Software Initiator
Final host configurations
1. Run ioscan to verify that the host detects the tape devices:
ioscan
For HP-UX 11.23 legacy device special files (DSFs) or persistent DSFs, run the following commands:
ioscan -fnkC tape
ioscan -fnkC autoch
2. For HP-UX 11.31 persistent DSFs, run the following commands:
ioscan -fnNkC tape
ioscan -fnNkC autoch
Note: Some data protection and archiving software might not currently support HP-UX 11.31 persistent DSFs for tape. Review your data protection and archiving software documentation for more information.
3. To verify that the host detects iSCSI devices, issue the ioscan command as follows for HP-UX 11.23:
ioscan -fnC iscsi
Issue the ioscan command as follows for HP-UX 11.31:
ioscan -fnNC iscsi
If the software is installed correctly, the generated output will look similar to this:
Class  I  HW Path  Driver  SW State  HW Type  Description
=====================================================================
iscsi  0  255/0    iscsi   CLAIMED   VIRTBUS  iSCSI Virtual Node
4. If no device files have been installed, enter the following commands:
insf -C tape -e
insf -C autoch -e
The command-line tool for configuring the HP-UX iSCSI Software Initiator is iscsiutil. Detailed instructions for iscsiutil can be found in the iscsiutil man documents. If using iscsiutil to configure the HP-UX iSCSI Software Initiator, add the path for iscsiutil and other iSCSI executables to the root path:
PATH=$PATH:/opt/iscsi/bin
You should now be able to find the iSCSI initiator node for the HP-UX host:
iscsiutil -l
To discover available iSCSI target devices on an HPE Storage System, type the following command, where x.x.x.x is the IP address of the HPE Storage System:
iscsiutil -aI x.x.x.x
Installing the HPE StoreEver Tape advanced path failover drivers for HP-UX 11.31
Advanced path failover for HP-UX is implemented by updating HP-UX drivers to support advanced path failover with the LTO-6 tape drives. The drivers function as both failover and non-failover drivers.
The updated drivers are:
• HP-UX tape driver (estape)—used for data path failover
• HP-UX media changer driver (eschgr)—used for control path failover
• HP-UX SCSI stack driver (esctl)—used for data path and control path failover
To download and install the HPE StoreEver Tape advanced path failover drivers for HP-UX 11.31:
1. Get the latest HP-UX patches from h20566.www2.hpe.com/portal/site/hpsc/patch/home.
Note: To access and download HP-UX patches, you must have:
1. An HPE Passport account (a sign-in link is provided)
2. An active HPE support agreement linked to your HPE Support Center profile. The active Hewlett Packard Enterprise support agreement must:
– Cover the specific HPE Operating Systems linked to your HPE Support Center user profile
– Include software updates or previous version support privileges
Links are provided to view your current patch access privileges or to contact Hewlett Packard Enterprise.
2. To locate the patches, search for estape, eschgr, and esctl, or the patch number, and then check the patch details to see if there is a superseding patch.
3. To install the advanced path failover drivers, use the standard HP-UX kernel patch installation process to install the following patches on the HP-UX Servers running HP-UX 11.31:
– HP-UX tape driver patch (estape)—PHKL_43680 or superseding patch
– HP-UX media changer driver patch (eschgr)—PHKL_43681 or superseding patch
– HP-UX SCSI stack (mass storage stack) driver patch (esctl)—PHKL_43819 or superseding patch
4. The server will automatically reboot as part of the installation process.
You can use ioscan to view the tape and library (media changer) devices connected to the HP-UX Server. The device special file (DSF) is listed as the last item in the description:
ioscan -knNfC tape
ioscan -knNfC autoch
Finding the lockdown path
The load-balance policy used to route data on multiple paths to a tape drive or library is called the "path_lockdown" policy. Use the scsimgr get_info command to see the current lockdown path for a library. For example:
scsimgr get_info -D /dev/rchgr/autoch35
STATUS INFORMATION FOR LUN /dev/rchgr/autoch38
...
LUN path used when policy is path_lockdown = 0/0/0/9/0/0/1.0x50014382c6e4f009.0x1000000000000
scsimgr get_attr -D /dev/rtape/tape28_BEST
SCSI ATTRIBUTES FOR LUN /dev/rtape/tape28_BEST
name    = lpt_lockdown
current = 0/0/0/9/0/0/0.0x100000e00222a6c1.0x2000000000000
default =
saved   =
For additional information, see the HP-UX man pages scsimgr(1M), ioscan(1M), mknod(2), mksf(1M), and rmsf(1M).
Troubleshooting advanced path failover for HP-UX 11.31
Advanced path failover errors are logged in the /var/adm/syslog/syslog.log file as part of the default SCSI I/O tracing function of HP-UX. You can use standard file viewing commands, including cat, vi, dmesg, and tail -f, to view the syslog.log.
Enabling or disabling advanced path failover for HP-UX 11.31
Advanced path failover is disabled by default. When advanced path failover is disabled, the driver operates as if the device is not capable of using the advanced path failover feature.
When advanced failover is enabled for the library or tape drive, the device resets itself and must be opened using the device special file before the driver will recognize it as an advanced path failover device and use the failover features of the driver. Opening the device is generally done by the host applications.
You can enable or disable advanced path failover using the library web-based interface. For instructions, refer to the following sections of the HPE StoreEver Tape Libraries Failover User Guide:
• "Enabling control path failover" under "Configuring failover for the HPE StoreEver ESL G3 Tape Libraries"
• "Enabling data path failover" under "Configuring failover for HPE StoreEver MSL6480 Tape Libraries"
When advanced path failover is disabled the passive control paths to the library will go into an error state (NO_HW) in the ioscan (1M) command output These stale entries do not affect the function of the library To clear these errors so the device can be accessed using its DSF
1. On the HPE-UX host, run rmsf -H on the lunpath hardware paths that are in NO_HW state. For example: rmsf -H 0/4/0/0/0/1.0x50014380023560d4.0x1000000000000
2. Run ioscan -kfNH <HBA path>. For example: ioscan -kfNH 0/4/0/0/0
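The two-step cleanup above can be expressed as a small dry-run script. It only prints the rmsf and ioscan commands an operator would review and run; the HBA and lunpath values are the illustrative examples from this section, not real devices:

```shell
#!/bin/sh
# Dry-run sketch: emit the HP-UX commands that clear stale NO_HW lunpaths
# for a library, then rescan the HBA. Prints commands instead of executing
# them, so it is safe to review first; pipe the output to sh to apply.
clear_stale_lunpaths() {
    hba_path=$1; shift              # HBA hardware path for the rescan
    for lunpath in "$@"; do         # one rmsf per stale lunpath
        printf 'rmsf -H %s\n' "$lunpath"
    done
    printf 'ioscan -kfNH %s\n' "$hba_path"
}

# Illustrative placeholder paths taken from the example in the text
clear_stale_lunpaths "0/4/0/0/0" \
    "0/4/0/0/0/1.0x50014380023560d4.0x1000000000000"
```

Review the printed commands against the real NO_HW entries in your ioscan output before executing them.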
Hewlett Packard Enterprise recommends enabling or disabling advanced path failover only when the library is not opened by any applications. If advanced path failover is disabled while an application is accessing the library, all of the library's lunpaths will go offline and I/O requests to the library will fail.
For more detailed information related to installing the HPE StoreEver Tape advanced path failover drivers for HPE-UX 11.31 servers, refer to the HPE StoreEver Tape Libraries Failover User Guide, specifically the section titled Installing and using HPE-UX advanced path failover drivers.
HPE-UX Server best practices HPE-UX 11.31 can experience poor I/O performance on VxFS file systems due to memory blocking during high system memory usage. The HPE-UX 11.31 kernel subsystems and file I/O data cache can consume up to 90 percent of system memory during normal operation. When a heavy file I/O application, such as data protection and archiving software, starts, memory usage can reach close to 100 percent. Under such conditions, if VxFS attempts to allocate additional memory for inode caching, the result can be memory blocking and subsequently poor file I/O performance. In extreme conditions, this scenario can cause data protection and archiving software to time out during file system reads, which could result in backup job failures.
Poor I/O performance resolution To avoid backup job failures due to memory blocking, modify the kernel tunable parameter vx_ninode. The vx_ninode parameter determines the number of inodes in the inode table that VxFS uses for caching. By default, the size of the inode cache is decided (auto-tuned) at boot time by VxFS, depending on the amount of physical memory in the machine. When modifying the value of vx_ninode, HPE recommends the following:
Table 4. Tuning vx_ninode
PHYSICAL MEMORY OR KERNEL AVAILABLE MEMORY | VXFS INODE CACHE (NUMBER OF INODES)
1 GB | 16384
2 GB | 32768
3 GB | 65536
> 3 GB | 131072
Technical white paper Page 29
To determine the current value of vx_ninode, run the following at the shell prompt: /usr/sbin/kctune vx_ninode
To set vx_ninode to 32768, run the following command at the shell prompt: /usr/sbin/kctune vx_ninode=32768
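Table 4 can be applied mechanically. The sketch below is a dry run (the memory size is passed in rather than read from the system, so the logic can be shown off-box); it picks the recommended vx_ninode value and prints the kctune command:

```shell
#!/bin/sh
# Sketch: map physical memory (in GB) to the vx_ninode value recommended
# in Table 4, then print the kctune command to apply it. Dry-run only.
vx_ninode_for_mem() {
    mem_gb=$1
    if   [ "$mem_gb" -le 1 ]; then echo 16384
    elif [ "$mem_gb" -le 2 ]; then echo 32768
    elif [ "$mem_gb" -le 3 ]; then echo 65536
    else                           echo 131072
    fi
}

n=$(vx_ninode_for_mem 2)                     # e.g. a 2 GB system
printf '/usr/sbin/kctune vx_ninode=%s\n' "$n"
```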
Note: The kernel tunable parameters filecache_min and filecache_max control the amount of physical memory that can be used for caching file data during system I/O operations. By default, these parameters are automatically determined by the system to better balance memory usage among file system I/O intensive processes and other types of processes. The values of these parameters can be lowered to allow a larger percentage of memory to be used for purposes other than file system I/O caching. Whether to modify these parameters depends on the nature of the applications running on the system.
HPE-UX 11.23: Disabling rewind-on-close devices with st_san_safe Turning on the HPE-UX 11.23 kernel tunable parameter st_san_safe disables tape DSFs that are rewind-on-close. This prevents utilities like mt from rewinding a tape that is in use by another utility.
Some applications or utilities require rewind-on-close DSFs (for example, the frecover utility that comes with HPE-UX). In this case, disabling rewind-on-close devices renders the utility unusable. Most data protection and archiving software, such as HPE Data Protector, can be configured to use SCSI reserve/release, which protects them from rogue rewinds by other utilities. Consider the requirements of your data protection and archiving environment when determining whether to enable st_san_safe.
To determine if rewind-on-close devices are currently disabled, enter: /usr/sbin/kctune st_san_safe
If the value of st_san_safe is 1, rewind-on-close devices are disabled. If the value is 0, rewind-on-close devices are enabled. To disable rewind-on-close devices, enter: /usr/sbin/kctune st_san_safe=1
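The check-then-set flow above can be sketched as follows. This is a dry run only: the current value is passed in as an argument rather than read with kctune, so the decision logic can be exercised off-box:

```shell
#!/bin/sh
# Dry-run sketch: given the current st_san_safe value (0 or 1), print
# either the kctune command that disables rewind-on-close DSFs, or a
# message that nothing needs to change.
st_san_safe_fix() {
    current=$1   # current st_san_safe value, 0 or 1
    if [ "$current" -eq 1 ]; then
        echo "rewind-on-close DSFs already disabled"
    else
        echo "/usr/sbin/kctune st_san_safe=1"
    fi
}

st_san_safe_fix 0
```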
Oracle Solaris Server Solaris 10 Update 11 (SPARC), Solaris 10 Update 11 (x86/x64), Solaris 11.2 (SPARC), Solaris 11.2 (x64)
How to enable the iSCSI Software Initiator 1. For Solaris 10 Update 11 (SPARC) and 11.2 (SPARC), enable the iSCSI services using the command:
svcadm enable networkiscsiinitiator
2 Verify the iSCSI services are running
svcs -a | grep 'iscsi/initiator'
online 10:10:28 svc:/network/iscsi/initiator:default
3 For earlier versions of Solaris enable the iSCSI services using the command
svcadm -v enable iscsi_initiator
svc:/network/iscsi_initiator:default enabled
4 Verify the iSCSI services are running
svcs -a | grep iscsi_initiator
online 10:10:28 svc:/network/iscsi_initiator:default
The command line tool for configuring the Solaris iSCSI Software Initiator is iscsiadm. Detailed instructions for iscsiadm can be found in the iscsiadm man pages. If using iscsiadm to configure the Solaris iSCSI Software Initiator, run the following command to find the iSCSI initiator node for the Solaris host: iscsiadm list initiator-node
To discover available iSCSI target devices on an HPE Storage System, type the following commands, where x.x.x.x is the IP address of the HPE Storage System:
iscsiadm add discovery-address x.x.x.x:3260
iscsiadm list discovery-address
Discovery Address: x.x.x.x:3260
iscsiadm modify discovery -t enable
iscsiadm list discovery
List the configured iSCSI target devices using the following command: iscsiadm list target
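The discovery sequence above can be collected into a dry-run helper that prints the iscsiadm commands for a given array IP (the address used below is a documentation-range placeholder, not a real system):

```shell
#!/bin/sh
# Dry-run sketch: print the Solaris iscsiadm sequence for discovering
# targets on an HPE Storage System at the given IP address.
solaris_iscsi_discovery() {
    ip=$1
    cat <<EOF
iscsiadm add discovery-address ${ip}:3260
iscsiadm list discovery-address
iscsiadm modify discovery -t enable
iscsiadm list discovery
iscsiadm list target
EOF
}

solaris_iscsi_discovery 192.0.2.10   # documentation-range example IP
```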
Oracle Solaris Server best practices Troubleshooting with the cfgadm utility
• Getting the status of FC devices using cfgadm:
cfgadm -al
Example output for above command
This output shows a media changer at LUN 0 for the 100000e0022229fa9 WWN, and tape and disk devices at LUN 0 for other WWNs. The devices are connected, have been configured, and are ready for use.
The cfgadm -al -o show_FCP_dev command can be used to show the devices for all LUNs of each Ap_Id
• Fixing a device with an "unusable" condition
If the condition field of a device in the cfgadm output is "unusable", the device is in a state in which the server cannot use it. This may have been caused by a hardware issue. In this case, do the following to resolve the issue:
– Resolve the hardware issue so the device is available to the server.
– After the hardware issue has been resolved, use the cfgadm utility to verify device status and to mend the status if necessary.
• Use cfgadm to get device status: cfgadm -al
– For a device that is "unusable", use cfgadm to unconfigure and then reconfigure the device. For example (this is an example only; your device WWN will be different):
cfgadm -c unconfigure c4::100000e0022286ec
cfgadm -f -c configure c4::100000e0022286ec
– Use cfgadm again to verify that the condition of the device is no longer "unusable": cfgadm -al
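The unconfigure/configure cycle can be sketched as a dry-run helper. The Ap_Id below is the placeholder from the example above; the real value must be taken from your own cfgadm -al output:

```shell
#!/bin/sh
# Dry-run sketch: print the cfgadm commands used to mend a device whose
# condition is "unusable" (unconfigure, force-configure, then re-check).
mend_unusable() {
    ap_id=$1
    printf 'cfgadm -c unconfigure %s\n' "$ap_id"
    printf 'cfgadm -f -c configure %s\n' "$ap_id"
    printf 'cfgadm -al\n'   # verify the condition is no longer "unusable"
}

mend_unusable "c4::100000e0022286ec"   # placeholder Ap_Id from the text
```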
IBM AIX Server AIX 6.1 (TL9), AIX 7.1 (TL3)
Installing the HBA device driver Install the IBM (5729, 5735, 5273, 5758, 5759, 5773, 5774) HBA and restart the server.
1. Ensure that the HBA is recognized. At the shell prompt, type: lsdev -Cc adapter
There is a line in the output similar to the following fcs0 Available 1D-08 FC Adapter
If the adapter is not recognized, check that the correct HBA fileset (driver) is installed:
6228: lslpp -L | grep devices.pci.df1000f7
6239: lslpp -L | grep devices.pci.df1080f9
5716: lslpp -L | grep devices.pci.df1000fa
5759: lslpp -L | grep devices.pci.df1000fd
5773: lslpp -L | grep devices.pciex.df1000fe
5774: lslpp -L | grep devices.pciex.df1000fe
There are lines in the output for lslpp similar to the following for a 6239 HBA:
devices.pci.df1080f9.diag 5.1.0.1 C F PCI-X FC Adapter Device
devices.pci.df1080f9.rte 5.1.0.1 C F PCI-X FC Adapter Device
2. For information about the HBA, such as the WWN, execute the following command: lscfg -vl fcs0
The output lists the adapter vital product data, including the network address (WWN).
3. To see the version of microcode (firmware) being run by the HBA, use the following command: lsmcode -c -d <device>. Microcode and other updates can be found at ibm.com/support/fixcentral.
4. After the HBA has successfully logged into the SAN fabric and the necessary zoning is configured, configure the HBA and devices within the fabric. At the prompt, type: cfgmgr -l <devicename> -v
Note: Running cfgmgr without a -l argument may generate a "devices.fcp.changer" error. This is a result of the cfgmgr device scan receiving a response from the autochanger device, for which AIX does not have a specific driver. Under these conditions, the error message does not indicate a problem and is for information only.
5. Within the command, <devicename> is the name from the output of the lsdev command in step 1, such as fcs0.
6 To ensure all tape device files are available at the prompt type lsdev -HCc tape
7. By default, AIX creates tape devices with a fixed block length. To change the devices to have variable block lengths, at the prompt type: chdev -l <tapedevice> -a block_size=0
8. Configuration of the tape devices (where tape devices are rmt0, rmt1, and so on) is complete.
Note: HPE LTO tape drives use the IBM host tape driver. When properly configured, a device listing will show the tape device as follows. For IBM native HBAs: Other FC SCSI Tape Drive
For non-IBM native HBAs: Other SCSI Tape Drive
9. To configure Fast I/O Failure for Fibre Channel devices after link events in the SAN, change the fast fail parameter as in the example below: chdev -l fscsiX -a fc_err_recov=fast_fail. Within the command, the X in fscsiX is the same number as in the adapter name from the output of the lsdev command in step 1 (e.g., fcs0 would be fscsi0).
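Steps 4 through 9 can be summarized in a dry-run sketch that prints the commands for one HBA and one tape device. The names fcs0 and rmt0 are illustrative; the fscsiX name is derived from the fcsX name as described in step 9:

```shell
#!/bin/sh
# Dry-run sketch of the AIX tape setup steps: configure the HBA's devices,
# switch the tape device to variable block length, and enable fast I/O
# failure on the matching fscsi instance. Prints commands only.
aix_tape_setup() {
    hba=$1      # e.g. fcs0
    tape=$2     # e.g. rmt0
    fscsi="fscsi${hba#fcs}"          # fcs0 -> fscsi0, per step 9
    printf 'cfgmgr -l %s -v\n' "$hba"
    printf 'chdev -l %s -a block_size=0\n' "$tape"
    printf 'chdev -l %s -a fc_err_recov=fast_fail\n' "$fscsi"
}

aix_tape_setup fcs0 rmt0
```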
IBM AIX Server best practices Persistent binding To prevent device shifting after a host reboot, you can enable the dynamic tracking feature in earlier versions of AIX. AIX 7 and above should have this parameter enabled by default.
To enable dynamic tracking of FC devices, set this attribute to dyntrk=yes, as shown in the example: chdev -l fscsiX -a dyntrk=yes. Within the command, the X in fscsiX is the same number as in the adapter name from the output of the lsdev command in step 1 (e.g., fcs0 would be fscsi0).
Note: For an IBM Virtual I/O Server (VIOS) running AIX logical partitions (LPARs), when using N-Port ID Virtualization (NPIV) with AIX LPARs, it is strongly recommended to upgrade VIOS to version 2.2.3.4 or greater.
Virtual machine support VM software is used for partitioning, consolidating, and managing computing resources, allowing multiple unmodified operating systems and their applications to run in VMs that share physical resources. Each VM represents a complete system with processors, memory, networking, storage, and BIOS. See table 5 for tape and disk support for virtualization products.
Table 5. VM tape/VTL/NAS support
Columns: VM Product | StoreEver Direct Attached SCSI | StoreEver Direct Attached SAS | StoreEver FC & FCoE SAN | StoreOnce VTL | StoreOnce iSCSI VTL | StoreOnce Catalyst over Ethernet (CoE) | StoreOnce Catalyst over Fibre Channel (CoFC) | StoreOnce NAS | Support Notes
Citrix XenServer Host: No, No. No support statement for tape at this time.
Citrix XenServer Guest VM: No, Yes, Yes, No, Yes. For iSCSI tape devices, the iSCSI Software Initiator must run in the VM operating system. D2D NAS shares must be accessed directly in the VM operating system, not attached through the hypervisor.
HPEVM Host: Yes, No, Yes, Yes, Yes, No, Yes. Tape drive/media changer must not be attached to a guest VM while being used by the host.
HPEVM Guest VM: Yes, No, Yes, Yes, Yes, No, Yes. Tape drive/media changer must only be attached to a single guest VM at a time.
Hyper-V Host: Yes, Yes, Yes, Yes, Yes, No, Yes.
Hyper-V Guest VM: No, No, No, Yes, Yes, No, Yes. For iSCSI tape/media changer devices, the iSCSI Software Initiator must run in the VM operating system. D2D NAS shares must be accessed directly in the VM operating system, not attached through the hypervisor.
VMware ESX Host: Yes, No, No (7), No (7), No, No, No. Direct attached SCSI tape drive is multi-target only (no multi-LUN) and must not be attached to a VM while being used by the host. HPE does not test or validate direct attached SCSI tape on VMware ESX/ESXi hosts and does not provide support for tape drives and tape libraries in such configurations (8).
7. SAN tape devices (FC and iSCSI) are not supported directly by a VMware ESX host. Use the vStorage API for Data Protection with a backup server and VM software snapshots to allow FC and iSCSI backups.
8. For ESX 4.1 Server tape support, see the ESX 4.1 Fibre Channel SAN Configuration Guide. For ESX 5.0 Server tape support, see the ESXi 5.0 vSphere Storage Guide. For ESX 5.1 Server tape support, see the ESXi 5.1 vSphere Storage Guide. For ESX 5.5 Server tape support, see the ESXi 5.5 vSphere Storage Guide.
VMware Guest VM: Yes, No, No, Yes, Yes, No (9), Yes. Direct attached SCSI tape drive is multi-target only (no multi-LUN) and must only be attached to a single VM at a time. For iSCSI tape/media changer devices, the iSCSI Software Initiator must run in the VM operating system. D2D NAS shares must be accessed directly in the VM operating system, not attached through the hypervisor. HPE does not test or validate direct attached SCSI tape on VMware guest VMs and does not provide support for tape drives and tape libraries in such configurations.
VMware vStorage API for Data Protection: Yes, Yes, Yes, Yes, Yes, No, Yes. FC SANs and shared tape devices are limited to a physical backup server.
Note Be sure to do the following
bull Refer to your data protection and archiving software documentation for supported VM backup methods
bull Refer to the VM documentation for supported backup devices
VMware Server
Note: VMware does not support ESXi SAN-attached tape devices. VMware vStorage APIs for Data Protection (VADP) with an off-host backup server can be used to manage SAN devices.
• VADP offloads backup responsibility from ESXi hosts to a dedicated backup server or servers, reducing the load on ESXi hosts. VADP provides full-image backup and restore capabilities for all VMs, and file-based backups for Microsoft Windows and Linux VMs.
bull VMs can also be set up as backup servers with locally mapped NAS shares or iSCSI backup devices Refer to your data protection and archiving software documentation for VM support
bull VMs can also be set up for LAN backup the same as a regular client Refer to your data protection and archiving software documentation for details
• For recommendations on VMware VM backup and recovery to HPE StoreOnce Backup, go to hpe.com/storage/BURACompatibility, scroll down to Data Agile BURA Solution White Papers, then click on the VMware hyperlink across from "White Papers: Databases and Virtual Machines" to view the associated white papers.
9. Yes when using HPE StoreOnce Recovery Manager Central (RMC) only.
HPE Integrity Virtual Machines Hewlett Packard Enterprise supports, certifies, and sells HPE Integrity Virtual Machines (HPEVM) virtualization software on HPE Integrity servers.
HPEVM is an application installed on an HPE-UX server that allows multiple unmodified operating systems (HPE-UX, Windows, and Linux) and their applications to run in VMs that share physical resources.
The HPE Virtual Server Environment (VSE) for HPE Integrity provides an automated infrastructure that can adapt in seconds with mission-critical reliability. HPE VSE allows you to optimize server utilization in real time by creating virtual servers that can automatically grow and shrink based on business priorities and service levels.
Note: The HPE Integrity VM host and VMs do support FC SAN connected tape, Virtual Library Systems (VLS) devices, and HPE StoreOnce backup systems.
bull Off-host backups using HPE storage array hardware mirroring or snapshots can be used to shorten the backup windows and off-load resources required for backup
bull VMs can also be set up for LAN backup the same as a regular client or media host Refer to your data protection and archiving software documentation for details
Microsoft Hyper-V
Note: Hewlett Packard Enterprise does not test or support Hyper-V VMs with SAN or direct-attached tape drives. The Hyper-V host or a backup server can be used to manage such devices.
• The Volume Shadow Copy Service (VSS) Hyper-V writer can be used to quiesce Windows VMs and create a snapshot on the Hyper-V host volume. VMs that cannot be quiesced can be placed in the Saved state before snapshot creation. The snapshots are then used for image or file backup of the VMs. If a VM was placed in the Saved state, Hyper-V will return the VM to its original state. Review your data protection and archiving software documentation for details.
• VMs can also be set up for LAN backup, the same as a regular client. Refer to your data protection and archiving software documentation for details.
bull VMs can also be set up as backup servers with locally mapped NAS shares or iSCSI backup devices Refer to your data protection and archiving software documentation for VM support
© Copyright 2015 Hewlett Packard Enterprise Development LP. The information contained herein is subject to change without notice. The only warranties for Hewlett Packard Enterprise products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. Hewlett Packard Enterprise shall not be liable for technical or editorial errors or omissions contained herein.
Citrix is a registered trademark of Citrix Systems, Inc. and/or one or more of its subsidiaries, and may be registered in the United States Patent and Trademark Office and in other countries. Linux is the registered trademark of Linus Torvalds in the U.S. and other countries. McAfee is a trademark or registered trademark of McAfee, Inc. in the United States and other countries. Microsoft, Windows, and Windows Server are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries. Oracle is a registered trademark of Oracle and/or its affiliates. Red Hat is a registered trademark of Red Hat, Inc. in the United States and other countries. SAP HANA is the trademark or registered trademark of SAP SE in Germany and in several other countries. VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions.
4AA5-7983ENW December 2015
Installing backup software and patches After all components on the SAN are logged in and configured, the system is ready for the installation of any supported backup software. Refer to the installation guide for your particular software package, or contact the vendor for detailed installation procedures and requirements. After installing the backup software, check with the software vendor for the latest updates and patches. If any updates or patches exist for your backup software, install them now.
Learn more at hpe.com/storage/StoreEver
Red Hat and SUSE Linux Server RHEL 5 Update 11 (AS/ES/WS), RHEL 6 Update 6 (AS/ES/WS), RHEL 7 (AS/ES/WS), SLES 11 SP3 (x86/x64), SLES 12
Note: Hewlett Packard Enterprise recommends installing the kernel development option (source code) when installing any Linux server. Availability of source code ensures the ability to install additional device support software that will be compiled into the kernel.
Installing the HBA drivers All HPE ProLiant server software firmware and drivers can be updated using the latest SPP from the HPE support website
1. From an Internet browser, go to hpe.com
2 Click on Support
3 Under Product Support click HPE Servers Storage and Networking
4 In the Enter product name or number box enter Service Pack for ProLiant
5 Click Get drivers software amp firmware
6 Select the Linux Server operating system version that is installed on the ProLiant system that will be updated
7. Under Application (Entitlement Required) - System Management, select the HPE Service Pack for ProLiant (American, International) hyperlink
8. Below the details for the software, you should notice several tabs. Select the Release Notes tab, then scroll to verify whether there are any Upgrade Requirements or to view additional information.
9. Select the Installation Instructions tab to verify how to install the SPP. Be sure to copy or save the installation instructions.
Note A hyperlink to the HPE Service Pack for ProLiant Release Notes is provided within the Release Notes tab The HPE Service Pack for ProLiant Release Notes provide detailed instructions regarding the SPP including a summary of changes compatibility details for migrating from an older version of the SPP supported operating systems requirements component prerequisites deployment options and known limitations
10 Click on the Obtain software hyperlink above the various tabs to download the Service Pack for ProLiant to your server
Note To download the HPE Service Pack for ProLiant you must have
1 An HPE Passport account (a sign-in link is provided)
2 Either a warranty HPE Care Pack or support agreement linked to your HPE Support Center profile
Click on the various links that are provided for more information on how warranties HPE Care Packs and support enable access to select downloads or site functions
11. Booting your server to the SPP (offline mode) will allow you to upgrade firmware for any or all components that are flagged as requiring an upgrade. Booting to the OS and then running the SPP (online mode) will allow you to install any or all Hewlett Packard Enterprise related drivers and software packages.
Note Please refer to the HPE Service Pack for ProLiant Release Notes which are referenced above if any issues are encountered when installing the SPP Specifically review the sections Deployment Instructions and Components Changes
12 A reboot might be required following the SPP installation
To manually install the latest HPE-supported Brocade, Emulex, or QLogic driver kit from the HPE support website:
1. From an Internet browser, go to hpe.com
2 Click on Support
3 Under Product Support click HPE Servers Storage and Networking
4 In the Enter product name or number box enter your HBA model (for example AP770A AJ763A AJ764A CN1000Q SN1000Q CN1100E SN1100E or NC553m) and click Go
5 Click Get drivers software amp firmware
6 Select the Linux Server operating system version that is installed on the ProLiant system in which the HBA is installed
7. Expand Driver - Storage Fibre Channel, then click on the appropriate driver hyperlink (if more than one version of the driver is listed, verify the latest supported version listed in the latest HPE Data Agile BURA Compatibility Matrix)
8. Below the details for the driver, you should notice several tabs. Select the Release Notes tab, then scroll to verify whether there are any Upgrade Requirements or to view additional information.
9. Select the Installation Instructions tab to verify how to install the driver. Be sure to copy or save the installation instructions.
10 Click on the Download tab to copy the file to your server
11 A reboot might be required following the driver installation
Installing the Linux OPEN-iSCSI module You can install and use the iscsi-initiator-utils package (Red Hat) or the open-iscsi module (SUSE). Download and install either of the packages using your distribution's package manager (yum or YaST, for example). Detailed instructions for iscsiadm can be found in the iscsiadm man pages.
Prior to discovering available iSCSI target devices on an HPE Storage System, the target requires the Linux server's iSCSI initiator name. This name is found in the /etc/iscsi/initiatorname.iscsi file.
Once the iSCSI initiator name has been determined, to discover available iSCSI target devices on an HPE Storage System, type the following command, where x.x.x.x is the IP address of the HPE Storage System:
iscsiadm --mode discovery --type sendtargets --portal x.x.x.x
To connect to the HPE Storage System targets, type the following command for each discovered target, where target_name is the name returned by the iscsiadm discovery:
iscsiadm --mode node -T target_name --login --portal x.x.x.x
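The discovery-then-login flow above can be sketched as a dry-run helper. The portal IP and IQN below are placeholders; real target names come from the discovery output:

```shell
#!/bin/sh
# Dry-run sketch: print the open-iscsi sequence for discovering targets on
# an HPE Storage System and logging in to each one. Prints commands only.
linux_iscsi_connect() {
    portal=$1; shift   # remaining args: target IQNs from discovery output
    echo "cat /etc/iscsi/initiatorname.iscsi"
    echo "iscsiadm --mode discovery --type sendtargets --portal $portal"
    for target in "$@"; do
        echo "iscsiadm --mode node -T $target --login --portal $portal"
    done
}

# Documentation-range IP and an illustrative IQN
linux_iscsi_connect 192.0.2.10 iqn.1986-03.com.example:target1
```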
Storage HBAs with Linux servers Hewlett Packard Enterprise supports the Linux in-box driver (the driver supplied with the operating system distribution). However, not all hardware configurations support the in-box driver. To determine if your configuration supports the in-box driver, see the HBAs, CNAs, FlexFabric Adapters and Server LOMs Support Matrix (Linux, Citrix, VMware, and Windows), which is available on the HPE SPOCK website. You must sign up for an HPE Passport to access SPOCK.
Whether you are using the in-box drivers or the out-of-box drivers Hewlett Packard Enterprise recommends that you install the HPE Fibre Channel Enablement Kit because it provides additional libraries and configuration utilities to enable HPE Fibre Channel Storage Arrays to work properly with Linux
Note If you are using any HPE management applications you need the HBA API libraries that come with the HPE-fc-enablement RPM
Downloading and installing the Fibre Channel Enablement Kit for Linux
Note: There has been a change to the enablement kits released after 29 April 2014; they are now vendor-specific kits. Hewlett Packard Enterprise recommends that you uninstall any previous kits prior to installing the latest version of the enablement kit.
1. From an Internet browser, go to hpe.com
2 Click on Support
3 Under Product Support click HPE Servers Storage and Networking
4 In the Enter product name or number box enter your HBA model (for example AP770A AJ763A AJ764A CN1000Q CN1100E or NC553m) and click Go
5 Click Get drivers software amp firmware
6 Select the Linux Server operating system version of the ProLiant system in which the HBA is installed
7. Select the Software - Storage Controllers - FC HBA hyperlink and click on the HPE Fibre Channel Enablement Kit for Linux (American, International)
8. Below the details for the driver, you should notice several tabs. Select the Release Notes tab, then scroll to verify whether there are any Upgrade Requirements or to view additional information.
9. Select the Installation Instructions tab to verify how to install the HPE Fibre Channel Enablement Kit for Linux. Be sure to copy or save the installation instructions.
10 Click on the Download tab to copy the RPM file to your server
11 Browse to the directory that you downloaded the RPM to
12. Follow the Installation Instructions that you copied or saved in step 9
13. A reboot is required after the installation for the updates to take effect and for hardware stability to be maintained
14. Verify that the host has successfully discovered all the expected devices (tape drives, library robotic devices, and disk-based backup devices) using one of the following methods:
– Review the devices listed from running the command cat /proc/scsi/scsi
– Review the output from the sg_inq command (which requires that sg3_utils is installed) for any of the /dev/sg devices listed in the output of the sg_map command. See figure 3 for an example.
Figure 3 Verifying devices using sg_map and sg_inq commands
Installing the HPE StoreEver Tape advanced path failover drivers Red Hat Enterprise Linux Server 6.2 (x86_64), 6.3 (x86_64), 6.4 (x86_64), 6.5 (x86_64), and 6.6 (x86_64) The advanced path failover drivers for Linux replace the normal SCSI Tape and SCSI Generic drivers. For devices that do not support advanced path failover, the drivers pass all SCSI commands through the same code path that is followed when the standard drivers are loaded; commands for devices that do support failover are routed through the new PFO driver.
1 Go to hpecomsupportstorage
2 Select Tape Storage
3 Click Enterprise Class Tape Libraries (for ESL G3) or Tape Libraries (for MSL6480)
4 Click HPE StoreEver ESL G3 Tape Libraries or HPE StoreEver MSL6480 Tape Library
5 In the Download options tab click Get drivers software amp firmware
6 For the ESL G3 select your product For MSL6480 skip to the next step
7 Under Operating systems select OS Independent
8. Expand Driver - Storage Tape, then select the appropriate driver hyperlink
9 Click Obtain software for the HPE StoreEver High Availability Failover Driver for your operating system
10. Click on Select. An HPE Passport account (a sign-in link is provided) is required.
11 After logging in using your HPE Passport complete the required fields then read and accept the software license agreement Click Next
12 On the following page select the Download tab for the HPE StoreEver High Availability Failover Driver for the version of Red Hat that is installed on your server
13. To install the drivers, run the following command: rpm -ivh <filename>.rpm
14 In some cases the server will need to be rebooted to complete the installation Check the instructions provided by the RPM file output and reboot the server if requested
15. The driver revision number indicates the build date of the driver and can be viewed by running: cat /proc/scsi/sg/version
16. You can view the status of a device that is controlled by the failover driver by reading a file in the /sys file system. For example, to see the path status for /dev/sg3: cat /sys/class/pfo/pfo3/paths
Enabling advanced path failover on a device while the driver is running If a device has the advanced path failover feature disabled when advanced path failover is enabled, the device will reset itself, removing the old /dev file. When the device comes back up, it will be recognized as an advanced path failover device and will then operate normally as one. It may not have the same /dev file name as before the change.
Disabling advanced path failover on a device while the driver is running Disabling advanced path failover while a device is running is not recommended because the paths will not be cleanly removed and then reassociated. If advanced path failover is disabled on any device, the Linux server will need to be rebooted. When possible:
1 Power down the Linux server cleanly
2 Disable advanced path failover on the device
3 Boot the Linux server
For more detailed information related to installing the HPE StoreEver Tape advanced path failover drivers for Red Hat Linux servers, refer to the HPE StoreEver Tape Libraries Failover User Guide, specifically the section titled Installing and using Linux advanced path failover drivers.
Additional SG device files In most environments, the default number of SG device files is sufficient to support all of the required devices. In larger SAN environments, if the default number of SG device files is fewer than the combined total of disk and tape devices being allocated to the server, additional device files need to be created. SG device files are preferable to the standard SCSI tape (st) device files because st SCSI timeout values may not be long enough to support some tape operations.
To create additional SG device files, perform the following: mknod /dev/sgX c 21 X
X signifies the number of the device file that does not already exist. For additional command options, see the mknod man page.
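A dry-run sketch that prints the mknod commands for a range of SG minor numbers (the range used below is illustrative; choose numbers beyond the device files that already exist on your server):

```shell
#!/bin/sh
# Dry-run sketch: print mknod commands for SG device files in a given
# range. SG devices are character devices with major number 21, and the
# minor number matches the sgX suffix. Prints commands only.
make_sg_nodes() {
    first=$1 last=$2
    i=$first
    while [ "$i" -le "$last" ]; do
        printf 'mknod /dev/sg%d c 21 %d\n' "$i" "$i"
        i=$((i + 1))
    done
}

make_sg_nodes 16 19   # e.g. add /dev/sg16 through /dev/sg19
```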
Red Hat and SUSE Linux Server best practices Rewind commands issued by rebooted Linux hosts Device discovery that occurs as part of a normal Linux server boot can cause a SCSI rewind command to be issued to all attached tape drives. If the data protection and archiving software does not employ SCSI reserve/release and the rewind command is received while a tape drive is busy writing, the result is a corrupted tape header and an unusable piece of backup media.
This issue could manifest itself in several ways
bull A failed verify operation
bull A failed restore operation
bull The inability to mount a tape and read the tape header
If a backup verification is not completed the normal backup process might not detect that an issue exists
Technical white paper Page 22
Tape devices not discovered and configured across server reboots Tape drives can disappear from Linux servers after the host reboots when using older versions of the HBA drivers. Adding the line "hp_rescan -a" to /etc/rc.d/rc.local resolves the issue. The hp_rescan utility was previously included and installed with older versions of the HPE Linux FCoE/FC Driver Kit.
Note: The latest versions of the Fibre Channel Enablement for Linux kits no longer include the HPE fibreutils package, which contains the hp_rescan utility. The fibreutils package can be manually downloaded and installed by following steps 1-6 from the section Downloading and installing the Fibre Channel Enablement Kit for Linux. For step 5, select Software - Storage Controllers - FC HBA.
This issue, which affects Red Hat installations and intermittently some SUSE Linux installations, is understood to be an issue with the mid-layer SCSI driver and its interaction with SCSI-2 tape automation products. The permanent resolution is to upgrade to the latest FC driver kit.
Enable iSCSI target devices to remain persistent across system reboots
To enable iSCSI target devices to remain persistent across system reboots, the open-iscsi service must be configured to run at system startup. This can be done by issuing the following commands:
For Red Hat 7 and SUSE 12:
systemctl enable iscsid.service
systemctl restart iscsid.service
For earlier versions of Red Hat:
chkconfig iscsi on
For earlier versions of SUSE:
chkconfig open-iscsi on
To verify that this configuration change has been accepted, run the following command.
For Red Hat 7 and SUSE 12:
systemctl -a | grep iscsi
iscsi-shutdown.service  error  inactive dead     Logout off all iSCSI sessions on shutdown
iscsi.service           loaded inactive dead     Login and scanning of iSCSI devices
iscsid.service          loaded active   running  Open-iSCSI
iscsiuio.service        loaded active   running  iSCSI UserSpace I/O driver
iscsid.socket           loaded active   running  Open-iSCSI iscsid Socket
iscsiuio.socket         loaded active   running  Open-iSCSI iscsiuio Socket
For earlier versions of Red Hat:
chkconfig --list iscsi
iscsi  0:off 1:off 2:off 3:on 4:off 5:on 6:off
For SUSE servers:
chkconfig --list open-iscsi
open-iscsi  0:off 1:off 2:off 3:on 4:off 5:on 6:off
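When scripting this check across many hosts, the chkconfig output above can be verified programmatically. The helper below is a hedged sketch that assumes the "N:on/N:off" runlevel format shown; the sample line is illustrative:

```shell
# Succeed (exit 0) only if the chkconfig --list line on stdin shows the
# service enabled in both runlevels 3 and 5.
enabled_in_35() { grep -qE '3:on.*5:on'; }

printf 'iscsi 0:off 1:off 2:off 3:on 4:off 5:on 6:off\n' | enabled_in_35 \
  && echo "iscsi enabled in runlevels 3 and 5"
```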
LUNs shifting after reboot
The Linux 2.6 kernel and later enhanced the management of attached devices through the introduction of udev. The udev device manager provides users with a persistent naming process for all devices across reboots. For details on how to configure udev, refer to the appropriate Linux distribution documentation.
If your data protection and archiving software requires persistent device mapping, use the software's device configuration wizard to ensure proper configuration.
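As one illustration of the udev approach, a rule can pin a stable symlink to a tape drive. The fragment below is a sketch only: the serial number and symlink name are hypothetical, and the exact matching attribute should be confirmed with udevadm info on your system:

```
# /etc/udev/rules.d/61-persistent-tape.rules (illustrative sketch)
# Create /dev/tape/lto_drive1 pointing at the no-rewind st device whose
# serial matches. Serial "HUE1234567" and the symlink name are hypothetical.
KERNEL=="nst*", ENV{ID_SERIAL}=="HUE1234567", SYMLINK+="tape/lto_drive1"
```

Backup software can then be pointed at the symlink, which survives device-order changes across reboots.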
Recommended changes to queue depth and timeout values
Changes to queue depth and timeout values are recommended when operating HPE StoreEver ESL G3 Tape Libraries and HPE StoreEver MSL Tape Libraries with Linux-based hosts. Recommended changes are as follows:
• Queue depth
The queue depth when operating the MSL6480 or MCB Version 1 ESL G3 libraries should be set to one (1), as there is only one active robot to complete Move Medium commands. With dual-robot MCB Version 2 ESL G3 libraries, the queue depth should be set to two (2), as the library has two active robots that can complete Move Medium commands.
HPE LTO drives can handle command queues of four or five commands, but if hosts continue to send commands past that amount, the drive (or the library hosted by that drive) will start to return status messages saying that the queue is full and the host should wait 500 ms. If the host doesn't stop sending commands at this point, the delays in returning command status can become long enough that the drive appears hung. Take care to ensure that the queue depth is set correctly to avoid this scenario, preferably by using the recommended queue depths provided above.
With Linux-based hosts, this command shows the queue depth setting for each generic SCSI device:
find /sys/class/scsi_generic/*/device/queue_depth -exec grep -H . {} \;
If required, the sg_inq command can help correlate SCSI generic mapping to specific devices:
sg_inq /dev/sgX
Note: X is the sg number shown in the output from the previous command.
• Timeouts
Most applications increase the timeout value for motion commands but continue to rely on the default timeout value for Read Element Status commands. This is especially a concern in larger partitioned libraries, where multiple Read Element Status commands to the different partitions may be outstanding and the default timeout is reached before a response is provided. Depending on whether udev rules are in effect, the default timeout value on Linux-based hosts tends to be either thirty (30) or sixty (60) seconds. Given all of the above, it is recommended that the default timeout value be changed to twenty minutes on all Linux-based hosts operating HPE LTO drives in HPE StoreEver ESL G3 Tape Libraries and HPE StoreEver MSL Tape Libraries, in order to allow multiple commands to complete successfully without hitting the default timeout value.
With Linux-based hosts, this command shows the current default timeout value:
find /sys/class/scsi_generic/*/device/timeout -exec grep -H . {} \;
If required, the sg_inq command can help correlate SCSI generic mapping to specific devices:
sg_inq /dev/sgX
Note: X is the sg number shown in the output from the previous command.
The detailed procedure for making the recommended changes to the command queuing and default timeout values for Linux-based hosts can be viewed in the following Engineering Advisory: HPE StoreEver ESL G3 Tape Libraries, MSL Tape Libraries, and 1/8 G2 Autoloaders - Recommended Changes to Queue Depth and Timeout Values With Linux-Based Hosts.
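One common way to make such settings survive reboots is a udev rule; the fragment below is an assumption-laden sketch, not the procedure from the Engineering Advisory. It relies on SCSI peripheral type 1 identifying sequential-access (tape) devices and type 8 identifying medium changers, and on the device exposing writable timeout and queue_depth attributes:

```
# /etc/udev/rules.d/60-hpe-tape-timeout.rules (illustrative sketch)
# Raise the default command timeout to 20 minutes (1200 s) for tape devices.
ACTION=="add", SUBSYSTEM=="scsi", ATTR{type}=="1", ATTR{timeout}="1200"
# Set queue depth 1 for medium changers (single-robot libraries).
ACTION=="add", SUBSYSTEM=="scsi", ATTR{type}=="8", ATTR{queue_depth}="1"
```

Verify the applied values afterward with the find commands shown above.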
HPE-UX Server
Installing HBA drivers in the kernel
HPE-UX 11i v2 (11.23, IA-64)
1. The drivers schgr, sctl, and stape must all be installed in the kernel. To see if these drivers are installed, enter the following command:
/usr/sbin/kcmodule schgr sctl stape
The following example shows output from kcmodule where the stape driver is not installed:
Module  State   Cause
schgr   static  explicit
sctl    static  depend
stape   unused
If one or more of the above drivers is in the unused state, they must be installed in the kernel. If they are all installed (static state), proceed to the next section, Final host configurations.
2. Use kcmodule to install modules in the kernel. For example, to install the stape module, run the following command:
/usr/sbin/kcmodule stape=static
Enter yes to back up the current kernel configuration file and initiate the new kernel build.
3. Reboot the server to activate the new kernel:
cd /
/usr/bin/shutdown -r now
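For hosts with many drivers to check, the kcmodule output above can be filtered to list only the modules still in the unused state. This awk one-liner is a sketch that assumes the three-column Module/State/Cause layout shown; the here-document reproduces the sample output:

```shell
# List kernel modules reported as "unused" by kcmodule (these still need
# to be set to static).
list_unused() { awk '$2 == "unused" { print $1 }'; }

list_unused <<'EOF'
Module  State   Cause
schgr   static  explicit
sctl    static  depend
stape   unused
EOF
```

On a real host, pipe the live output through the filter instead of the here-document.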
HPE-UX 11i v3 (11.31, IA-64)
1. The drivers schgr, sctl, stape, eschgr, esctl, and estape must all be installed in the kernel. To see if these drivers are installed, enter the following command:
/usr/sbin/kcmodule sctl esctl schgr eschgr stape estape
The following example shows output from kcmodule where the stape driver is not installed:
Module  State   Cause
esctl   static  best
sctl    static  depend
schgr   static  best
eschgr  static  best
stape   unused
estape  static  best
If one or more of the above drivers is in the unused state, they must be installed in the kernel. If they are all installed (static state), proceed to the next section, Final host configurations.
2. Use kcmodule to install modules in the kernel. For example, to install the stape module, use the following command:
/usr/sbin/kcmodule stape=static
Enter yes to back up the current kernel configuration file and initiate the new kernel build.
3. Reboot the server to activate the new kernel:
cd /
/usr/bin/shutdown -r now
Installing the HPE-UX iSCSI Software Initiator
The iSCSI Software Initiator is located at the HPE Software Depot:
1. Go to software.hp.com.
2. Enter iSCSI Software Initiator in the Search Software Depot box located on the upper right side of the website.
3. When the search results show iSCSI Software Initiator, click Select. An HPE Passport account (a sign-in link is provided) is required.
4. After logging in using your HPE Passport, complete the required fields, scroll down, then read and accept the software license agreement for the order. Click Next.
5. Under Documentation, click the Download tab for the Installation Instructions to download instructions for using the Software Distributor tool to install the iSCSI Software Initiator.
6. Under Software, click the Download tab for the iSCSI Software Initiator version that you would like to download.
7. After installing the iSCSI Software Initiator and rebooting, you can verify that the installation was successful by running the following command:
swlist iSCSI-00
If the HPE-UX iSCSI Software Initiator is installed correctly, the output will look similar to the following.
HPE-UX 11.23:
Initializing...
Contacting target "localhost"...
Target: localhost
iSCSI-00            B.11.23.03e  HPE-UX iSCSI Software Initiator
iSCSI-00.ISCSI-SWD  B.11.23.03e  HPE-UX iSCSI Software Initiator
HPE-UX 11.31:
Initializing...
Contacting target "localhost"...
Target: localhost
iSCSI-00            B.11.31.01  HPE-UX iSCSI Software Initiator
iSCSI-00.ISCSI-SWD  B.11.31.01  HPE-UX iSCSI Software Initiator
Final host configurations
1. Run ioscan to verify that the host detects the tape devices:
ioscan
For HPE-UX 11.23 legacy device special files (DSFs) or persistent DSFs, run the following commands:
ioscan -fnkC tape
ioscan -fnkC autoch
2. For HPE-UX 11.31 persistent DSFs, run the following commands:
ioscan -fnNkC tape
ioscan -fnNkC autoch
Note: Some data protection and archiving software might not currently support HPE-UX 11.31 persistent DSFs for tape. Review your data protection and archiving software documentation for more information.
3. To verify that the host detects iSCSI devices, issue the ioscan command as follows for HPE-UX 11.23:
ioscan -fnC iscsi
Issue the ioscan command as follows for HPE-UX 11.31:
ioscan -fnNC iscsi
If the software is installed correctly, the generated output will look similar to this:
Class  I  H/W Path  Driver  S/W State  H/W Type  Description
=====================================================================
iscsi  0  255/0     iscsi   CLAIMED    VIRTBUS   iSCSI Virtual Node
4. If no device files have been installed, enter the following commands:
insf -C tape -e
insf -C autoch -e
The command line tool for configuring the HPE-UX iSCSI Software Initiator is iscsiutil. Detailed instructions for iscsiutil can be found in the iscsiutil man pages. If using iscsiutil to configure the HPE-UX iSCSI Software Initiator, add the path for iscsiutil and other iSCSI executables to the root path:
PATH=$PATH:/opt/iscsi/bin
You should now be able to find the iSCSI initiator node for the HPE-UX host:
iscsiutil -l
To discover available iSCSI target devices on an HPE Storage System, type the following command, where x.x.x.x is the IP address of the HPE Storage System:
iscsiutil -a -I x.x.x.x
Installing the HPE StoreEver Tape advanced path failover drivers for HPE-UX 11.31
Advanced path failover for HPE-UX is implemented by updating HPE-UX drivers to support advanced path failover with the LTO-6 tape drives. The drivers function as both failover and non-failover drivers.
The updated drivers are:
• HPE-UX tape driver (estape), used for data path failover
• HPE-UX media changer driver (eschgr), used for control path failover
• HPE-UX SCSI stack driver (esctl), used for data path and control path failover
To download and install the HPE StoreEver Tape advanced path failover drivers for HPE-UX 11.31:
1. Get the latest HPE-UX patches from h20566.www2.hpe.com/portal/site/hpsc/patch/home.
Note: To access and download HPE-UX patches, you must have:
1. An HPE Passport account (a sign-in link is provided).
2. An active HPE support agreement linked to your HPE Support Center profile. The active Hewlett Packard Enterprise support agreement must:
- Cover the specific HPE Operating Systems linked to your HPE Support Center user profile
- Include software updates or previous version support privileges
Links are provided to view your current patch access privileges or to contact Hewlett Packard Enterprise.
2. To locate the patches, search for estape, eschgr, and esctl (or the patch number), and then check the superseded-by field to see if there is a superseding patch.
3. To install the advanced path failover drivers, use the standard HPE-UX kernel patch installation process to install the following patches on the HPE-UX Servers running HPE-UX 11.31:
- HPE-UX tape driver patch (estape): PHKL_43680 or superseding patch
- HPE-UX media changer driver patch (eschgr): PHKL_43681 or superseding patch
- HPE-UX SCSI stack (mass storage stack) driver patch (esctl): PHKL_43819 or superseding patch
4. The server will automatically reboot as part of the installation process.
You can use ioscan to view the tape and library (media changer) devices connected to the HPE-UX Server. The device special file (DSF) is listed as the last item in the description:
ioscan -knNfC tape
ioscan -knNfC autoch
Finding the lockdown path
The load-balance policy used to route data on multiple paths to a tape drive or library is called the "path_lockdown" policy. Use the scsimgr get_info command to see the current lockdown path for a library. For example:
scsimgr get_info -D /dev/rchgr/autoch35
STATUS INFORMATION FOR LUN : /dev/rchgr/autoch35
...
LUN path used when policy is path_lockdown = 0/0/0/9/0/0/1.0x50014382c6e4f009.0x1000000000000
Use scsimgr get_attr to see the lockdown path attribute for a tape drive:
scsimgr get_attr -D /dev/rtape/tape28_BEST
SCSI ATTRIBUTES FOR LUN : /dev/rtape/tape28_BEST
name = lpt_lockdown
current = 0/0/0/9/0/0/0.0x100000e00222a6c1.0x2000000000000
default =
saved =
For additional information, see the HPE-UX man pages: scsimgr(1M), ioscan(1M), mknod(2), mksf(1M), rmsf(1M).
Troubleshooting advanced path failover for HPE-UX 11.31
Advanced path failover errors are logged in the /var/adm/syslog/syslog.log file as part of the default SCSI I/O tracing function of HPE-UX. You can use standard file viewing commands, including cat, vi, dmesg, and tail -f, to view syslog.log.
Enabling or disabling advanced path failover for HPE-UX 11.31
Advanced path failover is disabled by default. When advanced path failover is disabled, the driver operates as if the device is not capable of using the advanced path failover feature.
When advanced failover is enabled for the library or tape drive, the device resets itself and must be opened using the device special file before the driver will recognize it as an advanced path failover device and use the failover features of the driver. Opening the device is generally done by the host applications.
You can enable or disable advanced path failover using the library web-based interface. For instructions, refer to the following sections of the HPE StoreEver Tape Libraries Failover User Guide:
• Enabling control path failover under Configuring failover for the HPE StoreEver ESL G3 Tape Libraries
• Enabling data path failover under Configuring failover for HPE StoreEver MSL6480 Tape Libraries
When advanced path failover is disabled, the passive control paths to the library will go into an error state (NO_HW) in the ioscan(1M) command output. These stale entries do not affect the function of the library. To clear these errors so the device can be accessed using its DSF:
1. On the HPE-UX host, run rmsf -H on the lunpath hardware paths that are in the NO_HW state. For example:
rmsf -H 0/4/0/0/0/1.0x50014380023560d4.0x1000000000000
2. Run ioscan -kfNH <HBA path>. For example:
ioscan -kfNH 0/4/0/0/0
Hewlett Packard Enterprise recommends enabling or disabling advanced path failover only when the library is not opened by any applications. If advanced path failover is disabled while an application is accessing the library, all of the library's lunpaths will go offline and I/O requests to the library will fail.
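To gather all stale lunpaths in one pass before running rmsf -H, the ioscan output can be filtered for the NO_HW state. The helper below is a sketch: the three-column input layout (path, driver, state) and the sample paths are illustrative assumptions, so adjust the field index to match your actual ioscan output:

```shell
# Print lunpath hardware paths whose S/W state is NO_HW, as candidates
# for "rmsf -H". Sample input columns: path, driver, state (illustrative).
no_hw_paths() { awk '$3 == "NO_HW" { print $1 }'; }

no_hw_paths <<'EOF'
0/4/0/0/0/1.0x50014380023560d4.0x1000000000000 eslpt NO_HW
0/4/0/0/0/0.0x50014380023560d7.0x1000000000000 eslpt CLAIMED
EOF
```

Review the printed paths before passing any of them to rmsf -H.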
For more detailed information related to installing the HPE StoreEver Tape advanced path failover drivers for HPE-UX 11.31 Servers, refer to the HPE StoreEver Tape Libraries Failover User Guide, specifically the section titled Installing and using HPE-UX advanced path failover drivers.
HPE-UX Server best practices
HPE-UX 11.31 can experience poor I/O performance on VxFS file systems due to memory blocking during high system memory usage. The HPE-UX 11.31 kernel subsystems and file I/O data cache can consume up to 90 percent of system memory during normal operation. When a heavy file I/O application, such as data protection and archiving software, starts, memory usage can reach close to 100 percent. Under such conditions, if VxFS attempts to allocate additional memory for inode caching, the result can be memory blocking and subsequent poor file I/O performance. In extreme conditions, this scenario can cause data protection and archiving software to time out during file system reads, which could result in backup job failures.
Poor I/O performance resolution
To avoid backup job failures due to memory blocking, modify the kernel tunable parameter vx_ninode. The vx_ninode parameter determines the number of inodes in the inode table, to help VxFS in caching. By default, the size of the inode cache is decided (auto-tuned) at boot time by VxFS depending on the amount of physical memory in the machine. When modifying the value of vx_ninode, HPE recommends the following:
Table 4. Tuning vx_ninode
PHYSICAL MEMORY OR KERNEL AVAILABLE MEMORY    VxFS INODE CACHE (NUMBER OF INODES)
1 GB                                          16384
2 GB                                          32768
3 GB                                          65536
> 3 GB                                        131072
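For scripting the kctune change across hosts, Table 4 can be encoded as a small helper; the function below is a sketch of the table's thresholds (memory in whole GB), not an HPE-supplied tool:

```shell
# Recommended vx_ninode value for a given physical memory size in GB,
# per Table 4 above.
vx_ninode_for_gb() {
  if   [ "$1" -le 1 ]; then echo 16384
  elif [ "$1" -le 2 ]; then echo 32768
  elif [ "$1" -le 3 ]; then echo 65536
  else echo 131072
  fi
}

vx_ninode_for_gb 2   # prints 32768
```

The result would then be applied with /usr/sbin/kctune vx_ninode=<value>, as shown below.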
To determine the current value of vx_ninode, run the following at the shell prompt:
/usr/sbin/kctune vx_ninode
To set vx_ninode to 32768, run the following command at the shell prompt:
/usr/sbin/kctune vx_ninode=32768
Note: The kernel tunable parameters filecache_min and filecache_max control the amount of physical memory that can be used for caching file data during system I/O operations. By default, these parameters are automatically determined by the system to better balance memory usage among file system I/O intensive processes and other types of processes. The values of these parameters can be lowered to allow a larger percentage of memory to be used for purposes other than file system I/O caching. Whether to modify these parameters depends on the nature of the applications running on the system.
HPE-UX 11.23: Disabling rewind-on-close devices with st_san_safe
Turning on the HPE-UX 11.23 kernel tunable parameter st_san_safe disables tape DSFs that are rewind-on-close. This prevents utilities like mt from rewinding a tape that is in use by another utility.
Some applications or utilities require rewind-on-close DSFs (for example, the frecover utility that comes with HPE-UX). In this case, disabling rewind-on-close devices renders the utility unusable. Most data protection and archiving software, such as HPE Data Protector, can be configured to use SCSI reserve/release, which protects them from rogue rewinds by other utilities. Consider the requirements of your data protection and archiving environment when determining whether to enable st_san_safe.
To determine if rewind-on-close devices are currently disabled, enter:
/usr/sbin/kctune st_san_safe
If the value of st_san_safe is 1, rewind-on-close devices are disabled. If the value is 0, rewind-on-close devices are enabled. To disable rewind-on-close devices, enter:
/usr/sbin/kctune st_san_safe=1
Oracle Solaris Server
Solaris 10 Update 11 (SPARC), Solaris 10 Update 11 (x86/x64), Solaris 11.2 (SPARC), Solaris 11.2 (x64)
How to enable the iSCSI Software Initiator
1. For Solaris 10 Update 11 (SPARC) and 11.2 (SPARC), enable the iSCSI services using the command:
svcadm enable network/iscsi/initiator
2. Verify the iSCSI services are running:
svcs -a | grep 'iscsi/initiator'
online 10:10:28 svc:/network/iscsi/initiator:default
3. For earlier versions of Solaris, enable the iSCSI services using the command:
svcadm -v enable iscsi_initiator
svc:/network/iscsi_initiator:default enabled.
4. Verify the iSCSI services are running:
svcs -a | grep iscsi_initiator
online 10:10:28 svc:/network/iscsi_initiator:default
The command line tool for configuring the Solaris iSCSI Software Initiator is iscsiadm. Detailed instructions for iscsiadm can be found in the iscsiadm man pages. If using iscsiadm to configure the Solaris iSCSI Software Initiator, run the following command to find the iSCSI initiator node for the Solaris host:
iscsiadm list initiator-node
To discover available iSCSI target devices on an HPE Storage System, type the following commands, where x.x.x.x is the IP address of the HPE Storage System:
iscsiadm add discovery-address x.x.x.x:3260
iscsiadm list discovery-address
Discovery Address: x.x.x.x:3260
iscsiadm modify discovery -t enable
iscsiadm list discovery
List the configured iSCSI target devices using the following command:
iscsiadm list target
Oracle Solaris Server best practices
Troubleshooting with the cfgadm utility
• Getting the status of FC devices using cfgadm:
cfgadm -al
Example output for the above command:
This output shows a media changer at LUN 0 for the 100000e0022229fa9 WWN, and tape and disk devices at LUN 0 for other WWNs. The devices are connected, have been configured, and are ready for use.
The cfgadm -al -o show_FCP_dev command can be used to show the devices for all LUNs of each Ap_Id.
• Fixing a device with an "unusable" condition:
If the condition field of a device in the cfgadm output is "unusable", the device is in a state in which the server cannot use it. This may have been caused by a hardware issue. In this case, do the following to resolve the issue:
- Resolve the hardware issue so the device is available to the server.
- After the hardware issue has been resolved, use the cfgadm utility to verify device status and to mend the status if necessary.
- Use cfgadm to get device status: cfgadm -al
- For a device that is "unusable", use cfgadm to unconfigure the device and then re-configure the device. For example (this is an example only; your device WWN will be different):
cfgadm -c unconfigure c4::100000e0022286ec
cfgadm -f -c configure c4::100000e0022286ec
- Use cfgadm again to verify that the condition of the device is no longer "unusable": cfgadm -al
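When several devices are affected, the cfgadm listing can be filtered for the unusable condition to build the list of Ap_Ids needing the unconfigure/configure cycle. The helper below is a sketch: it assumes the condition is the last whitespace-separated field, and the sample Ap_Ids are illustrative:

```shell
# Print Ap_Ids whose Condition column (last field) reads "unusable".
unusable_apids() { awk '$NF == "unusable" { print $1 }'; }

unusable_apids <<'EOF'
Ap_Id                  Type         Receptacle  Occupant    Condition
c4::100000e0022286ec   med-changer  connected   configured  unusable
c4::100000e00222a6c1   tape         connected   configured  ok
EOF
```

On a live system, pipe cfgadm -al into the filter instead of the here-document.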
IBM AIX Server
AIX 6.1 (TL9), AIX 7.1 (TL3)
Installing the HBA device driver
Install the IBM (5729, 5735, 5273, 5758, 5759, 5773, 5774) HBA and restart the server.
1. Ensure that the HBA is recognized. At the shell prompt, type:
lsdev -Cc adapter
There is a line in the output similar to the following:
fcs0 Available 1D-08 FC Adapter
If the adapter is not recognized, check that the correct HBA fileset (driver) is installed:
6228: lslpp -L | grep devices.pci.df1000f7
6239: lslpp -L | grep devices.pci.df1080f9
5716: lslpp -L | grep devices.pci.df1000fa
5759: lslpp -L | grep devices.pci.df1000fd
5773: lslpp -L | grep devices.pciex.df1000fe
5774: lslpp -L | grep devices.pciex.df1000fe
There are lines in the output for lslpp similar to the following for a 6239 HBA:
devices.pci.df1080f9.diag  5.1.0.1  C  F  PCI-X FC Adapter Device
devices.pci.df1080f9.rte   5.1.0.1  C  F  PCI-X FC Adapter Device
2. For information about the HBA, such as the WWN, execute the following command:
lscfg -vl fcs0
The output will look similar to the following:
3. To see the version of microcode (firmware) being run by the HBA, use the following command:
lsmcode -c -d <device>
Microcode and other updates can be found at ibm.com/support/fixcentral.
4. After the HBA has successfully logged into the SAN fabric and the necessary zoning is configured, configure the HBA and devices within the fabric. At the prompt, type:
cfgmgr -l <devicename> -v
Note: Running cfgmgr without a -l argument may generate a "devices.fcp.changer" error. This is a result of the cfgmgr device scan receiving a response from the autochanger device, for which AIX does not have a specific driver. Under these conditions, the error message does not indicate a problem and is for information only.
5. Within the command, <devicename> is the name from the output of the lsdev command in step 1, such as fcs0.
6. To ensure all tape device files are available, at the prompt type:
lsdev -HCc tape
7. By default, AIX creates tape devices with a fixed block length. To change the devices to have variable block lengths, at the prompt type:
chdev -l <tapedevice> -a block_size=0
8. Configuration of the tape devices (where tape devices are rmt0, rmt1, and so on) is complete.
Note: HPE LTO tape drives use the IBM host tape driver. When properly configured, a device listing will show the tape device as follows:
For IBM native HBAs: Other FC SCSI Tape Drive
For non-IBM native HBAs: Other SCSI Tape Drive
9. To configure Fast I/O Failure for Fibre Channel devices after link events in the SAN, change the fast fail parameter as in the example below:
chdev -l fscsiX -a fc_err_recov=fast_fail
Within the command, the X in fscsiX is the same number as in the adapter name from the output of the lsdev command in step 1 (e.g., fcs0 corresponds to fscsi0).
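Steps 7 and 9 above are typically applied to every tape device and FC SCSI adapter on the host. The loop below is a sketch that only prints the chdev commands for review rather than executing them; the rmt/fscsi device names are illustrative:

```shell
# Print (not execute) the chdev commands to set variable block length on
# each tape device and fast_fail recovery on each FC SCSI adapter.
# Device names rmt0/rmt1 and fscsi0/fscsi1 are illustrative.
for t in rmt0 rmt1; do
  echo "chdev -l $t -a block_size=0"
done
for f in fscsi0 fscsi1; do
  echo "chdev -l $f -a fc_err_recov=fast_fail"
done
```

On a real AIX host, the device lists would come from lsdev -Cc tape and lsdev -Cc adapter output instead of being hard-coded.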
IBM AIX Server best practices
Persistent binding
To prevent device shifting after a host reboot, you can enable the dynamic tracking feature in earlier versions of AIX. AIX 7 and above should have this parameter enabled by default.
To enable dynamic tracking of FC devices, set the attribute to dyntrk=yes, as shown in the example:
chdev -l fscsiX -a dyntrk=yes
Within the command, the X in fscsiX is the same number as in the adapter name from the output of the lsdev command in step 1 (e.g., fcs0 corresponds to fscsi0).
Note: For an IBM Virtual I/O Server (VIOS) running AIX logical partitions (LPARs), when using N-Port ID Virtualization (NPIV) with AIX LPARs, it is strongly recommended to upgrade VIOS to version 2.2.3.4 or greater.
Virtual machine support
VM software is used for partitioning, consolidating, and managing computing resources, allowing multiple unmodified operating systems and their applications to run in VMs that share physical resources. Each VM represents a complete system with processors, memory, networking, storage, and BIOS. See table 5 for tape and disk support for virtualization products.
Table 5. VM tape/VTL/NAS support
VM Product | StoreEver Direct Attached SCSI | StoreEver Direct Attached SAS | StoreEver FC & FCoE SAN | StoreOnce VTL | StoreOnce iSCSI VTL | StoreOnce Catalyst over Ethernet (CoE) | StoreOnce Catalyst over Fibre Channel (CoFC) | StoreOnce NAS | Support Notes
Citrix XenServer Host: No, No. No support statement for tape at this time.
Citrix XenServer Guest VM: No, Yes, Yes, No, Yes. For iSCSI tape devices, the iSCSI Software Initiator must run in the VM operating system. D2D SAN shares must be accessed directly in the VM operating system, not attached through the hypervisor.
HPEVM Host: Yes, No, Yes, Yes, Yes, No, Yes. Tape drive/media changer must not be attached to a guest VM while being used by the host.
HPEVM Guest VM: Yes, No, Yes, Yes, Yes, No, Yes. Tape drive/media changer must only be attached to a single guest VM at a time.
Hyper-V Host: Yes, Yes, Yes, Yes, Yes, No, Yes.
Hyper-V Guest VM: No, No, No, Yes, Yes, No, Yes. For iSCSI tape/media changer devices, the iSCSI Software Initiator must run in the VM operating system. D2D NAS shares must be accessed directly in the VM operating system, not attached through the hypervisor.
VMware ESX Host: Yes, No, No (7), No (7), No, No, No. Direct attached SCSI tape drive is multi-target only (no multi-LUN) and must not be attached to a VM while being used by the host. HPE does not test or validate direct attached SCSI tape on VMware ESX/ESXi hosts and does not provide support for tape drives and tape libraries in such configurations (8).
7. SAN tape devices (FC and iSCSI) are not supported directly by a VMware ESX host. vStorage API for Data Protection uses a backup server and VM software snapshots to allow FC and iSCSI backups.
8. For ESX 4.1 Server tape support, see the ESX 4.1 Fibre Channel SAN Configuration Guide. For ESX 5.0 Server tape support, see the ESXi 5.0 vSphere Storage Guide. For ESX 5.1 Server tape support, see the ESXi 5.1 vSphere Storage Guide. For ESX 5.5 Server tape support, see the ESXi 5.5 vSphere Storage Guide.
Table 5. VM tape/VTL/NAS support (continued)
VMware Guest VM: Yes, No, No, Yes, Yes, No (9), Yes. Direct attached SCSI tape drive is multi-target only (no multi-LUN) and must only be attached to a single VM at a time.
For iSCSI tape/media changer devices, the iSCSI Software Initiator must run in the VM operating system.
D2D NAS shares must be accessed directly in the VM operating system, not attached through the hypervisor.
HPE does not test or validate direct attached SCSI tape on VMware guest VMs and does not provide support for tape drives and tape libraries in such configurations.
VMware vStorage API for Data Protection: Yes, Yes, Yes, Yes, Yes, No, Yes. FC SANs and shared tape devices are limited to a physical backup server.
Note: Be sure to do the following:
• Refer to your data protection and archiving software documentation for supported VM backup methods.
• Refer to the VM documentation for supported backup devices.
VMware Server
Note: VMware does not support ESXi SAN attached tape devices. VMware vStorage APIs for Data Protection (VADP) with an off-host backup server can be used to manage SAN devices.
• VADP offloads backup responsibility from ESXi hosts to one or more dedicated backup servers, reducing the load on ESXi hosts. VADP provides full-image backup and restore capabilities for all VMs, and file-based backups for Microsoft Windows and Linux VMs.
• VMs can also be set up as backup servers with locally mapped NAS shares or iSCSI backup devices. Refer to your data protection and archiving software documentation for VM support.
• VMs can also be set up for LAN backup, the same as a regular client. Refer to your data protection and archiving software documentation for details.
• For recommendations on VMware VM backup and recovery to HPE StoreOnce Backup, go to hpe.com/storage/BURACompatibility, scroll down to Data Agile BURA Solution White Papers, then click the VMware hyperlink across from White Papers: Databases and Virtual Machines to view the associated white papers.
9. Yes, when using HPE StoreOnce Recovery Manager Central (RMC) only.
HPE Integrity Virtual Machines
Hewlett Packard Enterprise supports, certifies, and sells HPE Integrity Virtual Machines (HPEVM) virtualization software on HPE Integrity servers.
HPEVM is an application installed on an HPE-UX Server that allows multiple unmodified operating systems (HPE-UX, Windows, and Linux) and their applications to run in VMs that share physical resources.
The HPE Virtual Server Environment (VSE) for HPE Integrity provides an automated infrastructure that can adapt in seconds with mission-critical reliability. HPE VSE allows you to optimize server utilization in real time by creating virtual servers that can automatically grow and shrink based on business priorities and service levels.
Note: The HPE Integrity VM host and VMs do support FC SAN connected tape, Virtual Library Systems (VLS) devices, and HPE StoreOnce backup systems.
• Off-host backups using HPE storage array hardware mirroring or snapshots can be used to shorten backup windows and off-load resources required for backup.
• VMs can also be set up for LAN backup, the same as a regular client or media host. Refer to your data protection and archiving software documentation for details.
Microsoft Hyper-V
Note: Hewlett Packard Enterprise does not test or support Hyper-V VMs with SAN or direct attach tape drives. The Hyper-V host or a backup server can be used to manage such devices.
• The Volume Shadow Copy Service (VSS) Hyper-V writer can be used to quiesce Windows VMs and create a snapshot on the Hyper-V host volume. VMs that cannot be quiesced can be placed in the Saved state before snapshot creation. The snapshots are then used for image or file backup of the VMs. If a VM was placed in the Saved state, Hyper-V will return the VM to its original state. Review your data protection and archiving software documentation for details.
• VMs can also be set up for LAN backup, the same as a regular client. Refer to your data protection and archiving software documentation for details.
• VMs can also be set up as backup servers with locally mapped NAS shares or iSCSI backup devices. Refer to your data protection and archiving software documentation for VM support.
© Copyright 2015 Hewlett Packard Enterprise Development LP. The information contained herein is subject to change without notice. The only warranties for Hewlett Packard Enterprise products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. Hewlett Packard Enterprise shall not be liable for technical or editorial errors or omissions contained herein.
Citrix is a registered trademark of Citrix Systems, Inc. and/or one or more of its subsidiaries, and may be registered in the United States Patent and Trademark Office and in other countries. Linux is the registered trademark of Linus Torvalds in the U.S. and other countries. McAfee is a trademark or registered trademark of McAfee, Inc. in the United States and other countries. Microsoft, Windows, and Windows Server are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries. Oracle is a registered trademark of Oracle and/or its affiliates. Red Hat is a registered trademark of Red Hat, Inc. in the United States and other countries. SAP HANA is the trademark or registered trademark of SAP SE in Germany and in several other countries. VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions.
4AA5-7983ENW, December 2015
Installing backup software and patches
After all components on the SAN are logged in and configured, the system is ready for the installation of any supported backup software. Refer to the installation guide for your particular software package, or contact the vendor for detailed installation procedures and requirements. After installing the backup software, check with the software vendor for the latest updates and patches. If any updates or patches exist for your backup software, install them now.
Learn more at hpe.com/storage/StoreEver
11. Booting your server to the SPP (offline mode) will allow you to upgrade firmware for any or all components that are flagged as requiring an upgrade, while booting to the OS and then running the SPP (online mode) will allow you to install any or all Hewlett Packard Enterprise-related drivers and software packages.
Note: Refer to the HPE Service Pack for ProLiant Release Notes, which are referenced above, if any issues are encountered when installing the SPP. Specifically, review the sections Deployment Instructions and Components Changes.
12. A reboot might be required following the SPP installation.
To manually install the latest HPE-supported Brocade, Emulex, or QLogic driver kit from the HPE support website:
1. From an Internet browser, go to hpe.com.
2. Click on Support.
3. Under Product Support, click HPE Servers, Storage and Networking.
4. In the Enter product name or number box, enter your HBA model (for example, AP770A, AJ763A, AJ764A, CN1000Q, SN1000Q, CN1100E, SN1100E, or NC553m) and click Go.
5. Click Get drivers, software & firmware.
6. Select the Linux Server operating system version that is installed on the ProLiant system in which the HBA is installed.
7. Expand Driver—Storage Fibre Channel, then click on the appropriate driver hyperlink. (If more than one version of the driver is listed, verify the latest supported version in the latest HPE Data Agile BURA Compatibility Matrix.)
8. Below the details for the driver, you should notice several tabs. Select the Release Notes tab, then scroll to verify whether there are any Upgrade Requirements or to view additional information.
9. Select the Installation Instructions tab to verify how to install the driver. Be sure to copy or save the installation instructions.
10. Click on the Download tab to copy the file to your server.
11. A reboot might be required following the driver installation.
Installing the Linux OPEN-iSCSI module
You can install and use the iscsi-initiator-utils package (Red Hat®) or the open-iscsi module (SUSE). Download and install either package using your distribution's package manager (yum or YaST, for example). Detailed instructions for iscsiadm can be found in the iscsiadm man pages.
Before discovering available iSCSI target devices on an HPE Storage System from a Linux server, the target requires the Linux server's iSCSI initiator name. This name is found in the /etc/iscsi/initiatorname.iscsi file.
Once the iSCSI initiator name has been determined, discover available iSCSI target devices on the HPE Storage System by typing the following command, where x.x.x.x is the IP address of the HPE Storage System:
iscsiadm --mode discovery --type sendtargets --portal x.x.x.x
To connect to the HPE Storage System targets, type the following command for each discovered target, where target_name is the name returned by the iscsiadm discovery:
iscsiadm --mode node -T target_name --login --portal x.x.x.x
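The discovery and login commands above can be combined into a short script that logs in to every discovered target. This is a minimal sketch, not the documented procedure: the portal address 192.0.2.10 is a placeholder, and the script assumes every discovered target should be logged in.

```shell
# Sketch: discover all iSCSI targets on an HPE Storage System portal and
# log in to each one. PORTAL is a placeholder -- replace it with the IP
# address of your storage system.
PORTAL="192.0.2.10"

if command -v iscsiadm >/dev/null 2>&1 && [ -r /etc/iscsi/initiatorname.iscsi ]; then
    # The target needs this initiator name before discovery will succeed.
    cat /etc/iscsi/initiatorname.iscsi

    # Discovery prints lines of the form "<portal>,<tpgt> <target_name>";
    # the second field is the target name used for the login.
    iscsiadm --mode discovery --type sendtargets --portal "$PORTAL" |
    while read -r portal target; do
        iscsiadm --mode node -T "$target" --login --portal "$PORTAL"
    done
else
    echo "open-iscsi not configured on this host; nothing to do"
fi
```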
Storage HBAs with Linux servers
Hewlett Packard Enterprise supports the Linux in-box driver (the driver supplied with the operating system distribution). However, not all hardware configurations support the in-box driver. To determine if your configuration supports the in-box driver, see the HBAs, CNAs, Flex Fabric Adapters, and Server LOMs Support Matrix: Linux, Citrix, VMware, and Windows, which is available on the HPE SPOCK website. You must sign up for an HPE Passport to access SPOCK.
Whether you are using the in-box drivers or the out-of-box drivers, Hewlett Packard Enterprise recommends that you install the HPE Fibre Channel Enablement Kit because it provides additional libraries and configuration utilities that enable HPE Fibre Channel storage arrays to work properly with Linux.
Note: If you are using any HPE management applications, you need the HBA API libraries that come with the HPE-fc-enablement RPM.
Downloading and installing the Fibre Channel Enablement Kit for Linux
Note: There has been a change to the enablement kits released after 29 April 2014; they are now vendor-specific kits. Hewlett Packard Enterprise recommends that you uninstall any previous kits prior to installing the latest version of the enablement kit.
1. From an Internet browser, go to hpe.com.
2. Click on Support.
3. Under Product Support, click HPE Servers, Storage and Networking.
4. In the Enter product name or number box, enter your HBA model (for example, AP770A, AJ763A, AJ764A, CN1000Q, CN1100E, or NC553m) and click Go.
5. Click Get drivers, software & firmware.
6. Select the Linux Server operating system version of the ProLiant system in which the HBA is installed.
7. Select the Software—Storage Controllers—FC HBA hyperlink and click on the HPE Fibre Channel Enablement Kit for Linux (American, International).
8. Below the details for the driver, you should notice several tabs. Select the Release Notes tab, then scroll to verify whether there are any Upgrade Requirements or to view additional information.
9. Select the Installation Instructions tab to verify how to install the HPE Fibre Channel Enablement Kit for Linux. Be sure to copy or save the installation instructions.
10. Click on the Download tab to copy the RPM file to your server.
11. Browse to the directory to which you downloaded the RPM.
12. Follow the installation instructions that you copied or saved in step 9.
13. A reboot is required after the installation for the updates to take effect and hardware stability to be maintained.
14. Verify that the host has successfully discovered all the expected devices—tape drives, library robotic devices, and disk-based backup devices—using one of the following methods:
– Review the devices listed from running the command cat /proc/scsi/scsi.
– Review the output from the sg_inq command (which requires that sg3_utils is installed) for any of the /dev/sg devices listed in the output of the sg_map command. See figure 3 for an example.
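Both verification methods can be run together. The following sketch simply automates the two checks described above; it skips each step quietly if the corresponding file or utility is not present.

```shell
# Sketch: verify that tape drives, library robotics, and disk-based
# backup devices were discovered, using both methods described above.
verify_scsi_devices() {
    # Method 1: the kernel's view of attached SCSI devices
    [ -r /proc/scsi/scsi ] && cat /proc/scsi/scsi

    # Method 2: inquire each /dev/sg node (requires the sg3_utils package);
    # sg_inq shows vendor/model so tape drives and robots can be identified
    if command -v sg_map >/dev/null 2>&1 && command -v sg_inq >/dev/null 2>&1; then
        for sg in $(sg_map | awk '{print $1}'); do
            sg_inq "$sg"
        done
    fi
    return 0
}

verify_scsi_devices
```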
Figure 3. Verifying devices using the sg_map and sg_inq commands
Installing the HPE StoreEver Tape advanced path failover drivers
Red Hat Enterprise Linux Server 6.2 (x86_64), 6.3 (x86_64), 6.4 (x86_64), 6.5 (x86_64), and 6.6 (x86_64)
The advanced path failover drivers for Linux replace the normal SCSI Tape and SCSI Generic drivers. For devices that do not support advanced path failover, they pass all SCSI commands through the same code path that is followed when the standard drivers are loaded; commands for devices that do support failover are routed through the new PFO driver.
1. Go to hpe.com/support/storage.
2. Select Tape Storage.
3. Click Enterprise Class Tape Libraries (for the ESL G3) or Tape Libraries (for the MSL6480).
4. Click HPE StoreEver ESL G3 Tape Libraries or HPE StoreEver MSL6480 Tape Library.
5. In the Download options tab, click Get drivers, software & firmware.
6. For the ESL G3, select your product. For the MSL6480, skip to the next step.
7. Under Operating systems, select OS Independent.
8. Expand Driver—Storage Tape, then select the appropriate driver hyperlink.
9. Click Obtain software for the HPE StoreEver High Availability Failover Driver for your operating system.
10. Click on Select. An HPE Passport account (a sign-in link is provided) is required.
11. After logging in using your HPE Passport, complete the required fields, then read and accept the software license agreement. Click Next.
12. On the following page, select the Download tab for the HPE StoreEver High Availability Failover Driver for the version of Red Hat that is installed on your server.
13. To install the drivers, run the following command:
rpm -ivh <filename>.rpm
14. In some cases, the server will need to be rebooted to complete the installation. Check the instructions provided by the RPM file output and reboot the server if requested.
15. The driver revision number indicates the build date of the driver and can be viewed by running:
cat /proc/scsi/sg/version
16. You can view the status of a device that is controlled by the failover driver by reading a file in the /sys file system. For example, to see the path status for /dev/sg3:
cat /sys/class/pfo/pfo3/paths
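To check every failover-capable device at once, the per-device read in step 16 can be wrapped in a loop. This is a sketch that assumes the /sys/class/pfo layout shown above; if the PFO driver is not loaded, it reports that instead.

```shell
# Sketch: print the path status for every device claimed by the
# advanced path failover (pfo) driver.
show_pfo_paths() {
    if [ -d /sys/class/pfo ]; then
        for p in /sys/class/pfo/pfo*/paths; do
            # grep -H prefixes each line with the file name, so the
            # output identifies which pfo device each path belongs to
            [ -e "$p" ] && grep -H . "$p"
        done
    else
        echo "pfo driver not loaded"
    fi
    return 0
}

show_pfo_paths
```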
Enabling advanced path failover on a device while the driver is running
If a device has any advanced path failover feature disabled when advanced path failover is enabled, the device will reset itself, removing the old /dev file. When the device comes back up, it will be recognized as an advanced path failover device and will then operate normally as one. It may not have the same /dev file name as before the change.
Disabling advanced path failover on a device while the driver is running
Disabling advanced path failover while a device is running is not recommended, because the paths will not be cleanly removed and then reassociated. If advanced path failover is disabled on any device, the Linux server will need to be rebooted. When possible:
1. Power down the Linux server cleanly.
2. Disable advanced path failover on the device.
3. Boot the Linux server.
For more detailed information about installing the HPE StoreEver Tape advanced path failover drivers for Red Hat Linux servers, refer to the HPE StoreEver Tape Libraries Failover User Guide—specifically, the section titled Installing and using Linux advanced path failover drivers.
Additional SG device files
In most environments, the default number of SG device files is sufficient to support all of the required devices. In larger SAN environments, if the default number of SG device files is fewer than the combined total of disk and tape devices being allocated to the server, then additional device files need to be created. SG device files are preferable to the standard SCSI tape (st) device files, because the st SCSI timeout values may not be long enough to support some tape operations.
To create additional SG device files, run the following command:
mknod /dev/sgX c 21 X
X signifies the number of a device file that does not already exist. For additional command options, see the mknod man page.
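A loop makes it easy to create a contiguous range of SG device files. This is a sketch: the range sg16 through sg31 is an example only, creating device nodes requires root privileges, and the script is a dry run unless APPLY=yes is set.

```shell
# Sketch: create /dev/sg16 ... /dev/sg31 if they do not already exist.
# FIRST and LAST are example values -- size the range to the combined
# total of disk and tape devices allocated to the server.
FIRST=16
LAST=31
APPLY=${APPLY:-no}   # set APPLY=yes (as root) to actually create nodes

X=$FIRST
while [ "$X" -le "$LAST" ]; do
    if [ -e "/dev/sg$X" ]; then
        : # node already exists; nothing to do
    elif [ "$APPLY" = yes ] && [ "$(id -u)" -eq 0 ]; then
        # char device, major 21 (sg), minor X
        mknod "/dev/sg$X" c 21 "$X"
    else
        echo "would create /dev/sg$X (char, major 21, minor $X)"
    fi
    X=$((X + 1))
done
```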
Red Hat and SUSE Linux Server best practices
Rewind commands issued by rebooted Linux hosts
Device discovery that occurs as part of a normal Linux server boot can cause a SCSI rewind command to be issued to all attached tape drives. If the data protection and archiving software does not employ SCSI reserve/release and the rewind command is received while a tape drive is busy writing, the result is a corrupted tape header and an unusable piece of backup media.
This issue can manifest itself in several ways:
• A failed verify operation
• A failed restore operation
• The inability to mount a tape and read the tape header
If backup verification is not performed, the normal backup process might not detect that an issue exists.
Tape devices not discovered and configured across server reboots
Tape drives can disappear from Linux servers after the host reboots when using older versions of the HBA drivers. Adding the line "hp_rescan -a" to /etc/rc.d/rc.local resolves the issue. The hp_rescan utility was previously included and installed with older versions of the HPE Linux FCoE/FC Driver Kit.
Note: The latest versions of the Fibre Channel Enablement Kit for Linux no longer include the HPE fibreutils package, which contains the hp_rescan utility. The fibreutils package can be downloaded and installed manually by following steps 1–6 from the section Downloading and installing the Fibre Channel Enablement Kit for Linux. For step 5, select Software—Storage Controllers—FC HBA.
This issue, which affects Red Hat installations and, intermittently, some SUSE Linux installations, is understood to be an issue with the mid-layer SCSI driver and its interaction with SCSI-2 tape automation products. The permanent resolution is to upgrade to the latest FC driver kit.
Enable iSCSI target devices to remain persistent across system reboots
To enable iSCSI target devices to remain persistent across system reboots, the open-iscsi service must be configured to run at system startup. This can be done by issuing the following commands.
For Red Hat 7 and SUSE 12:
systemctl enable iscsid.service
systemctl restart iscsid.service
For earlier versions of Red Hat:
chkconfig iscsi on
For earlier versions of SUSE:
chkconfig open-iscsi on
To verify that this configuration change has been accepted, run the following command.
For Red Hat 7 and SUSE 12:
systemctl -a | grep iscsi
iscsi-shutdown.service error inactive dead Logout off all iSCSI sessions on shutdown
iscsi.service loaded inactive dead Login and scanning of iSCSI devices
iscsid.service loaded active running Open-iSCSI
iscsiuio.service loaded active running iSCSI UserSpace I/O driver
iscsid.socket loaded active running Open-iSCSI iscsid Socket
iscsiuio.socket loaded active running Open-iSCSI iscsiuio Socket
For earlier versions of Red Hat:
chkconfig --list iscsi
iscsi 0:off 1:off 2:off 3:on 4:off 5:on 6:off
For SUSE servers:
chkconfig --list open-iscsi
open-iscsi 0:off 1:off 2:off 3:on 4:off 5:on 6:off
LUNs shifting after reboot
The Linux 2.6 kernel and later enhanced the management of attached devices through the introduction of udev. The udev device manager provides users with a persistent naming process for all devices across reboots. For details on how to configure udev, refer to the appropriate Linux distribution documentation.
If your data protection and archiving software requires persistent device mapping, use the software's device configuration wizard to ensure proper configuration.
Recommended changes to queue depth and timeout values
Changes to queue depth and timeout values are recommended when operating HPE StoreEver ESL G3 Tape Libraries and HPE StoreEver MSL Tape Libraries with Linux-based hosts. The recommended changes are as follows:
• Queue depth
The queue depth when operating the MSL6480 or MCB Version 1 ESL G3 libraries should be set to one (1), as there is only one active robot to complete Move Medium commands. With dual-robot MCB Version 2 ESL G3 libraries, the queue depth should be set to two (2), as the library has two active robots that can complete Move Medium commands.
HPE LTO drives are capable of handling command queues of four or five commands, but if hosts continue to send commands past that amount, the drive (or the library hosted by that drive) will start to return status messages saying that the queue is full and that the host should wait 500 ms. If the host doesn't stop sending commands at this point, the delays in returning command status can be long enough that the drive appears hung. Take care to ensure that the queue depth is correct, preferably by using the recommended queue depths provided above.
With Linux-based hosts, this command shows the queue depth setting for each SCSI generic device:
find /sys/class/scsi_generic/*/device/queue_depth -exec grep -H . {} \;
If required, the sg_inq command can help correlate the SCSI generic mapping to specific devices:
sg_inq /dev/sg<N>
Note: <N> is the sg number provided in the output of the previous command.
• Timeouts
Most applications increase the timeout value for motion commands but continue to rely on the default timeout value for Read Element Status commands. This is especially a concern in larger partitioned libraries, where multiple Read Element Status commands to the different partitions can be outstanding and the default timeout value is reached before a response is provided. Depending on whether udev rules are in effect, the default timeout value on Linux-based hosts tends to be either thirty (30) or sixty (60) seconds. Given all of the above, it is recommended that the default timeout value be changed to twenty minutes for all Linux-based hosts operating HPE LTO drives in HPE StoreEver ESL G3 Tape Libraries and HPE StoreEver MSL Tape Libraries, in order to allow multiple commands to complete successfully without hitting the default timeout value.
With Linux-based hosts, this command shows the current default timeout value:
find /sys/class/scsi_generic/*/device/timeout -exec grep -H . {} \;
If required, the sg_inq command can help correlate the SCSI generic mapping to specific devices:
sg_inq /dev/sg<N>
Note: <N> is the sg number provided in the output of the previous command.
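The checks above can be combined with the corresponding sysfs writes. This is a sketch under the recommendations in this section: QDEPTH=1 suits single-robot libraries (use 2 for dual-robot MCB Version 2 ESL G3) and TIMEOUT=1200 seconds is twenty minutes. Writing the values requires root, and the script is a dry run unless APPLY=yes is set; in production, restrict the loop to the sg devices of the tape library and drives (identified with sg_inq) rather than every SCSI generic device.

```shell
# Sketch: report, and optionally set, queue_depth and timeout for SCSI
# generic devices. QDEPTH and TIMEOUT follow the recommendations above.
QDEPTH=1
TIMEOUT=1200
APPLY=${APPLY:-no}   # set APPLY=yes (as root) to actually write the values

for dev in /sys/class/scsi_generic/*/device; do
    [ -e "$dev/queue_depth" ] || continue
    # Show the current values first
    grep -H . "$dev/queue_depth" "$dev/timeout" 2>/dev/null
    if [ "$APPLY" = yes ] && [ "$(id -u)" -eq 0 ]; then
        echo "$QDEPTH"  > "$dev/queue_depth"
        echo "$TIMEOUT" > "$dev/timeout"
    fi
done
```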
The detailed procedure for making the recommended changes to the command queuing and default timeout values for Linux-based hosts can be viewed in the following Engineering Advisory: HPE StoreEver ESL G3 Tape Libraries, MSL Tape Libraries, and 1/8 G2 Autoloaders—Recommended Changes to Queue Depth and Timeout Values With Linux-Based Hosts.
HPE-UX Server
Installing HBA drivers in the kernel
HPE-UX 11i v2 (11.23, IA-64)
1. The drivers schgr, sctl, and stape must all be installed in the kernel. To see if these drivers are installed, enter the following command:
/usr/sbin/kcmodule schgr sctl stape
The following example shows output from kcmodule where the stape driver is not installed:
Module State Cause
schgr static explicit
sctl static depend
stape unused
If one or more of the above drivers is in the unused state, it must be installed in the kernel. If they are all installed (static state), proceed to the section Final host configurations.
2. Use kcmodule to install modules in the kernel. For example, to install the stape module, run the following command:
/usr/sbin/kcmodule stape=static
Enter yes to back up the current kernel configuration file and initiate the new kernel build.
3. Reboot the server to activate the new kernel:
cd /
/usr/bin/shutdown -r now
HPE-UX 11i v3 (11.31, IA-64)
1. The drivers schgr, sctl, stape, eschgr, esctl, and estape must all be installed in the kernel. To see if these drivers are installed, enter the following command:
/usr/sbin/kcmodule sctl esctl schgr eschgr stape estape
The following example shows output from kcmodule where the stape driver is not installed:
Module State Cause
sctl static best
esctl static depend
schgr static best
eschgr static best
stape unused
estape static best
If one or more of the above drivers is in the unused state, it must be installed in the kernel. If they are all installed (static state), proceed to the section Final host configurations.
2. Use kcmodule to install modules in the kernel. For example, to install the stape module, use the following command:
/usr/sbin/kcmodule stape=static
Enter yes to back up the current kernel configuration file and initiate the new kernel build.
3. Reboot the server to activate the new kernel:
cd /
/usr/bin/shutdown -r now
Installing the HPE-UX iSCSI Software Initiator
The iSCSI Software Initiator is located at the HPE Software Depot.
1. Go to software.hp.com.
2. Enter iSCSI Software Initiator in the Search Software Depot box located on the upper-right side of the website.
3. When the search results show iSCSI Software Initiator, click on Select. An HPE Passport account (a sign-in link is provided) is required.
4. After logging in using your HPE Passport, complete the required fields, scroll down, then read and accept the software license agreement for the order. Click Next.
5. Under Documentation, click on the Download tab for the Installation Instructions to download instructions for using the Software Distributor tool to install the iSCSI Software Initiator.
6. Under Software, click on the Download tab for the iSCSI Software Initiator version that you would like to download.
7. After installing the iSCSI Software Initiator and rebooting, you can verify that the installation was successful by running the following command:
swlist iSCSI-00
If the HPE-UX iSCSI Software Initiator is installed correctly, the output will be similar to the following.
HPE-UX 11.23:
# Initializing...
# Contacting target "localhost"...
# Target: localhost
iSCSI-00 B.11.23.03e HPE-UX iSCSI Software Initiator
iSCSI-00.ISCSI-SWD B.11.23.03e HPE-UX iSCSI Software Initiator
HPE-UX 11.31:
# Initializing...
# Contacting target "localhost"...
# Target: localhost
iSCSI-00 B.11.31.01 HPE-UX iSCSI Software Initiator
iSCSI-00.ISCSI-SWD B.11.31.01 HPE-UX iSCSI Software Initiator
Final host configurations
1. Run ioscan to verify that the host detects the tape devices:
ioscan
For HPE-UX 11.23 legacy device special files (DSFs) or persistent DSFs, run the following commands:
ioscan -fnkC tape
ioscan -fnkC autoch
2. For HPE-UX 11.31 persistent DSFs, run the following commands:
ioscan -fnNkC tape
ioscan -fnNkC autoch
Note: Some data protection and archiving software might not currently support HPE-UX 11.31 persistent DSFs for tape. Review your data protection and archiving software documentation for more information.
3. To verify that the host detects iSCSI devices, issue the ioscan command as follows for HPE-UX 11.23:
ioscan -fnC iscsi
Issue the ioscan command as follows for HPE-UX 11.31:
ioscan -fnNC iscsi
If the software is installed correctly, the generated output will look similar to this:
Class I HW Path Driver SW State HW Type Description
=====================================================================
iscsi 0 255/0 iscsi CLAIMED VIRTBUS iSCSI Virtual Node
4. If no device files have been installed, enter the following commands:
insf -C tape -e
insf -C autoch -e
The command-line tool for configuring the HPE-UX iSCSI Software Initiator is iscsiutil. Detailed instructions for iscsiutil can be found in the iscsiutil man pages. If using iscsiutil to configure the HPE-UX iSCSI Software Initiator, add the path for iscsiutil and the other iSCSI executables to the root path:
PATH=$PATH:/opt/iscsi/bin
You should now be able to find the iSCSI initiator node for the HPE-UX host:
iscsiutil -l
To discover available iSCSI target devices on an HPE Storage System, type the following command, where x.x.x.x is the IP address of the HPE Storage System:
iscsiutil -aI x.x.x.x
Installing the HPE StoreEver Tape advanced path failover drivers for HPE-UX 11.31
Advanced path failover for HPE-UX is implemented by updating the HPE-UX drivers to support advanced path failover with LTO-6 tape drives. The drivers function as both failover and non-failover drivers.
The updated drivers are:
• HPE-UX tape driver (estape)—used for data path failover
• HPE-UX media changer driver (eschgr)—used for control path failover
• HPE-UX SCSI stack driver (esctl)—used for data path and control path failover
To download and install the HPE StoreEver Tape advanced path failover drivers for HPE-UX 11.31:
1. Get the latest HPE-UX patches from h20566.www2.hp.com/portal/site/hpsc/patch/home.
Note: To access and download HPE-UX patches, you must have:
1. An HPE Passport account (a sign-in link is provided).
2. An active HPE support agreement linked to your HPE Support Center profile. The active Hewlett Packard Enterprise support agreement must:
– Cover the specific HPE operating systems linked to your HPE Support Center user profile
– Include software updates or previous-version support privileges
Links are provided to view your current patch access privileges or to contact Hewlett Packard Enterprise.
2. To locate the patches, search for estape, eschgr, and esctl or the patch number, and then check the patch details to see if there is a superseding patch.
3. To install the advanced path failover drivers, use the standard HPE-UX kernel patch installation process to install the following patches on HPE-UX servers running HPE-UX 11.31:
– HPE-UX tape driver patch (estape)—PHKL_43680 or a superseding patch
– HPE-UX media changer driver patch (eschgr)—PHKL_43681 or a superseding patch
– HPE-UX SCSI stack (mass storage stack) driver patch (esctl)—PHKL_43819 or a superseding patch
4. The server will automatically reboot as part of the installation process.
You can use ioscan to view the tape and library (media changer) devices connected to the HPE-UX server. The device special file (DSF) is listed as the last item in the description:
ioscan -knNfC tape
ioscan -knNfC autoch
Finding the lockdown path
The load-balance policy used to route data on multiple paths to a tape drive or library is called the "path-lockdown" policy. Use the scsimgr get_info command to see the current lockdown path for a library. For example:
scsimgr get_info -D /dev/rchgr/autoch35
STATUS INFORMATION FOR LUN /dev/rchgr/autoch38
...
LUN path used when policy is path_lockdown = 0/0/0/9/0/0/1.0x50014382c6e4f009.0x1000000000000
scsimgr get_attr -D /dev/rtape/tape28_BEST
SCSI ATTRIBUTES FOR LUN /dev/rtape/tape28_BEST
name = lpt_lockdown
current = 0/0/0/9/0/0/0.0x100000e00222a6c1.0x2000000000000
default =
saved =
For additional information, see the HPE-UX man pages scsimgr (1M), ioscan (1M), mknod (2), mksf (1M), and rmsf (1M).
Troubleshooting advanced path failover for HPE-UX 11.31
Advanced path failover errors are logged in the /var/adm/syslog/syslog.log file as part of the default SCSI I/O tracing function of HPE-UX. You can use standard file-viewing commands, including cat, vi, dmesg, and tail -f, to view syslog.log.
Enabling or disabling advanced path failover for HPE-UX 11.31
Advanced path failover is disabled by default. When advanced path failover is disabled, the driver operates as if the device is not capable of using the advanced path failover feature.
When advanced failover is enabled for the library or tape drive, the device resets itself and must be opened using the device special file before the driver will recognize it as an advanced path failover device and use the failover features of the driver. Opening the device is generally done by the host applications.
You can enable or disable advanced path failover using the library's web-based interface. For instructions, refer to the following sections of the HPE StoreEver Tape Libraries Failover User Guide:
• Enabling control path failover, under Configuring failover for the HPE StoreEver ESL G3 Tape Libraries
• Enabling data path failover, under Configuring failover for HPE StoreEver MSL6480 Tape Libraries
When advanced path failover is disabled, the passive control paths to the library will go into an error state (NO_HW) in the ioscan (1M) command output. These stale entries do not affect the function of the library. To clear these errors so the device can be accessed using its DSF:
1. On the HPE-UX host, run rmsf -H on the lunpath hardware paths that are in the NO_HW state. For example:
rmsf -H 0/4/0/0/0/1.0x50014380023560d4.0x1000000000000
2. Run ioscan -kfNH <HBA path>. For example:
ioscan -kfNH 0/4/0/0/0
Hewlett Packard Enterprise recommends enabling or disabling advanced path failover only when the library is not opened by any applications. If advanced path failover is disabled while an application is accessing the library, all of the library's lunpaths will go offline and I/O requests to the library will fail.
For more detailed information about installing the HPE StoreEver Tape advanced path failover drivers for HPE-UX 11.31 servers, refer to the HPE StoreEver Tape Libraries Failover User Guide—specifically, the section titled Installing and using HPE-UX advanced path failover drivers.
HPE-UX Server best practices
HPE-UX 11.31 can experience poor I/O performance on VxFS file systems due to memory blocking during high system memory usage. The HPE-UX 11.31 kernel subsystems and file I/O data cache can consume up to 90 percent of system memory during normal operation. When a heavy file I/O application, such as data protection and archiving software, starts, memory usage can reach close to 100 percent. Under such conditions, if VxFS attempts to allocate additional memory for inode caching, the result can be memory blocking and subsequently poor file I/O performance. In extreme conditions, this scenario can cause data protection and archiving software to time out during file system reads, which can result in backup job failures.
Poor I/O performance resolution
To avoid backup job failures due to memory blocking, modify the kernel tunable parameter vx_ninode. The vx_ninode parameter determines the number of inodes in the inode table, which helps VxFS with caching. By default, the size of the inode cache is decided (auto-tuned) at boot time by VxFS, depending on the amount of physical memory in the machine. When modifying the value of vx_ninode, HPE recommends the following:
Table 4 Tuning vx_ninode
PHYSICAL MEMORY OR KERNEL AVAILABLE MEMORY VXFS INODE CACHE (NUMBER OF INODES)
1 GB 16384
2 GB 32768
3 GB 65536
> 3 GB 131072
To determine the current value of vx_ninode, run the following command at the shell prompt:
/usr/sbin/kctune vx_ninode
To set vx_ninode to 32768, run the following command at the shell prompt:
/usr/sbin/kctune vx_ninode=32768
Note: The kernel tunable parameters filecache_min and filecache_max control the amount of physical memory that can be used for caching file data during system I/O operations. By default, these parameters are automatically determined by the system to better balance memory usage among file system I/O-intensive processes and other types of processes. The values of these parameters can be lowered to allow a larger percentage of memory to be used for purposes other than file system I/O caching. Whether to modify these parameters depends on the nature of the applications running on the system.
HPE-UX 11.23: Disabling rewind-on-close devices with st_san_safe
Turning on the HPE-UX 11.23 kernel tunable parameter st_san_safe disables tape DSFs that are rewind-on-close. This prevents utilities like mt from rewinding a tape that is in use by another utility.
Some applications or utilities require rewind-on-close DSFs (for example, the frecover utility that comes with HPE-UX). In this case, disabling rewind-on-close devices renders the utility unusable. Most data protection and archiving software, such as HPE Data Protector, can be configured to use SCSI reserve/release, which protects them from rogue rewinds by other utilities. Consider the requirements of your data protection and archiving environment when determining whether to enable st_san_safe.
To determine if rewind-on-close devices are currently disabled, enter:
/usr/sbin/kctune st_san_safe
If the value of st_san_safe is 1, rewind-on-close devices are disabled. If the value is 0, rewind-on-close devices are enabled. To disable rewind-on-close devices, enter:
/usr/sbin/kctune st_san_safe=1
Oracle Solaris Server: Solaris 10 Update 11 (SPARC), Solaris 10 Update 11 (x86/x64), Solaris 11.2 (SPARC), Solaris 11.2 (x64)
How to enable the iSCSI Software Initiator
1. For Solaris 10 Update 11 (SPARC) and 11.2 (SPARC), enable the iSCSI services using the command:
svcadm enable network/iscsi/initiator
2. Verify the iSCSI services are running:
svcs -a | grep 'iscsi/initiator'
online 10:10:28 svc:/network/iscsi/initiator:default
3. For earlier versions of Solaris, enable the iSCSI services using the command:
svcadm -v enable iscsi_initiator
svc:/network/iscsi_initiator:default enabled.
4. Verify the iSCSI services are running:
svcs -a | grep iscsi_initiator
online 10:10:28 svc:/network/iscsi_initiator:default
The command line tool for configuring the Solaris iSCSI Software Initiator is iscsiadm. Detailed instructions for iscsiadm can be found in the iscsiadm man pages. If using iscsiadm to configure the Solaris iSCSI Software Initiator, run the following command to find the iSCSI initiator node for the Solaris host: iscsiadm list initiator-node
To discover available iSCSI target devices on an HPE Storage System, type the following commands, where x.x.x.x is the IP address of the HPE Storage System:
iscsiadm add discovery-address x.x.x.x:3260
iscsiadm list discovery-address
Discovery Address: x.x.x.x:3260
iscsiadm modify discovery -t enable
iscsiadm list discovery
List the configured iSCSI target devices using the following command: iscsiadm list target
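The discovery sequence above can be wrapped in a dry-run helper that prints the commands for a given storage system IP so they can be reviewed first; the function name is hypothetical and nothing is executed.

```shell
# Hypothetical dry-run: print the iscsiadm discovery sequence for the
# storage system at $1 (nothing is executed; run the output after review).
iscsi_discovery_plan() {
  ip="$1"
  printf '%s\n' \
    "iscsiadm add discovery-address ${ip}:3260" \
    "iscsiadm list discovery-address" \
    "iscsiadm modify discovery -t enable" \
    "iscsiadm list discovery" \
    "iscsiadm list target"
}
```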
Oracle Solaris Server best practices
Troubleshooting with the cfgadm utility
• Getting the status of FC devices using cfgadm:
cfgadm -al
Example output for the above command:
This output shows a media changer at LUN 0 for the 100000e0022229fa9 WWN, and tape and disk devices at LUN 0 for other WWNs. The devices are connected, have been configured, and are ready for use.
The cfgadm -al -o show_FCP_dev command can be used to show the devices for all LUNs of each Ap_Id.
• Fixing a device with an "unusable" condition
If the condition field of a device in the cfgadm output is "unusable", the device is in a state in which the server cannot use it. This may have been caused by a hardware issue. In this case, do the following to resolve the issue:
– Resolve the hardware issue so the device is available to the server.
– After the hardware issue has been resolved, use the cfgadm utility to verify device status and to mend the status if necessary.
• Use cfgadm to get device status: cfgadm -al
– For a device that is "unusable", use cfgadm to unconfigure the device and then re-configure the device. For example (this is an example only; your device WWN will be different):
cfgadm -c unconfigure c4::100000e0022286ec
cfgadm -f -c configure c4::100000e0022286ec
– Use cfgadm again to verify that the condition of the device is no longer "unusable": cfgadm -al
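The recovery steps above can be scripted against the cfgadm output. A sketch, assuming the Ap_Id is the first field and the condition is the last field of each line; the function name is hypothetical.

```shell
# Sketch: scan `cfgadm -al` output (on stdin) for attachment points whose
# condition column reads "unusable" and print the unconfigure/configure
# command pair shown above (nothing is executed).
emit_unusable_fixes() {
  awk '$NF == "unusable" {
    printf "cfgadm -c unconfigure %s\n", $1
    printf "cfgadm -f -c configure %s\n", $1
  }'
}

# Usage on a live host (review the output, then run the printed commands):
# cfgadm -al | emit_unusable_fixes
```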
IBM AIX Server: AIX 6.1 (TL9), AIX 7.1 (TL3)
Installing the HBA device driver
Install the IBM (5729/5735/5273/5758/5759/5773/5774) HBA and restart the server.
1. Ensure that the HBA is recognized. At the shell prompt, type: lsdev -Cc adapter
There is a line in the output similar to the following: fcs0 Available 1D-08 FC Adapter
If the adapter is not recognized, check that the correct HBA fileset (driver) is installed:
6228: lslpp -L | grep devices.pci.df1000f7
6239: lslpp -L | grep devices.pci.df1080f9
5716: lslpp -L | grep devices.pci.df1000fa
5759: lslpp -L | grep devices.pci.df1000fd
5773: lslpp -L | grep devices.pciex.df1000fe
5774: lslpp -L | grep devices.pciex.df1000fe
There are lines in the output for lslpp similar to the following for a 6239 HBA:
devices.pci.df1080f9.diag 5.1.0.1 C F PCI-X FC Adapter Device
devices.pci.df1080f9.rte 5.1.0.1 C F PCI-X FC Adapter Device
2. For information about the HBA, such as the WWN, execute the following command: lscfg -vl fcs0
The output will look similar to the following:
3. To see the version of microcode (firmware) being run by the HBA, use the following command: lsmcode -c -d <device>
Microcode and other updates can be found at ibm.com/support/fixcentral.
4. After the HBA has successfully logged into the SAN fabric and the necessary zoning is configured, configure the HBA and devices within the fabric. At the prompt, type: cfgmgr -l <devicename> -v
Note: Running cfgmgr without a -l argument may generate a "devices.fcp.changer" error. This is a result of the cfgmgr device scan receiving a response from the autochanger device, for which AIX does not have a specific driver. Under these conditions, the error message does not indicate a problem and is for information only.
5. Within the command, <devicename> is the name from the output of the lsdev command in step 1, such as fcs0.
6. To ensure all tape device files are available, at the prompt type: lsdev -HCc tape
7. By default, AIX creates tape devices with a fixed block length. To change the devices to have variable block lengths, at the prompt type: chdev -l <tapedevice> -a block_size=0
8. Configuration of the tape devices (where tape devices are rmt0, rmt1, and so on) is complete.
Note: HPE LTO tape drives use the IBM host tape driver. When properly configured, a device listing will show the tape device as follows. For IBM native HBAs: Other FC SCSI Tape Drive.
For non-IBM native HBAs: Other SCSI Tape Drive.
9. To configure Fast I/O Failure for Fibre Channel devices after link events in the SAN, change the fast fail parameter as in the example below: chdev -l fscsiN -a fc_err_recov=fast_fail
Within the command, the N in fscsiN is the same number as in the adapter name from the output of the lsdev command in step 1 (e.g., fcs0 would be fscsi0).
IBM AIX Server best practices
Persistent binding
To prevent device shifting after a host reboot, you can enable the dynamic tracking feature in earlier versions of AIX. AIX 7 and above should have this parameter enabled by default.
To enable dynamic tracking of FC devices, set this attribute to dyntrk=yes as shown in the example: chdev -l fscsiN -a dyntrk=yes
Within the command, the N in fscsiN is the same number as in the adapter name from the output of the lsdev command in step 1 (e.g., fcs0 would be fscsi0).
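Steps 7 and 9 above, together with the dynamic tracking setting, can be combined into a dry-run helper that turns lsdev output into the corresponding chdev commands. The function name is hypothetical, the column layout is an assumption, and nothing is executed.

```shell
# Hypothetical dry-run: read `lsdev` output on stdin and print the chdev
# commands for variable block length (tape devices rmtN) and for fast I/O
# failure plus dynamic tracking (fscsiN derived from adapters fcsN).
emit_aix_tuning() {
  awk '
    $1 ~ /^rmt[0-9]+$/ { printf "chdev -l %s -a block_size=0\n", $1 }
    $1 ~ /^fcs[0-9]+$/ {
      n = substr($1, 4)
      printf "chdev -l fscsi%s -a fc_err_recov=fast_fail\n", n
      printf "chdev -l fscsi%s -a dyntrk=yes\n", n
    }'
}

# Usage: { lsdev -Cc adapter; lsdev -HCc tape; } | emit_aix_tuning
```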
Note: For an IBM Virtual I/O Server (VIOS) running AIX logical partitions (LPARs), when using N-Port ID Virtualization (NPIV) with AIX LPARs, it is strongly recommended to upgrade VIOS to version 2.2.3.4 or greater.
Virtual machine support
VM software is used for partitioning, consolidating, and managing computing resources, allowing multiple unmodified operating systems and their applications to run in VMs that share physical resources. Each VM represents a complete system with processors, memory, networking, storage, and BIOS. See table 5 for tape and disk support for virtualization products.
Table 5. VM tape/VTL/NAS support
Columns: VM product | StoreEver direct attached SCSI | StoreEver direct attached SAS | StoreEver FC & FCoE SAN | StoreOnce VTL | StoreOnce iSCSI VTL | StoreOnce Catalyst over Ethernet (CoE) | StoreOnce Catalyst over Fibre Channel (CoFC) | StoreOnce NAS | Support notes
Citrix XenServer Host: No, No. No support statement for tape at this time.
Citrix XenServer Guest VM: No, Yes, Yes, No, Yes. For iSCSI tape devices, the iSCSI Software Initiator must run in the VM operating system. D2D NAS shares must be accessed directly in the VM operating system, not attached through the hypervisor.
HPEVM Host: Yes, No, Yes, Yes, Yes, No, Yes. Tape drive/media changer must not be attached to a guest VM while being used by the host.
HPEVM Guest VM: Yes, No, Yes, Yes, Yes, No, Yes. Tape drive/media changer must only be attached to a single guest VM at a time.
Hyper-V Host: Yes, Yes, Yes, Yes, Yes, No, Yes.
Hyper-V Guest VM: No, No, No, Yes, Yes, No, Yes. For iSCSI tape/media changer devices, the iSCSI Software Initiator must run in the VM operating system. D2D NAS shares must be accessed directly in the VM operating system, not attached through the hypervisor.
VMware ESX Host: Yes, No, No (7), No (7), No, No, No. Direct attached SCSI tape drive is multi-target only (no multi-LUN) and must not be attached to a VM while being used by the host. HPE does not test or validate direct attached SCSI tape on VMware ESX/ESXi hosts and does not provide support for tape drives and tape libraries in such configurations (8).
7. SAN tape devices (FC and iSCSI) are not supported directly by the VMware ESX host; vStorage API for Data Protection uses a backup server and VM software snapshots to allow FC and iSCSI backups.
8. For ESX 4.1 Server tape support, see the ESX 4.1 Fibre Channel SAN Configuration Guide. For ESX 5.0 Server tape support, see the ESXi 5.0 vSphere Storage Guide. For ESX 5.1 Server tape support, see the ESXi 5.1 vSphere Storage Guide. For ESX 5.5 Server tape support, see the ESXi 5.5 vSphere Storage Guide.
Table 5. VM tape/VTL/NAS support (continued)
VMware Guest VM: Yes, No, No, Yes, Yes, No (9), Yes. Direct attached SCSI tape drive is multi-target only (no multi-LUN) and must only be attached to a single VM at a time.
For iSCSI tape/media changer devices, the iSCSI Software Initiator must run in the VM operating system.
D2D NAS shares must be accessed directly in the VM operating system, not attached through the hypervisor.
HPE does not test or validate direct attached SCSI tape on VMware guest VMs and does not provide support for tape drives and tape libraries in such configurations.
VMware vStorage API for Data Protection: Yes, Yes, Yes, Yes, Yes, No, Yes. FC SANs and shared tape devices are limited to a physical backup server.
Note: Be sure to do the following:
• Refer to your data protection and archiving software documentation for supported VM backup methods.
• Refer to the VM documentation for supported backup devices.
VMware Server
Note: VMware does not support ESXi SAN attached tape devices. VMware vStorage APIs for Data Protection (VADP) with an off-host backup server can be used to manage SAN devices.
• VADP offloads backup responsibility from ESXi hosts to a dedicated backup server or servers, which reduces the load on ESXi hosts. VADP provides full-image backup and restore capabilities for all VMs, and file-based backups for Microsoft Windows and Linux VMs.
• VMs can also be set up as backup servers with locally mapped NAS shares or iSCSI backup devices. Refer to your data protection and archiving software documentation for VM support.
• VMs can also be set up for LAN backup, the same as a regular client. Refer to your data protection and archiving software documentation for details.
• For recommendations on VMware VM backup and recovery to HPE StoreOnce Backup, go to hpe.com/storage/BURACompatibility, scroll down to Data Agile BURA Solution White Papers, then click on the VMware hyperlink across from White Papers - Databases and Virtual Machines to view the associated white papers.
9. Yes when using HPE StoreOnce Recovery Manager Central (RMC) only.
HPE Integrity Virtual Machines
Hewlett Packard Enterprise supports, certifies, and sells HPE Integrity Virtual Machines (HPEVM) virtualization software on HPE Integrity servers.
HPEVM is an application installed on an HPE-UX server that allows multiple unmodified operating systems (HPE-UX, Windows, and Linux) and their applications to run in VMs that share physical resources.
The HPE Virtual Server Environment (VSE) for HPE Integrity provides an automated infrastructure that can adapt in seconds with mission-critical reliability. HPE VSE allows you to optimize server utilization in real time by creating virtual servers that can automatically grow and shrink based on business priorities and service levels.
Note: The HPE Integrity VM host and VMs do support FC SAN connected tape, Virtual Library Systems (VLS) devices, and HPE StoreOnce backup systems.
• Off-host backups using HPE storage array hardware mirroring or snapshots can be used to shorten backup windows and off-load resources required for backup.
• VMs can also be set up for LAN backup, the same as a regular client or media host. Refer to your data protection and archiving software documentation for details.
Microsoft Hyper-V
Note: Hewlett Packard Enterprise does not test or support Hyper-V VMs with SAN or direct attach tape drives. The Hyper-V host or a backup server can be used to manage such devices.
• The Volume Shadow Copy Service (VSS) Hyper-V writer can be used to quiesce Windows VMs and create a snapshot on the Hyper-V host volume. VMs that cannot be quiesced can be placed in the Saved state before snapshot creation. The snapshots are then used for image or file backup of the VMs. If a VM was placed in the Saved state, Hyper-V will return the VM to its original state. Review your data protection and archiving software documentation for details.
• VMs can also be set up for LAN backup, the same as a regular client. Refer to your data protection and archiving software documentation for details.
• VMs can also be set up as backup servers with locally mapped NAS shares or iSCSI backup devices. Refer to your data protection and archiving software documentation for VM support.
© Copyright 2015 Hewlett Packard Enterprise Development LP. The information contained herein is subject to change without notice. The only warranties for Hewlett Packard Enterprise products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. Hewlett Packard Enterprise shall not be liable for technical or editorial errors or omissions contained herein.
Citrix is a registered trademark of Citrix Systems, Inc. and/or one or more of its subsidiaries, and may be registered in the United States Patent and Trademark Office and in other countries. Linux is the registered trademark of Linus Torvalds in the U.S. and other countries. McAfee is a trademark or registered trademark of McAfee, Inc. in the United States and other countries. Microsoft, Windows, and Windows Server are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries. Oracle is a registered trademark of Oracle and/or its affiliates. Red Hat is a registered trademark of Red Hat, Inc. in the United States and other countries. SAP HANA is the trademark or registered trademark of SAP SE in Germany and in several other countries. VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions.
4AA5-7983ENW, December 2015
Installing backup software and patches
After all components on the SAN are logged in and configured, the system is ready for the installation of any supported backup software. Refer to the installation guide for your particular software package, or contact the vendor for detailed installation procedures and requirements. After installing the backup software, check with the software vendor for the latest updates and patches. If any updates or patches exist for your backup software, install them now.
Learn more at hpe.com/storage/StoreEver
Storage HBAs with Linux servers
Hewlett Packard Enterprise supports the Linux in-box driver (the driver supplied with the operating system distribution). However, not all hardware configurations support the in-box driver. To determine if your configuration supports the in-box driver, see the HBAs, CNAs, Flex Fabric Adapters and Server LOMs Support Matrix (Linux, Citrix, VMware, and Windows), which is available on the HPE SPOCK website. You must sign up for an HPE Passport to access SPOCK.
Whether you are using the in-box drivers or the out-of-box drivers, Hewlett Packard Enterprise recommends that you install the HPE Fibre Channel Enablement Kit because it provides additional libraries and configuration utilities to enable HPE Fibre Channel Storage Arrays to work properly with Linux.
Note: If you are using any HPE management applications, you need the HBA API libraries that come with the HPE-fc-enablement RPM.
Downloading and installing the Fibre Channel Enablement Kit for Linux
Note: There has been a change to the enablement kits released after 29 April 2014; they are now vendor-specific kits. Hewlett Packard Enterprise recommends that you uninstall any previous kits prior to installation of the latest version of the enablement kit.
1. From an Internet browser, go to hpe.com.
2. Click on Support.
3. Under Product Support, click HPE Servers, Storage and Networking.
4. In the Enter product name or number box, enter your HBA model (for example, AP770A, AJ763A, AJ764A, CN1000Q, CN1100E, or NC553m) and click Go.
5. Click Get drivers, software & firmware.
6. Select the Linux Server operating system version of the ProLiant system in which the HBA is installed.
7. Select the Software - Storage Controllers - FC HBA hyperlink and click on the HPE Fibre Channel Enablement Kit for Linux (American, International).
8. Below the details for the driver you should notice several tabs. Select the Release Notes tab, then scroll to verify if there are any upgrade requirements or to view additional information.
9. Select the Installation Instructions tab to verify how to install the HPE Fibre Channel Enablement Kit for Linux. Be sure to copy or save the installation instructions.
10. Click on the Download tab to copy the RPM file to your server.
11. Browse to the directory that you downloaded the RPM to.
12. Follow the Installation Instructions that you copied or saved in step 9.
13. A reboot is required after the installation for the updates to take effect and hardware stability to be maintained.
14. Verify that the host has successfully discovered all the expected devices (tape drives, library robotic devices, and disk-based backup devices) using one of the following methods:
– Review the devices listed from running the command cat /proc/scsi/scsi
– Review the output from the sg_inq command (which requires that sg3_utils is installed) for any of the /dev/sg devices listed in the output of the sg_map command. See figure 3 for an example.
Figure 3. Verifying devices using sg_map and sg_inq commands
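As a quick sanity check on the /proc/scsi/scsi method, the tape and medium-changer entries can be counted and compared against the expected device total. A minimal sketch (hypothetical function name) keyed on the standard "Type:" field of that file:

```shell
# Sketch: count tape (Sequential-Access) and Medium Changer entries in
# /proc/scsi/scsi content supplied on stdin.
count_tape_devices() {
  grep -Ec 'Type:[[:space:]]*(Sequential-Access|Medium Changer)'
}

# Usage: count_tape_devices < /proc/scsi/scsi
```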
Installing the HPE StoreEver Tape advanced path failover drivers
Red Hat Enterprise Linux Server 6.2 (x86_64), 6.3 (x86_64), 6.4 (x86_64), 6.5 (x86_64), and 6.6 (x86_64)
The advanced path failover drivers for Linux replace the normal SCSI Tape and SCSI Generic drivers. They pass all SCSI commands for devices that do not support advanced path failover through the same code path that is followed when the standard drivers are loaded, and route commands for devices that do support failover through the new PFO driver.
1. Go to hpe.com/support/storage.
2. Select Tape Storage.
3. Click Enterprise Class Tape Libraries (for ESL G3) or Tape Libraries (for MSL6480).
4. Click HPE StoreEver ESL G3 Tape Libraries or HPE StoreEver MSL6480 Tape Library.
5. In the Download options tab, click Get drivers, software & firmware.
6. For the ESL G3, select your product. For MSL6480, skip to the next step.
7. Under Operating systems, select OS Independent.
8. Expand Driver - Storage Tape, then select the appropriate driver hyperlink.
9. Click Obtain software for the HPE StoreEver High Availability Failover Driver for your operating system.
10. Click on Select. An HPE Passport account (a sign-in link is provided) is required.
11. After logging in using your HPE Passport, complete the required fields, then read and accept the software license agreement. Click Next.
12. On the following page, select the Download tab for the HPE StoreEver High Availability Failover Driver for the version of Red Hat that is installed on your server.
13. To install the drivers, run the following command:
rpm -ivh <filename>.rpm
14. In some cases the server will need to be rebooted to complete the installation. Check the instructions provided by the RPM file output and reboot the server if requested.
15. The driver revision number indicates the build date of the driver and can be viewed by running:
cat /proc/scsi/sg/version
16. You can view the status of a device that is controlled by the failover driver by reading a file in the sys file system. For example, to see the path status for /dev/sg3:
cat /sys/class/pfo/pfo3/paths
Enabling advanced path failover on a device while the driver is running
If a device has any advanced path failover feature disabled when advanced path failover is enabled, the device will reset itself, removing the old /dev file. When the device comes back up, it will be recognized as an advanced path failover device and will then operate normally as one. It may not have the same /dev file name as before the change.
Disabling advanced path failover on a device while the driver is running
Disabling advanced path failover while a device is running is not recommended because the paths will not be cleanly removed and then reassociated. If advanced path failover is disabled on any device, the Linux server will need to be rebooted. When possible:
1. Power down the Linux server cleanly.
2. Disable advanced path failover on the device.
3. Boot the Linux server.
For more detailed information related to installing the HPE StoreEver Tape advanced path failover drivers for Red Hat Linux servers, refer to the HPE StoreEver Tape Libraries Failover User Guide, specifically the section titled Installing and using Linux advanced path failover drivers.
Additional SG device files
In most environments, the default number of SG device files is sufficient to support all of the required devices. In larger SAN environments, if the default number of SG device files is fewer than the combined total of disk and tape devices being allocated to the server, then additional device files need to be created. SG device files are preferable to the standard symbolic tape (ST) device files because ST SCSI timeout values may not be sufficient in length to support some tape operations.
To create additional SG device files, perform the following: mknod /dev/sgX c 21 X
X signifies the number of the device file that does not already exist. For additional command options, see the mknod man page.
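For a batch of devices, the mknod invocations can be generated rather than typed individually. A dry-run sketch (hypothetical function name) that prints one command per number in a range:

```shell
# Hypothetical dry-run: print the mknod commands for SG device numbers
# $1 through $2 (nothing is created; review, then run the output as root,
# skipping any /dev/sgX files that already exist).
emit_sg_mknods() {
  first="$1"; last="$2"
  i="$first"
  while [ "$i" -le "$last" ]; do
    echo "mknod /dev/sg$i c 21 $i"
    i=$((i + 1))
  done
}
```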
Red Hat and SUSE Linux Server best practices
Rewind commands being issued by rebooted Linux hosts
Device discovery that occurs as part of a normal Linux server boot operation can cause a SCSI rewind command to be issued to all attached tape drives. If the data protection and archiving software does not employ SCSI reserve/release and the rewind command is received while the tape drive is busy writing, the result is a corrupted tape header and an unusable piece of backup media.
This issue could manifest itself in several ways:
• A failed verify operation
• A failed restore operation
• The inability to mount a tape and read the tape header
If a backup verification is not completed, the normal backup process might not detect that an issue exists.
Tape devices not discovered and configured across server reboots
Tape drives can disappear from Linux servers after the host reboots when using older versions of the HBA drivers. Adding the line "hp_rescan -a" to /etc/rc.d/rc.local resolves the issue. The hp_rescan utility was previously included and installed with older versions of the HPE Linux FCoE/FC Driver Kit.
Note: The latest versions of the Fibre Channel Enablement for Linux kits no longer include the HPE fibreutils package, which contains the hp_rescan utility. The fibreutils package can be manually downloaded and installed by following steps 1-6 from the section Downloading and installing the Fibre Channel Enablement Kit for Linux. For step 5, select Software - Storage Controllers - FC HBA.
This issue, which affects Red Hat installations and, intermittently, some SUSE Linux installations, is understood to be an issue with the mid-layer SCSI driver and its interaction with SCSI-2 tape automation products. The permanent resolution to this issue is to upgrade to the latest FC driver kit.
Enable iSCSI target devices to remain persistent across system reboots
To enable the iSCSI target devices to remain persistent across system reboots, the open-iscsi service must be configured to run at system startup. This can be done by issuing the following commands.
For Red Hat 7 and SUSE 12:
systemctl enable iscsid.service
systemctl restart iscsid.service
For earlier versions of Red Hat: chkconfig iscsi on
For earlier versions of SUSE: chkconfig open-iscsi on
To verify that this configuration change has been accepted, run the following command.
For Red Hat 7 and SUSE 12: systemctl -a | grep iscsi
iscsi-shutdown.service   error    inactive dead     Logout off all iSCSI sessions on shutdown
iscsi.service            loaded   inactive dead     Login and scanning of iSCSI devices
iscsid.service           loaded   active   running  Open-iSCSI
iscsiuio.service         loaded   active   running  iSCSI UserSpace I/O driver
iscsid.socket            loaded   active   running  Open-iSCSI iscsid Socket
iscsiuio.socket          loaded   active   running  Open-iSCSI iscsiuio Socket
For earlier versions of Red Hat: chkconfig --list iscsi
iscsi 0:off 1:off 2:off 3:on 4:off 5:on 6:off
SUSE servers: chkconfig --list open-iscsi
open-iscsi 0:off 1:off 2:off 3:on 4:off 5:on 6:off
LUNs shifting after reboot
The Linux 2.6 kernel and later enhanced the management of attached devices through the introduction of udev. The udev device manager provides users with a persistent naming process for all devices across reboots. For details on how to configure udev, refer to the appropriate Linux distribution documentation.
If your data protection and archiving software requires persistent device mapping, use the software's device configuration wizard to ensure proper configuration.
Recommended changes to queue depth and timeout values
Changes to queue depth and timeout values are recommended when operating HPE StoreEver ESL G3 Tape Libraries and HPE StoreEver MSL Tape Libraries with Linux-based hosts. Recommended changes are as follows.
• Queue depth
The queue depth when operating the MSL6480 or MCB Version 1 ESL G3 libraries should be set to one (1), as there is only one active robot to complete Move Medium commands. With dual-robot MCB Version 2 ESL G3 libraries, the queue depth should be set to two (2), as the library has two active robots that can complete Move Medium commands.
HPE LTO drives are capable of handling command queues of four or five commands, but if hosts continue to send commands past that amount, the drive (or the library being hosted by that drive) will start to return status messages saying that the queue is full and the host should wait 500 ms. If the host doesn't stop sending commands at this point, the delays in returning status for commands can be long enough that the drive appears hung. As such, care should be taken to ensure that the queue depth is the correct length to avoid this scenario, preferably by using the recommended queue depths provided above.
With Linux-based hosts, this command shows the queue depth setting for each generic SCSI device:
find /sys/class/scsi_generic/*/device/queue_depth -exec grep -H "" {} \;
If required, the sg_inq command can help correlate SCSI generic mapping to specific devices: sg_inq /dev/sg<N>
Note: <N> is the sg number provided in the output from the previous command.
• Timeouts
Most applications increase the timeout value for motion commands but continue to rely on the default timeout value for Read Element Status commands. This can especially be of concern in larger partitioned libraries, where multiple Read Element Status commands to the different partitions are outstanding and the default timeout value is encountered before a response is provided. Depending on whether udev rules are in effect, the default timeout value on Linux-based hosts tends to be either thirty (30) or sixty (60) seconds. Given all of the above, it is recommended that the default timeout value be changed to twenty minutes on all Linux-based hosts operating HPE LTO drives in HPE StoreEver ESL G3 Tape Libraries and HPE StoreEver MSL Tape Libraries, in order to allow multiple commands to complete successfully without hitting the default timeout value.
With Linux-based hosts, this command shows the current default timeout value:
find /sys/class/scsi_generic/*/device/timeout -exec grep -H "" {} \;
If required, the sg_inq command can help correlate SCSI generic mapping to specific devices: sg_inq /dev/sg<N>
Note: <N> is the sg number provided in the output from the previous command.
The detailed procedure for making the recommended changes to the command queuing and default timeout values for Linux-based hosts can be viewed in the following Engineering Advisory: HPE StoreEver ESL G3 Tape Libraries, MSL Tape Libraries, and 1/8 G2 Autoloaders - Recommended Changes to Queue Depth and Timeout Values With Linux-Based Hosts.
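The two settings can be applied in one pass over the sg devices. In this sketch the sysfs base path is a parameter so the logic can be exercised against a scratch directory; on a live host it would be /sys/class/scsi_generic, with values chosen per the recommendations above.

```shell
# Sketch: write a queue depth and a timeout (in seconds) to every sg
# device under the given sysfs-style base directory.
apply_sg_tuning() {
  base="$1"; qdepth="$2"; timeout="$3"
  for dev in "$base"/sg*; do
    [ -d "$dev/device" ] || continue
    echo "$qdepth"  > "$dev/device/queue_depth"
    echo "$timeout" > "$dev/device/timeout"
  done
}

# Example for an MSL6480 (queue depth 1) and the 20-minute timeout:
# apply_sg_tuning /sys/class/scsi_generic 1 1200
```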
HPE-UX Server
Installing HBA drivers in the kernel
HPE-UX 11i v2 (11.23, IA-64)
1. The drivers schgr, sctl, and stape must all be installed in the kernel. To see if these drivers are installed, enter the following command:
/usr/sbin/kcmodule schgr sctl stape
The following example shows output from kcmodule where the stape driver is not installed:
Module   State    Cause
schgr    static   explicit
sctl     static   depend
stape    unused
If one or more of the above drivers is in the unused state, they must be installed in the kernel. If they are all installed (static state), proceed to the section Final host configurations.
2. Use kcmodule to install modules in the kernel. For example, to install the stape module, run the following command:
/usr/sbin/kcmodule stape=static
Enter yes to back up the current kernel configuration file and initiate the new kernel build.
3. Reboot the server to activate the new kernel:
cd /
/usr/bin/shutdown -r now
HPE-UX 11i v3 (11.31, IA-64)
1. The drivers schgr, sctl, stape, eschgr, esctl, and estape must all be installed in the kernel. To see if these drivers are installed, enter the following command:
/usr/sbin/kcmodule sctl esctl schgr eschgr stape estape
The following example shows output from kcmodule where the stape driver is not installed:
Module   State    Cause
sctl     static   best
esctl    static   depend
schgr    static   best
eschgr   static   best
stape    unused
estape   static   best
If one or more of the above drivers is in the unused state, they must be installed in the kernel. If they are all installed (static state), proceed to the section Final host configurations.
2. Use kcmodule to install modules in the kernel. For example, to install the stape module, use the following command:
/usr/sbin/kcmodule stape=static
Enter yes to back up the current kernel configuration file and initiate the new kernel build.
3. Reboot the server to activate the new kernel:
cd /
/usr/bin/shutdown -r now
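The kcmodule check in step 1 of either procedure can be scripted: parse the kcmodule output and print an install command for every driver still unused. The function name is hypothetical; review the output before running it.

```shell
# Sketch: read `kcmodule` output on stdin (Module/State/Cause columns, as
# in the examples above) and print a kcmodule command for each module
# whose state is "unused".
emit_kcmodule_installs() {
  awk '$2 == "unused" { printf "/usr/sbin/kcmodule %s=static\n", $1 }'
}

# Usage: /usr/sbin/kcmodule schgr sctl stape | emit_kcmodule_installs
```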
Installing the HPE-UX iSCSI Software Initiator
The iSCSI Software Initiator is located at the HPE Software Depot.
1. Go to software.hp.com.
2. Enter iSCSI Software Initiator in the Search Software Depot box located on the upper right side of the website.
3. When the search results show iSCSI Software Initiator, click on Select. An HPE Passport account (a sign-in link is provided) is required.
4. After logging in using your HPE Passport, complete the required fields, scroll down, then read and accept the software license agreement for the order. Click Next.
5. Under Documentation, click on the Download tab for the Installation Instructions to download instructions for using the Software Distributor tool to install the iSCSI Software Initiator.
6. Under Software, click on the Download tab for the iSCSI Software Initiator version that you would like to download.
7. After installing the iSCSI Software Initiator and rebooting, you can verify that the installation was successful by running the following command:
swlist iSCSI-00
If the HPE-UX iSCSI Software Initiator is installed correctly, the output will be:
HPE-UX 11.23:
Initializing...
Contacting target "localhost"...
Target: localhost
iSCSI-00 B.11.23.03e HPE-UX iSCSI Software Initiator
iSCSI-00.ISCSI-SWD B.11.23.03e HPE-UX iSCSI Software Initiator
HPE-UX 11.31:
Initializing...
Contacting target "localhost"...
Target: localhost
iSCSI-00 B.11.31.01 HPE-UX iSCSI Software Initiator
iSCSI-00.ISCSI-SWD B.11.31.01 HPE-UX iSCSI Software Initiator
Final host configurations
1. Run ioscan to verify that the host detects the tape devices:
ioscan
For HPE-UX 11.23 legacy device special files (DSFs) or persistent DSFs, run the following commands:
ioscan -fnkC tape
ioscan -fnkC autoch
2. For HPE-UX 11.31 persistent DSFs, run the following commands:
ioscan -fnNkC tape
ioscan -fnNkC autoch
Note: Some data protection and archiving software might not currently support HPE-UX 11.31 persistent DSFs for tape. Review your data protection and archiving software documentation for more information.
3. To verify that the host detects iSCSI devices, issue the ioscan command as follows for HPE-UX 11.23:
ioscan -fnC iscsi
Issue the ioscan command as follows for HPE-UX 11.31:
ioscan -fnNC iscsi
If the software is installed correctly, the generated output will look similar to this:
Class    I    HW Path    Driver    SW State    HW Type    Description
=====================================================================
iscsi 0 2550 iscsi CLAIMED VIRTBUS iSCSI Virtual Node
4. If no device files have been installed, enter the following commands:
insf -C tape -e
insf -C autoch -e
The command line tool for configuring the HPE-UX iSCSI Software Initiator is iscsiutil. Detailed instructions for iscsiutil can be found in the iscsiutil man pages. If using iscsiutil to configure the HPE-UX iSCSI Software Initiator, add the path for iscsiutil and other iSCSI executables to the root path:
PATH=$PATH:/opt/iscsi/bin
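To keep repeated profile sourcing from duplicating the entry, the PATH update can be guarded; a minimal POSIX shell sketch (the /opt/iscsi/bin location is taken from the text above):

```shell
# append_path: append a directory to PATH only if it is not already present.
# Guards against PATH growing each time a shell profile is re-sourced.
append_path() {
    case ":$PATH:" in
        *:"$1":*) ;;              # already on PATH; nothing to do
        *) PATH="$PATH:$1" ;;     # append the new directory
    esac
}

# Make iscsiutil and the other iSCSI executables reachable.
append_path /opt/iscsi/bin
export PATH
```

Calling append_path a second time with the same directory leaves PATH unchanged.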
You should now be able to find the iSCSI initiator node for the HPE-UX host:
iscsiutil -l
To discover available iSCSI target devices on an HPE Storage System, type the following command, where xxxx is the IP address of the HPE Storage System:
iscsiutil -a -I xxxx
Installing the HPE StoreEver Tape advanced path failover drivers for HPE-UX 11.31
Advanced path failover for HPE-UX is implemented by updating HPE-UX drivers to support advanced path failover with the LTO-6 tape drives. The drivers function as both failover and non-failover drivers.
The updated drivers are:
• HPE-UX tape driver (estape), used for data path failover
• HPE-UX media changer driver (eschgr), used for control path failover
• HPE-UX SCSI stack driver (esctl), used for data path and control path failover
To download and install the HPE StoreEver Tape advanced path failover drivers for HPE-UX 11.31:
1. Get the latest HPE-UX patches from h20566.www2.hp.com/portal/site/hpsc/patch/home.
Note: To access and download HPE-UX patches you must have:
1. An HPE Passport account (a sign-in link is provided).
2. An active HPE support agreement linked to your HPE Support Center profile. The active Hewlett Packard Enterprise support agreement must:
- Cover the specific HPE Operating Systems linked to your HPE Support Center user profile
- Include software updates or previous version support privileges
Links are provided to view your current patch access privileges or to contact Hewlett Packard Enterprise.
2. To locate the patches, search for estape, eschgr, and esctl or the patch number, and then check whether a superseding patch exists.
3. To install the advanced path failover drivers, use the standard HPE-UX kernel patch installation process to install the following patches on the HPE-UX Servers running HPE-UX 11.31:
- HPE-UX tape driver patch (estape): PHKL_43680 or superseding patch
- HPE-UX media changer driver patch (eschgr): PHKL_43681 or superseding patch
- HPE-UX SCSI stack (mass storage stack) driver patch (esctl): PHKL_43819 or superseding patch
4. The server will automatically reboot as part of the installation process.
You can use ioscan to view the tape and library (media changer) devices connected to the HPE-UX Server. The device special file (DSF) is listed as the last item in the description:
ioscan -knNfC tape
ioscan -knNfC autoch
Finding the lockdown path
The load-balance policy used to route data on multiple paths to a tape drive or library is called the "path-lockdown" policy. Use the scsimgr get_info command to see the current lockdown path for a library. For example:
scsimgr get_info -D /dev/rchgr/autoch35
STATUS INFORMATION FOR LUN /dev/rchgr/autoch35
...
LUN path used when policy is path_lockdown = 0/0/0/9/0/0/1.0x50014382c6e4f009.0x1000000000000
scsimgr get_attr -D /dev/rtape/tape28_BEST
SCSI ATTRIBUTES FOR LUN /dev/rtape/tape28_BEST
name = lpt_lockdown
current = 0/0/0/9/0/0/0.0x100000e00222a6c1.0x2000000000000
default =
saved =
For additional information, see the HPE-UX man pages: scsimgr (1M), ioscan (1M), mknod (2), mksf (1M), rmsf (1M).
Troubleshooting advanced path failover for HPE-UX 11.31
Advanced path failover errors are logged in the /var/adm/syslog/syslog.log file as part of the default SCSI I/O tracing function of HPE-UX. You can use standard file viewing commands, including cat, vi, dmesg, and tail -f, to view syslog.log.
Enabling or disabling advanced path failover for HPE-UX 11.31
Advanced path failover is disabled by default. When advanced path failover is disabled, the driver operates as if the device is not capable of using the advanced path failover feature.
When advanced failover is enabled for the library or tape drive, the device resets itself and must be opened using the device special file before the driver will recognize it as an advanced path failover device and use the failover features of the driver. Opening the device is generally done by the host applications.
You can enable or disable advanced path failover using the library web-based interface. For instructions, refer to the following sections of the HPE StoreEver Tape Libraries Failover User Guide:
• Enabling control path failover, under Configuring failover for the HPE StoreEver ESL G3 Tape Libraries
• Enabling data path failover, under Configuring failover for HPE StoreEver MSL6480 Tape Libraries
When advanced path failover is disabled, the passive control paths to the library will go into an error state (NO_HW) in the ioscan (1M) command output. These stale entries do not affect the function of the library. To clear these errors so the device can be accessed using its DSF:
1. On the HPE-UX host, run rmsf -H on the lunpath hardware paths that are in the NO_HW state. For example:
rmsf -H 0/4/0/0/0/1.0x50014380023560d4.0x1000000000000
2. Run ioscan -kfNH <HBA path>. For example:
ioscan -kfNH 0/4/0/0/0
Hewlett Packard Enterprise recommends only enabling or disabling advanced path failover when the library is not opened by any applications. If advanced path failover is disabled while an application is accessing the library, all of the library's lunpaths will go offline and I/O requests to the library will fail.
For more detailed information related to installing the HPE StoreEver Tape advanced path failover drivers for HPE-UX 11.31 Servers, refer to the HPE StoreEver Tape Libraries Failover User Guide, specifically the section titled Installing and using HPE-UX advanced path failover drivers.
HPE-UX Server best practices
HPE-UX 11.31 can experience poor I/O performance on VxFS file systems due to memory blocking during high system memory usage. The HPE-UX 11.31 kernel subsystems and file I/O data cache can consume up to 90 percent of system memory during normal operation. When a heavy file I/O application, such as data protection and archiving software, starts, the memory usage can reach close to 100 percent. Under such conditions, if VxFS attempts to allocate additional memory for inode caching, this can result in memory blocking and subsequent poor file I/O performance. In extreme conditions, this scenario can cause data protection and archiving software to time out during file system reads, which could result in backup job failures.
Poor I/O performance resolution
To avoid backup job failures due to memory blocking, modify the kernel tunable parameter vx_ninode. The vx_ninode parameter determines the number of inodes in the inode table to help VxFS in caching. By default, the size of the inode cache is decided (auto-tuned) at boot time by VxFS depending on the amount of physical memory in the machine. When modifying the value of vx_ninode, HPE recommends the following:
Table 4. Tuning vx_ninode
PHYSICAL MEMORY OR KERNEL AVAILABLE MEMORY    VXFS INODE CACHE (NUMBER OF INODES)
1 GB                                          16384
2 GB                                          32768
3 GB                                          65536
> 3 GB                                        131072
To determine the current value of vx_ninode, run the following at the shell prompt:
/usr/sbin/kctune vx_ninode
To set vx_ninode to 32768, run the following command at the shell prompt:
/usr/sbin/kctune vx_ninode=32768
Note: The kernel tunable parameters filecache_min and filecache_max control the amount of physical memory that can be used for caching file data during system I/O operations. By default, these parameters are automatically determined by the system to better balance the memory usage among the file system I/O intensive processes and other types of processes. The values of these parameters can be lowered to allow a larger percentage of memory to be used for purposes other than file system I/O caching. Determining whether or not to modify these parameters depends on the nature of the applications running on the system.
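The Table 4 recommendations can be expressed as a small helper; an illustrative POSIX shell sketch (the kctune invocation is shown commented out because it must run as root on an HPE-UX host):

```shell
# vx_ninode_for_mem: print the Table 4 recommended vx_ninode value
# for a host with the given physical memory size in whole GB.
vx_ninode_for_mem() {
    if   [ "$1" -le 1 ]; then echo 16384
    elif [ "$1" -le 2 ]; then echo 32768
    elif [ "$1" -le 3 ]; then echo 65536
    else                      echo 131072
    fi
}

# Example: pick and report the value for a 2 GB host.
value=$(vx_ninode_for_mem 2)
echo "recommended vx_ninode: $value"
# /usr/sbin/kctune vx_ninode="$value"   # apply on the HPE-UX host (as root)
```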
HPE-UX 11.23: Disabling rewind-on-close devices with st_san_safe
Turning on the HPE-UX 11.23 kernel tunable parameter st_san_safe disables tape DSFs that are rewind-on-close. This will prevent utilities like mt from rewinding a tape that is in use by another utility.
Some applications or utilities require rewind-on-close DSFs (for example, the frecover utility that comes with HPE-UX). In this case, disabling rewind-on-close devices renders the utility unusable. Most data protection and archiving software, such as HPE Data Protector, can be configured to use SCSI reserve/release, which protects them from rogue rewinds by other utilities. The requirements of your data protection and archiving environment should be considered when determining whether or not to enable st_san_safe.
To determine if rewind-on-close devices are currently disabled, enter:
/usr/sbin/kctune st_san_safe
If the value of st_san_safe is 1, then rewind-on-close devices are disabled. If the value is 0, then rewind-on-close devices are enabled. To disable rewind-on-close devices, enter:
/usr/sbin/kctune st_san_safe=1
Oracle Solaris Server
Solaris 10 Update 11 (SPARC), Solaris 10 Update 11 (x86/x64), Solaris 11.2 (SPARC), Solaris 11.2 (x64)
How to enable the iSCSI Software Initiator
1. For Solaris 10 Update 11 (SPARC) and Solaris 11.2 (SPARC), enable the iSCSI services using the command:
svcadm enable network/iscsi/initiator
2. Verify the iSCSI services are running:
svcs -a | grep 'iscsi/initiator'
online 10:10:28 svc:/network/iscsi/initiator:default
3. For earlier versions of Solaris, enable the iSCSI services using the command:
svcadm -v enable iscsi_initiator
svc:/network/iscsi_initiator:default enabled.
4. Verify the iSCSI services are running:
svcs -a | grep iscsi_initiator
online 10:10:28 svc:/network/iscsi_initiator:default
The command line tool for configuring the Solaris iSCSI Software Initiator is iscsiadm. Detailed instructions for iscsiadm can be found in the iscsiadm man pages. If using iscsiadm to configure the Solaris iSCSI Software Initiator, run the following command to find the iSCSI initiator node for the Solaris host:
iscsiadm list initiator-node
To discover available iSCSI target devices on an HPE Storage System, type the following commands, where xxxx is the IP address of the HPE Storage System:
iscsiadm add discovery-address xxxx:3260
iscsiadm list discovery-address
Discovery Address: xxxx:3260
iscsiadm modify discovery -t enable
iscsiadm list discovery
List the configured iSCSI target devices using the following command:
iscsiadm list target
Oracle Solaris Server best practices
Troubleshooting with the cfgadm utility
• Getting the status of FC devices using cfgadm:
cfgadm -al
Example output for the above command:
This output shows a media changer at LUN 0 for the 100000e0022229fa9 WWN, and tape and disk devices at LUN 0 for other WWNs. The devices are connected, have been configured, and are ready for use.
The cfgadm -al -o show_FCP_dev command can be used to show the devices for all LUNs of each Ap_Id.
• Fixing a device with an "unusable" condition
If the condition field of a device in the cfgadm output is "unusable", then the device is in a state such that the server cannot use the device. This may have been caused by a hardware issue. In this case, do the following to resolve the issue:
- Resolve the hardware issue so the device is available to the server.
- After the hardware issue has been resolved, use the cfgadm utility to verify device status and to mend the status if necessary.
• Use cfgadm to get device status:
cfgadm -al
- For a device that is "unusable", use cfgadm to unconfigure and then re-configure the device. For example (this is an example only; your device WWN will be different):
cfgadm -c unconfigure c4::100000e0022286ec
cfgadm -f -c configure c4::100000e0022286ec
- Use cfgadm again to verify that the condition of the device is no longer "unusable":
cfgadm -al
IBM AIX Server
AIX 6.1 (TL9), AIX 7.1 (TL3)
Installing the HBA device driver
Install the IBM (5729/5735/5273/5758/5759/5773/5774) HBA and restart the server.
1. Ensure that the HBA is recognized. At the shell prompt, type:
lsdev -Cc adapter
There is a line in the output similar to the following:
fcs0 Available 1D-08 FC Adapter
If the adapter is not recognized, check that the correct HBA fileset (driver) is installed:
6228: lslpp -L | grep devices.pci.df1000f7
6239: lslpp -L | grep devices.pci.df1080f9
5716: lslpp -L | grep devices.pci.df1000fa
5759: lslpp -L | grep devices.pci.df1000fd
5773: lslpp -L | grep devices.pciex.df1000fe
5774: lslpp -L | grep devices.pciex.df1000fe
There are lines in the output for lslpp similar to the following for a 6239 HBA:
devices.pci.df1080f9.diag  5.1.0.1  C  F  PCI-X FC Adapter Device
devices.pci.df1080f9.rte   5.1.0.1  C  F  PCI-X FC Adapter Device
2. For information about the HBA, such as the WWN, execute the following command:
lscfg -vl fcs0
The output will look similar to the following.
3. To see the version of microcode (firmware) being run by the HBA, use the following command:
lsmcode -c -d <device>
Microcode and other updates can be found at ibm.com/support/fixcentral.
4. After the HBA has successfully logged into the SAN fabric and the necessary zoning is configured, configure the HBA and devices within the fabric. At the prompt, type:
cfgmgr -l <devicename> -v
Note: Running cfgmgr without a -l argument may generate a "devices.fcp.changer" error. This is a result of the cfgmgr device scan receiving a response from the auto-changer device, for which AIX does not have a specific driver. Under these conditions, the error message does not indicate a problem and is for information only.
5. Within the command, <devicename> is the name from the output of the lsdev command in step 1, such as fcs0.
6. To ensure all tape device files are available, at the prompt type:
lsdev -HCc tape
7. By default, AIX creates tape devices with a fixed block length. To change the devices to have variable block lengths, at the prompt type:
chdev -l <tapedevice> -a block_size=0
8. Configuration of the tape devices (where tape devices are rmt0, rmt1, and so on) is complete.
Note: HPE LTO tape drives use the IBM host tape driver. When properly configured, a device listing will show the tape device as follows. For IBM native HBAs: Other FC SCSI Tape Drive.
For non-IBM native HBAs: Other SCSI Tape Drive.
9. To configure Fast I/O Failure for Fibre Channel devices after link events in the SAN, change the fast fail parameter as in the example below:
chdev -l fscsiX -a fc_err_recov=fast_fail
Within the command, the X in fscsiX is the same number as in the name from the output of the lsdev command in step 1 (e.g., fcs0 would be fscsi0).
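Step 7 lends itself to scripting when many drives are attached. A hedged sketch that only prints the chdev commands for review rather than running them (the lsdev output format is as shown in step 1; run the printed commands as root on the AIX host):

```shell
# tape_chdev_cmds: read `lsdev -Cc tape` output on stdin and print the
# step 7 chdev command for each rmtN tape device found.
tape_chdev_cmds() {
    awk '$1 ~ /^rmt[0-9]+$/ { print "chdev -l " $1 " -a block_size=0" }'
}

# Usage on an AIX host (review the output before piping it to sh as root):
#   lsdev -Cc tape | tape_chdev_cmds
```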
IBM AIX Server best practices
Persistent binding
To prevent device shifting after a host reboot, you can enable the dynamic tracking feature in earlier versions of AIX. AIX 7 and above should have this parameter enabled by default.
To enable dynamic tracking of FC devices, set this attribute to dyntrk=yes as shown in the example:
chdev -l fscsiX -a dyntrk=yes
Within the command, the X in fscsiX is the same number as in the name from the output of the lsdev command in step 1 (e.g., fcs0 would be fscsi0).
Note: For an IBM Virtual I/O Server (VIOS) running AIX logical partitions (LPARs), when using N_Port ID Virtualization (NPIV) with AIX LPARs, it is strongly recommended to upgrade VIOS to version 2.2.3.4 or greater.
Virtual machine support
VM software is used for partitioning, consolidating, and managing computing resources, allowing multiple unmodified operating systems and their applications to run in VMs that share physical resources. Each VM represents a complete system with processors, memory, networking, storage, and BIOS. See Table 5 for tape and disk support for virtualization products.
Table 5. VM tape/VTL/NAS support
VM Product StoreEver Direct Attached SCSI
StoreEver Direct Attached SAS
StoreEver FC & FCoE SAN StoreOnce VTL
StoreOnce iSCSI VTL
StoreOnce Catalyst over Ethernet (CoE)
StoreOnce Catalyst over Fibre Channel (CoFC)
StoreOnce NAS
Support Notes
Citrix XenServer Host No No No support statement for tape at this time
Citrix XenServer Guest VM
No Yes Yes No Yes For iSCSI tape devices, the iSCSI Software Initiator must run in the VM operating system.
D2D NAS shares must be accessed directly in the VM operating system, not attached through the hypervisor.
HPEVM Host Yes No Yes Yes Yes No Yes Tape drive/media changer must not be attached to a guest VM while being used by the host.
HPEVM Guest VM Yes No Yes Yes Yes No Yes Tape drive/media changer must only be attached to a single guest VM at a time.
Hyper-V Host Yes Yes Yes Yes Yes No Yes
Hyper-V Guest VM No No No Yes Yes No Yes For iSCSI tape/media changer devices, the iSCSI Software Initiator must run in the VM operating system.
D2D NAS shares must be accessed directly in the VM operating system, not attached through the hypervisor.
VMware ESX Host Yes No No(7) No(7) No No No Direct attached SCSI tape drive is multi-target only (no multi-LUN) and must not be attached to a VM while being used by the host. HPE does not test or validate direct attached SCSI tape on VMware ESX/ESXi hosts and does not provide support for tape drives and tape libraries in such configurations.(8)
7. SAN tape devices (FC and iSCSI) are not supported directly by the VMware ESX host; use vStorage API for Data Protection with a backup server and VM software snapshots to allow FC and iSCSI backups.
8. For ESX 4.1 Server tape support, see the ESX 4.1 Fibre Channel SAN Configuration Guide. For ESX 5.0 Server tape support, see the ESXi 5.0 vSphere Storage Guide. For ESX 5.1 Server tape support, see the ESXi 5.1 vSphere Storage Guide. For ESX 5.5 Server tape support, see the ESXi 5.5 vSphere Storage Guide.
VMware Guest VM Yes No No Yes Yes No(9) Yes Direct attached SCSI tape drive is multi-target only (no multi-LUN) and must only be attached to a single VM at a time.
For iSCSI tape/media changer devices, the iSCSI Software Initiator must run in the VM operating system.
D2D NAS shares must be accessed directly in the VM operating system, not attached through the hypervisor.
HPE does not test or validate direct attached SCSI tape on VMware guest VMs and does not provide support for tape drives and tape libraries in such configurations.
VMware vStorage API for Data Protection
Yes Yes Yes Yes Yes No Yes FC SANs and shared tape devices are limited to a physical backup server.
Note: Be sure to do the following:
• Refer to your data protection and archiving software documentation for supported VM backup methods.
• Refer to the VM documentation for supported backup devices.
VMware Server
Note: VMware does not support ESXi SAN attached tape devices. VMware vStorage APIs for Data Protection (VADP) with an off-host backup server can be used to manage SAN devices.
• VADP offloads backup responsibility from ESXi hosts to a dedicated backup server or servers, which reduces the load on ESXi hosts. VADP provides full-image backup and restore capabilities for all VMs, and file-based backups for Microsoft Windows and Linux VMs.
• VMs can also be set up as backup servers with locally mapped NAS shares or iSCSI backup devices. Refer to your data protection and archiving software documentation for VM support.
• VMs can also be set up for LAN backup, the same as a regular client. Refer to your data protection and archiving software documentation for details.
• For recommendations on VMware VM backup and recovery to HPE StoreOnce Backup, go to hpe.com/storage/BURACompatibility, scroll down to Data Agile BURA Solution White Papers, then click the VMware hyperlink across from White Papers - Databases and Virtual Machines to view the associated white papers.
9. Yes, when using HPE StoreOnce Recovery Manager Central (RMC) only.
HPE Integrity Virtual Machines
Hewlett Packard Enterprise supports, certifies, and sells HPE Integrity Virtual Machines (HPEVM) virtualization software on HPE Integrity servers.
HPEVM is an application installed on an HPE-UX Server that allows multiple unmodified operating systems (HPE-UX, Windows, and Linux) and their applications to run in VMs that share physical resources.
The HPE Virtual Server Environment (VSE) for HPE Integrity provides an automated infrastructure that can adapt in seconds with mission-critical reliability. HPE VSE allows you to optimize server utilization in real time by creating virtual servers that can automatically grow and shrink based on business priorities and service levels.
Note: The HPE Integrity VM host and VMs do support FC SAN connected tape, Virtual Library Systems (VLS) devices, and HPE StoreOnce backup systems.
• Off-host backups using HPE storage array hardware mirroring or snapshots can be used to shorten the backup windows and off-load resources required for backup.
• VMs can also be set up for LAN backup, the same as a regular client or media host. Refer to your data protection and archiving software documentation for details.
Microsoft Hyper-V
Note: Hewlett Packard Enterprise does not test or support Hyper-V VMs with SAN or direct attach tape drives. The Hyper-V host or a backup server can be used to manage such devices.
• The Volume Shadow Copy Service (VSS) Hyper-V writer can be used to quiesce Windows VMs and create a snapshot on the Hyper-V host volume. VMs that cannot be quiesced can be placed in the Saved state before snapshot creation. The snapshots are then used for image or file backup of the VMs. If a VM was placed in the Saved state, Hyper-V will return the VM to its original state. Review your data protection and archiving software documentation for details.
• VMs can also be set up for LAN backup, the same as a regular client. Refer to your data protection and archiving software documentation for details.
• VMs can also be set up as backup servers with locally mapped NAS shares or iSCSI backup devices. Refer to your data protection and archiving software documentation for VM support.
copy Copyright 2015 Hewlett Packard Enterprise Development LP The information contained herein is subject to change without notice The only warranties for Hewlett Packard Enterprise products and services are set forth in the express warranty statements accompanying such products and services Nothing herein should be construed as constituting an additional warranty Hewlett Packard Enterprise shall not be liable for technical or editorial errors or omissions contained herein
Citrix is a registered trademark of Citrix Systems, Inc. and/or one or more of its subsidiaries, and may be registered in the United States Patent and Trademark Office and in other countries. Linux is the registered trademark of Linus Torvalds in the U.S. and other countries. McAfee is a trademark or registered trademark of McAfee, Inc. in the United States and other countries. Microsoft, Windows, and Windows Server are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries. Oracle is a registered trademark of Oracle and/or its affiliates. Red Hat is a registered trademark of Red Hat, Inc. in the United States and other countries. SAP HANA is the trademark or registered trademark of SAP SE in Germany and in several other countries. VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions.
4AA5-7983ENW December 2015
Installing backup software and patches
After all components on the SAN are logged in and configured, the system is ready for the installation of any supported backup software. Refer to the installation guide for your particular software package or contact the vendor for detailed installation procedures and requirements. After installing the backup software, check with the software vendor for the latest updates and patches. If any updates or patches exist for your backup software, install them now.
Learn more at hpe.com/storage/StoreEver
Figure 3. Verifying devices using the sg_map and sg_inq commands
Installing the HPE StoreEver Tape advanced path failover drivers
Red Hat Enterprise Linux Server 6.2 (x86_64), 6.3 (x86_64), 6.4 (x86_64), 6.5 (x86_64), and 6.6 (x86_64)
The advanced path failover drivers for Linux replace the normal SCSI Tape and SCSI Generic drivers. The advanced path failover drivers for Linux pass all SCSI commands for devices that do not support advanced path failover through the same code path that is followed when the standard drivers are loaded, and route commands for devices that do support failover through the new PFO driver.
1. Go to hpe.com/support/storage.
2. Select Tape Storage.
3. Click Enterprise Class Tape Libraries (for ESL G3) or Tape Libraries (for MSL6480).
4. Click HPE StoreEver ESL G3 Tape Libraries or HPE StoreEver MSL6480 Tape Library.
5. In the Download options tab, click Get drivers, software & firmware.
6. For the ESL G3, select your product. For MSL6480, skip to the next step.
7. Under Operating systems, select OS Independent.
8. Expand Driver - Storage Tape, then select the appropriate driver hyperlink.
9. Click Obtain software for the HPE StoreEver High Availability Failover Driver for your operating system.
10. Click Select. An HPE Passport account is required (a sign-in link is provided).
11. After logging in using your HPE Passport, complete the required fields, then read and accept the software license agreement. Click Next.
12. On the following page, select the Download tab for the HPE StoreEver High Availability Failover Driver for the version of Red Hat that is installed on your server.
13. To install the drivers, run the following command:
rpm -ivh <filename>.rpm
14. In some cases, the server will need to be rebooted to complete the installation. Check the instructions provided by the RPM file output and reboot the server if requested.
15. The driver revision number indicates the build date of the driver and can be viewed by running:
cat /proc/scsi/sg/version
16. You can view the status of a device that is controlled by the failover driver by reading a file in the /sys file system. For example, to see the path status for /dev/sg3:
cat /sys/class/pfo/pfo3/paths
Enabling advanced path failover on a device while the driver is running
If a device has any advanced path failover feature disabled when advanced path failover is enabled, the device will reset itself, removing the old /dev file. When the device comes back up, it will be recognized as an advanced path failover device and will then operate normally as one. It may not have the same /dev file name as before the change.
Disabling advanced path failover on a device while the driver is running
Disabling advanced path failover while a device is running is not recommended, because the paths will not be cleanly removed and then reassociated. If advanced path failover is disabled on any device, the Linux server will need to be rebooted. When possible:
1. Power down the Linux server cleanly.
2. Disable advanced path failover on the device.
3. Boot the Linux server.
For more detailed information related to installing the HPE StoreEver Tape advanced path failover drivers for Red Hat Linux Servers, refer to the HPE StoreEver Tape Libraries Failover User Guide, specifically the section titled Installing and using Linux advanced path failover drivers.
Additional SG device files
In most environments, the default number of SG device files is sufficient to support all of the required devices. In larger SAN environments, if the default number of SG device files is fewer than the combined total of disk and tape devices being allocated to the server, then additional device files need to be created. SG device files are preferable to the standard SCSI tape (st) device files due to SCSI timeout values that may not be sufficient in length to support some tape operations.
To create additional SG device files, perform the following:
mknod /dev/sgX c 21 X
X signifies the number of the device file that does not already exist. For additional command options, see the mknod man page.
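The mknod step can be scripted for a range of device numbers; a minimal sketch that emits the commands for review rather than running them (apply the reviewed output as root):

```shell
# sg_mknod_cmds: print the mknod commands needed to create the /dev/sgN
# character device files (major number 21) for N in FIRST..LAST that do
# not exist yet. Pipe the reviewed output to sh as root to apply it.
sg_mknod_cmds() {
    n=$1
    while [ "$n" -le "$2" ]; do
        [ -e "/dev/sg$n" ] || echo "mknod /dev/sg$n c 21 $n"
        n=$((n + 1))
    done
}

# Example: cover device numbers 16 through 31.
sg_mknod_cmds 16 31
```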
Red Hat and SUSE Linux Server best practices
Rewind commands being issued by rebooted Linux hosts
Device discovery that occurs as part of a normal Linux server boot operation can cause a SCSI rewind command to be issued to all attached tape drives. If the data protection and archiving software does not employ SCSI reserve/release and the rewind command is received while the tape drive is busy writing, the result is a corrupted tape header and an unusable piece of backup media.
This issue could manifest itself in several ways:
• A failed verify operation
• A failed restore operation
• The inability to mount a tape and read the tape header
If a backup verification is not completed, the normal backup process might not detect that an issue exists.
Tape devices not discovered and configured across server reboots
Tape drives can disappear from Linux servers after the host reboots when using older versions of the HBA drivers. Adding the line "hp_rescan -a" to /etc/rc.d/rc.local resolves the issue. The utility hp_rescan was previously included and installed with older versions of the HPE Linux FCoE/FC Driver Kit.
Note: The latest versions of the Fibre Channel Enablement for Linux kits no longer include the HPE fibreutils package, which contains the hp_rescan utility. The fibreutils package can be manually downloaded and installed by following steps 1-6 from the section Downloading and installing the Fibre Channel Enablement Kit for Linux. For step 5, select Software - Storage Controllers - FC HBA.
This issue, which affects Red Hat installations and, intermittently, some SUSE Linux installations, is understood to be an issue with the mid-layer SCSI driver and its interaction with SCSI-2 tape automation products. The permanent resolution to this issue is to upgrade to the latest FC driver kit.
Enable iSCSI target devices to remain persistent across system reboots
To enable the iSCSI target devices to remain persistent across system reboots, the open-iscsi service must be configured to run at system startup. This can be done by issuing the following commands.
For Red Hat 7 and SUSE 12:
systemctl enable iscsid.service
systemctl restart iscsid.service
For earlier versions of Red Hat:
chkconfig iscsi on
For earlier versions of SUSE:
chkconfig open-iscsi on
To verify that this configuration change has been accepted, run the following command.
For Red Hat 7 and SUSE 12:
systemctl -a | grep iscsi
iscsi-shutdown.service  error  inactive dead     Logout off all iSCSI sessions on shutdown
iscsi.service           loaded inactive dead     Login and scanning of iSCSI devices
iscsid.service          loaded active   running  Open-iSCSI
iscsiuio.service        loaded active   running  iSCSI UserSpace I/O driver
iscsid.socket           loaded active   running  Open-iSCSI iscsid Socket
iscsiuio.socket         loaded active   running  Open-iSCSI iscsiuio Socket
For earlier versions of Red Hat:
chkconfig --list iscsi
iscsi  0:off  1:off  2:off  3:on  4:off  5:on  6:off
SUSE servers:
chkconfig --list open-iscsi
open-iscsi  0:off  1:off  2:off  3:on  4:off  5:on  6:off
LUNs shifting after reboot
The Linux 2.6 kernel and later enhanced the management of attached devices through the introduction of udev. The udev device manager provides a persistent naming process for all devices across reboots. For details on how to configure udev, refer to the appropriate Linux distribution documentation.
If your data protection and archiving software requires persistent device mapping, use the software's device configuration wizard to ensure proper configuration.
Technical white paper Page 23
Recommended changes to queue depth and timeout values
Changes to queue depth and timeout values are recommended when operating HPE StoreEver ESL G3 Tape Libraries and HPE StoreEver MSL Tape Libraries with Linux-based hosts. Recommended changes are as follows:
• Queue depth
The queue depth when operating the MSL6480 or MCB Version 1 ESL G3 libraries should be set to one (1) as there is only one active robot to complete Move Medium commands With dual-robot MCB Version 2 ESL G3 libraries the queue depth should be set to two (2) as the library has two active robots that can complete Move Medium commands
HPE LTO drives can handle command queues of four or five commands, but if hosts continue to send commands beyond that amount, the drive (or the library hosted by that drive) will start to return status messages indicating that the queue is full and the host should wait 500 ms. If the host does not stop sending commands at this point, the delays in returning command status can be long enough that the drive appears hung. Take care to ensure that the queue depth is the correct length to avoid this scenario, preferably by using the recommended queue depths provided above.
With Linux-based hosts, this command shows the queue depth setting for each generic SCSI device:
find /sys/class/scsi_generic/*/device/queue_depth -exec grep -H "" {} \;
If required, the sg_inq command can help correlate the SCSI generic mapping to specific devices:
sg_inq /dev/sg<n>
Note: <n> is the sg number provided in the output from the previous command.
• Timeouts
Most applications increase the timeout value for motion commands but continue to rely on the default timeout value for Read Element Status commands. This is of particular concern in larger partitioned libraries, where multiple Read Element Status commands to different partitions can be outstanding and the default timeout is reached before a response is provided. Depending on whether udev rules are in effect, the default timeout value on Linux-based hosts tends to be either thirty (30) or sixty (60) seconds. It is therefore recommended that the default timeout value be changed to twenty minutes on all Linux-based hosts operating HPE LTO drives in HPE StoreEver ESL G3 Tape Libraries and HPE StoreEver MSL Tape Libraries, allowing multiple commands to complete successfully without hitting the default timeout value.
With Linux-based hosts, this command shows the current default timeout value:
find /sys/class/scsi_generic/*/device/timeout -exec grep -H "" {} \;
If required, the sg_inq command can help correlate the SCSI generic mapping to specific devices:
sg_inq /dev/sg<n>
Note: <n> is the sg number provided in the output from the previous command.
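Both sysfs attributes can be set in one pass with a helper like the one below. This is a sketch, not taken from the Engineering Advisory; note that values written this way do not persist across reboots unless also applied by a udev rule or boot script:

```shell
# set_sg_attr ATTR VALUE [SYSFS_ROOT]: write VALUE to ATTR under every
# /sys/class/scsi_generic/*/device directory. Run as root on a live host;
# SYSFS_ROOT is parameterized so the logic can be exercised against a test tree.
set_sg_attr() {
    attr="$1"; value="$2"; root="${3:-/sys}"
    for f in "$root"/class/scsi_generic/*/device/"$attr"; do
        [ -e "$f" ] || continue
        echo "$value" > "$f"
    done
}
# Examples: set_sg_attr queue_depth 1    # single-robot MSL6480 / MCB Version 1 ESL G3
#           set_sg_attr timeout 1200     # 20-minute default timeout
```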
The detailed procedure for making the recommended changes to the command queuing and default timeout values for Linux-based hosts can be viewed in the following Engineering Advisory: HPE StoreEver ESL G3 Tape Libraries, MSL Tape Libraries, and 1/8 G2 Autoloaders - Recommended Changes to Queue Depth and Timeout Values With Linux-Based Hosts.
HPE-UX Server
Installing HBA drivers in the kernel
HPE-UX 11i v2 (11.23, IA-64)
1. The drivers schgr, sctl, and stape must all be installed in the kernel. To see if these drivers are installed, enter the following command:
/usr/sbin/kcmodule schgr sctl stape
The following example shows output from kcmodule where the stape driver is not installed
Module  State   Cause
schgr   static  explicit
sctl    static  depend
stape   unused
If one or more of the above drivers is in the unused state, they must be installed in the kernel. If they are all installed (static state), proceed to the section Final host configurations.
2. Use kcmodule to install modules in the kernel. For example, to install the stape module, run the following command:
/usr/sbin/kcmodule stape=static
Enter yes to back up the current kernel configuration file and initiate the new kernel build
3. Reboot the server to activate the new kernel:
cd /
/usr/bin/shutdown -r now
HPE-UX 11i v3 (11.31, IA-64)
1. The drivers schgr, sctl, stape, eschgr, esctl, and estape must all be installed in the kernel. To see if these drivers are installed, enter the following command:
/usr/sbin/kcmodule sctl esctl schgr eschgr stape estape
The following example shows output from kcmodule where the stape driver is not installed
Module  State   Cause
sctl    static  best
esctl   static  depend
schgr   static  best
eschgr  static  best
stape   unused
estape  static  best
If one or more of the above drivers is in the unused state, they must be installed in the kernel. If they are all installed (static state), proceed to the section Final host configurations.
2. Use kcmodule to install modules in the kernel. For example, to install the stape module, use the following command:
/usr/sbin/kcmodule stape=static
Enter yes to back up the current kernel configuration file and initiate the new kernel build
3. Reboot the server to activate the new kernel:
cd /
/usr/bin/shutdown -r now
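The kcmodule checks in step 1 of both procedures can be scripted; the filter below (an illustrative helper, not an HPE tool) reads kcmodule output and prints any module still in the unused state, so the result can be fed back to kcmodule name=static:

```shell
# unused_modules: read kcmodule output (Module/State/Cause columns) on stdin
# and print the names of modules whose State column is "unused".
unused_modules() {
    awk 'NR > 1 && $2 == "unused" { print $1 }'
}
# On HPE-UX: /usr/sbin/kcmodule sctl esctl schgr eschgr stape estape | unused_modules
```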
Installing the HPE-UX iSCSI Software Initiator
The iSCSI Software Initiator is located at the HPE Software Depot.
1. Go to software.hp.com.
2. Enter iSCSI Software Initiator in the Search Software Depot box located on the upper right side of the website.
3. When the search results show iSCSI Software Initiator, click Select. An HPE Passport account (a sign-in link is provided) is required.
4. After logging in with your HPE Passport, complete the required fields, scroll down, then read and accept the software license agreement for the order. Click Next.
5. Under Documentation, click the Download tab for the Installation Instructions to download instructions for using the Software Distributor tool to install the iSCSI Software Initiator.
6. Under Software, click the Download tab for the iSCSI Software Initiator version that you would like to download.
7. After installing the iSCSI Software Initiator and rebooting, you can verify that the installation was successful by running the following command:
swlist iSCSI-00
If the HPE-UX iSCSI Software Initiator is installed correctly, the output will be:
For HPE-UX 11.23:
# Initializing... Contacting target "localhost"... Target: localhost
iSCSI-00             B.11.23.03e   HPE-UX iSCSI Software Initiator
iSCSI-00.ISCSI-SWD   B.11.23.03e   HPE-UX iSCSI Software Initiator
For HPE-UX 11.31:
# Initializing... Contacting target "localhost"... Target: localhost
iSCSI-00             B.11.31.01    HPE-UX iSCSI Software Initiator
iSCSI-00.ISCSI-SWD   B.11.31.01    HPE-UX iSCSI Software Initiator
Final host configurations
1. Run ioscan to verify that the host detects the tape devices:
ioscan
For HPE-UX 11.23 legacy device special files (DSFs) or persistent DSFs, run the following commands:
ioscan -fnkC tape
ioscan -fnkC autoch
2. For HPE-UX 11.31 persistent DSFs, run the following commands:
ioscan -fnNkC tape
ioscan -fnNkC autoch
Note: Some data protection and archiving software might not currently support HPE-UX 11.31 persistent DSFs for tape. Review your data protection and archiving software documentation for more information.
3. To verify that the host detects iSCSI devices, issue the ioscan command as follows for HPE-UX 11.23:
ioscan -fnC iscsi
Issue the ioscan command as follows for HPE-UX 11.31:
ioscan -fnNC iscsi
If the software is installed correctly, the generated output will look similar to this:
Class  I  HW Path  Driver  SW State  HW Type  Description
=====================================================================
iscsi  0  255/0    iscsi   CLAIMED   VIRTBUS  iSCSI Virtual Node
4. If no device files have been installed, enter the following commands:
insf -C tape -e
insf -C autoch -e
The command-line tool for configuring the HPE-UX iSCSI Software Initiator is iscsiutil. Detailed instructions for iscsiutil can be found in the iscsiutil man pages. If using iscsiutil to configure the HPE-UX iSCSI Software Initiator, add the path for iscsiutil and other iSCSI executables to the root path:
PATH=$PATH:/opt/iscsi/bin
You should now be able to find the iSCSI initiator node for the HPE-UX host:
iscsiutil -l
To discover available iSCSI target devices on an HPE Storage System, type the following command, where x.x.x.x is the IP address of the HPE Storage System:
iscsiutil -a -I x.x.x.x
Installing the HPE StoreEver Tape advanced path failover drivers for HPE-UX 11.31
Advanced path failover for HPE-UX is implemented by updating HPE-UX drivers to support advanced path failover with the LTO-6 tape drives. The drivers function as both failover and non-failover drivers.
The updated drivers are:
• HPE-UX tape driver (estape), used for data path failover
• HPE-UX media changer driver (eschgr), used for control path failover
• HPE-UX SCSI stack driver (esctl), used for data path and control path failover
To download and install the HPE StoreEver Tape advanced path failover drivers for HPE-UX 11.31:
1. Get the latest HPE-UX patches from the HPE Support Center patch portal (h20566.www2.hp.com/portal/site/hpsc/patch/home).
Note: To access and download HPE-UX patches, you must have:
1. An HPE Passport account (a sign-in link is provided).
2. An active HPE support agreement linked to your HPE Support Center profile. The active Hewlett Packard Enterprise support agreement must:
– Cover the specific HPE operating systems linked to your HPE Support Center user profile.
– Include software updates or previous-version support privileges.
Links are provided to view your current patch access privileges or to contact Hewlett Packard Enterprise
2. To locate the patches, search for estape, eschgr, and esctl (or the patch number), and check whether a superseding patch is listed.
3. To install the advanced path failover drivers, use the standard HPE-UX kernel patch installation process to install the following patches on the HPE-UX Servers running HPE-UX 11.31:
– HPE-UX tape driver patch (estape): PHKL_43680, or superseding patch
– HPE-UX media changer driver patch (eschgr): PHKL_43681, or superseding patch
– HPE-UX SCSI stack (mass storage stack) driver patch (esctl): PHKL_43819, or superseding patch
4 The server will automatically reboot as part of the installation process
You can use ioscan to view the tape and library (media changer) devices connected to the HPE-UX Server. The device special file (DSF) is listed as the last item in the description:
ioscan -knNfC tape
ioscan -knNfC autoch
Finding the lockdown path
The load-balance policy used to route data on multiple paths to a tape drive or library is called the "path lockdown" policy. Use the scsimgr get_info command to see the current lockdown path for a library. For example:
scsimgr get_info -D /dev/rchgr/autoch35
STATUS INFORMATION FOR LUN /dev/rchgr/autoch38
...
LUN path used when policy is path_lockdown = 0/0/0/9/0/0/1.0x50014382c6e4f009.0x1000000000000
scsimgr get_attr -D /dev/rtape/tape28_BEST
SCSI ATTRIBUTES FOR LUN /dev/rtape/tape28_BEST
name = lpt_lockdown
current = 0/0/0/9/0/0.0x100000e00222a6c1.0x2000000000000
default =
saved =
For additional information, see the HPE-UX man pages scsimgr(1M), ioscan(1M), mknod(2), mksf(1M), and rmsf(1M).
Troubleshooting advanced path failover for HPE-UX 11.31
Advanced path failover errors are logged in the /var/adm/syslog/syslog.log file as part of the default SCSI I/O tracing function of HPE-UX. You can use standard file-viewing commands, including cat, vi, dmesg, and tail -f, to view syslog.log.
Enabling or disabling advanced path failover for HPE-UX 11.31
Advanced path failover is disabled by default. When advanced path failover is disabled, the driver operates as if the device is not capable of using the advanced path failover feature.
When advanced failover is enabled for the library or tape drive, the device resets itself and must be opened using the device special file before the driver will recognize it as an advanced path failover device and use the failover features of the driver. Opening the device is generally done by the host applications.
You can enable or disable advanced path failover using the library web-based interface. For instructions, refer to the following sections of the HPE StoreEver Tape Libraries Failover User Guide:
• Enabling control path failover, under Configuring failover for the HPE StoreEver ESL G3 Tape Libraries
• Enabling data path failover, under Configuring failover for HPE StoreEver MSL6480 Tape Libraries
When advanced path failover is disabled, the passive control paths to the library will go into an error state (NO_HW) in the ioscan(1M) command output. These stale entries do not affect the function of the library. To clear these errors so the device can be accessed using its DSF:
1. On the HPE-UX host, run rmsf -H on the lunpath hardware paths that are in the NO_HW state. For example:
rmsf -H 0/4/0/0/0/1.0x50014380023560d4.0x1000000000000
2. Run ioscan -kfNH <HBA path>. For example:
ioscan -kfNH 0/4/0/0/0
Hewlett Packard Enterprise recommends enabling or disabling advanced path failover only when the library is not opened by any applications. If advanced path failover is disabled while an application is accessing the library, all of the library's lunpaths will go offline and I/O requests to the library will fail.
For more detailed information about installing the HPE StoreEver Tape advanced path failover drivers for HPE-UX 11.31 Servers, refer to the HPE StoreEver Tape Libraries Failover User Guide, specifically the section titled Installing and using HPE-UX advanced path failover drivers.
HPE-UX Server best practices
HPE-UX 11.31 can experience poor I/O performance on VxFS file systems due to memory blocking during high system memory usage. The HPE-UX 11.31 kernel subsystems and file I/O data cache can consume up to 90 percent of system memory during normal operation. When a heavy file I/O application such as data protection and archiving software starts, memory usage can reach close to 100 percent. Under such conditions, if VxFS attempts to allocate additional memory for inode caching, memory blocking and subsequent poor file I/O performance can result. In extreme conditions this scenario can cause data protection and archiving software to time out during file system reads, which could result in backup job failures.
Poor I/O performance resolution
To avoid backup job failures due to memory blocking, modify the kernel tunable parameter vx_ninode. The vx_ninode parameter determines the number of inodes in the inode table, which VxFS uses for caching. By default, the size of the inode cache is decided (auto-tuned) at boot time by VxFS depending on the amount of physical memory in the machine. When modifying the value of vx_ninode, HPE recommends the following:
Table 4. Tuning vx_ninode

PHYSICAL MEMORY OR KERNEL AVAILABLE MEMORY    VXFS INODE CACHE (NUMBER OF INODES)
1 GB                                          16384
2 GB                                          32768
3 GB                                          65536
> 3 GB                                        131072
To determine the current value of vx_ninode, run the following at the shell prompt:
/usr/sbin/kctune vx_ninode
To set vx_ninode to 32768, run the following command at the shell prompt:
/usr/sbin/kctune vx_ninode=32768
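Table 4 can be expressed as a small lookup; this sketch simply maps a memory size in GB to the recommended vx_ninode value (determining the host's actual memory and applying the result with kctune is left to the administrator):

```shell
# vx_ninode_for_gb MEMORY_GB: print the recommended vx_ninode value from Table 4.
vx_ninode_for_gb() {
    gb="$1"
    if   [ "$gb" -le 1 ]; then echo 16384
    elif [ "$gb" -le 2 ]; then echo 32768
    elif [ "$gb" -le 3 ]; then echo 65536
    else echo 131072
    fi
}
# Example on HPE-UX: /usr/sbin/kctune vx_ninode=$(vx_ninode_for_gb 2)
```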
Note: The kernel tunable parameters filecache_min and filecache_max control the amount of physical memory that can be used for caching file data during system I/O operations. By default, these parameters are automatically determined by the system to better balance memory usage among file system I/O intensive processes and other types of processes. The values of these parameters can be lowered to allow a larger percentage of memory to be used for purposes other than file system I/O caching. Whether to modify these parameters depends on the nature of the applications running on the system.
HPE-UX 11.23: Disabling rewind-on-close devices with st_san_safe
Turning on the HPE-UX 11.23 kernel tunable parameter st_san_safe disables tape DSFs that are rewind-on-close. This prevents utilities such as mt from rewinding a tape that is in use by another utility.
Some applications or utilities require rewind-on-close DSFs (for example, the frecover utility that comes with HPE-UX). In this case, disabling rewind-on-close devices renders the utility unusable. Most data protection and archiving software, such as HPE Data Protector, can be configured to use SCSI reserve/release, which protects them from rogue rewinds by other utilities. Consider the requirements of your data protection and archiving environment when determining whether to enable st_san_safe.
To determine if rewind-on-close devices are currently disabled, enter:
/usr/sbin/kctune st_san_safe
If the value of st_san_safe is 1, rewind-on-close devices are disabled. If the value is 0, rewind-on-close devices are enabled. To disable rewind-on-close devices, enter:
/usr/sbin/kctune st_san_safe=1
Oracle Solaris Server
Solaris 10 Update 11 (SPARC), Solaris 10 Update 11 (x86/x64), Solaris 11.2 (SPARC), Solaris 11.2 (x64)
How to enable the iSCSI Software Initiator
1. For Solaris 10 Update 11 (SPARC) and 11.2 (SPARC), enable the iSCSI services using the command:
svcadm enable network/iscsi/initiator
2. Verify the iSCSI services are running:
svcs -a | grep 'iscsi/initiator'
online   10:10:28 svc:/network/iscsi/initiator:default
3. For earlier versions of Solaris, enable the iSCSI services using the command:
svcadm -v enable iscsi_initiator
svc:/network/iscsi_initiator:default enabled
4. Verify the iSCSI services are running:
svcs -a | grep iscsi_initiator
online   10:10:28 svc:/network/iscsi_initiator:default
Technical white paper Page 30
The command-line tool for configuring the Solaris iSCSI Software Initiator is iscsiadm. Detailed instructions for iscsiadm can be found in the iscsiadm man pages. If using iscsiadm to configure the Solaris iSCSI Software Initiator, run the following command to find the iSCSI initiator node for the Solaris host:
iscsiadm list initiator-node
To discover available iSCSI target devices on an HPE Storage System, type the following commands, where x.x.x.x is the IP address of the HPE Storage System:
iscsiadm add discovery-address x.x.x.x:3260
iscsiadm list discovery-address
Discovery Address: x.x.x.x:3260
iscsiadm modify discovery -t enable
iscsiadm list discovery
List the configured iSCSI target devices using the following command:
iscsiadm list target
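The discovery sequence above can be generated for a given address; the helper below is a sketch that only prints the commands so they can be reviewed before being run as root on the Solaris host (the address 192.0.2.10 in the example is illustrative):

```shell
# iscsi_discovery_cmds IP: print the core iscsiadm discovery sequence for the
# given HPE Storage System address.
iscsi_discovery_cmds() {
    ip="$1"
    printf 'iscsiadm add discovery-address %s:3260\n' "$ip"
    printf 'iscsiadm modify discovery -t enable\n'
    printf 'iscsiadm list target\n'
}
# Example: iscsi_discovery_cmds 192.0.2.10 | sh
```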
Oracle Solaris Server best practices
Troubleshooting with the cfgadm utility
• Getting the status of FC devices using cfgadm:
cfgadm -al
The output of this command shows a media changer at LUN 0 for the 100000e0022229fa9 WWN, and tape and disk devices at LUN 0 for other WWNs. The devices are connected, have been configured, and are ready for use.
The cfgadm -al -o show_FCP_dev command can be used to show the devices for all LUNs of each Ap_Id
• Fixing a device with an "unusable" condition
If the condition field of a device in the cfgadm output is "unusable," the device is in a state in which the server cannot use it. This may have been caused by a hardware issue. In this case, do the following to resolve the issue:
– Resolve the hardware issue so the device is available to the server.
– After the hardware issue has been resolved, use the cfgadm utility to verify device status and to mend the status if necessary.
• Use cfgadm to get device status:
cfgadm -al
– For a device that is "unusable," use cfgadm to unconfigure and then reconfigure the device. For example (this is an example only; your device WWN will be different):
cfgadm -c unconfigure c4::100000e0022286ec
cfgadm -f -c configure c4::100000e0022286ec
– Use cfgadm again to verify that the condition of the device is no longer "unusable":
cfgadm -al
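The unconfigure/reconfigure recovery above can be emitted mechanically for a given attachment point; this sketch only prints the commands for review, and the Ap_Id used in the example is hypothetical:

```shell
# cfgadm_recover_cmds AP_ID: print the cfgadm sequence used to clear an
# "unusable" device condition. AP_ID comes from cfgadm -al output.
cfgadm_recover_cmds() {
    ap="$1"
    printf 'cfgadm -c unconfigure %s\n' "$ap"
    printf 'cfgadm -f -c configure %s\n' "$ap"
}
# Example: cfgadm_recover_cmds c4::100000e0022286ec
```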
IBM AIX Server
AIX 6.1 (TL9), AIX 7.1 (TL3)
Installing the HBA device driver
Install the IBM (5729, 5735, 5273, 5758, 5759, 5773, 5774) HBA and restart the server.
1. Ensure that the HBA is recognized. At the shell prompt, type:
lsdev -Cc adapter
There is a line in the output similar to the following:
fcs0 Available 1D-08 FC Adapter
If the adapter is not recognized, check that the correct HBA fileset (driver) is installed:
6228: lslpp -L | grep devices.pci.df1000f7
6239: lslpp -L | grep devices.pci.df1080f9
5716: lslpp -L | grep devices.pci.df1000fa
5759: lslpp -L | grep devices.pci.df1000fd
5773: lslpp -L | grep devices.pciex.df1000fe
5774: lslpp -L | grep devices.pciex.df1000fe
There are lines in the output from lslpp similar to the following for a 6239 HBA:
devices.pci.df1080f9.diag  5.1.0.1  C  F  PCI-X FC Adapter Device
devices.pci.df1080f9.rte   5.1.0.1  C  F  PCI-X FC Adapter Device
2. For information about the HBA, such as the WWN, execute the following command:
lscfg -vl fcs0
The output will look similar to the following
3. To see the version of microcode (firmware) being run by the HBA, use the following command:
lsmcode -c -d <device>
Microcode and other updates can be found at ibm.com/support/fixcentral.
4. After the HBA has successfully logged into the SAN fabric and the necessary zoning is configured, configure the HBA and devices within the fabric. At the prompt, type:
cfgmgr -l <devicename> -v
Note: Running cfgmgr without a -l argument may generate a "devices.fcp.changer" error. This is a result of the cfgmgr device scan receiving a response from the autochanger device, for which AIX does not have a specific driver. Under these conditions, the error message does not indicate a problem and is for information only.
5. Within the command, <devicename> is the name from the output of the lsdev command in step 1, such as fcs0.
6. To ensure all tape device files are available, at the prompt type:
lsdev -HCc tape
7. By default, AIX creates tape devices with a fixed block length. To change the devices to have variable block lengths, at the prompt type:
chdev -l <tapedevice> -a block_size=0
8. Configuration of the tape devices (where tape devices are rmt0, rmt1, and so on) is complete.
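Step 7 can be applied to every tape device at once; the helper below (a sketch, not an IBM tool) only prints the chdev commands, so they can be reviewed before being piped to a shell on the AIX host:

```shell
# blocksize_cmds DEV...: print one chdev command per tape device name to switch
# the device to variable block length, per step 7 above.
blocksize_cmds() {
    for dev in "$@"; do
        printf 'chdev -l %s -a block_size=0\n' "$dev"
    done
}
# Example on AIX: blocksize_cmds $(lsdev -Cc tape -F name) | sh
```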
Note: HPE LTO tape drives use the IBM host tape driver. When properly configured, a device listing will show the tape device as follows:
For IBM native HBAs: Other FC SCSI Tape Drive
For non-IBM native HBAs: Other SCSI Tape Drive
9. To configure Fast I/O Failure for Fibre Channel devices after link events in the SAN, change the fast fail parameter as in the example below:
chdev -l fscsiN -a fc_err_recov=fast_fail
Within the command, the number N in fscsiN is the same number as in the adapter name from the output of the lsdev command in step 1 (for example, fcs0 would be fscsi0).
IBM AIX Server best practices
Persistent binding
To prevent device shifting after a host reboot, you can enable the dynamic tracking feature in earlier versions of AIX. AIX 7 and above should have this parameter enabled by default.
To enable dynamic tracking of FC devices, set this attribute to dyntrk=yes, as shown in the example:
chdev -l fscsiN -a dyntrk=yes
Within the command, the number N in fscsiN is the same number as in the adapter name from the output of the lsdev command in step 1 (for example, fcs0 would be fscsi0).
Note: For an IBM Virtual I/O Server (VIOS) running AIX logical partitions (LPARs), when using N_Port ID Virtualization (NPIV) with AIX LPARs, it is strongly recommended to upgrade VIOS to version 2.2.3.4 or greater.
Virtual machine support
VM software is used for partitioning, consolidating, and managing computing resources, allowing multiple unmodified operating systems and their applications to run in VMs that share physical resources. Each VM represents a complete system with processors, memory, networking, storage, and BIOS. See Table 5 for tape and disk support for virtualization products.
Table 5. VM tape/VTL/NAS support
VM Product | StoreEver Direct Attached SCSI | StoreEver Direct Attached SAS | StoreEver FC & FCoE SAN | StoreOnce VTL | StoreOnce iSCSI VTL | StoreOnce Catalyst over Ethernet (CoE) | StoreOnce Catalyst over Fibre Channel (CoFC) | StoreOnce NAS | Support Notes
Citrix XenServer Host No No No support statement for tape at this time
Citrix XenServer Guest VM
No Yes Yes No Yes For iSCSI tape devices the iSCSI Software Initiator must run in the VM operating system
D2D SAN shares must be accessed directly in the VM operating system not attached through the hypervisor
HPEVM Host Yes No Yes Yes Yes No Yes Tape drive/media changer must not be attached to a guest VM while being used by the host
HPEVM Guest VM Yes No Yes Yes Yes No Yes Tape drive/media changer must only be attached to a single guest VM at a time
Hyper-V Host Yes Yes Yes Yes Yes No Yes
Hyper-V Guest VM No No No Yes Yes No Yes For iSCSI tape/media changer devices, the iSCSI Software Initiator must run in the VM operating system
D2D NAS shares must be accessed directly in the VM operating system not attached through the hypervisor
VMware ESX Host Yes No No [7] No [7] No No No Direct attached SCSI tape drive is multi-target only (no multi-LUN) and must not be attached to a VM while being used by the host. HPE does not test or validate direct attached SCSI tape on VMware ESX/ESXi hosts and does not provide support for tape drives and tape libraries in such configurations. [8]
7. SAN tape devices (FC and iSCSI) are not supported directly by a VMware ESX host; vStorage API for Data Protection uses a backup server and VM software snapshots to allow FC and iSCSI backups.
8. For ESX 4.1 Server tape support, see the ESX 4.1 Fibre Channel SAN Configuration Guide. For ESX 5.0 Server tape support, see the ESXi 5.0 vSphere Storage Guide. For ESX 5.1 Server tape support, see the ESXi 5.1 vSphere Storage Guide. For ESX 5.5 Server tape support, see the ESXi 5.5 vSphere Storage Guide.
Technical white paper Page 34
Table 5. VM tape/VTL/NAS support (continued)
VMware Guest VM Yes No No Yes Yes No [9] Yes Direct attached SCSI tape drive is multi-target only (no multi-LUN) and must only be attached to a single VM at a time.
For iSCSI tape/media changer devices, the iSCSI Software Initiator must run in the VM operating system
D2D NAS shares must be accessed directly in the VM operating system not attached through the hypervisor
HPE does not test or validate direct attached SCSI tape on VMware guest VMs and does not provide support for tape drives and tape libraries in such configurations
VMware vStorage API for Data Protection
Yes Yes Yes Yes Yes No Yes FC SANs and shared tape devices are limited to a physical backup server
Note: Be sure to do the following:
• Refer to your data protection and archiving software documentation for supported VM backup methods.
• Refer to the VM documentation for supported backup devices.
VMware Server
Note: VMware does not support ESXi SAN-attached tape devices. VMware vStorage APIs for Data Protection (VADP) with an off-host backup server can be used to manage SAN devices.
• VADP offloads backup responsibility from ESXi hosts to one or more dedicated backup servers, reducing the load on ESXi hosts. VADP provides full-image backup and restore capabilities for all VMs, and file-based backups for Microsoft Windows and Linux VMs.
• VMs can also be set up as backup servers with locally mapped NAS shares or iSCSI backup devices. Refer to your data protection and archiving software documentation for VM support.
• VMs can also be set up for LAN backup, the same as a regular client. Refer to your data protection and archiving software documentation for details.
• For recommendations on VMware VM backup and recovery to HPE StoreOnce Backup, go to hpe.com/storage/BURACompatibility, scroll down to Data Agile BURA Solution White Papers, then click the VMware hyperlink across from White Papers - Databases and Virtual Machines to view the associated white papers.
9. Yes when using HPE StoreOnce Recovery Manager Central (RMC) only.
HPE Integrity Virtual Machines
Hewlett Packard Enterprise supports, certifies, and sells HPE Integrity Virtual Machines (HPEVM) virtualization software on HPE Integrity servers.
HPEVM is an application installed on an HPE-UX Server that allows multiple unmodified operating systems (HPE-UX, Windows, and Linux) and their applications to run in VMs that share physical resources.
The HPE Virtual Server Environment (VSE) for HPE Integrity provides an automated infrastructure that can adapt in seconds with mission-critical reliability. HPE VSE allows you to optimize server utilization in real time by creating virtual servers that can automatically grow and shrink based on business priorities and service levels.
Note: The HPE Integrity VM host and VMs do support FC SAN-connected tape, Virtual Library Systems (VLS) devices, and HPE StoreOnce Backup systems.
• Off-host backups using HPE storage array hardware mirroring or snapshots can be used to shorten backup windows and offload resources required for backup.
• VMs can also be set up for LAN backup, the same as a regular client or media host. Refer to your data protection and archiving software documentation for details.
Microsoft Hyper-V
Note: Hewlett Packard Enterprise does not test or support Hyper-V VMs with SAN or direct-attach tape drives. The Hyper-V host or a backup server can be used to manage such devices.
• The Volume Shadow Copy Service (VSS) Hyper-V writer can be used to quiesce Windows VMs and create a snapshot on the Hyper-V host volume. VMs that cannot be quiesced can be placed in the Saved state before snapshot creation. The snapshots are then used for image or file backup of the VMs. If a VM was placed in the Saved state, Hyper-V will return the VM to its original state. Review your data protection and archiving software documentation for details.
• VMs can also be set up for LAN backup, the same as a regular client. Refer to your data protection and archiving software documentation for details.
• VMs can also be set up as backup servers with locally mapped NAS shares or iSCSI backup devices. Refer to your data protection and archiving software documentation for VM support.
© Copyright 2015 Hewlett Packard Enterprise Development LP. The information contained herein is subject to change without notice. The only warranties for Hewlett Packard Enterprise products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. Hewlett Packard Enterprise shall not be liable for technical or editorial errors or omissions contained herein.
Citrix is a registered trademark of Citrix Systems, Inc. and/or one or more of its subsidiaries, and may be registered in the United States Patent and Trademark Office and in other countries. Linux is the registered trademark of Linus Torvalds in the U.S. and other countries. McAfee is a trademark or registered trademark of McAfee, Inc. in the United States and other countries. Microsoft, Windows, and Windows Server are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries. Oracle is a registered trademark of Oracle and/or its affiliates. Red Hat is a registered trademark of Red Hat, Inc. in the United States and other countries. SAP HANA is the trademark or registered trademark of SAP SE in Germany and in several other countries. VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions.
4AA5-7983ENW December 2015
Installing backup software and patches
After all components on the SAN are logged in and configured, the system is ready for the installation of any supported backup software. Refer to the installation guide for your particular software package, or contact the vendor for detailed installation procedures and requirements. After installing the backup software, check with the software vendor for the latest updates and patches. If any updates or patches exist for your backup software, install them now.
Learn more at hpe.com/storage/StoreEver
15. The driver revision number indicates the build date of the driver and can be viewed by running:
cat /proc/scsi/sg/version
16. You can view the status of a device that is controlled by the failover driver by reading a file in the sys file system. For example, to see the path status for /dev/sg3:
cat /sys/class/pfo/pfo3/paths
Enabling advanced path failover on a device while the driver is running
If a device has any advanced path failover feature disabled when advanced path failover is enabled, the device will reset itself, removing the old /dev file. When the device comes back up, it will be recognized as an advanced path failover device and will then operate normally as one. It may not have the same /dev file name as before the change.
Disabling advanced path failover on a device while the driver is running
Disabling advanced path failover while a device is running is not recommended, because the paths will not be cleanly removed and then reassociated. If advanced path failover is disabled on any device, the Linux server will need to be rebooted. When possible:
1. Power down the Linux server cleanly.
2. Disable advanced path failover on the device.
3. Boot the Linux server.
For more detailed information related to installing the HPE StoreEver Tape advanced path failover drivers for Red Hat Linux Servers, refer to the HPE StoreEver Tape Libraries Failover User Guide, specifically the section titled Installing and using Linux advanced path failover drivers.
Additional SG device files
In most environments, the default number of SG device files is sufficient to support all of the required devices. In larger SAN environments, if the default number of SG device files is fewer than the combined total of disk and tape devices being allocated to the server, additional device files need to be created. SG device files are preferable to the standard SCSI tape (st) device files because the SCSI timeout values of st devices may not be long enough to support some tape operations.
To create additional SG device files, run the following:
mknod /dev/sgX c 21 X
X signifies the number of a device file that does not already exist. For additional command options, see the mknod man page.
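The mknod invocation above can be scripted. Here is a minimal dry-run sketch (the helper name sg_mknod_commands is an illustrative assumption) that prints the commands for a range of missing SG device files:

```shell
#!/bin/bash
# Dry-run sketch: print the mknod commands needed to create SG device
# files /dev/sgFIRST through /dev/sgLAST. Pipe the output to sh as root
# to apply. Major number 21 and the "c" character-device type come from
# the mknod example above; the helper name is illustrative.
sg_mknod_commands() {
  local first="$1" last="$2" n
  for ((n = first; n <= last; n++)); do
    echo "mknod /dev/sg${n} c 21 ${n}"
  done
}

# Example: device files sg16..sg18 do not exist yet.
sg_mknod_commands 16 18
```

Reviewing the printed commands before running them avoids accidentally clobbering device files that already exist.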
Red Hat and SUSE Linux Server best practices
Rewind commands issued by rebooted Linux hosts
Device discovery that occurs as part of a normal Linux server boot can cause a SCSI rewind command to be issued to all attached tape drives. If the data protection and archiving software does not employ SCSI reserve/release and the rewind command is received while a tape drive is busy writing, the result is a corrupted tape header and an unusable piece of backup media.
This issue can manifest itself in several ways:
• A failed verify operation
• A failed restore operation
• The inability to mount a tape and read the tape header
If a backup verification is not completed, the normal backup process might not detect that an issue exists.
Tape devices not discovered and configured across server reboots
Tape drives can disappear from Linux servers after the host reboots when using older versions of the HBA drivers. Adding the line "hp_rescan -a" to /etc/rc.d/rc.local resolves the issue. The hp_rescan utility was previously included and installed with older versions of the HPE Linux FCoE/FC Driver Kit.
Note: The latest versions of the Fibre Channel Enablement for Linux kits no longer include the HPE fibreutils package, which contains the hp_rescan utility. The fibreutils package can be downloaded and installed manually by following steps 1–6 from the section Downloading and installing the Fibre Channel Enablement Kit for Linux. For step 5, select Software - Storage Controllers - FC HBA.
This issue, which affects Red Hat installations and intermittently some SUSE Linux installations, is understood to be an issue with the mid-layer SCSI driver and its interaction with SCSI-2 tape automation products. The permanent resolution is to upgrade to the latest FC driver kit.
Enable iSCSI target devices to remain persistent across system reboots
To enable iSCSI target devices to remain persistent across system reboots, the open-iscsi service must be configured to run at system startup. This can be done by issuing the following commands.
For Red Hat 7 and SUSE 12:
systemctl enable iscsid.service
systemctl restart iscsid.service
For earlier versions of Red Hat:
chkconfig iscsi on
For earlier versions of SUSE:
chkconfig open-iscsi on
To verify that this configuration change has been accepted, run the following command.
For Red Hat 7 and SUSE 12:
systemctl -a | grep iscsi
iscsi-shutdown.service  error  inactive dead     Logout off all iSCSI sessions on shutdown
iscsi.service           loaded inactive dead     Login and scanning of iSCSI devices
iscsid.service          loaded active   running  Open-iSCSI
iscsiuio.service        loaded active   running  iSCSI UserSpace I/O driver
iscsid.socket           loaded active   running  Open-iSCSI iscsid Socket
iscsiuio.socket         loaded active   running  Open-iSCSI iscsiuio Socket
For earlier versions of Red Hat:
chkconfig --list iscsi
iscsi 0:off 1:off 2:off 3:on 4:off 5:on 6:off
For SUSE servers:
chkconfig --list open-iscsi
open-iscsi 0:off 1:off 2:off 3:on 4:off 5:on 6:off
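The per-distro enable commands above can be wrapped in a small helper. A sketch follows; the function name and the explicit has_systemd flag are assumptions for illustration (detect systemd however your environment prefers):

```shell
#!/bin/bash
# Sketch: return the boot-persistence command for a given distro family.
# family:      "rhel" or "suse"
# has_systemd: "yes" on Red Hat 7 / SUSE 12 and later, "no" for earlier
iscsi_enable_command() {
  local family="$1"
  local has_systemd="$2"
  if [ "$has_systemd" = "yes" ]; then
    echo "systemctl enable iscsid.service"
  elif [ "$family" = "suse" ]; then
    echo "chkconfig open-iscsi on"
  else
    echo "chkconfig iscsi on"
  fi
}
```

For example, `iscsi_enable_command suse no` prints the SUSE sysvinit form shown above.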
LUNs shifting after reboot
The Linux 2.6 kernel and later enhanced the management of attached devices through the introduction of udev. The udev device manager provides users with a persistent naming process for all devices across reboots. For details on how to configure udev, refer to the appropriate Linux distribution documentation.
If your data protection and archiving software requires persistent device mapping, use the software's device configuration wizard to ensure proper configuration.
Recommended changes to queue depth and timeout values
Changes to queue depth and timeout values are recommended when operating HPE StoreEver ESL G3 Tape Libraries and HPE StoreEver MSL Tape Libraries with Linux-based hosts. The recommended changes are as follows.
• Queue depth
The queue depth when operating the MSL6480 or MCB Version 1 ESL G3 libraries should be set to one (1), as there is only one active robot to complete Move Medium commands. With dual-robot MCB Version 2 ESL G3 libraries, the queue depth should be set to two (2), as the library has two active robots that can complete Move Medium commands.
HPE LTO drives are capable of handling command queues of four or five commands, but if hosts continue to send commands past that amount, the drive (or the library being hosted by that drive) will start to return status messages saying that the queue is full and the host should wait 500 ms. If the host doesn't stop sending commands at this point, the delays in returning status can be long enough that the drive appears hung. Care should therefore be taken to set the queue depth correctly, preferably by using the recommended queue depths provided above.
With Linux-based hosts, this command shows the queue depth setting for each generic SCSI device:
find /sys/class/scsi_generic/*/device/queue_depth -exec grep -H "" {} \;
If required, the sg_inq command can help correlate SCSI generic mappings to specific devices:
sg_inq /dev/sg<N>
Note: <N> is the sg number provided in the output from the previous command.
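Once the target device has been identified with sg_inq, applying the recommended depth is a single sysfs write. A hedged sketch follows; the helper name is illustrative, the optional sysroot argument exists so the logic can be exercised against a test tree, and writing to the real /sys requires root:

```shell
#!/bin/bash
# Sketch: set queue_depth for the named generic SCSI devices (e.g. the
# library changer identified with sg_inq above). sysroot is normally /sys
# but may point at a scratch tree for testing.
set_sg_queue_depth() {
  local depth="$1" sysroot="$2"
  shift 2
  local dev f
  for dev in "$@"; do
    f="$sysroot/class/scsi_generic/$dev/device/queue_depth"
    if [ -e "$f" ]; then
      echo "$depth" > "$f"                 # apply the new queue depth
      echo "$dev: queue_depth=$depth"
    else
      echo "$dev: not found" >&2
    fi
  done
}

# Usage on a real host (as root), e.g. for an MSL6480 changer at sg3:
# set_sg_queue_depth 1 /sys sg3
```

This keeps the change scoped to the library device rather than every sg node, matching the per-device recommendations above.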
• Timeouts
Most applications increase the timeout value for motion commands but continue to rely on the default timeout value for Read Element Status commands. This can especially be of concern in larger partitioned libraries, where multiple Read Element Status commands to the different partitions are outstanding and the default timeout value is reached before a response is provided. Depending on whether udev rules are in effect, the default timeout value on Linux-based hosts tends to be either thirty (30) or sixty (60) seconds. It is therefore recommended that the default timeout value be changed to twenty minutes on all Linux-based hosts operating HPE LTO drives in HPE StoreEver ESL G3 Tape Libraries and HPE StoreEver MSL Tape Libraries, to allow multiple commands to complete successfully without hitting the default timeout value.
With Linux-based hosts, this command shows the current default timeout value:
find /sys/class/scsi_generic/*/device/timeout -exec grep -H "" {} \;
If required, the sg_inq command can help correlate SCSI generic mappings to specific devices:
sg_inq /dev/sg<N>
Note: <N> is the sg number provided in the output from the previous command.
The detailed procedure for making the recommended changes to the command queuing and default timeout values for Linux-based hosts can be viewed in the Engineering Advisory HPE StoreEver ESL G3 Tape Libraries, MSL Tape Libraries, and 1/8 G2 Autoloaders: Recommended Changes to Queue Depth and Timeout Values With Linux-Based Hosts.
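One hedged way to make the 20-minute timeout persistent across reboots is a udev rule. In this sketch the rule file path, the ATTR{type} matching, and the helper name are all assumptions for illustration; consult the Engineering Advisory referenced above for the supported procedure:

```shell
#!/bin/bash
# Sketch: write a udev rule that raises the SCSI command timeout to
# 20 minutes (1200 s) for tape and medium changer devices.
# SCSI peripheral device type 1 = tape, 8 = medium changer.
write_tape_timeout_rule() {
  local rulefile="$1"   # e.g. /etc/udev/rules.d/60-hpe-tape-timeout.rules
  cat > "$rulefile" <<'EOF'
ACTION=="add", SUBSYSTEM=="scsi", ATTR{type}=="1", ATTR{timeout}="1200"
ACTION=="add", SUBSYSTEM=="scsi", ATTR{type}=="8", ATTR{timeout}="1200"
EOF
}

# Usage (as root): write_tape_timeout_rule /etc/udev/rules.d/60-hpe-tape-timeout.rules
# then reload rules, e.g. udevadm control --reload
```

The rule only fires on device add events, so existing devices keep their current timeout until they are rediscovered or the value is written to sysfs directly.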
HPE-UX Server
Installing HBA drivers in the kernel
HPE-UX 11i v2 (11.23 IA-64)
1. The drivers schgr, sctl, and stape must all be installed in the kernel. To see if these drivers are installed, enter the following command:
/usr/sbin/kcmodule schgr sctl stape
The following example shows output from kcmodule where the stape driver is not installed:
Module   State    Cause
schgr    static   explicit
sctl     static   depend
stape    unused
If one or more of the above drivers is in the unused state, they must be installed in the kernel. If they are all installed (static state), proceed to the next section, Final host configurations.
2. Use kcmodule to install modules in the kernel. For example, to install the stape module, run the following command:
/usr/sbin/kcmodule stape=static
Enter yes to back up the current kernel configuration file and initiate the new kernel build.
3. Reboot the server to activate the new kernel:
cd /
/usr/bin/shutdown -r now
HPE-UX 11i v3 (11.31 IA-64)
1. The drivers schgr, sctl, stape, eschgr, esctl, and estape must all be installed in the kernel. To see if these drivers are installed, enter the following command:
/usr/sbin/kcmodule sctl esctl schgr eschgr stape estape
The following example shows output from kcmodule where the stape driver is not installed:
Module   State    Cause
esctl    static   best
sctl     static   depend
schgr    static   best
eschgr   static   best
stape    unused
estape   static   best
If one or more of the above drivers is in the unused state, they must be installed in the kernel. If they are all installed (static state), proceed to the next section, Final host configurations.
2. Use kcmodule to install modules in the kernel. For example, to install the stape module, use the following command:
/usr/sbin/kcmodule stape=static
Enter yes to back up the current kernel configuration file and initiate the new kernel build.
3. Reboot the server to activate the new kernel:
cd /
/usr/bin/shutdown -r now
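The state check in step 1 can be automated. The sketch below parses kcmodule-style "Module State Cause" output and reports any required driver that is not in the static state; the function name and the whitespace-based parsing are assumptions for illustration:

```shell
#!/bin/bash
# Sketch: read kcmodule output on stdin and print every required module
# (passed as arguments) whose State column is not "static".
unused_drivers() {
  awk -v req="$*" '
    BEGIN { n = split(req, want, " ") }
    NR > 1 { state[$1] = $2 }          # skip the header row, record states
    END {
      for (i = 1; i <= n; i++)
        if (state[want[i]] != "static") print want[i]
    }'
}

# Usage on an HPE-UX host (prints nothing when all drivers are installed):
# /usr/sbin/kcmodule sctl esctl schgr eschgr stape estape | \
#   unused_drivers sctl esctl schgr eschgr stape estape
```

A module that is missing from the output entirely is also reported, since its recorded state is empty.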
Installing the HPE-UX iSCSI Software Initiator
The iSCSI Software Initiator is located at the HPE Software Depot.
1. Go to software.hp.com.
2. Enter iSCSI Software Initiator in the Search Software Depot box located on the upper right side of the website.
3. When the search results show iSCSI Software Initiator, click Select. An HPE Passport account (a sign-in link is provided) is required.
4. After logging in using your HPE Passport, complete the required fields, scroll down, then read and accept the software license agreement for the order. Click Next.
5. Under Documentation, click the Download tab for the Installation Instructions to download instructions for using the Software Distributor tool to install the iSCSI Software Initiator.
6. Under Software, click the Download tab for the iSCSI Software Initiator version that you would like to download.
7. After installing the iSCSI Software Initiator and rebooting, you can verify that the installation was successful by running the following command:
swlist iSCSI-00
If the HPE-UX iSCSI Software Initiator is installed correctly, the output will be similar to the following.
HPE-UX 11.23:
Initializing...
Contacting target "localhost"...
Target: localhost
iSCSI-00            B.11.23.03e   HPE-UX iSCSI Software Initiator
iSCSI-00.ISCSI-SWD  B.11.23.03e   HPE-UX iSCSI Software Initiator
HPE-UX 11.31:
Initializing...
Contacting target "localhost"...
Target: localhost
iSCSI-00            B.11.31.01    HPE-UX iSCSI Software Initiator
iSCSI-00.ISCSI-SWD  B.11.31.01    HPE-UX iSCSI Software Initiator
Final host configurations
1. Run ioscan to verify that the host detects the tape devices:
ioscan
For HPE-UX 11.23 legacy device special files (DSFs) or persistent DSFs, run the following commands:
ioscan -fnkC tape
ioscan -fnkC autoch
2. For HPE-UX 11.31 persistent DSFs, run the following commands:
ioscan -fnNkC tape
ioscan -fnNkC autoch
Note: Some data protection and archiving software might not currently support HPE-UX 11.31 persistent DSFs for tape. Review your data protection and archiving software documentation for more information.
3. To verify that the host detects iSCSI devices, issue the ioscan command as follows for HPE-UX 11.23:
ioscan -fnC iscsi
Issue the ioscan command as follows for HPE-UX 11.31:
ioscan -fnNC iscsi
If the software is installed correctly, the generated output will look similar to this:
Class  I  HW Path  Driver  SW State  HW Type  Description
=====================================================================
iscsi  0  255/0    iscsi   CLAIMED   VIRTBUS  iSCSI Virtual Node
4. If no device files have been installed, enter the following commands:
insf -C tape -e
insf -C autoch -e
The command line tool for configuring the HPE-UX iSCSI Software Initiator is iscsiutil. Detailed instructions for iscsiutil can be found in the iscsiutil man pages. If using iscsiutil to configure the HPE-UX iSCSI Software Initiator, add the path for iscsiutil and other iSCSI executables to the root path:
PATH=$PATH:/opt/iscsi/bin
You should now be able to find the iSCSI initiator node for the HPE-UX host:
iscsiutil -l
To discover available iSCSI target devices on an HPE Storage System, type the following command, where x.x.x.x is the IP address of the HPE Storage System:
iscsiutil -a -I x.x.x.x
Installing the HPE StoreEver Tape advanced path failover drivers for HPE-UX 11.31
Advanced path failover for HPE-UX is implemented by updating HPE-UX drivers to support advanced path failover with LTO-6 tape drives. The drivers function as both failover and non-failover drivers.
The updated drivers are:
• HPE-UX tape driver (estape), used for data path failover
• HPE-UX media changer driver (eschgr), used for control path failover
• HPE-UX SCSI stack driver (esctl), used for data path and control path failover
To download and install the HPE StoreEver Tape advanced path failover drivers for HPE-UX 11.31:
1. Get the latest HPE-UX patches from h20566.www2.hp.com/portal/site/hpsc/patch/home.
Note: To access and download HPE-UX patches, you must have:
1. An HPE Passport account (a sign-in link is provided).
2. An active HPE support agreement linked to your HPE Support Center profile. The active Hewlett Packard Enterprise support agreement must:
– Cover the specific HPE operating systems linked to your HPE Support Center user profile
– Include software updates or previous version support privileges
Links are provided to view your current patch access privileges or to contact Hewlett Packard Enterprise.
2. To locate the patches, search for estape, eschgr, and esctl or for the patch number, and then check whether a superseding patch is available.
3. To install the advanced path failover drivers, use the standard HPE-UX kernel patch installation process to install the following patches on the HPE-UX Servers running HPE-UX 11.31:
– HPE-UX tape driver patch (estape): PHKL_43680 or superseding patch
– HPE-UX media changer driver patch (eschgr): PHKL_43681 or superseding patch
– HPE-UX SCSI stack (mass storage stack) driver patch (esctl): PHKL_43819 or superseding patch
4. The server will automatically reboot as part of the installation process.
You can use ioscan to view the tape and library (media changer) devices connected to the HPE-UX Server; the device special file (DSF) is listed as the last item in the description:
ioscan -knNfC tape
ioscan -knNfC autoch
Finding the lockdown path
The load-balance policy used to route data on multiple paths to a tape drive or library is called the path_lockdown policy. Use the scsimgr get_info command to see the current lockdown path for a library. For example:
scsimgr get_info -D /dev/rchgr/autoch35
STATUS INFORMATION FOR LUN /dev/rchgr/autoch35
...
LUN path used when policy is path_lockdown = 0/0/0/9/0/0/1.0x50014382c6e4f009.0x1000000000000
scsimgr get_attr -D /dev/rtape/tape28_BEST
SCSI ATTRIBUTES FOR LUN /dev/rtape/tape28_BEST
name    = lpt_lockdown
current = 0/0/0/9/0/0/0.0x100000e00222a6c1.0x2000000000000
default =
saved   =
For additional information, see the HPE-UX man pages scsimgr(1M), ioscan(1M), mknod(2), mksf(1M), and rmsf(1M).
Troubleshooting advanced path failover for HPE-UX 11.31
Advanced path failover errors are logged in the /var/adm/syslog/syslog.log file as part of the default SCSI I/O tracing function of HPE-UX. You can use standard file viewing commands, including cat, vi, dmesg, and tail -f, to view syslog.log.
Enabling or disabling advanced path failover for HPE-UX 11.31
Advanced path failover is disabled by default. When advanced path failover is disabled, the driver operates as if the device is not capable of using the advanced path failover feature.
When advanced failover is enabled for the library or tape drive, the device resets itself and must be opened using the device special file before the driver will recognize it as an advanced path failover device and use the failover features of the driver. Opening the device is generally done by the host applications.
You can enable or disable advanced path failover using the library web-based interface. For instructions, refer to the following sections of the HPE StoreEver Tape Libraries Failover User Guide:
• Enabling control path failover, under Configuring failover for the HPE StoreEver ESL G3 Tape Libraries
• Enabling data path failover, under Configuring failover for HPE StoreEver MSL6480 Tape Libraries
When advanced path failover is disabled, the passive control paths to the library will go into an error state (NO_HW) in the ioscan(1M) command output. These stale entries do not affect the function of the library. To clear these errors so the device can be accessed using its DSF:
1. On the HPE-UX host, run rmsf -H on the lunpath hardware paths that are in the NO_HW state. For example:
rmsf -H 0/4/0/0/0/1.0x50014380023560d4.0x1000000000000
2. Run ioscan -kfNH <HBA path>. For example:
ioscan -kfNH 0/4/0/0/0
Hewlett Packard Enterprise recommends only enabling or disabling advanced path failover when the library is not opened by any applications. If advanced path failover is disabled while an application is accessing the library, all of the library's lunpaths will go offline and I/O requests to the library will fail.
For more detailed information related to installing the HPE StoreEver Tape advanced path failover drivers for HPE-UX 11.31 Servers, refer to the HPE StoreEver Tape Libraries Failover User Guide, specifically the section titled Installing and using HPE-UX advanced path failover drivers.
HPE-UX Server best practices
HPE-UX 11.31 can experience poor I/O performance on VxFS file systems due to memory blocking during high system memory usage. The HPE-UX 11.31 kernel subsystems and file I/O data cache can consume up to 90 percent of system memory during normal operation. When a heavy file I/O application such as data protection and archiving software starts, memory usage can reach close to 100 percent. Under such conditions, if VxFS attempts to allocate additional memory for inode caching, the result can be memory blocking and subsequent poor file I/O performance. In extreme conditions, this scenario can cause data protection and archiving software to time out during file system reads, which could result in backup job failures.
Poor I/O performance resolution
To avoid backup job failures due to memory blocking, modify the kernel tunable parameter vx_ninode. The vx_ninode parameter determines the number of inodes in the inode table, which VxFS uses for caching. By default, the size of the inode cache is decided (auto-tuned) at boot time by VxFS depending on the amount of physical memory in the machine. When modifying the value of vx_ninode, HPE recommends the following.
Table 4 Tuning vx_ninode
PHYSICAL MEMORY OR KERNEL AVAILABLE MEMORY VXFS INODE CACHE (NUMBER OF INODES)
1 GB 16384
2 GB 32768
3 GB 65536
> 3 GB 131072
To determine the current value of vx_ninode, run the following at the shell prompt:
/usr/sbin/kctune vx_ninode
To set vx_ninode to 32768, run the following command at the shell prompt:
/usr/sbin/kctune vx_ninode=32768
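Table 4 can be expressed as a small function for use in provisioning scripts; this is a sketch, and the function name is an illustrative assumption:

```shell
#!/bin/bash
# Recommended vx_ninode value (per Table 4) for a given amount of
# physical memory in GB.
recommended_vx_ninode() {
  local mem_gb="$1"
  if   [ "$mem_gb" -le 1 ]; then echo 16384
  elif [ "$mem_gb" -le 2 ]; then echo 32768
  elif [ "$mem_gb" -le 3 ]; then echo 65536
  else                           echo 131072
  fi
}

# On the HPE-UX host (as root), e.g. for a 2 GB system:
# /usr/sbin/kctune vx_ninode="$(recommended_vx_ninode 2)"
```

Memory sizes between the table's rows are rounded up to the next row here, a conservative reading of the table.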
Note: The kernel tunable parameters filecache_min and filecache_max control the amount of physical memory that can be used for caching file data during system I/O operations. By default, these parameters are automatically determined by the system to better balance memory usage among file system I/O intensive processes and other types of processes. The values of these parameters can be lowered to allow a larger percentage of memory to be used for purposes other than file system I/O caching. Whether to modify these parameters depends on the nature of the applications running on the system.
HPE-UX 11.23: Disabling rewind-on-close devices with st_san_safe
Turning on the HPE-UX 11.23 kernel tunable parameter st_san_safe disables tape DSFs that are rewind-on-close. This prevents utilities like mt from rewinding a tape that is in use by another utility.
Some applications or utilities require rewind-on-close DSFs (for example, the frecover utility that comes with HPE-UX). In this case, disabling rewind-on-close devices renders the utility unusable. Most data protection and archiving software, such as HPE Data Protector, can be configured to use SCSI reserve/release, which protects them from rogue rewinds by other utilities. Consider the requirements of your data protection and archiving environment when determining whether to enable st_san_safe.
To determine if rewind-on-close devices are currently disabled, enter:
/usr/sbin/kctune st_san_safe
If the value of st_san_safe is 1, rewind-on-close devices are disabled. If the value is 0, rewind-on-close devices are enabled. To disable rewind-on-close devices, enter:
/usr/sbin/kctune st_san_safe=1
Oracle Solaris Server
Solaris 10 Update 11 (SPARC), Solaris 10 Update 11 (x86/x64), Solaris 11.2 (SPARC), Solaris 11.2 (x64)
How to enable the iSCSI Software Initiator
1. For Solaris 10 Update 11 (SPARC) and Solaris 11.2 (SPARC), enable the iSCSI services using the command:
svcadm enable network/iscsi/initiator
2. Verify that the iSCSI services are running:
svcs -a | grep 'iscsi/initiator'
online 10:10:28 svc:/network/iscsi/initiator:default
3. For earlier versions of Solaris, enable the iSCSI services using the command:
svcadm -v enable iscsi_initiator
svc:/network/iscsi_initiator:default enabled.
4. Verify that the iSCSI services are running:
svcs -a | grep iscsi_initiator
online 10:10:28 svc:/network/iscsi_initiator:default
The command line tool for configuring the Solaris iSCSI Software Initiator is iscsiadm. Detailed instructions for iscsiadm can be found in the iscsiadm man pages. If using iscsiadm to configure the Solaris iSCSI Software Initiator, run the following command to find the iSCSI initiator node for the Solaris host:
iscsiadm list initiator-node
To discover available iSCSI target devices on an HPE Storage System, type the following commands, where x.x.x.x is the IP address of the HPE Storage System:
iscsiadm add discovery-address x.x.x.x:3260
iscsiadm list discovery-address
Discovery Address: x.x.x.x:3260
iscsiadm modify discovery -t enable
iscsiadm list discovery
List the configured iSCSI target devices using the following command:
iscsiadm list target
Oracle Solaris Server best practices
Troubleshooting with the cfgadm utility
• Getting the status of FC devices using cfgadm:
cfgadm -al
Example output for the above command shows a media changer at LUN 0 for the 100000e0022229fa9 WWN, and tape and disk devices at LUN 0 for other WWNs. The devices are connected, have been configured, and are ready for use.
The cfgadm -al -o show_FCP_dev command can be used to show the devices for all LUNs of each Ap_Id.
• Fixing a device with an "unusable" condition
If the condition field of a device in the cfgadm output is "unusable", the device is in a state such that the server cannot use it. This may have been caused by a hardware issue. In this case, do the following to resolve the issue:
– Resolve the hardware issue so the device is available to the server.
– After the hardware issue has been resolved, use the cfgadm utility to verify device status and to mend the status if necessary.
• Use cfgadm to get device status:
cfgadm -al
– For a device that is "unusable", use cfgadm to unconfigure and then reconfigure the device. For example (this is an example only; your device WWN will be different):
cfgadm -c unconfigure c4::100000e0022286ec
cfgadm -f -c configure c4::100000e0022286ec
– Use cfgadm again to verify that the condition of the device is no longer "unusable":
cfgadm -al
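Scanning a long cfgadm listing for unusable devices by eye is error prone; the filter below automates it. Column positions are assumed from the standard five-column listing (Ap_Id, Type, Receptacle, Occupant, Condition), so verify against your Solaris release before relying on it:

```shell
#!/bin/bash
# Sketch: print the Ap_Id of every device whose Condition column reads
# "unusable" in `cfgadm -al` output (read from stdin).
unusable_apids() {
  awk '$5 == "unusable" { print $1 }'
}

# Usage on a Solaris host: cfgadm -al | unusable_apids
```

Each printed Ap_Id can then be fed to the unconfigure/configure sequence shown above.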
IBM AIX Server
AIX 6.1 (TL9), AIX 7.1 (TL3)
Installing HBA device driver
Install the IBM (5729, 5735, 5273, 5758, 5759, 5773, 5774) HBA and restart the server.
1. Ensure that the HBA is recognized. At the shell prompt, type:
lsdev -Cc adapter
There is a line in the output similar to the following:
fcs0 Available 1D-08 FC Adapter
If the adapter is not recognized, check that the correct HBA fileset (driver) is installed:
6228: lslpp -L | grep devices.pci.df1000f7
6239: lslpp -L | grep devices.pci.df1080f9
5716: lslpp -L | grep devices.pci.df1000fa
5759: lslpp -L | grep devices.pci.df1000fd
5773: lslpp -L | grep devices.pciex.df1000fe
5774: lslpp -L | grep devices.pciex.df1000fe
There are lines in the output for lslpp similar to the following for a 6239 HBA:
devices.pci.df1080f9.diag  5.1.0.1  C  F  PCI-X FC Adapter Device
devices.pci.df1080f9.rte   5.1.0.1  C  F  PCI-X FC Adapter Device
2. For information about the HBA, such as the WWN, execute the following command:
lscfg -vl fcs0
The output will look similar to the following.
3. To see the version of microcode (firmware) being run by the HBA, use the following command:
lsmcode -c -d <device>
Microcode and other updates can be found at ibm.com/support/fixcentral.
4. After the HBA has successfully logged into the SAN fabric and the necessary zoning is configured, configure the HBA and devices within the fabric. At the prompt, type:
cfgmgr -l <devicename> -v
Note: Running cfgmgr without a -l argument may generate a "devices.fcp.changer" error. This is a result of the cfgmgr device scan receiving a response from the autochanger device, for which AIX does not have a specific driver. Under these conditions, the error message does not indicate a problem and is for information only.
5. Within the command, <devicename> is the name from the output of the lsdev command in step 1, such as fcs0.
6. To ensure all tape device files are available, at the prompt type:
lsdev -HCc tape
7. By default, AIX creates tape devices with a fixed block length. To change the devices to variable block lengths, at the prompt type:
chdev -l <tapedevice> -a block_size=0
8. Configuration of the tape devices (where tape devices are rmt0, rmt1, and so on) is complete.
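When several drives need the variable-block change from step 7, a dry-run generator keeps the process auditable. This is a sketch; the helper name is illustrative, and the printed commands should be run as root on the AIX host:

```shell
#!/bin/bash
# Dry-run sketch: print the chdev command that switches each named rmt
# device to variable block length (block_size=0, per step 7 above).
variable_block_commands() {
  local dev
  for dev in "$@"; do
    echo "chdev -l ${dev} -a block_size=0"
  done
}

# Example for two drives:
variable_block_commands rmt0 rmt1
```

The device names to pass in can be taken from the `lsdev -HCc tape` listing in step 6.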
Note: HPE LTO tape drives use the IBM host tape driver. When properly configured, a device listing will show the tape device as follows.
For IBM native HBAs: Other FC SCSI Tape Drive
For non-IBM native HBAs: Other SCSI Tape Drive
9. To configure Fast I/O Failure for Fibre Channel devices after link events in the SAN, change the fast fail parameter as in the example below:
chdev -l fscsiN -a fc_err_recov=fast_fail
Within the command, the N in fscsiN is the same number as in the device name from the output of the lsdev command in step 1 (for example, fcs0 corresponds to fscsi0).
IBM AIX Server best practices
Persistent binding
To prevent device shifting after a host reboot, you can enable the dynamic tracking feature in earlier versions of AIX. AIX 7 and above should have this parameter enabled by default.
To enable dynamic tracking of FC devices, set the attribute dyntrk=yes, as shown in the example:
chdev -l fscsiN -a dyntrk=yes
Within the command, the N in fscsiN is the same number as in the device name from the output of the lsdev command in step 1 (for example, fcs0 corresponds to fscsi0).
Note: For an IBM Virtual I/O Server (VIOS) running AIX logical partitions (LPARs), when using N_Port ID Virtualization (NPIV) with AIX LPARs, it is strongly recommended to upgrade VIOS to version 2.2.3.4 or greater.
Virtual machine support
VM software is used for partitioning, consolidating, and managing computing resources, allowing multiple unmodified operating systems and their applications to run in VMs that share physical resources. Each VM represents a complete system with processors, memory, networking, storage, and BIOS. See table 5 for tape and disk support for virtualization products.
Table 5. VM tape/VTL/NAS support
Table columns: VM Product | StoreEver Direct Attached SCSI | StoreEver Direct Attached SAS | StoreEver FC & FCoE SAN | StoreOnce VTL | StoreOnce iSCSI VTL | StoreOnce Catalyst over Ethernet (CoE) | StoreOnce Catalyst over Fibre Channel (CoFC) | StoreOnce NAS | Support Notes
Citrix XenServer Host: No No (no support statement for tape at this time)
Citrix XenServer Guest VM: No Yes Yes No Yes. For iSCSI tape devices, the iSCSI Software Initiator must run in the VM operating system.
D2D SAN shares must be accessed directly in the VM operating system, not attached through the hypervisor.
HPEVM Host: Yes No Yes Yes Yes No Yes. Tape drive/media changer must not be attached to a guest VM while being used by the host.
HPEVM Guest VM: Yes No Yes Yes Yes No Yes. Tape drive/media changer must only be attached to a single guest VM at a time.
Hyper-V Host: Yes Yes Yes Yes Yes No Yes
Hyper-V Guest VM: No No No Yes Yes No Yes. For iSCSI tape/media changer devices, the iSCSI Software Initiator must run in the VM operating system.
D2D NAS shares must be accessed directly in the VM operating system, not attached through the hypervisor.
VMware ESX Host: Yes No No (7) No (7) No No No. Direct attached SCSI tape drive is multi-target only (no multi-LUN) and must not be attached to a VM while being used by the host. HPE does not test or validate direct attached SCSI tape on VMware ESX/ESXi hosts and does not provide support for tape drives and tape libraries in such configurations. (8)
(7) SAN tape devices (FC and iSCSI) are not supported directly by the VMware ESX host; vStorage API for Data Protection with a backup server and VM software snapshots can be used to allow FC and iSCSI backups.
(8) For ESX 4.1 Server tape support, see the ESX 4.1 Fibre Channel SAN Configuration Guide. For ESX 5.0 Server tape support, see the ESXi 5.0 vSphere Storage Guide. For ESX 5.1 Server tape support, see the ESXi 5.1 vSphere Storage Guide. For ESX 5.5 Server tape support, see the ESXi 5.5 vSphere Storage Guide.
Table 5. VM tape/VTL/NAS support (continued)
VMware Guest VM: Yes No No Yes Yes No (9) Yes. Direct attached SCSI tape drive is multi-target only (no multi-LUN) and must only be attached to a single VM at a time.
For iSCSI tape/media changer devices, the iSCSI Software Initiator must run in the VM operating system.
D2D NAS shares must be accessed directly in the VM operating system, not attached through the hypervisor.
HPE does not test or validate direct attached SCSI tape on VMware guest VMs and does not provide support for tape drives and tape libraries in such configurations.
VMware vStorage API for Data Protection: Yes Yes Yes Yes Yes No Yes. FC SANs and shared tape devices are limited to a physical backup server.
Note: Be sure to do the following:
• Refer to your data protection and archiving software documentation for supported VM backup methods.
• Refer to the VM documentation for supported backup devices.
VMware Server
Note: VMware does not support ESXi SAN-attached tape devices. VMware vStorage APIs for Data Protection (VADP) with an off-host backup server can be used to manage SAN devices.
• VADP offloads backup responsibility from ESXi hosts to one or more dedicated backup servers, reducing the load on the ESXi hosts. VADP provides full-image backup and restore capabilities for all VMs, and file-based backups for Microsoft Windows and Linux VMs.
• VMs can also be set up as backup servers with locally mapped NAS shares or iSCSI backup devices. Refer to your data protection and archiving software documentation for VM support.
• VMs can also be set up for LAN backup, the same as a regular client. Refer to your data protection and archiving software documentation for details.
• For recommendations on VMware VM backup and recovery to HPE StoreOnce Backup, go to hpe.com/storage/BURACompatibility, scroll down to Data Agile BURA Solution White Papers, then click the VMware hyperlink across from White Papers - Databases and Virtual Machines to view the associated white papers.
9. Yes when using HPE StoreOnce Recovery Manager Central (RMC) only.
HPE Integrity Virtual Machines
Hewlett Packard Enterprise supports, certifies, and sells HPE Integrity Virtual Machines (HPVM) virtualization software on HPE Integrity servers.
HPVM is an application installed on an HP-UX Server that allows multiple unmodified operating systems (HP-UX, Windows, and Linux) and their applications to run in VMs that share physical resources.
The HPE Virtual Server Environment (VSE) for HPE Integrity provides an automated infrastructure that can adapt in seconds with mission-critical reliability. HPE VSE allows you to optimize server utilization in real time by creating virtual servers that can automatically grow and shrink based on business priorities and service.
Note: The HPE Integrity VM host and VMs do support FC SAN connected tape, Virtual Library Systems (VLS) devices, and HPE StoreOnce Backup systems.
• Off-host backups using HPE storage array hardware mirroring or snapshots can be used to shorten backup windows and off-load resources required for backup.
• VMs can also be set up for LAN backup, the same as a regular client or media host. Refer to your data protection and archiving software documentation for details.
Microsoft Hyper-V
Note: Hewlett Packard Enterprise does not test or support Hyper-V VMs with SAN or direct attach tape drives. The Hyper-V host or a backup server can be used to manage such devices.
• The Volume Shadow Copy Service (VSS) Hyper-V writer can be used to quiesce Windows VMs and create a snapshot on the Hyper-V host volume. VMs that cannot be quiesced can be placed in the Saved state before snapshot creation. The snapshots are then used for image or file backup of the VMs. If a VM was placed in the Saved state, Hyper-V will return the VM to its original state. Review your data protection and archiving software documentation for details.
• VMs can also be set up for LAN backup, the same as a regular client. Refer to your data protection and archiving software documentation for details.
• VMs can also be set up as backup servers with locally mapped NAS shares or iSCSI backup devices. Refer to your data protection and archiving software documentation for VM support.
© Copyright 2015 Hewlett Packard Enterprise Development LP. The information contained herein is subject to change without notice. The only warranties for Hewlett Packard Enterprise products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. Hewlett Packard Enterprise shall not be liable for technical or editorial errors or omissions contained herein.
Citrix is a registered trademark of Citrix Systems, Inc. and/or one or more of its subsidiaries, and may be registered in the United States Patent and Trademark Office and in other countries. Linux is the registered trademark of Linus Torvalds in the U.S. and other countries. McAfee is a trademark or registered trademark of McAfee, Inc. in the United States and other countries. Microsoft, Windows, and Windows Server are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries. Oracle is a registered trademark of Oracle and/or its affiliates. Red Hat is a registered trademark of Red Hat, Inc. in the United States and other countries. SAP HANA is the trademark or registered trademark of SAP SE in Germany and in several other countries. VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions.
4AA5-7983ENW, December 2015
Installing backup software and patches
After all components on the SAN are logged in and configured, the system is ready for the installation of any supported backup software. Refer to the installation guide for your particular software package, or contact the vendor for detailed installation procedures and requirements. After installing the backup software, check with the software vendor for the latest updates and patches. If any updates or patches exist for your backup software, install them now.
Learn more at hpe.com/storage/StoreEver
Tape devices not discovered and configured across server reboots
Tape drives can disappear from Linux servers after the host reboots when using older versions of the HBA drivers. Adding the line "hp_rescan -a" to /etc/rc.d/rc.local resolves the issue. The hp_rescan utility was previously included and installed with older versions of the HPE Linux FCoE/FC Driver Kit.
Note: The latest versions of the Fibre Channel Enablement for Linux kits no longer include the HPE fibreutils package, which contains the hp_rescan utility. The fibreutils package can be downloaded and installed manually by following steps 1-6 from the section Downloading and installing the Fibre Channel Enablement Kit for Linux. For step 5, select Software - Storage Controllers - FC HBA.
This issue, which affects Red Hat installations and, intermittently, some SUSE Linux installations, is understood to be an issue with the mid-layer SCSI driver and its interaction with SCSI-2 tape automation products. The permanent resolution is to upgrade to the latest FC driver kit.
Enable iSCSI target devices to remain persistent across system reboots
To enable the iSCSI target devices to remain persistent across system reboots, the open-iscsi service must be configured to run at system startup. This can be done by issuing the following commands:
For Red Hat 7 and SUSE 12:
systemctl enable iscsid.service
systemctl restart iscsid.service
For earlier versions of Red Hat:
chkconfig iscsi on
For earlier versions of SUSE:
chkconfig open-iscsi on
To verify that this configuration change has been accepted, run the following command:
For Red Hat 7 and SUSE 12:
systemctl -a | grep iscsi
iscsi-shutdown.service error inactive dead Logout off all iSCSI sessions on shutdown
iscsi.service loaded inactive dead Login and scanning of iSCSI devices
iscsid.service loaded active running Open-iSCSI
iscsiuio.service loaded active running iSCSI UserSpace I/O driver
iscsid.socket loaded active running Open-iSCSI iscsid Socket
iscsiuio.socket loaded active running Open-iSCSI iscsiuio Socket
For earlier versions of Red Hat:
chkconfig --list iscsi
iscsi 0:off 1:off 2:off 3:on 4:off 5:on 6:off
SUSE servers:
chkconfig --list open-iscsi
open-iscsi 0:off 1:off 2:off 3:on 4:off 5:on 6:off
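Because the enable commands above differ by init system, a small wrapper can print the right ones for a given platform. This is an illustrative sketch only, not an HPE-supplied tool; the labels "systemd", "redhat-sysv", and "suse-sysv" are assumptions for the example.

```shell
# Print the boot-time enable commands for the iSCSI initiator service.
# The platform labels are illustrative, not standard identifiers.
iscsi_enable_cmds() {
    case "$1" in
        systemd)     printf '%s\n' "systemctl enable iscsid.service" \
                                   "systemctl restart iscsid.service" ;;
        redhat-sysv) printf '%s\n' "chkconfig iscsi on" ;;
        suse-sysv)   printf '%s\n' "chkconfig open-iscsi on" ;;
        *)           return 1 ;;   # unknown platform label
    esac
}
```

For example, `iscsi_enable_cmds systemd` lists the two systemd commands shown above.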
LUNs shifting after reboot
The Linux 2.6 kernel and later enhanced the management of attached devices through the introduction of udev. The udev device manager provides users with a persistent naming process for all devices across reboots. For details on how to configure udev, refer to the appropriate Linux distribution documentation.
If your data protection and archiving software requires persistent device mapping, use the software's device configuration wizard to ensure proper configuration.
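To illustrate the udev approach, the rule below is a sketch only: the file name, match keys, and symlink layout are assumptions, and recent distributions already ship persistent /dev/tape/by-id links out of the box. It creates serial-number-based symlinks for SCSI tape devices so device ordering changes across reboots do not affect backup configurations.

```
# /etc/udev/rules.d/98-tape-by-serial.rules (illustrative example only)
# Create stable /dev/tape/by-serial/<SERIAL> symlinks for SCSI tape
# devices so backup software does not depend on st device ordering.
KERNEL=="st*[0-9]", SUBSYSTEMS=="scsi", ENV{ID_SERIAL}=="?*", SYMLINK+="tape/by-serial/$env{ID_SERIAL}"
```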
Recommended changes to queue depth and timeout values
Changes to queue depth and timeout values are recommended when operating HPE StoreEver ESL G3 Tape Libraries and HPE StoreEver MSL Tape Libraries with Linux-based hosts. Recommended changes are as follows:
• Queue depth
The queue depth when operating the MSL6480 or MCB Version 1 ESL G3 libraries should be set to one (1), as there is only one active robot to complete Move Medium commands. With dual-robot MCB Version 2 ESL G3 libraries, the queue depth should be set to two (2), as the library has two active robots that can complete Move Medium commands.
HPE LTO drives are capable of handling command queues of four or five commands, but if hosts continue to send commands past that amount, the drive (or the library being hosted by that drive) will start to return status messages saying that the queue is full and the host should wait 500 ms. If the host doesn't stop sending commands at this point, the delays in returning status for commands can be long enough that the drive appears hung. As such, care should be taken to ensure that the queue depth is set correctly to avoid this scenario, preferably by using the recommended queue depths provided above.
With Linux-based hosts, this command shows the queue depth set for each generic SCSI device:
find /sys/class/scsi_generic/*/device/queue_depth -exec grep -H "" {} \;
If required, the sg_inq command can help correlate SCSI generic mapping to specific devices:
sg_inq /dev/sg<N>
Note: <N> is the sg number provided in the output from the previous command.
• Timeouts
Most applications increase the timeout value for motion commands but continue to rely on the default timeout value for Read Element Status commands. This can especially be of concern in larger partitioned libraries, where multiple Read Element Status commands to the different partitions are outstanding and the default timeout value is encountered before a response is provided. Depending on whether udev rules are in effect, the default timeout value on Linux-based hosts tends to be either thirty (30) or sixty (60) seconds. Given all of the above, it is recommended that the default timeout value be changed to twenty (20) minutes on all Linux-based hosts operating HPE LTO drives in HPE StoreEver ESL G3 Tape Libraries and HPE StoreEver MSL Tape Libraries, to allow multiple commands to complete successfully without hitting the default timeout value.
With Linux-based hosts, this command shows the current default timeout value:
find /sys/class/scsi_generic/*/device/timeout -exec grep -H "" {} \;
If required, the sg_inq command can help correlate SCSI generic mapping to specific devices:
sg_inq /dev/sg<N>
Note: <N> is the sg number provided in the output from the previous command.
The detailed procedure for making the recommended changes to the command queuing and default timeout values for Linux-based hosts can be viewed in the following Engineering Advisory: HPE StoreEver ESL G3 Tape Libraries, MSL Tape Libraries, and 1/8 G2 Autoloaders - Recommended Changes to Queue Depth and Timeout Values With Linux-Based Hosts.
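As a sketch of how both recommendations could be applied in one pass (an illustration only, not the Engineering Advisory procedure; the function name and the injectable sysfs root are assumptions added for testability), the sysfs attributes named above can be written in a loop:

```shell
#!/bin/sh
# Sketch: apply a queue depth and timeout to every generic SCSI device
# under a sysfs-style tree. The third argument lets the logic be
# exercised against a test directory; on a live host it would be /sys.
# Values follow the recommendations above, e.g. queue depth 1 for
# single-robot libraries and 1200 s (20 minutes) for the timeout.
apply_scsi_tuning() {
    depth="$1" timeout_s="$2" sysroot="${3:-/sys}"
    for dev in "$sysroot"/class/scsi_generic/*/device; do
        [ -d "$dev" ] || continue        # glob matched nothing
        echo "$depth" > "$dev/queue_depth"
        echo "$timeout_s" > "$dev/timeout"
    done
}
```

On a live host this would be run as root, for example `apply_scsi_tuning 1 1200`; note that values written to sysfs do not persist across reboots unless reapplied by a boot script or udev rule.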
HP-UX Server
Installing HBA drivers in the kernel
HP-UX 11i v2 (11.23, IA-64)
1. The drivers schgr, sctl, and stape must all be installed in the kernel. To see if these drivers are installed, enter the following command:
/usr/sbin/kcmodule schgr sctl stape
The following example shows output from kcmodule where the stape driver is not installed:
Module State Cause
schgr static explicit
sctl static depend
stape unused
If one or more of the above drivers is in the unused state, they must be installed in the kernel. If they are all installed (static state), proceed to the next section, Final host configurations.
2. Use kcmodule to install modules in the kernel. For example, to install the stape module, run the following command:
/usr/sbin/kcmodule stape=static
Enter yes to back up the current kernel configuration file and initiate the new kernel build.
3. Reboot the server to activate the new kernel:
cd /
/usr/bin/shutdown -r now
HP-UX 11i v3 (11.31, IA-64)
1. The drivers schgr, sctl, stape, eschgr, esctl, and estape must all be installed in the kernel. To see if these drivers are installed, enter the following command:
/usr/sbin/kcmodule sctl esctl schgr eschgr stape estape
The following example shows output from kcmodule where the stape driver is not installed:
Module State Cause
esctl static best
sctl static depend
schgr static best
eschgr static best
stape unused
estape static best
If one or more of the above drivers is in the unused state, they must be installed in the kernel. If they are all installed (static state), proceed to the next section, Final host configurations.
2. Use kcmodule to install modules in the kernel. For example, to install the stape module, use the following command:
/usr/sbin/kcmodule stape=static
Enter yes to back up the current kernel configuration file and initiate the new kernel build.
3. Reboot the server to activate the new kernel:
cd /
/usr/bin/shutdown -r now
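A small sketch (the helper name is an assumption, not an HP-UX tool) can filter kcmodule output such as the samples above down to the modules still needing installation:

```shell
# Sketch: read `kcmodule` output on stdin and print the modules in the
# "unused" state, i.e. those that still need `kcmodule <mod>=static`
# followed by a kernel rebuild and reboot.
unused_modules() {
    awk 'NR > 1 && $2 == "unused" { print $1 }'
}
```

For example, `/usr/sbin/kcmodule sctl esctl schgr eschgr stape estape | unused_modules` would list only the drivers left to install.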
Installing the HP-UX iSCSI Software Initiator
The iSCSI Software Initiator is located at the HPE Software Depot.
1. Go to software.hp.com.
2. Enter iSCSI Software Initiator in the Search Software Depot box located on the upper right side of the website.
3. When the search results show iSCSI Software Initiator, click Select. An HPE Passport account (a sign-in link is provided) is required.
4. After logging in using your HPE Passport, complete the required fields, scroll down, then read and accept the software license agreement for the order. Click Next.
5. Under Documentation, click the Download tab for the Installation Instructions to download instructions for using the Software Distributor tool to install the iSCSI Software Initiator.
6. Under Software, click the Download tab for the iSCSI Software Initiator version that you would like to download.
7. After installing the iSCSI Software Initiator and rebooting, you can verify that the installation was successful by running the following command:
swlist iSCSI-00
If the HP-UX iSCSI Software Initiator is installed correctly, the output will be:
HP-UX 11.23:
# Initializing...
# Contacting target "localhost"...
# Target: localhost
iSCSI-00 B.11.23.03e HP-UX iSCSI Software Initiator
iSCSI-00.ISCSI-SWD B.11.23.03e HP-UX iSCSI Software Initiator
HP-UX 11.31:
# Initializing...
# Contacting target "localhost"...
# Target: localhost
iSCSI-00 B.11.31.01 HP-UX iSCSI Software Initiator
iSCSI-00.ISCSI-SWD B.11.31.01 HP-UX iSCSI Software Initiator
Final host configurations
1. Run ioscan to verify that the host detects the tape devices:
ioscan
For HP-UX 11.23 legacy device special files (DSFs) or persistent DSFs, run the following commands:
ioscan -fnkC tape
ioscan -fnkC autoch
2. For HP-UX 11.31 persistent DSFs, run the following commands:
ioscan -fnNkC tape
ioscan -fnNkC autoch
Note: Some data protection and archiving software might not currently support HP-UX 11.31 persistent DSFs for tape. Review your data protection and archiving software documentation for more information.
3. To verify that the host detects iSCSI devices, issue the ioscan command as follows for HP-UX 11.23:
ioscan -fnC iscsi
Issue the ioscan command as follows for HP-UX 11.31:
ioscan -fnNC iscsi
If the software is installed correctly, the generated output will look similar to this:
Class I HW Path Driver S/W State H/W Type Description
=====================================================================
iscsi 0 255/0 iscsi CLAIMED VIRTBUS iSCSI Virtual Node
4. If no device files have been installed, enter the following commands:
insf -C tape -e
insf -C autoch -e
The command line tool for configuring the HP-UX iSCSI Software Initiator is iscsiutil. Detailed instructions for iscsiutil can be found in the iscsiutil man pages. If using iscsiutil to configure the HP-UX iSCSI Software Initiator, add the path for iscsiutil and other iSCSI executables to the root path:
PATH=$PATH:/opt/iscsi/bin
You should now be able to find the iSCSI initiator node for the HP-UX host:
iscsiutil -l
To discover available iSCSI target devices on an HPE Storage System, type the following command, where x.x.x.x is the IP address of the HPE Storage System:
iscsiutil -aI x.x.x.x
Installing the HPE StoreEver Tape advanced path failover drivers for HP-UX 11.31
Advanced path failover for HP-UX is implemented by updating HP-UX drivers to support advanced path failover with the LTO-6 tape drives. The drivers function as both failover and non-failover drivers.
The updated drivers are:
• HP-UX tape driver (estape) - used for data path failover
• HP-UX media changer driver (eschgr) - used for control path failover
• HP-UX SCSI stack driver (esctl) - used for data path and control path failover
To download and install the HPE StoreEver Tape advanced path failover drivers for HP-UX 11.31:
1. Get the latest HP-UX patches from h20566.www2.hp.com/portal/site/hpsc/patch/home.
Note: To access and download HP-UX patches, you must have:
1. An HPE Passport account (a sign-in link is provided).
2. An active HPE support agreement linked to your HPE Support Center profile. The active Hewlett Packard Enterprise support agreement must:
- Cover the specific HPE Operating Systems linked to your HPE Support Center user profile.
- Include software updates or previous version support privileges.
Links are provided to view your current patch access privileges or to contact Hewlett Packard Enterprise.
2. To locate the patches, search for estape, eschgr, and esctl (or the patch number), and then look at the Prepby field to see if there is a superseding patch.
3. To install the advanced path failover drivers, use the standard HP-UX kernel patch installation process to install the following patches on the HP-UX Servers running HP-UX 11.31:
- HP-UX tape driver patch (estape): PHKL_43680 or superseding patch
- HP-UX media changer driver patch (eschgr): PHKL_43681 or superseding patch
- HP-UX SCSI stack (mass storage stack) driver patch (esctl): PHKL_43819 or superseding patch
4. The server will automatically reboot as part of the installation process.
You can use ioscan to view the tape and library (media changer) devices connected to the HP-UX Server. The device special file (DSF) is listed as the last item in the description:
ioscan -knNfC tape
ioscan -knNfC autoch
Finding the lockdown path
The load-balance policy used to route data on multiple paths to a tape drive or library is called the "path_lockdown" policy. Use the scsimgr get_info command to see the current lockdown path for a library. For example:
scsimgr get_info -D /dev/rchgr/autoch35
STATUS INFORMATION FOR LUN /dev/rchgr/autoch38
...
LUN path used when policy is path_lockdown = 0/0/0/9/0/0/1.0x50014382c6e4f009.0x1000000000000
scsimgr get_attr -D /dev/rtape/tape28_BEST
SCSI ATTRIBUTES FOR LUN /dev/rtape/tape28_BEST
name = lpt_lockdown
current = 0/0/0/9/0/0/0.0x100000e00222a6c1.0x2000000000000
default =
saved =
For additional information, see the HP-UX man pages scsimgr(1M), ioscan(1M), mknod(2), mksf(1M), and rmsf(1M).
Troubleshooting advanced path failover for HP-UX 11.31
Advanced path failover errors are logged in the /var/adm/syslog/syslog.log file as part of the default SCSI I/O tracing function of HP-UX. You can use standard file viewing commands, including cat, vi, dmesg, and tail -f, to view syslog.log.
Enabling or disabling advanced path failover for HP-UX 11.31
Advanced path failover is disabled by default. When advanced path failover is disabled, the driver operates as if the device is not capable of using the advanced path failover feature.
When advanced failover is enabled for the library or tape drive, the device resets itself and must be opened using the device special file before the driver will recognize it as an advanced path failover device and use the failover features of the driver. Opening the device is generally done by the host applications.
You can enable or disable advanced path failover using the library web-based interface. For instructions, refer to the following sections of the HPE StoreEver Tape Libraries Failover User Guide:
• Enabling control path failover, under Configuring failover for the HPE StoreEver ESL G3 Tape Libraries
• Enabling data path failover, under Configuring failover for HPE StoreEver MSL6480 Tape Libraries
When advanced path failover is disabled, the passive control paths to the library will go into an error state (NO_HW) in the ioscan(1M) command output. These stale entries do not affect the function of the library. To clear these errors so the device can be accessed using its DSF:
1. On the HP-UX host, run rmsf -H on the lunpath hardware paths that are in the NO_HW state. For example:
rmsf -H 0/4/0/0/0/1.0x50014380023560d4.0x1000000000000
2. Run ioscan -kfNH <HBA path>. For example:
ioscan -kfNH 0/4/0/0/0
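The NO_HW cleanup above can be sketched as a filter over ioscan-style output (an illustration only; the helper name is an assumption, and the column positions mirror the Class/Instance/HW-Path layout shown in the ioscan example earlier, so verify them against your host before piping the result to rmsf):

```shell
# Sketch: read `ioscan -kfN -C lunpath` style output on stdin and print
# the hardware paths (field 3) of entries whose S/W State (field 5)
# is NO_HW, as candidates for `rmsf -H` in step 1 above.
no_hw_paths() {
    awk '$5 == "NO_HW" { print $3 }'
}
```

On a live host this might be combined as `ioscan -kfN -C lunpath | no_hw_paths | xargs -n1 rmsf -H`, after reviewing the list.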
Hewlett Packard Enterprise recommends enabling or disabling advanced path failover only when the library is not opened by any applications. If advanced path failover is disabled while an application is accessing the library, all of the library's lunpaths will go offline and I/O requests to the library will fail.
For more detailed information on installing the HPE StoreEver Tape advanced path failover drivers for HP-UX 11.31 Servers, refer to the HPE StoreEver Tape Libraries Failover User Guide, specifically the section titled Installing and using HP-UX advanced path failover drivers.
HP-UX Server best practices
HP-UX 11.31 can experience poor I/O performance on VxFS file systems due to memory blocking during high system memory usage. The HP-UX 11.31 kernel subsystems and file I/O data cache can consume up to 90 percent of system memory during normal operation. When a heavy file I/O application, such as data protection and archiving software, starts, memory usage can reach close to 100 percent. Under such conditions, if VxFS attempts to allocate additional memory for inode caching, the result can be memory blocking and subsequent poor file I/O performance. In extreme conditions, this scenario can cause data protection and archiving software to time out during file system reads, which could result in backup job failures.
Poor I/O performance resolution
To avoid backup job failures due to memory blocking, modify the kernel tunable parameter vx_ninode. The vx_ninode parameter determines the number of inodes in the inode table, which VxFS uses for caching. By default, the size of the inode cache is decided (auto-tuned) at boot time by VxFS, depending on the amount of physical memory in the machine. When modifying the value of vx_ninode, HPE recommends the following:
Table 4. Tuning vx_ninode
PHYSICAL MEMORY OR KERNEL AVAILABLE MEMORY VXFS INODE CACHE (NUMBER OF INODES)
1 GB 16384
2 GB 32768
3 GB 65536
> 3 GB 131072
To determine the current value of vx_ninode, run the following at the shell prompt:
/usr/sbin/kctune vx_ninode
To set vx_ninode to 32768, run the following command at the shell prompt:
/usr/sbin/kctune vx_ninode=32768
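Table 4 can be encoded as a small helper (a sketch; the function name is an assumption) that returns the recommended vx_ninode value for a given amount of physical memory in whole GB:

```shell
# Sketch encoding Table 4: recommended vx_ninode for a host with the
# given physical memory (whole GB).
vx_ninode_for_mem_gb() {
    mem_gb="$1"
    if   [ "$mem_gb" -le 1 ]; then echo 16384
    elif [ "$mem_gb" -le 2 ]; then echo 32768
    elif [ "$mem_gb" -le 3 ]; then echo 65536
    else                           echo 131072
    fi
}
```

For example, on a 2 GB host, `/usr/sbin/kctune vx_ninode=$(vx_ninode_for_mem_gb 2)` would apply the Table 4 value.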
Note: The kernel tunable parameters filecache_min and filecache_max control the amount of physical memory that can be used for caching file data during system I/O operations. By default, these parameters are automatically determined by the system to balance memory usage between file system I/O intensive processes and other types of processes. The values of these parameters can be lowered to allow a larger percentage of memory to be used for purposes other than file system I/O caching. Whether to modify these parameters depends on the nature of the applications running on the system.
HP-UX 11.23: Disabling rewind-on-close devices with st_san_safe
Turning on the HP-UX 11.23 kernel tunable parameter st_san_safe disables tape DSFs that are rewind-on-close. This prevents utilities like mt from rewinding a tape that is in use by another utility.
Some applications or utilities require rewind-on-close DSFs (for example, the frecover utility that comes with HP-UX). In this case, disabling rewind-on-close devices renders the utility unusable. Most data protection and archiving software, such as HPE Data Protector, can be configured to use SCSI reserve/release, which protects them from rogue rewinds by other utilities. Consider the requirements of your data protection and archiving environment when determining whether to enable st_san_safe.
To determine if rewind-on-close devices are currently disabled, enter:
/usr/sbin/kctune st_san_safe
If the value of st_san_safe is 1, rewind-on-close devices are disabled. If the value is 0, rewind-on-close devices are enabled. To disable rewind-on-close devices, enter:
/usr/sbin/kctune st_san_safe=1
Oracle Solaris Server
Solaris 10 Update 11 (SPARC), Solaris 10 Update 11 (x86/x64), Solaris 11.2 (SPARC), Solaris 11.2 (x64)
How to enable the iSCSI Software Initiator
1. For Solaris 10 Update 11 (SPARC) and 11.2 (SPARC), enable the iSCSI services using the command:
svcadm enable network/iscsi/initiator
2. Verify the iSCSI services are running:
svcs -a | grep 'iscsi/initiator'
online 10:10:28 svc:/network/iscsi/initiator:default
3. For earlier versions of Solaris, enable the iSCSI services using the command:
svcadm -v enable iscsi_initiator
svc:/network/iscsi_initiator:default enabled
4. Verify the iSCSI services are running:
svcs -a | grep iscsi_initiator
online 10:10:28 svc:/network/iscsi_initiator:default
The command line tool for configuring the Solaris iSCSI Software Initiator is iscsiadm. Detailed instructions for iscsiadm can be found in the iscsiadm man pages. If using iscsiadm to configure the Solaris iSCSI Software Initiator, run the following command to find the iSCSI initiator node for the Solaris host:
iscsiadm list initiator-node
To discover available iSCSI target devices on an HPE Storage System, type the following commands, where x.x.x.x is the IP address of the HPE Storage System:
iscsiadm add discovery-address x.x.x.x:3260
iscsiadm list discovery-address
Discovery Address: x.x.x.x:3260
iscsiadm modify discovery -t enable
iscsiadm list discovery
List the configured iSCSI target devices using the following command:
iscsiadm list target
Oracle Solaris Server best practices
Troubleshooting with the cfgadm utility
• Getting the status of FC devices using cfgadm:
cfgadm -al
Example output for the above command shows a media changer at LUN 0 for the 100000e0022229fa9 WWN, and tape and disk devices at LUN 0 for other WWNs. The devices are connected, have been configured, and are ready for use.
The cfgadm -al -o show_FCP_dev command can be used to show the devices for all LUNs of each Ap_Id.
• Fixing a device with an "unusable" condition
If the condition field of a device in the cfgadm output is "unusable", the device is in a state in which the server cannot use it. This may have been caused by a hardware issue. In this case, do the following to resolve the issue:
- Resolve the hardware issue so the device is available to the server.
- After the hardware issue has been resolved, use the cfgadm utility to verify device status and to mend the status if necessary.
• Use cfgadm to get device status:
cfgadm -al
- For a device that is "unusable", use cfgadm to unconfigure the device and then re-configure it. For example (this is an example only; your device WWN will be different):
cfgadm -c unconfigure c4::100000e0022286ec
cfgadm -f -c configure c4::100000e0022286ec
- Use cfgadm again to verify that the condition of the device is no longer "unusable":
cfgadm -al
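The unusable-device check above can be sketched as a filter over cfgadm-style output (an illustration only; the helper name is an assumption, and the column position of the condition field should be verified against your host's cfgadm output before acting on the list):

```shell
# Sketch: read `cfgadm -al` style output on stdin and print the Ap_Ids
# whose final (condition) column reads "unusable", as candidates for
# the unconfigure/configure cycle shown above.
unusable_apids() {
    awk '$NF == "unusable" { print $1 }'
}
```

For example, `cfgadm -al | unusable_apids` would list only the attachment points needing attention.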
IBM AIX Server
AIX 6.1 (TL9), AIX 7.1 (TL3)
Installing the HBA device driver
Install the IBM (5729/5735/5273/5758/5759/5773/5774) HBA and restart the server.
1. Ensure that the HBA is recognized. At the shell prompt, type:
lsdev -Cc adapter
There is a line in the output similar to the following:
fcs0 Available 1D-08 FC Adapter
If the adapter is not recognized, check that the correct HBA fileset (driver) is installed:
6228: lslpp -L | grep devices.pci.df1000f7
6239: lslpp -L | grep devices.pci.df1080f9
5716: lslpp -L | grep devices.pci.df1000fa
5759: lslpp -L | grep devices.pci.df1000fd
5773: lslpp -L | grep devices.pciex.df1000fe
5774: lslpp -L | grep devices.pciex.df1000fe
There are lines in the output for lslpp similar to the following for a 6239 HBA:
devices.pci.df1080f9.diag 5.1.0.1 C F PCI-X FC Adapter Device
devices.pci.df1080f9.rte 5.1.0.1 C F PCI-X FC Adapter Device
2. For information about the HBA, such as the WWN, execute the following command:
lscfg -vl fcs0
The output will look similar to the following:
3. To see the version of microcode (firmware) being run by the HBA, use the following command:
lsmcode -c -d <device>
Microcode and other updates can be found at ibm.com/support/fixcentral.
4. After the HBA has successfully logged into the SAN fabric and the necessary zoning is configured, configure the HBA and devices within the fabric. At the prompt, type:
cfgmgr -l <devicename> -v
Note: Running cfgmgr without a -l argument may generate a "devices.fcp.changer" error. This is a result of the cfgmgr device scan receiving a response from the auto-changer device, for which AIX does not have a specific driver. Under these conditions, the error message does not indicate a problem and is for information only.
5. Within the command, <devicename> is the name from the output of the lsdev command in step 1, such as fcs0.
6. To ensure all tape device files are available, at the prompt type:
lsdev -HCc tape
7. By default, AIX creates tape devices with a fixed block length. To change the devices to have variable block lengths, at the prompt type:
chdev -l <tapedevice> -a block_size=0
8. Configuration of the tape devices (where tape devices are rmt0, rmt1, and so on) is complete.
Note: HPE LTO tape drives use the IBM host tape driver. When properly configured, a device listing will show the tape device as follows. For IBM native HBAs: Other FC SCSI Tape Drive
For non-IBM native HBAs: Other SCSI Tape Drive
9. To configure Fast I/O Failure for Fibre Channel devices after link events in the SAN, change the fast fail parameter as in the example below:
chdev -l fscsiN -a fc_err_recov=fast_fail
Within the command, the N in fscsiN is the same number as in the name from the output of the lsdev command in step 1 (e.g., fcs0 would be fscsi0).
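Step 7 above is often applied to every tape device at once. As a sketch (the helper name is an assumption), the chdev commands can be generated rather than run, reading device names such as rmt0 from stdin, e.g. from `lsdev -Cc tape -F name`, so the list can be reviewed before execution:

```shell
# Sketch: emit (do not execute) the chdev commands that switch each
# tape device named on stdin to variable block length, per step 7.
gen_blocksize_cmds() {
    while read -r dev; do
        [ -n "$dev" ] && echo "chdev -l $dev -a block_size=0"
    done
}
```

On a live host, the reviewed output could then be piped to sh, for example `lsdev -Cc tape -F name | gen_blocksize_cmds | sh`.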
IBM AIX Server best practices
Persistent binding
To prevent device shifting after a host reboot, you can enable the dynamic tracking feature in earlier versions of AIX. AIX 7 and above should have this parameter enabled by default.
To enable dynamic tracking of FC devices, set the attribute dyntrk=yes, as shown in the example:
chdev -l fscsiN -a dyntrk=yes
Within the command, the N in fscsiN is the same number as in the name from the output of the lsdev command in step 1 (e.g., fcs0 would be fscsi0).
Note: For an IBM Virtual I/O Server (VIOS) running AIX logical partitions (LPARs), when using N_Port ID Virtualization (NPIV) with AIX LPARs, it is strongly recommended to upgrade VIOS to version 2.2.3.4 or greater.
Virtual machine support
VM software is used for partitioning, consolidating, and managing computing resources, allowing multiple unmodified operating systems and their applications to run in VMs that share physical resources. Each VM represents a complete system with processors, memory, networking, storage, and BIOS. See Table 5 for tape and disk support for virtualization products.
Table 5. VM tape/VTL/NAS support
VM Product StoreEver Direct Attached SCSI
StoreEver Direct Attached SAS
StoreEver FC & FCoE SAN
StoreOnce VTL
StoreOnce iSCSI VTL
StoreOnce Catalyst over Ethernet (CoE)
StoreOnce Catalyst over Fibre Channel (CoFC)
StoreOnce NAS
Support Notes
Citrix XenServer Host No No No support statement for tape at this time
Citrix XenServer Guest VM
No Yes Yes No Yes For iSCSI tape devices, the iSCSI Software Initiator must run in the VM operating system.
D2D NAS shares must be accessed directly in the VM operating system, not attached through the hypervisor.
HPEVM Host Yes No Yes Yes Yes No Yes Tape drivemedia changer must not be attached to a guest VM while being used by the host
HPEVM Guest VM Yes No Yes Yes Yes No Yes Tape drivemedia changer must only be attached to a single guest VM at a time
Hyper-V Host Yes Yes Yes Yes Yes No Yes
Hyper-V Guest VM No No No Yes Yes No Yes For iSCSI tapemedia changer devices the iSCSI Software Initiator must run in the VM operating system
D2D NAS shares must be accessed directly in the VM operating system not attached through the hypervisor
VMware ESX Host Yes No No7 No7 No No No Direct attached SCSI tape drive is multi-target only (no multi-LUN) and must not be attached to a VM while being used by the host HPE does not test or validate direct attached SCSI tape on VMware ESXESXi hosts and does not provide support for tape drives and tape libraries in such configurations8
7 SAN tape devices (FC and iSCSI) are not supported directly by VMware ESX host vStorage API for Data Protection use a backup server and VM software snapshots to allow FC
and iSCSI backups 8 For ESX 41 Server tape support see ESX 41 Fibre Channel SAN Configuration Guide For ESX 50 Server tape support see ESXi 50 vSphere Storage Guide For ESX 51 Server tape support see ESXi 51 vSphere Storage Guide For ESX 55 Server tape support see ESXi 55 vSphere Storage Guide
Technical white paper Page 34
Table 5 VM tapeVTLNAS support (continued)
VMware Guest VM Yes No No Yes Yes No9 Yes Direct attached SCSI tape drive is multi-target only (no multi-LUN) and must only be attached to a single VM at a time
For iSCSI tapemedia changer devices the iSCSI Software Initiator must run in the VM operating system
D2D NAS shares must be accessed directly in the VM operating system not attached through the hypervisor
HPE does not test or validate direct attached SCSI tape on VMware guest VMs and does not provide support for tape drives and tape libraries in such configurations
VMware vStorage API for Data Protection
Yes Yes Yes Yes Yes No Yes FC SANs and shared tape devices are limited to a physical backup server
Note: Be sure to do the following:
• Refer to your data protection and archiving software documentation for supported VM backup methods.
• Refer to the VM documentation for supported backup devices.
VMware Server
Note: VMware does not support ESXi SAN attached tape devices. VMware vStorage APIs for Data Protection (VADP) with an off-host backup server can be used to manage SAN devices.
• VADP offloads backup responsibility from ESXi hosts to a dedicated backup server or servers. This reduces the load on ESXi hosts. VADP provides full-image backup and restore capabilities for all VMs, and file-based backups for Microsoft Windows and Linux VMs.
• VMs can also be set up as backup servers with locally mapped NAS shares or iSCSI backup devices. Refer to your data protection and archiving software documentation for VM support.
• VMs can also be set up for LAN backup, the same as a regular client. Refer to your data protection and archiving software documentation for details.
• For recommendations on VMware VM backup and recovery to HPE StoreOnce Backup, go to hpe.com/storage/BURACompatibility, scroll down to Data Agile BURA Solution White Papers, then click on the VMware hyperlink across from White Papers - Databases and Virtual Machines to view the associated white papers.
9. Yes, when using HPE StoreOnce Recovery Manager Central (RMC) only.
HPE Integrity Virtual Machines
Hewlett Packard Enterprise supports, certifies, and sells HPE Integrity Virtual Machines (HPEVM) virtualization software on HPE Integrity servers.
HPEVM is an application installed on an HPE-UX server that allows multiple unmodified operating systems (HPE-UX, Windows, and Linux) and their applications to run in VMs that share physical resources.
The HPE Virtual Server Environment (VSE) for HPE Integrity provides an automated infrastructure that can adapt in seconds with mission-critical reliability. HPE VSE allows you to optimize server utilization in real time by creating virtual servers that can automatically grow and shrink based on business priorities and service-level objectives.
Note: The HPE Integrity VM host and VMs do support FC SAN connected tape, Virtual Library Systems (VLS) devices, and HPE StoreOnce backup systems.
• Off-host backups using HPE storage array hardware mirroring or snapshots can be used to shorten the backup windows and off-load resources required for backup.
• VMs can also be set up for LAN backup, the same as a regular client or media host. Refer to your data protection and archiving software documentation for details.
Microsoft Hyper-V
Note: Hewlett Packard Enterprise does not test or support Hyper-V VMs with SAN or direct attach tape drives. The Hyper-V host or a backup server can be used to manage such devices.
• The Volume Shadow Copy Service (VSS) Hyper-V writer can be used to quiesce Windows VMs and create a snapshot on the Hyper-V host volume. VMs that cannot be quiesced can be placed in the Saved state before snapshot creation. The snapshots are then used for image or file backup of the VMs. If a VM was placed in the Saved state, Hyper-V will return the VM to its original state. Review your data protection and archiving software documentation for details.
• VMs can also be set up for LAN backup, the same as a regular client. Refer to your data protection and archiving software documentation for details.
• VMs can also be set up as backup servers with locally mapped NAS shares or iSCSI backup devices. Refer to your data protection and archiving software documentation for VM support.
© Copyright 2015 Hewlett Packard Enterprise Development LP. The information contained herein is subject to change without notice. The only warranties for Hewlett Packard Enterprise products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. Hewlett Packard Enterprise shall not be liable for technical or editorial errors or omissions contained herein.
Citrix is a registered trademark of Citrix Systems, Inc. and/or one or more of its subsidiaries, and may be registered in the United States Patent and Trademark Office and in other countries. Linux is the registered trademark of Linus Torvalds in the U.S. and other countries. McAfee is a trademark or registered trademark of McAfee, Inc. in the United States and other countries. Microsoft, Windows, and Windows Server are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries. Oracle is a registered trademark of Oracle and/or its affiliates. Red Hat is a registered trademark of Red Hat, Inc. in the United States and other countries. SAP HANA is the trademark or registered trademark of SAP SE in Germany and in several other countries. VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions.
4AA5-7983ENW December 2015
Installing backup software and patches
After all components on the SAN are logged in and configured, the system is ready for the installation of any supported backup software. Refer to the installation guide for your particular software package, or contact the vendor for detailed installation procedures and requirements. After installing the backup software, check with the software vendor for the latest updates and patches. If any updates or patches exist for your backup software, install them now.
Learn more at hpe.com/storage/StoreEver
Recommended changes to queue depth and timeout values
Changes to queue depth and timeout values are recommended when operating HPE StoreEver ESL G3 Tape Libraries and HPE StoreEver MSL Tape Libraries with Linux-based hosts. Recommended changes are as follows:
• Queue depth
The queue depth when operating the MSL6480 or MCB Version 1 ESL G3 libraries should be set to one (1), as there is only one active robot to complete Move Medium commands. With dual-robot MCB Version 2 ESL G3 libraries, the queue depth should be set to two (2), as the library has two active robots that can complete Move Medium commands.
HPE LTO drives are capable of handling command queues of four or five commands, but if hosts continue to send commands past that amount, the drive (or the library being hosted by that drive) will start to return status messages saying that the queue is full and the host should wait 500 ms. If the host doesn't stop sending commands at this point, the delays in returning status for commands can be long enough that the drive appears hung. As such, care should be taken to ensure that the queue depth is the correct length to avoid this scenario, preferably by using the recommended queue depths provided above.
With Linux-based hosts, this command can let you see what the queue depth is set to for each generic SCSI device:
find /sys/class/scsi_generic/*/device/queue_depth -exec grep -H "" {} \;
If required, the sg_inq command can help correlate SCSI generic mapping to specific devices:
sg_inq /dev/sg#
Note: # is the sg number provided in the output from the previous command.
• Timeouts
Most applications increase the timeout value for motion commands but will continue to rely on the default timeout value for Read Element Status commands. This can especially be of concern in larger partitioned libraries, where multiple Read Element Status commands to the different partitions are outstanding and the default timeout value is encountered before a response is provided. Depending on whether udev rules are in effect or not, the default timeout value on Linux-based hosts tends to be either thirty (30) or sixty (60) seconds. Given all of the above, it is recommended that the default timeout value be changed to twenty minutes with all Linux-based hosts operating HPE LTO drives in HPE StoreEver ESL G3 Tape Libraries and HPE StoreEver MSL Tape Libraries, in order to allow multiple commands to complete successfully without hitting the default timeout value.
With Linux-based hosts, this command can let you see what the default timeout value is currently set to:
find /sys/class/scsi_generic/*/device/timeout -exec grep -H "" {} \;
If required, the sg_inq command can help correlate SCSI generic mapping to specific devices:
sg_inq /dev/sg#
Note: # is the sg number provided in the output from the previous command.
The detailed procedure for making the recommended changes to the command queuing and default timeout values for Linux-based hosts can be viewed in the following Engineering Advisory: HPE StoreEver ESL G3 Tape Libraries, MSL Tape Libraries, and 1/8 G2 Autoloaders - Recommended Changes to Queue Depth and Timeout Values With Linux-Based Hosts.
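The two tunings above can be applied in one pass with a small script. The following is an illustrative sketch only, not part of the Engineering Advisory; the helper name and defaults are assumptions, and the sysfs layout is the one referenced above:

```shell
# set_tape_tuning ROOT [QUEUE_DEPTH] [TIMEOUT_SECONDS]
# Writes the given queue depth and timeout into every scsi_generic device
# attribute found under ROOT (use /sys on a live host). Defaults match the
# recommendations above: queue depth 1 and a 20-minute (1200 s) timeout.
set_tape_tuning() {
    root="${1:-/sys}" qdepth="${2:-1}" timeout="${3:-1200}"
    for f in "$root"/class/scsi_generic/*/device/queue_depth; do
        [ -e "$f" ] && echo "$qdepth" > "$f"
    done
    for f in "$root"/class/scsi_generic/*/device/timeout; do
        [ -e "$f" ] && echo "$timeout" > "$f"
    done
}
```

On a live system you would normally restrict this to tape and changer devices rather than every scsi_generic node, and values written this way do not persist across reboots, which is why the Engineering Advisory also discusses udev rules.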
HPE-UX Server
Installing HBA drivers in the kernel
HPE-UX 11i v2 (11.23, IA-64)
1. The drivers schgr, sctl, and stape must all be installed in the kernel. To see if these drivers are installed, enter the following command:
/usr/sbin/kcmodule schgr sctl stape
The following example shows output from kcmodule where the stape driver is not installed:
Module   State    Cause
schgr    static   explicit
sctl     static   depend
stape    unused
If one or more of the above drivers is in the unused state, they must be installed in the kernel. If they are all installed (static state), proceed to the next section, Final host configurations.
2. Use kcmodule to install modules in the kernel. For example, to install the stape module, run the following command:
/usr/sbin/kcmodule stape=static
Enter yes to back up the current kernel configuration file and initiate the new kernel build.
3. Reboot the server to activate the new kernel:
cd /
/usr/bin/shutdown -r now
HPE-UX 11i v3 (11.31, IA-64)
1. The drivers schgr, sctl, stape, eschgr, esctl, and estape must all be installed in the kernel. To see if these drivers are installed, enter the following command:
/usr/sbin/kcmodule sctl esctl schgr eschgr stape estape
The following example shows output from kcmodule where the stape driver is not installed:
Module   State    Cause
sctl     static   best
esctl    static   depend
schgr    static   best
eschgr   static   best
stape    unused
estape   static   best
If one or more of the above drivers is in the unused state, they must be installed in the kernel. If they are all installed (static state), proceed to the next section, Final host configurations.
2. Use kcmodule to install modules in the kernel. For example, to install the stape module, use the following command:
/usr/sbin/kcmodule stape=static
Enter yes to back up the current kernel configuration file and initiate the new kernel build.
3. Reboot the server to activate the new kernel:
cd /
/usr/bin/shutdown -r now
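When several drivers are checked at once, the kcmodule output can be scanned mechanically. The helper below is a hypothetical convenience, not an HPE tool; it prints the modules still in the unused state from kcmodule-style output:

```shell
# unused_modules: reads "Module State Cause" table lines on stdin and
# prints the name of each module whose State column is "unused".
unused_modules() {
    awk 'NR > 1 && $2 == "unused" { print $1 }'
}
```

For example, /usr/sbin/kcmodule sctl esctl schgr eschgr stape estape | unused_modules would list exactly the drivers that still need a kcmodule name=static install.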
Installing the HPE-UX iSCSI Software Initiator
The iSCSI Software Initiator is located at the HPE Software Depot.
1. Go to software.hp.com.
2. Enter iSCSI Software Initiator in the Search Software Depot box located on the upper right side of the website.
3. When the search results show iSCSI Software Initiator, click on Select. An HPE Passport account (a sign-in link is provided) is required.
4. After logging in using your HPE Passport, complete the required fields, scroll down, then read and accept the software license agreement for the order. Click Next.
5. Under Documentation, click on the Download tab for the Installation Instructions to download instructions for using the Software Distributor tool to install the iSCSI Software Initiator.
6. Under Software, click on the Download tab for the iSCSI Software Initiator version that you would like to download.
7. After installing the iSCSI Software Initiator and rebooting, you can verify that the installation was successful by running the following command:
swlist iSCSI-00
If the HPE-UX iSCSI Software Initiator is installed correctly, the output will be:
HPE-UX 11.23:
# Initializing...
# Contacting target "localhost"...
# Target: localhost
iSCSI-00            B.11.23.03e  HPE-UX iSCSI Software Initiator
iSCSI-00.ISCSI-SWD  B.11.23.03e  HPE-UX iSCSI Software Initiator
HPE-UX 11.31:
# Initializing...
# Contacting target "localhost"...
# Target: localhost
iSCSI-00            B.11.31.01   HPE-UX iSCSI Software Initiator
iSCSI-00.ISCSI-SWD  B.11.31.01   HPE-UX iSCSI Software Initiator
Final host configurations
1. Run ioscan to verify that the host detects the tape devices:
ioscan
For HPE-UX 11.23 legacy device special files (DSFs) or persistent DSFs, run the following commands:
ioscan -fnkC tape
ioscan -fnkC autoch
2. For HPE-UX 11.31 persistent DSFs, run the following commands:
ioscan -fnNkC tape
ioscan -fnNkC autoch
Note: Some data protection and archiving software might not currently support HPE-UX 11.31 persistent DSFs for tape. Review your data protection and archiving software documentation for more information.
3. To verify that the host detects iSCSI devices, issue the ioscan command as follows for HPE-UX 11.23:
ioscan -fnC iscsi
Issue the ioscan command as follows for HPE-UX 11.31:
ioscan -fnNC iscsi
If the software is installed correctly, the generated output will look similar to this:
Class  I  HW Path  Driver  SW State  HW Type  Description
=====================================================================
iscsi  0  255/0    iscsi   CLAIMED   VIRTBUS  iSCSI Virtual Node
4. If no device files have been installed, enter the following commands:
insf -C tape -e
insf -C autoch -e
The command line tool for configuring the HPE-UX iSCSI Software Initiator is iscsiutil. Detailed instructions for iscsiutil can be found in the iscsiutil man pages. If using iscsiutil to configure the HPE-UX iSCSI Software Initiator, add the path for iscsiutil and other iSCSI executables to the root path:
PATH=$PATH:/opt/iscsi/bin
You should now be able to find the iSCSI initiator node for the HPE-UX host:
iscsiutil -l
To discover available iSCSI target devices on an HPE Storage System, type the following command, where x.x.x.x is the IP address of the HPE Storage System:
iscsiutil -a -I x.x.x.x
Installing the HPE StoreEver Tape advanced path failover drivers for HPE-UX 11.31
Advanced path failover for HPE-UX is implemented by updating HPE-UX drivers to support advanced path failover with the LTO-6 tape drives. The drivers function as both failover and non-failover drivers.
The updated drivers are:
• HPE-UX tape driver (estape), used for data path failover
• HPE-UX media changer driver (eschgr), used for control path failover
• HPE-UX SCSI stack driver (esctl), used for data path and control path failover
To download and install the HPE StoreEver Tape advanced path failover drivers for HPE-UX 11.31:
1. Get the latest HPE-UX patches from h20566.www2.hp.com/portal/site/hpsc/patch/home.
Note: To access and download HPE-UX patches, you must have:
1. An HPE Passport account (a sign-in link is provided).
2. An active HPE support agreement linked to your HPE Support Center profile. The active Hewlett Packard Enterprise support agreement must:
- Cover the specific HPE operating systems linked to your HPE Support Center user profile
- Include software updates or previous version support privileges
Links are provided to view your current patch access privileges or to contact Hewlett Packard Enterprise.
2. To locate the patches, search for estape, eschgr, and esctl or the patch number, and then look at the Prepby field to see if there is a superseding patch.
3. To install the advanced path failover drivers, use the standard HPE-UX kernel patch installation process to install the following patches on the HPE-UX servers running HPE-UX 11.31:
- HPE-UX tape driver patch (estape): PHKL_43680 or superseding patch
- HPE-UX media changer driver patch (eschgr): PHKL_43681 or superseding patch
- HPE-UX SCSI stack (mass storage stack) driver patch (esctl): PHKL_43819 or superseding patch
4. The server will automatically reboot as part of the installation process.
You can use ioscan to view the tape and library (media changer) devices connected to the HPE-UX server. The device special file (DSF) is listed as the last item in the description, as shown in bold type:
ioscan -knNfC tape
ioscan -knNfC autoch
Finding the lockdown path
The load-balance policy used to route data on multiple paths to a tape drive or library is called the "path lockdown" policy. Use the scsimgr get_info command to see the current lockdown path for a library. For example:
scsimgr get_info -D /dev/rchgr/autoch35
STATUS INFORMATION FOR LUN /dev/rchgr/autoch38
...
LUN path used when policy is path_lockdown = 0/0/0/9/0/0/1.0x50014382c6e4f009.0x1000000000000
scsimgr get_attr -D /dev/rtape/tape28_BEST
SCSI ATTRIBUTES FOR LUN /dev/rtape/tape28_BEST
name = lpt_lockdown
current = 0/0/0/9/0/0/0.0x100000e00222a6c1.0x2000000000000
default =
saved =
For additional information, see the HPE-UX man pages: scsimgr(1M), ioscan(1M), mknod(2), mksf(1M), rmsf(1M).
Troubleshooting advanced path failover for HPE-UX 11.31
Advanced path failover errors are logged in the /var/adm/syslog/syslog.log file as part of the default SCSI I/O tracing function of HPE-UX. You can use standard file viewing commands, including cat, vi, dmesg, and tail -f, to view the syslog.log.
Enabling or disabling advanced path failover for HPE-UX 11.31
Advanced path failover is disabled by default. When advanced path failover is disabled, the driver operates as if the device is not capable of using the advanced path failover feature.
When advanced failover is enabled for the library or tape drive, the device resets itself and must be opened using the device special file before the driver will recognize it as an advanced path failover device and use the failover features of the driver. Opening the device is generally done by the host applications.
You can enable or disable advanced path failover using the library web-based interface. For instructions, refer to the following sections of the HPE StoreEver Tape Libraries Failover User Guide:
• Enabling control path failover, under Configuring failover for the HPE StoreEver ESL G3 Tape Libraries
• Enabling data path failover, under Configuring failover for HPE StoreEver MSL6480 Tape Libraries
When advanced path failover is disabled, the passive control paths to the library will go into an error state (NO_HW) in the ioscan(1M) command output. These stale entries do not affect the function of the library. To clear these errors so the device can be accessed using its DSF:
1. On the HPE-UX host, run rmsf -H on the lunpath hardware paths that are in the NO_HW state. For example:
rmsf -H 0/4/0/0/0/1.0x50014380023560d4.0x1000000000000
2. Run ioscan -kfNH <HBA path>. For example:
ioscan -kfNH 0/4/0/0/0
Hewlett Packard Enterprise recommends only enabling or disabling advanced path failover when the library is not opened by any applications. If advanced path failover is disabled while an application is accessing the library, all of the library's lunpaths will go offline and I/O requests to the library will fail.
For more detailed information related to installing the HPE StoreEver Tape advanced path failover drivers for HPE-UX 11.31 servers, refer to the HPE StoreEver Tape Libraries Failover User Guide, specifically the section titled Installing and using HPE-UX advanced path failover drivers.
HPE-UX Server best practices
HPE-UX 11.31 can experience poor I/O performance on VxFS file systems due to memory blocking during high system memory usage. The HPE-UX 11.31 kernel subsystems and file I/O data cache can consume up to 90 percent of system memory during normal operation. When a heavy file I/O application, such as data protection and archiving software, starts, the memory usage can reach close to 100 percent. Under such conditions, if VxFS attempts to allocate additional memory for inode caching, this can result in memory blocking and subsequent poor file I/O performance. In extreme conditions, this scenario can cause data protection and archiving software to time out during file system reads, which could result in backup job failures.
Poor I/O performance resolution
To avoid backup job failures due to memory blocking, modify the kernel tunable parameter vx_ninode. The vx_ninode parameter determines the number of inodes in the inode table to help VxFS in caching. By default, the size of the inode cache is decided (auto-tuned) at boot time by VxFS, depending on the amount of physical memory in the machine. When modifying the value of vx_ninode, HPE recommends the following:
Table 4. Tuning vx_ninode

Physical memory or kernel available memory   VxFS inode cache (number of inodes)
1 GB                                         16384
2 GB                                         32768
3 GB                                         65536
> 3 GB                                       131072
To determine the current value of vx_ninode, run the following at the shell prompt:
/usr/sbin/kctune vx_ninode
To set vx_ninode to 32768, run the following command at the shell prompt:
/usr/sbin/kctune vx_ninode=32768
Note: The kernel tunable parameters filecache_min and filecache_max control the amount of physical memory that can be used for caching file data during system I/O operations. By default, these parameters are automatically determined by the system to better balance memory usage among the file system I/O intensive processes and other types of processes. The values of these parameters can be lowered to allow a larger percentage of memory to be used for purposes other than file system I/O caching. Determining whether or not to modify these parameters depends on the nature of the applications running on the system.
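The Table 4 recommendations can be expressed as a small helper that picks the recommended vx_ninode value for a given amount of physical memory. This is an illustrative sketch only; the function name is an assumption:

```shell
# vx_ninode_for_gb GB: print the Table 4 recommended VxFS inode cache
# size (number of inodes) for a host with the given physical memory,
# expressed in whole GB.
vx_ninode_for_gb() {
    gb="$1"
    if   [ "$gb" -le 1 ]; then echo 16384
    elif [ "$gb" -le 2 ]; then echo 32768
    elif [ "$gb" -le 3 ]; then echo 65536
    else                       echo 131072
    fi
}
```

For example, on a 2 GB host the value could then be applied with /usr/sbin/kctune vx_ninode=$(vx_ninode_for_gb 2).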
HPE-UX 11.23: Disabling rewind-on-close devices with st_san_safe
Turning on the HPE-UX 11.23 kernel tunable parameter st_san_safe disables tape DSFs that are rewind-on-close. This will prevent utilities like mt from rewinding a tape that is in use by another utility.
Some applications or utilities require rewind-on-close DSFs (for example, the frecover utility that comes with HPE-UX). In this case, disabling rewind-on-close devices renders the utility unusable. Most data protection and archiving software, such as HPE Data Protector, can be configured to use SCSI reserve/release, which protects them from rogue rewinds by other utilities. The requirements of your data protection and archiving environment should be considered when determining whether or not to enable st_san_safe.
To determine if rewind-on-close devices are currently disabled, enter:
/usr/sbin/kctune st_san_safe
If the value of st_san_safe is 1, then rewind-on-close devices are disabled. If the value is 0, then rewind-on-close devices are enabled. To disable rewind-on-close devices, enter:
/usr/sbin/kctune st_san_safe=1
Oracle Solaris Server
Solaris 10 Update 11 (SPARC), Solaris 10 Update 11 (x86/x64), Solaris 11.2 (SPARC), Solaris 11.2 (x64)
How to enable the iSCSI Software Initiator
1. For Solaris 10 Update 11 (SPARC) and 11.2 (SPARC), enable the iSCSI services using the command:
svcadm enable network/iscsi/initiator
2. Verify the iSCSI services are running:
svcs -a | grep 'iscsi/initiator'
online 10:10:28 svc:/network/iscsi/initiator:default
3. For earlier versions of Solaris, enable the iSCSI services using the command:
svcadm -v enable iscsi_initiator
svc:/network/iscsi_initiator:default enabled
4. Verify the iSCSI services are running:
svcs -a | grep iscsi_initiator
online 10:10:28 svc:/network/iscsi_initiator:default
The command line tool for configuring the Solaris iSCSI Software Initiator is iscsiadm. Detailed instructions for iscsiadm can be found in the iscsiadm man pages. If using iscsiadm to configure the Solaris iSCSI Software Initiator, run the following command to find the iSCSI initiator node for the Solaris host:
iscsiadm list initiator-node
To discover available iSCSI target devices on an HPE Storage System, type the following commands, where x.x.x.x is the IP address of the HPE Storage System:
iscsiadm add discovery-address x.x.x.x:3260
iscsiadm list discovery-address
Discovery Address: x.x.x.x:3260
iscsiadm modify discovery -t enable
iscsiadm list discovery
List the configured iSCSI target devices using the following command:
iscsiadm list target
Oracle Solaris Server best practices
Troubleshooting with the cfgadm utility
• Getting the status of FC devices using cfgadm:
cfgadm -al
Example output for the above command:
This output shows a media changer at LUN 0 for the 100000e0022229fa9 WWN, and tape and disk devices at LUN 0 for other WWNs. The devices are connected, have been configured, and are ready for use.
The cfgadm -al -o show_FCP_dev command can be used to show the devices for all LUNs of each Ap_Id.
• Fixing a device with an "unusable" condition:
If the condition field of a device in the cfgadm output is "unusable", then the device is in a state such that the server cannot use the device. This may have been caused by a hardware issue. In this case, do the following to resolve the issue:
- Resolve the hardware issue so the device is available to the server.
- After the hardware issue has been resolved, use the cfgadm utility to verify device status and to mend the status if necessary.
• Use cfgadm to get device status:
cfgadm -al
- For a device that is "unusable", use cfgadm to unconfigure the device and then re-configure the device. For example (this is an example only; your device WWN will be different):
cfgadm -c unconfigure c4::100000e0022286ec
cfgadm -f -c configure c4::100000e0022286ec
- Use cfgadm again to verify that the condition of the device is no longer "unusable":
cfgadm -al
IBM AIX Server
AIX 6.1 (TL9), AIX 7.1 (TL3)
Installing the HBA device driver
Install the IBM (5729, 5735, 5273, 5758, 5759, 5773, 5774) HBA and restart the server.
1. Ensure that the HBA is recognized. At the shell prompt, type:
lsdev -Cc adapter
There is a line in the output similar to the following:
fcs0 Available 1D-08 FC Adapter
If the adapter is not recognized, check that the correct HBA fileset (driver) is installed:
6228: lslpp -L | grep devices.pci.df1000f7
6239: lslpp -L | grep devices.pci.df1080f9
5716: lslpp -L | grep devices.pci.df1000fa
5759: lslpp -L | grep devices.pci.df1000fd
5773: lslpp -L | grep devices.pciex.df1000fe
5774: lslpp -L | grep devices.pciex.df1000fe
There are lines in the output for lslpp similar to the following for a 6239 HBA:
devices.pci.df1080f9.diag  5.1.0.1  C  F  PCI-X FC Adapter Device
devices.pci.df1080f9.rte   5.1.0.1  C  F  PCI-X FC Adapter Device
2. For information about the HBA, such as the WWN, execute the following command:
lscfg -vl fcs0
The output will look similar to the following:
3. To see the version of microcode (firmware) being run by the HBA, use the following command:
lsmcode -c -d <device>
Microcode and other updates can be found at ibm.com/support/fixcentral.
4. After the HBA has successfully logged into the SAN fabric and the necessary zoning is configured, configure the HBA and devices within the fabric. At the prompt, type:
cfgmgr -l <devicename> -v
Note: Running cfgmgr without a -l argument may generate a "devices.fcp.changer" error. This is a result of the cfgmgr device scan receiving a response from the autochanger device, for which AIX does not have a specific driver. Under these conditions, the error message does not indicate a problem and is for information only.
5. Within the command, <devicename> is the name from the output of the lsdev command in step 1, such as fcs0.
6. To ensure all tape device files are available, at the prompt type:
lsdev -HCc tape
7. By default, AIX creates tape devices with a fixed block length. To change the devices to have variable block lengths, at the prompt type:
chdev -l <tapedevice> -a block_size=0
8. Configuration of the tape devices (where tape devices are rmt0, rmt1, and so on) is complete.
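Step 7 can be repeated for every tape device. As an illustrative, dry-run style sketch (the helper name is an assumption, not an AIX or HPE tool), the function below reads device names and emits the corresponding chdev command for each, so the list can be reviewed before being piped to sh on the AIX host:

```shell
# emit_variable_block: for each AIX tape device name read on stdin
# (e.g., rmt0, rmt1), print the chdev command that switches it to
# variable block length (block_size=0).
emit_variable_block() {
    while read -r dev; do
        [ -n "$dev" ] && echo "chdev -l $dev -a block_size=0"
    done
}
```

On AIX this could be driven by the device list from lsdev, for example: lsdev -Cc tape -F name | emit_variable_block | sh.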
Note: HPE LTO tape drives use the IBM host tape driver. When properly configured, a device listing will show the tape device as follows:
For IBM native HBAs: Other FC SCSI Tape Drive
For non-IBM native HBAs: Other SCSI Tape Drive
bull VADP offloads backup responsibility from ESXi hosts to a dedicated backup server or servers This reduces the load on ESXi hosts VADP provides full-image backup and restore capabilities for all VMs and file based backups for Microsoft Windows and Linux VMs
bull VMs can also be set up as backup servers with locally mapped NAS shares or iSCSI backup devices Refer to your data protection and archiving software documentation for VM support
bull VMs can also be set up for LAN backup the same as a regular client Refer to your data protection and archiving software documentation for details
bull For recommendations on VMware VM backup and recovery to HPE StoreOnce Backup go to hpecomstorageBURACompatibility scroll down to Data Agile BURA Solution White Papers then click on the VMware hyperlink across from White PapersmdashDatabases and Virtual Machines to view the associated white papers
9 Yes when using HPE StoreOnce Recovery Manager Central (RMC) only
Technical white paper Page 35
HPE Integrity Virtual Machines Hewlett Packard Enterprise supports certifies and sells HPE Integrity Virtual Machines (HPEVM) Virtualization software on HPE Integrity servers
HPEVM is an application installed on an HPE-UX Server and allows multiple unmodified operating systems (HPE-UX Windows and Linux) and their applications to run in VMs that share physical resources
The HPE Virtual Server Environment (VSE) for HPE Integrity provides an automated infrastructure that can adapt in seconds with mission-critical reliability HPE VSE allows you to optimize server utilization in real time by creating virtual servers that can automatically grow and shrink based on business priorities and service
Note The HPE Integrity VM host and VMs do support FC SAN connected tape Virtual Library Systems (VLS) devices and HPE StoreOnce backup systems
bull Off-host backups using HPE storage array hardware mirroring or snapshots can be used to shorten the backup windows and off-load resources required for backup
bull VMs can also be set up for LAN backup the same as a regular client or media host Refer to your data protection and archiving software documentation for details
Microsoft Hyper-V
Note Hewlett Packard Enterprise does not test or support Hyper-V VMs with SAN or direct attach tape drives The Hyper-V host or a backup server can be used to manage such devices
bull The volume shadow copy service (VSS) Hyper-V writer can be used to quiesce Windows VMs and create a snapshot on the Hyper-V host volume VMs that cannot be quiesced can be placed in the Saved state before snapshot creation The snapshots are then used for image or file backup of the VMs If a VM was placed in the Saved state Hyper-V will return the VM to its original state Review your data protection and archiving software documentation for details
bull VMs can also be set up for LAN backup the same as a regular client Refer to your backup protection and archiving software documentation for details
bull VMs can also be set up as backup servers with locally mapped NAS shares or iSCSI backup devices Refer to your data protection and archiving software documentation for VM support
Technical white paper Page 36
Sign up for updates
Rate this document
copy Copyright 2015 Hewlett Packard Enterprise Development LP The information contained herein is subject to change without notice The only warranties for Hewlett Packard Enterprise products and services are set forth in the express warranty statements accompanying such products and services Nothing herein should be construed as constituting an additional warranty Hewlett Packard Enterprise shall not be liable for technical or editorial errors or omissions contained herein
Citrix is a registered trademark of Citrix Systems Inc andor one more of its subsidiaries and may be registered in the United States Patent and Trademark Office and in other countries Linux is the registered trademark of Linus Torvalds in the US and other countries McAfee is a trademark or registered trademark of McAfee Inc in the United States and other countries Microsoft Windows and Windows Server are either registered trademarks or trademarks of Microsoft Corporation in the United States andor other countries Oracle is a registered trademark of Oracle andor its affiliates Red Hat is a registered trademark of Red Hat Inc in the United States and other countries SAP HANA is the trademark or registered trademark of SAP SE in Germany and in several other countries VMware is a registered trademark or trademark of VMware Inc in the United States andor other jurisdictions
4AA5-7983ENW December 2015
Installing backup software and patches
After all components on the SAN are logged in and configured, the system is ready for installation of any supported backup software. Refer to the installation guide for your particular software package, or contact the vendor for detailed installation procedures and requirements. After installing the backup software, check with the software vendor for the latest updates and patches. If any updates or patches exist for your backup software, install them now.
Learn more at hpe.com/storage/StoreEver
HPE-UX Server
Installing HBA drivers in the kernel
HPE-UX 11i v2 (11.23, IA-64)
1. The drivers schgr, sctl, and stape must all be installed in the kernel. To see if these drivers are installed, enter the following command:
/usr/sbin/kcmodule schgr sctl stape
The following example shows output from kcmodule where the stape driver is not installed:
Module   State    Cause
schgr    static   explicit
sctl     static   depend
stape    unused
If one or more of the above drivers is in the unused state, they must be installed in the kernel. If they are all installed (static state), proceed to the section Final host configurations.
2. Use kcmodule to install modules in the kernel. For example, to install the stape module, run the following command:
/usr/sbin/kcmodule stape=static
Enter yes to back up the current kernel configuration file and initiate the new kernel build.
3. Reboot the server to activate the new kernel:
cd /
/usr/bin/shutdown -r now
HPE-UX 11i v3 (11.31, IA-64)
1. The drivers schgr, sctl, stape, eschgr, esctl, and estape must all be installed in the kernel. To see if these drivers are installed, enter the following command:
/usr/sbin/kcmodule sctl esctl schgr eschgr stape estape
The following example shows output from kcmodule where the stape driver is not installed:
Module   State    Cause
esctl    static   best
sctl     static   depend
schgr    static   best
eschgr   static   best
stape    unused
estape   static   best
If one or more of the above drivers is in the unused state, they must be installed in the kernel. If they are all installed (static state), proceed to the section Final host configurations.
2. Use kcmodule to install modules in the kernel. For example, to install the stape module, use the following command:
/usr/sbin/kcmodule stape=static
Enter yes to back up the current kernel configuration file and initiate the new kernel build.
3. Reboot the server to activate the new kernel:
cd /
/usr/bin/shutdown -r now
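The check-then-install logic of steps 1–2 can be scripted. The following POSIX shell sketch uses a captured sample of kcmodule output (an assumption standing in for running the command on a live HPE-UX host), and only prints the kcmodule commands it would run rather than executing them:

```shell
#!/bin/sh
# Hypothetical sample of `/usr/sbin/kcmodule sctl esctl schgr eschgr stape estape`
# output; on a real HPE-UX 11i v3 host you would run the command itself.
kcmodule_output() {
cat <<'EOF'
Module     State    Cause
esctl      static   best
sctl       static   depend
schgr      static   best
eschgr     static   best
stape      unused
estape     static   best
EOF
}

# Emit the install command for every module that is not yet static.
kcmodule_output | awk 'NR > 1 && $2 != "static" {
    print "/usr/sbin/kcmodule " $1 "=static"
}'
```

With the sample output above, the script prints a single install command for the unused stape module.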
Installing the HPE-UX iSCSI Software Initiator
The iSCSI Software Initiator is located at the HPE Software Depot.
1. Go to software.hp.com.
2. Enter iSCSI Software Initiator in the Search Software Depot box located on the upper right side of the website.
3. When the search results show iSCSI Software Initiator, click Select. An HPE Passport account (a sign-in link is provided) is required.
4. After logging in using your HPE Passport, complete the required fields, scroll down, then read and accept the software license agreement for the order. Click Next.
5. Under Documentation, click the Download tab for the Installation Instructions to download instructions for using the Software Distributor tool to install the iSCSI Software Initiator.
6. Under Software, click the Download tab for the iSCSI Software Initiator version that you would like to download.
7. After installing the iSCSI Software Initiator and rebooting, you can verify that the installation was successful by running the following command:
swlist iSCSI-00
If the HPE-UX iSCSI Software Initiator is installed correctly, the output will be:
HPE-UX 11.23:
  # Initializing...
  # Contacting target "localhost"...
  # Target: localhost
  iSCSI-00             B.11.23.03e  HPE-UX iSCSI Software Initiator
  iSCSI-00.ISCSI-SWD   B.11.23.03e  HPE-UX iSCSI Software Initiator
HPE-UX 11.31:
  # Initializing...
  # Contacting target "localhost"...
  # Target: localhost
  iSCSI-00             B.11.31.01   HPE-UX iSCSI Software Initiator
  iSCSI-00.ISCSI-SWD   B.11.31.01   HPE-UX iSCSI Software Initiator
Final host configurations
1. Run ioscan to verify that the host detects the tape devices.
For HPE-UX 11.23 legacy device special files (DSFs) or persistent DSFs, run the following commands:
ioscan -fnkC tape
ioscan -fnkC autoch
2. For HPE-UX 11.31 persistent DSFs, run the following commands:
ioscan -fnNkC tape
ioscan -fnNkC autoch
Note: Some data protection and archiving software might not currently support HPE-UX 11.31 persistent DSFs for tape. Review your data protection and archiving software documentation for more information.
3. To verify that the host detects iSCSI devices, issue the ioscan command as follows for HPE-UX 11.23:
ioscan -fnC iscsi
Issue the ioscan command as follows for HPE-UX 11.31:
ioscan -fnNC iscsi
If the software is installed correctly, the generated output will look similar to this:
Class   I  H/W Path  Driver  S/W State  H/W Type  Description
=====================================================================
iscsi   0  255/0     iscsi   CLAIMED    VIRTBUS   iSCSI Virtual Node
4. If no device files have been installed, enter the following commands:
insf -C tape -e
insf -C autoch -e
The command line tool for configuring the HPE-UX iSCSI Software Initiator is iscsiutil. Detailed instructions for iscsiutil can be found in the iscsiutil man pages. If using iscsiutil to configure the HPE-UX iSCSI Software Initiator, add the path for iscsiutil and other iSCSI executables to the root path:
PATH=$PATH:/opt/iscsi/bin
You should now be able to find the iSCSI initiator node for the HPE-UX host:
iscsiutil -l
To discover available iSCSI target devices on an HPE Storage System, type the following command, where x.x.x.x is the IP address of the HPE Storage System:
iscsiutil -a -I x.x.x.x
Installing the HPE StoreEver Tape advanced path failover drivers for HPE-UX 11.31
Advanced path failover for HPE-UX is implemented by updating HPE-UX drivers to support advanced path failover with the LTO-6 tape drives. The drivers function as both failover and non-failover drivers.
The updated drivers are:
• HPE-UX tape driver (estape): used for data path failover
• HPE-UX media changer driver (eschgr): used for control path failover
• HPE-UX SCSI stack driver (esctl): used for data path and control path failover
To download and install the HPE StoreEver Tape advanced path failover drivers for HPE-UX 11.31:
1. Get the latest HPE-UX patches from h20566.www2.hpe.com/portal/site/hpsc/patch/home.
Note: To access and download HPE-UX patches you must have:
1. An HPE Passport account (a sign-in link is provided).
2. An active HPE support agreement linked to your HPE Support Center profile. The active Hewlett Packard Enterprise support agreement must:
– Cover the specific HPE Operating Systems linked to your HPE Support Center user profile
– Include software updates or previous version support privileges
Links are provided to view your current patch access privileges or to contact Hewlett Packard Enterprise.
2. To locate the patches, search for estape, eschgr, and esctl or the patch number, and then check the patch details to see if there is a superseding patch.
3. To install the advanced path failover drivers, use the standard HPE-UX kernel patch installation process to install the following patches on the HPE-UX Servers running HPE-UX 11.31:
– HPE-UX tape driver patch (estape): PHKL_43680 or superseding patch
– HPE-UX media changer driver patch (eschgr): PHKL_43681 or superseding patch
– HPE-UX SCSI stack (mass storage stack) driver patch (esctl): PHKL_43819 or superseding patch
4. The server will automatically reboot as part of the installation process.
You can use ioscan to view the tape and library (media changer) devices connected to the HPE-UX Server. The device special file (DSF) is listed as the last item in the description:
ioscan -knNfC tape
ioscan -knNfC autoch
Finding the lockdown path
The load-balance policy used to route data on multiple paths to a tape drive or library is called the "path lockdown" policy. Use the scsimgr get_info command to see the current lockdown path for a library. For example:
scsimgr get_info -D /dev/rchgr/autoch35
STATUS INFORMATION FOR LUN /dev/rchgr/autoch35
...
LUN path used when policy is path_lockdown = 0/0/0/9/0/0/1.0x50014382c6e4f009.0x1000000000000
scsimgr get_attr -D /dev/rtape/tape28_BEST
SCSI ATTRIBUTES FOR LUN /dev/rtape/tape28_BEST
name = lpt_lockdown
current = 0/0/0/9/0/0/0.0x100000e00222a6c1.0x2000000000000
default =
saved =
For additional information, see the HPE-UX man pages scsimgr(1M), ioscan(1M), mknod(2), mksf(1M), and rmsf(1M).
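The lockdown path can also be pulled out of scsimgr output in a script. In this minimal POSIX shell sketch, the captured scsimgr get_attr output (device name and path value) is an assumption standing in for the real command:

```shell
#!/bin/sh
# Hypothetical capture of `scsimgr get_attr -D /dev/rtape/tape28_BEST`;
# run the real command on an HPE-UX 11.31 host instead.
scsimgr_output() {
cat <<'EOF'
SCSI ATTRIBUTES FOR LUN : /dev/rtape/tape28_BEST
name = lpt_lockdown
current = 0/0/0/9/0/0/0.0x100000e00222a6c1.0x2000000000000
default =
saved =
EOF
}

# The lockdown path is the value of the "current" attribute.
lockdown_path=$(scsimgr_output | awk '$1 == "current" { print $3 }')
echo "lockdown path: $lockdown_path"
```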
Troubleshooting advanced path failover for HPE-UX 11.31
Advanced path failover errors are logged in the /var/adm/syslog/syslog.log file as part of the default SCSI I/O tracing function of HPE-UX. You can use standard file viewing commands, including cat, vi, dmesg, and tail -f, to view syslog.log.
Enabling or disabling advanced path failover for HPE-UX 11.31
Advanced path failover is disabled by default. When advanced path failover is disabled, the driver operates as if the device is not capable of using the advanced path failover feature.
When advanced failover is enabled for the library or tape drive, the device resets itself and must be opened using the device special file before the driver will recognize it as an advanced path failover device and use the failover features of the driver. Opening the device is generally done by the host applications.
You can enable or disable advanced path failover using the library web-based interface. For instructions, refer to the following sections of the HPE StoreEver Tape Libraries Failover User Guide:
• Enabling control path failover, under Configuring failover for the HPE StoreEver ESL G3 Tape Libraries
• Enabling data path failover, under Configuring failover for HPE StoreEver MSL6480 Tape Libraries
When advanced path failover is disabled, the passive control paths to the library will go into an error state (NO_HW) in the ioscan(1M) command output. These stale entries do not affect the function of the library. To clear these errors so the device can be accessed using its DSF:
1. On the HPE-UX host, run rmsf -H on the lunpath hardware paths that are in the NO_HW state. For example:
rmsf -H 0/4/0/0/0/1.0x50014380023560d4.0x1000000000000
2. Run ioscan -kfNH <HBA path>. For example:
ioscan -kfNH 0/4/0/0/0
Hewlett Packard Enterprise recommends enabling or disabling advanced path failover only when the library is not opened by any applications. If advanced path failover is disabled while an application is accessing the library, all of the library's lunpaths will go offline and I/O requests to the library will fail.
For more detailed information about installing the HPE StoreEver Tape advanced path failover drivers for HPE-UX 11.31 Servers, refer to the HPE StoreEver Tape Libraries Failover User Guide, specifically the section titled Installing and using HPE-UX advanced path failover drivers.
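Clearing a batch of stale paths can be scripted by filtering ioscan output for NO_HW lunpaths. In this sketch the ioscan lines (hardware paths and WWNs) are assumptions, and the script prints the rmsf commands instead of running them:

```shell
#!/bin/sh
# Hypothetical sample of `ioscan -fnNkC lunpath` output; run the real
# command on the HPE-UX host instead.
ioscan_output() {
cat <<'EOF'
lunpath  12  0/4/0/0/0.0x50014380023560d4.0x1000000000000    eslpt  CLAIMED  LUN_PATH  LUN path for autoch3
lunpath  14  0/4/0/0/0/1.0x50014380023560d4.0x1000000000000  eslpt  NO_HW    LUN_PATH  LUN path for autoch3
EOF
}

# Emit an rmsf command for every lunpath stuck in the NO_HW state.
ioscan_output | awk '$5 == "NO_HW" { print "rmsf -H " $3 }'
```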
HPE-UX Server best practices
HPE-UX 11.31 can experience poor I/O performance on VxFS file systems due to memory blocking during high system memory usage. The HPE-UX 11.31 kernel subsystems and file I/O data cache can consume up to 90 percent of system memory during normal operation. When a heavy file I/O application, such as data protection and archiving software, starts, memory usage can reach close to 100 percent. Under such conditions, if VxFS attempts to allocate additional memory for inode caching, the result can be memory blocking and subsequent poor file I/O performance. In extreme conditions, this scenario can cause data protection and archiving software to time out during file system reads, which could result in backup job failures.
Poor I/O performance resolution
To avoid backup job failures due to memory blocking, modify the kernel tunable parameter vx_ninode. The vx_ninode parameter determines the number of inodes in the inode table, which helps VxFS with caching. By default, the size of the inode cache is decided (auto-tuned) at boot time by VxFS depending on the amount of physical memory in the machine. When modifying the value of vx_ninode, HPE recommends the following:
Table 4. Tuning vx_ninode
Physical memory or kernel available memory    VxFS inode cache (number of inodes)
1 GB                                          16384
2 GB                                          32768
3 GB                                          65536
> 3 GB                                        131072
To determine the current value of vx_ninode, run the following at the shell prompt:
/usr/sbin/kctune vx_ninode
To set vx_ninode to 32768, run the following command at the shell prompt:
/usr/sbin/kctune vx_ninode=32768
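Table 4 can be expressed as a small helper that picks the recommended value for a given memory size. The function name and the 2 GB example host below are assumptions for illustration:

```shell
#!/bin/sh
# Map physical memory (GB) to the recommended vx_ninode value from Table 4.
vx_ninode_for_mem() {
    mem_gb=$1
    if   [ "$mem_gb" -le 1 ]; then echo 16384
    elif [ "$mem_gb" -le 2 ]; then echo 32768
    elif [ "$mem_gb" -le 3 ]; then echo 65536
    else                           echo 131072
    fi
}

# Print the kctune command for a host with 2 GB of memory.
echo "/usr/sbin/kctune vx_ninode=$(vx_ninode_for_mem 2)"
```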
Note: The kernel tunable parameters filecache_min and filecache_max control the amount of physical memory that can be used for caching file data during system I/O operations. By default, these parameters are automatically determined by the system to better balance memory usage among file system I/O intensive processes and other types of processes. The values of these parameters can be lowered to allow a larger percentage of memory to be used for purposes other than file system I/O caching. Whether or not to modify these parameters depends on the nature of the applications running on the system.
HPE-UX 11.23: Disabling rewind-on-close devices with st_san_safe
Turning on the HPE-UX 11.23 kernel tunable parameter st_san_safe disables tape DSFs that are rewind-on-close. This prevents utilities like mt from rewinding a tape that is in use by another utility.
Some applications or utilities require rewind-on-close DSFs (for example, the frecover utility that comes with HPE-UX). In this case, disabling rewind-on-close devices renders the utility unusable. Most data protection and archiving software, such as HPE Data Protector, can be configured to use SCSI reserve/release, which protects them from rogue rewinds by other utilities. Consider the requirements of your data protection and archiving environment when determining whether or not to enable st_san_safe.
To determine if rewind-on-close devices are currently disabled, enter:
/usr/sbin/kctune st_san_safe
If the value of st_san_safe is 1, then rewind-on-close devices are disabled. If the value is 0, then rewind-on-close devices are enabled. To disable rewind-on-close devices, enter:
/usr/sbin/kctune st_san_safe=1
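As a sketch, the current setting can be checked from kctune output. The sample output line below is an assumption; the kctune column layout may differ by release:

```shell
#!/bin/sh
# Hypothetical capture of `/usr/sbin/kctune st_san_safe` output.
kctune_output() {
    printf 'Tunable        Value  Expression\nst_san_safe    1      1\n'
}

# Field 2 of the tunable's row is its current value.
value=$(kctune_output | awk '$1 == "st_san_safe" { print $2 }')
if [ "$value" -eq 1 ]; then
    echo "rewind-on-close devices are disabled"
else
    echo "rewind-on-close devices are enabled"
fi
```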
Oracle Solaris Server
Solaris 10 Update 11 (SPARC), Solaris 10 Update 11 (x86/x64), Solaris 11.2 (SPARC), Solaris 11.2 (x64)
How to enable the iSCSI Software Initiator
1. For Solaris 10 Update 11 (SPARC) and 11.2 (SPARC), enable the iSCSI services using the command:
svcadm enable network/iscsi/initiator
2. Verify the iSCSI services are running:
svcs -a | grep 'iscsi/initiator'
online         10:10:28 svc:/network/iscsi/initiator:default
3. For earlier versions of Solaris, enable the iSCSI services using the command:
svcadm -v enable iscsi_initiator
svc:/network/iscsi_initiator:default enabled
4. Verify the iSCSI services are running:
svcs -a | grep iscsi_initiator
online         10:10:28 svc:/network/iscsi_initiator:default
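The verification in steps 2 and 4 can be automated by checking the service state column. In this sketch the svcs output line is an assumption standing in for the real pipeline on a live Solaris host:

```shell
#!/bin/sh
# Hypothetical capture of `svcs -a | grep 'iscsi/initiator'` output; run
# the real pipeline on the Solaris host instead.
svcs_output() {
    echo "online         10:10:28 svc:/network/iscsi/initiator:default"
}

# Column 1 of svcs output is the service state.
state=$(svcs_output | awk '{ print $1 }')
if [ "$state" = "online" ]; then
    echo "iSCSI initiator service is running"
else
    echo "iSCSI initiator service is not running" >&2
fi
```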
The command line tool for configuring the Solaris iSCSI Software Initiator is iscsiadm. Detailed instructions for iscsiadm can be found in the iscsiadm man pages. If using iscsiadm to configure the Solaris iSCSI Software Initiator, run the following command to find the iSCSI initiator node for the Solaris host:
iscsiadm list initiator-node
To discover available iSCSI target devices on an HPE Storage System, type the following commands, where x.x.x.x is the IP address of the HPE Storage System:
iscsiadm add discovery-address x.x.x.x:3260
iscsiadm list discovery-address
Discovery Address: x.x.x.x:3260
iscsiadm modify discovery -t enable
iscsiadm list discovery
List the configured iSCSI target devices using the following command:
iscsiadm list target
Oracle Solaris Server best practices
Troubleshooting with the cfgadm utility
• Getting the status of FC devices using cfgadm:
cfgadm -al
The example output for this command shows a media changer at LUN 0 for the 100000e0022229fa9 WWN, and tape and disk devices at LUN 0 for other WWNs. The devices are connected, have been configured, and are ready for use.
The cfgadm -al -o show_FCP_dev command can be used to show the devices for all LUNs of each Ap_Id.
• Fixing a device with an "unusable" condition:
If the condition field of a device in the cfgadm output is "unusable", then the device is in a state such that the server cannot use it. This may have been caused by a hardware issue. In this case, do the following to resolve the issue:
– Resolve the hardware issue so the device is available to the server.
– After the hardware issue has been resolved, use the cfgadm utility to verify device status and to mend the status if necessary.
– Use cfgadm to get device status: cfgadm -al
– For a device that is "unusable", use cfgadm to unconfigure the device and then reconfigure it. For example (this is an example only; your device WWN will be different):
cfgadm -c unconfigure c4::100000e0022286ec
cfgadm -f -c configure c4::100000e0022286ec
– Use cfgadm again to verify that the condition of the device is no longer "unusable": cfgadm -al
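The unconfigure/reconfigure cycle can be scripted for every unusable attachment point. In this sketch the cfgadm output (Ap_Ids and conditions) is an assumption, and the commands are printed rather than executed:

```shell
#!/bin/sh
# Hypothetical capture of `cfgadm -al` output; the Ap_Ids and conditions
# are assumptions standing in for a live Solaris host.
cfgadm_output() {
cat <<'EOF'
Ap_Id                 Type       Receptacle  Occupant    Condition
c4                    fc-fabric  connected   configured  unknown
c4::100000e0022286ec  tape       connected   configured  unusable
EOF
}

# Emit the unconfigure/reconfigure pair for every "unusable" device.
cfgadm_output | awk '$5 == "unusable" {
    print "cfgadm -c unconfigure " $1
    print "cfgadm -f -c configure " $1
}'
```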
IBM AIX Server
AIX 6.1 (TL9), AIX 7.1 (TL3)
Installing the HBA device driver
Install the IBM (5729/5735/5273/5758/5759/5773/5774) HBA and restart the server.
1. Ensure that the HBA is recognized. At the shell prompt, type: lsdev -Cc adapter
There is a line in the output similar to the following:
fcs0 Available 1D-08 FC Adapter
If the adapter is not recognized, check that the correct HBA fileset (driver) is installed:
6228: lslpp -L | grep devices.pci.df1000f7
6239: lslpp -L | grep devices.pci.df1080f9
5716: lslpp -L | grep devices.pci.df1000fa
5759: lslpp -L | grep devices.pci.df1000fd
5773: lslpp -L | grep devices.pciex.df1000fe
5774: lslpp -L | grep devices.pciex.df1000fe
There are lines in the output for lslpp similar to the following for a 6239 HBA:
devices.pci.df1080f9.diag  5.1.0.1  C  F  PCI-X FC Adapter Device
devices.pci.df1080f9.rte   5.1.0.1  C  F  PCI-X FC Adapter Device
2. For information about the HBA, such as the WWN, execute the following command: lscfg -vl fcs0
The output will look similar to the following:
3. To see the version of microcode (firmware) being run by the HBA, use the following command: lsmcode -c -d <device>
Microcode and other updates can be found at ibm.com/support/fixcentral.
4. After the HBA has successfully logged into the SAN fabric and the necessary zoning is configured, configure the HBA and devices within the fabric. At the prompt, type: cfgmgr -l <devicename> -v
Note: Running cfgmgr without a -l argument may generate a "devices.fcp.changer" error. This is a result of the cfgmgr device scan receiving a response from the autochanger device, for which AIX does not have a specific driver. Under these conditions, the error message does not indicate a problem and is for information only.
5. Within the command, <devicename> is the name from the output of the lsdev command in step 1, such as fcs0.
6. To ensure all tape device files are available, at the prompt type: lsdev -HCc tape
7. By default, AIX creates tape devices with a fixed block length. To change the devices to have variable block lengths, at the prompt type: chdev -l <tapedevice> -a block_size=0
8. Configuration of the tape devices (where tape devices are rmt0, rmt1, and so on) is complete.
Note: HPE LTO tape drives use the IBM host tape driver. When properly configured, a device listing will show the tape device as follows. For IBM native HBAs: Other FC SCSI Tape Drive. For non-IBM native HBAs: Other SCSI Tape Drive.
9. To configure Fast I/O Failure for Fibre Channel devices after link events in the SAN, change the fast fail parameter as in the example below: chdev -l fscsiN -a fc_err_recov=fast_fail
Within the command, the N in fscsiN is the same number as in the adapter name from the output of the lsdev command in step 1 (e.g., fcs0 would be fscsi0).
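Step 7 can be applied to every tape device at once. In this sketch the lsdev output lines are assumptions, and the chdev commands are printed rather than executed:

```shell
#!/bin/sh
# Hypothetical capture of `lsdev -HCc tape` output; run the real command
# on the AIX host instead.
lsdev_output() {
cat <<'EOF'
rmt0  Available  1D-08-02  Other FC SCSI Tape Drive
rmt1  Available  1D-08-02  Other FC SCSI Tape Drive
EOF
}

# Emit a variable-block-length chdev command for each available drive.
lsdev_output | awk '$2 == "Available" { print "chdev -l " $1 " -a block_size=0" }'
```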
IBM AIX Server best practices
Persistent binding
To prevent device shifting after a host reboot, you can enable the dynamic tracking feature in earlier versions of AIX. AIX 7 and above should have this parameter enabled by default.
To enable dynamic tracking of FC devices, set this attribute to dyntrk=yes, as shown in the example: chdev -l fscsiN -a dyntrk=yes
Within the command, the N in fscsiN is the same number as in the adapter name from the output of the lsdev command in step 1 (e.g., fcs0 would be fscsi0).
Note: For an IBM Virtual I/O Server (VIOS) running AIX logical partitions (LPARs), when using N_Port ID Virtualization (NPIV) with AIX LPARs, it is strongly recommended to upgrade VIOS to version 2.2.3.4 or greater.
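The fcsN-to-fscsiN mapping used in step 9 and in the dyntrk example can be derived in shell; fcs0 here is an assumed adapter name, and the chdev command is printed rather than executed:

```shell
#!/bin/sh
# Derive the fscsiN instance from an fcsN adapter name and emit the
# dynamic-tracking command (printed, not executed, in this sketch).
adapter=fcs0
instance=${adapter#fcs}   # strip the "fcs" prefix, leaving the instance number
echo "chdev -l fscsi${instance} -a dyntrk=yes"
```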
Virtual machine support
VM software is used for partitioning, consolidating, and managing computing resources, allowing multiple unmodified operating systems and their applications to run in VMs that share physical resources. Each VM represents a complete system with processors, memory, networking, storage, and BIOS. See table 5 for tape and disk support for virtualization products.
Table 5. VM tape/VTL/NAS support
VM Product StoreEver Direct Attached SCSI
StoreEver Direct Attached SAS
StoreEver FC & FCoE SAN
StoreOnce VTL
StoreOnce iSCSI VTL
StoreOnce Catalyst over Ethernet (CoE)
StoreOnce Catalyst over Fibre Channel (CoFC)
StoreOnce NAS
Support Notes
Citrix XenServer Host No No No support statement for tape at this time
Citrix XenServer Guest VM
No Yes Yes No Yes For iSCSI tape devices the iSCSI Software Initiator must run in the VM operating system
D2D SAN shares must be accessed directly in the VM operating system not attached through the hypervisor
HPEVM Host Yes No Yes Yes Yes No Yes Tape drive/media changer must not be attached to a guest VM while being used by the host
HPEVM Guest VM Yes No Yes Yes Yes No Yes Tape drive/media changer must only be attached to a single guest VM at a time
Hyper-V Host Yes Yes Yes Yes Yes No Yes
Hyper-V Guest VM No No No Yes Yes No Yes For iSCSI tape/media changer devices the iSCSI Software Initiator must run in the VM operating system
D2D NAS shares must be accessed directly in the VM operating system not attached through the hypervisor
VMware ESX Host Yes No No7 No7 No No No Direct attached SCSI tape drive is multi-target only (no multi-LUN) and must not be attached to a VM while being used by the host. HPE does not test or validate direct attached SCSI tape on VMware ESX/ESXi hosts and does not provide support for tape drives and tape libraries in such configurations8
7. SAN tape devices (FC and iSCSI) are not supported directly by the VMware ESX host; use the vStorage API for Data Protection with a backup server and VM software snapshots to allow FC and iSCSI backups.
8. For ESX 4.1 Server tape support, see the ESX 4.1 Fibre Channel SAN Configuration Guide. For ESX 5.0 Server tape support, see the ESXi 5.0 vSphere Storage Guide. For ESX 5.1 Server tape support, see the ESXi 5.1 vSphere Storage Guide. For ESX 5.5 Server tape support, see the ESXi 5.5 vSphere Storage Guide.
Table 5. VM tape/VTL/NAS support (continued)
VMware Guest VM Yes No No Yes Yes No9 Yes Direct attached SCSI tape drive is multi-target only (no multi-LUN) and must only be attached to a single VM at a time
For iSCSI tape/media changer devices, the iSCSI Software Initiator must run in the VM operating system
D2D NAS shares must be accessed directly in the VM operating system not attached through the hypervisor
HPE does not test or validate direct attached SCSI tape on VMware guest VMs and does not provide support for tape drives and tape libraries in such configurations
VMware vStorage API for Data Protection
Yes Yes Yes Yes Yes No Yes FC SANs and shared tape devices are limited to a physical backup server
Note: Be sure to do the following:
• Refer to your data protection and archiving software documentation for supported VM backup methods.
• Refer to the VM documentation for supported backup devices.
VMware Server
Note: VMware does not support ESXi SAN-attached tape devices. VMware vStorage APIs for Data Protection (VADP) with an off-host backup server can be used to manage SAN devices.
• VADP offloads backup responsibility from ESXi hosts to a dedicated backup server or servers, reducing the load on ESXi hosts. VADP provides full-image backup and restore capabilities for all VMs, and file-based backups for Microsoft Windows and Linux VMs.
• VMs can also be set up as backup servers with locally mapped NAS shares or iSCSI backup devices. Refer to your data protection and archiving software documentation for VM support.
• VMs can also be set up for LAN backup the same as a regular client. Refer to your data protection and archiving software documentation for details.
• For recommendations on VMware VM backup and recovery to HPE StoreOnce Backup, go to hpe.com/storage/BURACompatibility, scroll down to Data Agile BURA Solution White Papers, then click the VMware hyperlink across from White Papers: Databases and Virtual Machines to view the associated white papers.
9. Yes, when using HPE StoreOnce Recovery Manager Central (RMC) only.
HPE Integrity Virtual Machines
Hewlett Packard Enterprise supports, certifies, and sells HPE Integrity Virtual Machines (HPEVM) virtualization software on HPE Integrity servers.
HPEVM is an application installed on an HPE-UX Server that allows multiple unmodified operating systems (HPE-UX, Windows, and Linux) and their applications to run in VMs that share physical resources.
The HPE Virtual Server Environment (VSE) for HPE Integrity provides an automated infrastructure that can adapt in seconds with mission-critical reliability. HPE VSE allows you to optimize server utilization in real time by creating virtual servers that can automatically grow and shrink based on business priorities and service levels.
Note: The HPE Integrity VM host and VMs do support FC SAN-connected tape, Virtual Library Systems (VLS) devices, and HPE StoreOnce Backup systems.
• Off-host backups using HPE storage array hardware mirroring or snapshots can be used to shorten backup windows and offload resources required for backup.
• VMs can also be set up for LAN backup the same as a regular client or media host. Refer to your data protection and archiving software documentation for details.
Microsoft Hyper-V
Note Hewlett Packard Enterprise does not test or support Hyper-V VMs with SAN or direct attach tape drives The Hyper-V host or a backup server can be used to manage such devices
bull The volume shadow copy service (VSS) Hyper-V writer can be used to quiesce Windows VMs and create a snapshot on the Hyper-V host volume VMs that cannot be quiesced can be placed in the Saved state before snapshot creation The snapshots are then used for image or file backup of the VMs If a VM was placed in the Saved state Hyper-V will return the VM to its original state Review your data protection and archiving software documentation for details
bull VMs can also be set up for LAN backup the same as a regular client Refer to your backup protection and archiving software documentation for details
bull VMs can also be set up as backup servers with locally mapped NAS shares or iSCSI backup devices Refer to your data protection and archiving software documentation for VM support
Technical white paper Page 36
Sign up for updates
Rate this document
copy Copyright 2015 Hewlett Packard Enterprise Development LP The information contained herein is subject to change without notice The only warranties for Hewlett Packard Enterprise products and services are set forth in the express warranty statements accompanying such products and services Nothing herein should be construed as constituting an additional warranty Hewlett Packard Enterprise shall not be liable for technical or editorial errors or omissions contained herein
Citrix is a registered trademark of Citrix Systems Inc andor one more of its subsidiaries and may be registered in the United States Patent and Trademark Office and in other countries Linux is the registered trademark of Linus Torvalds in the US and other countries McAfee is a trademark or registered trademark of McAfee Inc in the United States and other countries Microsoft Windows and Windows Server are either registered trademarks or trademarks of Microsoft Corporation in the United States andor other countries Oracle is a registered trademark of Oracle andor its affiliates Red Hat is a registered trademark of Red Hat Inc in the United States and other countries SAP HANA is the trademark or registered trademark of SAP SE in Germany and in several other countries VMware is a registered trademark or trademark of VMware Inc in the United States andor other jurisdictions
4AA5-7983ENW December 2015
Installing backup software and patches After all components on the SAN are logged in and configured the system is ready for the installation of any supported backup software Refer to the installation guide for your particular software package or contact the vendor for detailed installation procedures and requirements After installing the backup software check with the software vendor for the latest updates and patches If any updates or patches exist for your backup software install them now
Learn more at hpecomstorageStoreEver
Technical white paper Page 25
Installing the HPE-UX iSCSI Software Initiator The iSCSI Software Initiator is located at the HPE Software Depot
1 Go to software.hp.com
2 Enter iSCSI Software Initiator in the Search Software Depot box located on the upper right side of the website
3 When the search results show iSCSI Software Initiator, click on Select. An HPE Passport account (a sign-in link is provided) is required.
4 After logging in using your HPE Passport complete the required fields scroll down then read and accept the software license agreement for the order Click Next
5 Under Documentation click on the Download tab for the Installation Instructions to download instructions for using the Software Distributor tool to install the iSCSI Software Initiator
6 Under Software click on the Download tab for the iSCSI Software Initiator version that you would like to download
7 After installing the iSCSI Software Initiator and rebooting you can verify that the installation was successful by running the following command
swlist iSCSI-00
If the HPE-UX iSCSI Software Initiator is installed correctly, the output will resemble the following.
HPE-UX 11.23:
# Initializing...
# Contacting target "localhost"...
# Target: localhost:/
  iSCSI-00            B.11.23.03e  HPE-UX iSCSI Software Initiator
  iSCSI-00.ISCSI-SWD  B.11.23.03e  HPE-UX iSCSI Software Initiator
HPE-UX 11.31:
# Initializing...
# Contacting target "localhost"...
# Target: localhost:/
  iSCSI-00            B.11.31.01   HPE-UX iSCSI Software Initiator
  iSCSI-00.ISCSI-SWD  B.11.31.01   HPE-UX iSCSI Software Initiator
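The verification step above can be scripted as a quick sanity check. This is a hedged sketch: the helper name `check_iscsi00` is hypothetical, and the sample line fed to it is the bundle line quoted above; on a real HPE-UX host you would pipe `swlist iSCSI-00` into the helper instead of the printf.

```shell
# Hypothetical helper: report whether swlist output contains the iSCSI-00 bundle.
check_iscsi00() {
  if grep -q 'iSCSI-00'; then
    echo "iSCSI Software Initiator installed"
  else
    echo "iSCSI Software Initiator NOT installed"
  fi
}

# Sample swlist line from this guide; on HPE-UX use: swlist iSCSI-00 | check_iscsi00
printf '%s\n' 'iSCSI-00  B.11.31.01  HPE-UX iSCSI Software Initiator' | check_iscsi00
```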
Final host configurations
1 Run ioscan to verify that the host detects the tape devices:
ioscan
For HPE-UX 11.23 legacy device special files (DSFs) or persistent DSFs, run the following commands:
ioscan -fnkC tape
ioscan -fnkC autoch
2 For HPE-UX 11.31 persistent DSFs, run the following commands:
ioscan -fnNkC tape
ioscan -fnNkC autoch
Note Some data protection and archiving software might not currently support HPE-UX 11.31 persistent DSFs for tape. Review your data protection and archiving software documentation for more information.
3 To verify that the host detects iSCSI devices, issue the ioscan command as follows for HPE-UX 11.23:
ioscan -fnC iscsi
Issue the ioscan command as follows for HPE-UX 11.31:
ioscan -fnNC iscsi
If the software is installed correctly, the generated output will look similar to this:
Class   I  HW Path  Driver  SW State  HW Type  Description
=====================================================================
iscsi   0  255/0    iscsi   CLAIMED   VIRTBUS  iSCSI Virtual Node
4 If no device files have been installed enter the following command
insf -C tape -e
insf -C autoch -e
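The scan and device-file steps above can be collected into one function. This is a hedged sketch: with `DRYRUN=echo` the commands are only printed so the sequence can be reviewed on any POSIX shell; unset DRYRUN on an actual HPE-UX 11.31 host to execute them. The function name is illustrative.

```shell
# Hedged sketch: wrap the HPE-UX 11.31 device-scan steps in one function.
hpux_device_scan() {
  run=${DRYRUN:-}              # empty = execute, "echo" = dry run
  $run ioscan -fnNkC tape      # step 2: tape devices (persistent DSFs)
  $run ioscan -fnNkC autoch    # step 2: media changers
  $run insf -C tape -e         # step 4: create missing tape device files
  $run insf -C autoch -e       # step 4: create missing changer device files
}

DRYRUN=echo
hpux_device_scan
```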
The command line tool for configuring the HPE-UX iSCSI Software Initiator is iscsiutil. Detailed instructions for iscsiutil can be found in the iscsiutil man pages. If using iscsiutil to configure the HPE-UX iSCSI Software Initiator, add the path for iscsiutil and other iSCSI executables to the root path:
PATH=$PATH:/opt/iscsi/bin
You should now be able to find the iSCSI initiator node for the HPE-UX host:
iscsiutil -l
To discover available iSCSI target devices on an HPE Storage System, type the following command, where xxxx is the IP address of the HPE Storage System:
iscsiutil -aI xxxx
Installing the HPE StoreEver Tape advanced path failover drivers for HPE-UX 11.31
Advanced path failover for HPE-UX is implemented by updating HPE-UX drivers to support advanced path failover with the LTO-6 tape drives. The drivers function as both failover and non-failover drivers.
The updated drivers are:
• HPE-UX tape driver (estape) - used for data path failover
• HPE-UX media changer driver (eschgr) - used for control path failover
• HPE-UX SCSI stack driver (esctl) - used for data path and control path failover
To download and install the HPE StoreEver Tape advanced path failover drivers for HPE-UX 11.31:
1 Get the latest HPE-UX patches from h20566.www2.hp.com/portal/site/hpsc/patch/home
Note To access and download HPE-UX patches you must have
1 An HPE Passport account (a sign-in link is provided)
2 An active HPE support agreement linked to your HPE Support Center profile The active Hewlett Packard Enterprise support agreement must
– Cover the specific HPE Operating Systems linked to your HPE Support Center user profile
– Include software updates or previous version support privileges
Links are provided to view your current patch access privileges or to contact Hewlett Packard Enterprise
2 To locate the patches search for estape eschgr and esctl or the patch number and then look at the Prepby field to see if there is a superseding patch
3 To install the advanced path failover drivers, use the standard HPE-UX kernel patch installation process to install the following patches on HPE-UX Servers running HPE-UX 11.31:
– HPE-UX tape driver patch (estape) - PHKL_43680 or superseding patch
– HPE-UX media changer driver patch (eschgr) - PHKL_43681 or superseding patch
– HPE-UX SCSI stack (mass storage stack) driver patch (esctl) - PHKL_43819 or superseding patch
4 The server will automatically reboot as part of the installation process
You can use ioscan to view the tape and library (media changer) devices connected to the HPE-UX Server. The device special file (DSF) is listed as the last item in the description:
ioscan -knNfC tape
ioscan -knNfC autoch
Finding the lockdown path
The load-balance policy used to route data on multiple paths to a tape drive or library is called the "path_lockdown" policy. Use the scsimgr get_info command to see the current lockdown path for a library. For example:
scsimgr get_info -D /dev/rchgr/autoch35
STATUS INFORMATION FOR LUN /dev/rchgr/autoch38
...
LUN path used when policy is path_lockdown = 00090010x50014382c6e4f0090x1000000000000
Use scsimgr get_attr to see the lockdown path attribute for a tape drive:
scsimgr get_attr -D /dev/rtape/tape28_BEST
SCSI ATTRIBUTES FOR LUN /dev/rtape/tape28_BEST
name = lpt_lockdown
current = 00090000x100000e00222a6c10x2000000000000
default =
saved =
For additional information, see the HPE-UX man pages scsimgr(1M), ioscan(1M), mknod(2), mksf(1M), and rmsf(1M).
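The lockdown path can be pulled out of the scsimgr output with a one-line filter. A hedged sketch: the sample line below is the one quoted above; on a real HPE-UX host, pipe the `scsimgr get_info` output into the awk instead of the printf.

```shell
# Extract the current lockdown path from scsimgr get_info output.
# On HPE-UX: scsimgr get_info -D /dev/rchgr/autoch38 | awk -F' = ' '/path_lockdown/ {print $2}'
printf '%s\n' 'LUN path used when policy is path_lockdown = 00090010x50014382c6e4f0090x1000000000000' |
awk -F' = ' '/path_lockdown/ {print $2}'
```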
Troubleshooting advanced path failover for HPE-UX 11.31
Advanced path failover errors are logged in the /var/adm/syslog/syslog.log file as part of the default SCSI IO tracing function of HPE-UX. You can use standard file viewing commands, including cat, vi, dmesg, and tail -f, to view syslog.log.
Enabling or disabling advanced path failover for HPE-UX 11.31
Advanced path failover is disabled by default. When advanced path failover is disabled, the driver operates as if the device is not capable of using the advanced path failover feature.
When advanced failover is enabled for the library or tape drive the device resets itself and must be opened using the device special file before the driver will recognize it as an advanced path failover device and use the failover features of the driver Opening the device is generally done by the host applications
You can enable or disable advanced path failover using the library web-based interface For instructions refer to the following sections of the HPE StoreEver Tape Libraries Failover User Guide
• Enabling control path failover under Configuring failover for the HPE StoreEver ESL G3 Tape Libraries
• Enabling data path failover under Configuring failover for HPE StoreEver MSL6480 Tape Libraries
When advanced path failover is disabled the passive control paths to the library will go into an error state (NO_HW) in the ioscan (1M) command output These stale entries do not affect the function of the library To clear these errors so the device can be accessed using its DSF
1 On the HPE-UX host, run rmsf -H on the lunpath hardware paths that are in the NO_HW state. For example: rmsf -H 0400010x50014380023560d40x1000000000000
2 Run ioscan -kfNH <HBA path>. For example: ioscan -kfNH 04000
Hewlett Packard Enterprise recommends only enabling or disabling advanced path failover when the library is not opened by any applications. If advanced path failover is disabled while an application is accessing the library, all of the library's lunpaths will go offline and IO requests to the library will fail.
For more detailed information related to installing the HPE StoreEver Tape advanced path failover drivers for HPE-UX 11.31 Servers, refer to the HPE StoreEver Tape Libraries Failover User Guide, specifically the section titled Installing and using HPE-UX advanced path failover drivers.
HPE-UX Server best practices
HPE-UX 11.31 can experience poor IO performance on VxFS file systems due to memory blocking during high system memory usage. The HPE-UX 11.31 kernel subsystems and file IO data cache can consume up to 90 percent of system memory during normal operation. When a heavy file IO application, such as data protection and archiving software, starts, the memory usage can reach close to 100 percent. Under such conditions, if VxFS attempts to allocate additional memory for inode caching, this can result in memory blocking and subsequent poor file IO performance. In extreme conditions, this scenario can cause data protection and archiving software to time out during file system reads, which could result in backup job failures.
Poor IO performance resolution
To avoid backup job failures due to memory blocking, modify the kernel tunable parameter vx_ninode. The vx_ninode parameter determines the number of inodes in the inode table to help VxFS in caching. By default, the size of the inode cache is decided (auto-tuned) at boot time by VxFS, depending on the amount of physical memory in the machine. When modifying the value of vx_ninode, HPE recommends the following:
Table 4 Tuning vx_ninode
PHYSICAL MEMORY OR KERNEL AVAILABLE MEMORY VXFS INODE CACHE (NUMBER OF INODES)
1 GB 16384
2 GB 32768
3 GB 65536
> 3 GB 131072
To determine the current value of vx_ninode, run the following at the shell prompt:
/usr/sbin/kctune vx_ninode
To set vx_ninode to 32768, run the following command at the shell prompt:
/usr/sbin/kctune vx_ninode=32768
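The Table 4 mapping can be expressed as a small helper that picks the recommended cache size for a given memory size in GB. This sketch only encodes the table's thresholds; the function name is illustrative, and the kctune line it prints is the command shown above.

```shell
# Map physical memory (GB) to the vx_ninode value recommended in Table 4.
vx_ninode_for() {
  gb=$1
  if   [ "$gb" -le 1 ]; then echo 16384
  elif [ "$gb" -le 2 ]; then echo 32768
  elif [ "$gb" -le 3 ]; then echo 65536
  else                       echo 131072
  fi
}

# Example for a 2 GB host; on HPE-UX you would then run the printed command.
echo "/usr/sbin/kctune vx_ninode=$(vx_ninode_for 2)"
```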
Note The kernel tunable parameters filecache_min and filecache_max control the amount of physical memory that can be used for caching file data during system IO operations. By default, these parameters are automatically determined by the system to better balance the memory usage among the file system IO intensive processes and other types of processes. The values of these parameters can be lowered to allow a larger percentage of memory to be used for purposes other than file system IO caching. Determining whether or not to modify these parameters depends on the nature of the applications running on the system.
HPE-UX 11.23: Disabling rewind-on-close devices with st_san_safe
Turning on the HPE-UX 11.23 kernel tunable parameter st_san_safe disables tape DSFs that are rewind-on-close. This will prevent utilities like mt from rewinding a tape that is in use by another utility.
Some applications or utilities require rewind-on-close DSFs (for example, the frecover utility that comes with HPE-UX). In this case, disabling rewind-on-close devices renders the utility unusable. Most data protection and archiving software, such as HPE Data Protector, can be configured to use SCSI reserve/release, which protects them from rogue rewinds by other utilities. The requirements of your data protection and archiving environment should be considered when determining whether or not to enable st_san_safe.
To determine if rewind-on-close devices are currently disabled, enter:
/usr/sbin/kctune st_san_safe
If the value of st_san_safe is 1, then rewind-on-close devices are disabled. If the value is 0, then rewind-on-close devices are enabled. To disable rewind-on-close devices, enter:
/usr/sbin/kctune st_san_safe=1
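Interpreting the kctune value can be scripted the same way. A hedged sketch: the helper name is hypothetical, and the sample line mimics kctune's name/value column layout, which may differ slightly by release; on a real HPE-UX 11.23 host, pipe `/usr/sbin/kctune st_san_safe` into the helper.

```shell
# Report whether rewind-on-close DSFs are disabled, from kctune output.
check_st_san_safe() {
  awk '$1 == "st_san_safe" {
    print (($2 == 1) ? "rewind-on-close devices disabled" : "rewind-on-close devices enabled")
  }'
}

# Sample line; on HPE-UX: /usr/sbin/kctune st_san_safe | check_st_san_safe
printf 'st_san_safe 1 1\n' | check_st_san_safe
```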
Oracle Solaris Server
Solaris 10 Update 11 (SPARC), Solaris 10 Update 11 (x86/x64), Solaris 11.2 (SPARC), Solaris 11.2 (x64)
How to enable the iSCSI Software Initiator
1 For Solaris 10 Update 11 (SPARC) and 11.2 (SPARC), enable the iSCSI services using the command:
svcadm enable network/iscsi/initiator
2 Verify the iSCSI services are running:
svcs -a | grep 'iscsi/initiator'
online 10:10:28 svc:/network/iscsi/initiator:default
3 For earlier versions of Solaris, enable the iSCSI services using the command:
svcadm -v enable iscsi_initiator
svc:/network/iscsi_initiator:default enabled
4 Verify the iSCSI services are running:
svcs -a | grep iscsi_initiator
online 10:10:28 svc:/network/iscsi_initiator:default
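The verification in steps 2 and 4 can be automated by checking the svcs state column. A hedged sketch: the helper name is hypothetical and the sample line mirrors the output above; on Solaris, pipe `svcs -a | grep iscsi` into the helper.

```shell
# Check that the iSCSI initiator service reports the "online" state.
iscsi_svc_online() {
  if grep -q '^online'; then
    echo "iSCSI initiator service online"
  else
    echo "iSCSI initiator service NOT online"
  fi
}

# Sample svcs line from this guide; on Solaris: svcs -a | grep iscsi | iscsi_svc_online
printf 'online 10:10:28 svc:/network/iscsi/initiator:default\n' | iscsi_svc_online
```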
The command line tool for configuring the Solaris iSCSI Software Initiator is iscsiadm. Detailed instructions for iscsiadm can be found in the iscsiadm man pages. If using iscsiadm to configure the Solaris iSCSI Software Initiator, run the following command to find the iSCSI initiator node for the Solaris host:
iscsiadm list initiator-node
To discover available iSCSI target devices on an HPE Storage System, type the following commands, where xxxx is the IP address of the HPE Storage System:
iscsiadm add discovery-address xxxx:3260
iscsiadm list discovery-address
Discovery Address: xxxx:3260
iscsiadm modify discovery -t enable
iscsiadm list discovery
List the configured iSCSI target devices using the following command:
iscsiadm list target
Oracle Solaris Server best practices
Troubleshooting with the cfgadm utility
• Getting the status of FC devices using cfgadm:
cfgadm -al
The example output for this command shows a media changer at LUN 0 for the 100000e0022229fa9 WWN, and tape and disk devices at LUN 0 for other WWNs. The devices are connected, have been configured, and are ready for use.
The cfgadm -al -o show_FCP_dev command can be used to show the devices for all LUNs of each Ap_Id.
• Fixing a device with an "unusable" condition
If the condition field of a device in the cfgadm output is "unusable", then the device is in a state such that the server cannot use the device. This may have been caused by a hardware issue. In this case, do the following to resolve the issue:
– Resolve the hardware issue so the device is available to the server.
– After the hardware issue has been resolved, use the cfgadm utility to verify device status and to mend the status if necessary.
• Use cfgadm to get device status: cfgadm -al
– For a device that is "unusable", use cfgadm to unconfigure the device and then re-configure the device. For example (this is an example only; your device WWN will be different):
cfgadm -c unconfigure c4::100000e0022286ec
cfgadm -f -c configure c4::100000e0022286ec
– Use cfgadm again to verify that the condition of the device is no longer "unusable": cfgadm -al
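The unconfigure/configure mend can be generated mechanically from the cfgadm listing. A hedged sketch: the sample line is modeled on the guide's example Ap_Id, and the exact column layout of `cfgadm -al` can vary; on Solaris, feed real `cfgadm -al` output into the awk and run the printed commands after reviewing them.

```shell
# Print the cfgadm commands that would mend each device whose condition
# column is "unusable". Sample line only; on Solaris pipe cfgadm -al instead.
printf '%s\n' 'c4::100000e0022286ec tape connected configured unusable' |
awk '$NF == "unusable" {
  print "cfgadm -c unconfigure " $1
  print "cfgadm -f -c configure " $1
}'
```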
IBM AIX Server
AIX 6.1 (TL9), AIX 7.1 (TL3)
Installing HBA device driver
Install the IBM (5729, 5735, 5273, 5758, 5759, 5773, 5774) HBA and restart the server.
1 Ensure that the HBA is recognized. At the shell prompt, type: lsdev -Cc adapter
There is a line in the output similar to the following: fcs0 Available 1D-08 FC Adapter
If the adapter is not recognized, check that the correct HBA fileset (driver) is installed:
6228: lslpp -L | grep devices.pci.df1000f7
6239: lslpp -L | grep devices.pci.df1080f9
5716: lslpp -L | grep devices.pci.df1000fa
5759: lslpp -L | grep devices.pci.df1000fd
5773: lslpp -L | grep devices.pciex.df1000fe
5774: lslpp -L | grep devices.pciex.df1000fe
There are lines in the output for lslpp similar to the following for a 6239 HBA:
devices.pci.df1080f9.diag 5.1.0.1 C F PCI-X FC Adapter Device
devices.pci.df1080f9.rte 5.1.0.1 C F PCI-X FC Adapter Device
2 For information about the HBA, such as the WWN, execute the following command: lscfg -vl fcs0
3 To see the version of microcode (firmware) being run by the HBA, use the following command: lsmcode -c -d <device>
Microcode and other updates can be found at ibm.com/support/fixcentral
4 After the HBA has successfully logged into the SAN fabric and the necessary zoning is configured, configure the HBA and devices within the fabric. At the prompt, type: cfgmgr -l <devicename> -v
Note Running cfgmgr without a -l argument may generate a "devices.fcp.changer" error. This is a result of the cfgmgr device scan receiving a response from the auto-changer device, for which AIX does not have a specific driver. Under these conditions the error message does not indicate a problem and is for information only.
5 Within the command, <devicename> is the name from the output of the lsdev command in step 1, such as fcs0.
6 To ensure all tape device files are available, at the prompt type: lsdev -HCc tape
7 By default, AIX creates tape devices with a fixed block length. To change the devices to have variable block lengths, at the prompt type: chdev -l <tapedevice> -a block_size=0
8 Configuration of the tape devices (where tape devices are rmt0, rmt1, and so on) is complete.
Note HPE LTO tape drives use the IBM host tape driver. When properly configured, a device listing will show the tape device as follows. For IBM native HBAs: Other FC SCSI Tape Drive
For non-IBM native HBAs: Other SCSI Tape Drive
9 To configure Fast IO Failure for Fibre Channel devices after link events in the SAN, change the fast fail parameter as in the example below: chdev -l fscsiN -a fc_err_recov=fast_fail
Within the command, the N in fscsiN is the same number from the name in the output of the lsdev command in step 1 (e.g., fcs0 would be fscsi0).
IBM AIX Server best practices
Persistent binding
To prevent device shifting after a host reboot, you can enable the dynamic tracking feature in earlier versions of AIX. AIX 7 and above should have this parameter enabled by default.
To enable dynamic tracking of FC devices, set this attribute to dyntrk=yes as shown in the example: chdev -l fscsiN -a dyntrk=yes
Within the command, the N in fscsiN is the same number from the name in the output of the lsdev command in step 1 (e.g., fcs0 would be fscsi0).
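The per-adapter chdev settings (fast_fail from step 9, dyntrk from this best practice) can be derived from the adapter list. A hedged sketch: the sample line is the guide's example adapter from step 1; on AIX, replace the printf with `lsdev -Cc adapter` and run the printed commands after reviewing them.

```shell
# Derive fscsiN instance names from lsdev adapter output and print the
# matching chdev commands. Sample line only; on AIX pipe lsdev -Cc adapter.
printf 'fcs0 Available 1D-08 FC Adapter\n' |
awk '/^fcs[0-9]/ {
  n = substr($1, 4)
  print "chdev -l fscsi" n " -a fc_err_recov=fast_fail"
  print "chdev -l fscsi" n " -a dyntrk=yes"
}'
```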
Note For an IBM Virtual IO Server (VIOS) running AIX logical partitions (LPARs), when using N-Port ID Virtualization (NPIV) with AIX LPARs it is strongly recommended to upgrade VIOS to version 2.2.3.4 or greater.
Virtual machine support
VM software is used for partitioning, consolidating, and managing computing resources, allowing multiple unmodified operating systems and their applications to run in VMs that share physical resources. Each VM represents a complete system with processors, memory, networking, storage, and BIOS. See Table 5 for tape and disk support for virtualization products.
Table 5. VM tape/VTL/NAS support
VM Product StoreEver Direct Attached SCSI
StoreEver Direct Attached SAS
StoreEver FC & FCoE SAN
StoreOnce VTL
StoreOnce iSCSI VTL
StoreOnce Catalyst over Ethernet (CoE)
StoreOnce Catalyst over Fibre Channel (CoFC)
StoreOnce NAS
Support Notes
Citrix XenServer Host No No No support statement for tape at this time
Citrix XenServer Guest VM
No Yes Yes No Yes For iSCSI tape devices the iSCSI Software Initiator must run in the VM operating system
D2D SAN shares must be accessed directly in the VM operating system not attached through the hypervisor
HPEVM Host Yes No Yes Yes Yes No Yes Tape drive/media changer must not be attached to a guest VM while being used by the host
HPEVM Guest VM Yes No Yes Yes Yes No Yes Tape drive/media changer must only be attached to a single guest VM at a time
Hyper-V Host Yes Yes Yes Yes Yes No Yes
Hyper-V Guest VM No No No Yes Yes No Yes For iSCSI tape/media changer devices, the iSCSI Software Initiator must run in the VM operating system
D2D NAS shares must be accessed directly in the VM operating system not attached through the hypervisor
VMware ESX Host Yes No No7 No7 No No No Direct attached SCSI tape drive is multi-target only (no multi-LUN) and must not be attached to a VM while being used by the host HPE does not test or validate direct attached SCSI tape on VMware ESXESXi hosts and does not provide support for tape drives and tape libraries in such configurations8
7 SAN tape devices (FC and iSCSI) are not supported directly by the VMware ESX host; vStorage API for Data Protection uses a backup server and VM software snapshots to allow FC and iSCSI backups.
8 For ESX 4.1 Server tape support, see the ESX 4.1 Fibre Channel SAN Configuration Guide. For ESX 5.0 Server tape support, see the ESXi 5.0 vSphere Storage Guide. For ESX 5.1 Server tape support, see the ESXi 5.1 vSphere Storage Guide. For ESX 5.5 Server tape support, see the ESXi 5.5 vSphere Storage Guide.
VMware Guest VM Yes No No Yes Yes No9 Yes Direct attached SCSI tape drive is multi-target only (no multi-LUN) and must only be attached to a single VM at a time
For iSCSI tape/media changer devices, the iSCSI Software Initiator must run in the VM operating system
D2D NAS shares must be accessed directly in the VM operating system not attached through the hypervisor
HPE does not test or validate direct attached SCSI tape on VMware guest VMs and does not provide support for tape drives and tape libraries in such configurations
VMware vStorage API for Data Protection
Yes Yes Yes Yes Yes No Yes FC SANs and shared tape devices are limited to a physical backup server
Note Be sure to do the following
• Refer to your data protection and archiving software documentation for supported VM backup methods.
• Refer to the VM documentation for supported backup devices.
VMware Server
Note VMware does not support ESXi SAN attached tape devices. VMware vStorage APIs for Data Protection (VADP) with an off-host backup server can be used to manage SAN devices.
• VADP offloads backup responsibility from ESXi hosts to a dedicated backup server or servers, reducing the load on ESXi hosts. VADP provides full-image backup and restore capabilities for all VMs, and file-based backups for Microsoft Windows and Linux VMs.
• VMs can also be set up as backup servers with locally mapped NAS shares or iSCSI backup devices. Refer to your data protection and archiving software documentation for VM support.
• VMs can also be set up for LAN backup the same as a regular client. Refer to your data protection and archiving software documentation for details.
• For recommendations on VMware VM backup and recovery to HPE StoreOnce Backup, go to hpe.com/storage/BURACompatibility, scroll down to Data Agile BURA Solution White Papers, then click on the VMware hyperlink across from White Papers - Databases and Virtual Machines to view the associated white papers.
9 Yes when using HPE StoreOnce Recovery Manager Central (RMC) only
HPE Integrity Virtual Machines
Hewlett Packard Enterprise supports, certifies, and sells HPE Integrity Virtual Machines (HPEVM) virtualization software on HPE Integrity servers.
HPEVM is an application installed on an HPE-UX Server that allows multiple unmodified operating systems (HPE-UX, Windows, and Linux) and their applications to run in VMs that share physical resources.
The HPE Virtual Server Environment (VSE) for HPE Integrity provides an automated infrastructure that can adapt in seconds with mission-critical reliability. HPE VSE allows you to optimize server utilization in real time by creating virtual servers that can automatically grow and shrink based on business priorities and service levels.
Note The HPE Integrity VM host and VMs do support FC SAN connected tape, Virtual Library Systems (VLS) devices, and HPE StoreOnce backup systems.
• Off-host backups using HPE storage array hardware mirroring or snapshots can be used to shorten the backup windows and off-load resources required for backup.
• VMs can also be set up for LAN backup the same as a regular client or media host. Refer to your data protection and archiving software documentation for details.
Microsoft Hyper-V
Note Hewlett Packard Enterprise does not test or support Hyper-V VMs with SAN or direct attach tape drives. The Hyper-V host or a backup server can be used to manage such devices.
• The volume shadow copy service (VSS) Hyper-V writer can be used to quiesce Windows VMs and create a snapshot on the Hyper-V host volume. VMs that cannot be quiesced can be placed in the Saved state before snapshot creation. The snapshots are then used for image or file backup of the VMs. If a VM was placed in the Saved state, Hyper-V will return the VM to its original state. Review your data protection and archiving software documentation for details.
• VMs can also be set up for LAN backup the same as a regular client. Refer to your data protection and archiving software documentation for details.
• VMs can also be set up as backup servers with locally mapped NAS shares or iSCSI backup devices. Refer to your data protection and archiving software documentation for VM support.
© Copyright 2015 Hewlett Packard Enterprise Development LP. The information contained herein is subject to change without notice. The only warranties for Hewlett Packard Enterprise products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. Hewlett Packard Enterprise shall not be liable for technical or editorial errors or omissions contained herein.
Citrix is a registered trademark of Citrix Systems, Inc. and/or one or more of its subsidiaries, and may be registered in the United States Patent and Trademark Office and in other countries. Linux is the registered trademark of Linus Torvalds in the U.S. and other countries. McAfee is a trademark or registered trademark of McAfee Inc. in the United States and other countries. Microsoft, Windows, and Windows Server are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries. Oracle is a registered trademark of Oracle and/or its affiliates. Red Hat is a registered trademark of Red Hat, Inc. in the United States and other countries. SAP HANA is the trademark or registered trademark of SAP SE in Germany and in several other countries. VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions.
4AA5-7983ENW December 2015
Installing backup software and patches
After all components on the SAN are logged in and configured, the system is ready for the installation of any supported backup software. Refer to the installation guide for your particular software package, or contact the vendor for detailed installation procedures and requirements. After installing the backup software, check with the software vendor for the latest updates and patches. If any updates or patches exist for your backup software, install them now.
Learn more at hpe.com/storage/StoreEver
Technical white paper Page 26
2 For HPE-UX 1131 persistent DSFs run the following commands
ioscan -fnNkC tape
ioscan -fnNkC autoch
Note Some data protection and archiving software might not currently support HPE-UX 1131 persistent DSFs for tape Review your data protection and archiving software documentation for more information
3 To verify that the host detects iSCSI devices issue the ioscan command as follows for HPE-UX 1123
ioscan -fnC iscsi
Issue the ioscan command as follows for HPE-UX 1131
ioscan -fnNC iscsi
If the software is installed correctly the generated output will look similar to this Class I HW Path Driver SW State HW Type Description
=====================================================================
iscsi 0 2550 iscsi CLAIMED VIRTBUS iSCSI Virtual Node
4 If no device files have been installed enter the following command
insf -C tape -e
insf -C autoch -e
The command line tool for configuring the HPE-UX iSCSI Software Initiator is iscsiutil Detailed instructions for iscsiutil can be found in the iscsiutil man documents If using iscsiutil to configure the HPE-UX iSCSI Software Initiator add the path for iscsiutil and other iSCSI executables to the root path PATH=$PATHoptiscsibin
You should now be able to find the iSCSI initiator node for the HPE-UX host iscsiutil -l
To discover available iSCSI target devices on an HPE Storage System type the following command where xxxx is the IP address of the HPE Storage System iscsiutil -aI xxxx
Installing the HPE StoreEver Tape advanced path failover drivers for HPE-UX 1131 Advanced path failover for HPE-UX is implemented by updating HPE-UX drivers to support advanced path failover with the LTO-6 tape drives The drivers function as both failover and non-failover drivers
The updated drivers are
bull HPE-UX tape driver (estape)mdashused for data path failover
bull HPE-UX media changer driver (eschgr)mdashused for control path failover
bull HPE-UX SCSI stack driver (esctl)mdashused for data path and control path failover
To download and install the HPE StoreEver Tape advanced path failover drivers for HPE-UX 11.31:
1. Get the latest HPE-UX patches from h20566.www2.hp.com/portal/site/hpsc/patch/home.
Note: To access and download HPE-UX patches, you must have:
1. An HPE Passport account (a sign-in link is provided).
2. An active HPE support agreement linked to your HPE Support Center profile. The active Hewlett Packard Enterprise support agreement must:
– Cover the specific HPE Operating Systems linked to your HPE Support Center user profile
– Include software updates or previous-version support privileges
Links are provided to view your current patch access privileges or to contact Hewlett Packard Enterprise.
2. To locate the patches, search for estape, eschgr, and esctl, or the patch number, and then look at the Prepby field to see if there is a superseding patch.
3. To install the advanced path failover drivers, use the standard HPE-UX kernel patch installation process to install the following patches on HPE-UX Servers running HPE-UX 11.31:
– HPE-UX tape driver patch (estape): PHKL_43680 or superseding patch
– HPE-UX media changer driver patch (eschgr): PHKL_43681 or superseding patch
– HPE-UX SCSI stack (mass storage stack) driver patch (esctl): PHKL_43819 or superseding patch
4. The server will automatically reboot as part of the installation process.
You can use ioscan to view the tape and library (media changer) devices connected to the HPE-UX Server. The device special file (DSF) is listed as the last item in the description, as shown in bold type:
ioscan -knNfC tape
ioscan -knNfC autoch
Finding the lockdown path
The load-balance policy used to route data on multiple paths to a tape drive or library is called the "path-lockdown" policy. Use the scsimgr get_info command to see the current lockdown path for a library. For example:
scsimgr get_info -D /dev/rchgr/autoch35
STATUS INFORMATION FOR LUN /dev/rchgr/autoch38
…
LUN path used when policy is path_lockdown = 0/0/0/9/0/0/1.0x50014382c6e4f009.0x1000000000000
scsimgr get_attr -D /dev/rtape/tape28_BEST
SCSI ATTRIBUTES FOR LUN /dev/rtape/tape28_BEST
name = lpt_lockdown
current = 0/0/0/9/0/0/0.0x100000e00222a6c1.0x2000000000000
default =
saved =
For additional information, see the HPE-UX man pages: scsimgr(1M), ioscan(1M), mknod(2), mksf(1M), rmsf(1M).
Troubleshooting advanced path failover for HPE-UX 11.31
Advanced path failover errors are logged in the /var/adm/syslog/syslog.log file as part of the default SCSI I/O tracing function of HPE-UX. You can use standard file-viewing commands, including cat, vi, dmesg, and tail -f, to view the syslog.log.
Enabling or disabling advanced path failover for HPE-UX 11.31
Advanced path failover is disabled by default. When advanced path failover is disabled, the driver operates as if the device is not capable of using the advanced path failover feature.
When advanced failover is enabled for the library or tape drive, the device resets itself and must be opened using the device special file before the driver will recognize it as an advanced path failover device and use the failover features of the driver. Opening the device is generally done by the host applications.
You can enable or disable advanced path failover using the library web-based interface. For instructions, refer to the following sections of the HPE StoreEver Tape Libraries Failover User Guide:
• Enabling control path failover under Configuring failover for the HPE StoreEver ESL G3 Tape Libraries
• Enabling data path failover under Configuring failover for HPE StoreEver MSL6480 Tape Libraries
When advanced path failover is disabled, the passive control paths to the library will go into an error state (NO_HW) in the ioscan(1M) command output. These stale entries do not affect the function of the library. To clear these errors so the device can be accessed using its DSF:
1. On the HPE-UX host, run rmsf -H on the lunpath hardware paths that are in the NO_HW state. For example: rmsf -H 0/4/0/0/0/1.0x50014380023560d4.0x1000000000000
2. Run ioscan -kfNH <HBA path>. For example: ioscan -kfNH 0/4/0/0/0
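Several passive paths may be stale at once, so the two steps above can be scripted. The sketch below is our illustration, not from the HPE guide: it parses ioscan-style lunpath output and prints the rmsf command for each NO_HW path. The embedded ioscan listing is a fabricated sample for demonstration only.

```shell
#!/bin/sh
# Illustrative sketch: the ioscan output below is a fabricated sample.
# On a live HPE-UX 11.31 host, capture real output with: ioscan -kfnN
sample='lunpath 12 0/4/0/0/0/1.0x50014380023560d4.0x1000000000000 eslpt NO_HW LUN_PATH LUN path for tape4
lunpath 13 0/4/0/0/0/1.0x50014380023560d4.0x0 eslpt CLAIMED LUN_PATH LUN path for tape4'

# Field 3 is the lunpath hardware path; field 5 is the S/W state.
stale_paths=$(printf '%s\n' "$sample" | awk '$1 == "lunpath" && $5 == "NO_HW" { print $3 }')

for p in $stale_paths; do
  # On a live host this would be: rmsf -H "$p"
  echo "would run: rmsf -H $p"
done
```

After clearing the stale paths, re-scan the HBA as in step 2 above.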
Hewlett Packard Enterprise recommends enabling or disabling advanced path failover only when the library is not opened by any applications. If advanced path failover is disabled while an application is accessing the library, all of the library's lunpaths will go offline and I/O requests to the library will fail.
For more detailed information related to installing the HPE StoreEver Tape advanced path failover drivers for HPE-UX 11.31 Servers, refer to the HPE StoreEver Tape Libraries Failover User Guide, specifically the section titled Installing and using HPE-UX advanced path failover drivers.
HPE-UX Server best practices
HPE-UX 11.31 can experience poor I/O performance on VxFS file systems due to memory blocking during high system memory usage. The HPE-UX 11.31 kernel subsystems and file I/O data cache can consume up to 90 percent of system memory during normal operation. When a heavy file I/O application such as data protection and archiving software starts, memory usage can reach close to 100 percent. Under such conditions, if VxFS attempts to allocate additional memory for inode caching, this can result in memory blocking and subsequent poor file I/O performance. In extreme conditions, this scenario can cause data protection and archiving software to time out during file system reads, which could result in backup job failures.
Poor I/O performance resolution
To avoid backup job failures due to memory blocking, modify the kernel tunable parameter vx_ninode. The vx_ninode parameter determines the number of inodes in the inode table to help VxFS in caching. By default, the size of the inode cache is decided (auto-tuned) at boot time by VxFS depending on the amount of physical memory in the machine. When modifying the value of vx_ninode, HPE recommends the following:
Table 4 Tuning vx_ninode
PHYSICAL MEMORY OR KERNEL AVAILABLE MEMORY VXFS INODE CACHE (NUMBER OF INODES)
1 GB 16384
2 GB 32768
3 GB 65536
> 3 GB 131072
To determine the current value of vx_ninode, run the following at the shell prompt: /usr/sbin/kctune vx_ninode
To set vx_ninode to 32768, run the following command at the shell prompt: /usr/sbin/kctune vx_ninode=32768
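The Table 4 mapping can be expressed as a small helper that picks the recommended vx_ninode value from physical memory. This is our sketch (the function name is ours, not HPE's); the kctune invocation in the comment is the document's own command.

```shell
#!/bin/sh
# Sketch of the Table 4 recommendation; recommended_vx_ninode is our own name.
# Input: physical memory in MB. Output: suggested vx_ninode value.
recommended_vx_ninode() {
  mem_mb=$1
  if   [ "$mem_mb" -le 1024 ]; then echo 16384    # up to 1 GB
  elif [ "$mem_mb" -le 2048 ]; then echo 32768    # up to 2 GB
  elif [ "$mem_mb" -le 3072 ]; then echo 65536    # up to 3 GB
  else                              echo 131072   # more than 3 GB
  fi
}

# On a live HPE-UX host the value would then be applied with, for example:
#   /usr/sbin/kctune vx_ninode=$(recommended_vx_ninode 2048)
recommended_vx_ninode 2048   # prints 32768
```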
Note: The kernel tunable parameters filecache_min and filecache_max control the amount of physical memory that can be used for caching file data during system I/O operations. By default, these parameters are automatically determined by the system to better balance memory usage among file system I/O intensive processes and other types of processes. The values of these parameters can be lowered to allow a larger percentage of memory to be used for purposes other than file system I/O caching. Determining whether or not to modify these parameters depends on the nature of the applications running on the system.
HPE-UX 11.23: Disabling rewind-on-close devices with st_san_safe
Turning on the HPE-UX 11.23 kernel tunable parameter st_san_safe disables tape DSFs that are rewind-on-close. This prevents utilities like mt from rewinding a tape that is in use by another utility.
Some applications or utilities require rewind-on-close DSFs (for example, the frecover utility that comes with HPE-UX). In this case, disabling rewind-on-close devices renders the utility unusable. Most data protection and archiving software, such as HPE Data Protector, can be configured to use SCSI reserve/release, which protects them from rogue rewinds by other utilities. The requirements of your data protection and archiving environment should be considered when determining whether or not to enable st_san_safe.
To determine if rewind-on-close devices are currently disabled, enter: /usr/sbin/kctune st_san_safe
If the value of st_san_safe is 1, rewind-on-close devices are disabled. If the value is 0, rewind-on-close devices are enabled. To disable rewind-on-close devices, enter: /usr/sbin/kctune st_san_safe=1
Oracle Solaris Server
Solaris 10 Update 11 (SPARC), Solaris 10 Update 11 (x86/x64), Solaris 11.2 (SPARC), Solaris 11.2 (x64)
How to enable the iSCSI Software Initiator
1. For Solaris 10 Update 11 (SPARC) and 11.2 (SPARC), enable the iSCSI services using the command:
svcadm enable network/iscsi/initiator
2. Verify the iSCSI services are running:
svcs -a | grep 'iscsi/initiator'
online 10:10:28 svc:/network/iscsi/initiator:default
3. For earlier versions of Solaris, enable the iSCSI services using the command:
svcadm -v enable iscsi_initiator
svc:/network/iscsi_initiator:default enabled
4. Verify the iSCSI services are running:
svcs -a | grep iscsi_initiator
online 10:10:28 svc:/network/iscsi_initiator:default
The command-line tool for configuring the Solaris iSCSI Software Initiator is iscsiadm. Detailed instructions for iscsiadm can be found in the iscsiadm man pages. If using iscsiadm to configure the Solaris iSCSI Software Initiator, run the following command to find the iSCSI initiator node for the Solaris host: iscsiadm list initiator-node
To discover available iSCSI target devices on an HPE Storage System, type the following commands, where xxxx is the IP address of the HPE Storage System:
iscsiadm add discovery-address xxxx:3260
iscsiadm list discovery-address
Discovery Address: xxxx:3260
iscsiadm modify discovery -t enable
iscsiadm list discovery
List the configured iSCSI target devices using the following command: iscsiadm list target
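A scripted check that the expected targets were discovered can parse the output of iscsiadm list target, which lists each target on a line beginning with "Target:". The sketch below is our illustration; the sample output and the IQN in it are fabricated for demonstration.

```shell
#!/bin/sh
# Fabricated sample of `iscsiadm list target` output; on a live Solaris host,
# capture real output with: iscsiadm list target
sample='Target: iqn.1986-03.com.example:storage.backup1
        Alias: -
        TPGT: 1'

# Each discovered target appears on a line beginning with "Target:".
targets=$(printf '%s\n' "$sample" | awk '$1 == "Target:" { print $2 }')
echo "$targets"
```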
Oracle Solaris Server best practices
Troubleshooting with the cfgadm utility
• Getting the status of FC devices using cfgadm:
cfgadm -al
Example output for the above command:
This output shows a media changer at LUN 0 for the 100000e0022229fa9 WWN, and tape and disk devices at LUN 0 for other WWNs. The devices are connected, have been configured, and are ready for use.
The cfgadm -al -o show_FCP_dev command can be used to show the devices for all LUNs of each Ap_Id.
• Fixing a device with an "unusable" condition
If the condition field of a device in the cfgadm output is "unusable", the device is in a state such that the server cannot use it. This may have been caused by a hardware issue. In this case, do the following to resolve the issue:
– Resolve the hardware issue so the device is available to the server.
– After the hardware issue has been resolved, use the cfgadm utility to verify device status and to mend the status if necessary.
• Use cfgadm to get device status: cfgadm -al
– For a device that is "unusable", use cfgadm to unconfigure the device and then re-configure it. For example (this is an example only; your device WWN will be different):
cfgadm -c unconfigure c4::100000e0022286ec
cfgadm -f -c configure c4::100000e0022286ec
– Use cfgadm again to verify that the condition of the device is no longer "unusable": cfgadm -al
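The unconfigure/configure cycle can be applied to every "unusable" attachment point at once. This sketch is our illustration; the cfgadm listing embedded below is a fabricated sample with the usual Ap_Id, Type, Receptacle, Occupant, and Condition columns.

```shell
#!/bin/sh
# Fabricated sample of `cfgadm -al` output; on a live Solaris host,
# capture real output with: cfgadm -al
sample='c4::100000e0022286ec tape connected configured unusable
c5::100000e0022229fa9 med-changer connected configured ok'

# Field 5 is the condition; collect Ap_Ids that are "unusable".
bad=$(printf '%s\n' "$sample" | awk '$5 == "unusable" { print $1 }')

for ap in $bad; do
  # On a live host: cfgadm -c unconfigure "$ap" && cfgadm -f -c configure "$ap"
  echo "would run: cfgadm -c unconfigure $ap; cfgadm -f -c configure $ap"
done
```

Run cfgadm -al again afterward to confirm that no attachment point remains "unusable".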
IBM AIX Server
AIX 6.1 (TL9), AIX 7.1 (TL3)
Installing the HBA device driver
Install the IBM (5729/5735/5273/5758/5759/5773/5774) HBA and restart the server.
1. Ensure that the HBA is recognized. At the shell prompt, type: lsdev -Cc adapter
There is a line in the output similar to the following: fcs0 Available 1D-08 FC Adapter
If the adapter is not recognized, check that the correct HBA fileset (driver) is installed:
6228: lslpp -L | grep devices.pci.df1000f7
6239: lslpp -L | grep devices.pci.df1080f9
5716: lslpp -L | grep devices.pci.df1000fa
5759: lslpp -L | grep devices.pci.df1000fd
5773: lslpp -L | grep devices.pciex.df1000fe
5774: lslpp -L | grep devices.pciex.df1000fe
There are lines in the output for lslpp similar to the following for a 6239 HBA:
devices.pci.df1080f9.diag 5.1.0.1 C F PCI-X FC Adapter Device
devices.pci.df1080f9.rte 5.1.0.1 C F PCI-X FC Adapter Device
2. For information about the HBA, such as the WWN, execute the following command: lscfg -vl fcs0
The output will look similar to the following:
3. To see the version of microcode (firmware) being run by the HBA, use the following command: lsmcode -c -d <device>
Microcode and other updates can be found at ibm.com/support/fixcentral.
4. After the HBA has successfully logged into the SAN fabric and the necessary zoning is configured, configure the HBA and devices within the fabric. At the prompt, type: cfgmgr -l <devicename> -v
Note: Running cfgmgr without a -l argument may generate a "devices.fcp.changer" error. This is a result of the cfgmgr device scan receiving a response from the auto-changer device, for which AIX does not have a specific driver. Under these conditions, the error message does not indicate a problem and is for information only.
5. Within the command, <devicename> is the name from the output of the lsdev command in step 1, such as fcs0.
6. To ensure all tape device files are available, at the prompt type: lsdev -HCc tape
7. By default, AIX creates tape devices with a fixed block length. To change the devices to have variable block lengths, at the prompt type: chdev -l <tapedevice> -a block_size=0
8. Configuration of the tape devices (where tape devices are rmt0, rmt1, and so on) is complete.
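Step 7 can be applied across all detected tape devices with a short loop. This is our sketch, not an HPE procedure; the lsdev listing embedded below is a fabricated sample for demonstration.

```shell
#!/bin/sh
# Fabricated sample of `lsdev -Cc tape` output; on a live AIX host,
# capture real output with: lsdev -Cc tape
sample='rmt0 Available 1D-08-02 Other FC SCSI Tape Drive
rmt1 Available 1D-08-03 Other FC SCSI Tape Drive'

# Field 1 is the device name; field 2 is its state.
devs=$(printf '%s\n' "$sample" | awk '$2 == "Available" { print $1 }')

for dev in $devs; do
  # On a live host: chdev -l "$dev" -a block_size=0
  echo "would run: chdev -l $dev -a block_size=0"
done
```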
Note: HPE LTO tape drives use the IBM host tape driver. When properly configured, a device listing will show the tape device as follows. For IBM native HBAs: Other FC SCSI Tape Drive
For non-IBM native HBAs: Other SCSI Tape Drive
9. To configure Fast I/O Failure for Fibre Channel devices after link events in the SAN, change the fast fail parameter as in the example below: chdev -l fscsiN -a fc_err_recov=fast_fail
Within the command, the N in fscsiN is the same number as in the name from the output of the lsdev command in step 1 (e.g., fcs0 would be fscsi0).
IBM AIX Server best practices
Persistent binding
To prevent device shifting after a host reboot, you can enable the dynamic tracking feature in earlier versions of AIX. AIX 7 and above should have this parameter enabled by default.
To enable dynamic tracking of FC devices, set this attribute to dyntrk=yes, as shown in the example: chdev -l fscsiN -a dyntrk=yes
Within the command, the N in fscsiN is the same number as in the name from the output of the lsdev command in step 1 (e.g., fcs0 would be fscsi0).
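Whether dynamic tracking and fast fail are active can be confirmed by parsing lsattr output. Our sketch below uses a fabricated lsattr -El fscsi0 sample; on a real host the attribute names dyntrk and fc_err_recov appear in the first column.

```shell
#!/bin/sh
# Fabricated sample of `lsattr -El fscsi0` output; on a live AIX host,
# capture real output with: lsattr -El fscsi0
sample='dyntrk yes Dynamic Tracking of FC Devices True
fc_err_recov fast_fail FC Fabric Event Error RECOVERY Policy True'

# Field 1 is the attribute name; field 2 is its current value.
dyntrk=$(printf '%s\n' "$sample" | awk '$1 == "dyntrk" { print $2 }')
recov=$(printf '%s\n' "$sample" | awk '$1 == "fc_err_recov" { print $2 }')
echo "dyntrk=$dyntrk fc_err_recov=$recov"
```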
Note: For an IBM Virtual I/O Server (VIOS) running AIX logical partitions (LPARs), when using N-Port ID Virtualization (NPIV) with AIX LPARs, it is strongly recommended to upgrade VIOS to version 2.2.3.4 or greater.
Virtual machine support
VM software is used for partitioning, consolidating, and managing computing resources, allowing multiple unmodified operating systems and their applications to run in VMs that share physical resources. Each VM represents a complete system with processors, memory, networking, storage, and BIOS. See table 5 for tape and disk support for virtualization products.
Table 5. VM tape/VTL/NAS support
VM Product StoreEver Direct Attached SCSI
StoreEver Direct Attached SAS
StoreEver FC & FCoE SAN StoreOnce VTL
StoreOnce iSCSI VTL
StoreOnce Catalyst over Ethernet (CoE)
StoreOnce Catalyst over Fibre Channel (CoFC)
StoreOnce NAS
Support Notes
Citrix XenServer Host No No No support statement for tape at this time
Citrix XenServer Guest VM
No Yes Yes No Yes For iSCSI tape devices the iSCSI Software Initiator must run in the VM operating system
D2D SAN shares must be accessed directly in the VM operating system not attached through the hypervisor
HPEVM Host Yes No Yes Yes Yes No Yes Tape drive/media changer must not be attached to a guest VM while being used by the host
HPEVM Guest VM Yes No Yes Yes Yes No Yes Tape drive/media changer must only be attached to a single guest VM at a time
Hyper-V Host Yes Yes Yes Yes Yes No Yes
Hyper-V Guest VM No No No Yes Yes No Yes For iSCSI tape/media changer devices, the iSCSI Software Initiator must run in the VM operating system
D2D NAS shares must be accessed directly in the VM operating system not attached through the hypervisor
VMware ESX Host Yes No No7 No7 No No No Direct attached SCSI tape drive is multi-target only (no multi-LUN) and must not be attached to a VM while being used by the host. HPE does not test or validate direct attached SCSI tape on VMware ESX/ESXi hosts and does not provide support for tape drives and tape libraries in such configurations.8
7 SAN tape devices (FC and iSCSI) are not supported directly by the VMware ESX host; vStorage API for Data Protection uses a backup server and VM software snapshots to allow FC and iSCSI backups.
8 For ESX 4.1 Server tape support, see the ESX 4.1 Fibre Channel SAN Configuration Guide. For ESX 5.0 Server tape support, see the ESXi 5.0 vSphere Storage Guide. For ESX 5.1 Server tape support, see the ESXi 5.1 vSphere Storage Guide. For ESX 5.5 Server tape support, see the ESXi 5.5 vSphere Storage Guide.
Table 5. VM tape/VTL/NAS support (continued)
VMware Guest VM Yes No No Yes Yes No9 Yes Direct attached SCSI tape drive is multi-target only (no multi-LUN) and must only be attached to a single VM at a time
For iSCSI tape/media changer devices, the iSCSI Software Initiator must run in the VM operating system
D2D NAS shares must be accessed directly in the VM operating system not attached through the hypervisor
HPE does not test or validate direct attached SCSI tape on VMware guest VMs and does not provide support for tape drives and tape libraries in such configurations
VMware vStorage API for Data Protection
Yes Yes Yes Yes Yes No Yes FC SANs and shared tape devices are limited to a physical backup server
Note: Be sure to do the following:
• Refer to your data protection and archiving software documentation for supported VM backup methods.
• Refer to the VM documentation for supported backup devices.
VMware Server
Note: VMware does not support ESXi SAN-attached tape devices. VMware vStorage APIs for Data Protection (VADP) with an off-host backup server can be used to manage SAN devices.
• VADP offloads backup responsibility from ESXi hosts to a dedicated backup server or servers, reducing the load on ESXi hosts. VADP provides full-image backup and restore capabilities for all VMs, and file-based backups for Microsoft Windows and Linux VMs.
• VMs can also be set up as backup servers with locally mapped NAS shares or iSCSI backup devices. Refer to your data protection and archiving software documentation for VM support.
• VMs can also be set up for LAN backup the same as a regular client. Refer to your data protection and archiving software documentation for details.
• For recommendations on VMware VM backup and recovery to HPE StoreOnce Backup, go to hpe.com/storage/BURACompatibility, scroll down to Data Agile BURA Solution White Papers, then click on the VMware hyperlink across from White Papers - Databases and Virtual Machines to view the associated white papers.
9 Yes when using HPE StoreOnce Recovery Manager Central (RMC) only.
HPE Integrity Virtual Machines
Hewlett Packard Enterprise supports, certifies, and sells HPE Integrity Virtual Machines (HPEVM) virtualization software on HPE Integrity servers.
HPEVM is an application installed on an HPE-UX Server that allows multiple unmodified operating systems (HPE-UX, Windows, and Linux) and their applications to run in VMs that share physical resources.
The HPE Virtual Server Environment (VSE) for HPE Integrity provides an automated infrastructure that can adapt in seconds with mission-critical reliability. HPE VSE allows you to optimize server utilization in real time by creating virtual servers that can automatically grow and shrink based on business priorities and service levels.
Note: The HPE Integrity VM host and VMs do support FC SAN connected tape, Virtual Library Systems (VLS) devices, and HPE StoreOnce backup systems.
• Off-host backups using HPE storage array hardware mirroring or snapshots can be used to shorten backup windows and off-load resources required for backup.
• VMs can also be set up for LAN backup the same as a regular client or media host. Refer to your data protection and archiving software documentation for details.
Microsoft Hyper-V
Note: Hewlett Packard Enterprise does not test or support Hyper-V VMs with SAN or direct attach tape drives. The Hyper-V host or a backup server can be used to manage such devices.
• The Volume Shadow Copy Service (VSS) Hyper-V writer can be used to quiesce Windows VMs and create a snapshot on the Hyper-V host volume. VMs that cannot be quiesced can be placed in the Saved state before snapshot creation. The snapshots are then used for image or file backup of the VMs. If a VM was placed in the Saved state, Hyper-V will return the VM to its original state. Review your data protection and archiving software documentation for details.
• VMs can also be set up for LAN backup the same as a regular client. Refer to your data protection and archiving software documentation for details.
• VMs can also be set up as backup servers with locally mapped NAS shares or iSCSI backup devices. Refer to your data protection and archiving software documentation for VM support.
© Copyright 2015 Hewlett Packard Enterprise Development LP. The information contained herein is subject to change without notice. The only warranties for Hewlett Packard Enterprise products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. Hewlett Packard Enterprise shall not be liable for technical or editorial errors or omissions contained herein.
Citrix is a registered trademark of Citrix Systems, Inc. and/or one or more of its subsidiaries, and may be registered in the United States Patent and Trademark Office and in other countries. Linux is the registered trademark of Linus Torvalds in the U.S. and other countries. McAfee is a trademark or registered trademark of McAfee, Inc. in the United States and other countries. Microsoft, Windows, and Windows Server are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries. Oracle is a registered trademark of Oracle and/or its affiliates. Red Hat is a registered trademark of Red Hat, Inc. in the United States and other countries. SAP HANA is the trademark or registered trademark of SAP SE in Germany and in several other countries. VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions.
4AA5-7983ENW December 2015
Installing backup software and patches
After all components on the SAN are logged in and configured, the system is ready for the installation of any supported backup software. Refer to the installation guide for your particular software package, or contact the vendor for detailed installation procedures and requirements. After installing the backup software, check with the software vendor for the latest updates and patches. If any updates or patches exist for your backup software, install them now.
Learn more at hpe.com/storage/StoreEver
There are lines in the output for lslpp similar to the following for a 6239 HBA devicespcidf1080f9diag 5101 C F PCI-X FC Adapter Device devicespcidf1080f9rte 5101 C F PCI-X FC Adapter Device
2 For information about the HBA such as the WWN execute the following command lscfg -vl fcs0
The output will look similar to the following
Technical white paper Page 32
3 To see the version of microcode (firmware) being run by the HBA use the following command lsmcode -c -d ltdevicegt Microcode and other updates can be found at ibmcomsupportfixcentral
4 After the HBA has successfully logged into the SAN fabric and the necessary zoning is configured configure the HBA and devices within the fabric At the prompt type cfgmgr -l ltdevicenamegt -v
Note Running the cfgmgr without a -I argument may generate a ldquodevicesfcpchangerrdquo error This is a result of the cfgmgr device scan receiving a response from the auto-changer device for which AIX does not have a specific driver Under these conditions the error message does not indicate a problem and is for information only
5 Within the command ltdevicenamegt is the name from the output of the lsdev command in step 1 such as fcs0
6 To ensure all tape device files are available at the prompt type lsdev -HCc tape
7 By default AIX creates tape devices with a fixed block length To change the devices to have variable block lengths at the prompt type chdev -l lttapedevicegt -a block_size=0
8 Configuration of the tape devices (where tape devices are rmt0 rmt1 and so on) are complete
Note HPE LTO tape drives use the IBM host tape driver When properly configured a device listing will show the tape device as follows For IBM native HBAs Other FC SCSI Tape Drive
For non-IBM native HBAs Other SCSI Tape Drive
9 To configure Fast IO Failure for Fibre Channel devices after link events in the SAN change the fast fail parameter as in the example below chdev -l fscsi -a fc_err_recov=fast_fail Within the command the in fscsi is the same number from the name from the output of the lsdev command in step 1 (eg fcs0 would be fscsi0)
IBM AIX Server best practices Persistent binding To prevent device shifting after a host reboot you can enable the dynamic tracking feature in earlier versions of AIX AIX 7 and above should have this parameter enabled by default
To enable dynamic tracking of FC devices set this attribute to dyntrk=yes as shown in the example chdev -l fscsi -a dyntrk=yes Within the command the in fscsi is the same number from the name from the output of the lsdev command in step 1 (eg fcs0 would be fscsi0)
Note For an IBM Virtual IO Server (VIOS) running AIX logical partitions (LPARs) when using N-Port ID Virtualization (NPIV) with AIX LPARs it is strongly recommended to upgrade VIOS to version 2234 or greater
Technical white paper Page 33
Virtual machine support VM software is used for portioning consolidating and managing computing resources allowing multiple unmodified operating systems and their applications to run in VMs that share physical resources Each VM represents a complete system with processors memory networking storage and BIOS See table 5 for tape and disk support for virtualization products
Table 5 VM tapeVTLNAS support
VM Product StoreEver Direct Attached SCSI
StoreEver Direct Attached SAS
StoreEver FC amp FCoE SAN StoreOnce VTL
StoreOnce iSCSI VTL
StoreOnce Catalyst over Ethernet (CoE)
StoreOnce Catalyst over Fibre Channel (CoFC)
StoreOnce NAS
Support Notes
Citrix XenServer Host No No No support statement for tape at this time
Citrix XenServer Guest VM
No Yes Yes No Yes For iSCSI tape devices the iSCSI Software Initiator must run in the VM operating system
D2D SAN shares must be accessed directly in the VM operating system not attached through the hypervisor
HPEVM Host Yes No Yes Yes Yes No Yes Tape drivemedia changer must not be attached to a guest VM while being used by the host
HPEVM Guest VM Yes No Yes Yes Yes No Yes Tape drivemedia changer must only be attached to a single guest VM at a time
Hyper-V Host Yes Yes Yes Yes Yes No Yes
Hyper-V Guest VM No No No Yes Yes No Yes For iSCSI tapemedia changer devices the iSCSI Software Initiator must run in the VM operating system
D2D NAS shares must be accessed directly in the VM operating system not attached through the hypervisor
VMware ESX Host Yes No No7 No7 No No No Direct attached SCSI tape drive is multi-target only (no multi-LUN) and must not be attached to a VM while being used by the host HPE does not test or validate direct attached SCSI tape on VMware ESXESXi hosts and does not provide support for tape drives and tape libraries in such configurations8
7 SAN tape devices (FC and iSCSI) are not supported directly by VMware ESX host vStorage API for Data Protection use a backup server and VM software snapshots to allow FC
and iSCSI backups 8 For ESX 41 Server tape support see ESX 41 Fibre Channel SAN Configuration Guide For ESX 50 Server tape support see ESXi 50 vSphere Storage Guide For ESX 51 Server tape support see ESXi 51 vSphere Storage Guide For ESX 55 Server tape support see ESXi 55 vSphere Storage Guide
Technical white paper Page 34
Table 5 VM tapeVTLNAS support (continued)
VMware Guest VM Yes No No Yes Yes No9 Yes Direct attached SCSI tape drive is multi-target only (no multi-LUN) and must only be attached to a single VM at a time
For iSCSI tapemedia changer devices the iSCSI Software Initiator must run in the VM operating system
D2D NAS shares must be accessed directly in the VM operating system not attached through the hypervisor
HPE does not test or validate direct attached SCSI tape on VMware guest VMs and does not provide support for tape drives and tape libraries in such configurations
VMware vStorage API for Data Protection
Yes Yes Yes Yes Yes No Yes FC SANs and shared tape devices are limited to a physical backup server
Note Be sure to do the following
bull Refer to your data protection and archiving software documentation for supported VM backup methods
bull Refer to the VM documentation for supported backup devices
VMware Server
Note VMware does not support ESXi SAN attached tape devices VMware vStorage APIs for Data Protection (VADP) with an off-host backup server can be used to manage SAN devices
bull VADP offloads backup responsibility from ESXi hosts to a dedicated backup server or servers This reduces the load on ESXi hosts VADP provides full-image backup and restore capabilities for all VMs and file based backups for Microsoft Windows and Linux VMs
bull VMs can also be set up as backup servers with locally mapped NAS shares or iSCSI backup devices Refer to your data protection and archiving software documentation for VM support
bull VMs can also be set up for LAN backup the same as a regular client Refer to your data protection and archiving software documentation for details
bull For recommendations on VMware VM backup and recovery to HPE StoreOnce Backup go to hpecomstorageBURACompatibility scroll down to Data Agile BURA Solution White Papers then click on the VMware hyperlink across from White PapersmdashDatabases and Virtual Machines to view the associated white papers
9 Yes when using HPE StoreOnce Recovery Manager Central (RMC) only
Technical white paper Page 35
HPE Integrity Virtual Machines Hewlett Packard Enterprise supports certifies and sells HPE Integrity Virtual Machines (HPEVM) Virtualization software on HPE Integrity servers
HPEVM is an application installed on an HPE-UX Server and allows multiple unmodified operating systems (HPE-UX Windows and Linux) and their applications to run in VMs that share physical resources
The HPE Virtual Server Environment (VSE) for HPE Integrity provides an automated infrastructure that can adapt in seconds with mission-critical reliability HPE VSE allows you to optimize server utilization in real time by creating virtual servers that can automatically grow and shrink based on business priorities and service
Note The HPE Integrity VM host and VMs do support FC SAN connected tape Virtual Library Systems (VLS) devices and HPE StoreOnce backup systems
bull Off-host backups using HPE storage array hardware mirroring or snapshots can be used to shorten the backup windows and off-load resources required for backup
bull VMs can also be set up for LAN backup the same as a regular client or media host Refer to your data protection and archiving software documentation for details
Microsoft Hyper-V
Note Hewlett Packard Enterprise does not test or support Hyper-V VMs with SAN or direct attach tape drives The Hyper-V host or a backup server can be used to manage such devices
bull The volume shadow copy service (VSS) Hyper-V writer can be used to quiesce Windows VMs and create a snapshot on the Hyper-V host volume VMs that cannot be quiesced can be placed in the Saved state before snapshot creation The snapshots are then used for image or file backup of the VMs If a VM was placed in the Saved state Hyper-V will return the VM to its original state Review your data protection and archiving software documentation for details
bull VMs can also be set up for LAN backup the same as a regular client Refer to your backup protection and archiving software documentation for details
bull VMs can also be set up as backup servers with locally mapped NAS shares or iSCSI backup devices Refer to your data protection and archiving software documentation for VM support
Technical white paper Page 36
Sign up for updates
Rate this document
copy Copyright 2015 Hewlett Packard Enterprise Development LP The information contained herein is subject to change without notice The only warranties for Hewlett Packard Enterprise products and services are set forth in the express warranty statements accompanying such products and services Nothing herein should be construed as constituting an additional warranty Hewlett Packard Enterprise shall not be liable for technical or editorial errors or omissions contained herein
Citrix is a registered trademark of Citrix Systems Inc andor one more of its subsidiaries and may be registered in the United States Patent and Trademark Office and in other countries Linux is the registered trademark of Linus Torvalds in the US and other countries McAfee is a trademark or registered trademark of McAfee Inc in the United States and other countries Microsoft Windows and Windows Server are either registered trademarks or trademarks of Microsoft Corporation in the United States andor other countries Oracle is a registered trademark of Oracle andor its affiliates Red Hat is a registered trademark of Red Hat Inc in the United States and other countries SAP HANA is the trademark or registered trademark of SAP SE in Germany and in several other countries VMware is a registered trademark or trademark of VMware Inc in the United States andor other jurisdictions
4AA5-7983ENW December 2015
Installing backup software and patches After all components on the SAN are logged in and configured the system is ready for the installation of any supported backup software Refer to the installation guide for your particular software package or contact the vendor for detailed installation procedures and requirements After installing the backup software check with the software vendor for the latest updates and patches If any updates or patches exist for your backup software install them now
Learn more at hpecomstorageStoreEver
Enabling or disabling advanced path failover for HPE-UX 11.31
Advanced path failover is disabled by default. When advanced path failover is disabled, the driver operates as if the device is not capable of using the advanced path failover feature.
When advanced failover is enabled for the library or tape drive, the device resets itself and must be opened using the device special file before the driver will recognize it as an advanced path failover device and use the failover features of the driver. Opening the device is generally done by the host applications.
You can enable or disable advanced path failover using the library web-based interface. For instructions, refer to the following sections of the HPE StoreEver Tape Libraries Failover User Guide:
• Enabling control path failover, under Configuring failover for the HPE StoreEver ESL G3 Tape Libraries
• Enabling data path failover, under Configuring failover for HPE StoreEver MSL6480 Tape Libraries
When advanced path failover is disabled, the passive control paths to the library will go into an error state (NO_HW) in the ioscan(1M) command output. These stale entries do not affect the function of the library. To clear these errors so the device can be accessed using its DSF:
1. On the HPE-UX host, run rmsf -H on the lunpath hardware paths that are in the NO_HW state. For example: rmsf -H 0/4/0/0/0/1.0x50014380023560d4.0x1000000000000
2. Run ioscan -kfNH <HBA path>. For example: ioscan -kfNH 0/4/0/0/0
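When many lunpaths are stale, the rmsf commands in step 1 can be generated from the ioscan output instead of typed by hand. The helper below is an illustrative sketch, not an HPE tool; the column layout it assumes (NO_HW state on the line, hardware path in the third field) and the sample paths are assumptions to verify against your own ioscan -kfN output before running anything:

```shell
# Print an "rmsf -H <lunpath>" command for every line of ioscan-style
# output (read from stdin) that reports the NO_HW state.
# Assumes the lunpath hardware path is the third whitespace field.
rmsf_commands_for_no_hw() {
  awk '/NO_HW/ { print "rmsf -H " $3 }'
}

# Demonstration with canned output; the paths below are made up.
sample='lunpath 12 0/4/0/0/0/1.0x50014380023560d4.0x0 eslpt NO_HW
lunpath 13 0/4/0/0/0/1.0x50014380023560d4.0x1000000000000 eslpt NO_HW
lunpath 14 0/4/0/0/0/1.0x50014380023560d5.0x0 eslpt CLAIMED'
printf '%s\n' "$sample" | rmsf_commands_for_no_hw
```

Review the printed commands before executing them on the host.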
Hewlett Packard Enterprise recommends enabling or disabling advanced path failover only when the library is not opened by any applications. If advanced path failover is disabled while an application is accessing the library, all of the library's lunpaths will go offline and I/O requests to the library will fail.
For more detailed information on installing the HPE StoreEver Tape advanced path failover drivers for HPE-UX 11.31 servers, refer to the HPE StoreEver Tape Libraries Failover User Guide, specifically the section titled Installing and using HPE-UX advanced path failover drivers.
HPE-UX Server best practices
HPE-UX 11.31 can experience poor I/O performance on VxFS file systems due to memory blocking during high system memory usage. The HPE-UX 11.31 kernel subsystems and file I/O data cache can consume up to 90 percent of system memory during normal operation. When a heavy file I/O application, such as data protection and archiving software, starts, memory usage can approach 100 percent. Under such conditions, if VxFS attempts to allocate additional memory for inode caching, the result can be memory blocking and poor file I/O performance. In extreme cases, this scenario can cause data protection and archiving software to time out during file system reads, which can result in backup job failures.
Poor I/O performance resolution
To avoid backup job failures due to memory blocking, modify the kernel tunable parameter vx_ninode. The vx_ninode parameter determines the number of inodes in the inode table that VxFS uses for caching. By default, the size of the inode cache is decided (auto-tuned) at boot time by VxFS based on the amount of physical memory in the machine. When modifying the value of vx_ninode, HPE recommends the following:
Table 4. Tuning vx_ninode

Physical memory or kernel available memory   VxFS inode cache (number of inodes)
1 GB                                         16384
2 GB                                         32768
3 GB                                         65536
> 3 GB                                       131072
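Table 4 can be expressed as a small lookup. The sketch below is a hypothetical helper (not part of HPE-UX) that takes the machine's physical memory in megabytes and prints the inode cache size the table recommends, treating each table row as an upper bound:

```shell
# Return the table 4 vx_ninode recommendation for a given amount of
# physical memory (in MB); each table row is treated as an upper bound.
recommended_vx_ninode() {
  mem_mb=$1
  if   [ "$mem_mb" -le 1024 ]; then echo 16384
  elif [ "$mem_mb" -le 2048 ]; then echo 32768
  elif [ "$mem_mb" -le 3072 ]; then echo 65536
  else                              echo 131072
  fi
}

recommended_vx_ninode 2048   # prints 32768
```

The chosen value would then be applied with kctune, as described next.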
To determine the current value of vx_ninode, run the following at the shell prompt: /usr/sbin/kctune vx_ninode
To set vx_ninode to 32768, run the following at the shell prompt: /usr/sbin/kctune vx_ninode=32768
Note: The kernel tunable parameters filecache_min and filecache_max control the amount of physical memory that can be used for caching file data during system I/O operations. By default, these parameters are automatically determined by the system to balance memory usage between file system I/O intensive processes and other types of processes. The values of these parameters can be lowered to allow a larger percentage of memory to be used for purposes other than file system I/O caching. Whether to modify these parameters depends on the nature of the applications running on the system.
HPE-UX 11.23: Disabling rewind-on-close devices with st_san_safe
Turning on the HPE-UX 11.23 kernel tunable parameter st_san_safe disables tape DSFs that are rewind-on-close. This prevents utilities like mt from rewinding a tape that is in use by another utility.
Some applications or utilities require rewind-on-close DSFs (for example, the frecover utility that comes with HPE-UX); in this case, disabling rewind-on-close devices renders the utility unusable. Most data protection and archiving software, such as HPE Data Protector, can be configured to use SCSI reserve/release, which protects them from rogue rewinds by other utilities. Consider the requirements of your data protection and archiving environment when deciding whether to enable st_san_safe.
To determine whether rewind-on-close devices are currently disabled, enter: /usr/sbin/kctune st_san_safe
If the value of st_san_safe is 1, rewind-on-close devices are disabled; if the value is 0, they are enabled. To disable rewind-on-close devices, enter: /usr/sbin/kctune st_san_safe=1
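The check-then-set sequence can be scripted. The fragment below is a sketch only: the two-column "tunable value" layout is an assumption about kctune's output, so the helper inspects a captured line and echoes the command to run rather than executing it:

```shell
# Return success (0) if the given kctune output line shows the tunable
# set to 1, i.e., rewind-on-close devices are already disabled.
st_san_safe_disabled() {
  [ "$(printf '%s\n' "$1" | awk '{print $2}')" = "1" ]
}

line="st_san_safe 0"   # stand-in for output of: /usr/sbin/kctune st_san_safe
if st_san_safe_disabled "$line"; then
  echo "rewind-on-close devices already disabled"
else
  echo "run: /usr/sbin/kctune st_san_safe=1"
fi
```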
Oracle Solaris Server
Solaris 10 Update 11 (SPARC), Solaris 10 Update 11 (x86/x64), Solaris 11.2 (SPARC), Solaris 11.2 (x64)
How to enable the iSCSI Software Initiator
1. For Solaris 10 Update 11 (SPARC) and 11.2 (SPARC), enable the iSCSI services using the command:
svcadm enable network/iscsi/initiator
2. Verify the iSCSI services are running:
svcs -a | grep 'iscsi/initiator'
online 10:10:28 svc:/network/iscsi/initiator:default
3. For earlier versions of Solaris, enable the iSCSI services using the command:
svcadm -v enable iscsi_initiator
svc:/network/iscsi_initiator:default enabled
4. Verify the iSCSI services are running:
svcs -a | grep iscsi_initiator
online 10:10:28 svc:/network/iscsi_initiator:default
The command-line tool for configuring the Solaris iSCSI Software Initiator is iscsiadm. Detailed instructions for iscsiadm can be found in its man page. If using iscsiadm to configure the Solaris iSCSI Software Initiator, run the following command to find the iSCSI initiator node for the Solaris host: iscsiadm list initiator-node
To discover available iSCSI target devices on an HPE Storage System, type the following commands, where x.x.x.x is the IP address of the HPE Storage System:
iscsiadm add discovery-address x.x.x.x:3260
iscsiadm list discovery-address
Discovery Address: x.x.x.x:3260
iscsiadm modify discovery -t enable
iscsiadm list discovery
List the configured iSCSI target devices using the following command: iscsiadm list target
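The discovery sequence lends itself to a small wrapper. This sketch is hypothetical (iscsiadm itself is never invoked): it validates that an address was supplied, appends the default iSCSI port 3260 used in the commands above, and prints the command sequence for review:

```shell
# Print the iscsiadm discovery command sequence for a storage system IP,
# using the default iSCSI port 3260, without executing anything.
iscsi_discovery_plan() {
  if [ -z "$1" ]; then
    echo "usage: iscsi_discovery_plan <storage-system-ip>" >&2
    return 1
  fi
  addr="$1:3260"
  echo "iscsiadm add discovery-address $addr"
  echo "iscsiadm modify discovery -t enable"
  echo "iscsiadm list target"
}

iscsi_discovery_plan 192.0.2.10
```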
Oracle Solaris Server best practices
Troubleshooting with the cfgadm utility
• Getting the status of FC devices using cfgadm:
cfgadm -al
Example output for the above command:
This output shows a media changer at LUN 0 for the 100000e0022229fa9 WWN, and tape and disk devices at LUN 0 for other WWNs. The devices are connected, have been configured, and are ready for use.
The cfgadm -al -o show_FCP_dev command can be used to show the devices for all LUNs of each Ap_Id
• Fixing a device with an "unusable" condition
If the condition field of a device in the cfgadm output is "unusable", the device is in a state in which the server cannot use it. This may have been caused by a hardware issue. In this case, do the following to resolve the issue:
– Resolve the hardware issue so the device is available to the server.
– After the hardware issue has been resolved, use the cfgadm utility to verify device status and to mend the status if necessary.
• Use cfgadm to get device status: cfgadm -al
– For a device that is "unusable", use cfgadm to unconfigure the device and then reconfigure it. For example (this is an example only; your device WWN will be different):
cfgadm -c unconfigure c4::100000e0022286ec
cfgadm -f -c configure c4::100000e0022286ec
– Use cfgadm again to verify that the condition of the device is no longer "unusable": cfgadm -al
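Because the unconfigure/configure cycle is easy to mistype, it can help to generate both commands from the Ap_Id and review them before running. This is an illustrative dry-run helper, not an Oracle tool, and the WWN-style Ap_Id shown is only an example:

```shell
# Print (do not execute) the cfgadm cycle that clears an "unusable"
# device condition for the given attachment point Ap_Id.
cfgadm_recycle_plan() {
  ap_id=$1
  echo "cfgadm -c unconfigure $ap_id"
  echo "cfgadm -f -c configure $ap_id"
}

cfgadm_recycle_plan c4::100000e0022286ec
```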
IBM AIX Server
AIX 6.1 (TL9), AIX 7.1 (TL3)
Installing the HBA device driver
Install the IBM (5729/5735/5273/5758/5759/5773/5774) HBA and restart the server.
1. Ensure that the HBA is recognized. At the shell prompt, type: lsdev -Cc adapter
There is a line in the output similar to the following: fcs0 Available 1D-08 FC Adapter
If the adapter is not recognized, check that the correct HBA fileset (driver) is installed:
6228: lslpp -L | grep devices.pci.df1000f7
6239: lslpp -L | grep devices.pci.df1080f9
5716: lslpp -L | grep devices.pci.df1000fa
5759: lslpp -L | grep devices.pci.df1000fd
5773: lslpp -L | grep devices.pciex.df1000fe
5774: lslpp -L | grep devices.pciex.df1000fe
There are lines in the lslpp output similar to the following for a 6239 HBA:
devices.pci.df1080f9.diag 5.1.0.1 C F PCI-X FC Adapter Device
devices.pci.df1080f9.rte 5.1.0.1 C F PCI-X FC Adapter Device
2. For information about the HBA, such as the WWN, execute the following command: lscfg -vl fcs0
The output will look similar to the following
3. To see the version of microcode (firmware) being run by the HBA, use the following command: lsmcode -c -d <device>. Microcode and other updates can be found at ibm.com/support/fixcentral.
4. After the HBA has successfully logged into the SAN fabric and the necessary zoning is configured, configure the HBA and devices within the fabric. At the prompt, type: cfgmgr -l <devicename> -v
Note: Running cfgmgr without a -l argument may generate a "devices.fcp.changer" error. This is a result of the cfgmgr device scan receiving a response from the autochanger device, for which AIX does not have a specific driver. Under these conditions, the error message does not indicate a problem and is for information only.
5. Within the command, <devicename> is the name from the output of the lsdev command in step 1, such as fcs0.
6. To ensure all tape device files are available, at the prompt type: lsdev -HCc tape
7. By default, AIX creates tape devices with a fixed block length. To change the devices to variable block lengths, at the prompt type: chdev -l <tapedevice> -a block_size=0
8. Configuration of the tape devices (where tape devices are rmt0, rmt1, and so on) is complete.
Note: HPE LTO tape drives use the IBM host tape driver. When properly configured, a device listing will show the tape device as follows. For IBM native HBAs: Other FC SCSI Tape Drive
For non-IBM native HBAs: Other SCSI Tape Drive
9. To configure Fast I/O Failure for Fibre Channel devices after link events in the SAN, change the fast fail parameter as in the example below: chdev -l fscsiN -a fc_err_recov=fast_fail. Within the command, the N in fscsiN is the same number as in the adapter name from the output of the lsdev command in step 1 (for example, fcs0 would be fscsi0).
IBM AIX Server best practices
Persistent binding
To prevent device shifting after a host reboot, you can enable the dynamic tracking feature in earlier versions of AIX. AIX 7 and above should have this parameter enabled by default.
To enable dynamic tracking of FC devices, set this attribute to dyntrk=yes, as shown in the example: chdev -l fscsiN -a dyntrk=yes. Within the command, the N in fscsiN is the same number as in the adapter name from the output of the lsdev command in step 1 (for example, fcs0 would be fscsi0).
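The fcsN-to-fscsiN mapping used in step 9 and in the dyntrk example can be derived mechanically. The helper below is a sketch that prints the two chdev commands as a dry run; it assumes the simple numeric correspondence described above (fcs0 maps to fscsi0) holds on your system:

```shell
# Derive the protocol device name (fscsiN) from an FC adapter name
# (fcsN) and print the chdev commands for dynamic tracking and fast
# I/O failure without executing them.
fscsi_settings_plan() {
  fcs=$1
  fscsi="fscsi${fcs#fcs}"   # e.g., fcs0 -> fscsi0
  echo "chdev -l $fscsi -a dyntrk=yes"
  echo "chdev -l $fscsi -a fc_err_recov=fast_fail"
}

fscsi_settings_plan fcs0
```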
Note: For an IBM Virtual I/O Server (VIOS) running AIX logical partitions (LPARs), when using N-Port ID Virtualization (NPIV) with AIX LPARs, it is strongly recommended to upgrade VIOS to version 2.2.3.4 or greater.
Virtual machine support
VM software is used for partitioning, consolidating, and managing computing resources, allowing multiple unmodified operating systems and their applications to run in VMs that share physical resources. Each VM represents a complete system with processors, memory, networking, storage, and BIOS. See table 5 for tape and disk support for virtualization products.
Table 5. VM tape/VTL/NAS support
Columns: VM Product; StoreEver Direct Attached SCSI; StoreEver Direct Attached SAS; StoreEver FC & FCoE SAN; StoreOnce VTL; StoreOnce iSCSI VTL; StoreOnce Catalyst over Ethernet (CoE); StoreOnce Catalyst over Fibre Channel (CoFC); StoreOnce NAS; Support Notes
Citrix XenServer Host: No, No. No support statement for tape at this time.
Citrix XenServer Guest VM: No, Yes, Yes, No, Yes. For iSCSI tape devices, the iSCSI Software Initiator must run in the VM operating system. D2D NAS shares must be accessed directly in the VM operating system, not attached through the hypervisor.
HPEVM Host: Yes, No, Yes, Yes, Yes, No, Yes. Tape drive/media changer must not be attached to a guest VM while being used by the host.
HPEVM Guest VM: Yes, No, Yes, Yes, Yes, No, Yes. Tape drive/media changer must only be attached to a single guest VM at a time.
Hyper-V Host: Yes, Yes, Yes, Yes, Yes, No, Yes.
Hyper-V Guest VM: No, No, No, Yes, Yes, No, Yes. For iSCSI tape/media changer devices, the iSCSI Software Initiator must run in the VM operating system. D2D NAS shares must be accessed directly in the VM operating system, not attached through the hypervisor.
VMware ESX Host: Yes, No, No (7), No (7), No, No, No. Direct attached SCSI tape drive is multi-target only (no multi-LUN) and must not be attached to a VM while being used by the host. HPE does not test or validate direct attached SCSI tape on VMware ESX/ESXi hosts and does not provide support for tape drives and tape libraries in such configurations (8).
7. SAN tape devices (FC and iSCSI) are not supported directly by the VMware ESX host vStorage API for Data Protection; use a backup server and VM software snapshots to allow FC and iSCSI backups.
8. For ESX 4.1 Server tape support, see the ESX 4.1 Fibre Channel SAN Configuration Guide. For ESX 5.0 Server tape support, see the ESXi 5.0 vSphere Storage Guide. For ESX 5.1 Server tape support, see the ESXi 5.1 vSphere Storage Guide. For ESX 5.5 Server tape support, see the ESXi 5.5 vSphere Storage Guide.
VMware Guest VM: Yes, No, No, Yes, Yes, No (9), Yes. Direct attached SCSI tape drive is multi-target only (no multi-LUN) and must only be attached to a single VM at a time. For iSCSI tape/media changer devices, the iSCSI Software Initiator must run in the VM operating system. D2D NAS shares must be accessed directly in the VM operating system, not attached through the hypervisor. HPE does not test or validate direct attached SCSI tape on VMware guest VMs and does not provide support for tape drives and tape libraries in such configurations.
VMware vStorage API for Data Protection: Yes, Yes, Yes, Yes, Yes, No, Yes. FC SANs and shared tape devices are limited to a physical backup server.
Note: Be sure to do the following:
• Refer to your data protection and archiving software documentation for supported VM backup methods.
• Refer to the VM documentation for supported backup devices.
VMware Server
Note: VMware does not support ESXi SAN-attached tape devices. VMware vStorage APIs for Data Protection (VADP) with an off-host backup server can be used to manage SAN devices.
• VADP offloads backup responsibility from ESXi hosts to a dedicated backup server or servers, reducing the load on ESXi hosts. VADP provides full-image backup and restore capabilities for all VMs, and file-based backups for Microsoft Windows and Linux VMs.
• VMs can also be set up as backup servers with locally mapped NAS shares or iSCSI backup devices. Refer to your data protection and archiving software documentation for VM support.
• VMs can also be set up for LAN backup, the same as a regular client. Refer to your data protection and archiving software documentation for details.
• For recommendations on VMware VM backup and recovery to HPE StoreOnce Backup, go to hpe.com/storage/BURACompatibility, scroll down to Data Agile BURA Solution White Papers, then click the VMware hyperlink across from White Papers: Databases and Virtual Machines to view the associated white papers.
9. Yes only when using HPE StoreOnce Recovery Manager Central (RMC).
HPE Integrity Virtual Machines
Hewlett Packard Enterprise supports, certifies, and sells HPE Integrity Virtual Machines (HPEVM) virtualization software on HPE Integrity servers.
HPEVM is an application installed on an HPE-UX server that allows multiple unmodified operating systems (HPE-UX, Windows, and Linux) and their applications to run in VMs that share physical resources.
The HPE Virtual Server Environment (VSE) for HPE Integrity provides an automated infrastructure that can adapt in seconds with mission-critical reliability. HPE VSE allows you to optimize server utilization in real time by creating virtual servers that can automatically grow and shrink based on business priorities and service levels.
Note: The HPE Integrity VM host and VMs do support FC SAN-connected tape, Virtual Library Systems (VLS) devices, and HPE StoreOnce backup systems.
• Off-host backups using HPE storage array hardware mirroring or snapshots can be used to shorten backup windows and offload resources required for backup.
• VMs can also be set up for LAN backup, the same as a regular client or media host. Refer to your data protection and archiving software documentation for details.
Microsoft Hyper-V
Note: Hewlett Packard Enterprise does not test or support Hyper-V VMs with SAN or direct attach tape drives. The Hyper-V host or a backup server can be used to manage such devices.
• The Volume Shadow Copy Service (VSS) Hyper-V writer can be used to quiesce Windows VMs and create a snapshot on the Hyper-V host volume. VMs that cannot be quiesced can be placed in the Saved state before snapshot creation. The snapshots are then used for image or file backup of the VMs. If a VM was placed in the Saved state, Hyper-V will return the VM to its original state. Review your data protection and archiving software documentation for details.
• VMs can also be set up for LAN backup, the same as a regular client. Refer to your data protection and archiving software documentation for details.
• VMs can also be set up as backup servers with locally mapped NAS shares or iSCSI backup devices. Refer to your data protection and archiving software documentation for VM support.
Installing backup software and patches
After all components on the SAN are logged in and configured, the system is ready for installation of any supported backup software. Refer to the installation guide for your particular software package, or contact the vendor for detailed installation procedures and requirements. After installing the backup software, check with the software vendor for the latest updates and patches. If any updates or patches exist for your backup software, install them now.

Learn more at hpe.com/storage/StoreEver

© Copyright 2015 Hewlett Packard Enterprise Development LP. The information contained herein is subject to change without notice. The only warranties for Hewlett Packard Enterprise products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. Hewlett Packard Enterprise shall not be liable for technical or editorial errors or omissions contained herein.

Citrix is a registered trademark of Citrix Systems, Inc. and/or one or more of its subsidiaries, and may be registered in the United States Patent and Trademark Office and in other countries. Linux is the registered trademark of Linus Torvalds in the U.S. and other countries. McAfee is a trademark or registered trademark of McAfee, Inc. in the United States and other countries. Microsoft, Windows, and Windows Server are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries. Oracle is a registered trademark of Oracle and/or its affiliates. Red Hat is a registered trademark of Red Hat, Inc. in the United States and other countries. SAP HANA is the trademark or registered trademark of SAP SE in Germany and in several other countries. VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions.

4AA5-7983ENW, December 2015
To determine the current value of vx_ninode, run the following at the shell prompt: /usr/sbin/kctune vx_ninode
To set vx_ninode to 32768, run the following command at the shell prompt: /usr/sbin/kctune vx_ninode=32768
Note: The kernel tunable parameters filecache_min and filecache_max control the amount of physical memory that can be used for caching file data during system I/O operations. By default, these parameters are automatically determined by the system to better balance memory usage between file system I/O-intensive processes and other types of processes. The values of these parameters can be lowered to allow a larger percentage of memory to be used for purposes other than file system I/O caching. Whether to modify these parameters depends on the nature of the applications running on the system.
HPE-UX 11.23: Disabling rewind-on-close devices with st_san_safe
Turning on the HPE-UX 11.23 kernel tunable parameter st_san_safe disables tape DSFs that are rewind-on-close. This prevents utilities like mt from rewinding a tape that is in use by another utility.
Some applications or utilities require rewind-on-close DSFs (for example, the frecover utility that comes with HPE-UX); in this case, disabling rewind-on-close devices renders the utility unusable. Most data protection and archiving software, such as HPE Data Protector, can be configured to use SCSI reserve/release, which protects them from rogue rewinds by other utilities. Consider the requirements of your data protection and archiving environment when determining whether to enable st_san_safe.
To determine if rewind-on-close devices are currently disabled, enter: /usr/sbin/kctune st_san_safe
If the value of st_san_safe is 1, rewind-on-close devices are disabled; if the value is 0, they are enabled. To disable rewind-on-close devices, enter: /usr/sbin/kctune st_san_safe=1
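The check-then-set sequence above can be sketched as a small script. This is a sketch only: kctune exists only on HPE-UX, so a stub function stands in for /usr/sbin/kctune here so the logic can be tried on any machine; on a real host, drop the stub and call /usr/sbin/kctune directly.

```shell
#!/bin/sh
# Sketch: disable rewind-on-close devices only if st_san_safe is currently 0.
# "kctune" below is a stub standing in for /usr/sbin/kctune (HPE-UX only);
# it pretends the tunable is 0, i.e., rewind-on-close devices are enabled.
kctune() {
  case "$1" in
    st_san_safe) echo "st_san_safe  0" ;;
  esac
}

current=$(kctune st_san_safe | awk '{print $2}')
if [ "$current" = "0" ]; then
  # On a real system, run: /usr/sbin/kctune st_san_safe=1
  echo "rewind-on-close devices are enabled; would run: kctune st_san_safe=1"
else
  echo "rewind-on-close devices are already disabled"
fi
```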
Oracle Solaris Server
Solaris 10 Update 11 (SPARC), Solaris 10 Update 11 (x86/x64), Solaris 11.2 (SPARC), Solaris 11.2 (x64)
How to enable the iSCSI Software Initiator
1. For Solaris 10 Update 11 (SPARC) and 11.2 (SPARC), enable the iSCSI services using the command:
svcadm enable network/iscsi/initiator
2. Verify the iSCSI services are running:
svcs -a | grep iscsi/initiator
online 10:10:28 svc:/network/iscsi/initiator:default
3. For earlier versions of Solaris, enable the iSCSI services using the command:
svcadm -v enable iscsi_initiator
svc:/network/iscsi_initiator:default enabled
4. Verify the iSCSI services are running:
svcs -a | grep iscsi_initiator
online 10:10:28 svc:/network/iscsi_initiator:default
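A verification step like the one above can be scripted by parsing the service state from svcs output. This is a sketch under an assumption: svcs is Solaris-only, so the sample line below is hardcoded; on a real Solaris host replace it with the commented command.

```shell
#!/bin/sh
# Sketch: confirm the iSCSI initiator service is "online".
# Hardcoded sample of svcs output (assumption, for trying the parse off-box);
# on Solaris use:  svcs_out=$(svcs -a | grep iscsi/initiator)
svcs_out="online         10:10:28 svc:/network/iscsi/initiator:default"

state=$(echo "$svcs_out" | awk '{print $1}')
if [ "$state" = "online" ]; then
  echo "iSCSI initiator service is running"
else
  echo "iSCSI initiator service is not online (state: $state)"
fi
```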
The command line tool for configuring the Solaris iSCSI Software Initiator is iscsiadm. Detailed instructions for iscsiadm can be found in the iscsiadm man pages. If using iscsiadm to configure the Solaris iSCSI Software Initiator, run the following command to find the iSCSI initiator node for the Solaris host:
iscsiadm list initiator-node
To discover available iSCSI target devices on an HPE Storage System, type the following commands, where x.x.x.x is the IP address of the HPE Storage System:
iscsiadm add discovery-address x.x.x.x:3260
iscsiadm list discovery-address
Discovery Address: x.x.x.x:3260
iscsiadm modify discovery -t enable
iscsiadm list discovery
List the configured iSCSI target devices using the following command:
iscsiadm list target
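The discovery sequence above can be wrapped in one script with the storage system address as a variable. This is a sketch, not a supported tool: STORAGE_IP is a made-up example address standing in for your HPE Storage System, and RUN=echo turns the script into a dry run that only prints each iscsiadm command; set RUN to empty on a real Solaris host to execute them.

```shell
#!/bin/sh
# Sketch: the iscsiadm discovery sequence as a dry-run script.
STORAGE_IP=192.0.2.10   # example address (TEST-NET), not a real system
RUN=echo                # RUN=echo prints commands; RUN= executes them

$RUN iscsiadm add discovery-address "$STORAGE_IP:3260"
$RUN iscsiadm list discovery-address
$RUN iscsiadm modify discovery -t enable
$RUN iscsiadm list discovery
$RUN iscsiadm list target
```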
Oracle Solaris Server best practices
Troubleshooting with the cfgadm utility
• Getting the status of FC devices using cfgadm:
cfgadm -al
Example output for the above command:
This output shows a media changer at LUN 0 for the 100000e0022229fa9 WWN, and tape and disk devices at LUN 0 for other WWNs. The devices are connected, have been configured, and are ready for use.
The cfgadm -al -o show_FCP_dev command can be used to show the devices for all LUNs of each Ap_Id.
• Fixing a device with an "unusable" condition:
If the condition field of a device in the cfgadm output is "unusable", the device is in a state such that the server cannot use it. This may have been caused by a hardware issue. In this case, do the following to resolve the issue:
– Resolve the hardware issue so the device is available to the server.
– After the hardware issue has been resolved, use the cfgadm utility to verify device status and to mend the status if necessary.
• Use cfgadm to get device status: cfgadm -al
– For a device that is "unusable", use cfgadm to unconfigure and then re-configure the device. For example (this is an example only; your device WWN will be different):
cfgadm -c unconfigure c4::100000e0022286ec
cfgadm -f -c configure c4::100000e0022286ec
– Use cfgadm again to verify that the condition of the device is no longer "unusable": cfgadm -al
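The mend procedure above can be sketched as a script that scans cfgadm output for "unusable" devices and prints the unconfigure/configure commands for review. The sample output and Ap_Ids below are hardcoded assumptions so the parsing can be tried anywhere; on a real Solaris host, feed in live `cfgadm -al` output and run the printed commands.

```shell
#!/bin/sh
# Sketch: list "unusable" devices from cfgadm -al output and generate the
# cfgadm commands that would mend them. Sample output is hypothetical.
sample='Ap_Id                 Type         Receptacle   Occupant     Condition
c4::100000e0022286ec  tape         connected    configured   unusable
c5::100000e0022229fa9 med-changer  connected    configured   ok'

# Field 5 of each row is the condition; keep only "unusable" Ap_Ids.
unusable=$(echo "$sample" | awk '$5 == "unusable" {print $1}')

for ap_id in $unusable; do
  echo "cfgadm -c unconfigure $ap_id"
  echo "cfgadm -f -c configure $ap_id"
done
```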
IBM AIX Server
AIX 6.1 (TL9), AIX 7.1 (TL3)
Installing HBA device driver
Install the IBM (5729/5735/5273/5758/5759/5773/5774) HBA and restart the server.
1. Ensure that the HBA is recognized. At the shell prompt, type: lsdev -Cc adapter
There is a line in the output similar to the following: fcs0 Available 1D-08 FC Adapter
If the adapter is not recognized, check that the correct HBA fileset (driver) is installed:
6228: lslpp -L | grep devices.pci.df1000f7
6239: lslpp -L | grep devices.pci.df1080f9
5716: lslpp -L | grep devices.pci.df1000fa
5759: lslpp -L | grep devices.pci.df1000fd
5773: lslpp -L | grep devices.pciex.df1000fe
5774: lslpp -L | grep devices.pciex.df1000fe
There are lines in the output for lslpp similar to the following for a 6239 HBA:
devices.pci.df1080f9.diag 5.1.0.1 C F PCI-X FC Adapter Device
devices.pci.df1080f9.rte 5.1.0.1 C F PCI-X FC Adapter Device
2. For information about the HBA, such as the WWN, execute the following command: lscfg -vl fcs0
The output will look similar to the following:
3. To see the version of microcode (firmware) being run by the HBA, use the following command: lsmcode -c -d <device>
Microcode and other updates can be found at ibm.com/support/fixcentral.
4. After the HBA has successfully logged into the SAN fabric and the necessary zoning is configured, configure the HBA and devices within the fabric. At the prompt, type: cfgmgr -l <devicename> -v
Note: Running cfgmgr without a -l argument may generate a "devices.fcp.changer" error. This is a result of the cfgmgr device scan receiving a response from the auto-changer device, for which AIX does not have a specific driver. Under these conditions the error message does not indicate a problem and is for information only.
5. Within the command, <devicename> is the name from the output of the lsdev command in step 1, such as fcs0.
6. To ensure all tape device files are available, at the prompt type: lsdev -HCc tape
7. By default, AIX creates tape devices with a fixed block length. To change the devices to have variable block lengths, at the prompt type: chdev -l <tapedevice> -a block_size=0
8. Configuration of the tape devices (where tape devices are rmt0, rmt1, and so on) is complete.
Note: HPE LTO tape drives use the IBM host tape driver. When properly configured, a device listing will show the tape device as follows:
For IBM native HBAs: Other FC SCSI Tape Drive
For non-IBM native HBAs: Other SCSI Tape Drive
9. To configure Fast I/O Failure for Fibre Channel devices after link events in the SAN, change the fast fail parameter as in the example below: chdev -l fscsi# -a fc_err_recov=fast_fail
Within the command, the # in fscsi# is the same number as in the adapter name from the output of the lsdev command in step 1 (e.g., fcs0 would be fscsi0).
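The lookups used in the steps above can be sketched as small helpers: one maps an HBA feature code to the fileset name to grep for with lslpp, and one maps an fcs adapter to its fscsi protocol device. The helper names (fileset_for_hba, fcs_to_fscsi) and device names (fcs0, rmt0) are hypothetical examples, and the chdev commands are echoed for review rather than executed, since they only run on AIX.

```shell
#!/bin/sh
# Sketch of AIX tape-setup helpers; echoed commands would be run on a real host.

# Map an HBA feature code to the fileset checked with: lslpp -L | grep <fileset>
fileset_for_hba() {
  case "$1" in
    6228) echo devices.pci.df1000f7 ;;
    6239) echo devices.pci.df1080f9 ;;
    5716) echo devices.pci.df1000fa ;;
    5759) echo devices.pci.df1000fd ;;
    5773|5774) echo devices.pciex.df1000fe ;;
    *)    echo unknown ;;
  esac
}

# An fcs adapter maps to its fscsi device by number (fcs0 -> fscsi0).
fcs_to_fscsi() {
  echo "$1" | sed 's/^fcs/fscsi/'
}

# Generate variable-block-length and fast-fail commands for review.
for tape in rmt0 rmt1; do
  echo "chdev -l $tape -a block_size=0"
done
echo "chdev -l $(fcs_to_fscsi fcs0) -a fc_err_recov=fast_fail"
```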
IBM AIX Server best practices
Persistent binding
To prevent device shifting after a host reboot, you can enable the dynamic tracking feature in earlier versions of AIX. AIX 7 and above should have this parameter enabled by default.
To enable dynamic tracking of FC devices, set this attribute to dyntrk=yes, as shown in the example: chdev -l fscsi# -a dyntrk=yes
Within the command, the # in fscsi# is the same number as in the adapter name from the output of the lsdev command in step 1 (e.g., fcs0 would be fscsi0).
Note: For an IBM Virtual I/O Server (VIOS) running AIX logical partitions (LPARs), when using N-Port ID Virtualization (NPIV) with AIX LPARs, it is strongly recommended to upgrade VIOS to version 2.2.3.4 or greater.
Virtual machine support
VM software is used for partitioning, consolidating, and managing computing resources, allowing multiple unmodified operating systems and their applications to run in VMs that share physical resources. Each VM represents a complete system with processors, memory, networking, storage, and BIOS. See table 5 for tape and disk support for virtualization products.
Table 5. VM tape/VTL/NAS support
Support columns, in order: StoreEver Direct Attached SCSI; StoreEver Direct Attached SAS; StoreEver FC & FCoE SAN; StoreOnce VTL; StoreOnce iSCSI VTL; StoreOnce Catalyst over Ethernet (CoE); StoreOnce Catalyst over Fibre Channel (CoFC); StoreOnce NAS.

Citrix XenServer Host: No, No. Support notes: No support statement for tape at this time.
Citrix XenServer Guest VM: No, Yes, Yes, No, Yes. Support notes: For iSCSI tape devices, the iSCSI Software Initiator must run in the VM operating system. D2D SAN shares must be accessed directly in the VM operating system, not attached through the hypervisor.
HPEVM Host: Yes, No, Yes, Yes, Yes, No, Yes. Support notes: Tape drive/media changer must not be attached to a guest VM while being used by the host.
HPEVM Guest VM: Yes, No, Yes, Yes, Yes, No, Yes. Support notes: Tape drive/media changer must only be attached to a single guest VM at a time.
Hyper-V Host: Yes, Yes, Yes, Yes, Yes, No, Yes.
Hyper-V Guest VM: No, No, No, Yes, Yes, No, Yes. Support notes: For iSCSI tape/media changer devices, the iSCSI Software Initiator must run in the VM operating system. D2D NAS shares must be accessed directly in the VM operating system, not attached through the hypervisor.
VMware ESX Host: Yes, No, No(7), No(7), No, No, No. Support notes: Direct attached SCSI tape drive is multi-target only (no multi-LUN) and must not be attached to a VM while being used by the host. HPE does not test or validate direct attached SCSI tape on VMware ESX/ESXi hosts and does not provide support for tape drives and tape libraries in such configurations.(8)
VMware Guest VM: Yes, No, No, Yes, Yes, No(9), Yes. Support notes: Direct attached SCSI tape drive is multi-target only (no multi-LUN) and must only be attached to a single VM at a time. For iSCSI tape/media changer devices, the iSCSI Software Initiator must run in the VM operating system. D2D NAS shares must be accessed directly in the VM operating system, not attached through the hypervisor. HPE does not test or validate direct attached SCSI tape on VMware guest VMs and does not provide support for tape drives and tape libraries in such configurations.
VMware vStorage API for Data Protection: Yes, Yes, Yes, Yes, Yes, No, Yes. Support notes: FC SANs and shared tape devices are limited to a physical backup server.

7 SAN tape devices (FC and iSCSI) are not supported directly by the VMware ESX host; use vStorage API for Data Protection with a backup server and VM software snapshots to allow FC and iSCSI backups.
8 For ESX 4.1 Server tape support, see the ESX 4.1 Fibre Channel SAN Configuration Guide. For ESX 5.0 Server tape support, see the ESXi 5.0 vSphere Storage Guide. For ESX 5.1 Server tape support, see the ESXi 5.1 vSphere Storage Guide. For ESX 5.5 Server tape support, see the ESXi 5.5 vSphere Storage Guide.
Note: Be sure to do the following:
• Refer to your data protection and archiving software documentation for supported VM backup methods.
• Refer to the VM documentation for supported backup devices.
VMware Server
Note: VMware does not support ESXi SAN-attached tape devices. VMware vStorage APIs for Data Protection (VADP) with an off-host backup server can be used to manage SAN devices.
• VADP offloads backup responsibility from ESXi hosts to a dedicated backup server or servers, reducing the load on ESXi hosts. VADP provides full-image backup and restore capabilities for all VMs, and file-based backups for Microsoft Windows and Linux VMs.
• VMs can also be set up as backup servers with locally mapped NAS shares or iSCSI backup devices. Refer to your data protection and archiving software documentation for VM support.
• VMs can also be set up for LAN backup the same as a regular client. Refer to your data protection and archiving software documentation for details.
• For recommendations on VMware VM backup and recovery to HPE StoreOnce Backup, go to hpe.com/storage/BURACompatibility, scroll down to Data Agile BURA Solution White Papers, then click the VMware hyperlink across from "White Papers - Databases and Virtual Machines" to view the associated white papers.
9 Yes, when using HPE StoreOnce Recovery Manager Central (RMC) only.
HPE Integrity Virtual Machines
Hewlett Packard Enterprise supports, certifies, and sells HPE Integrity Virtual Machines (HPEVM) virtualization software on HPE Integrity servers.
HPEVM is an application installed on an HPE-UX server that allows multiple unmodified operating systems (HPE-UX, Windows, and Linux) and their applications to run in VMs that share physical resources.
The HPE Virtual Server Environment (VSE) for HPE Integrity provides an automated infrastructure that can adapt in seconds with mission-critical reliability. HPE VSE allows you to optimize server utilization in real time by creating virtual servers that can automatically grow and shrink based on business priorities and service-level objectives.
Note: The HPE Integrity VM host and VMs do support FC SAN-connected tape, Virtual Library System (VLS) devices, and HPE StoreOnce Backup systems.
• Off-host backups using HPE storage array hardware mirroring or snapshots can be used to shorten backup windows and off-load resources required for backup.
• VMs can also be set up for LAN backup the same as a regular client or media host. Refer to your data protection and archiving software documentation for details.
Microsoft Hyper-V
Note: Hewlett Packard Enterprise does not test or support Hyper-V VMs with SAN or direct-attached tape drives. The Hyper-V host or a backup server can be used to manage such devices.
• The Volume Shadow Copy Service (VSS) Hyper-V writer can be used to quiesce Windows VMs and create a snapshot on the Hyper-V host volume. VMs that cannot be quiesced can be placed in the Saved state before snapshot creation. The snapshots are then used for image or file backup of the VMs. If a VM was placed in the Saved state, Hyper-V will return the VM to its original state. Review your data protection and archiving software documentation for details.
• VMs can also be set up for LAN backup the same as a regular client. Refer to your data protection and archiving software documentation for details.
• VMs can also be set up as backup servers with locally mapped NAS shares or iSCSI backup devices. Refer to your data protection and archiving software documentation for VM support.
Installing backup software and patches
After all components on the SAN are logged in and configured, the system is ready for the installation of any supported backup software. Refer to the installation guide for your particular software package, or contact the vendor for detailed installation procedures and requirements. After installing the backup software, check with the software vendor for the latest updates and patches. If any updates or patches exist for your backup software, install them now.

Learn more at hpe.com/storage/StoreEver

© Copyright 2015 Hewlett Packard Enterprise Development LP. The information contained herein is subject to change without notice. The only warranties for Hewlett Packard Enterprise products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. Hewlett Packard Enterprise shall not be liable for technical or editorial errors or omissions contained herein.
Citrix is a registered trademark of Citrix Systems, Inc. and/or one or more of its subsidiaries, and may be registered in the United States Patent and Trademark Office and in other countries. Linux is the registered trademark of Linus Torvalds in the U.S. and other countries. McAfee is a trademark or registered trademark of McAfee Inc. in the United States and other countries. Microsoft, Windows, and Windows Server are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries. Oracle is a registered trademark of Oracle and/or its affiliates. Red Hat is a registered trademark of Red Hat, Inc. in the United States and other countries. SAP HANA is the trademark or registered trademark of SAP SE in Germany and in several other countries. VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions.
4AA5-7983ENW, December 2015
IBM AIX Server
AIX 6.1 (TL9), AIX 7.1 (TL3)
Installing HBA device driver
Install the IBM (5729/5735/5273/5758/5759/5773/5774) HBA and restart the server.
1. Ensure that the HBA is recognized. At the shell prompt, type: lsdev -Cc adapter
There is a line in the output similar to the following: fcs0 Available 1D-08 FC Adapter
If the adapter is not recognized, check that the correct HBA fileset (driver) is installed:
6228: lslpp -L | grep devices.pci.df1000f7
6239: lslpp -L | grep devices.pci.df1080f9
5716: lslpp -L | grep devices.pci.df1000fa
5759: lslpp -L | grep devices.pci.df1000fd
5773: lslpp -L | grep devices.pciex.df1000fe
5774: lslpp -L | grep devices.pciex.df1000fe
There are lines in the output of lslpp similar to the following for a 6239 HBA:
devices.pci.df1080f9.diag 5.1.0.1 C F PCI-X FC Adapter Device
devices.pci.df1080f9.rte 5.1.0.1 C F PCI-X FC Adapter Device
2. For information about the HBA, such as the WWN, execute the following command: lscfg -vl fcs0
The output will look similar to the following.
3. To see the version of microcode (firmware) being run by the HBA, use the following command: lsmcode -c -d <device>
Microcode and other updates can be found at ibm.com/support/fixcentral.
4. After the HBA has successfully logged into the SAN fabric and the necessary zoning is configured, configure the HBA and devices within the fabric. At the prompt, type: cfgmgr -l <devicename> -v
Note: Running cfgmgr without a -l argument may generate a "devices.fcp.changer" error. This is a result of the cfgmgr device scan receiving a response from the auto-changer device, for which AIX does not have a specific driver. Under these conditions the error message does not indicate a problem and is for information only.
5. Within the command, <devicename> is the name from the output of the lsdev command in step 1, such as fcs0.
6. To ensure all tape device files are available, at the prompt type: lsdev -HCc tape
7. By default, AIX creates tape devices with a fixed block length. To change the devices to have variable block lengths, at the prompt type: chdev -l <tapedevice> -a block_size=0
8. Configuration of the tape devices (where tape devices are rmt0, rmt1, and so on) is complete.
Note: HPE LTO tape drives use the IBM host tape driver. When properly configured, a device listing will show the tape device as follows:
For IBM native HBAs: Other FC SCSI Tape Drive
For non-IBM native HBAs: Other SCSI Tape Drive
9. To configure Fast I/O Failure for Fibre Channel devices after link events in the SAN, change the fast fail parameter as in the example below: chdev -l fscsiN -a fc_err_recov=fast_fail
Within the command, the N in fscsiN is the same number as in the adapter name from the output of the lsdev command in step 1 (e.g., fcs0 corresponds to fscsi0).
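The steps above can be gathered into a small script. The sketch below is a dry run that only prints the AIX commands it would execute, since cfgmgr and chdev require an AIX host and root privileges; the adapter and tape device names are placeholders, to be taken from the lsdev output in steps 1 and 6.

```shell
#!/bin/sh
# Dry-run sketch of steps 4-9 for one adapter/tape pair.
# ADAPTER and TAPE are placeholders; on a real AIX host, take them from
# `lsdev -Cc adapter` and `lsdev -HCc tape`.
ADAPTER="fcs0"
TAPE="rmt0"

run() { echo "$@"; }   # dry run: swap `echo` for the real command on AIX

run cfgmgr -l "$ADAPTER" -v                   # step 4: configure HBA and fabric devices
run lsdev -HCc tape                           # step 6: list tape device files
run chdev -l "$TAPE" -a block_size=0          # step 7: variable block length
run chdev -l "fscsi${ADAPTER#fcs}" -a fc_err_recov=fast_fail  # step 9: fast fail
```

Remove the echo-based wrapper only after confirming the device names on the target host.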
IBM AIX Server best practices
Persistent binding
To prevent device shifting after a host reboot, you can enable the dynamic tracking feature in earlier versions of AIX; AIX 7 and later should have this parameter enabled by default.
To enable dynamic tracking of FC devices, set this attribute to dyntrk=yes as shown in the example: chdev -l fscsiN -a dyntrk=yes
Within the command, the N in fscsiN is the same number as in the adapter name from the output of the lsdev command in step 1 (e.g., fcs0 corresponds to fscsi0).
Note: For an IBM Virtual I/O Server (VIOS) running AIX logical partitions (LPARs), when using N_Port ID Virtualization (NPIV) with AIX LPARs it is strongly recommended to upgrade VIOS to version 2.2.3.4 or greater.
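The fcsN-to-fscsiN mapping described above can be applied across all adapters in one pass. This is a dry-run sketch (it prints the chdev commands rather than running them) with a placeholder adapter list; on AIX, the list would come from `lsdev -Cc adapter`.

```shell
#!/bin/sh
# Dry-run sketch: derive each protocol device (fscsiN) from its adapter (fcsN)
# and print the chdev command that enables dynamic tracking and fast fail.
# The adapter list is a placeholder for `lsdev -Cc adapter` output.
for adapter in fcs0 fcs1; do
    fscsi="fscsi${adapter#fcs}"    # fcs0 -> fscsi0, fcs1 -> fscsi1
    echo chdev -l "$fscsi" -a dyntrk=yes -a fc_err_recov=fast_fail
done
```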
Virtual machine support
VM software is used for partitioning, consolidating, and managing computing resources, allowing multiple unmodified operating systems and their applications to run in VMs that share physical resources. Each VM represents a complete system with processors, memory, networking, storage, and BIOS. See table 5 for tape and disk support for virtualization products.
Table 5. VM tape/VTL/NAS support
Columns, in order: StoreEver Direct Attached SCSI | StoreEver Direct Attached SAS | StoreEver FC & FCoE SAN | StoreOnce VTL | StoreOnce iSCSI VTL | StoreOnce Catalyst over Ethernet (CoE) | StoreOnce Catalyst over Fibre Channel (CoFC) | StoreOnce NAS

• Citrix XenServer Host: No | No. Support notes: No support statement for tape at this time.
• Citrix XenServer Guest VM: No | Yes | Yes | No | Yes. Support notes: For iSCSI tape devices, the iSCSI Software Initiator must run in the VM operating system. D2D SAN shares must be accessed directly in the VM operating system, not attached through the hypervisor.
• HPEVM Host: Yes | No | Yes | Yes | Yes | No | Yes. Support notes: Tape drive/media changer must not be attached to a guest VM while being used by the host.
• HPEVM Guest VM: Yes | No | Yes | Yes | Yes | No | Yes. Support notes: Tape drive/media changer must only be attached to a single guest VM at a time.
• Hyper-V Host: Yes | Yes | Yes | Yes | Yes | No | Yes.
• Hyper-V Guest VM: No | No | No | Yes | Yes | No | Yes. Support notes: For iSCSI tape/media changer devices, the iSCSI Software Initiator must run in the VM operating system. D2D NAS shares must be accessed directly in the VM operating system, not attached through the hypervisor.
• VMware ESX Host: Yes | No | No (7) | No (7) | No | No | No. Support notes: Direct attached SCSI tape drive is multi-target only (no multi-LUN) and must not be attached to a VM while being used by the host. HPE does not test or validate direct attached SCSI tape on VMware ESX/ESXi hosts and does not provide support for tape drives and tape libraries in such configurations (8).

7. SAN tape devices (FC and iSCSI) are not supported directly by a VMware ESX host; vStorage API for Data Protection uses a backup server and VM software snapshots to allow FC and iSCSI backups.
8. For ESX 4.1 Server tape support, see the ESX 4.1 Fibre Channel SAN Configuration Guide. For ESX 5.0, see the ESXi 5.0 vSphere Storage Guide. For ESX 5.1, see the ESXi 5.1 vSphere Storage Guide. For ESX 5.5, see the ESXi 5.5 vSphere Storage Guide.
Table 5. VM tape/VTL/NAS support (continued)
• VMware Guest VM: Yes | No | No | Yes | Yes | No (9) | Yes. Support notes: Direct attached SCSI tape drive is multi-target only (no multi-LUN) and must only be attached to a single VM at a time. For iSCSI tape/media changer devices, the iSCSI Software Initiator must run in the VM operating system. D2D NAS shares must be accessed directly in the VM operating system, not attached through the hypervisor. HPE does not test or validate direct attached SCSI tape on VMware guest VMs and does not provide support for tape drives and tape libraries in such configurations.
• VMware vStorage API for Data Protection: Yes | Yes | Yes | Yes | Yes | No | Yes. Support notes: FC SANs and shared tape devices are limited to a physical backup server.
Note: Be sure to do the following:
• Refer to your data protection and archiving software documentation for supported VM backup methods.
• Refer to the VM documentation for supported backup devices.
VMware Server
Note: VMware does not support ESXi SAN attached tape devices. VMware vStorage APIs for Data Protection (VADP) with an off-host backup server can be used to manage SAN devices.
• VADP offloads backup responsibility from ESXi hosts to a dedicated backup server or servers, reducing the load on ESXi hosts. VADP provides full-image backup and restore capabilities for all VMs, and file-based backups for Microsoft Windows and Linux VMs.
• VMs can also be set up as backup servers with locally mapped NAS shares or iSCSI backup devices. Refer to your data protection and archiving software documentation for VM support.
• VMs can also be set up for LAN backup the same as a regular client. Refer to your data protection and archiving software documentation for details.
• For recommendations on VMware VM backup and recovery to HPE StoreOnce Backup, go to hpe.com/storage/BURACompatibility, scroll down to Data Agile BURA Solution White Papers, then click the VMware hyperlink across from "White Papers - Databases and Virtual Machines" to view the associated white papers.
9. Yes when using HPE StoreOnce Recovery Manager Central (RMC) only.
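When a VM acts as its own backup server, the guidance above requires the NAS share to be mounted inside the guest OS rather than through the hypervisor. The sketch below illustrates that with a dry-run NFS mount; the appliance hostname, share path, and mount point are hypothetical placeholders, and the real command needs root privileges and a reachable StoreOnce NAS share.

```shell
#!/bin/sh
# Sketch: mount a StoreOnce NAS share directly in the guest OS
# (not attached through the hypervisor). All names are placeholders.
STOREONCE_HOST="storeonce01"     # assumption: StoreOnce appliance hostname
SHARE="/nas/vm_backups"          # assumption: exported NFS share path
MOUNT_POINT="/mnt/backup"
# Dry run: print the mount command instead of executing it.
echo mount -o rw,hard "${STOREONCE_HOST}:${SHARE}" "$MOUNT_POINT"
```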
HPE Integrity Virtual Machines
Hewlett Packard Enterprise supports, certifies, and sells HPE Integrity Virtual Machines (HPEVM) virtualization software on HPE Integrity servers.
HPEVM is an application installed on an HPE-UX Server that allows multiple unmodified operating systems (HPE-UX, Windows, and Linux) and their applications to run in VMs that share physical resources.
The HPE Virtual Server Environment (VSE) for HPE Integrity provides an automated infrastructure that can adapt in seconds with mission-critical reliability. HPE VSE allows you to optimize server utilization in real time by creating virtual servers that can automatically grow and shrink based on business priorities and service levels.
Note: The HPE Integrity VM host and VMs do support FC SAN connected tape, Virtual Library Systems (VLS) devices, and HPE StoreOnce backup systems.
• Off-host backups using HPE storage array hardware mirroring or snapshots can be used to shorten backup windows and off-load resources required for backup.
• VMs can also be set up for LAN backup the same as a regular client or media host. Refer to your data protection and archiving software documentation for details.
Microsoft Hyper-V
Note: Hewlett Packard Enterprise does not test or support Hyper-V VMs with SAN or direct attach tape drives. The Hyper-V host or a backup server can be used to manage such devices.
• The Volume Shadow Copy Service (VSS) Hyper-V writer can be used to quiesce Windows VMs and create a snapshot on the Hyper-V host volume. VMs that cannot be quiesced can be placed in the Saved state before snapshot creation. The snapshots are then used for image or file backup of the VMs. If a VM was placed in the Saved state, Hyper-V will return the VM to its original state. Review your data protection and archiving software documentation for details.
• VMs can also be set up for LAN backup the same as a regular client. Refer to your data protection and archiving software documentation for details.
• VMs can also be set up as backup servers with locally mapped NAS shares or iSCSI backup devices. Refer to your data protection and archiving software documentation for VM support.
Installing backup software and patches
After all components on the SAN are logged in and configured, the system is ready for the installation of any supported backup software. Refer to the installation guide for your particular software package, or contact the vendor for detailed installation procedures and requirements. After installing the backup software, check with the software vendor for the latest updates and patches. If any updates or patches exist for your backup software, install them now.

Learn more at hpe.com/storage/StoreEver

© Copyright 2015 Hewlett Packard Enterprise Development LP. The information contained herein is subject to change without notice. The only warranties for Hewlett Packard Enterprise products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. Hewlett Packard Enterprise shall not be liable for technical or editorial errors or omissions contained herein.

Citrix is a registered trademark of Citrix Systems, Inc. and/or one or more of its subsidiaries, and may be registered in the United States Patent and Trademark Office and in other countries. Linux is the registered trademark of Linus Torvalds in the U.S. and other countries. McAfee is a trademark or registered trademark of McAfee, Inc. in the United States and other countries. Microsoft, Windows, and Windows Server are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries. Oracle is a registered trademark of Oracle and/or its affiliates. Red Hat is a registered trademark of Red Hat, Inc. in the United States and other countries. SAP HANA is the trademark or registered trademark of SAP SE in Germany and in several other countries. VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions.

4AA5-7983ENW, December 2015
Technical white paper Page 32
3 To see the version of microcode (firmware) being run by the HBA use the following command lsmcode -c -d ltdevicegt Microcode and other updates can be found at ibmcomsupportfixcentral
4 After the HBA has successfully logged into the SAN fabric and the necessary zoning is configured configure the HBA and devices within the fabric At the prompt type cfgmgr -l ltdevicenamegt -v
Note Running the cfgmgr without a -I argument may generate a ldquodevicesfcpchangerrdquo error This is a result of the cfgmgr device scan receiving a response from the auto-changer device for which AIX does not have a specific driver Under these conditions the error message does not indicate a problem and is for information only
5 Within the command ltdevicenamegt is the name from the output of the lsdev command in step 1 such as fcs0
6 To ensure all tape device files are available at the prompt type lsdev -HCc tape
7 By default AIX creates tape devices with a fixed block length To change the devices to have variable block lengths at the prompt type chdev -l lttapedevicegt -a block_size=0
8 Configuration of the tape devices (where tape devices are rmt0 rmt1 and so on) are complete
Note HPE LTO tape drives use the IBM host tape driver When properly configured a device listing will show the tape device as follows For IBM native HBAs Other FC SCSI Tape Drive
For non-IBM native HBAs Other SCSI Tape Drive
9 To configure Fast IO Failure for Fibre Channel devices after link events in the SAN change the fast fail parameter as in the example below chdev -l fscsi -a fc_err_recov=fast_fail Within the command the in fscsi is the same number from the name from the output of the lsdev command in step 1 (eg fcs0 would be fscsi0)
IBM AIX Server best practices Persistent binding To prevent device shifting after a host reboot you can enable the dynamic tracking feature in earlier versions of AIX AIX 7 and above should have this parameter enabled by default
To enable dynamic tracking of FC devices set this attribute to dyntrk=yes as shown in the example chdev -l fscsi -a dyntrk=yes Within the command the in fscsi is the same number from the name from the output of the lsdev command in step 1 (eg fcs0 would be fscsi0)
Note For an IBM Virtual IO Server (VIOS) running AIX logical partitions (LPARs) when using N-Port ID Virtualization (NPIV) with AIX LPARs it is strongly recommended to upgrade VIOS to version 2234 or greater
Technical white paper Page 33
Virtual machine support VM software is used for portioning consolidating and managing computing resources allowing multiple unmodified operating systems and their applications to run in VMs that share physical resources Each VM represents a complete system with processors memory networking storage and BIOS See table 5 for tape and disk support for virtualization products
Table 5 VM tapeVTLNAS support
VM Product StoreEver Direct Attached SCSI
StoreEver Direct Attached SAS
StoreEver FC amp FCoE SAN StoreOnce VTL
StoreOnce iSCSI VTL
StoreOnce Catalyst over Ethernet (CoE)
StoreOnce Catalyst over Fibre Channel (CoFC)
StoreOnce NAS
Support Notes
Citrix XenServer Host No No No support statement for tape at this time
Citrix XenServer Guest VM
No Yes Yes No Yes For iSCSI tape devices the iSCSI Software Initiator must run in the VM operating system
D2D SAN shares must be accessed directly in the VM operating system not attached through the hypervisor
HPEVM Host Yes No Yes Yes Yes No Yes Tape drivemedia changer must not be attached to a guest VM while being used by the host
HPEVM Guest VM Yes No Yes Yes Yes No Yes Tape drivemedia changer must only be attached to a single guest VM at a time
Hyper-V Host Yes Yes Yes Yes Yes No Yes
Hyper-V Guest VM No No No Yes Yes No Yes For iSCSI tapemedia changer devices the iSCSI Software Initiator must run in the VM operating system
D2D NAS shares must be accessed directly in the VM operating system not attached through the hypervisor
VMware ESX Host Yes No No7 No7 No No No Direct attached SCSI tape drive is multi-target only (no multi-LUN) and must not be attached to a VM while being used by the host HPE does not test or validate direct attached SCSI tape on VMware ESXESXi hosts and does not provide support for tape drives and tape libraries in such configurations8
7 SAN tape devices (FC and iSCSI) are not supported directly by VMware ESX host vStorage API for Data Protection use a backup server and VM software snapshots to allow FC
and iSCSI backups 8 For ESX 41 Server tape support see ESX 41 Fibre Channel SAN Configuration Guide For ESX 50 Server tape support see ESXi 50 vSphere Storage Guide For ESX 51 Server tape support see ESXi 51 vSphere Storage Guide For ESX 55 Server tape support see ESXi 55 vSphere Storage Guide
Technical white paper Page 34
Table 5 VM tapeVTLNAS support (continued)
VMware Guest VM Yes No No Yes Yes No9 Yes Direct attached SCSI tape drive is multi-target only (no multi-LUN) and must only be attached to a single VM at a time
For iSCSI tapemedia changer devices the iSCSI Software Initiator must run in the VM operating system
D2D NAS shares must be accessed directly in the VM operating system not attached through the hypervisor
HPE does not test or validate direct attached SCSI tape on VMware guest VMs and does not provide support for tape drives and tape libraries in such configurations
VMware vStorage API for Data Protection
Yes Yes Yes Yes Yes No Yes FC SANs and shared tape devices are limited to a physical backup server
Note Be sure to do the following
bull Refer to your data protection and archiving software documentation for supported VM backup methods
bull Refer to the VM documentation for supported backup devices
VMware Server
Note VMware does not support ESXi SAN attached tape devices VMware vStorage APIs for Data Protection (VADP) with an off-host backup server can be used to manage SAN devices
bull VADP offloads backup responsibility from ESXi hosts to a dedicated backup server or servers This reduces the load on ESXi hosts VADP provides full-image backup and restore capabilities for all VMs and file based backups for Microsoft Windows and Linux VMs
bull VMs can also be set up as backup servers with locally mapped NAS shares or iSCSI backup devices Refer to your data protection and archiving software documentation for VM support
bull VMs can also be set up for LAN backup the same as a regular client Refer to your data protection and archiving software documentation for details
bull For recommendations on VMware VM backup and recovery to HPE StoreOnce Backup go to hpecomstorageBURACompatibility scroll down to Data Agile BURA Solution White Papers then click on the VMware hyperlink across from White PapersmdashDatabases and Virtual Machines to view the associated white papers
9 Yes when using HPE StoreOnce Recovery Manager Central (RMC) only
Technical white paper Page 35
HPE Integrity Virtual Machines Hewlett Packard Enterprise supports certifies and sells HPE Integrity Virtual Machines (HPEVM) Virtualization software on HPE Integrity servers
HPEVM is an application installed on an HPE-UX Server and allows multiple unmodified operating systems (HPE-UX Windows and Linux) and their applications to run in VMs that share physical resources
The HPE Virtual Server Environment (VSE) for HPE Integrity provides an automated infrastructure that can adapt in seconds with mission-critical reliability HPE VSE allows you to optimize server utilization in real time by creating virtual servers that can automatically grow and shrink based on business priorities and service
Note The HPE Integrity VM host and VMs do support FC SAN connected tape Virtual Library Systems (VLS) devices and HPE StoreOnce backup systems
bull Off-host backups using HPE storage array hardware mirroring or snapshots can be used to shorten the backup windows and off-load resources required for backup
bull VMs can also be set up for LAN backup the same as a regular client or media host Refer to your data protection and archiving software documentation for details
Microsoft Hyper-V
Note Hewlett Packard Enterprise does not test or support Hyper-V VMs with SAN or direct attach tape drives The Hyper-V host or a backup server can be used to manage such devices
bull The volume shadow copy service (VSS) Hyper-V writer can be used to quiesce Windows VMs and create a snapshot on the Hyper-V host volume VMs that cannot be quiesced can be placed in the Saved state before snapshot creation The snapshots are then used for image or file backup of the VMs If a VM was placed in the Saved state Hyper-V will return the VM to its original state Review your data protection and archiving software documentation for details
bull VMs can also be set up for LAN backup the same as a regular client Refer to your backup protection and archiving software documentation for details
bull VMs can also be set up as backup servers with locally mapped NAS shares or iSCSI backup devices Refer to your data protection and archiving software documentation for VM support
Technical white paper Page 36
Sign up for updates
Rate this document
copy Copyright 2015 Hewlett Packard Enterprise Development LP The information contained herein is subject to change without notice The only warranties for Hewlett Packard Enterprise products and services are set forth in the express warranty statements accompanying such products and services Nothing herein should be construed as constituting an additional warranty Hewlett Packard Enterprise shall not be liable for technical or editorial errors or omissions contained herein
Citrix is a registered trademark of Citrix Systems Inc andor one more of its subsidiaries and may be registered in the United States Patent and Trademark Office and in other countries Linux is the registered trademark of Linus Torvalds in the US and other countries McAfee is a trademark or registered trademark of McAfee Inc in the United States and other countries Microsoft Windows and Windows Server are either registered trademarks or trademarks of Microsoft Corporation in the United States andor other countries Oracle is a registered trademark of Oracle andor its affiliates Red Hat is a registered trademark of Red Hat Inc in the United States and other countries SAP HANA is the trademark or registered trademark of SAP SE in Germany and in several other countries VMware is a registered trademark or trademark of VMware Inc in the United States andor other jurisdictions
4AA5-7983ENW December 2015
Installing backup software and patches After all components on the SAN are logged in and configured the system is ready for the installation of any supported backup software Refer to the installation guide for your particular software package or contact the vendor for detailed installation procedures and requirements After installing the backup software check with the software vendor for the latest updates and patches If any updates or patches exist for your backup software install them now
Learn more at hpecomstorageStoreEver
Technical white paper Page 33
Virtual machine support VM software is used for portioning consolidating and managing computing resources allowing multiple unmodified operating systems and their applications to run in VMs that share physical resources Each VM represents a complete system with processors memory networking storage and BIOS See table 5 for tape and disk support for virtualization products
Table 5 VM tapeVTLNAS support
VM Product StoreEver Direct Attached SCSI
StoreEver Direct Attached SAS
StoreEver FC amp FCoE SAN StoreOnce VTL
StoreOnce iSCSI VTL
StoreOnce Catalyst over Ethernet (CoE)
StoreOnce Catalyst over Fibre Channel (CoFC)
StoreOnce NAS
Support Notes
Citrix XenServer Host No No No support statement for tape at this time
Citrix XenServer Guest VM
No Yes Yes No Yes For iSCSI tape devices the iSCSI Software Initiator must run in the VM operating system
D2D SAN shares must be accessed directly in the VM operating system not attached through the hypervisor
HPEVM Host Yes No Yes Yes Yes No Yes Tape drivemedia changer must not be attached to a guest VM while being used by the host
HPEVM Guest VM Yes No Yes Yes Yes No Yes Tape drivemedia changer must only be attached to a single guest VM at a time
Hyper-V Host Yes Yes Yes Yes Yes No Yes
Hyper-V Guest VM No No No Yes Yes No Yes For iSCSI tapemedia changer devices the iSCSI Software Initiator must run in the VM operating system
D2D NAS shares must be accessed directly in the VM operating system not attached through the hypervisor
VMware ESX Host Yes No No7 No7 No No No Direct attached SCSI tape drive is multi-target only (no multi-LUN) and must not be attached to a VM while being used by the host HPE does not test or validate direct attached SCSI tape on VMware ESXESXi hosts and does not provide support for tape drives and tape libraries in such configurations8
7 SAN tape devices (FC and iSCSI) are not supported directly by VMware ESX host vStorage API for Data Protection use a backup server and VM software snapshots to allow FC
and iSCSI backups 8 For ESX 41 Server tape support see ESX 41 Fibre Channel SAN Configuration Guide For ESX 50 Server tape support see ESXi 50 vSphere Storage Guide For ESX 51 Server tape support see ESXi 51 vSphere Storage Guide For ESX 55 Server tape support see ESXi 55 vSphere Storage Guide
Technical white paper Page 34
Table 5 VM tapeVTLNAS support (continued)
VMware Guest VM Yes No No Yes Yes No9 Yes Direct attached SCSI tape drive is multi-target only (no multi-LUN) and must only be attached to a single VM at a time
For iSCSI tapemedia changer devices the iSCSI Software Initiator must run in the VM operating system
D2D NAS shares must be accessed directly in the VM operating system not attached through the hypervisor
HPE does not test or validate direct attached SCSI tape on VMware guest VMs and does not provide support for tape drives and tape libraries in such configurations
VMware vStorage API for Data Protection
Yes Yes Yes Yes Yes No Yes FC SANs and shared tape devices are limited to a physical backup server
Note Be sure to do the following
bull Refer to your data protection and archiving software documentation for supported VM backup methods
bull Refer to the VM documentation for supported backup devices
VMware Server
Note VMware does not support ESXi SAN attached tape devices VMware vStorage APIs for Data Protection (VADP) with an off-host backup server can be used to manage SAN devices
bull VADP offloads backup responsibility from ESXi hosts to a dedicated backup server or servers This reduces the load on ESXi hosts VADP provides full-image backup and restore capabilities for all VMs and file based backups for Microsoft Windows and Linux VMs
bull VMs can also be set up as backup servers with locally mapped NAS shares or iSCSI backup devices Refer to your data protection and archiving software documentation for VM support
bull VMs can also be set up for LAN backup the same as a regular client Refer to your data protection and archiving software documentation for details
bull For recommendations on VMware VM backup and recovery to HPE StoreOnce Backup go to hpecomstorageBURACompatibility scroll down to Data Agile BURA Solution White Papers then click on the VMware hyperlink across from White PapersmdashDatabases and Virtual Machines to view the associated white papers
9 Yes when using HPE StoreOnce Recovery Manager Central (RMC) only
Technical white paper Page 35
HPE Integrity Virtual Machines Hewlett Packard Enterprise supports certifies and sells HPE Integrity Virtual Machines (HPEVM) Virtualization software on HPE Integrity servers
HPEVM is an application installed on an HPE-UX Server and allows multiple unmodified operating systems (HPE-UX Windows and Linux) and their applications to run in VMs that share physical resources
The HPE Virtual Server Environment (VSE) for HPE Integrity provides an automated infrastructure that can adapt in seconds with mission-critical reliability HPE VSE allows you to optimize server utilization in real time by creating virtual servers that can automatically grow and shrink based on business priorities and service
Note The HPE Integrity VM host and VMs do support FC SAN connected tape Virtual Library Systems (VLS) devices and HPE StoreOnce backup systems
bull Off-host backups using HPE storage array hardware mirroring or snapshots can be used to shorten the backup windows and off-load resources required for backup
bull VMs can also be set up for LAN backup the same as a regular client or media host Refer to your data protection and archiving software documentation for details
Microsoft Hyper-V
Note Hewlett Packard Enterprise does not test or support Hyper-V VMs with SAN or direct attach tape drives The Hyper-V host or a backup server can be used to manage such devices
bull The volume shadow copy service (VSS) Hyper-V writer can be used to quiesce Windows VMs and create a snapshot on the Hyper-V host volume VMs that cannot be quiesced can be placed in the Saved state before snapshot creation The snapshots are then used for image or file backup of the VMs If a VM was placed in the Saved state Hyper-V will return the VM to its original state Review your data protection and archiving software documentation for details
bull VMs can also be set up for LAN backup the same as a regular client Refer to your backup protection and archiving software documentation for details
bull VMs can also be set up as backup servers with locally mapped NAS shares or iSCSI backup devices Refer to your data protection and archiving software documentation for VM support
Technical white paper Page 36
Sign up for updates
Rate this document
copy Copyright 2015 Hewlett Packard Enterprise Development LP The information contained herein is subject to change without notice The only warranties for Hewlett Packard Enterprise products and services are set forth in the express warranty statements accompanying such products and services Nothing herein should be construed as constituting an additional warranty Hewlett Packard Enterprise shall not be liable for technical or editorial errors or omissions contained herein
Citrix is a registered trademark of Citrix Systems Inc andor one more of its subsidiaries and may be registered in the United States Patent and Trademark Office and in other countries Linux is the registered trademark of Linus Torvalds in the US and other countries McAfee is a trademark or registered trademark of McAfee Inc in the United States and other countries Microsoft Windows and Windows Server are either registered trademarks or trademarks of Microsoft Corporation in the United States andor other countries Oracle is a registered trademark of Oracle andor its affiliates Red Hat is a registered trademark of Red Hat Inc in the United States and other countries SAP HANA is the trademark or registered trademark of SAP SE in Germany and in several other countries VMware is a registered trademark or trademark of VMware Inc in the United States andor other jurisdictions
4AA5-7983ENW December 2015
Installing backup software and patches After all components on the SAN are logged in and configured the system is ready for the installation of any supported backup software Refer to the installation guide for your particular software package or contact the vendor for detailed installation procedures and requirements After installing the backup software check with the software vendor for the latest updates and patches If any updates or patches exist for your backup software install them now
Learn more at hpecomstorageStoreEver
Technical white paper Page 34
Table 5 VM tapeVTLNAS support (continued)
VMware Guest VM Yes No No Yes Yes No9 Yes Direct-attached SCSI tape drives are multi-target only (no multi-LUN) and must be attached to only a single VM at a time.
For iSCSI tape/media changer devices, the iSCSI Software Initiator must run in the VM operating system.
D2D NAS shares must be accessed directly in the VM operating system, not attached through the hypervisor.
HPE does not test or validate direct-attached SCSI tape on VMware guest VMs, and does not provide support for tape drives and tape libraries in such configurations.
VMware vStorage API for Data Protection Yes Yes Yes Yes Yes No Yes FC SANs and shared tape devices are limited to a physical backup server.
Note: Be sure to do the following:
• Refer to your data protection and archiving software documentation for supported VM backup methods.
• Refer to the VM documentation for supported backup devices.
VMware Server
Note: VMware does not support ESXi SAN-attached tape devices. VMware vStorage APIs for Data Protection (VADP) with an off-host backup server can be used to manage SAN devices.
• VADP offloads backup responsibility from ESXi hosts to a dedicated backup server or servers, reducing the load on the ESXi hosts. VADP provides full-image backup and restore capabilities for all VMs, and file-based backups for Microsoft Windows and Linux VMs.
• VMs can also be set up as backup servers with locally mapped NAS shares or iSCSI backup devices. Refer to your data protection and archiving software documentation for VM support.
• VMs can also be set up for LAN backup, the same as a regular client. Refer to your data protection and archiving software documentation for details.
• For recommendations on VMware VM backup and recovery to HPE StoreOnce Backup, go to hpe.com/storage/BURACompatibility, scroll down to Data Agile BURA Solution White Papers, then click the VMware hyperlink across from White Papers - Databases and Virtual Machines to view the associated white papers.
9. Yes when using HPE StoreOnce Recovery Manager Central (RMC) only.
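The VADP coverage described above can be summed up in a small sketch: full-image backup and restore are available for every VM, while file-based backup is additionally available for Microsoft Windows and Linux guests. This is an illustrative helper only, not the VADP API; the function name and return values are assumptions for clarity.

```python
def vadp_backup_methods(guest_os: str) -> list[str]:
    """Backup types available through VADP as described above: full-image
    backup and restore for every VM, plus file-based backup for Microsoft
    Windows and Linux guests. Illustrative sketch only, not the VADP API."""
    methods = ["full-image"]                       # available for all VMs
    if guest_os.strip().lower() in ("windows", "linux"):
        methods.append("file-based")               # Windows and Linux guests only
    return methods
```

For example, a Solaris guest would report only `["full-image"]`, while a Windows guest would also list `"file-based"`.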
HPE Integrity Virtual Machines Hewlett Packard Enterprise supports, certifies, and sells HPE Integrity Virtual Machines (HPE VM) virtualization software on HPE Integrity servers.
HPE VM is an application installed on an HPE-UX server that allows multiple unmodified operating systems (HPE-UX, Windows, and Linux) and their applications to run in VMs that share physical resources.
The HPE Virtual Server Environment (VSE) for HPE Integrity provides an automated infrastructure that can adapt in seconds with mission-critical reliability. HPE VSE allows you to optimize server utilization in real time by creating virtual servers that can automatically grow and shrink based on business priorities and service levels.
Note: The HPE Integrity VM host and VMs do support FC SAN-connected tape, Virtual Library Systems (VLS) devices, and HPE StoreOnce Backup systems.
• Off-host backups using HPE storage array hardware mirroring or snapshots can be used to shorten backup windows and offload the resources required for backup.
• VMs can also be set up for LAN backup, the same as a regular client or media host. Refer to your data protection and archiving software documentation for details.
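The off-host backup flow in the first bullet can be sketched as: create a point-in-time snapshot on the array, present it to a separate backup host, back it up there, and release it, so the production host stays out of the data path. All class and method names below are illustrative stand-ins, not an HPE API.

```python
class Array:
    """Stand-in for a storage array that supports hardware snapshots."""
    def create_snapshot(self, volume):
        return f"snap-of-{volume}"
    def delete_snapshot(self, snapshot):
        pass  # snapshot released once the backup completes

class BackupHost:
    """Stand-in for the dedicated backup server."""
    def mount(self, snapshot):
        return f"/mnt/{snapshot}"
    def backup(self, path):
        return f"backed-up:{path}"

def off_host_backup(array, backup_host, volume):
    snap = array.create_snapshot(volume)        # point-in-time copy on the array
    try:
        mount_point = backup_host.mount(snap)   # presented to the backup host
        return backup_host.backup(mount_point)  # production host stays unloaded
    finally:
        array.delete_snapshot(snap)             # release the snapshot afterwards
```

The `try`/`finally` mirrors the operational requirement that the snapshot be released even if the backup fails.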
Microsoft Hyper-V
Note: Hewlett Packard Enterprise does not test or support Hyper-V VMs with SAN or direct-attached tape drives. The Hyper-V host or a backup server can be used to manage such devices.
• The Volume Shadow Copy Service (VSS) Hyper-V writer can be used to quiesce Windows VMs and create a snapshot on the Hyper-V host volume. VMs that cannot be quiesced can be placed in the Saved state before snapshot creation. The snapshots are then used for image or file backup of the VMs. If a VM was placed in the Saved state, Hyper-V returns the VM to its original state. Review your data protection and archiving software documentation for details.
• VMs can also be set up for LAN backup, the same as a regular client. Refer to your data protection and archiving software documentation for details.
• VMs can also be set up as backup servers with locally mapped NAS shares or iSCSI backup devices. Refer to your data protection and archiving software documentation for VM support.
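The snapshot decision in the first bullet above, quiesce via the VSS Hyper-V writer when the guest supports it, otherwise place the VM in the Saved state first and return it to its original state afterward, can be sketched as follows. The class and function names are hypothetical models for illustration, not the Hyper-V or VSS API.

```python
from dataclasses import dataclass

@dataclass
class GuestVM:
    """Minimal model of a Hyper-V guest (illustrative, not the Hyper-V API)."""
    name: str
    can_quiesce: bool          # guest integration services support VSS quiescing
    saved_state: bool = False

def prepare_for_snapshot(vm: GuestVM) -> str:
    """Pick the snapshot method per the flow above: VSS quiesce when the guest
    supports it, otherwise place the VM in the Saved state first."""
    if vm.can_quiesce:
        return "vss-quiesce"   # VSS Hyper-V writer quiesces the running guest
    vm.saved_state = True      # VM placed in Saved state before the snapshot
    return "saved-state"

def after_snapshot(vm: GuestVM) -> None:
    """Hyper-V returns a saved VM to its original state after the snapshot."""
    vm.saved_state = False
```

A guest with working integration services takes the `"vss-quiesce"` path; a legacy guest is saved, snapshotted, then resumed.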
Installing backup software and patches After all components on the SAN are logged in and configured, the system is ready for the installation of any supported backup software. Refer to the installation guide for your particular software package, or contact the vendor for detailed installation procedures and requirements. After installing the backup software, check with the software vendor for the latest updates and patches. If any updates or patches exist for your backup software, install them now.
Learn more at hpe.com/storage/StoreEver
Sign up for updates
Rate this document
© Copyright 2015 Hewlett Packard Enterprise Development LP. The information contained herein is subject to change without notice. The only warranties for Hewlett Packard Enterprise products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. Hewlett Packard Enterprise shall not be liable for technical or editorial errors or omissions contained herein.
Citrix is a registered trademark of Citrix Systems, Inc. and/or one or more of its subsidiaries, and may be registered in the United States Patent and Trademark Office and in other countries. Linux is the registered trademark of Linus Torvalds in the U.S. and other countries. McAfee is a trademark or registered trademark of McAfee, Inc. in the United States and other countries. Microsoft, Windows, and Windows Server are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries. Oracle is a registered trademark of Oracle and/or its affiliates. Red Hat is a registered trademark of Red Hat, Inc. in the United States and other countries. SAP HANA is the trademark or registered trademark of SAP SE in Germany and in several other countries. VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions.
4AA5-7983ENW, December 2015