Hints and Tips for implementing DS8000 in an IBM i environment
Version 4: replaces all previous versions
Alison Pate and Lamar Reavis, IBM Advanced Technical Skills
pate@us.ibm.com, reavis@us.ibm.com
Copyright IBM Corporation, September 29th 2011
http://www-03.ibm.com/support/techdocs/atsmastr.nsf  Document # TD103095

IBM i External Storage Overview

More than ever before, we have a choice of storage solutions for IBM i.

We have come a long way since the introduction of IBM External Storage on IBM i, from models such as the ESS F20 and ESS 800 to the DS8800 and beyond.

Details of supported configurations and software levels are provided by the System Storage Interoperation Center: http://www-03.ibm.com/systems/support/storage/config/ssic/displayesssearchwithoutjs.wss?start_over=yes

IBM i External Storage Re-cap

IBM i and DS8000, 2001 - 2011

Investment by IBM in the exploitation of DS8000 has continued from 2001 to the present day. Some of the enhancements introduced over the past 10 years are listed below.


[Figure: IBM i external storage attachment options. A Power System attaches natively (IBM i direct) to the DS8000, and via VIOS (virtual) to any supported storage, including DS3400, DS4000, DS5000, XIV, and Storwize V7000 and SVC; attachment options include NPIV (VIOS) and iSCSI (VIOS). Network archive options include nSeries, Information Archive (IA), and the 3996 Optical Server (no longer IBM marketed).]


- DS8800 support
- PowerHA-ACS integration with TPC-R
- IBM i-based hot data ASP balance and DB2 Media Placement exploitation of DS8000 SSD drives
- Easy Tier support
- PowerHA NPIV support for DS8000
- Full support of VIOS and PowerHA
- PowerHA + DS8000 copy services end-to-end integration
- Common Smart-IOA fiber for disk and tape
- New 4Gb/8Gb Smart fiber IO Adapter (IOPless) in 6.1 (++ performance)
- Tagged Command Queuing and Header Strip Merge (+++ performance)
- Common one-stop POWER/DS8000 support from Supportline experts
- 4Gb fiber
- Advanced 'DS8000 on IBM i' education series
- SAN on iSeries Redbook - 3
- BRMS native support for FlashCopy environment
- Boot from SAN (2847 IOA)
- DSCLI - DS Command Line Interface - provides IBM i control
- Additional i5/OS LUN sizes for increased flexibility
- SAN on iSeries Redbook - 2
- i5/OS Multipath fiber
- 2Gb fiber
- iSeries Copy Services for IBM DS8000 (aka the Toolkit)
- Disk Magic and IBM i Performance Tools coordination
- Enhanced ESS cache algorithms for iSeries (benefits all OS's)
- SAN on iSeries Redbook - 1
- Basic 1Gb fiber connectivity

IBM i Storage Management

Many computer systems require you to take responsibility for how information is stored on and retrieved from the disk units, along with providing the management environment to balance disk utilization, enable disk protection, and maintain a balanced data spread for optimum performance.

When you create a new file in a UNIX system, you must tell the system where to put the file and how big to make it. You must balance files across different disk units to provide good system performance. If you discover later that a file needs to be larger, you need to copy it to a location on disk that has enough space for the new, larger file. You may need to move files between disk units to maintain system performance.


The IBM i server is different in that it takes responsibility for managing the information in auxiliary storage pools (also called disk pools or ASPs).

When you create a file, you estimate how many records it should have. You do not assign it to a storage location; instead, the system places the file in the location that ensures the best performance. In fact, it normally spreads the data in the file across multiple disk units. When you add more records to the file, the system automatically assigns additional space on one or more disk units. Therefore, it makes sense to use disk copy functions to operate on either the entire disk space or the IASP.

IBM i uses a single-level storage, object-oriented architecture. It sees all disk space and the main memory as one storage area, and uses the same set of virtual addresses to cover both main memory and disk space. Paging of the objects in this virtual address space is performed in 4 KB pages. However, data is usually blocked and transferred to storage devices in blocks bigger than 4 KB. Blocking of transferred data is based on many factors, for example, expert cache usage.

DS8000 offers a comprehensive storage solution for IBM i, as long as the guidelines are followed.

Virtual IO Server (VIOS) Support

A new way of connecting the DS8000 family of products is via VIOS. The Virtual IO Server is part of the IBM PowerVM editions hardware feature on IBM Power Systems. The Virtual IO Server technology facilitates the consolidation of


network and disk IO resources and minimizes the number of required physical adapters in the IBM Power Systems server. It is a special-purpose partition that provides virtual IO resources to its client partitions. The Virtual IO Server actually owns the physical resources that are shared with clients. A physical adapter assigned to the VIOS partition can be used by one or more other partitions.

The Virtual IO Server can provide virtualized storage devices, storage adapters, and network adapters to client partitions running an AIX, IBM i, or Linux operating environment. The core IO virtualization capabilities of the Virtual IO Server are shown below:
- Virtual SCSI
- Virtual Fibre Channel using NPIV (N_Port ID Virtualization)
- Virtual Ethernet bridge using Shared Ethernet Adapter (SEA)

The storage virtualization capabilities provided by PowerVM and the Virtual IO Server are supported by the DS8000 series, using DS8000 LUNs as VSCSI backing devices in the Virtual IO Server. It is also possible to attach DS8000 LUNs directly to the client LPARs using virtual Fibre Channel adapters via NPIV.

Further information on DS8000 host attachment is found in the following Redbook: http://www.redbooks.ibm.com/redpieces/pdfs/sg248887.pdf

For more information on implementing VIOS with DS8000, see this Redbook: http://www-03.ibm.com/systems/resources/systems_i_Virtualization_Open_Storage.pdf

Sizing for performance

It's important to size a storage subsystem based on IO activity, rather than capacity requirements alone. This is particularly true of an IBM i environment because of its sensitivity to IO performance. IBM has excellent tools for modeling the expected


performance of your workload and configuration. We provide some guidelines and general words of wisdom in this paper; however, these provide a starting point only for sizing with the appropriate tools.

It is equally important to ensure that the sizing requirements for your SAN configuration also take into account the additional resources required when enabling advanced Copy Services functions such as Point-in-Time (PiT) Copy (also known as FlashCopy) or PPRC (Global Mirror and Metro Mirror), particularly if you are planning to enable Metro Mirror.

A bandwidth sizing should be conducted for Global Mirror and Metro Mirror.

You will need to collect IBM i performance data. Generally, you will collect a week's worth of performance data for each system/LPAR and send the resulting reports.

Each set of reports should include print files for the following:
- System Report - Disk Utilization (Required)
- Component Report - Disk Activity (Required)
- Resource Interval Report - Disk Utilization Detail (Required)
- System Report - Storage Pool Utilization (Required)

Send the report print files as indicated below (send reports in txt file format). If you are collecting from more than one IBM i system or LPAR, the reports should cover the same time period for each system/LPAR if possible.

DS8000 HBA

DS8800 Host Attachment

The DS8800 supports 2 basic types of host connection features: Short Wave FCP/FICON and Long Wave FCP/FICON. The FCP/FICON host attachment features now come packaged with two options: a 4-port 8 Gb per second card or an 8-port 8 Gb per second card. With either the 8-port or the 4-port card, the connector type is the same: a Lucent Connector (LC). Each port of the FCP/FICON host adapter can be configured as FCP or FICON as needed (but a single port cannot operate as both FCP and FICON simultaneously). All DS8800 host attachment features are designed to connect into a PCI-E slot in the provided IO drawers.

FCP/FICON feature codes are:
- 3153 - 8 Gb per second 4-port SW FCP/FICON PCIe Adapter
- 3157 - 8 Gb per second 8-port SW FCP/FICON PCIe Adapter
- 3253 - 8 Gb per second 4-port LW FCP/FICON PCIe Adapter
- 3257 - 8 Gb per second 8-port LW FCP/FICON PCIe Adapter


For maximum performance you should consider using the 4-port cards; for maximum connectivity you should use the 8-port cards.

DS8700 Host Attachment

The DS8700 supports 2 basic types of host connection features: Short Wave FCP/FICON and Long Wave FCP/FICON. The FCP/FICON host attachment features come packaged as a four-port 4Gb Lucent Connector (LC) card adapter. Each port of the FCP/FICON host adapter can be configured as FCP or FICON as needed (but a single port cannot operate as both FCP and FICON simultaneously). All DS8700 host attachment features are designed to connect into a PCI-E slot in the provided IO drawers.

FCP/FICON feature codes are:
- 3143 - 4-port shortwave 4 Gb FCP/FICON
- 3243 - 4-port longwave 4 Gb FCP/FICON
- 3245 - 4-port longwave 4 Gb FCP/FICON (10 km)
- 3153 - 4-port shortwave 8 Gb FCP/FICON (maximum of 8 per base frame and 8 per first expansion frame)
- 3253 - 4-port longwave 8 Gb FCP/FICON (10 km) (maximum of 8 per base frame and 8 per first expansion frame)

Note: The 8 Gb/second FCP/FICON cards require installation in specific slots in the IO enclosures and therefore are limited to a maximum of 8 features in the base frame and an additional 8 features in the first expansion frame.

DS8800 and DS8700 Host Attachment: Host Port and Installation Sequence Guide and Best Practices

This guide is updated from time to time. The latest version is here:

http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/TD105671

Best practices guidelines

- Isolate host connections from remote copy connections (MM, GM, GC, and MGM) on a host adapter basis.

- Isolate zSeries and other host connections from IBM i host connections on a host port basis.

- Always have symmetric pathing by connection type (i.e., use the same number of paths on all host adapters used by each connection type).

- Size the number of host adapters needed based on expected aggregate maximum bandwidth and maximum IOPS (use Disk Magic or other common sizing methods based on actual or expected workload).

- Sharing different connection types within an IO enclosure is encouraged.


- When possible, isolate asynchronous from synchronous copy connections on a host adapter basis.

- When utilizing multipathing, try to zone ports from different IO enclosures to provide redundancy and balance (i.e., include a port from a host adapter in enclosure 0 and one in enclosure 1).

DS8300 and DS6000 Host Attachment

Contact IBM if you need this information

DS8000 Solid State Drives (SSD)

Perhaps one of the most exciting innovations to happen to enterprise storage is SSD. We have just begun to explore the promising future of this technology. Solid-state storage means using a memory-type device for mass storage, rather than spinning disk or tape. First-to-market devices are the shape of standard hard disks, so they plug easily into existing disk systems.

IBM is making solid-state storage affordable with innovative architectures, system and application integration, and management tools that enable effective use of solid-state storage. Solid-state technologies will continue to evolve, and IBM researchers have been making significant breakthroughs. IBM will continue to bring the best implementations to our customers as innovation allows us to bring the full value of this technology to market.

Solid-state storage technology can have the following benefits:
- Significantly improved performance for hard-to-tune, IO-bound applications; no code changes required
- Reduced floor space
- Can be filled near 100% without performance degradation
- Faster IOPS
- Faster access times
- Reduced energy use

DS8000 offers IBM i SSD integration choices: you can either exploit the tools within the IBM i OS, or use the DS8000 Easy Tier function, which is independent of the server.

IBM i SSD tools and automation (for internal and DS8000 SSDs):
- Manually:
  - Create an all-SSD User ASP or Independent ASP
  - Manually place data onto the User ASP or IASP
- DB2 Media Preference:
  - The user controls what media type database files should be stored on
  - DB files known to be IO performance critical can explicitly be placed on high-performing SSDs
  - Dynamic changes to media preference are supported, which enables dynamic data movement
- ASP Balancer:
  - Based on read IO count statistics for each 1 MB extent
  - Migrates "hot" extents from HDDs to SSDs and "cold" extents from SSDs to HDDs
- UDFS Media Preference (new with IBM i 7.1):
  - New 'Unit' parameter on the CRTUDFS command

DS8000 Easy Tier (agnostic to IBM i and DB2):
- IBM System Storage Easy Tier works with IBM i: DS8000 software locates and migrates hot data onto DS8000 SSDs (any IBM i version)
- Advantages of using Easy Tier Automatic Mode:
  - Designed to be easy: the user is not required to make a lot of decisions or go through an extensive implementation process
  - Efficient use of SSD capacity: Easy Tier moves 1-gigabyte data extents between storage tiers
  - Intelligence: Easy Tier learns about the workload over a period of time (24 hours). As workload patterns change, Easy Tier finds any new highly active ("hot") extents and exchanges them with extents residing on SSDs that may have "cooled off"
  - Negligible performance impact: Easy Tier moves data gradually to avoid contention with host IO activity. The overhead associated with Easy Tier management is nearly undetectable, and there is no need for storage administrators to worry about scheduling when migrations occur

There are benefits to implementing Easy Tier even if you plan to manage SSDs from the IBM i OS. These benefits are documented in this white paper:

http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/FLASH10755

DS8000 Logical Configuration

The following chart shows the logical configuration constructs for the DS8000


The numbering of the ranks depends on the order in which the ranks were created and, in turn, on the order in which the RAID arrays were created on the array sites. The GUI interface for configuring the DS8000 can automatically create RAID arrays and ranks in the same step, and balances the creation of RAID arrays across DA pairs. We recommend that if you use the DSCLI to configure the DS8000, you also create RAID ranks on the array sites in a consistent manner; this creates a configuration that is more easily managed when you come to create the extent pools and balance the extent pools between the servers and the DA pairs.

The serial number reported by the DS6000 or DS8000 to the IBM i contains the LUN number. For example, the serial number of disk unit DD019 is 30-1001000, which is LUN 01 in LSS 10. In the DS6000 or DS8000, this is reported as LUN 1001.
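The decoding described above can be sketched as follows. This is an illustrative helper, not an IBM i or DS CLI function; the serial-number layout ('30-LLNN000', where LL is the LSS and NN the LUN in hex digits) is assumed from the single example given in the text.

```python
def decode_ibmi_serial(serial: str):
    """Decode the LSS and LUN from an IBM i disk unit serial number
    for a DS8000 LUN (layout assumed from the example in the text:
    '30-LLNN000', LL = LSS, NN = LUN within the LSS)."""
    digits = serial.split("-")[1]          # e.g. "1001000"
    lss, lun = digits[0:2], digits[2:4]
    return {"lss": lss, "lun": lun, "ds8000_lun_id": lss + lun}

# Disk unit DD019 from the example: serial 30-1001000
info = decode_ibmi_serial("30-1001000")
# → LSS 10, LUN 01, reported on the DS8000 as LUN 1001
```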


RAID Arrays

The DS8000 allows the choice of RAID 5, RAID 10, and RAID 6. RAID 6 is supported on DS8000 at LIC levels R4 and above, with DA feature codes 3041, 3051, and 3061.

Disk Magic allows you to model different RAID options to find the configuration that best fits your performance and availability requirements. If you are considering a configuration of 'short-stroked' RAID 5 arrays (under-using the capacity of a RAID 5 array for performance reasons), we recommend that you consider RAID 10. The benefit in this case is that there is no requirement to 'fence' spare capacity to maintain performance.

Extent Pools

The Extent Pool construct provides flexibility in data placement as well as ease of management. To optimize performance, it is important to balance workload activity between Extent Pools assigned to server0 and Extent Pools assigned to server1. Typically this means assigning an equal number of ranks to the Extent Pools on each server, and an equal amount of workload activity to all ranks.

We recommend assigning even-numbered ranks to even-numbered extent pools; this ensures balance between server 0 and server 1. This requires that you configured the ranks in order on the array sites: array site 1 becomes array 1 and rank 1, and so on.
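The balancing rule above can be sketched as a few lines of Python. This is illustrative only (not a DS8000 management API); the pool names are made up, and the point is simply that parity-based assignment leaves each server with the same number of ranks.

```python
def assign_ranks_to_pools(num_ranks):
    """Sketch of the recommendation above: even-numbered ranks go to
    an even-numbered extent pool (server 0), odd-numbered ranks to an
    odd-numbered pool (server 1), keeping the two servers balanced."""
    pools = {"P0 (server 0)": [], "P1 (server 1)": []}
    for rank in range(1, num_ranks + 1):
        target = "P0 (server 0)" if rank % 2 == 0 else "P1 (server 1)"
        pools[target].append(f"R{rank}")
    return pools

layout = assign_ranks_to_pools(8)
# 8 ranks split 4/4 between the even and odd extent pools
```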

The use of multi-rank extent pools allows you to define LUNs larger than the size of a single rank At DS8000 licensed internal code levels prior to Licensed Internal Code


(LIC) Release 3, a LUN will not be 'striped' across all the ranks in the extent pool; the only time a LUN will span multiple ranks is when it does not fit in the original rank. Therefore, a LUN will usually use no more than 6 or 7 disk arms. When allocating multiple LUNs into a multi-rank extent pool, each LUN is allocated on the rank with the most available free space; this results in a 'round robin' style of allocation and will place LUNs onto the ranks in a roughly even fashion, assuming that the LUNs and ranks are the same size.
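The pre-Release 3 allocation behaviour described above can be sketched as follows. This is an illustrative model, not actual DS8000 microcode; it shows why "most free space" degenerates to round-robin when LUNs and ranks are equally sized.

```python
def allocate_luns(rank_free_gb, lun_sizes_gb):
    """Sketch of the allocation behaviour described above: each LUN
    is placed whole on the rank with the most available free space.
    With equal LUNs and equal ranks this alternates between ranks."""
    placement = {}
    free = dict(rank_free_gb)
    for i, size in enumerate(lun_sizes_gb):
        rank = max(free, key=free.get)   # rank with most free space
        free[rank] -= size
        placement[f"LUN{i}"] = rank
    return placement

# Four equal LUNs over two equal ranks alternate between the ranks:
p = allocate_luns({"R1": 520, "R2": 520}, [70, 70, 70, 70])
```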

DS8000 Licensed Internal Code (LIC) Release 3 introduced a new allocation algorithm: Storage Pool Striping (SPS). This allows finer-granularity striping across all the ranks in an extent pool and provides substantial performance benefits for some workloads.

For IBM i attached subsystems, we recommend using multi-rank extent pools in combination with storage pool striping (rotate extents). Dedicating ranks or extent pools to a single workload will provide more predictable performance, but may cost more in terms of disk capacity to provide the desired level of performance.

Defining multiple ranks in an Extent Pool also provides efficiency in usable space. You can use Capacity Magic to estimate the usable capacity on the ranks for your chosen LUN size. For IBM i workloads, if you have a requirement to isolate workloads, you will need to define two Extent Pools (one for each server) for each workload.

Whichever configuration option you prefer, discuss it with your IBM representative or Business Partner, as our performance modeling tool, Disk Magic, needs to accurately reflect the configuration that you are planning. If you model a solution where all the disks are shared by all the workloads, then decide to isolate workloads, you may need more disks to achieve the same performance levels.

Data layout

Selecting an appropriate data layout strategy depends on your primary objectives.

Spreading workloads across all components maximizes the utilization of the hardware. This includes spreading workloads across all the available host adapters and ranks. However, when sharing resources it is always possible that performance problems may arise due to contention on those resources.

To protect critical workloads, you should isolate them, minimizing the chance that non-critical workloads can impact their performance.

The greater the granularity of a resource, the more it can be shared. For example, there is only one cache per processor complex, so its use must be shared, although DS8000 intelligent cache management prevents one workload from dominating the cache. In


contrast, there are frequently hundreds of DDMs, so workloads can easily be isolated on different DDMs.

To spread a workload across ranks, you need to balance its IOs across all the available ranks. SPS will achieve this when you use multi-rank extent pools.

Isolation of workloads is most easily accomplished where each ASP or LPAR has its own extent pool pair. This ensures that you can place data where you intend. IO activity should be balanced between the two servers or controllers on the DS8000. This is achieved by balancing between odd and even extent pools, and by making sure that the number of ranks is balanced between odd and even extent pools.

Make sure that you isolate critical workloads. We strongly recommend placing only IBM i LUNs on any rank (rather than mixing with non-IBM i LUNs). This is for performance management reasons; it will not in itself provide improved performance. If you mix production and development workloads on ranks, make sure that the customer understands which actions may impact production performance, for example adding LUNs to ASPs.

iASPs

When designing a DS8000 layout for an iASP configuration, you have the option to model the iASP LUNs on isolated Extent Pools. You may achieve more cost-effective performance by putting iASP and SYSBAS LUNs onto the same shared ranks and extent pools, for example when the SYSBAS activity would drive a requirement for more disks to maintain performance if the ranks were dedicated to SYSBAS. Remember when using Disk Magic to model your iASP configuration that you may need smaller LUNs for the SYSBAS requirement.

LUN Size

LUNs must be defined in specific sizes that emulate IBM i devices. The size of the LUNs defined is typically related to the wait-time component of the response time: if there are insufficient LUNs, wait time typically increases. The sizing process determines the correct number of LUNs required to address the required capacity while meeting performance objectives.

The number of LUNs drives the requirement for more FC adapters on the IBM i due to the addressing restrictions of IBM i. Remember that each path to a LUN counts towards the maximum addressable LUNs on each IBM i IOA. For example, if you have 64 LUNs and would like 2 paths to each LUN, this will require 4 IOAs for addressability on releases prior to IBM i 6.1.
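The addressing arithmetic above can be made explicit. A small sketch, assuming the pre-IBM i 6.1 limit of 32 addressable LUNs per IOA stated in the text (each path to a LUN consumes one address):

```python
import math

def ioas_required(num_luns, paths_per_lun, luns_per_ioa=32):
    """Worked example of the IOA addressing arithmetic above: every
    path to every LUN consumes one addressable LUN slot on an IOA
    (32 per IOA on releases prior to IBM i 6.1)."""
    total_paths = num_luns * paths_per_lun
    return math.ceil(total_paths / luns_per_ioa)

# The example from the text: 64 LUNs, 2 paths each, pre-6.1 limit of 32
n = ioas_required(64, 2)   # → 4 IOAs
```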

Disk Magic can be used to model the number of LUNs required. Disk Magic does not always accurately predict the effective capacity of the ranks, depending on the DDM size


selected and the number of spares assigned. The IBM tool Capacity Magic can be used to verify capacity and space utilization plans.

IBM i 6.1 and higher enables the use of larger LUNs while maintaining performance. You can typically start modeling based on a 70 GB LUN size. A smaller number of larger LUNs will reduce the number of IO ports required on both the IBM i and the DS8000. Remember that in an iASP environment you may exploit larger LUNs in the iASPs, but SYSBAS may require more, smaller LUNs to maintain performance.

Multipath

Multipath provides greater resiliency for SAN-attached storage. In combination with RAID 5, RAID 6, or RAID 10 protection, DS8000 multipath provides protection of the data paths and the data itself without requiring additional LUNs. However, additional IO adapters and changes to the SAN fabric configuration may be required.

The IBM i supports up to 8 paths to each LUN. In addition to the availability benefits, lab performance testing has shown that 2 or 3 paths provide performance improvements when compared to a single path. Typically, 2 paths to a LUN is the ideal balance of price and performance. The Disk Magic tool supports multipathing over 2 paths.

You might want to consider more than 2 paths for workloads where there is high wait time, or where high IO rates are expected to the LUNs, for example SSD-backed LUNs.

It is important to plan for multipath so that the two or more paths to the same set of LUNs use different elements of connection, such as DS6000 and DS8000 host adapters, SAN switches, IBM i IO towers, and HSL loops. Good planning for multipath includes:

- Connections to the same set of LUNs via different DS host cards in different IO enclosures on the DS8000

- Connections to the same set of LUNs via different SAN switches

- The IOP/IOA adapter pairs in the IBM i IO towers which connect to the same set of LUNs should ideally be in different expansion towers, located on different HSL or 12X loops wherever possible

When an IBM i system IPLs, it discovers all paths to the disk. The first path discovered will be the preferred path for IO. If multiple IBM i LPARs share the same DS8000 or DS6000 host adapters, each system may discover the same initial path to the disk. To avoid contention on SAN switch and HA ports, it is essential that you implement LUN masking in the SAN: specify a different range of ports and HAs for each LPAR to ensure that activity is balanced across all available paths. You can do LUN masking either in the DS8000, using the volume groups construct, or by explicitly mapping IBM i IOAs to DS8000 HAs in the SAN switch.
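One simple way to plan the balancing described above is to deal each LPAR's IOAs across the available DS8000 HA ports in turn. The sketch below is illustrative only: the IOA and port names are made up for the example, and real zoning is done in the SAN switch or via DS8000 volume groups, not in Python.

```python
from itertools import cycle

def zone_ioas_to_ha_ports(ioas, ha_ports):
    """Illustrative planning sketch: deal IOAs round-robin across the
    DS8000 HA ports so each LPAR's paths are spread over the ports
    rather than all landing on the same first-discovered path."""
    ports = cycle(ha_ports)
    return {ioa: next(ports) for ioa in ioas}

# Hypothetical names: two LPARs with two IOAs each, two DS8000 HA ports
zoning = zone_ioas_to_ha_ports(
    ["LPAR1-IOA1", "LPAR1-IOA2", "LPAR2-IOA1", "LPAR2-IOA2"],
    ["I0000", "I0130"],
)
# Each LPAR's two IOAs end up zoned to two different HA ports.
```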


When assigning LUNs to HAs, the recommendation is to isolate IBM i LUNs on their own HA where possible. We recommend that you spread activity across the available HAs; since there is typically little skew in the workload, this is usually not difficult. We do not allow multiple IBM i HBAs to see multiple HAs, because in this case all IBM i HBAs could establish paths through the same HA, which would result in unbalanced IO traffic between the DS8000 host adapters.

Host Attachments

IBM i IO adapters should be defined as FC-AL connections for direct-attached connections (without a switch). For all switched connections, use SCSI-FCP, which is the default for DS8000.

In a pre-IBM i 6.1 multipath environment, it is unlikely that any single host attachment is the bottleneck in an IBM i DS8000 configuration. It is common to combine multiple IBM i FC attachments onto a single DS8000 FC attachment through a SAN. In this case, it is important to ensure that you do not over-configure the DS8000 attachment. Disk Magic can be used to model the DS8000 attachment. The normal range is to combine 2-4 IBM i IOAs onto a single DS8000 host attachment.

IBM i fiber adapters should be placed in accordance with the card placement guidelines that can be found in the Redpaper "PCI, PCI-X, PCI-X DDR, and PCIe Placement Rules for IBM i Models", available for download from http://www.redbooks.ibm.com/abstracts/redp4011.html?Open. A previous and obsolete version of this paper is available for releases of code prior to V5R2, "PCI Card Placement Rules for the IBM eServer iSeries Server: OS/400 Version 5 Release 2, September 2003", located on the web at http://www.redbooks.ibm.com/abstracts/redp3638.html?Open.

You are encouraged to consider these additional guidelines:

- For 0588, 5088, 5094, 5096, 5294, and 5296 style IO towers, it is recommended to install no more than 1 fibre adapter per multi-adapter bridge (MAB), whether the attachment is for disk or tape. If the MAB is dedicated to disk attachment, then 2 adapters may be considered.

- Balance the fibre adapters evenly across the HSL and 12X loops. Always place both the IOP and IOA in 64-bit card slots.

- If you are not using multipath, you may find optimum performance is achieved by limiting the LUNs on each IBM i FC card to 20-22.

When spreading activity across the host attachments, you need to make sure that, in a multipath configuration, alternate paths are provided to each server of the DS8000.


Multipath connectivity is provided starting with i5/OS V5R3 and is recommended when connecting to the DS8000, for availability and concurrent maintenance.

Connections on the IBM i should be to the 2787 or 5760 cards (5760 cards require V5R3 or higher, and they require an 800, 810, 825, 870, 890, or i5 CPU). We recommend using the first two 2787 cards in each node in a tower, and ensuring that cards are balanced across all towers and HSL rings.

IOPless Host Attachments

IBM i 6.1, together with POWER6 and the new Smart IOAs, introduces the IOP-less IOAs for disk and tape. The Smart IOAs are:

- 5735 - 2-port 8Gb Fibre Smart IOA, PCIe
- 5749 - 2-port 4Gb Fibre Smart IOA, PCI-X DDR2, IBM i OS only
- 5774 - 2-port 4Gb Fibre Smart IOA, PCIe, IBM i OS, Linux, and AIX

These new cards provide significant performance advantages and do not require IOPs, thus saving costs and slots. The maximum number of supported addresses is increased from 32 to 64 for each port. Typically you would configure 2 paths to each LUN for availability.

IBM i 6.1 or higher with POWER6 or POWER7 and the IOP-less adapters can support up to 64 LUNs on each port; however, with the move to configuring larger LUNs, for most workloads you should limit the total LUNs on a card to 64 (32 on each port). For workloads with a low IO rate, you may be able to support more than 32 LUNs on each port.

For IBM i 6.1 or higher with POWER6 or POWER7 configurations and the new IOP-less IOAs, you should plan on a 1:1 ratio between the IBM i IOA ports and the DS8000 IO ports. For the highest-performance configurations, where host attachments are stressed (not likely in an IBM i production workload), you should plan to use only 2 ports of the DS8000 4Gb 4-port HA card. The DS8700 and DS8800 have both 4-port and 8-port 8Gb host adapter cards. For maximum performance, select the 4-port card; for increased connectivity, select the 8-port card. The increased performance capabilities of these new cards can be modeled using Disk Magic.

Card placement guidelines are as follows

• 6 Smart IOAs per 12X loop
• 4 Smart IOAs per HSL-2 loop

Copyright IBM Corporation, September 29th 2011. http://www-03.ibm.com/support/techdocs/atsmastr.nsf Document # TD103095


The IOP-less adapters support DS8000 and DS6000 (not ESS). The same adapters may be used for disk and tape, as well as Boot from SAN; however, it is strongly recommended not to share IOAs between disk and tape.

Maximum resource names in IBM i

The System i Handbook and the System Builder document the maximum quantities of internal disk units and external LUNs. For some of these maximum limits, the figure isn't really the maximum quantity of LUNs as we normally think of them, but the maximum quantity that will be seen at the hardware level. For example:

Fibre card 1 will have the 1st path to 32 LUNs.
Fibre card 2 will have the 2nd path to the same 32 LUNs.
IBM i microcode and the operating system will see this as 64 resource names, and this count of 64 is what you use when trying to determine whether you're approaching the maximum supported number for the i5 model.
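The resource-name arithmetic above can be sketched as follows (an illustrative helper, not an IBM i interface):

```python
# Illustrative sketch: how multipath LUNs consume IBM i resource names.
# Each path to a LUN is reported as its own disk-unit resource, so the
# count that matters for the per-adapter and per-system maximums is
# LUNs multiplied by paths.

def resource_names(luns: int, paths_per_lun: int) -> int:
    """Resource names seen by IBM i for a multipath configuration."""
    return luns * paths_per_lun

# The example above: two Fibre cards, each with one path to the same
# 32 LUNs, appear as 64 resource names.
print(resource_names(32, 2))  # -> 64
```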

Logical Configuration

When defining IBM i LUNs, you can define them as protected unless you want to use i5/OS mirroring, in which case you need to ensure that you define the LUNs as unprotected when you create them. If you are using Boot from SAN at a release prior to i 6.1, you will have to use mirroring to protect the Load Source; in this case you must define the Load Source LUNs as unprotected.

If you are planning to use Copy Services, either for high availability or just for migrations, it is important to note that the source and target in a Copy Services pair must have the same attributes. Once the LUN is defined with the protection attribute, this cannot be changed without deleting the LUN and re-defining it. Deleting a LUN deletes all the data on the LUN.

A host adapter port is identified to the DS8000 through its World Wide Port Name (WWPN). A set of host ports can be associated into a port group and managed together. This port group is referred to as a host attachment in the GUI and as a host connection in the DS CLI. A host attachment can be associated with a volume group to define which LUNs will be assigned to that host adapter.

A volume group is a group of logical volumes (LUNs) that are attached to a host adapter. The type of volume group used with open system hosts determines how the logical volume number is converted to the host-addressable LUN_ID on the Fibre Channel SCSI interface.
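As an illustration, the DSCLI flow for an IBM i host might look like the sketch below. The storage image ID, volume range, WWPN and object names are placeholders; verify the exact syntax against the DSCLI reference for your code level.

```shell
# Sketch only: placeholder device ID, volume IDs, WWPN and names.
# Create a volume group of the IBM i (OS/400 mask) type containing LUNs 1000-101F.
mkvolgrp -dev IBM.2107-75XXXXX -type os400mask -volume 1000-101F ibmi_vg_1

# Define a host connection for one IBM i IOA port and associate it with the
# volume group ID assigned above (V1 here). Specifying the iSeries host type
# sets the 520-byte block size and Report LUN address discovery.
mkhostconnect -dev IBM.2107-75XXXXX -wwname 10000000C9XXXXXX -hosttype iSeries -volgrp V1 ibmi_ioa_1
```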


When associating a host attachment to the volume group, the host attachment contains attributes that define the logical block size and the address discovery method used by that host adapter. These attributes must be consistent with the type of volume group assigned to that host attachment. i5/OS LUNs are accessed by the host in 520-byte blocks. Of these 520 bytes, 8 are used by the operating system, so each block contains 512 usable bytes, as in LUNs for other open systems. So for an i5/OS host attachment, 520 is the correct block size to define. The correct address discovery method is Report LUN. The correct block size and address discovery method for IBM i are generated by specifying IBM i attachments when creating volume groups and host attachments.
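The 520-byte block layout translates into usable capacity as in the following sketch (illustrative arithmetic only):

```python
# IBM i LUNs are accessed in 520-byte blocks: 8 bytes are used by the
# operating system, leaving 512 usable data bytes per block, as in LUNs
# for other open systems.

SECTOR_RAW = 520                      # bytes per block on the interface
SECTOR_OS = 8                         # bytes used by the operating system
SECTOR_DATA = SECTOR_RAW - SECTOR_OS  # 512 usable bytes per block

def usable_bytes(raw_bytes: int) -> int:
    """Usable data bytes in a region of whole 520-byte blocks."""
    return (raw_bytes // SECTOR_RAW) * SECTOR_DATA

print(SECTOR_DATA)             # -> 512
print(usable_bytes(520 * 10))  # ten blocks -> 5120 usable bytes
```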

The LUN ID on the DS8000 is a combination of the LSS number and an incremented LUN number.

Adding LUNs to ASP

Adding a LUN to an ASP generates I/O activity on the rank as the LUN is formatted. If there is production work sharing the same rank, you may see a performance impact. For this reason, it is recommended that you schedule adding LUNs to ASPs outside peak intervals.

Boot from SAN

IOP 2847 plus the appropriate Fibre IOA allows you to place an i5/OS load source on a Fibre Channel-attached ESS Model 800, DS6000 or DS8000. The LUNs are attached using features 2766, 2787 or 5760, and apart from the Load Source you can have another 31 LUNs on the same attachment. The Load Source device does not support multipath at V5R4, but other LUNs on the 2847 can be multipath devices. To provide redundancy for the Load Source, you can have two 2847s and use i5/OS mirroring (with the Load Source LUN defined as unprotected). Starting with IBM i 6.1, the LUN containing the load source may participate in a multipath configuration.

Software

It is essential that you have all up-to-date software levels installed. There are fixes that provide performance enhancements, correct performance reporting, and support for new functions. As always, call the support center before installation to verify that you are current with fixes for the hardware that you are installing. It is also important to maintain current software levels to make sure that you get the benefit from new fixes that are developed.

When updating storage subsystem LIC, it is also important to check whether any server software updates are required. Details of supported configurations and software levels are provided by the System Storage Interoperation Center: http://www-03.ibm.com/systems/support/storage/config/ssic/displayesssearchwithoutjs.wss?start_over=yes


Performance Monitoring

Once your storage subsystem is installed, it is essential that you continue to monitor its performance. IBM i Performance Tools reports provide information on I/O rates and on response times to the server. This allows you to track trends in increased workload and changes in response time. You should review these trends to ensure that your storage subsystem continues to meet your performance and capacity requirements. Make sure that you are current on fixes to ensure that Performance Tools reports are reporting your external storage correctly.

IBM i 7.1 adds a new category, EXTSTG, to IBM i Collection Services, which provides new performance metrics collected from the DS8000. This function requires DS8000 R4 or later firmware. Data can be presented in graphs using iDoctor today, and will be incorporated into Performance Data Investigator (PDI) in a future release.

If you have multiple servers attached to a storage subsystem, particularly if you have other platforms attached in addition to IBM i, it is essential that you have a performance tool that enables you to monitor performance from the storage subsystem perspective.

IBM TPC for Disk provides a comprehensive tool for managing the performance of DS8000s. You should collect data from all attached storage subsystems in 15-minute intervals. In the event of a performance problem, IBM will ask for this data; without it, resolution of any problem may be prolonged.

Copy Services Considerations

Copy Services is an optional feature of the IBM System Storage DS8000. It brings powerful data copying and mirroring technologies, previously available only for mainframe storage, to open systems environments.

The following Copy Services functions are available on the DS8000 and are fully supported on IBM i:
– Metro Mirror (previously known as synchronous PPRC)
– Global Mirror (previously known as asynchronous PPRC)
– FlashCopy, including Space Efficient FlashCopy (FlashCopy SE)


Customers may use Copy Services on the entire disk space or on individual IASPs. Metro Mirror and Global Mirror provide business continuity and disaster recovery, while FlashCopy helps to minimize the backup window on a production system.

Space Efficient FlashCopy is designed for temporary copies. Copy duration should generally not last longer than 24 hours unless the source and target volumes have little write activity. FlashCopy SE is optimized for use cases where a small percentage of the source volume is updated during the life of the relationship. If much more than 20% of the source is expected to change, there may be tradeoffs in terms of performance versus space efficiency, and standard FlashCopy may be a good alternative. Since a background copy would update the entire target, it would not make much sense and is not permitted with FlashCopy SE. Likewise, establishing a FlashCopy SE relationship just prior to running applications that make widespread changes to the source volumes (e.g. database reorganizations, formats, full volume restores from tape, etc.) is not advisable. For the latest recommendations on using FlashCopy SE, please refer to this document:
IBMers: http://w3-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/FLASH10617
Business Partners: http://partners.boulder.ibm.com/src/atsmastr.nsf/WebIndex/FLASH10617
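As a rough illustration of the 20% rule of thumb above, the sketch below estimates repository consumption for a FlashCopy SE relationship. The headroom factor is an assumption made for the example, not an IBM-published sizing formula.

```python
# Illustrative-only FlashCopy SE estimator: repository space consumed is
# roughly the fraction of the source updated during the relationship.
# The headroom factor is an assumed safety margin, not an IBM formula.

def se_repository_gb(source_gb: float, change_rate: float,
                     headroom: float = 1.5) -> float:
    """Rough repository estimate for a FlashCopy SE relationship."""
    return source_gb * change_rate * headroom

def prefer_standard_flashcopy(change_rate: float) -> bool:
    """Per the text, much more than ~20% change favors standard FlashCopy."""
    return change_rate > 0.20

print(prefer_standard_flashcopy(0.30))  # -> True
print(se_repository_gb(1000, 0.10))     # -> 150.0
```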

Consulting services are available from IBM STG Lab Services to assist in the planning and implementation of DS8000 Copy Services in an IBM i environment.


[Figure: DS8000 Copy Services functions]
– FlashCopy: for backups and snapshots (copy/no-copy options and Space Efficient FlashCopy)
– Metro Mirror (synchronous): for local availability
– Global Copy (Extended Distance): for data migration only
– Global Mirror (asynchronous, with Consistency Groups): for DR
Metro Mirror, Global Copy and Global Mirror are forms of Peer-to-Peer Remote Copy (continuous copy).


IBM STG Lab Services: http://www-03.ibm.com/systems/services/labservices/platforms/labservices_i.html

Further References

For further detailed information on implementing DS6000 and DS8000 in an IBM i environment, refer to the following Redbooks. These can be downloaded from www.redbooks.ibm.com:

IBM System Storage Copy Services on System i A Guide to Planning and Implementation (SG24-7103)

iSeries and IBM TotalStorage A guide to implementing external disk on IBM eServer i5 (SG24-7120)

High Availability on IBM i including PowerHA on i and 61 HA enhancements (SG24-7405)

Acknowledgements

Thanks to Nancy Roper and Jana Jamsek from the Advanced Technical Support Storage group, Sue Baker and Eric Hess from Advanced Technical Support Power Systems, and Selwyn Dickey and Tim Klubertanz from STG Lab Services for their input to and review of this document.


IBM i External Storage Re-cap

IBM i and DS8000: 2001–2011

Investment by IBM in the exploitation of the DS8000 with IBM i has continued since 2001 to the present day. Some of the enhancements introduced over the past 10 years are listed below.


[Figure: IBM i external storage attachment options on Power Systems. The DS8000 attaches natively or through VIOS; other supported storage (DS3400, DS4000, DS5000, XIV, Storwize V7000 and SVC) attaches through VIOS, including NPIV and iSCSI; nSeries, Information Archive and the 3996 Optical Server (no longer IBM marketed) provide network and archive options.]


– DS8800 support
– PowerHA-ACS integration with TPC-R
– IBM i-based hot data ASP balance and DB2 Media Placement exploitation of DS8000 SSD drives
– Easy Tier support
– PowerHA NPIV support for DS8000
– Full support of VIOS and PowerHA
– PowerHA + DS8000 copy services end-to-end integration
– Common Smart-IOA fiber for disk and tape
– New 4Gb/8Gb Smart fiber IO Adapter (IOP-less) 6.1 (++ performance)
– Tagged Command Queuing and Header Strip Merge (+++ performance)
– Common one-stop POWER/DS8000 support from Supportline experts
– 4Gb fiber
– Advanced 'DS8000 on IBM i' education series
– SAN on iSeries Redbook 3
– BRMS native support for FlashCopy environment
– Boot from SAN (2847 IOP)
– DSCLI (DS Command Line Interface) provides IBM i control
– Additional i5/OS LUN sizes for increased flexibility
– SAN on iSeries Redbook 2
– i5/OS multipath fiber
– 2Gb fiber
– iSeries Copy Services for IBM DS8000 (aka the Toolkit)
– Disk Magic and IBM i Performance Tools coordination
– Enhanced ESS cache algorithms for iSeries, benefiting all OSs
– SAN on iSeries Redbook 1
– Basic 1Gb fiber connectivity

IBM i Storage Management

Many computer systems require you to take responsibility for how information is stored on and retrieved from the disk units, along with providing the management environment to balance disk utilization, enable disk protection and maintain a balanced data spread for optimum performance.

When you create a new file in a UNIX system, you must tell the system where to put the file and how big to make it. You must balance files across different disk units to provide good system performance. If you discover later that a file needs to be larger, you need to copy it to a location on disk that has enough space for the new, larger file. You may need to move files between disk units to maintain system performance.


The IBM i server is different in that it takes responsibility for managing the information in auxiliary storage pools (also called disk pools or ASPs).

When you create a file, you estimate how many records it should have. You do not assign it to a storage location; instead, the system places the file in the location that ensures the best performance. In fact, it normally spreads the data in the file across multiple disk units. When you add more records to the file, the system automatically assigns additional space on one or more disk units. Therefore it makes sense to use disk copy functions to operate on either the entire disk space or the IASP.

IBM i uses a single-level storage, object-oriented architecture. It sees all disk space and the main memory as one storage area and uses the same set of virtual addresses to cover both main memory and disk space. Paging of the objects in this virtual address space is performed in 4 KB pages. However, data is usually blocked and transferred to storage devices in blocks larger than 4 KB. Blocking of transferred data is based on many factors, for example expert cache usage.

DS8000 offers a comprehensive storage solution for IBM i as long as the guidelines are followed

Virtual IO Server (VIOS) Support

A new way of connecting the DS8000 family of products is via VIOS. The Virtual I/O Server is part of the IBM PowerVM Editions hardware feature on IBM Power Systems. The Virtual I/O Server technology facilitates the consolidation of


network and disk I/O resources and minimizes the number of required physical adapters in the IBM Power Systems server. It is a special-purpose partition which provides virtual I/O resources to its client partitions. The Virtual I/O Server actually owns the physical resources that are shared with clients. A physical adapter assigned to the VIOS partition can be used by one or more other partitions.

The Virtual I/O Server can provide virtualized storage devices, storage adapters and network adapters to client partitions running an AIX, IBM i or Linux operating environment. The core I/O virtualization capabilities of the Virtual I/O Server are shown below:
– Virtual SCSI
– Virtual Fibre Channel using NPIV (N_Port ID Virtualization)
– Virtual Ethernet bridge using Shared Ethernet Adapter (SEA)

The storage virtualization capabilities of PowerVM and the Virtual I/O Server are supported by the DS8000 series, using DS8000 LUNs as VSCSI backing devices in the Virtual I/O Server. It is also possible to attach DS8000 LUNs directly to the client LPARs using virtual Fibre Channel adapters via NPIV.

Further information on DS8000 host attachment can be found in the following Redbook: http://www.redbooks.ibm.com/redpieces/pdfs/sg248887.pdf

For more information on implementing VIOS with DS8000, please see: http://www-03.ibm.com/systems/resources/systems_i_Virtualization_Open_Storage.pdf

Sizing for performance

It's important to size a storage subsystem based on I/O activity rather than capacity requirements alone. This is particularly true of an IBM i environment because of its sensitivity to I/O performance. IBM has excellent tools for modeling the expected


performance of your workload and configuration. We provide some guidelines and general words of wisdom in this paper; however, these provide a starting point only for sizing with the appropriate tools.

It is equally important to ensure that the sizing requirements for your SAN configuration also take into account the additional resources required when enabling advanced Copy Services functions such as Point-in-Time (PiT) Copy (also known as FlashCopy) or PPRC (Global Mirror and Metro Mirror), particularly if you are planning to enable Metro Mirror.

A Bandwidth Sizing should be conducted for Global Mirror and Metro Mirror

You will need to collect IBM i performance data. Generally you will collect a week's worth of performance data for each system/LPAR and send the resulting reports.

Each set of reports should include print files for the following:
– System Report - Disk Utilization (Required)
– Component Report - Disk Activity (Required)
– Resource Interval Report - Disk Utilization Detail (Required)
– System Report - Storage Pool Utilization (Required)

Send the report print files as indicated below (send reports in txt file format). If you are collecting from more than one IBM i system or LPAR, the reports need to be for the same time period for each system/LPAR if possible.

DS8000 HBA

DS8800 Host Attachment

The DS8800 supports 2 basic types of Host Connection features: Short Wave FCP/FICON and Long Wave FCP/FICON. The FCP/FICON host attachment features now come packaged with two options: a 4-port 8 Gb per second card or an 8-port 8 Gb per second card. With either card, the connector type is the same: a Lucent Connector (LC). Each port of the FCP/FICON host adapter can be configured as FCP or FICON as needed (but a single port cannot operate as both FCP and FICON simultaneously). All DS8800 Host Attachment features are designed to connect into a PCIe slot in the provided I/O drawers.

FCP/FICON feature codes are:
3153 - 8 Gb per second 4-port SW FCP/FICON PCIe adapter
3157 - 8 Gb per second 8-port SW FCP/FICON PCIe adapter
3253 - 8 Gb per second 4-port LW FCP/FICON PCIe adapter
3257 - 8 Gb per second 8-port LW FCP/FICON PCIe adapter


For maximum performance you should consider using the 4-port cards; for maximum connectivity you should use the 8-port cards.

DS8700 Host Attachment

The DS8700 supports 2 basic types of Host Connection features: Short Wave FCP/FICON and Long Wave FCP/FICON. The FCP/FICON host attachment features come packaged as a four-port 4Gb Lucent Connector (LC) card adapter. Each port of the FCP/FICON host adapter can be configured as FCP or FICON as needed (but a single port cannot operate as both FCP and FICON simultaneously). All DS8700 Host Attachment features are designed to connect into a PCIe slot in the provided I/O drawers.

FCP/FICON feature codes are:
3143 - 4-port shortwave 4 Gb FCP/FICON
3243 - 4-port longwave 4 Gb FCP/FICON
3245 - 4-port longwave 4 Gb FCP/FICON (10 km)
3153 - 4-port shortwave 8 Gb FCP/FICON (maximum of 8 per base frame and 8 per first expansion frame)
3253 - 4-port longwave 8 Gb FCP/FICON (10 km) (maximum of 8 per base frame and 8 per first expansion frame)

Note: The 8 Gb per second FCP/FICON cards require installation in specific slots in the I/O enclosures and are therefore limited to a maximum of 8 features in the base frame and an additional 8 features in the first expansion frame.

DS8800 and DS8700 Host Attachment Host Port and Installation Sequence Guide and best practices

This guide is updated from time to time. The latest version is here:

http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/TD105671

Best practices guidelines

Isolate host connections from remote copy connections (MM, GM, GC and MGM) on a host adapter basis.

Isolate zSeries and other host connections from IBM i host connections on a host port basis

Always have symmetric pathing by connection type (i.e., use the same number of paths on all host adapters used by each connection type).

Size the number of host adapters needed based on expected aggregate maximum bandwidth and maximum IOPS (use Disk Magic or other common sizing methods based on actual or expected workload).

Sharing different connection types within an IO enclosure is encouraged


When possible isolate asynchronous from synchronous copy connections on a host adapter basis

When utilizing multipathing, try to zone ports from different I/O enclosures to provide redundancy and balance (i.e., include a port from a host adapter in enclosure 0 and one in enclosure 1).

DS8300 and DS6000 Host Attachment

Contact IBM if you need this information

DS8000 Solid State Drives (SSD)

Perhaps one of the most exciting innovations to happen to enterprise storage is SSD. We have just begun to explore the promising future of this technology. Solid-state storage means using a memory-type device for mass storage, rather than spinning disk or tape. First-to-market devices take the shape of standard hard disks, so they plug easily into existing disk systems.

IBM is making solid-state storage affordable with innovative architectures, system and application integration, and management tools that enable effective use of solid-state storage. Solid-state technologies will continue to evolve, and IBM researchers have been making significant breakthroughs. IBM will continue to bring the best implementations to our customers as innovation allows us to bring the full value of this technology to market.

Solid-state storage technology can have the following benefits:
– Significantly improved performance for hard-to-tune, I/O-bound applications; no code changes required
– Reduced floor space
– Can be filled to near 100% without performance degradation
– Faster IOPS
– Faster access times
– Reduced energy use

DS8000 offers IBM i SSD integration choices: you can either exploit the tools within the IBM i OS or use the DS8000 Easy Tier function, which is independent of the server.

IBM i SSD tools and automation (for internal and DS8000 SSDs):
– Manually:
  • Create an all-SSD User ASP or Independent ASP
  • Manually place data onto the User ASP or IASP
– DB2 Media Preference:
  • User controls what media type database files should be stored on
  • DB files known to be I/O performance critical can explicitly be placed on high-performing SSDs
  • Dynamic changes to media preference are supported, which enables dynamic data movement
– ASP Balancer:
  • Based on read I/O count statistics for each 1 MB extent
  • Migrates "hot" extents from HDDs to SSDs and "cold" extents from SSDs to HDDs
– UDFS Media Preference (new with IBM i 7.1):
  • New 'Unit' parameter on the CRTUDFS command

DS8000 Easy Tier (agnostic to IBM i and DB2):
– IBM System Storage Easy Tier works with IBM i:
  • DS8000 software locates and migrates hot data onto DS8000 SSDs (any IBM i version)
– Advantages of using Easy Tier Automatic Mode:
  • Designed to be easy: the user is not required to make a lot of decisions or go through an extensive implementation process
  • Efficient use of SSD capacity: Easy Tier moves 1 GB data extents between storage tiers
  • Intelligence: Easy Tier learns about the workload over a period of time (24 hours); as workload patterns change, it finds any new highly active ("hot") extents and exchanges them with extents residing on SSDs that may have "cooled off"
  • Negligible performance impact: Easy Tier moves data gradually to avoid contention with host I/O activity; the overhead associated with Easy Tier management is nearly undetectable, and there is no need for storage administrators to worry about scheduling when migrations occur

There are benefits to implementing Easy Tier even if you plan to manage SSDs from the IBM i OS. These benefits are documented in this white paper:

http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/FLASH10755

DS8000 Logical Configuration

The following chart shows the logical configuration constructs for the DS8000


The numbering of the ranks depends on the order in which the ranks were created, and in turn on the order in which the RAID arrays were created on the array sites. The GUI interface for configuring the DS8000 can automatically create RAID arrays and ranks in the same step, and balances the creation of RAID arrays across DA pairs. We recommend that if you use the DSCLI to configure the DS8000, you also create RAID ranks on the array sites in a consistent manner; this creates a configuration that is more easily managed when you come to create the extent pools and balance them between the servers and the DA pairs.

The serial number reported by the DS6000 or DS8000 to the IBM i contains the LUN number. In the example below, the serial number of disk unit DD019 is 30-1001000, which is LUN 01 in LSS 10. In the DS6000 or DS8000 this is reported as LUN 1001.
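The decoding in this example can be expressed as a small parser (illustrative only; it assumes the serial format shown above):

```python
# Decode an IBM i disk-unit serial number such as '30-1001000' into the
# DS8000 LSS and LUN numbers. Per the example in the text, the digits
# after the dash start with the four-digit LUN id: LSS '10', LUN '01'.

def lss_and_lun(serial: str) -> tuple:
    digits = serial.split("-")[1]   # '1001000'
    lun_id = digits[:4]             # '1001' as reported by the DS8000
    return lun_id[:2], lun_id[2:]   # ('10', '01') -> (LSS, LUN)

print(lss_and_lun("30-1001000"))  # -> ('10', '01')
```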


RAID Arrays

The DS8000 allows the choice of RAID 5, RAID 10 and RAID 6. RAID 6 is supported on DS8000 at LIC levels R4 and above, with DA feature codes 3041, 3051 and 3061.

Disk Magic allows you to model different RAID options to find the configuration that best fits your performance and availability requirements. If you are considering a configuration of 'short-stroked' RAID 5 arrays (under-using the capacity of a RAID 5 array for performance reasons), we recommend that you consider RAID 10. The benefit in this case is that there is no requirement to 'fence' spare capacity to maintain performance.

Extent Pools

The Extent Pool construct provides flexibility in data placement as well as ease of management. To optimize performance, it is important to balance workload activity across Extent Pools assigned to server0 and Extent Pools assigned to server1. Typically this means assigning an equal number of Ranks to Extent Pools assigned to server0 and to Extent Pools assigned to server1, and an equal amount of workload activity to all Ranks.

We recommend assigning even-numbered ranks to even-numbered extent pools; this ensures balance between server 0 and server 1. This requires that you configured the ranks in order on the array sites: array site 1 becomes array 1 and rank 1, and so on.

The use of multi-rank extent pools allows you to define LUNs larger than the size of a single rank. At DS8000 licensed internal code levels prior to Licensed Internal Code


(LIC) Release 3, a LUN will not be 'striped' across all the ranks in the extent pool; the only time a LUN will span multiple ranks is when it does not fit in the original rank. Therefore a LUN will usually use no more than 6 or 7 disk arms. When allocating multiple LUNs into a multi-rank extent pool, each LUN will be allocated on the rank with the most available free space; this results in a 'round robin' style of allocation and will allocate LUNs onto the ranks in a roughly even fashion, assuming that the LUNs and ranks are the same size.
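The 'most free space' placement described above can be simulated with a toy model (not DS8000 microcode) to see how it degenerates into round-robin when LUNs and ranks are equally sized:

```python
# Toy model of pre-LIC-R3 allocation: each LUN is placed whole on the
# rank with the most free space. With equal LUN and rank sizes this
# walks the ranks in a round-robin pattern.

def allocate(rank_free_gb: list, lun_gb: int) -> int:
    """Place one LUN on the rank with the most free space; return its index."""
    target = max(range(len(rank_free_gb)), key=lambda i: rank_free_gb[i])
    rank_free_gb[target] -= lun_gb
    return target

ranks = [500, 500, 500, 500]                    # four equally sized ranks
placement = [allocate(ranks, 70) for _ in range(8)]
print(placement)  # -> [0, 1, 2, 3, 0, 1, 2, 3]
```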

DS8000 Licensed Internal Code (LIC) Release 3 introduced a new allocation algorithm, Storage Pool Striping (SPS). This allows finer-granularity striping across all the ranks in an extent pool and provides substantial performance benefits for some workloads.

For IBM i attached subsystems we recommend using multi-rank extent pools in combination with storage pool striping (rotate extents). Dedicating ranks or extent pools to a single workload will provide more predictable performance, but may cost more in terms of disk capacity to provide the desired level of performance.

Defining multiple ranks in an Extent Pool also provides efficiency in usable space. You can use Capacity Magic to estimate the usable capacity on the ranks for your chosen LUN size. For IBM i workloads, if you have a requirement to isolate workloads, you will need to define two Extent Pools (one for each server) for each workload.

Whichever configuration option you prefer, discuss it with your IBM representative or Business Partner, as our performance modeling tool, Disk Magic, needs to accurately reflect the configuration that you are planning. If you model a solution where all the disks are shared by all the workloads and then decide to isolate workloads, you may need more disks to achieve the same performance levels.

Data layout

Selecting an appropriate data layout strategy depends on your primary objectives

Spreading workloads across all components maximizes the utilization of the hardware. This includes spreading workloads across all the available Host Adapters and Ranks. However, it is always possible when sharing resources that performance problems may arise due to contention on those resources.

To protect critical workloads you should isolate them minimizing the chance that non-critical workloads can impact the performance of critical workloads

The greater the granularity of the resource, the more it can be shared. For example, there is only one cache per processor complex, so its use must be shared, although DS8000 intelligent cache management prevents one workload from dominating the cache. In


contrast, there are frequently hundreds of DDMs, so workloads can easily be isolated on different DDMs.

To spread a workload across ranks, you need to balance its I/Os across all the available ranks. SPS will achieve this when you use multi-rank extent pools.

Isolation of workloads is most easily accomplished where each ASP or LPAR has its own extent pool pair. This ensures that you can place data where you intend. I/O activity should be balanced between the two servers or controllers on the DS8000. This is achieved by balancing between odd and even extent pools, and by making sure that the number of ranks is balanced between odd and even extent pools.

Make sure that you isolate critical workloads. We strongly recommend placing only IBM i LUNs on any rank (rather than mixing with non-IBM i). This is for performance management reasons; it will not in itself provide improved performance. If you mix production and development workloads on ranks, make sure that the customer understands which actions may impact production performance, for example, adding LUNs to ASPs.

iASPs

When designing a DS8000 layout for an iASP configuration, you have the option to model the iASP LUNs on isolated extent pools. You may achieve more cost-effective performance by putting iASP and SYSBAS LUNs onto the same shared ranks and extent pools, for example, when the SYSBAS activity would drive a requirement for more disks to maintain performance if the ranks were dedicated to SYSBAS. Remember, when using Disk Magic to model your iASP configuration, that you may need smaller LUNs for the SYSBAS requirement.

LUN Size

LUNs must be defined in specific sizes that emulate IBM i devices. The size of the LUNs defined is typically related to the wait time component of the response time: if there are insufficient LUNs, wait time typically increases. The sizing process determines the correct number of LUNs required to address the required capacity while meeting performance objectives.

The number of LUNs drives the requirement for more FC adapters on the IBM i, due to the addressing restrictions of IBM i. Remember that each path to a LUN counts towards the maximum addressable LUNs on each IBM i IOA. For example, if you have 64 LUNs and would like 2 paths to each LUN, this requires 4 IOAs for addressability on releases prior to IBM i 6.1.
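The IOA arithmetic above can be sketched as follows (an illustrative calculation, using the pre-6.1 limit of 32 addressable LUNs per IOA stated in the text):

```python
import math

def ioas_required(num_luns, paths_per_lun, max_addresses_per_ioa=32):
    """Each path to each LUN consumes one address on an IOA, so the
    IOA count is the total address count divided by the per-IOA limit."""
    total_addresses = num_luns * paths_per_lun
    return math.ceil(total_addresses / max_addresses_per_ioa)

# The example from the text: 64 LUNs with 2 paths each needs 4 IOAs pre-6.1.
print(ioas_required(64, 2))  # 4
```

With the IBM i 6.1 IOP-less limit of 64 addresses per port, the same workload would need half as many ports: `ioas_required(64, 2, max_addresses_per_ioa=64)` returns 2.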

Disk Magic can be used to model the number of LUNs required. Disk Magic does not always accurately predict the effective capacity of the ranks, depending on the DDM size selected and the number of spares assigned. The IBM tool Capacity Magic can be used to verify capacity and space utilization plans.

IBM i 6.1 and higher enables the use of larger LUNs while maintaining performance. You can typically start modeling based on a 70 GB LUN size. A smaller number of larger LUNs will reduce the number of I/O ports required on both the IBM i and the DS8000. Remember that in an iASP environment you may exploit larger LUNs in the iASPs, but SYSBAS may require more, smaller LUNs to maintain performance.
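As a starting point for the modeling described above (illustrative only; Disk Magic should refine the result), the initial LUN count can be derived from the usable capacity and the ~70 GB starting LUN size:

```python
import math

def starting_lun_count(usable_capacity_gb, lun_size_gb=70):
    """Initial LUN count for sizing: usable capacity divided by the
    per-LUN size, rounded up. A first guess only, to be refined with
    Disk Magic against the actual I/O rates."""
    return math.ceil(usable_capacity_gb / lun_size_gb)

# e.g. 4200 GB of usable capacity at ~70 GB per LUN:
print(starting_lun_count(4200))  # 60
```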

Multipath

Multipath provides greater resiliency for SAN attached storage. In combination with RAID 5, RAID 6, or RAID 10 protection, DS8000 multipath provides protection of the data paths and the data itself without the requirement of additional LUNs. However, additional I/O adapters and changes to the SAN fabric configuration may be required.

The IBM i supports up to 8 paths to each LUN. In addition to the availability considerations, lab performance testing has shown that 2 or 3 paths provide performance improvements when compared to a single path. Typically, 2 paths to a LUN is the ideal balance of price and performance. The Disk Magic tool supports multipathing over 2 paths.

You might want to consider more than 2 paths for workloads where there is high wait time, or where high I/O rates are expected to LUNs, for example SSD-backed LUNs.

It is important to plan for multipath so that the two or more paths to the same set of LUNs use different elements of connection, such as DS6000 and DS8000 host adapters, SAN switches, IBM i I/O towers, and HSL loops. Good planning for multipath includes:

- Connections to the same set of LUNs via different DS host cards in different I/O enclosures on the DS8000.
- Connections to the same set of LUNs via different SAN switches.
- Placing the IOP/IOA adapter pairs in the IBM i I/O towers which connect to the same set of LUNs in different expansion towers, located on different HSL or 12X loops wherever possible.

When an IBM i system IPLs, it discovers all paths to the disk. The first path discovered will be the preferred path for I/O. If multiple IBM i LPARs are sharing the same DS8000 or DS6000 host adapters, each system may discover the same initial path to the disk. To avoid contention on SAN switch and HA ports, it is essential that you implement LUN masking in the SAN: specify a different range of ports and HAs for each LPAR to ensure that activity is balanced across all available paths. You can do LUN masking either in the DS8000, using the volume groups construct, or you can explicitly map IBM i IOAs to DS8000 HAs in the SAN switch.


When assigning LUNs to HAs, the recommendation is to isolate IBM i LUNs on their own HAs where possible. We recommend that you spread activity across the available HAs; since there is typically little skew in a workload, this is usually not difficult. Do not allow multiple IBM i IOAs to see multiple HAs, because in this case all IBM i IOAs could establish paths through the same HA, which would result in unbalanced I/O traffic between the DS8000 host adapters.

Host Attachments

IBM i I/O adapters should be defined as FC-AL connections for direct attached connections (without a switch). For all switched connections, use SCSI-FCP, which is the default for DS8000.

In a pre-IBM i 6.1 multipath environment, it is unlikely that any single host attachment is the bottleneck in an IBM i DS8000 configuration. It is common to combine multiple IBM i FC attachments onto a single DS8000 FC attachment through a SAN. In this case, it is important to ensure that you do not over-configure the DS8000 attachment; Disk Magic can be used to model the DS8000 attachment. The normal range is to combine 2-4 IBM i IOAs onto a single DS8000 host attachment.

IBM i fiber adapters should be placed in accordance with the card placement guidelines that can be found in the Redpaper "PCI, PCI-X, PCI-X DDR, and PCIe Placement Rules for IBM System i Models", available for download from http://www.redbooks.ibm.com/abstracts/redp4011.html?Open. A previous and obsolete version of this paper is available for releases of code prior to V5R2: "PCI Card Placement Rules for the IBM eServer iSeries Server, OS/400 Version 5 Release 2, September 2003", located on the web at http://www.redbooks.ibm.com/abstracts/redp3638.html?Open.

You are encouraged to consider these additional guidelines:

- For 0588, 5088, 5094, 5096, 5294, and 5296 style I/O towers, it is recommended to install no more than 1 fibre adapter per multi-adapter bridge (MAB), whether the attachment is for disk or tape. If the MAB is dedicated to disk attachment, then 2 adapters may be considered.
- Balance the fibre adapters evenly across the HSL and 12X loops. Always place both the IOP and IOA in 64-bit card slots.
- If you are not using multipath, you may find optimum performance is achieved by limiting the LUNs on each IBM i FC card to 20-22.

When spreading activity across the host attachments, you need to make sure that in a multipath configuration, alternate paths are provided to each server of the DS8000.


Multipath connectivity is provided starting with i5/OS V5R3 and is recommended when connecting to the DS8000, for availability and concurrent maintenance.

Connections on the IBM i should be to the 2787 or 5760 cards (5760 cards require V5R3 or higher, and they require an 800, 810, 825, 870, 890, or i5 CPU). We recommend using the first two 2787 cards in each node in a tower and ensuring that cards are balanced across all towers and HSL rings.

IOPless Host Attachments

IBM i 6.1, together with POWER6 and the new Smart IOAs, introduces the IOP-less IOAs for disk and tape. The Smart IOAs are:

- 5735 - 2-port 8 Gb Fibre Smart IOA, PCIe
- 5749 - 2-port 4 Gb Fibre Smart IOA, PCI-X DDR2, IBM i OS only
- 5774 - 2-port 4 Gb Fibre Smart IOA, PCIe, IBM i OS, Linux, and AIX

These new cards provide significant performance advantages and do not require IOPs, thus saving costs and slots. The maximum number of supported addresses is increased from 32 to 64 for each port. Typically, you would configure 2 paths to each LUN for availability.

IBM i 6.1 or higher with POWER6 or POWER7 and the IOP-less adapters can support up to 64 LUNs on each port; however, with the move to configure larger LUNs, for most workloads you should limit the total LUNs on a card to 64 (32 on each port). For workloads with a low I/O rate, you may be able to support more than 32 LUNs on each port.

For IBM i 6.1 or higher with POWER6 or POWER7 configurations and the new IOP-less IOAs, you should plan on a 1:1 ratio between the IBM i IOA ports and the DS8000 I/O ports. For the highest performance configurations where host attachments are stressed (not likely in an IBM i production workload), you should plan to use only 2 ports of the DS8000 4 Gb 4-port HA card. The DS8700 and DS8800 have both 4-port and 8-port 8 Gb host adapter cards; for maximum performance select the 4-port card, for increased connectivity select the 8-port card. The increased performance capabilities of these new cards can be modeled using Disk Magic.
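The IOP-less planning guidelines above (about 32 LUNs per port for most workloads, two ports per card, and a 1:1 ratio of IBM i IOA ports to DS8000 host ports) can be sketched as a rough planning calculation. This is an illustrative helper, not an IBM sizing tool:

```python
def iopless_plan(num_luns, paths_per_lun, luns_per_port=32):
    """Rough IOP-less attachment plan: each path to each LUN consumes one
    port address; plan ~32 LUNs per port, 2 ports per card, and a 1:1
    ratio of IBM i IOA ports to DS8000 host ports."""
    ibm_i_ports = -(-num_luns * paths_per_lun // luns_per_port)  # ceiling division
    return {
        "ibm_i_ports": ibm_i_ports,
        "two_port_cards": -(-ibm_i_ports // 2),
        "ds8000_ports": ibm_i_ports,  # 1:1 ratio per the guideline
    }

# 128 LUNs with 2 paths each: 256 addresses / 32 per port
# -> 8 IBM i ports, 4 two-port cards, 8 DS8000 ports.
plan = iopless_plan(num_luns=128, paths_per_lun=2)
```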

Card placement guidelines are as follows:

- 6 Smart IOAs per 12X loop
- 4 Smart IOAs per HSL-2 loop


The IOP-less adapters support DS8000 and DS6000 (not ESS). The same adapters may be used for disk and tape, as well as Boot from SAN; however, it is strongly recommended not to share IOAs between disk and tape.

Maximum resource names in IBM i

The System i Handbook and the System Builder document the maximum quantity of internal disk and external LUNs. For some of these maximum limits, this isn't really the maximum quantity of LUNs as we normally think of them, but the maximum quantity that will be seen at the hardware level. For example:

Fibre card 1 will have the 1st path to 32 LUNs.
Fibre card 2 will have the 2nd path to the same 32 LUNs.
IBM i microcode and operating system will see this as 64 resource names, and this 64 count is what you use when trying to determine whether you're approaching the maximum supported number for the i5 model.
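The resource-name count in this example is simply LUNs multiplied by paths, since every path to every LUN appears to the IBM i as its own resource name (illustrative arithmetic only):

```python
def resource_names(num_luns, paths_per_lun):
    """IBM i sees one disk resource name per path per LUN, so the count
    compared against the model's supported maximum is LUNs x paths."""
    return num_luns * paths_per_lun

# The example from the text: 32 LUNs with a path from each of 2 fibre
# cards appear as 64 resource names.
print(resource_names(32, 2))  # 64
```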

Logical Configuration

When defining IBM i LUNs, you can define them as protected unless you want to use i5/OS mirroring, in which case you need to ensure that you define the LUNs as unprotected when you create them. If you are using Boot from SAN at a release prior to IBM i 6.1, you will have to use mirroring to protect the Load Source; in this case, you must define the Load Source LUNs as unprotected.

If you are planning to use copy services, either for high availability or just for migrations, it is important to note that the source and target in a copy services pair must have the same attributes. Once the LUN is defined with the protection attribute, this cannot be changed without deleting the LUN and re-defining it. Deleting a LUN deletes all the data on the LUN.

A host adapter port is identified to the DS through its World Wide Port Name (WWPN). A set of host ports can be associated into a port group and managed together. This port group is referred to as a host attachment within the GUI and as a host connection in the DS CLI. A host attachment can be associated with a volume group to define which LUNs will be assigned to that host adapter.

A volume group is a group of logical volumes (LUNs) that are attached to a host adapter. The type of volume group used with open system hosts determines how the logical volume number is converted to the host-addressable LUN_ID on the Fibre Channel SCSI interface.


When associating a host attachment with a volume group, the host attachment contains attributes that define the logical block size and the address discovery method used by that host adapter. These attributes must be consistent with the type of volume group that is assigned to that host attachment. i5/OS LUNs are accessed by the host in 520-byte blocks. Of these 520 bytes, 8 are used by the operating system, so each block contains 512 usable bytes, as in LUNs for other open systems. So, for an i5/OS host attachment, 520 is the correct block size to define, and the correct address discovery method is Report LUN. The correct block size and address discovery method for IBM i are generated by specifying IBM i attachments when creating volume groups and host attachments.

The LUN ID on the DS8000 is a combination of the LSS number and an incremented LUN number.
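The 520-byte block arithmetic described above (8 bytes of operating system overhead, 512 usable bytes per block) works out as follows (illustrative calculation):

```python
def usable_bytes(total_blocks, block_size=520, os_overhead=8):
    """IBM i LUNs use 520-byte blocks; 8 bytes per block are consumed by
    the operating system, leaving 512 usable bytes per block."""
    return total_blocks * (block_size - os_overhead)

# A LUN of 1,000,000 520-byte blocks exposes 512,000,000 usable bytes.
print(usable_bytes(1_000_000))  # 512000000
```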

Adding LUNs to ASP

Adding a LUN to an ASP generates I/O activity on the rank as the LUN is formatted. If there is production work sharing the same rank, you may see a performance impact. For this reason, it is recommended that you schedule adding LUNs to ASPs outside peak intervals.

Boot from SAN

IOP 2847 plus the appropriate Fibre IOA allows you to place an i5/OS load source on a Fibre Channel attached ESS Model 800, DS6000, or DS8000. The LUNs are attached using features 2766, 2787, or 5760, and apart from the Load Source, you can have another 31 LUNs on the same attachment. The Load Source device does not support multipath at V5R4, but other LUNs on the 2847 can be multipath devices. To provide redundancy for the Load Source, you can have two 2847 IOPs and use i5/OS mirroring (with the Load Source LUN defined as unprotected). Starting with IBM i 6.1, the LUN containing the load source may participate in a multipath configuration.

Software

It is essential that you have up-to-date software levels installed. There are fixes that provide performance enhancements, correct performance reporting, and support for new functions. As always, call the support center before installation to verify that you are current with fixes for the hardware that you are installing. It is also important to maintain current software levels to make sure that you get the benefit from new fixes that are developed.

When updating storage subsystem LIC, it is also important to check whether there are any server software updates required. Details of supported configurations and software levels are provided by the System Storage Interoperation Center: http://www-03.ibm.com/systems/support/storage/config/ssic/displayesssearchwithoutjs.wss?start_over=yes


Performance Monitoring

Once your storage subsystem is installed, it is essential that you continue to monitor its performance. IBM i Performance Tools reports provide information on I/O rates and on response times to the server. This allows you to track trends in increased workload and changes in response time. You should review these trends to ensure that your storage subsystem continues to meet your performance and capacity requirements. Make sure that you are current on fixes to ensure that Performance Tools reports are reporting your external storage correctly.

IBM i 7.1 adds a new category, EXTSTG, to IBM i Collection Services, which provides new performance metrics collected from the DS8000. This function requires DS8000 R4 or later firmware. Data can be presented in graphs using iDoctor today, and will be incorporated into Performance Data Investigator (PDI) in a future release.

If you have multiple servers attached to a storage subsystem, particularly if you have other platforms attached in addition to IBM i, it is essential that you have a performance tool that enables you to monitor the performance from the storage subsystem perspective.

IBM TPC for Disk provides a comprehensive tool for managing the performance of DS8000s. You should collect data from all attached storage subsystems at 15-minute intervals. In the event of a performance problem, IBM will ask for this data; without it, resolution of any problem may be prolonged.

Copy Services Considerations

Copy Services is an optional feature of the IBM System Storage DS8000. It brings powerful data copying and mirroring technologies to open systems environments, previously available only for mainframe storage.

The following Copy Services functions are available on the DS8000 and are fully supported on IBM i:
- Metro Mirror (previously known as synchronous PPRC)
- Global Mirror (previously known as asynchronous PPRC)
- FlashCopy, including Space Efficient FlashCopy (FlashCopy SE)


Customers may use Copy Services on the entire disk space or on individual IASPs. Metro Mirror and Global Mirror provide business continuity and disaster recovery, while FlashCopy helps to minimize the backup window on a production system.

Space Efficient FlashCopy is designed for temporary copies. Copy duration should generally not last longer than 24 hours unless the source and target volumes have little write activity. FlashCopy SE is optimized for use cases where a small percentage of the source volume is updated during the life of the relationship. If much more than 20% of the source is expected to change, there may be tradeoffs in terms of performance versus space efficiency; in this case, standard FlashCopy may be a good alternative. Since a background copy would update the entire target, it would not make much sense and is not permitted with FlashCopy SE. Likewise, establishing a FlashCopy SE relationship just prior to running applications that make widespread changes to the source volumes (e.g., database reorgs, formats, full volume restores from tape, etc.) is not advisable. For the latest recommendations on using FlashCopy SE, please refer to this document: IBMers: http://w3-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/FLASH10617; Business Partners: http://partners.boulder.ibm.com/src/atsmastr.nsf/WebIndex/FLASH10617
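A rough, illustrative repository-sizing estimate follows from the change-rate rule of thumb above. The headroom factor is an assumption for the sketch, not an IBM formula; actual repository sizing should follow the FLASH10617 guidance:

```python
def se_repository_estimate_gb(source_gb, expected_change_fraction, headroom=1.5):
    """Space-efficient repository only holds changed tracks, so a first
    estimate is source size x expected change fraction, padded by an
    assumed headroom factor against underestimating the change rate."""
    return source_gb * expected_change_fraction * headroom

# 1000 GB source with 10% expected change during the relationship:
# about 150 GB of repository capacity.
estimate = se_repository_estimate_gb(1000, 0.10)
```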

Consulting services are available from IBM STG Lab Services to assist in the planning and implementation of DS8000 Copy Services in an IBM i environment.


The DS8000 Copy Services functions can be summarized as follows (Metro Mirror, Global Copy, and Global Mirror are all forms of Peer-to-Peer Remote Copy, i.e., continuous copy):

- FlashCopy: for backups and snapshots (copy and no-copy options, and Space Efficient FlashCopy)
- Metro Mirror (synchronous): for local availability
- Global Copy (extended distance): for data migration only
- Global Mirror (asynchronous): for DR, using Consistency Groups


http://www-03.ibm.com/systems/services/labservices/platforms/labservices_i.html

Further References

For further detailed information on implementing DS6000 and DS8000 in an IBM i environment, refer to the following Redbooks, which can be downloaded from www.redbooks.ibm.com:

IBM System Storage Copy Services on System i A Guide to Planning and Implementation (SG24-7103)

iSeries and IBM TotalStorage A guide to implementing external disk on IBM eServer i5 (SG24-7120)

High Availability on IBM i including PowerHA on i and 61 HA enhancements (SG24-7405)

Acknowledgements

Thanks to Nancy Roper and Jana Jamsek from the Advanced Technical Support Storage group, Sue Baker and Eric Hess from Advanced Technical Support Power Systems, and Selwyn Dickey and Tim Klubertanz from STG Lab Services for their input to and review of this document.


- DS8800 support
- PowerHA-ACS integration with TPC-R
- IBM i-based hot data ASP balance and DB2 Media Placement exploitation of DS8000 SSD drives
- Easy Tier support
- PowerHA NPIV support for DS8000
- Full support of VIOS and PowerHA
- PowerHA + DS8000 copy services end-to-end integration
- Common Smart-IOA fiber for disk and tape
- New 4Gb/8Gb Smart fiber I/O adapter (IOP-less), 6.1 (++ performance)
- Tagged Command Queuing and Header Strip Merge (+++ performance)
- Common one-stop POWER/DS8000 support from Supportline experts
- 4Gb fiber
- Advanced 'DS8000 on IBM i' education series
- SAN on iSeries Redbook - 3
- BRMS native support for FlashCopy environment
- Boot from SAN (2847 IOA)
- DSCLI (DS Command Line Interface) provides IBM i control
- Additional i5/OS LUN sizes for increased flexibility
- SAN on iSeries Redbook - 2
- i5/OS multipath fiber
- 2Gb fiber
- iSeries Copy Services for IBM DS8000 (aka the Toolkit)
- Disk Magic and IBM i Performance Tools coordination
- Enhanced ESS cache algorithms for iSeries (benefits all OSs)
- SAN on iSeries Redbook - 1
- Basic 1Gb fiber connectivity

IBM i Storage Management

Many computer systems require you to take responsibility for how information is stored and retrieved from the disk units, along with providing the management environment to balance disk utilization, enable disk protection, and maintain a balanced data spread for optimum performance.

When you create a new file in a UNIX system, you must tell the system where to put the file and how big to make it. You must balance files across different disk units to provide good system performance. If you discover later that a file needs to be larger, you need to copy it to a location on disk that has enough space for the new, larger file. You may need to move files between disk units to maintain system performance.


The IBM i server is different in that it takes responsibility for managing the information in auxiliary storage pools (also called disk pools or ASPs).

When you create a file, you estimate how many records it should have. You do not assign it to a storage location; instead, the system places the file in the location that ensures the best performance. In fact, it normally spreads the data in the file across multiple disk units. When you add more records to the file, the system automatically assigns additional space on one or more disk units. Therefore, it makes sense to use disk copy functions to operate on either the entire disk space or the IASP.

IBM i uses a single-level storage, object-oriented architecture. It sees all disk space and the main memory as one storage area and uses the same set of virtual addresses to cover both main memory and disk space. Paging of the objects in this virtual address space is performed in 4 KB pages. However, data is usually blocked and transferred to storage devices in larger than 4 KB blocks. Blocking of transferred data is based on many factors, for example, Expert Cache usage.

DS8000 offers a comprehensive storage solution for IBM i, as long as the guidelines are followed.
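The 4 KB paging arithmetic above works out as follows (an illustrative calculation; actual transfer blocking depends on factors such as Expert Cache, as noted):

```python
def pages_for_object(object_bytes, page_size=4096):
    """Number of 4 KB single-level-storage pages an object of the given
    size occupies (ceiling division)."""
    return -(-object_bytes // page_size)

# A 1 MB object occupies 256 pages of 4 KB each.
print(pages_for_object(1_048_576))  # 256
```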

Virtual IO Server (VIOS) Support

A new way of connecting the DS8000 family of products is via VIOS. The Virtual I/O Server is part of the IBM PowerVM Editions hardware feature on IBM Power Systems. The Virtual I/O Server technology facilitates the consolidation of network and disk I/O resources and minimizes the number of required physical adapters in the IBM Power Systems server. It is a special-purpose partition which provides virtual I/O resources to its client partitions. The Virtual I/O Server actually owns the physical resources that are shared with clients. A physical adapter assigned to the VIOS partition can be used by one or more other partitions.

The Virtual I/O Server can provide virtualized storage devices, storage adapters, and network adapters to client partitions running an AIX, IBM i, or Linux operating environment. The core I/O virtualization capabilities of the Virtual I/O Server are:
- Virtual SCSI
- Virtual Fibre Channel using NPIV (Node Port ID Virtualization)
- Virtual Ethernet bridge using Shared Ethernet Adapter (SEA)

The storage virtualization capabilities of PowerVM and the Virtual I/O Server are supported by the DS8000 series, using DS8000 LUNs as VSCSI backing devices in the Virtual I/O Server. It is also possible to attach DS8000 LUNs directly to the client LPARs using virtual Fibre Channel adapters via NPIV.

Further information on DS8000 host attachment is found in the following Redbook: http://www.redbooks.ibm.com/redpieces/pdfs/sg248887.pdf

For more information on implementing VIOS with DS8000, please look in this Redbook: http://www-03.ibm.com/systems/resources/systems_i_Virtualization_Open_Storage.pdf

Sizing for performance

It's important to size a storage subsystem based on I/O activity rather than capacity requirements alone. This is particularly true of an IBM i environment because of its sensitivity to I/O performance. IBM has excellent tools for modeling the expected performance of your workload and configuration. We provide some guidelines and general words of wisdom in this paper; however, these provide a starting point only for sizing with the appropriate tools.

It is equally important to ensure that the sizing requirements for your SAN configuration also take into account the additional resources required when enabling advanced Copy Services functions such as Point-in-Time (PiT) Copy (also known as FlashCopy) or PPRC (Global Mirror and Metro Mirror), particularly if you are planning to enable Metro Mirror.

A bandwidth sizing should be conducted for Global Mirror and Metro Mirror.

You will need to collect IBM i performance data. Generally, you will collect a week's worth of performance data for each system/LPAR and send the resulting reports.

Each set of reports should include print files for the following:
- System Report - Disk Utilization (required)
- Component Report - Disk Activity (required)
- Resource Interval Report - Disk Utilization Detail (required)
- System Report - Storage Pool Utilization (required)

Send the report print files as indicated below (send reports as txt file format type). If you are collecting from more than one IBM i or LPAR, the reports need to be for the same time period for each system/LPAR, if possible.

DS8000 HBA

DS8800 Host Attachment

The DS8800 supports 2 basic types of host connection features: short wave FCP/FICON and long wave FCP/FICON. The FCP/FICON host attachment features now come packaged with two options: a 4-port 8 Gb per second card or an 8-port 8 Gb per second card. With either the 8-port or the 4-port card, the connector type is the same: a Lucent Connector (LC). Each port of the FCP/FICON host adapter can be configured as FCP or FICON as needed (but a single port cannot operate as both FCP and FICON simultaneously). All DS8800 host attachment features are designed to connect into a PCIe slot in the provided I/O drawers.

FCP/FICON feature codes are:
- 3153 - 8 Gb per second 4-port SW FCP/FICON PCIe Adapter
- 3157 - 8 Gb per second 8-port SW FCP/FICON PCIe Adapter
- 3253 - 8 Gb per second 4-port LW FCP/FICON PCIe Adapter
- 3257 - 8 Gb per second 8-port LW FCP/FICON PCIe Adapter

- 6 ndashCopyright IBM CorporationSeptember 29th 2011httpwww-03ibmcomsupporttechdocsatsmastrnsf Document TD103095

Hints and tips for implementing DS8000 in a IBM i environment

For maximum performance, you should consider using the 4-port cards; for maximum connectivity, you should use the 8-port cards.

DS8700 Host Attachment

The DS8700 supports 2 basic types of host connection features: short wave FCP/FICON and long wave FCP/FICON. The FCP/FICON host attachment features come packaged as a four-port 4 Gb Lucent Connector (LC) card adapter. Each port of the FCP/FICON host adapter can be configured as FCP or FICON as needed (but a single port cannot operate as both FCP and FICON simultaneously). All DS8700 host attachment features are designed to connect into a PCIe slot in the provided I/O drawers.

FCP/FICON feature codes are:
- 3143 - 4-port shortwave 4 Gb FCP/FICON
- 3243 - 4-port longwave 4 Gb FCP/FICON
- 3245 - 4-port longwave 4 Gb FCP/FICON (10 km)
- 3153 - 4-port shortwave 8 Gb FCP/FICON (maximum of 8 per base frame and 8 per first expansion frame)
- 3253 - 4-port longwave 8 Gb FCP/FICON (10 km) (maximum of 8 per base frame and 8 per first expansion frame)

Note: The 8 Gb per second FCP/FICON cards require installation in specific slots in the I/O enclosures and are therefore limited to a maximum of 8 features in the base frame and an additional 8 features in the first expansion frame.

DS8800 and DS8700 Host Attachment: Host Port and Installation Sequence Guide and best practices

This guide is updated from time to time The latest version is here

http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/TD105671

Best practices guidelines:

- Isolate host connections from remote copy connections (MM, GM, GC, and MGM) on a host adapter basis.
- Isolate zSeries and other host connections from IBM i host connections on a host port basis.
- Always have symmetric pathing by connection type (i.e., use the same number of paths on all host adapters used by each connection type).
- Size the number of host adapters needed based on expected aggregate maximum bandwidth and maximum IOPS (use Disk Magic or other common sizing methods based on actual or expected workload).
- Sharing different connection types within an I/O enclosure is encouraged.


- When possible, isolate asynchronous from synchronous copy connections on a host adapter basis.

- When utilizing multipathing, try to zone ports from different I/O enclosures to provide redundancy and balance (i.e., include a port from a host adapter in enclosure 0 and enclosure 1).

DS8300 and DS6000 Host Attachment

Contact IBM if you need this information

DS8000 Solid State Drives (SSD)

Perhaps one of the most exciting innovations to happen to enterprise storage is SSD. We have just begun to explore the promising future of this technology. Solid-state storage means using a memory-type device for mass storage, rather than spinning disk or tape. First-to-market devices are the shape of standard hard disks, so they plug easily into existing disk systems.

IBM is making solid-state storage affordable with innovative architectures, system and application integration, and management tools that enable effective use of solid-state storage. Solid-state technologies will continue to evolve, and IBM researchers have been making significant breakthroughs. IBM will continue to bring the best implementations to our customers as innovation allows us to bring the full value of this technology to market.

Solid-state storage technology can have the following benefits:

- Significantly improved performance for hard-to-tune, I/O bound applications; no code changes required
- Reduced floor space
- Can be filled near 100% without performance degradation
- Faster IOPS
- Faster access times
- Reduced energy use

DS8000 offers IBM i SSD integration choices: you can either exploit the tools within the IBM i OS, or use the DS8000 Easy Tier function, which is independent of the server.

IBM i SSD tools and automation (for internal and DS8000):
- Manually:
  - Create an all-SSD user ASP or independent ASP
  - Manually place data onto the user ASP or IASP
- DB2 Media Preference:
  - User controls what media type database files should be stored on
  - DB files known to be I/O performance critical can explicitly be placed on high-performing SSDs

- 8 ndashCopyright IBM CorporationSeptember 29th 2011httpwww-03ibmcomsupporttechdocsatsmastrnsf Document TD103095

Hints and tips for implementing DS8000 in a IBM i environment

bull Dynamic changes to media preference supported which enables dynamic data movement

ndash ASP Balancer bull Based read IO count statistics for each 1 MB extent bull Migrates ldquohotrdquo extents from HDDs to SSDs and ldquocoldrdquo extents

from SSDs to HDDsndash UDFS Media Preference (new with IBM i 71)

bull New lsquoUnitrsquo parameter on CRTUDFS command DS8000 (agnostic to IBM i and DB2)

ndash IBM System Storage EasyTier works with IBM ibull DS8000 software to locate and migrate hot data

onto DS8000 SSDs ndash any IBM i versionbull Advantages of Using Easy Tier Automatic Mode

Designed to be Easy The user is not required to make a lot of decisions

or go through an extensive implementation process Efficient Use of SSD Capacity

Easy Tier moves 1 gigabyte data extents between storage tiers

Intelligence Easy Tier learns about the workload over a period

of time (24 hours) As workload patterns change Easy Tier finds any

new highly active (ldquohotrdquo) extents and exchanges them with extents residing on SSDs that may have ldquocooled offrdquo

Negligible Performance Impact Easy Tier moves data gradually to avoid contention

with host IO activity The overhead associated with Easy Tier

management is nearly undetectable No need for storage administrators to worry about

scheduling when migrations occur
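The activity-based migration described above can be illustrated with a toy model. This is not the actual Easy Tier algorithm, which is internal to the DS8000 microcode; it is only a sketch of the idea of ranking 1 GB extents by observed IO activity and keeping the hottest on the SSD tier:

```python
# Toy sketch of activity-based tiering (NOT the real Easy Tier
# algorithm): rank extents by IO counts observed over the learning
# window and keep the hottest on SSD, up to the SSD tier's capacity.

def plan_ssd_residents(extent_io_counts, ssd_capacity_extents):
    """extent_io_counts: {extent_id: io_count} observed over e.g. 24 hours."""
    ranked = sorted(extent_io_counts, key=extent_io_counts.get, reverse=True)
    return set(ranked[:ssd_capacity_extents])

# Hypothetical example: six 1 GB extents, room for two on SSD.
counts = {"e1": 500, "e2": 20, "e3": 900, "e4": 5, "e5": 300, "e6": 40}
hot = plan_ssd_residents(counts, 2)   # e3 and e1 are the hottest
```

Re-running such a plan periodically, and swapping newly hot extents for cooled-off ones, is the intuition behind the "learning" behaviour described above.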

There are benefits to implementing Easy Tier even if you plan to manage SSDs from the IBM i OS. These benefits are documented in this white paper:

http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/FLASH10755

DS8000 Logical Configuration

The following chart shows the logical configuration constructs for the DS8000.


The numbering of the ranks depends on the order in which the ranks were created, and in turn on the order in which the RAID arrays were created on the array sites. The GUI interface for configuring the DS8000 can automatically create RAID arrays and ranks in the same step, and it balances the creation of RAID arrays across DA pairs. If you use the DSCLI to configure the DS8000, we recommend that you also create RAID ranks on the array sites in a consistent manner; this creates a configuration that is more easily managed when you come to create the extent pools and balance them between the servers and the DA pairs.

The serial number reported by the DS6000 or DS8000 to the IBM i contains the LUN number. In the example below, the serial number of disk unit DD019 is 30-1001000, which is LUN 01 in LSS 10. In the DS6000 or DS8000 this is reported as LUN 1001.
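As a small illustration of this numbering scheme, the LSS and LUN can be read directly out of the digits of the reported serial number. This sketch is based only on the 30-1001000 example above:

```python
# Sketch: pull the LSS and LUN out of the serial number reported to
# IBM i, following the 30-1001000 example in the text (the first two
# digits after the dash are the LSS, the next two the LUN).

def lss_and_lun(serial):
    digits = serial.split("-")[1]          # "1001000"
    lss, lun = digits[0:2], digits[2:4]    # "10", "01"
    return lss, lun, lss + lun             # DS8000 reports LUN "1001"
```

Mapping resource serial numbers back to DS8000 LUN ids this way is useful when correlating IBM i disk resources with the DS8000 logical configuration.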


RAID Arrays

The DS8000 allows the choice of RAID 5, RAID 10, and RAID 6. RAID 6 is supported on the DS8000 at LIC levels R4 and above, with DA feature codes 3041, 3051, and 3061.

Disk Magic allows you to model different RAID options to find the configuration that best fits your performance and availability requirements. If you are considering a configuration of 'short-stroked' RAID 5 arrays (under-using the capacity of a RAID 5 array for performance reasons), we recommend that you consider RAID 10. The benefit in this case is that there is no requirement to 'fence' spare capacity to maintain performance.

Extent Pools

The Extent Pool construct provides flexibility in data placement as well as ease of management. To optimize performance it is important to balance workload activity across Extent Pools assigned to server0 and Extent Pools assigned to server1. Typically this means assigning an equal number of Ranks to the Extent Pools on each server, and an equal amount of workload activity to all Ranks.

We recommend assigning even-numbered ranks to even-numbered extent pools; this ensures balance between server 0 and server 1. This requires that you configured the ranks in order on the array sites: array site 1 becomes array 1 and rank 1, and so on.
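The even/odd balancing rule can be sketched as follows (the pool names P0 and P1 are hypothetical placeholders for an even and an odd extent pool):

```python
# Sketch of the balancing recommendation: even-numbered ranks go to an
# even extent pool (server 0), odd-numbered ranks to an odd extent
# pool (server 1). Pool names here are hypothetical.

def assign_ranks_to_pools(rank_ids):
    pools = {"P0": [], "P1": []}   # P0 -> server 0, P1 -> server 1
    for r in rank_ids:
        pools["P0" if r % 2 == 0 else "P1"].append(r)
    return pools

pools = assign_ranks_to_pools(range(1, 9))   # ranks 1..8
```

With eight ranks this yields four ranks per server, which is the balanced starting point the text recommends.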

The use of multi-rank extent pools allows you to define LUNs larger than the size of a single rank. At DS8000 Licensed Internal Code (LIC) levels prior to Release 3, a LUN is not 'striped' across all the ranks in the extent pool; the only time a LUN spans multiple ranks is when it does not fit in the original rank. Therefore a LUN will usually use no more than 6 or 7 disk arms. When allocating multiple LUNs into a multi-rank extent pool, each LUN is allocated on the rank with the most available free space. This results in a 'round robin' style of allocation and distributes the LUNs across the ranks in a roughly even fashion, assuming that the LUNs and ranks are the same size.
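The pre-R3 "most free space" behaviour described above can be sketched like this (illustrative only, not DS8000 microcode):

```python
# Illustrative model of the pre-R3 allocation behaviour: each new LUN
# lands on the rank with the most free space, which degenerates into a
# round-robin spread when LUNs and ranks are equal-sized.

def allocate_luns(rank_free_gb, lun_sizes_gb):
    free = dict(rank_free_gb)
    placement = []
    for size in lun_sizes_gb:
        rank = max(free, key=free.get)   # most free space wins
        free[rank] -= size
        placement.append(rank)
    return placement

# Two equal ranks, four equal LUNs: allocation alternates between ranks.
order = allocate_luns({"R1": 500, "R2": 500}, [70] * 4)
```

When LUN or rank sizes are unequal the spread becomes uneven, which is one reason Storage Pool Striping (below) is preferred at Release 3 and later.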

DS8000 Licensed Internal Code (LIC) Release 3 introduced a new allocation algorithm: Storage Pool Striping (SPS). This allows finer-granularity striping across all the ranks in an extent pool and provides substantial performance benefits for some workloads.

For IBM i attached subsystems we recommend using multi-rank extent pools in combination with Storage Pool Striping (rotate extents). Dedicating ranks or extent pools to a single workload will provide more predictable performance, but may cost more in terms of disk capacity to provide the desired level of performance.

Defining multiple ranks in an Extent Pool also provides efficiency in usable space. You can use Capacity Magic to estimate the usable capacity on the ranks for your chosen LUN size. For IBM i workloads, if you have a requirement to isolate workloads, you will need to define two Extent Pools (one for each server) for each workload.

Whichever configuration option you prefer, discuss it with your IBM representative or Business Partner, as our performance modeling tool Disk Magic needs to accurately reflect the configuration that you are planning. If you model a solution where all the disks are shared by all the workloads, then decide to isolate workloads, you may need more disks to achieve the same performance levels.

Data layout

Selecting an appropriate data layout strategy depends on your primary objectives.

Spreading workloads across all components maximizes the utilization of the hardware components. This includes spreading workloads across all the available Host Adapters and Ranks. However, it is always possible when sharing resources that performance problems may arise due to contention on these resources.

To protect critical workloads, you should isolate them, minimizing the chance that non-critical workloads can impact the performance of critical workloads.

The greater the granularity of the resource, the more it can be shared. For example, there is only one cache per processor complex, so its use must be shared, although DS8000 intelligent cache management prevents one workload from dominating the cache. In contrast, there are frequently hundreds of DDMs, so workloads can easily be isolated on different DDMs.

To spread a workload across ranks, you need to balance the IOs of each workload across all the available ranks. SPS will achieve this when you use multi-rank extent pools.

Isolation of workloads is most easily accomplished where each ASP or LPAR has its own extent pool pair. This ensures that you can place data where you intend. IO activity should be balanced between the two servers or controllers on the DS8000; this is achieved by balancing between odd and even extent pools, and by making sure that the number of ranks is balanced between odd and even extent pools.

Make sure that you isolate critical workloads. We strongly recommend placing only IBM i LUNs on any rank (rather than mixing them with non-IBM i LUNs). This is for performance management reasons; it will not in itself provide improved performance. If you mix production and development workloads on ranks, make sure that the customer understands which actions may impact production performance, for example adding LUNs to ASPs.

iASPs

When designing a DS8000 layout for an iASP configuration, you have the option to model the iASP LUNs on isolated Extent Pools. You may achieve more cost-effective performance by putting iASP and SYSBAS LUNs onto the same shared ranks and extent pools, for example when the SYSBAS activity would otherwise drive a requirement for more disks to maintain performance if the ranks were dedicated to SYSBAS. Remember when using Disk Magic to model your iASP configuration that you may need smaller LUNs for the SYSBAS requirement.

LUN Size

LUNs must be defined in specific sizes that emulate IBM i devices. The size of the LUNs defined is typically related to the wait time component of the response time: if there are insufficient LUNs, wait time typically increases. The sizing process determines the correct number of LUNs required to address the required capacity while meeting performance objectives.

The number of LUNs drives the requirement for more FC adapters on the IBM i, due to the addressing restrictions of IBM i. Remember that each path to a LUN counts towards the maximum addressable LUNs on each IBM i IOA. For example, if you have 64 LUNs and would like 2 paths to each LUN, this will require 4 IOAs for addressability on releases prior to IBM i 6.1.
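The arithmetic in this example is simple enough to capture in a small helper (32 addresses per IOA is the pre-IBM i 6.1 limit used above; the 64-address limit of the later IOP-less adapters is covered later in this paper):

```python
import math

# Each path to a LUN consumes one address on an IOA, so the number of
# IOAs needed is ceil(LUNs * paths / addresses-per-IOA).

def ioas_needed(luns, paths_per_lun, addresses_per_ioa=32):
    return math.ceil(luns * paths_per_lun / addresses_per_ioa)
```

For the example above, ioas_needed(64, 2) gives 4; with the 64-address IOP-less adapters the same configuration needs only 2 IOAs.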

Disk Magic can be used to model the number of LUNs required. Disk Magic does not always accurately predict the effective capacity of the ranks, depending on the DDM size selected and the number of spares assigned. The IBM tool Capacity Magic can be used to verify capacity and space utilization plans.

IBM i 6.1 and higher enables the use of larger LUNs while maintaining performance. You can typically start modeling based on a 70 GB LUN size. A smaller number of larger LUNs will reduce the number of IO ports required on both the IBM i and the DS8000. Remember that in an iASP environment you may exploit larger LUNs in the iASPs, but SYSBAS may require more, smaller LUNs to maintain performance.

Multipath

Multipath provides greater resiliency for SAN attached storage. In combination with RAID 5, RAID 6, or RAID 10 protection, DS8000 multipath provides protection of the data paths and the data itself without the requirement for additional LUNs. However, additional IO adapters and changes to the SAN fabric configuration may be required.

The IBM i supports up to 8 paths to each LUN. In addition to the availability considerations, lab performance testing has shown that 2 or 3 paths provide performance improvements when compared to a single path. Typically, 2 paths to a LUN is the ideal balance of price and performance. The Disk Magic tool supports multipathing over 2 paths.

You might want to consider more than 2 paths for workloads where there is high wait time, or where high IO rates are expected to the LUNs, for example SSD-backed LUNs.

It is important to plan for multipath so that the two or more paths to the same set of LUNs use different elements of connection, such as DS6000 and DS8000 host adapters, SAN switches, IBM i IO towers, and HSL loops. Good planning for multipath includes:

- Connections to the same set of LUNs via different DS host cards in different IO enclosures on the DS8000
- Connections to the same set of LUNs via different SAN switches
- Placing the IOP/IOA adapter pairs in the IBM i IO towers which connect to the same set of LUNs in different expansion towers, located on different HSL or 12X loops wherever possible

When an IBM i system IPLs, it discovers all paths to the disk. The first path discovered will be the preferred path for IO. If multiple IBM i LPARs are sharing the same DS8000 or DS6000 host adapters, each system may discover the same initial path to the disk. To avoid contention on SAN switch and HA ports, it is essential that you implement LUN masking in the SAN: specify a different range of ports and HAs for each LPAR to ensure that activity is balanced across all available paths. You can do LUN masking either in the DS8000, using the volume group construct, or by explicitly mapping IBM i IOAs to DS8000 HAs in the SAN switch.
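One simple way to plan such a mapping is to deal each LPAR's IOAs out across the available DS8000 HA ports round-robin, so that no single port carries every first-discovered path. This is only an illustrative planning sketch; the port and LPAR names are hypothetical:

```python
from itertools import cycle

# Illustrative zoning-plan helper (names are hypothetical): spread IOAs
# across HA ports round-robin so activity is balanced across paths.

def zone_plan(ioas, ha_ports):
    ports = cycle(ha_ports)
    return {ioa: next(ports) for ioa in ioas}

plan = zone_plan(
    ["LPAR1-IOA1", "LPAR1-IOA2", "LPAR2-IOA1", "LPAR2-IOA2"],
    ["I0001", "I0101"],   # hypothetical ports in enclosures 0 and 1
)
```

Each LPAR's two IOAs land on ports in different enclosures, and the two ports carry an equal number of connections overall.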


When assigning LUNs to HAs, the recommendation is to isolate IBM i LUNs on their own HA where possible. We recommend that you spread activity across the available HAs; since there is typically little skew in an IBM i workload, this is usually not difficult. We don't allow multiple IBM i HBAs to see multiple HAs, because in that case all of the IBM i HBAs could establish paths through the same HA, which would result in unbalanced IO traffic between the DS8000 Host Adapters.

Host Attachments

IBM i IO adapters should be defined as FC-AL connections for direct attached connections (without a switch). For all switched connections use SCSI-FCP, which is the default for DS8000.

In a pre-IBM i 6.1 multipath environment, it is unlikely that any single host attachment is the bottleneck in an IBM i DS8000 configuration. It is common to combine multiple IBM i FC attachments onto a single DS8000 FC attachment through a SAN. In this case it is important to ensure that you do not over-configure the DS8000 attachment; Disk Magic can be used to model it. The normal range is to combine 2-4 IBM i IOAs onto a single DS8000 host attachment.

IBM i fibre adapters should be placed in accordance with the card placement guidelines found in the Redpaper "PCI, PCI-X, PCI-X DDR and PCIe Placement Rules for IBM System i Models", available for download from http://www.redbooks.ibm.com/abstracts/redp4011.html?Open. A previous, now obsolete, version of this paper is available for releases of code prior to V5R2: "PCI Card Placement Rules for the IBM eServer iSeries Server, OS/400 Version 5 Release 2, September 2003", located on the web at http://www.redbooks.ibm.com/abstracts/redp3638.html?Open.

You are encouraged to consider these additional guidelines:

For 0588, 5088, 5094, 5096, 5294, and 5296 style IO towers, it is recommended to install no more than 1 fibre adapter per multi-adapter bridge (MAB), whether the attachment is for disk or tape. If the MAB is dedicated to disk attachment, then 2 adapters may be considered.

Balance the fibre adapters evenly across the HSL and 12X loops. Always place both the IOP and IOA in 64-bit card slots.

If you are not using multipath, you may find optimum performance is achieved by limiting the LUNs on each IBM i FC card to 20-22.

When spreading activity across the host attachments, you need to make sure that in a multipath configuration, alternate paths are provided to each server of the DS8000.


Multipath connectivity is provided starting with i5/OS V5R3, and is recommended when connecting to the DS8000 for availability and concurrent maintenance.

Connections on the IBM i should be to the 2787 or 5760 cards (5760 cards require V5R3 or higher, and they require an 800, 810, 825, 870, 890, or i5 CPU). We recommend using the first two 2787 cards in each node in a tower, and ensuring that cards are balanced across all towers and HSL rings.

IOPless Host Attachments

IBM i 6.1, together with POWER6 and the new Smart IOAs, introduces IOP-less IOAs for disk and tape. The Smart IOAs are:

- 5735 - 2-Port 8Gb Fibre Smart IOA, PCIe
- 5749 - 2-Port 4Gb Fibre Smart IOA, PCI-X DDR2, IBM i OS only
- 5774 - 2-Port 4Gb Fibre Smart IOA, PCIe, IBM i OS, Linux, and AIX

These new cards provide significant performance advantages and do not require IOPs, thus saving cost and slots. The maximum number of supported addresses is increased from 32 to 64 for each port. Typically you would configure 2 paths to each LUN for availability.

IBM i 6.1 or higher with POWER6 or POWER7 and the IOP-less adapters can support up to 64 LUNs on each port; however, with the move to configure larger LUNs, for most workloads you should limit the total LUNs on a card to 64 (32 on each port). For workloads with a low IO rate you may be able to support more than 32 LUNs on each port.

For IBM i 6.1 or higher with POWER6 or POWER7 configurations and the new IOP-less IOAs, you should plan on a 1:1 ratio between the IBM i IOA ports and the DS8000 IO ports. For the highest-performance configurations, where host attachments are stressed (not likely in an IBM i production workload), you should plan to use only 2 ports of the DS8000 4Gb 4-port HA card. The DS8700 and DS8800 have both 4-port and 8-port 8Gb Host Adapter cards: for maximum performance select the 4-port card; for increased connectivity select the 8-port card. The increased performance capabilities of these new cards can be modeled using Disk Magic.

Card placement guidelines are as follows:

- 6 Smart IOAs per 12X loop
- 4 Smart IOAs per HSL-2 loop


The IOP-less adapters support DS8000 and DS6000 (not ESS). The same adapters may be used for disk and tape, as well as Boot from SAN; however, it is strongly recommended not to share IOAs between disk and tape.

Maximum resource names in IBM i

The System i Handbook and the System Builder document the maximum quantities of internal disk and external LUNs. Some of these maximum limits are not really the maximum quantity of LUNs as we normally think of them, but rather the maximum quantity that will be seen at the hardware level. For example:

Fibre card 1 will have the 1st path to 32 LUNs.
Fibre card 2 will have the 2nd path to the same 32 LUNs.
IBM i microcode and the operating system will see this as 64 resource names, and this count of 64 is what you use when trying to determine whether you're approaching the maximum supported number for the i5 model.

Logical Configuration

When defining IBM i LUNs, you can define them as protected unless you want to use i5/OS mirroring, in which case you need to ensure that you define the LUNs as unprotected when you create them. If you are using Boot from SAN at a release prior to IBM i 6.1, you will have to use mirroring to protect the Load Source; in this case you must define the Load Source LUNs as unprotected.

If you are planning to use Copy Services, either for high availability or just for migrations, it is important to note that the source and target in a Copy Services pair must have the same attributes. Once the LUN is defined, the protection attribute cannot be changed without deleting the LUN and re-defining it. Deleting a LUN deletes all the data on the LUN.

A host adapter port is identified to the DS through its World Wide Port Name (WWPN). A set of host ports can be associated into a port group and managed together. This port group is referred to as a Host attachment in the GUI, and as a Host connection in the DS CLI. A host attachment can be associated with a volume group to define which LUNs will be assigned to that host adapter.

A volume group is a group of logical volumes (LUNs) that are attached to a host adapter. The type of volume group used with open-system hosts determines how the logical volume number is converted to the host-addressable LUN_ID on the Fibre Channel SCSI interface.


When associating a host attachment with a volume group, the host attachment contains attributes that define the logical block size and the address discovery method used by that host adapter. These attributes must be consistent with the type of volume group assigned to that host attachment. i5/OS LUNs are accessed by the host in 520-byte blocks. Of these 520 bytes, 8 are used by the operating system, so each block has 512 usable bytes, as in LUNs for other open systems. So for an i5/OS host attachment, 520 is the correct block size to define, and the correct address discovery method is Report LUN. The correct block size and address discovery for IBM i are generated by specifying IBM i attachment when creating volume groups and host attachments.
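The 520-byte block arithmetic above works out as follows (a trivial sketch):

```python
# Of each 520-byte block, 8 bytes are used by the operating system,
# leaving 512 usable bytes per block, as on other open-systems LUNs.

def usable_bytes(total_blocks, block_size=520, os_overhead=8):
    return total_blocks * (block_size - os_overhead)
```

So a LUN exposed as N 520-byte blocks holds exactly N x 512 bytes of usable data, which is why i5/OS capacities line up with standard open-systems LUN sizes.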

The LUN ID on the DS8000 is a combination of the LSS number and an incremented LUN number.

Adding LUNs to ASP

Adding a LUN to an ASP generates IO activity on the rank as the LUN is formatted. If there is production work sharing the same rank, you may see a performance impact. For this reason it is recommended that you schedule adding LUNs to ASPs outside peak intervals.

Boot from SAN

IOP 2847, plus the appropriate fibre IOA, allows you to place an i5/OS load source on a Fibre Channel attached ESS Model 800, DS6000, or DS8000. The LUNs are attached using features 2766, 2787, or 5760, and apart from the Load Source you can have another 31 LUNs on the same attachment. The Load Source device does not support multipath at V5R4, but other LUNs on the 2847 can be multipath devices. To provide redundancy for the Load Source, you can use two 2847 IOPs and i5/OS mirroring (with the Load Source LUN defined as unprotected). Starting with IBM i 6.1, the LUN containing the load source may participate in a multipath configuration.

Software

It is essential that you ensure you have up-to-date software levels installed. There are fixes that provide performance enhancements, correct performance reporting, and support for new functions. As always, call the support center before installation to verify that you are current with fixes for the hardware that you are installing. It is also important to maintain current software levels to make sure that you benefit from new fixes as they are developed.

When updating storage subsystem LIC, it is also important to check whether any server software updates are required. Details of supported configurations and software levels are provided by the System Storage Interoperation Center: http://www-03.ibm.com/systems/support/storage/config/ssic/displayesssearchwithoutjs.wss?start_over=yes


Performance Monitoring

Once your storage subsystem is installed, it is essential that you continue to monitor its performance. IBM i Performance Tools reports provide information on IO rates and on response times to the server. This allows you to track trends in increased workload and changes in response time. You should review these trends to ensure that your storage subsystem continues to meet your performance and capacity requirements. Make sure that you are current on fixes, to ensure that Performance Tools reports on your external storage correctly.

IBM i 7.1 adds a new category, EXTSTG, to IBM i Collection Services, which provides new performance metrics collected from the DS8000. This function requires DS8000 R4 or later firmware. The data can be presented in graphs using iDoctor today, and will be incorporated into Performance Data Investigator (PDI) in a future release.

If you have multiple servers attached to a storage subsystem, particularly if you have other platforms attached in addition to IBM i, it is essential that you have a performance tool that enables you to monitor performance from the storage subsystem perspective.

IBM TPC for Disk provides a comprehensive tool for managing the performance of DS8000s. You should collect data from all attached storage subsystems at 15-minute intervals. In the event of a performance problem IBM will ask for this data; without it, resolution of any problem may be prolonged.

Copy Services Considerations

Copy Services is an optional feature of the IBM System Storage DS8000. It brings powerful data copying and mirroring technologies to open systems environments, previously available only for mainframe storage.

The following Copy Services functions are available on the DS8000 and are fully supported on IBM i:

- Metro Mirror (previously known as synchronous PPRC)
- Global Mirror (previously known as asynchronous PPRC)
- FlashCopy, including Space Efficient FlashCopy (SEFL)


Customers may use Copy Services on the entire disk space or on individual iASPs. Metro Mirror and Global Mirror provide business continuity and disaster recovery, while FlashCopy helps to minimize the backup window on a production system.

Space Efficient FlashCopy is designed for temporary copies. Copy duration should generally not last longer than 24 hours, unless the source and target volumes have little write activity. FlashCopy SE is optimized for use cases where a small percentage of the source volume is updated during the life of the relationship. If much more than 20% of the source is expected to change, there may be tradeoffs in terms of performance versus space efficiency; in this case, standard FlashCopy may be a good alternative. Since a background copy would update the entire target, it would not make much sense and is not permitted with FlashCopy SE. Likewise, establishing a FlashCopy SE relationship just prior to running applications that make widespread changes to the source volumes (e.g. database reorgs, formats, full volume restores from tape, etc.) is not advisable. For the latest recommendations on using FlashCopy SE, please refer to this document:
IBMers: http://w3-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/FLASH10617
Business Partners: http://partners.boulder.ibm.com/src/atsmastr.nsf/WebIndex/FLASH10617
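As a rough illustration of the 20% guideline (this is not an official IBM sizing formula, and the headroom factor is an assumption), the space a FlashCopy SE relationship consumes is driven mainly by the fraction of the source that is updated while the relationship exists:

```python
# Rough, illustrative repository estimate for FlashCopy SE: space used
# grows with the fraction of the source that changes during the
# relationship. The 1.2 headroom factor is an assumption, not an IBM
# recommendation.

def se_repository_estimate_gb(source_gb, change_fraction, headroom=1.2):
    return source_gb * change_fraction * headroom
```

For example, a 1000 GB source with a 20% expected change rate suggests planning on the order of a few hundred GB of repository space, far less than a full standard FlashCopy target; an official sizing should always be done with IBM tools and guidance.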

Consulting services are available from IBM STG Lab Services to assist in the planning and implementation of DS8000 Copy Services in an IBM i environment.


[Figure: DS8000 Copy Services overview. FlashCopy - for backups and snapshots (copy and no-copy options, and Space Efficient FlashCopy). Peer-to-Peer Remote Copy (continuous copy) variants: Metro Mirror (synchronous) - for local availability; Global Copy (Extended Distance) - for data migration only; Global Mirror (asynchronous) - for DR, using Consistency Groups.]


http://www-03.ibm.com/systems/services/labservices/platforms/labservices_i.html

Further References

For further detailed information on implementing DS6000 and DS8000 in an IBM i environment, refer to the following Redbooks, which can be downloaded from www.redbooks.ibm.com:

IBM System Storage Copy Services on System i A Guide to Planning and Implementation (SG24-7103)

iSeries and IBM TotalStorage A guide to implementing external disk on IBM eServer i5 (SG24-7120)

High Availability on IBM i including PowerHA on i and 61 HA enhancements (SG24-7405)

Acknowledgements

Thanks to Nancy Roper and Jana Jamsek from the Advanced Technical Support Storage group, Sue Baker and Eric Hess from Advanced Technical Support Power Systems, and Selwyn Dickey and Tim Klubertanz from STG Lab Services for their input to and review of this document.


The IBM i server is different in that it takes responsibility for managing the information in auxiliary storage pools (also called disk pools or ASPs).

When you create a file, you estimate how many records it should have. You do not assign it to a storage location; instead, the system places the file in the location that ensures the best performance. In fact, it normally spreads the data in the file across multiple disk units. When you add more records to the file, the system automatically assigns additional space on one or more disk units. Therefore it makes sense to use disk copy functions that operate on either the entire disk space or the IASP.

IBM i uses a single-level storage, object-oriented architecture. It sees all disk space and the main memory as one storage area, and uses the same set of virtual addresses to cover both main memory and disk space. Paging of the objects in this virtual address space is performed in 4 KB pages. However, data is usually blocked and transferred to storage devices in blocks larger than 4 KB. The blocking of transferred data is based on many factors, for example expert cache usage.

DS8000 offers a comprehensive storage solution for IBM i, as long as the guidelines are followed.

Virtual IO Server (VIOS) Support

A new way of connecting the DS8000 family of products is via VIOS. The Virtual IO Server is part of the IBM PowerVM Editions hardware feature on IBM Power Systems. The Virtual IO Server technology facilitates the consolidation of network and disk IO resources, and minimizes the number of physical adapters required in the IBM Power Systems server. It is a special-purpose partition that provides virtual IO resources to its client partitions. The Virtual IO Server actually owns the physical resources that are shared with clients; a physical adapter assigned to the VIOS partition can be used by one or more other partitions.

The Virtual IO Server can provide virtualized storage devices, storage adapters, and network adapters to client partitions running an AIX, IBM i, or Linux operating environment. The core IO virtualization capabilities of the Virtual IO Server are shown below:

- Virtual SCSI
- Virtual Fibre Channel using NPIV (N_Port ID Virtualization)
- Virtual Ethernet bridge using Shared Ethernet Adapter (SEA)

The storage virtualization capabilities of PowerVM and the Virtual IO Server are supported by the DS8000 series, using DS8000 LUNs as VSCSI backing devices in the Virtual IO Server. It is also possible to attach DS8000 LUNs directly to the client LPARs using virtual Fibre Channel adapters via NPIV.

Further information on DS8000 host attachment is found in the following Redbook: http://www.redbooks.ibm.com/redpieces/pdfs/sg248887.pdf

For more information on implementing VIOS with DS8000, please see: http://www-03.ibm.com/systems/resources/systems_i_Virtualization_Open_Storage.pdf

Sizing for performance

Itrsquos important to size a storage subsystem based on IO activity rather than capacity requirements alone This is particularly true of an IBM i environment because of the sensitivity to IO performance IBM has excellent tools for modeling the expected


performance of your workload and configuration. We provide some guidelines and general words of wisdom in this paper; however, these are a starting point only and do not replace sizing with the appropriate tools.

It is equally important to ensure that the sizing requirements for your SAN configuration also take into account the additional resources required when enabling advanced Copy Services functions such as Point-in-Time (PiT) Copy (also known as FlashCopy) or PPRC (Global Mirror and Metro Mirror), particularly if you are planning to enable Metro Mirror.

A Bandwidth Sizing should be conducted for Global Mirror and Metro Mirror

You will need to collect IBM i performance data. Generally, you will collect a week's worth of performance data for each system/LPAR and send the resulting reports.

Each set of reports should include print files for the following:
- System Report - Disk Utilization (Required)
- Component Report - Disk Activity (Required)
- Resource Interval Report - Disk Utilization Detail (Required)
- System Report - Storage Pool Utilization (Required)

Send the report print files as indicated below (send reports in .txt file format). If you are collecting from more than one IBM i system or LPAR, the reports should cover the same time period for each system/LPAR if possible.

DS8000 HBA

DS8800 Host Attachment

The DS8800 supports two basic types of host connection features: Short Wave FCP/FICON and Long Wave FCP/FICON. The FCP/FICON host attachment features are now packaged with two options: a 4-port 8 Gb per second card or an 8-port 8 Gb per second card. With either the 8-port or the 4-port card the connector type is the same, a Lucent Connector (LC). Each port of the FCP/FICON host adapter can be configured as FCP or FICON as needed (but a single port cannot operate as both FCP and FICON simultaneously). All DS8800 host attachment features connect into a PCIe slot in the provided I/O drawers.

FCP/FICON feature codes are:
- 3153 - 8 Gb per second 4-port SW FCP/FICON PCIe adapter
- 3157 - 8 Gb per second 8-port SW FCP/FICON PCIe adapter
- 3253 - 8 Gb per second 4-port LW FCP/FICON PCIe adapter
- 3257 - 8 Gb per second 8-port LW FCP/FICON PCIe adapter


For maximum performance, consider using the 4-port cards; for maximum connectivity, use the 8-port cards.

DS8700 Host Attachment

The DS8700 supports two basic types of host connection features: Short Wave FCP/FICON and Long Wave FCP/FICON. The FCP/FICON host attachment features are packaged as a four-port 4 Gb Lucent Connector (LC) card adapter. Each port of the FCP/FICON host adapter can be configured as FCP or FICON as needed (but a single port cannot operate as both FCP and FICON simultaneously). All DS8700 host attachment features connect into a PCIe slot in the provided I/O drawers.

FCP/FICON feature codes are:
- 3143 - 4-port shortwave 4 Gb FCP/FICON
- 3243 - 4-port longwave 4 Gb FCP/FICON
- 3245 - 4-port longwave 4 Gb FCP/FICON (10 km)
- 3153 - 4-port shortwave 8 Gb FCP/FICON (maximum of 8 per base frame and 8 per first expansion frame)
- 3253 - 4-port longwave 8 Gb FCP/FICON (10 km) (maximum of 8 per base frame and 8 per first expansion frame)

Note: The 8 Gb per second FCP/FICON cards require installation in specific slots in the I/O enclosures and are therefore limited to a maximum of 8 features in the base frame and an additional 8 features in the first expansion frame.

DS8800 and DS8700 Host Attachment Host Port and Installation Sequence Guide and best practices

This guide is updated from time to time. The latest version is here:

http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/TD105671

Best practices guidelines

Isolate host connections from remote copy connections (MM, GM, GC, and MGM) on a host adapter basis.

Isolate zSeries and other host connections from IBM i host connections on a host port basis

Always have symmetric pathing by connection type (i.e., use the same number of paths on all host adapters used by each connection type).

Size the number of host adapters needed based on expected aggregate maximum bandwidth and maximum IOPS (use Disk Magic or other common sizing methods based on actual or expected workload).

Sharing different connection types within an IO enclosure is encouraged


When possible isolate asynchronous from synchronous copy connections on a host adapter basis

When utilizing multipathing, try to zone ports from different I/O enclosures to provide redundancy and balance (i.e., include a port from a host adapter in enclosure 0 and one in enclosure 1).

DS8300 and DS6000 Host Attachment

Contact IBM if you need this information

DS8000 Solid State Drives (SSD)

Perhaps one of the most exciting innovations to happen to enterprise storage is SSD. We have just begun to explore the promising future of this technology. Solid-state storage means using a memory-type device for mass storage, rather than spinning disk or tape. First-to-market devices have the shape of standard hard disks, so they plug easily into existing disk systems.

IBM is making solid-state storage affordable with innovative architectures, system and application integration, and management tools that enable effective use of solid-state storage. Solid-state technologies will continue to evolve, and IBM researchers have been making significant breakthroughs. IBM will continue to bring the best implementations to our customers as innovation allows us to bring the full value of this technology to market.

Solid-state storage technology can have the following benefits:
- Significantly improved performance for hard-to-tune, I/O-bound applications; no code changes required
- Reduced floor space
- Can be filled to near 100% without performance degradation
- Faster IOPS
- Faster access times
- Reduced energy use

DS8000 offers IBM i SSD integration choices: you can either exploit the tools within the IBM i OS or use the DS8000 Easy Tier function, which is independent of the server.

IBM i SSD tools and automation (for internal disk and DS8000):
- Manually:
  - Create an all-SSD user ASP or independent ASP
  - Manually place data onto the user ASP or IASP
- DB2 Media Preference:
  - The user controls what media type database files should be stored on
  - DB files known to be I/O performance critical can explicitly be placed on high-performing SSDs
  - Dynamic changes to media preference are supported, which enables dynamic data movement
- ASP Balancer:
  - Based on read I/O count statistics for each 1 MB extent
  - Migrates "hot" extents from HDDs to SSDs and "cold" extents from SSDs to HDDs
- UDFS Media Preference (new with IBM i 7.1):
  - New 'Unit' parameter on the CRTUDFS command

DS8000 Easy Tier (agnostic to IBM i and DB2):
- IBM System Storage Easy Tier works with IBM i: DS8000 software locates and migrates hot data onto DS8000 SSDs, with any IBM i version
- Advantages of using Easy Tier automatic mode:
  - Designed to be easy: the user is not required to make a lot of decisions or go through an extensive implementation process
  - Efficient use of SSD capacity: Easy Tier moves 1-gigabyte data extents between storage tiers
  - Intelligence: Easy Tier learns about the workload over a period of time (24 hours); as workload patterns change, Easy Tier finds any new highly active ("hot") extents and exchanges them with extents residing on SSDs that may have "cooled off"
  - Negligible performance impact: Easy Tier moves data gradually to avoid contention with host I/O activity; the overhead associated with Easy Tier management is nearly undetectable, and there is no need for storage administrators to worry about scheduling when migrations occur

There are benefits to implementing Easy Tier even if you plan to manage SSDs from the IBM i OS. These benefits are documented in this white paper:

http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/FLASH10755

DS8000 Logical Configuration

The following chart shows the logical configuration constructs for the DS8000


The numbering of the ranks depends on the order in which the ranks were created, and in turn on the order in which the RAID arrays were created on the array sites. The GUI interface for configuring the DS8000 can automatically create RAID arrays and ranks in the same step, and it balances the creation of RAID arrays across DA pairs. If you use the DS CLI to configure the DS8000, we recommend that you also create RAID ranks on the array sites in a consistent manner; this creates a configuration that is more easily managed when you come to create the extent pools and balance the extent pools between the servers and the DA pairs.

The serial number reported by the DS6000 or DS8000 to the IBM i contains the LUN number. In the example below, the serial number of disk unit DD019 is 30-1001000, which is LUN 01 in LSS 10. In the DS6000 or DS8000 this is reported as LUN 1001.


RAID Arrays

The DS8000 allows the choice of RAID 5, RAID 10, and RAID 6. RAID 6 is supported on DS8000 at LIC levels R4 and above, with DA feature codes 3041, 3051, and 3061.

Disk Magic allows you to model different RAID options to find the configuration that best fits your performance and availability requirements. If you are considering a configuration of 'short-stroked' RAID 5 arrays (under-using the capacity of a RAID 5 array for performance reasons), we recommend that you consider RAID 10. The benefit in this case is that there is no requirement to 'fence' spare capacity to maintain performance.

Extent Pools

The Extent Pool construct provides flexibility in data placement as well as ease of management. To optimize performance, it is important to balance workload activity across Extent Pools assigned to server0 and Extent Pools assigned to server1. Typically this means assigning an equal number of ranks to the Extent Pools on server0 and on server1, and an equal amount of workload activity to all ranks.

We recommend assigning even-numbered ranks to even-numbered extent pools; this ensures balance between server 0 and server 1. This requires that you configure the ranks in order on the array sites: array site 1 becomes array 1 and rank 1, and so on.
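A hedged DS CLI sketch of this balancing, assuming four RAID arrays A0-A3 already exist. The pool names are placeholders, and the sketch assumes the system assigns pool IDs P0/P1 and rank IDs R0-R3 in creation order; `-rankgrp 0` and `-rankgrp 1` associate a pool with server0 and server1 respectively:

```shell
# Create one fixed-block extent pool per server (rank group 0 = server0, 1 = server1):
dscli> mkextpool -rankgrp 0 -stgtype fb IBMi_pool_even
dscli> mkextpool -rankgrp 1 -stgtype fb IBMi_pool_odd

# Create ranks on the arrays in a consistent order:
dscli> mkrank -array A0 -stgtype fb
dscli> mkrank -array A1 -stgtype fb
dscli> mkrank -array A2 -stgtype fb
dscli> mkrank -array A3 -stgtype fb

# Assign ranks alternately so each server owns the same number of ranks:
dscli> chrank -extpool P0 R0
dscli> chrank -extpool P1 R1
dscli> chrank -extpool P0 R2
dscli> chrank -extpool P1 R3
```

Verify the resulting layout with `lsextpool` and `lsrank` before creating volumes.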

The use of multi-rank extent pools allows you to define LUNs larger than the size of a single rank. At DS8000 licensed internal code levels prior to Licensed Internal Code


(LIC) Release 3, a LUN is not 'striped' across all the ranks in the extent pool; the only time a LUN spans multiple ranks is when it did not fit in the original rank. Therefore a LUN will usually use no more than 6 or 7 disk arms. When allocating multiple LUNs into a multi-rank extent pool, each LUN is allocated on the rank with the most available free space; this results in a 'round robin' style of allocation and distributes LUNs onto the ranks in a roughly even fashion, assuming that the LUNs and ranks are the same size.

DS8000 Licensed Internal Code (LIC) Release 3 introduced a new allocation algorithm, Storage Pool Striping (SPS). This allows finer-granularity striping across all the ranks in an extent pool and provides substantial performance benefits for some workloads.

For IBM i attached subsystems, we recommend using multi-rank extent pools in combination with Storage Pool Striping (rotate extents). Dedicating ranks or extent pools to a single workload will provide more predictable performance, but may cost more in terms of the disk capacity needed to provide the desired level of performance.

Defining multiple ranks in an Extent Pool also provides efficiency in usable space. You can use Capacity Magic to estimate the usable capacity on the ranks for your chosen LUN size. For IBM i workloads, if you have a requirement to isolate workloads, you will need to define two Extent Pools (one for each server) for each workload.

Whichever configuration option you prefer, discuss it with your IBM representative or Business Partner, as our performance modeling tool Disk Magic needs to accurately reflect the configuration that you are planning. If you model a solution where all the disks are shared by all the workloads, then decide to isolate workloads, you may need more disks to achieve the same performance levels.

Data layout

Selecting an appropriate data layout strategy depends on your primary objectives

Spreading workloads across all components maximizes the utilization of the hardware. This includes spreading workloads across all the available host adapters and ranks. However, it is always possible when sharing resources that performance problems may arise due to contention on those resources.

To protect critical workloads you should isolate them minimizing the chance that non-critical workloads can impact the performance of critical workloads

The greater the granularity of the resource, the more it can be shared. For example, there is only one cache per processor complex, so its use must be shared, although DS8000 intelligent cache management prevents one workload from dominating the cache. In


contrast, there are frequently hundreds of DDMs, so workloads can easily be isolated on different DDMs.

To spread a workload across ranks, you need to balance its I/Os across all the available ranks. SPS will achieve this when you use multi-rank extent pools.

Isolation of workloads is most easily accomplished when each ASP or LPAR has its own extent pool pair. This ensures that you can place data where you intend. I/O activity should be balanced between the two servers or controllers on the DS8000; this is achieved by balancing between odd and even extent pools and making sure that the number of ranks is balanced between odd and even extent pools.

Make sure that you isolate critical workloads. We strongly recommend placing only IBM i LUNs on any given rank (rather than mixing with non-IBM i LUNs). This is for performance management reasons; it will not in itself provide improved performance. If you mix production and development workloads on ranks, make sure that the customer understands which actions may impact production performance, for example adding LUNs to ASPs.

iASPs

When designing a DS8000 layout for an iASP configuration, you have the option to model the iASP LUNs on isolated Extent Pools. You may achieve more cost-effective performance by putting iASP and SYSBAS LUNs onto the same shared ranks and extent pools, for example when the SYSBAS activity alone would drive a requirement for more disks to maintain performance if the ranks were dedicated to SYSBAS. Remember when using Disk Magic to model your iASP configuration that you may need smaller LUNs for the SYSBAS requirement.

LUN Size

LUNs must be defined in specific sizes that emulate IBM i devices. The size of the LUNs defined is typically related to the wait time component of the response time: if there are insufficient LUNs, wait time typically increases. The sizing process determines the correct number of LUNs required to provide the required capacity while meeting performance objectives.

The number of LUNs drives the requirement for more FC adapters on the IBM i, due to the addressing restrictions of IBM i. Remember that each path to a LUN counts towards the maximum addressable LUNs on each IBM i IOA. For example, if you have 64 LUNs and would like 2 paths to each LUN, this requires 4 IOAs for addressability on releases prior to IBM i 6.1.
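The addressing arithmetic above can be sketched as follows. The 32-address limit applies to IOP-based IOAs on releases prior to IBM i 6.1; the 64-addresses-per-port figure for IOP-less Smart IOAs is described later in this paper. The function name is illustrative, not an IBM tool:

```python
import math

def ioas_required(luns, paths_per_lun, addresses_per_ioa=32):
    """Each path to a LUN consumes one addressable unit on an IOA."""
    return math.ceil(luns * paths_per_lun / addresses_per_ioa)

# 64 LUNs, 2 paths each, on pre-6.1 IOP-based adapters (32 addresses per IOA):
print(ioas_required(64, 2))      # -> 4
# The same LUNs on IOP-less Smart IOA ports (64 addresses per port):
print(ioas_required(64, 2, 64))  # -> 2
```

Use this only as a quick sanity check; Disk Magic remains the tool for the performance side of the sizing.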

Disk Magic can be used to model the number of LUNs required. Disk Magic does not always accurately predict the effective capacity of the ranks, depending on the DDM size


selected and the number of spares assigned. The IBM tool Capacity Magic can be used to verify capacity and space utilization plans.

IBM i 6.1 and higher enables the use of larger LUNs while maintaining performance. You can typically start modeling based on a 70 GB LUN size. A smaller number of larger LUNs reduces the number of I/O ports required on both the IBM i and the DS8000. Remember that in an iASP environment you may exploit larger LUNs in the iASPs, but SYSBAS may require more, smaller LUNs to maintain performance.

Multipath

Multipath provides greater resiliency for SAN-attached storage. In combination with RAID 5, RAID 6, or RAID 10 protection, DS8000 multipath provides protection of the data paths and the data itself without the requirement for additional LUNs. However, additional I/O adapters and changes to the SAN fabric configuration may be required.

The IBM i supports up to 8 paths to each LUN. In addition to the availability considerations, lab performance testing has shown that 2 or 3 paths provide performance improvements when compared to a single path. Typically, 2 paths to a LUN is the ideal balance of price and performance. The Disk Magic tool supports multipathing over 2 paths.

You might want to consider more than 2 paths for workloads where there is high wait time or where high IO rates are expected to LUNs for example SSD-backed LUNs

It is important to plan for multipath so that the two or more paths to the same set of LUNs use different elements of connection, such as DS6000 and DS8000 host adapters, SAN switches, IBM i I/O towers, and HSL loops. Good planning for multipath includes:

- Connections to the same set of LUNs via different DS host cards in different I/O enclosures on the DS8000
- Connections to the same set of LUNs via different SAN switches
- The IOP/IOA adapter pairs in the IBM i I/O towers which connect to the same set of LUNs should ideally be in different expansion towers, located on different HSL or 12X loops wherever possible

When an IBM i system IPLs, it discovers all paths to the disk. The first path discovered will be the preferred path for I/O. If multiple IBM i LPARs share the same DS8000 or DS6000 host adapters, each system may discover the same initial path to the disk. To avoid contention on SAN switch and HA ports, it is essential that you implement LUN masking in the SAN: specify a different range of ports and HAs for each LPAR to ensure that activity is balanced across all available paths. You can do LUN masking either in the DS8000, using the volume group construct, or by explicitly mapping IBM i IOAs to DS8000 HAs in the SAN switch.


When assigning LUNs to HAs, the recommendation is to isolate IBM i LUNs on their own HA where possible. We recommend that you spread activity across the available HAs; since there is typically little skew in an IBM i workload, this is usually not difficult. We don't allow multiple IBM i HBAs to see multiple HAs, because in this case all of the IBM i HBAs could establish paths through the same HA, which would result in unbalanced I/O traffic between the DS8000 host adapters.

Host Attachments

IBM i I/O adapters should be defined as FC-AL connections for direct-attached connections (without a switch). For all switched connections, use SCSI-fcp, which is the default for DS8000.
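On the DS8000 side, the port topology can be set with the DS CLI. A hedged sketch follows; the port IDs are placeholders, and we believe the relevant `setioport` topology values are `fcal` for direct attachment and `scsifcp` for switched fabric, but verify against your DS CLI level:

```shell
# Direct (arbitrated loop) attachment to an IBM i IOA:
dscli> setioport -topology fcal I0030
# Switched fabric attachment (the DS8000 default):
dscli> setioport -topology scsifcp I0031
```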

In a pre-IBM i 6.1 multipath environment, it is unlikely that any single host attachment is the bottleneck in an IBM i DS8000 configuration. It is common to combine multiple IBM i FC attachments onto a single DS8000 FC attachment through a SAN. In this case it is important to ensure that you do not over-configure the DS8000 attachment; Disk Magic can be used to model this. The normal range is to combine 2-4 IBM i IOAs onto a single DS8000 host attachment.

IBM i fibre adapters should be placed in accordance with the card placement guidelines that can be found in the Redpaper "PCI, PCI-X, PCI-X DDR, and PCIe Placement Rules for IBM System i Models", available for download from http://www.redbooks.ibm.com/abstracts/redp4011.html?Open. A previous, now obsolete, version of this paper is available for releases of code prior to V5R2: "PCI Card Placement Rules for the IBM eServer iSeries Server, OS/400 Version 5 Release 2, September 2003", located on the web at http://www.redbooks.ibm.com/abstracts/redp3638.html?Open.

You are encouraged to consider these additional guidelines

For 0588, 5088, 5094, 5096, 5294, and 5296 style I/O towers, it is recommended to install no more than 1 fibre adapter per multi-adapter bridge (MAB), whether the attachment is for disk or tape. If the MAB is dedicated to disk attachment, then 2 adapters may be considered.

Balance the fibre adapters evenly across the HSL and 12X loops. Always place both the IOP and IOA in 64-bit card slots.

If you are not using multipath you may find optimum performance is achieved by limiting the LUNs on each IBM i FC card to 20-22

When spreading activity across the host attachments you need to make sure that in a multi-path configuration alternate paths are provided to each server of the DS8000


Multipath connectivity is provided starting with i5/OS V5R3 and is recommended when connecting to the DS8000, for availability and concurrent maintenance.

Connections on the IBM i should be to the 2787 or 5760 cards (5760 cards require V5R3 or higher, and they require an 800, 810, 825, 870, 890, or i5 CPU). We recommend using the first two 2787 cards in each node in a tower and ensuring that cards are balanced across all towers and HSL rings.

IOPless Host Attachments

IBM i 6.1, together with POWER6 and the new Smart IOAs, introduces IOP-less IOAs for disk and tape. The Smart IOAs are:

- 5735 - 2-port 8 Gb Fibre Smart IOA, PCIe
- 5749 - 2-port 4 Gb Fibre Smart IOA, PCI-X DDR2, IBM i OS only
- 5774 - 2-port 4 Gb Fibre Smart IOA, PCIe, IBM i OS, Linux, and AIX

These new cards provide significant performance advantages and do not require IOPs, thus saving cost and slots. The maximum number of supported addresses is increased from 32 to 64 for each port. Typically you would configure 2 paths to each LUN for availability.

IBM i 6.1 or higher with POWER6 or POWER7 and the IOP-less adapters can support up to 64 LUNs on each port; however, with the move to configure larger LUNs, for most workloads you should limit the total LUNs on a card to 64 (32 on each port). For workloads with a low I/O rate, you may be able to support more than 32 LUNs on each port.

For IBM i 6.1 or higher with POWER6 or POWER7 configurations and the new IOP-less IOAs, you should plan on a 1:1 ratio between the IBM i IOA ports and the DS8000 I/O ports. For the highest-performance configurations where host attachments are stressed (not likely in an IBM i production workload), you should plan to use only 2 ports of the DS8000 4 Gb 4-port HA card. The DS8700 and DS8800 have both 4-port and 8-port 8 Gb host adapter cards; for maximum performance select the 4-port card, for increased connectivity select the 8-port card. The increased performance capabilities of these new cards can be modeled using Disk Magic.

Card placement guidelines are as follows

- 6 Smart IOAs per 12X loop
- 4 Smart IOAs per HSL-2 loop


The IOP-less adapters support DS8000 and DS6000 (not ESS). The same adapters may be used for disk and tape, as well as for Boot from SAN; however, it is strongly recommended not to share IOAs between disk and tape.

Maximum resource names in IBM i

The System i Handbook and the System Builder document the maximum quantities of internal disks and external LUNs. For some of these limits, this isn't really the maximum quantity of LUNs as we normally think of them, but the maximum quantity that will be seen at the hardware level. For example:

Fibre card 1 will have the 1st path to 32 LUNs.
Fibre card 2 will have the 2nd path to the same 32 LUNs.
IBM i microcode and the operating system will see this as 64 resource names, and this count of 64 is what you use when determining whether you are approaching the maximum supported number for the i5 model.
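In other words, resource names are counted per path, not per LUN. A minimal sketch of the check; the maximum itself is model-dependent, so `system_max` below is a hypothetical placeholder, not a documented limit:

```python
def resource_names(luns, paths_per_lun):
    """IBM i counts one disk resource name per path, not per LUN."""
    return luns * paths_per_lun

# 32 LUNs seen through 2 fibre cards (2 paths each):
names = resource_names(32, 2)
print(names)  # -> 64

# Compare against the model's documented maximum (illustrative value only;
# check the System i Handbook / System Builder for your model):
system_max = 2047  # assumption for illustration
print(names <= system_max)  # -> True
```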

Logical Configuration

When defining IBM i LUNs, you can define them as protected, unless you want to use i5/OS mirroring, in which case you need to ensure that you define the LUNs as unprotected when you create them. If you are using Boot from SAN at a release prior to IBM i 6.1, you will have to use mirroring to protect the Load Source; in this case you must define the Load Source LUNs as unprotected.

If you are planning to use Copy Services, either for high availability or just for migrations, it is important to note that the source and target in a Copy Services pair must have the same attributes. Once a LUN is defined with the protection attribute, this cannot be changed without deleting the LUN and redefining it. Deleting a LUN deletes all the data on the LUN.

A host adapter port is identified to the DS through its World Wide Port Name (WWPN). A set of host ports can be associated into a port group and managed together. This port group is referred to as a host attachment in the GUI and as a host connection in the DS CLI. A host attachment can be associated with a volume group to define which LUNs will be assigned to that host adapter.

A volume group is a group of logical volumes (LUNs) that are attached to a host adapter. The type of volume group used with open-system hosts determines how the logical volume number is converted to the host-addressable LUN_ID on the Fibre Channel SCSI interface.


When associating a host attachment with a volume group, the host attachment contains attributes that define the logical block size and the address discovery method used by that host adapter. These attributes must be consistent with the type of volume group assigned to that host attachment. i5/OS LUNs are accessed by the host in 520-byte blocks. Of these 520 bytes, 8 are used by the operating system, so each block has 512 usable bytes, as in LUNs for other open systems. So for an i5/OS host attachment, 520 is the correct block size to define, and the correct address discovery method is Report LUN. The correct block size and address discovery for IBM i are generated automatically by specifying IBM i attachment types when creating volume groups and host attachments.

The LUN ID on the DS8000 is a combination of the LSS number and an incremented LUN number.
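A hedged DS CLI sketch of the constructs above. The pool ID, WWPN, volume IDs, and names are placeholders, and the sketch assumes the volume group is assigned ID V11; we believe the `-os400` volume models (A05 for a protected ~70 GB LUN) together with the `iSeries` host type select the 520-byte block size and Report LUN discovery automatically, but verify against your DS CLI level:

```shell
# Create a volume group using the IBM i (OS/400) mask type:
dscli> mkvolgrp -type os400mask IBMi_prod_vg

# Create protected IBM i LUNs (model A05, ~70 GB) in extent pool P0;
# volume IDs 1000-1003 place them in LSS 10 as LUNs 00-03
# (matching the serial number example earlier in this paper):
dscli> mkfbvol -extpool P0 -os400 A05 -name prod_#d 1000-1003

# Add the volumes to the volume group, then define the host connection
# for one IBM i IOA port (placeholder WWPN):
dscli> chvolgrp -action add -volume 1000-1003 V11
dscli> mkhostconnect -wwname 10000000C9123456 -hosttype iSeries -volgrp V11 IBMi_prod_port1
```

For mirrored or pre-6.1 Boot from SAN load source LUNs, the unprotected models (A85 and so on) would be used instead, per the protection discussion above.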

Adding LUNs to ASP

Adding a LUN to an ASP generates I/O activity on the rank as the LUN is formatted. If there is production work sharing the same rank, you may see a performance impact. For this reason it is recommended that you schedule adding LUNs to ASPs outside peak intervals.

Boot from SAN

IOP 2847, plus the appropriate Fibre IOA, allows you to place an i5/OS load source on a Fibre Channel attached ESS Model 800, DS6000, or DS8000. The LUNs are attached using features 2766, 2787, or 5760, and apart from the Load Source you can have another 31 LUNs on the same attachment. The Load Source device does not support multipath at V5R4, but other LUNs on the 2847 can be multipath devices. To provide redundancy for the Load Source, you can use two 2847s and i5/OS mirroring (with the Load Source LUN defined as unprotected). Starting with IBM i 6.1, the LUN containing the load source may participate in a multipath configuration.

Software

It is essential to ensure that you have up-to-date software levels installed. There are fixes that provide performance enhancements, correct performance reporting, and support for new functions. As always, call the support center before installation to verify that you are current with fixes for the hardware that you are installing. It is also important to maintain current software levels to make sure that you get the benefit of new fixes as they are developed.

When updating storage subsystem LIC, it is also important to check whether any server software updates are required. Details of supported configurations and software levels are provided by the System Storage Interoperation Center: http://www-03.ibm.com/systems/support/storage/config/ssic/displayesssearchwithoutjs.wss?start_over=yes


Performance Monitoring

Once your storage subsystem is installed, it is essential that you continue to monitor its performance. IBM i Performance Tools reports provide information on I/O rates and on response times to the server. This allows you to track trends in increased workload and changes in response time. You should review these trends to ensure that your storage subsystem continues to meet your performance and capacity requirements. Make sure that you are current on fixes, to ensure that Performance Tools reports your external storage correctly.

IBM i 7.1 adds a new category, EXTSTG, to IBM i Collection Services, which provides new performance metrics collected from the DS8000. This function requires DS8000 R4 or later firmware. Data can be presented in graphs using iDoctor today, and will be incorporated into Performance Data Investigator (PDI) in a future release.

If you have multiple servers attached to a storage subsystem particularly if you have other platforms attached in addition to IBM i it is essential that you have a performance tool that enables you to monitor the performance from the storage subsystem perspective

IBM TPC for Disk provides a comprehensive tool for managing the performance of DS8000s. You should collect data from all attached storage subsystems at 15-minute intervals. In the event of a performance problem, IBM will ask for this data; without it, resolution of any problem may be prolonged.

Copy Services Considerations

Copy Services is an optional feature of the IBM System Storage DS8000. It brings powerful data copying and mirroring technologies, previously available only for mainframe storage, to open systems environments.

The following Copy Services functions are available on the DS8000 and are fully supported on IBM i:
- Metro Mirror (previously known as synchronous PPRC)
- Global Mirror (previously known as asynchronous PPRC)
- FlashCopy, including Space Efficient FlashCopy (FlashCopy SE)


Customers may use Copy Services on the entire disk space or on individual IASPs. Metro Mirror and Global Mirror provide business continuity and disaster recovery, while FlashCopy helps to minimize the backup window on a production system.

Space Efficient FlashCopy is designed for temporary copies. Copy duration should generally not last longer than 24 hours unless the source and target volumes have little write activity. FlashCopy SE is optimized for use cases where a small percentage of the source volume is updated during the life of the relationship. If much more than 20% of the source is expected to change, there may be tradeoffs in terms of performance versus space efficiency; in this case, standard FlashCopy may be considered as a good alternative. Since background copy would update the entire target, it would not make much sense and is not permitted with FlashCopy SE. Likewise, establishing a FlashCopy SE relationship just prior to running applications that make widespread changes to the source volumes (e.g. database reorgs, formats, full volume restores from tape, etc.) is not advisable. For the latest recommendations on using FlashCopy SE, please refer to this document. IBMers: http://w3-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/FLASH10617 Business Partners: http://partners.boulder.ibm.com/src/atsmastr.nsf/WebIndex/FLASH10617
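As a rough illustration of that guideline, the sketch below estimates FlashCopy SE repository capacity from the expected change rate. This is not an IBM sizing tool: the linear estimate and the 20% headroom factor are assumptions for illustration only.

```python
def se_repository_estimate(source_gb: float, change_fraction: float,
                           headroom: float = 1.2) -> float:
    """Rough FlashCopy SE repository sizing: only data changed on the
    source consumes repository space, so size for the expected change
    rate plus some headroom (the 1.2 factor here is illustrative)."""
    return source_gb * change_fraction * headroom

def prefer_standard_flashcopy(change_fraction: float) -> bool:
    """Per the guideline above: much more than ~20% change during the
    life of the relationship favours standard FlashCopy over SE."""
    return change_fraction > 0.20
```

For example, a 1 TB source expected to change 25% during the relationship would suggest roughly 300 GB of repository space under these assumptions, and a 50% change rate would suggest standard FlashCopy instead.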

Consulting services are available from the IBM STG Lab Services to assist in the planning and implementation of DS8000 Copy Services in an IBM i environment


[Figure: DS8000 Copy Services options. Metro Mirror, Global Copy, and Global Mirror are forms of Peer-to-Peer Remote Copy (continuous copy).]
- FlashCopy: for backups and snapshots (copy, no-copy, and Space Efficient FlashCopy options)
- Metro Mirror (synchronous): for local availability
- Global Copy (Extended Distance): for data migration only
- Global Mirror (asynchronous): for DR, using Consistency Groups


http://www-03.ibm.com/systems/services/labservices/platforms/labservices_i.html

Further References

For further detailed information on implementing DS6000 and DS8000 in an IBM i environment, refer to the following Redbooks. These can be downloaded from www.redbooks.ibm.com:

IBM System Storage Copy Services on System i A Guide to Planning and Implementation (SG24-7103)

iSeries and IBM TotalStorage: A Guide to Implementing External Disk on IBM eServer i5 (SG24-7120)

High Availability on IBM i including PowerHA on i and 61 HA enhancements (SG24-7405)

Acknowledgements

Thanks to Nancy Roper and Jana Jamsek from the Advanced Technical Support Storage group; Sue Baker and Eric Hess from Advanced Technical Support Power Systems; and Selwyn Dickey and Tim Klubertanz from STG Lab Services for their input to, and review of, this document.


network and disk I/O resources and minimizes the number of required physical adapters in the IBM Power Systems server. It is a special-purpose partition that provides virtual I/O resources to its client partitions. The Virtual I/O Server actually owns the physical resources that are shared with clients; a physical adapter assigned to the VIOS partition can be used by one or more other partitions.

The Virtual I/O Server can provide virtualized storage devices, storage adapters and network adapters to client partitions running an AIX, IBM i or Linux operating environment. The core I/O virtualization capabilities of the Virtual I/O Server are shown below:
- Virtual SCSI
- Virtual Fibre Channel using NPIV (N_Port ID Virtualization)
- Virtual Ethernet bridge using Shared Ethernet Adapter (SEA)

The storage virtualization capabilities of PowerVM and the Virtual I/O Server are supported by the DS8000 series, using DS8000 LUNs as VSCSI backing devices in the Virtual I/O Server. It is also possible to attach DS8000 LUNs directly to the client LPARs using virtual Fibre Channel adapters via NPIV.

Further information on DS8000 host attachment is found in the following Redbook: http://www.redbooks.ibm.com/redpieces/pdfs/sg248887.pdf

For more information on implementing VIOS with DS8000, please see this Redbook: http://www-03.ibm.com/systems/resources/systems_i_Virtualization_Open_Storage.pdf

Sizing for performance

It's important to size a storage subsystem based on I/O activity rather than capacity requirements alone. This is particularly true of an IBM i environment because of its sensitivity to I/O performance. IBM has excellent tools for modeling the expected


performance of your workload and configuration. We provide some guidelines and general words of wisdom in this paper; however, these provide a starting point only for sizing with the appropriate tools.

It is equally important to ensure that the sizing requirements for your SAN configuration also take into account the additional resources required when enabling advanced Copy Services functions such as Point-in-Time (PiT) Copy (also known as FlashCopy) or PPRC (Global Mirror and Metro Mirror), particularly if you are planning to enable Metro Mirror.

A bandwidth sizing should be conducted for Global Mirror and Metro Mirror.

You will need to collect IBM i performance data. Generally you will collect a week's worth of performance data for each system/LPAR and send the resulting reports.

Each set of reports should include print files for the following:
- System Report - Disk Utilization (required)
- Component Report - Disk Activity (required)
- Resource Interval Report - Disk Utilization Detail (required)
- System Report - Storage Pool Utilization (required)

Send the report print files as indicated below (send reports as .txt file format). If you are collecting from more than one IBM i or LPAR, the reports need to be for the same time period for each system/LPAR if possible.

DS8000 HBA

DS8800 Host Attachment

The DS8800 supports two basic types of host connection features: Short Wave FCP/FICON and Long Wave FCP/FICON. The FCP/FICON host attachment features now come packaged with two options: a 4-port 8 Gb-per-second card or an 8-port 8 Gb-per-second card. With either the 8-port or the 4-port card, the connector type is the same, a Lucent Connector (LC). Each port of the FCP/FICON host adapter can be configured as FCP or FICON as needed (but a single port cannot operate as both FCP and FICON simultaneously). All DS8800 host attachment features are designed to connect into a PCIe slot in the provided I/O drawers.

FCP/FICON feature codes are:
- 3153 - 8 Gb-per-second 4-port SW FCP/FICON PCIe adapter
- 3157 - 8 Gb-per-second 8-port SW FCP/FICON PCIe adapter
- 3253 - 8 Gb-per-second 4-port LW FCP/FICON PCIe adapter
- 3257 - 8 Gb-per-second 8-port LW FCP/FICON PCIe adapter


For maximum performance, you should consider using the 4-port cards; for maximum connectivity, you should use the 8-port cards.

DS8700 Host Attachment

The DS8700 supports two basic types of host connection features: Short Wave FCP/FICON and Long Wave FCP/FICON. The FCP/FICON host attachment features come packaged as a four-port 4 Gb Lucent Connector (LC) card adapter. Each port of the FCP/FICON host adapter can be configured as FCP or FICON as needed (but a single port cannot operate as both FCP and FICON simultaneously). All DS8700 host attachment features are designed to connect into a PCIe slot in the provided I/O drawers.

FCP/FICON feature codes are:
- 3143 - 4-port shortwave 4 Gb FCP/FICON
- 3243 - 4-port longwave 4 Gb FCP/FICON
- 3245 - 4-port longwave 4 Gb FCP/FICON (10 km)
- 3153 - 4-port shortwave 8 Gb FCP/FICON (maximum of 8 per base frame and 8 per first expansion frame)
- 3253 - 4-port longwave 8 Gb FCP/FICON (10 km) (maximum of 8 per base frame and 8 per first expansion frame)

Note: The 8 Gb-per-second FCP/FICON cards require installation in specific slots in the I/O enclosures and therefore are limited to a maximum of 8 features in the base frame and an additional 8 features in the first expansion frame.

DS8800 and DS8700 Host Attachment Host Port and Installation Sequence Guide and best practices

This guide is updated from time to time. The latest version is here:

http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/TD105671

Best practices guidelines

Isolate host connections from remote copy connections (MM, GM, GC and MGM) on a host adapter basis.

Isolate zSeries and other host connections from IBM i host connections on a host port basis.

Always have symmetric pathing by connection type (i.e., use the same number of paths on all host adapters used by each connection type).

Size the number of host adapters needed based on expected aggregate maximum bandwidth and maximum IOPS (use Disk Magic or other common sizing methods based on actual or expected workload).

Sharing different connection types within an I/O enclosure is encouraged.


When possible isolate asynchronous from synchronous copy connections on a host adapter basis

When utilizing multipathing, try to zone ports from different I/O enclosures to provide redundancy and balance (i.e., include ports from a host adapter in enclosure 0 and enclosure 1).
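The two zoning guidelines above can be checked mechanically. A minimal sketch follows; the port naming used here ('I' followed by two enclosure digits) is an assumed convention for illustration, not the DS8000's documented port-ID format.

```python
def pathing_is_symmetric(paths_by_adapter):
    """Symmetric pathing: every host adapter used by a connection
    type should carry the same number of paths."""
    return len(set(paths_by_adapter.values())) <= 1

def spans_multiple_enclosures(ports):
    """A multipath zone should include ports from host adapters in at
    least two different I/O enclosures (e.g. enclosure 0 and 1).  Here
    the two characters after 'I' are read as the enclosure number."""
    return len({p[1:3] for p in ports}) >= 2
```

For example, a zone with ports `I0030` and `I0130` spans two enclosures, while `I0030` and `I0031` does not.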

DS8300 and DS6000 Host Attachment

Contact IBM if you need this information

DS8000 Solid State Drives (SSD)

Perhaps one of the most exciting innovations to happen to enterprise storage is SSD. We have just begun to explore the promising future of this technology. Solid-state storage means using a memory-type device for mass storage, rather than spinning disk or tape. First-to-market devices are the shape of standard hard disks, so they plug easily into existing disk systems.

IBM is making solid-state storage affordable with innovative architectures, system and application integration, and management tools that enable effective use of solid-state storage. Solid-state technologies will continue to evolve, and IBM researchers have been making significant breakthroughs. IBM will continue to bring the best implementations to our customers as innovation allows us to bring the full value of this technology to market.

Solid-state storage technology can have the following benefits:
- Significantly improved performance for hard-to-tune, I/O-bound applications; no code changes required
- Reduced floor space
- Can be filled near 100% without performance degradation
- Faster IOPS
- Faster access times
- Reduced energy use

DS8000 offers IBM i SSD integration choices: you can either exploit the tools within the IBM i OS, or use the DS8000 Easy Tier function, which is independent of the server.

IBM i SSD tools and automation (for internal and DS8000 SSDs):
- Manually:
  - Create an all-SSD User ASP or Independent ASP
  - Manually place data onto the User ASP or IASP
- DB2 Media Preference:
  - The user controls what media type database files should be stored on
  - DB files known to be I/O performance critical can explicitly be placed on high-performing SSDs
  - Dynamic changes to media preference are supported, which enables dynamic data movement
- ASP Balancer:
  - Based on read I/O count statistics for each 1 MB extent
  - Migrates "hot" extents from HDDs to SSDs and "cold" extents from SSDs to HDDs
- UDFS Media Preference (new with IBM i 7.1):
  - New 'Unit' parameter on the CRTUDFS command

DS8000 (agnostic to IBM i and DB2):
- IBM System Storage Easy Tier works with IBM i: DS8000 software locates and migrates hot data onto DS8000 SSDs, with any IBM i version
- Advantages of using Easy Tier Automatic Mode:
  - Designed to be easy: the user is not required to make a lot of decisions or go through an extensive implementation process
  - Efficient use of SSD capacity: Easy Tier moves 1-gigabyte data extents between storage tiers
  - Intelligence: Easy Tier learns about the workload over a period of time (24 hours); as workload patterns change, Easy Tier finds any new highly active ("hot") extents and exchanges them with extents residing on SSDs that may have "cooled off"
  - Negligible performance impact: Easy Tier moves data gradually to avoid contention with host I/O activity; the overhead associated with Easy Tier management is nearly undetectable, and there is no need for storage administrators to worry about scheduling when migrations occur

There are benefits to implementing Easy Tier even if you plan to manage SSDs from the IBM i OS. These benefits are documented in this white paper:

http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/FLASH10755
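The hot/cold extent idea behind both the ASP balancer and Easy Tier can be sketched in a few lines. This is an illustrative simplification only: the real algorithms also weigh workload history, write activity, and migration pacing.

```python
def pick_hot_extents(read_counts, ssd_capacity_extents):
    """Rank extents (1 MB for the ASP balancer, 1 GB for Easy Tier)
    by read I/O count and keep the hottest ones that fit on the SSD
    tier; everything else stays on (or returns to) HDD."""
    hottest = sorted(read_counts, key=read_counts.get, reverse=True)
    return set(hottest[:ssd_capacity_extents])
```

For example, given extents with read counts {1: 500, 2: 10, 3: 300, 4: 5} and room for two extents on SSD, extents 1 and 3 would be selected.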

DS8000 Logical Configuration

The following chart shows the logical configuration constructs for the DS8000


The numbering of the ranks depends on the order in which the ranks were created, and in turn on the order in which the RAID arrays were created on the array sites. The GUI interface for configuring the DS8000 can automatically create RAID arrays and ranks in the same step, and balances the creation of RAID arrays across DA pairs. We recommend that, if you use the DS CLI to configure the DS8000, you also create RAID ranks on the array sites in a consistent manner; this creates a configuration that is more easily managed when you come to create the extent pools and balance the extent pools between the servers and the DA pairs.

The serial number reported by the DS6000 or DS8000 to the IBM i contains the LUN number. In the example below, the serial number of disk unit DD019 is 30-1001000, which is LUN 01 in LSS 10. In the DS6000 or DS8000 this is reported as LUN 1001.
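Based on that example, the LSS and LUN can be read straight out of the reported serial number. A minimal sketch, assuming the '30-' prefix and the four-digit volume ID layout shown in the example:

```python
def lss_lun_from_serial(serial: str):
    """Extract (LSS, LUN) from an IBM i disk unit serial such as
    '30-1001000': the first four digits after the dash are the
    DS8000 volume ID, LSS first, then LUN within the LSS."""
    volume_id = serial.split('-')[1][:4]
    return volume_id[:2], volume_id[2:]
```

For the serial 30-1001000 this returns LSS '10' and LUN '01', matching the LUN 1001 reported on the DS8000 side.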


RAID Arrays

The DS8000 allows the choice of RAID 5, RAID 10 and RAID 6. RAID 6 is supported on DS8000 at LIC levels R4 and above, with DA feature codes 3041, 3051 and 3061.

Disk Magic allows you to model different RAID options to find the configuration that best fits your performance and availability requirements. If you are considering a configuration of 'short-stroked' RAID 5 arrays (under-using the capacity of a RAID 5 array for performance reasons), we recommend that you consider RAID 10. The benefit in this case is that there is no requirement to 'fence' spare capacity to maintain performance.

Extent Pools

The Extent Pool construct provides flexibility in data placement as well as ease of management. To optimize performance, it is important to balance workload activity across Extent Pools assigned to server 0 and Extent Pools assigned to server 1. Typically this means assigning an equal number of ranks to the Extent Pools on each server, and an equal amount of workload activity to all ranks.

We recommend assigning even-numbered ranks to even-numbered extent pools; this ensures balance between server 0 and server 1. This requires that you configured the ranks in order on the array sites: array site 1 becomes array 1 and rank 1, and so on.
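The even/odd balancing rule can be expressed simply. A sketch of the recommendation, assuming the ranks were created in order on the array sites as described above:

```python
def split_ranks(rank_ids):
    """Even-numbered ranks go to even-numbered extent pools (server 0),
    odd-numbered ranks to odd-numbered extent pools (server 1)."""
    even = [r for r in rank_ids if r % 2 == 0]
    odd = [r for r in rank_ids if r % 2 == 1]
    return even, odd

def pools_balanced(even, odd):
    """The two DS8000 servers should own the same number of ranks."""
    return len(even) == len(odd)
```

With eight ranks numbered 0-7, server 0 gets ranks 0, 2, 4, 6 and server 1 gets ranks 1, 3, 5, 7, keeping the configuration balanced.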

The use of multi-rank extent pools allows you to define LUNs larger than the size of a single rank. At DS8000 Licensed Internal Code (LIC) levels prior to Release 3, a LUN is not 'striped' across all the ranks in the extent pool; the only time a LUN spans multiple ranks is when it did not fit in the original rank. Therefore a LUN will usually use no more than 6 or 7 disk arms. When allocating multiple LUNs into a multi-rank extent pool, the LUNs are allocated on the rank with the most available free space; this results in a 'round robin' style of allocation and places LUNs onto the ranks in a roughly even fashion, assuming that the LUNs and ranks are the same size.

DS8000 Licensed Internal Code (LIC) Release 3 introduced a new allocation algorithm, Storage Pool Striping (SPS). This allows a finer-granularity striping across all the ranks in an extent pool, and provides substantial performance benefits for some workloads.

For IBM i attached subsystems, we recommend using multi-rank extent pools in combination with Storage Pool Striping (rotate extents). Dedicating ranks or extent pools to a single workload will provide more predictable performance, but may cost more in terms of disk capacity to provide the desired level of performance.

Defining multiple ranks in an Extent Pool also provides efficiency in usable space. You can use Capacity Magic to estimate the usable capacity on the ranks for your chosen LUN size. For IBM i workloads, if you have a requirement to isolate workloads, you will need to define two Extent Pools (one for each server) for each workload.

Whichever configuration option you prefer, discuss it with your IBM representative or Business Partner, as our performance modeling tool, Disk Magic, needs to accurately reflect the configuration that you are planning. If you model a solution where all the disks are shared with all the workloads, then decide to isolate workloads, you may need more disks to achieve the same performance levels.

Data layout

Selecting an appropriate data layout strategy depends on your primary objectives

Spreading workloads across all components maximizes the utilization of the hardware components. This includes spreading workloads across all the available host adapters and ranks. However, it is always possible when sharing resources that performance problems may arise due to contention on these resources.

To protect critical workloads you should isolate them, minimizing the chance that non-critical workloads can impact the performance of critical workloads.

The greater the granularity of the resource, the more it can be shared. For example, there is only one cache per processor complex, so its use must be shared, although DS8000 intelligent cache management prevents one workload from dominating the cache. In contrast, there are frequently hundreds of DDMs, so workloads can easily be isolated on different DDMs.

To spread a workload across ranks, you need to balance I/Os for the workload across all the available ranks. SPS will achieve this when you use multi-rank extent pools.

Isolation of workloads is most easily accomplished where each ASP or LPAR has its own extent pool pair. This ensures that you can place data where you intend. I/O activity should be balanced between the two servers, or controllers, on the DS8000. This is achieved by balancing between odd and even extent pools, and by making sure that the number of ranks is balanced between odd and even extent pools.

Make sure that you isolate critical workloads. We strongly recommend only IBM i LUNs on any rank (rather than mixed with non-IBM i LUNs). This is for performance management reasons; it will not in itself provide improved performance. If you mix production and development workloads on ranks, make sure that the customer understands which actions may impact production performance, for example adding LUNs to ASPs.

iASPs

When designing a DS8000 layout for an IASP configuration, you have the option to model the IASP LUNs on isolated Extent Pools. You may achieve more cost-effective performance by putting IASP and SYSBAS LUNs onto the same shared ranks and extent pools, for example when the SYSBAS activity would drive a requirement for more disks to maintain performance if the ranks were dedicated to SYSBAS. Remember, when using Disk Magic to model your IASP configuration, that you may need smaller LUNs for the SYSBAS requirement.

LUN Size

LUNs must be defined in specific sizes that emulate IBM i devices. The size of the LUNs defined is typically related to the wait-time component of the response time: if there are insufficient LUNs, wait time typically increases. The sizing process determines the correct number of LUNs required to address the required capacity while meeting performance objectives.

The number of LUNs drives the requirement for more FC adapters on the IBM i, due to the addressing restrictions of IBM i. Remember that each path to a LUN counts towards the maximum addressable LUNs on each IBM i IOA. For example, if you have 64 LUNs and would like 2 paths to each LUN, this requires 4 IOAs for addressability on releases prior to IBM i 6.1.
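The adapter arithmetic in that example works out as follows. This is an illustrative calculator only, not an IBM sizing tool; the 32-address default reflects the pre-IBM i 6.1 IOA limit mentioned above.

```python
import math

def ioas_required(luns: int, paths_per_lun: int,
                  addresses_per_ioa: int = 32) -> int:
    """Each path to a LUN consumes one addressable unit on an IOA, so
    64 LUNs with 2 paths each is 128 path-resources, which at the
    pre-IBM i 6.1 limit of 32 addresses per IOA needs 4 IOAs."""
    return math.ceil(luns * paths_per_lun / addresses_per_ioa)
```

The same function with a 64-address limit (the IOP-less Smart IOA port limit discussed later) halves the adapter count for the same configuration.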

Disk Magic can be used to model the number of LUNs required. Disk Magic does not always accurately predict the effective capacity of the ranks, depending on the DDM size selected and the number of spares assigned. The IBM tool Capacity Magic can be used to verify capacity and space utilization plans.

IBM i 6.1 and higher enables the use of larger LUNs while maintaining performance. You can typically start modeling based on a 70 GB LUN size. A smaller number of larger LUNs will reduce the number of I/O ports required on both the IBM i and the DS8000. Remember that in an IASP environment you may exploit larger LUNs in the IASPs, but SYSBAS may require more, smaller LUNs to maintain performance.
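As a capacity-only starting point, the sketch below counts LUNs for a capacity target; performance must still be modeled with Disk Magic. The 70.56 GB default is an assumption here, corresponding to the nominal IBM i 70 GB LUN model.

```python
import math

def luns_for_capacity(capacity_gb: float, lun_size_gb: float = 70.56) -> int:
    """Minimum LUN count to cover a usable-capacity target at a given
    LUN size; fewer, larger LUNs also reduce the I/O ports needed on
    both the IBM i and the DS8000."""
    return math.ceil(capacity_gb / lun_size_gb)
```

For example, a 4 TB usable-capacity target at the 70 GB LUN size suggests 57 LUNs as a modeling starting point.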

Multipath

Multipath provides greater resiliency for SAN attached storage. With the combination of RAID 5, RAID 6 or RAID 10 protection, DS8000 multipath provides protection of the data paths and the data itself without the requirement for additional LUNs. However, additional I/O adapters and changes to the SAN fabric configuration may be required.

The IBM i supports up to 8 paths to each LUN. In addition to the availability considerations, lab performance testing has shown that 2 or 3 paths provide performance improvements when compared to a single path. Typically, 2 paths to a LUN is the ideal balance of price and performance. The Disk Magic tool supports multipathing over 2 paths.

You might want to consider more than 2 paths for workloads where there is high wait time, or where high I/O rates are expected to the LUNs, for example SSD-backed LUNs.

It is important to plan for multipath so that the two or more paths to the same set of LUNs use different elements of connection, such as DS6000 and DS8000 host adapters, SAN switches, IBM i I/O towers and HSL loops. Good planning for multipath includes:

- Connections to the same set of LUNs via different DS host cards in different I/O enclosures on the DS8000
- Connections to the same set of LUNs via different SAN switches
- The IOP/IOA adapter pairs in the IBM i I/O towers which connect to the same set of LUNs should ideally be in different expansion towers, located on different HSL or 12X loops wherever possible

When an IBM i system IPLs, it discovers all paths to the disk. The first path discovered will be the preferred path for I/O. If multiple IBM i LPARs share the same DS8000 or DS6000 host adapters, each system may discover the same initial path to the disk. To avoid contention on SAN switch and HA ports, it is essential that you implement LUN masking in the SAN: specify a different range of ports and HAs for each LPAR to ensure that activity is balanced across all available paths. You can do LUN masking either in the DS8000, using the volume groups construct, or by explicitly mapping IBM i IOAs to DS8000 HAs in the SAN switch.


When assigning LUNs to HAs, the recommendation is to isolate IBM i LUNs on their own HAs where possible. We recommend that you spread activity across the available HAs; since there is typically little skew in an IBM i workload, this is usually not difficult. We don't allow multiple IBM i HBAs to see multiple HAs, because in this case all of the IBM i HBAs could establish paths through the same HA, which would result in unbalanced I/O traffic between the DS8000 host adapters.

Host Attachments

IBM i I/O adapters should be defined as FC-AL connections for direct-attached connections (without a switch). For all switched connections, use SCSI-FCP, which is the default for the DS8000.

In a pre-IBM i 6.1 multipath environment, it is unlikely that any single host attachment is the bottleneck in an IBM i DS8000 configuration. It is common to combine multiple IBM i FC attachments onto a single DS8000 FC attachment through a SAN. In this case, it is important to ensure that you do not over-configure the DS8000 attachment. Disk Magic can be used to model the DS8000 attachment. The normal range is to combine 2-4 IBM i IOAs onto a single DS8000 host attachment.

IBM i fibre adapters should be placed in accordance with the card placement guidelines found in the Redpaper "PCI, PCI-X, PCI-X DDR and PCIe Placement Rules for IBM System i Models", available for download from http://www.redbooks.ibm.com/abstracts/redp4011.html?Open. A previous and obsolete version of this paper is available for releases of code prior to V5R2: "PCI Card Placement Rules for the IBM eServer iSeries Server, OS/400 Version 5 Release 2, September 2003", located on the web at http://www.redbooks.ibm.com/abstracts/redp3638.html?Open

You are encouraged to consider these additional guidelines

For 0588, 5088, 5094, 5096, 5294 and 5296-style I/O towers, it is recommended to install no more than 1 fibre adapter per multi-adapter bridge (MAB), whether the attachment is for disk or tape. If the MAB is dedicated to disk attachment, then 2 adapters may be considered.

Balance the fibre adapters evenly on the HSL and 12X loops. Always place both the IOP and IOA in 64-bit card slots.

If you are not using multipath, you may find optimum performance is achieved by limiting the LUNs on each IBM i FC card to 20-22.

When spreading activity across the host attachments, you need to make sure that, in a multipath configuration, alternate paths are provided to each server of the DS8000.


Multipath connectivity is provided starting with i5/OS V5R3, and is recommended when connecting to the DS8000 for availability and concurrent maintenance.

Connections on the IBM i should be to the 2787 or 5760 cards (5760 cards require V5R3 or higher, and they require an 800, 810, 825, 870, 890 or i5 CPU). We recommend using the first two 2787 cards in each node in a tower, and ensuring that cards are balanced across all towers and HSL rings.

IOPless Host Attachments

IBM i 6.1, together with POWER6 and the new Smart IOAs, introduces IOP-less IOAs for disk and tape. The Smart IOAs are:
- 5735 - 2-port 8 Gb Fibre Smart IOA, PCIe
- 5749 - 2-port 4 Gb Fibre Smart IOA, PCI-X DDR2, IBM i OS only
- 5774 - 2-port 4 Gb Fibre Smart IOA, PCIe, IBM i OS, Linux and AIX

These new cards provide significant performance advantages and do not require IOPs, thus saving cost and slots. The maximum number of supported addresses is increased from 32 to 64 for each port. Typically you would configure 2 paths to each LUN for availability.

IBM i 6.1 or higher, with POWER6 or POWER7 and the IOP-less adapters, can support up to 64 LUNs on each port; however, with the move to configure larger LUNs, for most workloads you should limit the total LUNs on a card to 64 (32 on each port). For workloads with a low I/O rate, you may be able to support more than 32 LUNs on each port.

For IBM i 6.1 or higher with POWER6 or POWER7 configurations and the new IOP-less IOAs, you should plan on a 1:1 ratio between the IBM i IOA ports and the DS8000 I/O ports. For the highest-performance configurations, where host attachments are stressed (not likely in an IBM i production workload), you should plan to use only 2 ports of the DS8000 4 Gb 4-port HA card. The DS8700 and DS8800 have both 4-port and 8-port 8 Gb host adapter cards. For maximum performance, select the 4-port card; for increased connectivity, select the 8-port card. The increased performance capabilities of these new cards can be modeled using Disk Magic.

Card placement guidelines are as follows:
- 6 Smart IOAs per 12X loop
- 4 Smart IOAs per HSL-2 loop


The IOP-less adapters support DS8000 and DS6000 (not ESS). The same adapters may be used for disk and tape, as well as Boot from SAN; however, it is strongly recommended not to share IOAs between disk and tape.

Maximum resource names in IBM i

The System i Handbook and the System Builder document the maximum quantity of internal disk and external LUNs. For some of these maximum limits, this isn't really the maximum quantity of LUNs as we normally think of them, but the maximum quantity that will be seen at the hardware level. For example:

Fibre card 1 will have the 1st path to 32 LUNs. Fibre card 2 will have the 2nd path to the same 32 LUNs. IBM i microcode and operating system will see this as 64 resource names, and this 64 count is what you use when trying to determine whether you're approaching the maximum supported number for the i5 model.

Logical Configuration

When defining IBM i LUNs, you can define them as protected unless you want to use i5/OS mirroring, in which case you need to ensure that you define the LUNs as unprotected when you create them. If you are using Boot from SAN at a release prior to IBM i 6.1, you will have to use mirroring to protect the Load Source; in this case, you must define the Load Source LUNs as unprotected.

If you are planning to use Copy Services, either for high availability or just for migrations, it is important to note that the source and target in a Copy Services pair must have the same attributes. Once the LUN is defined with the protection attribute, this cannot be changed without deleting the LUN and re-defining it. Deleting a LUN deletes all the data on the LUN.

A host adapter port is identified to the DS through its World Wide Port Name (WWPN). A set of host ports can be associated into a port group and managed together. This port group is referred to as a host attachment in the GUI, and as a host connection in the DS CLI. A host attachment can be associated with a volume group to define which LUNs will be assigned to that host adapter.

A volume group is a group of logical volumes (LUNs) that are attached to a host adapter. The type of volume group used with open system hosts determines how the logical volume number is converted to the host-addressable LUN_ID on the Fibre Channel SCSI interface.


When associating a host attachment with a volume group, the host attachment contains attributes that define the logical block size and the address discovery method used by that host adapter. These attributes must be consistent with the type of volume group assigned to that host attachment. i5/OS LUNs are accessed by the host in 520-byte blocks. Of these 520 bytes, 8 are used by the operating system, so each block has 512 usable bytes, as in LUNs for other open systems. So for an i5/OS host attachment, 520 is the correct block size to define. The correct address discovery method is Report LUN. The correct block size and address discovery for IBM i are generated by specifying IBM i attachments when creating volume groups and host attachments.
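The 520-byte block arithmetic above looks like this as a simple illustration of the usable-capacity calculation:

```python
def usable_bytes(blocks: int, block_size: int = 520, os_bytes: int = 8) -> int:
    """IBM i LUNs are surfaced in 520-byte blocks; 8 bytes per block
    are used by the operating system, leaving 512 usable bytes per
    block, as on other open systems."""
    return blocks * (block_size - os_bytes)
```

So 2048 such blocks carry exactly 1 MiB of usable data, even though they occupy 2048 x 520 bytes on the interface.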

The LUN id on the DS8000 is a combination of the LSS number and an incremented LUN number

Adding LUNs to ASPs

Adding a LUN to an ASP generates I/O activity on the rank as the LUN is formatted. If production work shares the same rank, you may see a performance impact. For this reason we recommend scheduling the addition of LUNs to ASPs outside peak intervals.

Boot from SAN

IOP 2847 plus the appropriate Fibre IOA allows you to place an i5/OS load source on a Fibre Channel attached ESS Model 800, DS6000 or DS8000. The LUNs are attached using features 2766, 2787 or 5760, and apart from the Load Source you can have another 31 LUNs on the same attachment. The Load Source device does not support multipath at V5R4, but other LUNs on the 2847 can be multipath devices. To provide redundancy for the Load Source you can use two 2847 IOPs and i5/OS mirroring (with the Load Source LUN defined as unprotected). Starting with IBM i 6.1, the LUN containing the load source may participate in a multipath configuration.

Software

It is essential that you have all up-to-date software levels installed. There are fixes that provide performance enhancements, correct performance reporting and support for new functions. As always, call the support center before installation to verify that you are current with fixes for the hardware that you are installing. It is also important to maintain current software levels so that you benefit from new fixes as they are developed.

When updating storage subsystem LIC it is also important to check whether any server software updates are required. Details of supported configurations and software levels are provided by the System Storage Interoperation Center: http://www-03.ibm.com/systems/support/storage/config/ssic/displayesssearchwithoutjs.wss?start_over=yes


Performance Monitoring

Once your storage subsystem is installed, it is essential that you continue to monitor its performance. IBM i Performance Tools reports provide information on I/O rates and on response times to the server, allowing you to track trends in increased workload and changes in response time. You should review these trends to ensure that your storage subsystem continues to meet your performance and capacity requirements. Make sure that you are current on fixes to ensure that Performance Tools reports describe your external storage correctly.

IBM i 7.1 adds a new category, EXTSTG, to IBM i Collection Services, which provides new performance metrics collected from the DS8000. This function requires DS8000 R4 or later firmware. The data can be presented in graphs using iDoctor today and will be incorporated into Performance Data Investigator (PDI) in a future release.

If you have multiple servers attached to a storage subsystem, particularly if other platforms are attached in addition to IBM i, it is essential to have a performance tool that enables you to monitor performance from the storage subsystem perspective.

IBM TPC for Disk provides a comprehensive tool for managing the performance of DS8000s. You should collect data from all attached storage subsystems at 15-minute intervals. In the event of a performance problem IBM will ask for this data; without it, resolution of any problem may be prolonged.

Copy Services Considerations

Copy Services is an optional feature of the IBM System Storage DS8000. It brings powerful data copying and mirroring technologies to open systems environments, previously available only for mainframe storage.

The following Copy Services functions are available on the DS8000 and are fully supported on IBM i:
- Metro Mirror (previously known as synchronous PPRC)
- Global Mirror (previously known as asynchronous PPRC)
- FlashCopy, including Space Efficient FlashCopy (FlashCopy SE)


Customers may use Copy Services on the entire disk space or on individual IASPs. Metro Mirror and Global Mirror provide business continuity and disaster recovery, while FlashCopy helps to minimize the backup window on a production system.

Space Efficient FlashCopy is designed for temporary copies. Copy duration should generally not last longer than 24 hours unless the source and target volumes have little write activity. FlashCopy SE is optimized for use cases where a small percentage of the source volume is updated during the life of the relationship; if much more than 20% of the source is expected to change, there may be trade-offs between performance and space efficiency, and standard FlashCopy may be a better alternative. Since a background copy would update the entire target, it would not make much sense and is not permitted with FlashCopy SE. Likewise, establishing a FlashCopy SE relationship just prior to running applications that make widespread changes to the source volumes (e.g. database reorgs, formats, full-volume restores from tape, etc.) is not advisable. For the latest recommendations on using FlashCopy SE, please refer to this document:
IBMers: http://w3-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/FLASH10617
Business Partners: http://partners.boulder.ibm.com/src/atsmastr.nsf/WebIndex/FLASH10617

Consulting services are available from the IBM STG Lab Services to assist in the planning and implementation of DS8000 Copy Services in an IBM i environment


[Diagram: DS8000 Copy Services functions]
- FlashCopy: for backups and snapshots (copy and no-copy options, and Space Efficient FlashCopy)
- Peer-to-Peer Remote Copy (continuous copy):
  - Metro Mirror (synchronous): for local availability
  - Global Copy (extended distance): for data migration only
  - Global Mirror (asynchronous): for DR; uses Consistency Groups


http://www-03.ibm.com/systems/services/labservices/platforms/labservices_i.html

Further References

For further detailed information on implementing DS6000 and DS8000 in an IBM i environment, refer to the following Redbooks, which can be downloaded from www.redbooks.ibm.com:

IBM System Storage Copy Services on System i A Guide to Planning and Implementation (SG24-7103)

iSeries and IBM TotalStorage A guide to implementing external disk on IBM eServer i5 (SG24-7120)

High Availability on IBM i including PowerHA on i and 61 HA enhancements (SG24-7405)

Acknowledgements

Thanks to Nancy Roper and Jana Jamsek from the Advanced Technical Support Storage group, Sue Baker and Eric Hess from Advanced Technical Support Power Systems, and Selwyn Dickey and Tim Klubertanz from STG Lab Services for their input to and review of this document.


performance of your workload and configuration. We provide some guidelines and general words of wisdom in this paper; however, these provide only a starting point for sizing with the appropriate tools.

It is equally important to ensure that the sizing requirements for your SAN configuration also take into account the additional resources required when enabling advanced Copy Services functions such as Point-in-Time (PiT) Copy (also known as FlashCopy) or PPRC (Global Mirror and Metro Mirror), particularly if you are planning to enable Metro Mirror.

A Bandwidth Sizing should be conducted for Global Mirror and Metro Mirror

You will need to collect IBM i performance data. Generally you will collect a week's worth of performance data for each system/LPAR and send the resulting reports.

Each set of reports should include print files for the following:
- System Report - Disk Utilization (required)
- Component Report - Disk Activity (required)
- Resource Interval Report - Disk Utilization Detail (required)
- System Report - Storage Pool Utilization (required)

Send the report print files as indicated below (send reports in .txt file format). If you are collecting from more than one IBM i system or LPAR, the reports should cover the same time period for each system/LPAR if possible.

DS8000 HBA

DS8800 Host Attachment

The DS8800 supports two basic types of host connection features: short wave FCP/FICON and long wave FCP/FICON. The FCP/FICON host attachment features are now packaged with two options: a 4-port 8 Gb-per-second card or an 8-port 8 Gb-per-second card. With either card the connector type is the same, a Lucent Connector (LC). Each port of the FCP/FICON host adapter can be configured as FCP or FICON as needed (but a single port cannot operate as both FCP and FICON simultaneously). All DS8800 host attachment features are designed to connect into a PCIe slot in the provided I/O drawers.

FCP/FICON feature codes are:
- 3153 - 8 Gb-per-second 4-port SW FCP/FICON PCIe adapter
- 3157 - 8 Gb-per-second 8-port SW FCP/FICON PCIe adapter
- 3253 - 8 Gb-per-second 4-port LW FCP/FICON PCIe adapter
- 3257 - 8 Gb-per-second 8-port LW FCP/FICON PCIe adapter


For maximum performance you should consider using the 4-port cards; for maximum connectivity you should use the 8-port cards.

DS8700 Host Attachment

The DS8700 supports two basic types of host connection features: short wave FCP/FICON and long wave FCP/FICON. The FCP/FICON host attachment features are packaged as a four-port 4 Gb Lucent Connector (LC) card adapter. Each port of the FCP/FICON host adapter can be configured as FCP or FICON as needed (but a single port cannot operate as both FCP and FICON simultaneously). All DS8700 host attachment features are designed to connect into a PCIe slot in the provided I/O drawers.

FCP/FICON feature codes are:
- 3143 - 4-port shortwave 4 Gb FCP/FICON
- 3243 - 4-port longwave 4 Gb FCP/FICON
- 3245 - 4-port longwave 4 Gb FCP/FICON (10 km)
- 3153 - 4-port shortwave 8 Gb FCP/FICON (maximum of 8 per base frame and 8 per first expansion frame)
- 3253 - 4-port longwave 8 Gb FCP/FICON (10 km) (maximum of 8 per base frame and 8 per first expansion frame)

Note: The 8 Gb-per-second FCP/FICON cards require installation in specific slots in the I/O enclosures and are therefore limited to a maximum of 8 features in the base frame and an additional 8 features in the first expansion frame.

DS8800 and DS8700 Host Attachment, Host Port and Installation Sequence Guide and Best Practices

This guide is updated from time to time; the latest version is here:

http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/TD105671

Best practices guidelines

Isolate host connections from remote copy connections (MM GM GC and MGM) on a host adapter basis

Isolate zSeries and other host connections from IBM i host connections on a host port basis

Always have symmetric pathing by connection type (i.e. use the same number of paths on all host adapters used by each connection type)

Size the number of host adapters needed based on expected aggregate maximum bandwidth and maximum IOPS (use Disk Magic or other common sizing methods based on actual or expected workload)

Sharing different connection types within an IO enclosure is encouraged


When possible isolate asynchronous from synchronous copy connections on a host adapter basis

When utilizing multipathing, try to zone ports from different I/O enclosures to provide redundancy and balance (i.e. include a port from a host adapter in enclosure 0 and one in enclosure 1)

DS8300 and DS6000 Host Attachment

Contact IBM if you need this information

DS8000 Solid State Drives (SSD)

Perhaps one of the most exciting innovations to happen to enterprise storage is SSD. We have just begun to explore the promising future of this technology. Solid-state storage means using a memory-type device for mass storage, rather than spinning disk or tape. First-to-market devices take the shape of standard hard disks, so they plug easily into existing disk systems.

IBM is making solid-state storage affordable with innovative architectures system and application integration and management tools that enable effective use of solid-state storage Solid-state technologies will continue to evolve and IBM researchers have been making significant breakthroughs IBM will continue to bring the best implementations to our customers as innovation allows us to bring the full value of this technology to market

Solid-state storage technology can have the following benefits:
- Significantly improved performance for hard-to-tune, I/O-bound applications; no code changes required
- Reduced floor space
- Can be filled to near 100% without performance degradation
- Faster IOPS
- Faster access times
- Reduced energy use

The DS8000 offers IBM i SSD integration choices: you can either exploit the tools within the IBM i OS, or use the DS8000 Easy Tier function, which is independent of the server.

IBM i SSD tools and automation (for internal and DS8000 SSDs):
- Manually:
  - Create an all-SSD user ASP or independent ASP
  - Manually place data onto the user ASP or IASP
- DB2 Media Preference:
  - The user controls what media type database files should be stored on
  - DB files known to be I/O performance critical can explicitly be placed on high-performing SSDs
  - Dynamic changes to media preference are supported, which enables dynamic data movement
- ASP Balancer:
  - Based on read I/O count statistics for each 1 MB extent
  - Migrates "hot" extents from HDDs to SSDs and "cold" extents from SSDs to HDDs
- UDFS Media Preference (new with IBM i 7.1):
  - New 'Unit' parameter on the CRTUDFS command

DS8000 (agnostic to IBM i and DB2):
- IBM System Storage Easy Tier works with IBM i:
  - DS8000 software locates and migrates hot data onto DS8000 SSDs; works with any IBM i version
- Advantages of using Easy Tier automatic mode:
  - Designed to be easy: the user is not required to make a lot of decisions or go through an extensive implementation process
  - Efficient use of SSD capacity: Easy Tier moves 1-gigabyte data extents between storage tiers
  - Intelligence: Easy Tier learns about the workload over a period of time (24 hours); as workload patterns change, Easy Tier finds any new highly active ("hot") extents and exchanges them with extents residing on SSDs that may have "cooled off"
  - Negligible performance impact: Easy Tier moves data gradually to avoid contention with host I/O activity; the overhead associated with Easy Tier management is nearly undetectable, and there is no need for storage administrators to worry about scheduling when migrations occur

There are benefits to implementing Easy Tier even if you plan to manage SSDs from the IBM i OS. These benefits are documented in this white paper:

http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/FLASH10755

DS8000 Logical Configuration

The following chart shows the logical configuration constructs for the DS8000


The numbering of the ranks depends on the order in which the ranks were created, and in turn on the order in which the RAID arrays were created on the array sites. The GUI interface for configuring the DS8000 can automatically create RAID arrays and ranks in the same step, and it balances the creation of RAID arrays across DA pairs. If you use the DS CLI to configure the DS8000, we recommend that you also create RAID ranks on the array sites in a consistent manner; this creates a configuration that is more easily managed when you come to create the extent pools and balance them between the servers and the DA pairs.

The serial number reported by the DS6000 or DS8000 to the IBM i contains the LUN number. In the example below, the serial number of disk unit DD019 is 30-1001000, which is LUN 01 in LSS 10. In the DS6000 or DS8000 this is reported as LUN 1001.
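
To make the mapping concrete, this small sketch (the helper name is ours, not from any IBM tool) extracts the LSS and LUN from a serial number of the form shown above:

```python
def lss_and_lun(serial):
    """Split an IBM i disk unit serial like '30-1001000' into (LSS, LUN).

    The two digits after the '30-' prefix are the LSS number and the
    next two digits are the LUN number within that LSS.
    """
    body = serial.split("-", 1)[1]
    return body[0:2], body[2:4]

# DD019 in the example: serial 30-1001000 -> LUN 01 in LSS 10, i.e. LUN ID 1001
lss, lun = lss_and_lun("30-1001000")
print(lss + lun)  # -> 1001
```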


RAID Arrays

The DS8000 allows the choice of RAID 5, RAID 10 and RAID 6. RAID 6 is supported on the DS8000 at LIC levels R4 and above, with DA feature codes 3041, 3051 and 3061.

Disk Magic allows you to model different RAID options to find the configuration that best fits your performance and availability requirements. If you are considering a configuration of 'short-stroked' RAID 5 arrays (under-using the capacity of a RAID 5 array for performance reasons), we recommend that you consider RAID 10. The benefit in this case is that there is no requirement to 'fence' spare capacity to maintain performance.

Extent Pools

The Extent Pool construct provides flexibility in data placement as well as ease of management To optimize performance it is important to balance workload activity across Extent Pools assigned to server0 and Extent Pools assigned to server1 Typically this means assigning an equal number of Ranks to Extent Pools assigned to server0 and Extent Pools assigned to server1 and an equal amount of workload activity to all Ranks

We recommend assigning even-numbered ranks to even-numbered extent pools; this ensures balance between server 0 and server 1. This requires that you configured the ranks in order on the array sites: array site 1 becomes array 1 and rank 1, and so on.
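
The balancing rule above can be expressed as a tiny sketch (the helper and pool names are ours, purely illustrative):

```python
def balance_ranks(ranks):
    """Assign even-numbered ranks to the even (server 0) extent pool and
    odd-numbered ranks to the odd (server 1) extent pool."""
    return {
        "even_pool_server0": [r for r in ranks if r % 2 == 0],
        "odd_pool_server1": [r for r in ranks if r % 2 == 1],
    }

# Eight ranks configured in order on the array sites split evenly:
print(balance_ranks(list(range(1, 9))))
```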

The use of multi-rank extent pools allows you to define LUNs larger than the size of a single rank At DS8000 licensed internal code levels prior to Licensed Internal Code


(LIC) Release 3, a LUN is not 'striped' across all the ranks in the extent pool; the only time a LUN spans multiple ranks is when it did not fit in the original rank. Therefore a LUN will usually use no more than 6 or 7 disk arms. When allocating multiple LUNs into a multi-rank extent pool, each LUN is allocated on the rank with the most available free space; this results in a 'round robin' style of allocation and distributes LUNs onto the ranks in a roughly even fashion, assuming that the LUNs and ranks are the same size.

DS8000 Licensed Internal Code (LIC) Release 3 introduced a new allocation algorithm, Storage Pool Striping (SPS), which allows finer-granularity striping across all the ranks in an extent pool and provides substantial performance benefits for some workloads.

For IBM i attached subsystems we recommend using multi-rank extent pools in combination with storage pool striping (rotate extents). Dedicating ranks or extent pools to a single workload provides more predictable performance, but may cost more in terms of disk capacity to deliver the desired level of performance.

Defining multiple ranks in an extent pool also provides efficiency in usable space. You can use Capacity Magic to estimate the usable capacity on the ranks for your chosen LUN size. For IBM i workloads, if you have a requirement to isolate workloads, you will need to define two extent pools (one for each server) for each workload.

Whichever configuration option you prefer, discuss it with your IBM representative or Business Partner, as our performance modeling tool Disk Magic needs to accurately reflect the configuration that you are planning. If you model a solution where all the disks are shared with all the workloads and then decide to isolate workloads, you may need more disks to achieve the same performance levels.

Data layout

Selecting an appropriate data layout strategy depends on your primary objectives

Spreading workloads across all components maximizes the utilization of the hardware. This includes spreading workloads across all the available host adapters and ranks. However, whenever resources are shared it is possible that performance problems may arise due to contention on those resources.

To protect critical workloads, you should isolate them, minimizing the chance that non-critical workloads can impact their performance.

The greater the granularity of the resource, the more it can be shared. For example, there is only one cache per processor complex, so its use must be shared, although DS8000 intelligent cache management prevents one workload from dominating the cache. In


contrast, there are frequently hundreds of DDMs, so workloads can easily be isolated on different DDMs.

To spread a workload across ranks, you need to balance its I/Os across all the available ranks. SPS achieves this when you use multi-rank extent pools.

Isolation of workloads is most easily accomplished where each ASP or LPAR has its own extent pool pair; this ensures that you can place data where you intend. I/O activity should be balanced between the two servers (controllers) of the DS8000. This is achieved by balancing between odd and even extent pools and by making sure that the number of ranks is balanced between odd and even extent pools.

Make sure that you isolate critical workloads. We strongly recommend placing only IBM i LUNs on any rank (rather than mixing them with non-IBM i LUNs). This is for performance management reasons; it will not in itself provide improved performance. If you mix production and development workloads on ranks, make sure that the customer understands which actions may impact production performance, for example adding LUNs to ASPs.

iASPs

When designing a DS8000 layout for an iASP configuration, you have the option to model the iASP LUNs on isolated extent pools. You may achieve more cost-effective performance by putting iASP and SYSBAS LUNs onto the same shared ranks and extent pools, for example when the SYSBAS activity would otherwise drive a requirement for more disks to maintain performance if the ranks were dedicated to SYSBAS. Remember when using Disk Magic to model your iASP configuration that you may need smaller LUNs for the SYSBAS requirement.

LUN Size

LUNs must be defined in specific sizes that emulate IBM i devices. The size of the LUNs defined is typically related to the wait-time component of the response time: if there are insufficient LUNs, wait time typically increases. The sizing process determines the correct number of LUNs required to address the required capacity while meeting performance objectives.

The number of LUNs drives the requirement for more FC adapters on the IBM i due to its addressing restrictions. Remember that each path to a LUN counts towards the maximum addressable LUNs on each IBM i IOA. For example, if you have 64 LUNs and would like 2 paths to each LUN, this requires 4 IOAs for addressability on releases prior to IBM i 6.1.
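
A back-of-the-envelope check of the arithmetic above, assuming the pre-6.1 limit of 32 addressable LUN resources per IOA (the helper name is ours, illustrative only):

```python
import math

def ioas_needed(luns, paths_per_lun, max_per_ioa=32):
    """Each path to a LUN consumes one addressable resource on an IOA."""
    total_resources = luns * paths_per_lun
    return math.ceil(total_resources / max_per_ioa)

# 64 LUNs x 2 paths = 128 resources -> 4 IOAs at 32 resources per IOA
print(ioas_needed(64, 2))  # -> 4
```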

Disk Magic can be used to model the number of LUNs required. Disk Magic does not always accurately predict the effective capacity of the ranks, depending on the DDM size


selected and the number of spares assigned. The IBM tool Capacity Magic can be used to verify capacity and space utilization plans.

IBM i 6.1 and higher enables the use of larger LUNs while maintaining performance. You can typically start modeling based on a 70 GB LUN size. A smaller number of larger LUNs reduces the number of I/O ports required on both the IBM i and the DS8000. Remember that in an iASP environment you may exploit larger LUNs in the iASPs, but SYSBAS may require more, smaller LUNs to maintain performance.
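
As a rough illustration of the capacity arithmetic, assuming 70.56 GB as the usable size of the "70 GB" IBM i LUN model (treat that figure as an assumption; verify with Capacity Magic):

```python
import math

def luns_for_capacity(capacity_gb, lun_size_gb=70.56):
    """Minimum number of fixed-size LUNs needed to reach a target capacity."""
    return math.ceil(capacity_gb / lun_size_gb)

# e.g. 4 TB of usable capacity from 70 GB LUNs
print(luns_for_capacity(4000))  # -> 57
```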

Multipath

Multipath provides greater resiliency for SAN-attached storage. In combination with RAID 5, RAID 6 or RAID 10 protection, DS8000 multipath protects the data paths and the data itself without requiring additional LUNs. However, additional I/O adapters and changes to the SAN fabric configuration may be required.

IBM i supports up to 8 paths to each LUN. In addition to the availability benefits, lab performance testing has shown that 2 or 3 paths provide performance improvements compared to a single path. Typically, 2 paths to a LUN is the ideal balance of price and performance. The Disk Magic tool supports modeling multipathing over 2 paths.

You might want to consider more than 2 paths for workloads where there is high wait time, or where high I/O rates are expected to the LUNs, for example SSD-backed LUNs.

It is important to plan for multipath so that the two or more paths to the same set of LUNs use different connection elements, such as DS6000 and DS8000 host adapters, SAN switches, IBM i I/O towers and HSL loops. Good planning for multipath includes:

- Connections to the same set of LUNs via different DS host cards in different I/O enclosures on the DS8000
- Connections to the same set of LUNs via different SAN switches
- IOP/IOA adapter pairs in the IBM i I/O towers which connect to the same set of LUNs should ideally be in different expansion towers, located on different HSL or 12X loops wherever possible

When an IBM i system IPLs, it discovers all paths to the disk. The first path discovered becomes the preferred path for I/O. If multiple IBM i LPARs share the same DS8000 or DS6000 host adapters, each system may discover the same initial path to the disk. To avoid contention on SAN switch and HA ports, it is essential that you implement LUN masking in the SAN: specify a different range of ports and HAs for each LPAR to ensure that activity is balanced across all available paths. You can do LUN masking either in the DS8000, using the volume group construct, or by explicitly mapping IBM i IOAs to DS8000 HAs in the SAN switch.


When assigning LUNs to HAs, the recommendation is to isolate IBM i LUNs on their own HAs where possible. We recommend that you spread activity across the available HAs; since there is typically little skew in an IBM i workload, this is usually not difficult. We don't allow multiple IBM i HBAs to see multiple HAs, because in that case all the IBM i HBAs could establish paths through the same HA, which would result in unbalanced I/O traffic between the DS8000 host adapters.

Host Attachments

IBM i I/O adapters should be defined as FC-AL connections for direct-attached connections (without a switch). For all switched connections use SCSI-FCP, which is the default for the DS8000.

In a pre-IBM i 6.1 multipath environment it is unlikely that any single host attachment is the bottleneck in an IBM i DS8000 configuration. It is common to combine multiple IBM i FC attachments onto a single DS8000 FC attachment through a SAN. In this case it is important to ensure that you do not over-configure the DS8000 attachment; Disk Magic can be used to model it. The normal range is to combine 2-4 IBM i IOAs onto a single DS8000 host attachment.

IBM i fibre adapters should be placed in accordance with the card placement guidelines in the Redpaper "PCI, PCI-X, PCI-X DDR and PCIe Placement Rules for IBM System i Models", available for download from http://www.redbooks.ibm.com/abstracts/redp4011.html?Open. A previous and obsolete version of this paper is available for releases of code prior to V5R2, "PCI Card Placement Rules for the IBM eServer iSeries Server, OS/400 Version 5 Release 2, September 2003", located on the web at http://www.redbooks.ibm.com/abstracts/redp3638.html?Open.

You are encouraged to consider these additional guidelines

For 0588, 5088, 5094, 5096, 5294 and 5296 style I/O towers, it is recommended to install no more than 1 fibre adapter per multi-adapter bridge (MAB), whether the attachment is for disk or tape. If the MAB is dedicated to disk attachment, then 2 adapters may be considered.

Balance the fibre adapters evenly across the HSL and 12X loops. Always place both the IOP and IOA in 64-bit card slots.

If you are not using multipath you may find optimum performance is achieved by limiting the LUNs on each IBM i FC card to 20-22

When spreading activity across the host attachments you need to make sure that in a multi-path configuration alternate paths are provided to each server of the DS8000


Multipath connectivity is provided starting with i5/OS V5R3 and is recommended when connecting to the DS8000, for availability and concurrent maintenance.

Connections on the IBM i should be to the 2787 or 5760 cards (5760 cards require V5R3 or higher, and they require an 800, 810, 825, 870, 890 or i5 CPU). We recommend using the first two 2787 cards in each node in a tower and ensuring that cards are balanced across all towers and HSL rings.

IOPless Host Attachments

IBM i 6.1, together with POWER6 and the new Smart IOAs, introduces IOP-less IOAs for disk and tape. The Smart IOAs are:

- 5735 - 2-port 8 Gb Fibre Smart IOA, PCIe
- 5749 - 2-port 4 Gb Fibre Smart IOA, PCI-X DDR2, IBM i OS only
- 5774 - 2-port 4 Gb Fibre Smart IOA, PCIe, IBM i OS, Linux and AIX

These new cards provide significant performance advantages and do not require IOPs, thus saving cost and slots. The maximum number of supported addresses is increased from 32 to 64 for each port. Typically you would configure 2 paths to each LUN for availability.

IBM i 6.1 or higher with POWER6 or POWER7 and the IOP-less adapters can support up to 64 LUNs on each port; however, with the move to configure larger LUNs, for most workloads you should limit the total LUNs on a card to 64 (32 on each port). For workloads with a low I/O rate you may be able to support more than 32 LUNs on each port.

For IBM i 6.1 or higher with POWER6 or POWER7 configurations and the new IOP-less IOAs, you should plan on a 1:1 ratio between the IBM i IOA ports and the DS8000 I/O ports. For the highest-performance configurations where host attachments are stressed (not likely in an IBM i production workload), you should plan to use only 2 ports of the DS8000 4 Gb 4-port HA card. The DS8700 and DS8800 have both 4-port and 8-port 8 Gb host adapter cards: for maximum performance select the 4-port card; for increased connectivity select the 8-port card. The increased performance capabilities of these new cards can be modeled using Disk Magic.

Card placement guidelines are as follows

- 6 Smart IOAs per 12X loop
- 4 Smart IOAs per HSL-2 loop


The IOP-less adapters support the DS8000 and DS6000 (not the ESS). The same adapters may be used for disk and tape, as well as for Boot from SAN; however, it is strongly recommended not to share IOAs between disk and tape.

Maximum resource names in IBM i

The System i Handbook and the System Builder document the maximum quantity of internal disk and external LUNs. For some of these maximum limits, this isn't really the maximum quantity of LUNs as we normally think of them, but is the maximum quantity that will be seen at the hardware level. For example:

Fibre card 1 will have the 1st path to 32 LUNs. Fibre card 2 will have the 2nd path to the same 32 LUNs. IBM i microcode and the operating system will see this as 64 resource names, and this count of 64 is what you use when trying to determine whether you're approaching the maximum supported number for the i5 model.
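This path arithmetic can be sketched as follows; the helper below is purely illustrative and is not part of any IBM tool:

```python
def resource_names(luns: int, paths_per_lun: int) -> int:
    """Resource names seen by IBM i for a set of multipath LUNs."""
    # Each path to each LUN appears as its own disk resource name.
    return luns * paths_per_lun

# The example above: two fibre cards, each with one path to the same 32 LUNs.
print(resource_names(luns=32, paths_per_lun=2))  # 64
```

It is this resource-name count, not the LUN count, that you compare against the platform maximum.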

Logical Configuration

When defining IBM i LUNs, you can define them as protected unless you want to use i5/OS mirroring, in which case you need to ensure that you define the LUNs as unprotected when you create them. If you are using Boot from SAN at a release prior to IBM i 6.1, you will have to use mirroring to protect the Load Source; in this case you must define the Load Source LUNs as unprotected.

If you are planning to use copy services, either for high availability or just for migrations, it is important to note that the source and target in a copy services pair must have the same attributes. Once the LUN is defined with the protection attribute, this cannot be changed without deleting the LUN and re-defining it. Deleting a LUN deletes all the data on the LUN.

A host adapter port is identified to the DS8000 through its World Wide Port Name (WWPN). A set of host ports can be associated into a port group and managed together. This port group is referred to as a Host attachment within the GUI and as a Host connection in the DS CLI. A host attachment can be associated with a volume group to define which LUNs will be assigned to that host adapter.

A volume group is a group of logical volumes (LUNs) that are attached to a host adapter. The type of volume group used with open system hosts determines how the logical volume number is converted to the host-addressable LUN_ID on the Fibre Channel SCSI interface.

When associating a host attachment with a volume group, the host attachment contains attributes that define the logical block size and the address discovery method used by that host adapter. These attributes must be consistent with the type of volume group assigned to that host attachment. i5/OS LUNs are accessed by the host in 520-byte blocks. Of these 520 bytes, 8 are used by the operating system, so each block has 512 usable bytes, as in LUNs for other open systems. So for an i5/OS host attachment, 520 is the correct block size to define. The correct address discovery method is Report LUN. The correct block size and address discovery method are generated by specifying IBM i attachments when creating volume groups and host attachments.
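The 520-byte block layout can be illustrated with a little arithmetic (an illustrative sketch, not IBM-provided code):

```python
BYTES_PER_BLOCK = 520   # block size presented to an IBM i host attachment
OS_HEADER_BYTES = 8     # used by the operating system in each block
USABLE_BYTES = BYTES_PER_BLOCK - OS_HEADER_BYTES  # 512 usable bytes per block

def usable_capacity(blocks: int) -> int:
    """Usable bytes on a LUN consisting of `blocks` 520-byte blocks."""
    return blocks * USABLE_BYTES

# One million 520-byte blocks carry the same usable data as one million
# 512-byte blocks on any other open-systems host.
print(usable_capacity(1_000_000))  # 512000000
```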

The LUN id on the DS8000 is a combination of the LSS number and an incremented LUN number

Adding LUNs to ASP

Adding a LUN to an ASP generates I/O activity on the rank as the LUN is formatted. If there is production work sharing the same rank, you may see a performance impact. For this reason, it is recommended that you schedule adding LUNs to ASPs outside peak intervals.

Boot from SAN

IOP 2847 plus the appropriate Fibre Channel IOA allows you to place an i5/OS load source on a Fibre Channel attached ESS Model 800, DS6000, or DS8000. The LUNs are attached using features 2766, 2787, or 5760, and apart from the Load Source you can have another 31 LUNs on the same attachment. The Load Source device does not support multipath at V5R4, but other LUNs on the 2847 can be multipath devices. To provide redundancy for the Load Source, you can use two 2847 IOPs and i5/OS mirroring (with the Load Source LUN defined as unprotected). Starting with IBM i 6.1, the LUN containing the load source may participate in a multipath configuration.

Software

It is essential to ensure that you have all up-to-date software levels installed. There are fixes that provide performance enhancements, correct performance reporting, and support for new functions. As always, call the support center before installation to verify that you are current with fixes for the hardware that you are installing. It is also important to maintain current software levels to make sure that you get the benefit from new fixes that are developed.

When updating storage subsystem LIC, it is also important to check whether any server software updates are required. Details of supported configurations and software levels are provided by the System Storage Interoperation Center: http://www-03.ibm.com/systems/support/storage/config/ssic/displayesssearchwithoutjs.wss?start_over=yes

Performance Monitoring

Once your storage subsystem is installed, it is essential that you continue to monitor its performance. IBM i Performance Tools reports provide information on I/O rates and on response times to the server. This allows you to track trends in increased workload and changes in response time. You should review these trends to ensure that your storage subsystem continues to meet your performance and capacity requirements. Make sure that you are current on fixes to ensure that Performance Tools reports your external storage correctly.

IBM i 7.1 adds a new category, EXTSTG, to IBM i Collection Services, which provides new performance metrics collected from the DS8000. This function requires DS8000 R4 or later firmware. Data can be presented in graphs using iDoctor today, and will be incorporated into Performance Data Investigator (PDI) in a future release.

If you have multiple servers attached to a storage subsystem, particularly if other platforms are attached in addition to IBM i, it is essential that you have a performance tool that enables you to monitor performance from the storage subsystem perspective.

IBM TPC for Disk provides a comprehensive tool for managing the performance of DS8000s. You should collect data from all attached storage subsystems at 15-minute intervals. In the event of a performance problem IBM will ask for this data; without it, resolution of any problem may be prolonged.

Copy Services Considerations

Copy Services is an optional feature of the IBM System Storage DS8000. It brings powerful data copying and mirroring technologies, previously available only for mainframe storage, to open systems environments.

The following Copy Services functions are available on the DS8000 and are fully supported on IBM i:
- Metro Mirror (previously known as synchronous PPRC)
- Global Mirror (previously known as asynchronous PPRC)
- FlashCopy, including Space Efficient FlashCopy (FlashCopy SE)

Customers may use Copy Services on the entire disk space or on individual IASPs. Metro Mirror and Global Mirror provide business continuity and disaster recovery, while FlashCopy helps to minimize the backup window on a production system.

Space Efficient FlashCopy is designed for temporary copies. Copy duration should generally not last longer than 24 hours unless the source and target volumes have little write activity. FlashCopy SE is optimized for use cases where a small percentage of the source volume is updated during the life of the relationship. If much more than 20% of the source is expected to change, there may be tradeoffs in terms of performance versus space efficiency; in this case, standard FlashCopy may be a good alternative. Since a background copy would update the entire target, it would not make much sense and is not permitted with FlashCopy SE. Likewise, establishing a FlashCopy SE relationship just prior to running applications that make widespread changes to the source volumes (e.g. database reorganizations, formats, full volume restores from tape, etc.) is not advisable. For the latest recommendations on using FlashCopy SE, please refer to this document:
IBMers: http://w3-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/FLASH10617
Business Partners: http://partners.boulder.ibm.com/src/atsmastr.nsf/WebIndex/FLASH10617
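As a rough planning aid only, the space consumed by a FlashCopy SE relationship is driven by the fraction of the source that changes during its life. The sketch below uses hypothetical numbers and a hypothetical headroom factor; it is not an official IBM sizing formula:

```python
def se_repository_estimate(source_gb: float, change_fraction: float,
                           headroom: float = 1.2) -> float:
    """Rough repository estimate: expected changed data plus some headroom.

    `headroom` is a hypothetical safety factor, not an IBM-specified value.
    """
    return source_gb * change_fraction * headroom

# 2048 GB of source volumes with a 20% expected change over the copy's life:
print(round(se_repository_estimate(2048, 0.20)))  # 492
```

If the estimate approaches the full source capacity, standard FlashCopy is likely the better choice, as the text above notes.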

Consulting services are available from the IBM STG Lab Services to assist in the planning and implementation of DS8000 Copy Services in an IBM i environment

DS8000 Copy Services at a glance:

- FlashCopy: for backups and snapshots (copy and no-copy options, and Space Efficient FlashCopy)
- Peer-to-Peer Remote Copy (continuous copy):
  - Metro Mirror (synchronous): for local availability
  - Global Copy (Extended Distance): for data migration only
  - Global Mirror (asynchronous, using Consistency Groups): for disaster recovery

http://www-03.ibm.com/systems/services/labservices/platforms/labservices_i.html

Further References

For further detailed information on implementing DS6000 and DS8000 in an IBM i environment, refer to the following Redbooks. These can be downloaded from www.redbooks.ibm.com:

IBM System Storage Copy Services on System i: A Guide to Planning and Implementation (SG24-7103)

iSeries and IBM TotalStorage: A Guide to Implementing External Disk on IBM eServer i5 (SG24-7120)

High Availability on IBM i, including PowerHA on i and 6.1 HA enhancements (SG24-7405)

Acknowledgements

Thanks to Nancy Roper and Jana Jamsek from the Advanced Technical Support Storage group, Sue Baker and Eric Hess from Advanced Technical Support Power Systems, and Selwyn Dickey and Tim Klubertanz from STG Lab Services for their input to and review of this document.


For maximum performance, you should consider using the 4-port cards; for maximum connectivity, you should use the 8-port cards.

DS8700 Host Attachment

The DS8700 supports two basic types of Host Connection features: Short Wave FCP/FICON and Long Wave FCP/FICON. The FCP/FICON host attachment features come packaged as a four-port Lucent Connector (LC) card adapter. Each port of the FCP/FICON Host Adapter can be configured as FCP or FICON as needed (but a single port cannot operate as both FCP and FICON simultaneously). All DS8700 Host Attachment features are designed to connect into a PCI-E slot in the provided I/O drawers.

FCP/FICON feature codes are:
- 3143: 4-port shortwave 4Gb FCP/FICON
- 3243: 4-port longwave 4Gb FCP/FICON
- 3245: 4-port longwave 4Gb FCP/FICON (10 km)
- 3153: 4-port shortwave 8Gb FCP/FICON (maximum of 8 per base frame and 8 per first expansion frame)
- 3253: 4-port longwave 8Gb FCP/FICON (10 km) (maximum of 8 per base frame and 8 per first expansion frame)

Note: The 8Gb/second FCP/FICON cards require installation in specific slots in the I/O enclosures, and are therefore limited to a maximum of 8 features in the base frame and an additional 8 features in the first expansion frame.

DS8800 and DS8700 Host Attachment Host Port and Installation Sequence Guide and best practices

This guide is updated from time to time. The latest version is here:

http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/TD105671

Best practices guidelines

Isolate host connections from remote copy connections (MM, GM, GC, and MGM) on a host adapter basis.

Isolate zSeries and other host connections from IBM i host connections on a host port basis

Always have symmetric pathing by connection type (i.e., use the same number of paths on all host adapters used by each connection type).

Size the number of host adapters needed based on expected aggregate maximum bandwidth and maximum IOPS (use Disk Magic or other common sizing methods based on actual or expected workload).

Sharing different connection types within an I/O enclosure is encouraged.


When possible, isolate asynchronous from synchronous copy connections on a host adapter basis.

When utilizing multipathing, try to zone ports from different I/O enclosures to provide redundancy and balance (i.e., include a port from a host adapter in enclosure 0 and one in enclosure 1).

DS8300 and DS6000 Host Attachment

Contact IBM if you need this information

DS8000 Solid State Drives (SSD)

Perhaps one of the most exciting innovations to happen to Enterprise Storage is SSD. We have just begun to explore the promising future of this technology. Solid-state storage means using a memory-type device for mass storage rather than spinning disk or tape. First-to-market devices take the shape of standard hard disks, so they plug easily into existing disk systems.

IBM is making solid-state storage affordable with innovative architectures, system and application integration, and management tools that enable effective use of solid-state storage. Solid-state technologies will continue to evolve, and IBM researchers have been making significant breakthroughs. IBM will continue to bring the best implementations to our customers as innovation allows us to bring the full value of this technology to market.

Solid-state storage technology can have the following benefits:
- Significantly improved performance for hard-to-tune, I/O-bound applications; no code changes required
- Reduced floor space
- Can be filled near 100% without performance degradation
- Faster IOPS
- Faster access times
- Reduced energy use

The DS8000 offers IBM i SSD integration choices: you can either exploit the tools within the IBM i OS, or use the DS8000 Easy Tier function, which is independent of the server.

IBM i SSD tools and automation (for internal disk and DS8000):
- Manually:
  - Create an all-SSD User ASP or Independent ASP
  - Manually place data onto the User ASP or IASP
- DB2 Media Preference:
  - The user controls what media type database files should be stored on
  - DB files known to be I/O performance critical can explicitly be placed on high-performing SSDs
  - Dynamic changes to media preference are supported, which enables dynamic data movement
- ASP Balancer:
  - Based on read I/O count statistics for each 1 MB extent
  - Migrates "hot" extents from HDDs to SSDs and "cold" extents from SSDs to HDDs
- UDFS Media Preference (new with IBM i 7.1):
  - New 'Unit' parameter on the CRTUDFS command

DS8000 (agnostic to IBM i and DB2):
- IBM System Storage Easy Tier works with IBM i: DS8000 software locates and migrates hot data onto DS8000 SSDs (any IBM i version)
- Advantages of using Easy Tier Automatic Mode:
  - Designed to be easy: the user is not required to make a lot of decisions or go through an extensive implementation process
  - Efficient use of SSD capacity: Easy Tier moves 1 gigabyte data extents between storage tiers
  - Intelligence: Easy Tier learns about the workload over a period of time (24 hours); as workload patterns change, Easy Tier finds any new highly active ("hot") extents and exchanges them with extents residing on SSDs that may have "cooled off"
  - Negligible performance impact: Easy Tier moves data gradually to avoid contention with host I/O activity; the overhead associated with Easy Tier management is nearly undetectable, and there is no need for storage administrators to worry about scheduling when migrations occur

There are benefits to implementing Easy Tier even if you plan to manage SSDs from the IBM i OS. These benefits are documented in this white paper:

http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/FLASH10755

DS8000 Logical Configuration

The following chart shows the logical configuration constructs for the DS8000


The numbering of the ranks depends on the order in which the ranks were created, and in turn on the order in which the RAID arrays were created on the array sites. The GUI interface for configuring the DS8000 can automatically create RAID arrays and ranks in the same step, and balances the creation of RAID arrays across DA pairs. We recommend that, if you use the DS CLI to configure the DS8000, you also attempt to create RAID ranks on the array sites in a consistent manner; this creates a configuration that is more easily managed when you come to create the extent pools and balance them between the servers and the DA pairs.

The serial number reported by the DS6000 or DS8000 to the IBM i contains the LUN number. In the example below, the serial number of disk unit DD019 is 30-1001000, which is LUN 01 in LSS 10. In the DS6000 or DS8000 this is reported as LUN 1001.
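Assuming the 30-1001000 layout described above, the mapping from the reported serial number back to LSS and LUN can be sketched as follows (the helper name is illustrative):

```python
def lss_and_lun(ibmi_serial: str) -> tuple:
    """Extract (LSS, LUN) from an IBM i disk unit serial such as '30-1001000'.

    The first four digits after the dash form the DS8000 LUN id: two digits
    of LSS followed by two digits of LUN number within that LSS.
    """
    lun_id = ibmi_serial.split("-")[1][:4]  # e.g. '1001'
    return lun_id[:2], lun_id[2:]

print(lss_and_lun("30-1001000"))  # ('10', '01')
```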


RAID Arrays

The DS8000 allows the choice of RAID 5, RAID 10, and RAID 6. RAID 6 is supported on the DS8000 at LIC levels R4 and above, with DA feature codes 3041, 3051, and 3061.

Disk Magic allows you to model different RAID options to find the configuration that best fits your performance and availability requirements. If you are considering a configuration of 'short-stroked' RAID 5 arrays (under-using the capacity of a RAID 5 array for performance reasons), we recommend that you consider RAID 10. The benefit in this case is that there is no requirement to 'fence' spare capacity to maintain performance.

Extent Pools

The Extent Pool construct provides flexibility in data placement as well as ease of management. To optimize performance, it is important to balance workload activity across Extent Pools assigned to server0 and Extent Pools assigned to server1. Typically this means assigning an equal number of Ranks to the Extent Pools of each server, and an equal amount of workload activity to all Ranks.

We recommend assigning even-numbered ranks to even-numbered extent pools; this ensures balance between server 0 and server 1. This requires that you configure the ranks in order on the array sites: array site 1 becomes array 1 and rank 1, and so on.
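The even/odd balancing convention can be sketched as follows (pool names and rank numbering are illustrative only):

```python
def assign_ranks(rank_ids):
    """Assign even-numbered ranks to the even (server 0) extent pool and
    odd-numbered ranks to the odd (server 1) extent pool."""
    pools = {"even pool (server 0)": [], "odd pool (server 1)": []}
    for rank in rank_ids:
        key = "even pool (server 0)" if rank % 2 == 0 else "odd pool (server 1)"
        pools[key].append(rank)
    return pools

print(assign_ranks([0, 1, 2, 3, 4, 5]))
# {'even pool (server 0)': [0, 2, 4], 'odd pool (server 1)': [1, 3, 5]}
```

An equal number of ranks lands on each server, which is the balance the text recommends.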

The use of multi-rank extent pools allows you to define LUNs larger than the size of a single rank. At DS8000 levels prior to Licensed Internal Code


(LIC) Release 3, any LUN will not be 'striped' across all the ranks in the extent pool; the only time a LUN will span multiple ranks is when it does not fit in the original rank. Therefore, a LUN will usually use no more than 6 or 7 disk arms. When allocating multiple LUNs into a multi-rank extent pool, the LUNs will be allocated on the rank with the most available free space; this results in a 'round robin' style of allocation and will allocate LUNs onto the ranks in a roughly even fashion, assuming that the LUNs and ranks are the same size.
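The pre-R3 'most free space' behaviour described above can be simulated as follows (a sketch with made-up rank and LUN sizes):

```python
def allocate(lun_sizes_gb, rank_free_gb):
    """Place each LUN wholly on the rank with the most free space,
    mimicking the pre-R3 (non-striped) allocation behaviour."""
    free = list(rank_free_gb)
    placement = []
    for lun in lun_sizes_gb:
        target = max(range(len(free)), key=lambda r: free[r])
        free[target] -= lun
        placement.append(target)
    return placement

# Four equal LUNs over two equally sized ranks alternate between them,
# giving the roughly even 'round robin' spread described above.
print(allocate([70, 70, 70, 70], [520, 520]))  # [0, 1, 0, 1]
```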

DS8000 Licensed Internal Code (LIC) Release 3 introduced a new allocation algorithm, Storage Pool Striping (SPS). This allows finer-granularity striping across all the ranks in an extent pool and provides substantial performance benefits for some workloads.

For IBM i attached subsystems we recommend using multi-rank extent pools in combination with Storage Pool Striping (rotate extents). Dedicating ranks or extent pools to a single workload will provide more predictable performance, but may cost more in terms of disk capacity to provide the desired level of performance.

Defining multiple ranks in an Extent Pool also provides efficiency in usable space. You can use Capacity Magic to estimate the usable capacity on the ranks for your chosen LUN size. For IBM i workloads, if you have a requirement to isolate workloads, you will need to define two Extent Pools (one for each server) for each workload.

Whichever configuration option you prefer, discuss it with your IBM representative or Business Partner, as our performance modeling tool, Disk Magic, needs to accurately reflect the configuration that you are planning. If you model a solution where all the disks are shared by all the workloads and then decide to isolate workloads, you may need more disks to achieve the same performance levels.

Data layout

Selecting an appropriate data layout strategy depends on your primary objectives

Spreading workloads across all components maximizes the utilization of the hardware. This includes spreading workloads across all the available Host Adapters and Ranks. However, whenever resources are shared, it is always possible that performance problems may arise due to contention on those resources.

To protect critical workloads, you should isolate them, minimizing the chance that non-critical workloads can impact their performance.

The greater the granularity of the resource, the more it can be shared. For example, there is only one cache per processor complex, so its use must be shared, although DS8000 intelligent cache management prevents one workload from dominating the cache. In


contrast, there are frequently hundreds of DDMs, so workloads can easily be isolated on different DDMs.

To spread a workload across ranks, you need to balance the I/Os for that workload across all the available ranks. SPS will achieve this when you use multi-rank extent pools.

Isolation of workloads is most easily accomplished where each ASP or LPAR has its own extent pool pair. This ensures that you can place data where you intend. I/O activity should be balanced between the two servers or controllers on the DS8000. This is achieved by balancing between odd and even extent pools, and by making sure that the number of ranks is balanced between odd and even extent pools.

Make sure that you isolate critical workloads. We strongly recommend placing only IBM i LUNs on any rank (rather than mixing them with non-IBM i LUNs). This is for performance management reasons; it will not in itself provide improved performance. If you mix production and development workloads on ranks, make sure that the customer understands which actions may impact production performance, for example adding LUNs to ASPs.

iASPs

When designing a DS8000 layout for an iASP configuration, you have the option to model the iASP LUNs on isolated Extent Pools. You may achieve more cost-effective performance by putting iASP and SYSBAS LUNs onto the same shared ranks and extent pools, for example when the SYSBAS activity would otherwise drive a requirement for more disks to maintain performance if the ranks were dedicated to SYSBAS. Remember when using Disk Magic to model your iASP configuration that you may need smaller LUNs for the SYSBAS requirement.

LUN Size

LUNs must be defined in specific sizes that emulate IBM i devices. The size of the LUNs defined is typically related to the wait time component of the response time: if there are insufficient LUNs, wait time typically increases. The sizing process determines the correct number of LUNs required to address the required capacity while meeting performance objectives.

The number of LUNs drives the requirement for more FC adapters on the IBM i due to the addressing restrictions of IBM i. Remember that each path to a LUN counts towards the maximum addressable LUNs on each IBM i IOA. For example, if you have 64 LUNs and would like 2 paths to each LUN, this will require 4 IOAs for addressability on releases prior to IBM i 6.1.
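This addressing arithmetic can be sketched as follows (the helper is illustrative; the 32-address limit applies to the pre-6.1 IOP-based adapters discussed here):

```python
import math

def ioas_required(luns: int, paths_per_lun: int, addresses_per_ioa: int = 32) -> int:
    """IOAs needed when every path to a LUN consumes one of the
    (pre-IBM i 6.1) 32 LUN addresses available on an IOA."""
    return paths_per_lun * math.ceil(luns / addresses_per_ioa)

# The example above: 64 LUNs, 2 paths each, on releases prior to IBM i 6.1.
print(ioas_required(64, 2))  # 4
```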

Disk Magic can be used to model the number of LUNs required. Disk Magic does not always accurately predict the effective capacity of the ranks, depending on the DDM size


selected and the number of spares assigned. The IBM tool Capacity Magic can be used to verify capacity and space utilization plans.

IBM i 6.1 and higher enable the use of larger LUNs while maintaining performance. You can typically start modeling based on a 70 GB LUN size. A smaller number of larger LUNs will reduce the number of I/O ports required on both the IBM i and the DS8000. Remember that in an iASP environment you may exploit larger LUNs in the iASPs, but SYSBAS may require more, smaller LUNs to maintain performance.

Multipath

Multipath provides greater resiliency for SAN attached storage. In combination with RAID 5, RAID 6, or RAID 10 protection, DS8000 multipath protects the data paths and the data itself without requiring additional LUNs. However, additional I/O adapters and changes to the SAN fabric configuration may be required.

The IBM i supports up to 8 paths to each LUN. In addition to the availability considerations, lab performance testing has shown that 2 or 3 paths provide performance improvements when compared to a single path. Typically, 2 paths to a LUN is the ideal balance of price and performance. The Disk Magic tool supports multipathing over 2 paths.

You might want to consider more than 2 paths for workloads where there is high wait time, or where high I/O rates are expected to the LUNs, for example SSD-backed LUNs.

It is important to plan for multipath so that the two or more paths to the same set of LUNs use different elements of connection, such as DS6000 and DS8000 host adapters, SAN switches, IBM i I/O towers, and HSL loops. Good planning for multipath includes:

- Connections to the same set of LUNs via different DS host cards in different I/O enclosures on the DS8000
- Connections to the same set of LUNs via different SAN switches
- Placing the IOP/IOA adapter pairs in the IBM i I/O towers that connect to the same set of LUNs in different expansion towers, located on different HSL or 12X loops wherever possible

When an IBM i system IPLs, it discovers all paths to the disk. The first path discovered will be the preferred path for I/O. If multiple IBM i LPARs are sharing the same DS8000 or DS6000 Host Adapters, each system may discover the same initial path to the disk. To avoid contention on SAN switch and HA ports, it is essential that you implement LUN masking in the SAN: specify a different range of ports and HAs for each LPAR to ensure that activity is balanced across all available paths. You can do LUN masking either in the DS8000, using the volume group construct, or by explicitly mapping IBM i IOAs to DS8000 HAs in the SAN switch.


When assigning LUNs to HAs, the recommendation is to isolate IBM i LUNs on their own HAs where possible. We recommend that you spread activity across the available HAs; since there is typically little skew in an IBM i workload, this is usually not difficult. We do not allow multiple IBM i IOAs to see multiple HAs, because in this case all the IBM i IOAs could establish paths through the same HA, which would result in unbalanced I/O traffic between the DS8000 Host Adapters.

Host Attachments

IBM i I/O adapters should be defined as FC-AL connections for direct attached connections (without a switch). For all switched connections, use SCSI-FCP, which is the default for the DS8000.

In a pre-IBM i 6.1 multipath environment, it is unlikely that any single host attachment is the bottleneck in an IBM i DS8000 configuration. It is common to combine multiple IBM i FC attachments onto a single DS8000 FC attachment through a SAN. In this case it is important to ensure that you do not over-configure the DS8000 attachment; Disk Magic can be used to model it. The normal range is to combine 2-4 IBM i IOAs onto a single DS8000 host attachment.

IBM i fibre adapters should be placed in accordance with the card placement guidelines found in the Redpaper "PCI, PCI-X, PCI-X DDR, and PCIe Placement Rules for IBM i Models", available for download from http://www.redbooks.ibm.com/abstracts/redp4011.html?Open. A previous, now obsolete, version of this paper is available for releases of code prior to V5R2: "PCI Card Placement Rules for the IBM eServer iSeries Server, OS/400 Version 5 Release 2, September 2003", located on the web at http://www.redbooks.ibm.com/abstracts/redp3638.html?Open.

You are encouraged to consider these additional guidelines

For 0588, 5088, 5094, 5096, 5294, and 5296 style I/O towers, it is recommended to install no more than 1 fibre adapter per multi-adapter bridge (MAB), whether the attachment is for disk or tape. If the MAB is dedicated to disk attachment, then 2 adapters may be considered.

Balance the fibre adapters evenly across the HSL and 12X loops. Always place both the IOP and IOA in 64-bit card slots.

If you are not using multipath, you may find optimum performance is achieved by limiting the LUNs on each IBM i FC card to 20-22.

When spreading activity across the host attachments, you need to make sure that, in a multipath configuration, alternate paths are provided to each server of the DS8000.


Multipath connectivity is provided starting with i5/OS V5R3 and is recommended when connecting to the DS8000 for availability and concurrent maintenance.

Connections on the IBM i should be to the 2787 or 5760 cards (5760 cards require V5R3 or higher, and they require an 800, 810, 825, 870, 890, or i5 CPU). We recommend using the first two 2787 cards in each node in a tower and ensuring that cards are balanced across all towers and HSL rings.

IOPless Host Attachments

IBM i 61 together with Power 6 and the new Smart IOAs introduces the IOP-less IOAs for disk and tape The Smart IOAs are

ndash 5735 ndash 2 Port 8Gb Fibre Smart IOA PCIendash 5749 ndash 2 Port 4Gb Fibre Smart IOA PCI-x DDR2 IBM i OS onlyndash 5774 ndash 2 Port 4Gb Fibre Smart-IOA PCIe i OS LINUX and AIX

These new cards provide significant performance advantages and do not require IOPs thus saving costs and slots The maximum number of supported addresses is increased from 32 to 64 for each port Typically you would configure 2 paths to each LUN for availability

IBM i 61 or higher with POWER 6 of POWER 7 and the IOPless adapters can support up to 64 LUNs on each port however with the move to configure larger LUNs for most workloads you should limit the total LUNs on a card to 64 (32 on each port) For workloads with a low IO rate you may be able to support more than 32 LUNs on each port

For IBM i 61 or higher with POWER 6 of POWER 7 configurations and the new IOP-less IOAs you should plan on a 11 ratio between the IBM i IOA ports and the DS8000 IO ports For the highest performance configurations where host attachments are stressed (not likely in an IBM i production workload) you should plan to use only 2 ports of the DS8000 4Gb 4-port HA card The DS8700 and DS8800 have both 4-port and 8-port 8Gb Host Adapter cards For maximum performance select the 4-port card for increased connectivity select the 8-port card The increased performance capabilities of these new cards can be modeled using Disk Magic

Card placement guidelines are as follows

• 6 Smart IOAs per 12X loop
• 4 Smart IOAs per HSL-2 loop

Copyright IBM Corporation, September 29, 2011. http://www-03.ibm.com/support/techdocs/atsmastr.nsf Document # TD103095


The IOP-less adapters support DS8000 and DS6000 (not ESS). The same adapters may be used for disk and tape, as well as Boot from SAN; however, it is strongly recommended not to share IOAs between disk and tape.

Maximum resource names in IBM i

The System i Handbook and the System Builder document the maximum quantity of internal disk and external LUNs. For some of these maximum limits, this isn't really the maximum quantity of LUNs as we normally think of them, but the maximum quantity that will be seen at the hardware level. For example:

Fibre card 1 will have the 1st path to 32 LUNs.
Fibre card 2 will have the 2nd path to the same 32 LUNs.
IBM i microcode and operating system will see this as 64 resource names, and this 64 count is what you use when trying to determine whether you're approaching the maximum supported number for the i5 model.
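The accounting above can be sketched as a quick check (an illustrative helper, not an IBM tool):

```python
def resource_names(luns: int, paths: int) -> int:
    """Each path to a LUN appears to IBM i as its own resource name."""
    return luns * paths

# The example from the text: 32 LUNs, each reached via 2 Fibre cards
assert resource_names(32, 2) == 64
```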

Logical Configuration

When defining IBM i LUNs, you can define them as protected unless you want to use i5/OS mirroring, in which case you need to ensure that you define the LUNs as unprotected when you create them. If you are using Boot from SAN at a release prior to i 6.1, you will have to use mirroring to protect the Load Source; in this case you must define the Load Source LUNs as unprotected.

If you are planning to use copy services, either for high availability or just for migrations, it is important to note that the source and target in a copy services pair must have the same attributes. Once the LUN is defined with the protection attribute, this cannot be changed without deleting the LUN and re-defining it. Deleting a LUN deletes all the data on the LUN.

A host adapter port is identified to the DS through its World Wide Port Name (WWPN). A set of host ports can be associated into a port group and managed together. This port group is referred to as a Host attachment within the GUI and as a Host connection in the DS CLI. A host attachment can be associated with a volume group to define which LUNs will be assigned to that host adapter.

A volume group is a group of logical volumes (LUNs) that are attached to a host adapter. The type of volume group used with open system hosts determines how the logical volume number is converted to the host-addressable LUN_ID on the Fibre Channel SCSI interface.


When associating a host attachment to the volume group, the host attachment contains attributes that define the logical block size and the address discovery method used by that host adapter. These attributes must be consistent with the type of volume group assigned to that host attachment. i5/OS LUNs are accessed by the host in 520-byte blocks. Of these 520 bytes, 8 are used by the operating system, so each block contains 512 usable bytes, as in LUNs for other open systems. So for an i5/OS host attachment, 520 is the correct block size to define. The correct address discovery method is Report LUN. The correct block size and address discovery for IBM i are generated by specifying IBM i attachments when creating volume groups and host attachments.
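The 520-byte block layout means usable capacity is 512/520 of the raw allocation; a minimal sketch of the arithmetic:

```python
BLOCK_RAW = 520      # bytes per block on an i5/OS LUN
BLOCK_USABLE = 512   # bytes available to applications (8 reserved by the OS)

def usable_bytes(blocks: int) -> int:
    """Usable capacity of an i5/OS LUN spanning `blocks` 520-byte blocks."""
    return blocks * BLOCK_USABLE

def raw_bytes(blocks: int) -> int:
    """Raw capacity consumed by the same LUN."""
    return blocks * BLOCK_RAW

assert usable_bytes(1000) == 512_000
assert raw_bytes(1000) == 520_000
```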

The LUN ID on the DS8000 is a combination of the LSS number and an incremented LUN number.

Adding LUNs to ASP

Adding a LUN to an ASP generates I/O activity on the rank as the LUN is formatted. If production work shares the same rank, you may see a performance impact. For this reason, we recommend scheduling the addition of LUNs to ASPs outside peak intervals.

Boot from SAN

IOP 2847, plus the appropriate Fibre IOA, allows you to place an i5/OS load source on a Fibre Channel attached ESS Model 800, DS6000 or DS8000. The LUNs are attached using features 2766, 2787 or 5760, and apart from the Load Source you can have another 31 LUNs on the same attachment. The Load Source device does not support multipath at V5R4, but other LUNs on the 2847 can be multipath devices. To provide redundancy for the Load Source you can have two 2847s and use i5/OS mirroring (with the Load Source LUN defined as unprotected). Starting with IBM i 6.1, the LUN containing the load source may participate in a multipath configuration.

Software

It is essential to ensure that you have up-to-date software levels installed. There are fixes that provide performance enhancements, correct performance reporting, and support for new functions. As always, call the support center before installation to verify that you are current with fixes for the hardware you are installing. It is also important to maintain current software levels to make sure that you benefit from new fixes as they are developed.

When updating storage subsystem LIC, it is also important to check whether any server software updates are required. Details of supported configurations and software levels are provided by the System Storage Interoperation Center: http://www-03.ibm.com/systems/support/storage/config/ssic/displayesssearchwithoutjs.wss?start_over=yes


Performance Monitoring

Once your storage subsystem is installed, it is essential that you continue to monitor its performance. IBM i Performance Tools reports provide information on I/O rates and on response times to the server. This allows you to track trends in increased workload and changes in response time. You should review these trends to ensure that your storage subsystem continues to meet your performance and capacity requirements. Make sure that you are current on fixes, to ensure that Performance Tools reports your external storage correctly.

IBM i 7.1 adds a new category, EXTSTG, to IBM i Collection Services, which collects new performance metrics from the DS8000. This function requires DS8000 R4 or later firmware. Data can be presented in graphs using iDoctor today, and will be incorporated into Performance Data Investigator (PDI) in a future release.

If you have multiple servers attached to a storage subsystem, particularly if other platforms are attached in addition to IBM i, it is essential to have a performance tool that enables you to monitor performance from the storage subsystem perspective.

IBM TPC for Disk provides a comprehensive tool for managing the performance of DS8000s. You should collect data from all attached storage subsystems in 15-minute intervals. In the event of a performance problem IBM will ask for this data; without it, resolution of any problem may be prolonged.

Copy Services Considerations

Copy Services is an optional feature of the IBM System Storage DS8000. It brings powerful data copying and mirroring technologies, previously available only for mainframe storage, to open systems environments.

The following Copy Services functions are available on the DS8000 and are fully supported on IBM i:
• Metro Mirror (previously known as synchronous PPRC)
• Global Mirror (previously known as asynchronous PPRC)
• FlashCopy, including Space Efficient FlashCopy (FlashCopy SE)


Customers may use Copy Services on the entire disk space or on individual IASPs. Metro Mirror and Global Mirror provide business continuity and disaster recovery, while FlashCopy helps to minimize the backup window on a production system.

Space Efficient FlashCopy is designed for temporary copies. Copy duration should generally not last longer than 24 hours unless the source and target volumes have little write activity. FlashCopy SE is optimized for use cases where a small percentage of the source volume is updated during the life of the relationship. If much more than 20% of the source is expected to change, there may be tradeoffs in terms of performance versus space efficiency; in this case, standard FlashCopy may be a good alternative. Since a background copy would update the entire target, it would not make much sense and is not permitted with FlashCopy SE. Likewise, establishing a FlashCopy SE relationship just prior to running applications that make widespread changes to the source volumes (e.g. database reorganizations, formats, full volume restores from tape, etc.) is not advisable. For the latest recommendations on using FlashCopy SE, please refer to this document:
IBMers: http://w3-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/FLASH10617
Business Partners: http://partners.boulder.ibm.com/src/atsmastr.nsf/WebIndex/FLASH10617
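A rough sizing sketch for the space-efficient repository, driven by the expected change rate discussed above. The 20% guideline comes from the text; the headroom multiplier is an assumption for illustration, not an IBM rule:

```python
def se_repository_gb(source_gb: float, change_rate: float,
                     headroom: float = 1.5) -> float:
    """Estimate FlashCopy SE repository capacity.

    change_rate: fraction of the source expected to be rewritten during
                 the relationship (the text suggests staying well under 0.20).
    headroom:    safety multiplier -- an assumption, not an IBM figure.
    """
    return source_gb * change_rate * headroom

# e.g. a 1000 GB source with 10% expected change
assert se_repository_gb(1000, 0.10) == 150.0
```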

Consulting services are available from IBM STG Lab Services to assist in the planning and implementation of DS8000 Copy Services in an IBM i environment.


The Copy Services functions at a glance:

• FlashCopy – for backups and snapshots (copy, no-copy and space efficient FlashCopy options)
• Metro Mirror (synchronous Peer-to-Peer Remote Copy) – for local availability
• Global Copy (Extended Distance Peer-to-Peer Remote Copy, continuous copy) – for data migration only
• Global Mirror (asynchronous, with Consistency Group support) – for DR


IBM STG Lab Services: http://www-03.ibm.com/systems/services/labservices/platforms/labservices_i.html

Further References

For further detailed information on implementing DS6000 and DS8000 in an IBM i environment, refer to the following Redbooks, which can be downloaded from www.redbooks.ibm.com:

IBM System Storage Copy Services on System i A Guide to Planning and Implementation (SG24-7103)

iSeries and IBM TotalStorage A guide to implementing external disk on IBM eServer i5 (SG24-7120)

High Availability on IBM i including PowerHA on i and 61 HA enhancements (SG24-7405)

Acknowledgements

Thanks to Nancy Roper and Jana Jamsek from the Advanced Technical Support Storage group, Sue Baker and Eric Hess from Advanced Technical Support Power Systems, and Selwyn Dickey and Tim Klubertanz from STG Lab Services for their input to and review of this document.


When possible, isolate asynchronous from synchronous copy connections on a host adapter basis.

When utilizing multipathing, try to zone ports from different I/O enclosures to provide redundancy and balance (i.e. include a port from a host adapter in enclosure 0 and one in enclosure 1).

DS8300 and DS6000 Host Attachment

Contact IBM if you need this information

DS8000 Solid State Drives (SSD)

Perhaps one of the most exciting innovations to happen to Enterprise Storage is SSD. We have just begun to explore the promising future of this technology. Solid-state storage means using a memory-type device for mass storage, rather than spinning disk or tape. First-to-market devices have the shape of standard hard disks, so they plug easily into existing disk systems.

IBM is making solid-state storage affordable with innovative architectures, system and application integration, and management tools that enable effective use of solid-state storage. Solid-state technologies will continue to evolve, and IBM researchers have been making significant breakthroughs. IBM will continue to bring the best implementations to our customers as innovation allows us to bring the full value of this technology to market.

Solid-state storage technology can have the following benefits:
• Significantly improved performance for hard-to-tune I/O-bound applications; no code changes required
• Reduced floor space
• Can be filled to near 100% without performance degradation
• Faster IOPS and faster access times
• Reduced energy use

DS8000 offers IBM i SSD integration choices: you can either exploit the tools within the IBM i OS, or use the DS8000 Easy Tier function, which is independent of the server.

IBM i SSD tools and automation (for internal disk and DS8000):
– Manual placement
  • Create an all-SSD User ASP or Independent ASP
  • Manually place data onto the User ASP or IASP
– DB2 Media Preference
  • The user controls what media type database files should be stored on
  • DB files known to be I/O performance critical can explicitly be placed on high-performing SSDs
  • Dynamic changes to media preference are supported, which enables dynamic data movement
– ASP Balancer
  • Based on read I/O count statistics for each 1 MB extent
  • Migrates "hot" extents from HDDs to SSDs and "cold" extents from SSDs to HDDs
– UDFS Media Preference (new with IBM i 7.1)
  • New 'Unit' parameter on the CRTUDFS command

DS8000 Easy Tier (agnostic to IBM i and DB2):
– IBM System Storage Easy Tier works with IBM i: DS8000 software locates and migrates hot data onto DS8000 SSDs – any IBM i version
– Advantages of using Easy Tier Automatic Mode:
  • Designed to be easy: the user is not required to make a lot of decisions or go through an extensive implementation process
  • Efficient use of SSD capacity: Easy Tier moves 1-gigabyte data extents between storage tiers
  • Intelligence: Easy Tier learns about the workload over a period of time (24 hours); as workload patterns change, Easy Tier finds any new highly active ("hot") extents and exchanges them with extents residing on SSDs that may have "cooled off"
  • Negligible performance impact: Easy Tier moves data gradually to avoid contention with host I/O activity; the overhead associated with Easy Tier management is nearly undetectable, and there is no need for storage administrators to worry about scheduling when migrations occur
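The hot/cold migration decision used by the ASP Balancer and Easy Tier can be illustrated with a toy routine that ranks extents by read count (purely illustrative; the real tools work from ongoing read I/O statistics per 1 MB or 1 GB extent):

```python
def pick_hot_extents(read_counts: dict[int, int], ssd_slots: int) -> list[int]:
    """Return the IDs of the `ssd_slots` most-read extents, i.e. the
    candidates to migrate from HDD to SSD. Hypothetical helper only."""
    ranked = sorted(read_counts, key=read_counts.get, reverse=True)
    return ranked[:ssd_slots]

counts = {0: 5, 1: 900, 2: 40, 3: 700}   # reads observed per extent
assert pick_hot_extents(counts, 2) == [1, 3]
```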

There are benefits to implementing Easy Tier even if you plan to manage SSDs from the IBM i OS. These benefits are documented in this white paper: http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/FLASH10755

DS8000 Logical Configuration

The following chart shows the logical configuration constructs for the DS8000.


The numbering of the ranks depends on the order in which the ranks were created, and in turn on the order in which the RAID arrays were created on the array sites. The GUI interface for configuring the DS8000 can automatically create RAID arrays and ranks in the same step, and balances the creation of RAID arrays across DA pairs. We recommend that, if you use the DS CLI to configure the DS8000, you also create RAID ranks on the array sites in a consistent manner; this creates a configuration that is more easily managed when you come to create the extent pools and balance them between the servers and the DA pairs.

The serial number reported by the DS6000 or DS8000 to the IBM i contains the LUN number. In the example below, the serial number of disk unit DD019 is 30-1001000, which is LUN 01 in LSS 10; in the DS6000 or DS8000 this is reported as LUN 1001.
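Decoding that serial number can be sketched as follows (a hypothetical parser; the field positions simply follow the worked example above):

```python
def decode_i_serial(serial: str) -> tuple[str, str]:
    """Split an IBM i disk serial such as '30-1001000' into (LSS, LUN).

    The four digits after the dash form the DS8000 LUN ID:
    two hex digits of LSS followed by two of LUN number.
    """
    lun_id = serial.split("-")[1][:4]   # '1001'
    return lun_id[:2], lun_id[2:]       # ('10', '01')

assert decode_i_serial("30-1001000") == ("10", "01")
```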


RAID Arrays

The DS8000 allows the choice of RAID 5, RAID 10 and RAID 6. RAID 6 is supported on DS8000 at LIC levels R4 and above, with DA feature codes 3041, 3051 and 3061.

Disk Magic allows you to model different RAID options to find the configuration that best fits your performance and availability requirements. If you are considering a configuration of 'short-stroked' RAID 5 arrays (under-using the capacity of a RAID 5 array for performance reasons), we recommend that you consider RAID 10. The benefit in this case is that there is no requirement to 'fence' spare capacity to maintain performance.

Extent Pools

The Extent Pool construct provides flexibility in data placement as well as ease of management. To optimize performance, it is important to balance workload activity across Extent Pools assigned to server0 and Extent Pools assigned to server1. Typically this means assigning an equal number of Ranks to Extent Pools on server0 and on server1, and an equal amount of workload activity to all Ranks.

We recommend assigning even-numbered ranks to even-numbered extent pools; this ensures balance between server 0 and server 1. This requires that you configured the ranks in order on the array sites: array site 1 becomes array 1 and rank 1, and so on.
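Assuming the ranks were created in array-site order, the even/odd balancing recommendation can be sketched as (an illustrative helper, not a DS CLI function):

```python
def assign_ranks(rank_ids: list[int]) -> dict[str, list[int]]:
    """Even-numbered ranks go to even (server0) extent pools,
    odd-numbered ranks to odd (server1) extent pools."""
    return {
        "server0_even_pools": [r for r in rank_ids if r % 2 == 0],
        "server1_odd_pools":  [r for r in rank_ids if r % 2 == 1],
    }

layout = assign_ranks(list(range(8)))
assert layout["server0_even_pools"] == [0, 2, 4, 6]
assert layout["server1_odd_pools"] == [1, 3, 5, 7]
```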

The use of multi-rank extent pools allows you to define LUNs larger than the size of a single rank At DS8000 licensed internal code levels prior to Licensed Internal Code


(LIC) Release 3, a LUN is not 'striped' across all the ranks in the extent pool; the only time a LUN spans multiple ranks is when it does not fit in the original rank. Therefore a LUN will usually use no more than 6 or 7 disk arms. When allocating multiple LUNs into a multi-rank extent pool, each LUN is allocated on the rank with the most available free space; this results in a 'round robin' style of allocation and places LUNs onto the ranks in a roughly even fashion, assuming that the LUNs and ranks are the same size.
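The 'most free space' placement described above behaves like a round robin when LUNs and ranks are uniform; a toy simulation (illustrative only):

```python
def allocate(lun_sizes: list[int], rank_free: list[int]) -> list[int]:
    """Place each LUN on the rank with the most free space;
    returns the chosen rank index per LUN."""
    placement = []
    for size in lun_sizes:
        rank = max(range(len(rank_free)), key=rank_free.__getitem__)
        rank_free[rank] -= size
        placement.append(rank)
    return placement

# 6 equal LUNs over 3 equal ranks cycle evenly across the ranks
assert allocate([70] * 6, [500, 500, 500]) == [0, 1, 2, 0, 1, 2]
```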

DS8000 Licensed Internal Code (LIC) Release 3 introduced a new allocation algorithm, Storage Pool Striping (SPS). This allows finer-granularity striping across all the ranks in an extent pool, and provides substantial performance benefits for some workloads.

For IBM i attached subsystems we recommend using multi-rank extent pools in combination with Storage Pool Striping (rotate extents). Dedicating ranks or extent pools to a single workload will provide more predictable performance, but may cost more in terms of disk capacity to provide the desired level of performance.

Defining multiple ranks in an Extent Pool also provides efficiency in usable space. You can use Capacity Magic to estimate the usable capacity on the ranks for your chosen LUN size. For IBM i workloads, if you have a requirement to isolate workloads, you will need to define two Extent Pools (one for each server) for each workload.

Whichever configuration option you prefer, discuss it with your IBM representative or Business Partner, as our performance modeling tool Disk Magic needs to accurately reflect the configuration that you are planning. If you model a solution where all the disks are shared with all the workloads, then decide to isolate workloads, you may need more disks to achieve the same performance levels.

Data layout

Selecting an appropriate data layout strategy depends on your primary objectives.

Spreading workloads across all components maximizes the utilization of the hardware. This includes spreading workloads across all the available Host Adapters and Ranks. However, whenever resources are shared, performance problems may arise due to contention on those resources.

To protect critical workloads, you should isolate them, minimizing the chance that non-critical workloads can impact their performance.

The greater the granularity of the resource, the more it can be shared. For example, there is only one cache per processor complex, so its use must be shared, although DS8000 intelligent cache management prevents one workload from dominating the cache. In


contrast, there are frequently hundreds of DDMs, so workloads can easily be isolated on different DDMs.

To spread a workload across ranks, you need to balance its I/Os across all the available ranks. SPS achieves this when you use multi-rank extent pools.

Isolation of workloads is most easily accomplished where each ASP or LPAR has its own extent pool pair. This ensures that you can place data where you intend. I/O activity should be balanced between the two servers or controllers on the DS8000; this is achieved by balancing between odd and even extent pools, and making sure that the number of ranks is balanced between odd and even extent pools.

Make sure that you isolate critical workloads. We strongly recommend placing only IBM i LUNs on any rank (rather than mixing with non-IBM i). This is for performance management reasons; it will not in itself provide improved performance. If you mix production and development workloads on ranks, make sure that the customer understands which actions may impact production performance, for example adding LUNs to ASPs.

iASPs

When designing a DS8000 layout for an iASP configuration, you have the option to model the iASP LUNs on isolated Extent Pools. You may achieve more cost-effective performance by putting iASP and SYSBAS LUNs onto the same shared ranks and extent pools, for example when the SYSBAS activity would otherwise drive a requirement for more disks to maintain performance if the ranks were dedicated to SYSBAS. Remember, when using Disk Magic to model your iASP configuration, that you may need smaller LUNs for the SYSBAS requirement.

LUN Size

LUNs must be defined in specific sizes that emulate IBM i devices. The size of the LUNs defined is typically related to the wait time component of the response time: if there are insufficient LUNs, wait time typically increases. The sizing process determines the correct number of LUNs required to address the required capacity while meeting performance objectives.

The number of LUNs drives the requirement for more FC adapters on the IBM i, due to the addressing restrictions of IBM i. Remember that each path to a LUN counts towards the maximum addressable LUNs on each IBM i IOA. For example, if you have 64 LUNs and would like 2 paths to each LUN, this requires 4 IOAs for addressability on releases prior to IBM i 6.1.
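That adapter arithmetic can be sketched as follows (an illustrative helper; 32 addresses per IOA is the pre-6.1 limit from the text, 64 the IOP-less limit):

```python
import math

def ioas_needed(luns: int, paths: int, addresses_per_ioa: int = 32) -> int:
    """Each path to each LUN consumes one address on some IOA."""
    return math.ceil(luns * paths / addresses_per_ioa)

assert ioas_needed(64, 2) == 4                         # the example above
assert ioas_needed(64, 2, addresses_per_ioa=64) == 2   # IOP-less, IBM i 6.1
```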

Disk Magic can be used to model the number of LUNs required. Disk Magic does not always accurately predict the effective capacity of the ranks, depending on the DDM size


selected and the number of spares assigned. The IBM tool Capacity Magic can be used to verify capacity and space utilization plans.

IBM i 6.1 and higher enables the use of larger LUNs while maintaining performance. You can typically start modeling based on a 70 GB LUN size. A smaller number of larger LUNs reduces the number of I/O ports required on both the IBM i and the DS8000. Remember that in an iASP environment you may exploit larger LUNs in the iASPs, but SYSBAS may require more, smaller LUNs to maintain performance.

Multipath

Multipath provides greater resiliency for SAN-attached storage. In combination with RAID 5, RAID 6 or RAID 10 protection, DS8000 multipath protects the data paths and the data itself without requiring additional LUNs. However, additional I/O adapters and changes to the SAN fabric configuration may be required.

The IBM i supports up to 8 paths to each LUN. In addition to the availability considerations, lab performance testing has shown that 2 or 3 paths provide performance improvements compared to a single path. Typically, 2 paths to a LUN is the ideal balance of price and performance. The Disk Magic tool supports multipathing over 2 paths.

You might want to consider more than 2 paths for workloads where there is high wait time, or where high I/O rates are expected to LUNs, for example SSD-backed LUNs.

It is important to plan for multipath so that the two or more paths to the same set of LUNs use different elements of connection, such as DS6000 and DS8000 host adapters, SAN switches, IBM i I/O towers and HSL loops. Good planning for multipath includes:

• Connections to the same set of LUNs via different DS host cards in different I/O enclosures on the DS8000
• Connections to the same set of LUNs via different SAN switches
• The IOP/IOA adapter pairs in the IBM i I/O towers which connect to the same set of LUNs should ideally be in different expansion towers, located on different HSL or 12X loops wherever possible

When an IBM i system IPLs, it discovers all paths to the disk. The first path discovered will be the preferred path for I/O. If multiple IBM i LPARs share the same DS8000 or DS6000 Host Adapters, each system may discover the same initial path to the disk. To avoid contention on SAN switch and HA ports, it is essential that you implement LUN masking in the SAN: specify a different range of ports and HAs for each LPAR, to ensure that activity is balanced across all available paths. You can do LUN masking either in the DS8000, using the volume groups construct, or by explicitly mapping IBM i IOAs to DS8000 HAs in the SAN switch.


When assigning LUNs to HAs, the recommendation is to isolate IBM i LUNs on their own HA where possible. We recommend that you spread activity across the available HAs; since there is typically little skew in an IBM i workload, this is usually not difficult. We don't allow multiple IBM i HBAs to see multiple HAs, because in this case all IBM i HBAs could establish paths through the same HA, which would result in unbalanced I/O traffic between the DS8000 Host Adapters.

Host Attachments

IBM i I/O adapters should be defined as FC-AL connections for direct-attached connections (without a switch). For all switched connections use SCSI-FCP, which is the default for DS8000.

In a pre-IBM i 6.1 multipath environment, it is unlikely that any single host attachment will be the bottleneck in an IBM i DS8000 configuration. It is common to combine multiple IBM i FC attachments onto a single DS8000 FC attachment through a SAN. In this case it is important to ensure that you do not over-configure the DS8000 attachment; Disk Magic can be used to model it. The normal range is to combine 2-4 IBM i IOAs onto a single DS8000 host attachment.

IBM i Fibre adapters should be placed in accordance with the card placement guidelines in the Redpaper "PCI, PCI-X, PCI-X DDR and PCIe Placement Rules for IBM i Models", available for download from http://www.redbooks.ibm.com/abstracts/redp4011.html?Open. A previous, now obsolete, version of this paper covers releases of code prior to V5R2: "PCI Card Placement Rules for the IBM eServer iSeries Server, OS/400 Version 5 Release 2, September 2003", available at http://www.redbooks.ibm.com/abstracts/redp3638.html?Open.

You are encouraged to consider these additional guidelines:

For 0588, 5088, 5094, 5096, 5294 and 5296 style I/O towers, it is recommended to install no more than 1 Fibre adapter per multi-adapter bridge (MAB), whether the attachment is for disk or tape. If the MAB is dedicated to disk attachment, then 2 adapters may be considered.

Balance the Fibre adapters evenly across the HSL and 12X loops. Always place both the IOP and IOA in 64-bit card slots.

If you are not using multipath, you may find optimum performance is achieved by limiting the LUNs on each IBM i FC card to 20-22.

When spreading activity across the host attachments, you need to make sure that in a multipath configuration alternate paths are provided to each server of the DS8000.


Software

It is essential that you ensure that you have all up to date software levels installed There are fixes that provide performance enhancements correct performance reporting and support for new functions As always call the support center before installation to verify that you are current with fixes for the hardware that you are installing It is also important to maintain current software levels to make sure that you get the benefit from new fixes that are developed

When updating storage subsystem LIC it is also important to check whether there are any server software updates required Details of supported configurations and software levels are provided by the System Storage Interoperation Center httpwww-03ibmcomsystemssupportstorageconfigssicdisplayesssearchwithoutjswssstart_over=yes

- 18 ndashCopyright IBM CorporationSeptember 29th 2011httpwww-03ibmcomsupporttechdocsatsmastrnsf Document TD103095

Hints and tips for implementing DS8000 in a IBM i environment

Performance Monitoring

Once your storage subsystem is installed it is essential that you continue to monitor the performance of the subsystem IBM i Performance Tools reports provide information on IO rates and on response times to the server This allows you to track trends in increased workload and changes in response time You should review these trends to ensure that your storage subsystem continues to meet your performance and capacity requirements Make sure that you are current on fixes to ensure that Performance Tools reports are reporting your external storage correctly

IBM i 71 adds a new category to IBM i Collection Services EXTSTG provides new collection performance metrics from DS8000 This function requires DS8000 R4 or later firmware Data can be presented in graphs using iDoctor today and will be incorporated into Performance Data Investigator (PDI) in a future release

If you have multiple servers attached to a storage subsystem particularly if you have other platforms attached in addition to IBM i it is essential that you have a performance tool that enables you to monitor the performance from the storage subsystem perspective

IBM TPC for Disk provides a comprehensive tool for managing the performance of DS8000s You should collect data from all attached storage subsystems in 15 minute intervals in the event of a performance problem IBM will ask for this data ndash without it resolution of any problem may be prolonged

Copy Services Considerations

Copy Services is an optional feature of the IBM Systems Storage DS8000 It brings powerful data copying and mirroring technologies to open systems environments previously available only for mainframe storage

The following Copy Services functions are available on the DS8000 and are fully supported on IBM i_ Metro Mirror (previously known as synchronous PPRC)_ Global Mirror (previously known as asynchronous PPRC)_ FlashCopy including Space Efficient Flash Copy (SEFL)

- 19 ndashCopyright IBM CorporationSeptember 29th 2011httpwww-03ibmcomsupporttechdocsatsmastrnsf Document TD103095

Hints and tips for implementing DS8000 in a IBM i environment

Customers may use Copy Services on the entire disk space or on individual IASPs Metro Mirror and Global Mirror provide business continuity and disaster recovery while FlashCopy helps to minimize the backup window on a production system

Space Efficient FlashCopy is designed for temporary copies Copy duration should generally not last longer than 24 hours unless the source and target volumes have little write activity FlashCopy SE is optimized for use cases where a small percentage of the source volume is updated during the life of the relationship If much more than 20 of the source is expected to change there may be tradeoffs in terms of performance versus space efficiency In this case standard FlashCopy may be considered as a good alternative Since background copy would update the entire target it would not make much sense and is not permitted with Flashcopy SE Likewise establishing a FlashCopy SE relationship just prior to running applications that make widespread changes to the source volumes (eg database reorgs formats full volume restores from tape etc) is not advisable For the latest recommendations on using FlashCopy SE please refer to this document IBMers httpw3-03ibmcomsupporttechdocsatsmastrnsfWebIndexFLASH10617Business Partners httppartnersboulderibmcomsrcatsmastrnsfWebIndexFLASH10617

Consulting services are available from IBM STG Lab Services to assist in the planning and implementation of DS8000 Copy Services in an IBM i environment.


The accompanying figure summarized the DS8000 Copy Services functions:

• FlashCopy: for backups and snapshots (copy and no-copy options, and space efficient FlashCopy)
• Metro Mirror (synchronous): for local availability
• Global Copy (Extended Distance): for data migration only
• Global Mirror (asynchronous, using Consistency Groups): for DR

Metro Mirror, Global Copy, and Global Mirror are forms of Peer-to-Peer Remote Copy (continuous copy).


http://www-03.ibm.com/systems/services/labservices/platforms/labservices_i.html

Further References

For further detailed information on implementing DS6000 and DS8000 in an IBM i environment, refer to the following Redbooks, which can be downloaded from www.redbooks.ibm.com:

IBM System Storage Copy Services on System i A Guide to Planning and Implementation (SG24-7103)

iSeries and IBM TotalStorage A guide to implementing external disk on IBM eServer i5 (SG24-7120)

High Availability on IBM i including PowerHA on i and 61 HA enhancements (SG24-7405)

Acknowledgements

Thanks to Nancy Roper and Jana Jamsek from the Advanced Technical Support Storage group, Sue Baker and Eric Hess from Advanced Technical Support Power Systems, and Selwyn Dickey and Tim Klubertanz from STG Lab Services for their input to and review of this document.




• Dynamic changes to media preference are supported, which enables dynamic data movement:
– ASP Balancer: based on read I/O count statistics for each 1 MB extent; migrates "hot" extents from HDDs to SSDs and "cold" extents from SSDs to HDDs.
– UDFS Media Preference (new with IBM i 7.1): new 'Unit' parameter on the CRTUDFS command.

• DS8000 Easy Tier (agnostic to IBM i and DB2):
– IBM System Storage Easy Tier works with DS8000 software to locate and migrate hot data onto DS8000 SSDs; works with any IBM i version.

• Advantages of using Easy Tier Automatic Mode:
– Designed to be easy: the user is not required to make many decisions or go through an extensive implementation process.
– Efficient use of SSD capacity: Easy Tier moves 1-gigabyte data extents between storage tiers.
– Intelligence: Easy Tier learns about the workload over a period of time (24 hours); as workload patterns change, it finds newly active ("hot") extents and exchanges them with extents residing on SSDs that may have "cooled off".
– Negligible performance impact: Easy Tier moves data gradually to avoid contention with host I/O activity; the overhead associated with Easy Tier management is nearly undetectable, and there is no need for storage administrators to schedule when migrations occur.

There are benefits to implementing Easy Tier even if you plan to manage SSDs from the IBM i operating system. These benefits are documented in this white paper:

http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/FLASH10755

DS8000 Logical Configuration

The following chart shows the logical configuration constructs for the DS8000.



The numbering of the ranks depends on the order in which the ranks were created, and in turn on the order in which the RAID arrays were created on the array sites. The GUI interface for configuring the DS8000 can automatically create RAID arrays and ranks in the same step, and it balances the creation of RAID arrays across DA pairs. If you use the DS CLI to configure the DS8000, we recommend that you also create RAID ranks on the array sites in a consistent manner; this produces a configuration that is more easily managed when you come to create the extent pools and balance them between the servers and the DA pairs.

The serial number reported by the DS6000 or DS8000 to the IBM i contains the LUN number. In the example below, the serial number of disk unit DD019 is 30-1001000, which is LUN 01 in LSS 10. In the DS6000 or DS8000 this is reported as LUN 1001.
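The mapping from a reported serial number back to the DS8000 LSS and LUN can be sketched as follows. This is a hypothetical helper written for illustration (not part of any IBM tool); it simply slices the digits as in the DD019 example above.

```python
def parse_ibmi_serial(serial: str) -> tuple:
    """Split an IBM i disk serial, e.g. '30-1001000', into the DS8000
    LSS and LUN numbers. The four characters after the dash are the
    volume ID: two for the LSS, two for the LUN within that LSS."""
    volume_id = serial.split("-")[1][:4]      # e.g. '1001'
    return volume_id[:2], volume_id[2:]       # ('10', '01')

lss, lun = parse_ibmi_serial("30-1001000")
print(f"LUN {lun} in LSS {lss}")              # LUN 01 in LSS 10
```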



RAID Arrays

The DS8000 allows the choice of RAID 5, RAID 10, and RAID 6. RAID 6 is supported on the DS8000 at LIC levels R4 and above with DA feature codes 3041, 3051, and 3061.

Disk Magic allows you to model different RAID options to find the configuration that best fits your performance and availability requirements. If you are considering a configuration of 'short-stroked' RAID 5 arrays (under-using the capacity of a RAID 5 array for performance reasons), we recommend that you consider RAID 10; the benefit in this case is that there is no requirement to 'fence' spare capacity to maintain performance.

Extent Pools

The Extent Pool construct provides flexibility in data placement as well as ease of management. To optimize performance, it is important to balance workload activity across Extent Pools assigned to server0 and Extent Pools assigned to server1. Typically this means assigning an equal number of Ranks to the Extent Pools on each server, and an equal amount of workload activity to all Ranks.

We recommend assigning even-numbered ranks to even-numbered extent pools; this ensures balance between server 0 and server 1. It requires that you configured the ranks in order on the array sites: array site 1 becomes array 1 and rank 1, and so on.
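That balancing rule can be sketched as follows. The pool names and the even/odd convention (even pools owned by server 0, odd by server 1) follow the text above; the function itself is illustrative, not DS CLI output.

```python
def balance_ranks(num_ranks: int) -> dict:
    """Place even-numbered ranks in an even extent pool (owned by
    server 0) and odd-numbered ranks in an odd pool (server 1), so the
    rank count stays balanced between the two DS8000 servers."""
    pools = {"P0 (server0)": [], "P1 (server1)": []}
    for rank in range(num_ranks):
        key = "P0 (server0)" if rank % 2 == 0 else "P1 (server1)"
        pools[key].append(rank)
    return pools

pools = balance_ranks(8)
print(pools)   # ranks 0,2,4,6 on server 0 and ranks 1,3,5,7 on server 1
```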

The use of multi-rank extent pools allows you to define LUNs larger than the size of a single rank. At DS8000 levels prior to Licensed Internal Code



(LIC) Release 3, a LUN is not 'striped' across all the ranks in the extent pool; the only time a LUN spans multiple ranks is when it does not fit in the original rank. Therefore a LUN will usually use no more than 6 or 7 disk arms. When allocating multiple LUNs into a multi-rank extent pool, each LUN is allocated on the rank with the most available free space; this results in a 'round robin' style of allocation and places LUNs onto the ranks in a roughly even fashion, assuming that the LUNs and ranks are the same size.
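The pre-Release-3 "most free space" placement can be sketched like this. The numbers are made up for illustration and extent-level details are simplified; the point is how equal-size LUNs over equal-size ranks degenerate into a round-robin spread.

```python
def allocate_luns(rank_free_gb, lun_sizes_gb):
    """Place each LUN wholly on the rank with the most free space,
    mimicking the pre-Release-3 allocation described above."""
    free = list(rank_free_gb)
    placement = []
    for size in lun_sizes_gb:
        rank = free.index(max(free))   # rank with most free space wins
        free[rank] -= size
        placement.append(rank)
    return placement

# Six equal LUNs over three equal ranks spread evenly: [0, 1, 2, 0, 1, 2]
print(allocate_luns([386, 386, 386], [70] * 6))
```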

DS8000 Licensed Internal Code (LIC) Release 3 introduced a new allocation algorithm, Storage Pool Striping (SPS). This allows finer-granularity striping across all the ranks in an extent pool and provides substantial performance benefits for some workloads.

For IBM i attached subsystems, we recommend using multi-rank extent pools in combination with Storage Pool Striping (rotate extents). Dedicating ranks or extent pools to a single workload provides more predictable performance, but may cost more in terms of disk capacity to achieve the desired level of performance.

Defining multiple ranks in an Extent Pool also provides efficiency in usable space. You can use Capacity Magic to estimate the usable capacity on the ranks for your chosen LUN size. For IBM i workloads, if you have a requirement to isolate workloads, you will need to define two Extent Pools (one for each server) for each workload.

Whichever configuration option you prefer, discuss it with your IBM representative or Business Partner, as our performance modeling tool Disk Magic needs to accurately reflect the configuration that you are planning. If you model a solution where all the disks are shared by all the workloads and then decide to isolate workloads, you may need more disks to achieve the same performance levels.

Data layout

Selecting an appropriate data layout strategy depends on your primary objectives.

Spreading workloads across all components maximizes the utilization of the hardware. This includes spreading workloads across all the available Host Adapters and Ranks. However, whenever resources are shared it is possible that performance problems will arise due to contention on those resources.

To protect critical workloads, you should isolate them, minimizing the chance that non-critical workloads can impact their performance.

The greater the granularity of the resource, the more it can be shared. For example, there is only one cache per processor complex, so its use must be shared, although DS8000 intelligent cache management prevents one workload from dominating the cache. In



contrast, there are frequently hundreds of DDMs, so workloads can easily be isolated on different DDMs.

To spread a workload across ranks, you need to balance its I/Os across all the available ranks. SPS achieves this when you use multi-rank extent pools.

Isolation of workloads is most easily accomplished where each ASP or LPAR has its own extent pool pair. This ensures that you can place data where you intend. I/O activity should be balanced between the two servers (controllers) of the DS8000; this is achieved by balancing between odd and even extent pools and making sure that the number of ranks is balanced between them.

Make sure that you isolate critical workloads. We strongly recommend placing only IBM i LUNs on any rank (rather than mixing with non-IBM i LUNs). This is for performance management reasons; it will not in itself provide improved performance. If you mix production and development workloads on ranks, make sure that the customer understands which actions may impact production performance, for example adding LUNs to ASPs.

iASPs

When designing a DS8000 layout for an iASP configuration, you have the option to model the iASP LUNs on isolated Extent Pools. You may achieve more cost-effective performance by putting iASP and SYSBAS LUNs onto the same shared ranks and extent pools, for example when the SYSBAS activity would otherwise drive a requirement for more disks to maintain performance if the ranks were dedicated to SYSBAS. Remember when using Disk Magic to model your iASP configuration that you may need smaller LUNs for the SYSBAS requirement.

LUN Size

LUNs must be defined in specific sizes that emulate IBM i devices. The size of the LUNs defined is typically related to the wait time component of the response time: if there are insufficient LUNs, wait time typically increases. The sizing process determines the correct number of LUNs required to provide the required capacity while meeting performance objectives.

The number of LUNs drives the requirement for more FC adapters on the IBM i because of its addressing restrictions. Remember that each path to a LUN counts toward the maximum addressable LUNs on each IBM i IOA. For example, if you have 64 LUNs and would like 2 paths to each LUN, this requires 4 IOAs for addressability on releases prior to IBM i 6.1.
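The arithmetic behind that example can be sketched as follows. The 32-device default matches the per-IOA addressing limit on IOP-based adapters before IBM i 6.1; the helper name is ours, for illustration only.

```python
import math

def ioas_required(num_luns: int, paths_per_lun: int,
                  max_devices_per_ioa: int = 32) -> int:
    """Every path to a LUN consumes one addressable device on an IOA,
    so the adapter count is total paths divided by the per-IOA limit,
    rounded up."""
    return math.ceil(num_luns * paths_per_lun / max_devices_per_ioa)

print(ioas_required(64, 2))       # 4 IOAs, as in the example above
print(ioas_required(64, 2, 64))   # 2, if the limit were 64 devices per adapter
```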

Disk Magic can be used to model the number of LUNs required. Disk Magic does not always accurately predict the effective capacity of the ranks, depending on the DDM size



selected and the number of spares assigned. The IBM tool Capacity Magic can be used to verify capacity and space utilization plans.

IBM i 6.1 and higher enables the use of larger LUNs while maintaining performance. You can typically start modeling based on a 70 GB LUN size. A smaller number of larger LUNs reduces the number of I/O ports required on both the IBM i and the DS8000. Remember that in an iASP environment you may exploit larger LUNs in the iASPs, but SYSBAS may require more, smaller LUNs to maintain performance.
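A first-pass LUN count for a given capacity is simple division; 70 GB is only the suggested modeling start point above, and the result should always be validated with Disk Magic and Capacity Magic. The helper below is an illustrative sketch, not an IBM tool.

```python
import math

def starting_lun_count(required_gb: float, lun_size_gb: float = 70.0) -> int:
    """Minimum number of LUNs of the chosen size that cover the
    required capacity. Performance modeling may push the count higher."""
    return math.ceil(required_gb / lun_size_gb)

print(starting_lun_count(2000))   # 29 LUNs of ~70 GB for 2 TB
```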

Multipath

Multipath provides greater resiliency for SAN-attached storage. In combination with RAID 5, RAID 6, or RAID 10 protection, DS8000 multipath protects the data paths and the data itself without requiring additional LUNs. However, additional I/O adapters and changes to the SAN fabric configuration may be required.

The IBM i supports up to 8 paths to each LUN. In addition to the availability benefits, lab performance testing has shown that 2 or 3 paths provide performance improvements compared to a single path. Typically, 2 paths to a LUN is the ideal balance of price and performance. The Disk Magic tool supports multipathing over 2 paths.

You might want to consider more than 2 paths for workloads with high wait time or where high I/O rates are expected to the LUNs, for example SSD-backed LUNs.

It is important to plan for multipath so that the two or more paths to the same set of LUNs use different connection elements, such as DS6000 and DS8000 host adapters, SAN switches, IBM i I/O towers, and HSL loops. Good planning for multipath includes:

• Connections to the same set of LUNs via different DS host cards in different I/O enclosures on the DS8000.
• Connections to the same set of LUNs via different SAN switches.
• The IOP/IOA adapter pairs in the IBM i I/O towers that connect to the same set of LUNs should ideally be in different expansion towers, located on different HSL or 12X loops wherever possible.

When an IBM i system IPLs, it discovers all paths to the disk, and the first path discovered becomes the preferred path for I/O. If multiple IBM i LPARs share the same DS8000 or DS6000 host adapters, each system may discover the same initial path to the disk. To avoid contention on SAN switch and HA ports, it is essential that you implement LUN masking in the SAN: specify a different range of ports and HAs for each LPAR to ensure that activity is balanced across all available paths. You can do LUN masking either in the DS8000, using the volume group construct, or by explicitly mapping IBM i IOAs to DS8000 HAs in the SAN switch.



When assigning LUNs to HAs, the recommendation is to isolate IBM i LUNs on their own HAs where possible. We recommend that you spread activity across the available HAs; since there is typically little skew in an IBM i workload, this is usually not difficult. We do not recommend letting multiple IBM i HBAs see multiple HAs, because in that case all the IBM i HBAs could establish paths through the same HA, resulting in unbalanced I/O traffic between the DS8000 Host Adapters.

Host Attachments

IBM i I/O adapters should be defined as FC-AL connections for direct-attached connections (without a switch). For all switched connections use SCSI-FCP, which is the default for the DS8000.

In a pre-IBM i 6.1 multipath environment, it is unlikely that any single host attachment is the bottleneck in an IBM i DS8000 configuration. It is common to combine multiple IBM i FC attachments onto a single DS8000 FC attachment through a SAN. In this case it is important to ensure that you do not over-configure the DS8000 attachment; Disk Magic can be used to model it. The normal range is to combine 2-4 IBM i IOAs onto a single DS8000 host attachment.

IBM i Fibre adapters should be placed in accordance with the card placement guidelines in the Redpaper "PCI, PCI-X, PCI-X DDR, and PCIe Placement Rules for IBM System i Models", available for download from http://www.redbooks.ibm.com/abstracts/redp4011.html?Open. A previous, obsolete version of this paper is available for releases of code prior to V5R2: "PCI Card Placement Rules for the IBM eServer iSeries Server, OS/400 Version 5 Release 2, September 2003", located on the web at http://www.redbooks.ibm.com/abstracts/redp3638.html?Open.

You are encouraged to consider these additional guidelines:

• For 0588, 5088, 5094, 5096, 5294, and 5296 style I/O towers, it is recommended to install no more than one Fibre adapter per multi-adapter bridge (MAB), whether the attachment is for disk or tape. If the MAB is dedicated to disk attachment, then two adapters may be considered.
• Balance the Fibre adapters evenly across the HSL and 12X loops.
• Always place both the IOP and IOA in 64-bit card slots.
• If you are not using multipath, you may find optimum performance is achieved by limiting the LUNs on each IBM i FC card to 20-22.

When spreading activity across the host attachments, you need to make sure that in a multipath configuration alternate paths are provided to each server of the DS8000.



Multipath connectivity is provided starting with i5/OS V5R3 and is recommended when connecting to the DS8000, for availability and concurrent maintenance.

Connections on the IBM i should be to the 2787 or 5760 cards (5760 cards require V5R3 or higher and an 800, 810, 825, 870, 890, or i5 CPU). We recommend using the first two 2787 cards in each node in a tower, and ensuring that cards are balanced across all towers and HSL rings.

IOPless Host Attachments

IBM i 6.1, together with POWER6 and the new Smart IOAs, introduces IOP-less IOAs for disk and tape. The Smart IOAs are:

– 5735: 2-Port 8Gb Fibre Smart IOA, PCIe
– 5749: 2-Port 4Gb Fibre Smart IOA, PCI-X DDR2, IBM i OS only
– 5774: 2-Port 4Gb Fibre Smart IOA, PCIe, IBM i OS, Linux, and AIX

These new cards provide significant performance advantages and do not require IOPs, saving cost and slots. The maximum number of supported addresses is increased from 32 to 64 for each port. Typically you would configure 2 paths to each LUN for availability.

IBM i 6.1 or higher with POWER6 or POWER7 and the IOP-less adapters can support up to 64 LUNs on each port; however, given the move to configure larger LUNs, for most workloads you should limit the total LUNs on a card to 64 (32 on each port). For workloads with a low I/O rate you may be able to support more than 32 LUNs on each port.

For IBM i 6.1 or higher with POWER6 or POWER7 configurations and the new IOP-less IOAs, you should plan on a 1:1 ratio between the IBM i IOA ports and the DS8000 I/O ports. For the highest-performance configurations, where host attachments are stressed (not likely in an IBM i production workload), you should plan to use only 2 ports of the DS8000 4Gb 4-port HA card. The DS8700 and DS8800 have both 4-port and 8-port 8Gb Host Adapter cards: for maximum performance select the 4-port card; for increased connectivity select the 8-port card. The increased performance capabilities of these new cards can be modeled using Disk Magic.

Card placement guidelines are as follows:

• 6 Smart IOAs per 12X loop
• 4 Smart IOAs per HSL-2 loop



The IOP-less adapters support the DS8000 and DS6000 (not ESS). The same adapters may be used for disk and tape, as well as Boot from SAN; however, it is strongly recommended not to share IOAs between disk and tape.

Maximum resource names in IBM i

The System i Handbook and the System Builder document the maximum quantities of internal disk and external LUNs. Some of these limits are not really the maximum quantity of LUNs as we normally think of them, but the maximum quantity that will be seen at the hardware level. For example:

Fibre card 1 will have the 1st path to 32 LUNs. Fibre card 2 will have the 2nd path to the same 32 LUNs. The IBM i microcode and operating system see this as 64 resource names, and this count of 64 is what you use when determining whether you are approaching the maximum supported number for the i5 model.
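The resource-name arithmetic works out as a simple product; the sketch below just encodes the counting rule from the example (it is not an IBM API).

```python
def resource_name_count(luns: int, paths: int) -> int:
    """IBM i creates one resource name per path to each LUN, so a
    multipath group of 32 LUNs seen through two Fibre cards produces
    64 resource names. Compare this count, not the LUN count, against
    the model's documented maximum."""
    return luns * paths

print(resource_name_count(32, 2))   # 64
```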

Logical Configuration

When defining IBM i LUNs, you can define them as protected unless you want to use i5/OS mirroring, in which case you need to ensure that you define the LUNs as unprotected when you create them. If you are using Boot from SAN at a release prior to IBM i 6.1, you will have to use mirroring to protect the Load Source; in this case you must define the Load Source LUNs as unprotected.

If you are planning to use copy services, either for high availability or just for migrations, it is important to note that the source and target in a copy services pair must have the same attributes. Once a LUN is defined with the protection attribute, this cannot be changed without deleting the LUN and re-defining it, and deleting a LUN deletes all the data on it.

A host adapter port is identified to the DS through its World Wide Port Name (WWPN). A set of host ports can be associated into a port group and managed together. This port group is referred to as a host attachment in the GUI and as a host connection in the DS CLI. A host attachment can be associated with a volume group to define which LUNs will be assigned to that host adapter.

A volume group is a group of logical volumes (LUNs) that are attached to a host adapter. The type of volume group used with open-system hosts determines how the logical volume number is converted to the host-addressable LUN_ID on the Fibre Channel SCSI interface.



When associating a host attachment with a volume group, the host attachment contains attributes that define the logical block size and the address discovery method used by that host adapter. These attributes must be consistent with the type of volume group assigned to that host attachment. i5/OS LUNs are accessed by the host in 520-byte blocks. Of these 520 bytes, 8 are used by the operating system, so each block has 512 usable bytes, as in LUNs for other open systems. For an i5/OS host attachment, therefore, 520 is the correct block size to define, and the correct address discovery method is Report LUN. The correct block size and address discovery for IBM i are generated by specifying IBM i attachments when creating volume groups and host attachments.
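The usable-capacity arithmetic for 520-byte blocks can be illustrated as follows (a sketch of the numbers stated above, nothing more):

```python
def usable_bytes(total_blocks: int) -> int:
    """Each IBM i block is 520 bytes on disk; 8 bytes are operating
    system overhead, leaving 512 data bytes per block, the same usable
    payload as other open-system LUNs."""
    return total_blocks * (520 - 8)

# A million 520-byte blocks carry 512,000,000 usable bytes
print(usable_bytes(1_000_000))
```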

The LUN ID on the DS8000 is a combination of the LSS number and an incremented LUN number.

Adding LUNs to ASP

Adding a LUN to an ASP generates I/O activity on the rank as the LUN is formatted. If production work shares the same rank, you may see a performance impact. For this reason, it is recommended that you schedule adding LUNs to ASPs outside peak intervals.

Boot from SAN

IOP 2847 plus the appropriate Fibre IOA allows you to place an i5/OS load source on a Fibre Channel-attached ESS Model 800, DS6000, or DS8000. The LUNs are attached using features 2766, 2787, or 5760, and apart from the Load Source you can have another 31 LUNs on the same attachment. The Load Source device does not support multipath at V5R4, but other LUNs on the 2847 can be multipath devices. To provide redundancy for the Load Source, you can use two 2847 IOPs and i5/OS mirroring (with the Load Source LUN defined as unprotected). Starting with IBM i 6.1, the LUN containing the load source may participate in a multipath configuration.

Software

It is essential that you have up-to-date software levels installed: there are fixes that provide performance enhancements, correct performance reporting, and support for new functions. As always, call the support center before installation to verify that you are current with fixes for the hardware that you are installing. It is also important to maintain current software levels so that you benefit from new fixes as they are developed.

When updating storage subsystem LIC, it is also important to check whether any server software updates are required. Details of supported configurations and software levels are provided by the System Storage Interoperation Center: http://www-03.ibm.com/systems/support/storage/config/ssic/displayesssearchwithoutjs.wss?start_over=yes



Performance Monitoring

Once your storage subsystem is installed, it is essential that you continue to monitor its performance. IBM i Performance Tools reports provide information on I/O rates and on response times to the server, allowing you to track trends in workload growth and changes in response time. You should review these trends to ensure that your storage subsystem continues to meet your performance and capacity requirements. Make sure that you are current on fixes so that Performance Tools reports your external storage correctly.

IBM i 7.1 adds a new category, EXTSTG, to IBM i Collection Services, which provides new performance metrics collected from the DS8000. This function requires DS8000 R4 or later firmware. The data can be presented in graphs using iDoctor today and will be incorporated into Performance Data Investigator (PDI) in a future release.

If you have multiple servers attached to a storage subsystem, particularly if other platforms are attached in addition to IBM i, it is essential that you have a performance tool that lets you monitor performance from the storage subsystem perspective.

IBM TPC for Disk provides a comprehensive tool for managing the performance of DS8000s. You should collect data from all attached storage subsystems in 15-minute intervals. In the event of a performance problem IBM will ask for this data; without it, resolution of any problem may be prolonged.

Copy Services Considerations

Copy Services is an optional feature of the IBM Systems Storage DS8000 It brings powerful data copying and mirroring technologies to open systems environments previously available only for mainframe storage

The following Copy Services functions are available on the DS8000 and are fully supported on IBM i_ Metro Mirror (previously known as synchronous PPRC)_ Global Mirror (previously known as asynchronous PPRC)_ FlashCopy including Space Efficient Flash Copy (SEFL)

- 19 ndashCopyright IBM CorporationSeptember 29th 2011httpwww-03ibmcomsupporttechdocsatsmastrnsf Document TD103095

Hints and tips for implementing DS8000 in a IBM i environment

Customers may use Copy Services on the entire disk space or on individual IASPs Metro Mirror and Global Mirror provide business continuity and disaster recovery while FlashCopy helps to minimize the backup window on a production system

Space Efficient FlashCopy is designed for temporary copies Copy duration should generally not last longer than 24 hours unless the source and target volumes have little write activity FlashCopy SE is optimized for use cases where a small percentage of the source volume is updated during the life of the relationship If much more than 20 of the source is expected to change there may be tradeoffs in terms of performance versus space efficiency In this case standard FlashCopy may be considered as a good alternative Since background copy would update the entire target it would not make much sense and is not permitted with Flashcopy SE Likewise establishing a FlashCopy SE relationship just prior to running applications that make widespread changes to the source volumes (eg database reorgs formats full volume restores from tape etc) is not advisable For the latest recommendations on using FlashCopy SE please refer to this document IBMers httpw3-03ibmcomsupporttechdocsatsmastrnsfWebIndexFLASH10617Business Partners httppartnersboulderibmcomsrcatsmastrnsfWebIndexFLASH10617

Consulting services are available from the IBM STG Lab Services to assist in the planning and implementation of DS8000 Copy Services in an IBM i environment

- 20 ndashCopyright IBM CorporationSeptember 29th 2011httpwww-03ibmcomsupporttechdocsatsmastrnsf Document TD103095

FlashCopy

For backups and snapshots (copy no-copy options and space efficient FlashCopy)

Metro Mirror (Synchronous)

For local availability

Global Copy ( Extended Distance)

For data migration only

Peer

To

Peer

Rem

ote

Copy

(c

ontin

uous

cop

y)

Global Mirror (Asynchronous)

For DR

Consistency Group

Hints and tips for implementing DS8000 in a IBM i environment

httpwww-03ibmcomsystemsserviceslabservicesplatformslabservices_ihtml

Further References

For further detailed information on implementing DS6000 and DS8000 in an IBM i environment refer to the following redbooks These can be downloaded from wwwredbooksibmcom

IBM System Storage Copy Services on System i A Guide to Planning and Implementation (SG24-7103)

iSeries and IBM TotalStorage A guide to implementing external disk on IBM eServer i5 (SG24-7120)

High Availability on IBM i including PowerHA on i and 61 HA enhancements (SG24-7405)

Acknowledgements

Thanks to Nancy Roper and Jana Jamsek from the Advanced Technical Support Storage group Sue Baker and Eric Hess from the Advanced Technical Support Power Systems and Selwyn Dickey and Tim Klubertanz from STG Lab Services for their input to and review of this document

- 21 ndashCopyright IBM CorporationSeptember 29th 2011httpwww-03ibmcomsupporttechdocsatsmastrnsf Document TD103095

Page 10: IBM - Hints and Tips for implementing DS6000 and … · Web viewTitle Hints and Tips for implementing DS6000 and DS8000 in an iSeries environment Author IBM_USER Last modified by

Hints and tips for implementing DS8000 in a IBM i environment

The numbering of the ranks depends on the order in which the ranks were created, and in turn on the order in which the RAID arrays were created on the array sites. The GUI interface for configuring the DS8000 can automatically create RAID arrays and ranks in the same step, and balances the creation of RAID arrays across DA pairs. If you use the DS CLI to configure the DS8000, we recommend that you also create the ranks on the array sites in a consistent manner; this creates a configuration that is more easily managed when you come to create the extent pools and balance them between the servers and the DA pairs.

The serial number reported by the DS6000 or DS8000 to the IBM i contains the LUN number. In the example below, the serial number of disk unit DD019 is 30-1001000, which is LUN 01 in LSS 10. On the DS6000 or DS8000 this is reported as LUN 1001.
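The decoding described above can be sketched in a few lines. This is illustrative only: the helper name is invented, and the assumption that the LSS and LUN occupy the first four digits after the dash is based solely on the 30-1001000 example.

```python
def decode_i5_serial(serial: str):
    """Decode an IBM i disk unit serial number reported for a DS8000/DS6000
    LUN into its LSS and LUN numbers.

    Assumed form (from the 30-1001000 example in the text): the digits after
    the dash begin with two LSS digits followed by two LUN digits.
    """
    _, volume_id = serial.split("-")
    lss = volume_id[0:2]   # logical subsystem number
    lun = volume_id[2:4]   # LUN number within the LSS
    return lss, lun

lss, lun = decode_i5_serial("30-1001000")
# lss == "10", lun == "01"; the DS8000 reports this volume as LUN 1001
```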

- 10 – Copyright IBM Corporation, September 29th 2011. http://www-03.ibm.com/support/techdocs/atsmastr.nsf Document # TD103095

Hints and tips for implementing DS8000 in an IBM i environment

RAID Arrays

The DS8000 allows the choice of RAID 5, RAID 10 and RAID 6. RAID 6 is supported on DS8000 at LIC levels R4 and above, with DA feature codes 3041, 3051 and 3061.

Disk Magic allows you to model different RAID options to find the configuration that best fits your performance and availability requirements. If you are considering a configuration of ‘short-stroked’ RAID 5 arrays (under-using the capacity of a RAID 5 array for performance reasons), we recommend that you consider RAID 10. The benefit in this case is that there is no requirement to ‘fence’ spare capacity to maintain performance.

Extent Pools

The Extent Pool construct provides flexibility in data placement as well as ease of management. To optimize performance, it is important to balance workload activity across Extent Pools assigned to server 0 and Extent Pools assigned to server 1. Typically this means assigning an equal number of Ranks to the Extent Pools on each server, and an equal amount of workload activity to all Ranks.

We recommend assigning even-numbered ranks to even-numbered extent pools; this ensures balance between server 0 and server 1. This requires that you configured the ranks in order on the array sites: array site 1 becomes array 1 and rank 1, and so on.
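As a rough illustration of the even/odd balancing recommendation (the pool names P0 and P1 are hypothetical; a real configuration would use DS8000 extent pool IDs):

```python
def assign_ranks_to_pools(rank_ids):
    """Illustrative sketch: place even-numbered ranks in an even extent pool
    (owned by server 0) and odd-numbered ranks in an odd extent pool
    (owned by server 1), per the balancing recommendation in the text."""
    pools = {"P0": [], "P1": []}   # hypothetical even/odd pool names
    for rank in rank_ids:
        pools["P0" if rank % 2 == 0 else "P1"].append(rank)
    return pools

# Eight ranks split evenly between the two servers:
assign_ranks_to_pools(range(8))
```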

The use of multi-rank extent pools allows you to define LUNs larger than the size of a single rank. At DS8000 licensed internal code (LIC) levels prior to Release 3, a LUN is not ‘striped’ across all the ranks in the extent pool – the only time a LUN spans multiple ranks is when it did not fit on the original rank. Therefore a LUN will usually use no more than 6 or 7 disk arms. When allocating multiple LUNs into a multi-rank extent pool, each LUN is allocated on the rank with the most available free space – this results in a ‘round robin’ style of allocation and places LUNs onto the ranks in a roughly even fashion, assuming that the LUNs and ranks are the same size.

DS8000 Licensed Internal Code (LIC) Release 3 introduced a new allocation algorithm, Storage Pool Striping (SPS). This allows finer-granularity striping across all the ranks in an extent pool, and provides substantial performance benefits for some workloads.

For IBM i attached subsystems we recommend using multi-rank extent pools in combination with storage pool striping (rotate extents). Dedicating ranks or extent pools to a single workload will provide more predictable performance, but may cost more in terms of disk capacity to provide the desired level of performance.

Defining multiple ranks in an Extent Pool also provides efficiency in usable space. You can use Capacity Magic to estimate the usable capacity on the ranks for your chosen LUN size. For IBM i workloads, if you have a requirement to isolate workloads, you will need to define two Extent Pools (one for each server) for each workload.

Whichever configuration option you prefer, discuss it with your IBM representative or Business Partner, as our performance modeling tool, Disk Magic, needs to accurately reflect the configuration that you are planning. If you model a solution where all the disks are shared by all the workloads, then decide to isolate workloads, you may need more disks to achieve the same performance levels.

Data layout

Selecting an appropriate data layout strategy depends on your primary objectives.

Spreading workloads across all components maximizes the utilization of the hardware. This includes spreading workloads across all the available Host Adapters and Ranks. However, whenever resources are shared, performance problems may arise due to contention on those resources.

To protect critical workloads, you should isolate them, minimizing the chance that non-critical workloads can impact their performance.

The greater the granularity of a resource, the more easily it can be shared. For example, there is only one cache per processor complex, so its use must be shared, although DS8000 intelligent cache management prevents one workload from dominating the cache. In contrast, there are frequently hundreds of DDMs, so workloads can easily be isolated on different DDMs.

To spread a workload across ranks, you need to balance its I/Os across all the available ranks. SPS will achieve this when you use multi-rank extent pools.

Isolation of workloads is most easily accomplished where each ASP or LPAR has its own extent pool pair. This ensures that you can place data where you intend. I/O activity should be balanced between the two servers or controllers on the DS8000. This is achieved by balancing between odd and even extent pools, and by making sure that the number of ranks is balanced between odd and even extent pools.

Make sure that you isolate critical workloads. We strongly recommend placing only IBM i LUNs on any given rank (rather than mixing with non-IBM i workloads); this is for performance management reasons – it will not in itself provide improved performance. If you mix production and development workloads on ranks, make sure that the customer understands which actions may impact production performance, for example adding LUNs to ASPs.

iASPs

When designing a DS8000 layout for an iASP configuration, you have the option to model the iASP LUNs on isolated Extent Pools. You may achieve more cost-effective performance by putting iASP and SYSBAS LUNs onto the same shared ranks and extent pools, for example when the SYSBAS activity would otherwise drive a requirement for more disks to maintain performance if the ranks were dedicated to SYSBAS. Remember, when using Disk Magic to model your iASP configuration, that you may need smaller LUNs for the SYSBAS requirement.

LUN Size

LUNs must be defined in specific sizes that emulate IBM i devices. The size of the LUNs defined is typically related to the wait time component of the response time: if there are insufficient LUNs, wait time typically increases. The sizing process determines the correct number of LUNs required to provide the required capacity while meeting performance objectives.

The number of LUNs drives the requirement for more FC adapters on the IBM i, due to the addressing restrictions of IBM i. Remember that each path to a LUN counts towards the maximum addressable LUNs on each IBM i IOA. For example, if you have 64 LUNs and would like 2 paths to each LUN, this will require 4 IOAs for addressability on releases prior to IBM i 6.1.
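The addressability arithmetic above can be expressed as a small helper. This is a sketch only; the default of 32 addresses per IOA applies to the IOP-based adapters discussed here (IOP-less adapters raise the limit, as covered later).

```python
import math

def ioas_required(luns: int, paths_per_lun: int, max_addr_per_ioa: int = 32) -> int:
    """Each path to each LUN consumes one of the addressable-LUN slots on an
    IOA, so the total address count (luns * paths) must be spread over
    enough IOAs."""
    return math.ceil(luns * paths_per_lun / max_addr_per_ioa)

ioas_required(64, 2)  # the example in the text: 64 LUNs, 2 paths -> 4 IOAs
```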

Disk Magic can be used to model the number of LUNs required. Disk Magic does not always accurately predict the effective capacity of the ranks, depending on the DDM size selected and the number of spares assigned. The IBM tool Capacity Magic can be used to verify capacity and space utilization plans.

IBM i 6.1 and higher enables the use of larger LUNs while maintaining performance. You can typically start modeling based on a 70GB LUN size. A smaller number of larger LUNs will reduce the number of I/O ports required on both the IBM i and the DS8000. Remember that in an iASP environment you may exploit larger LUNs in the iASPs, but SYSBAS may require more, smaller LUNs to maintain performance.
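As a starting point for modeling, the raw capacity-driven LUN count is simple arithmetic. This is a sketch only: Disk Magic should drive the final number, since the wait-time considerations above often demand more LUNs than capacity alone does.

```python
import math

def luns_for_capacity(required_gb: float, lun_size_gb: float = 70) -> int:
    """Minimum LUN count to cover a raw capacity requirement at a given LUN
    size (70GB is the suggested modeling starting point in the text).
    Performance modeling may increase this number."""
    return math.ceil(required_gb / lun_size_gb)

luns_for_capacity(2000)  # e.g. ~2TB at 70GB LUNs -> 29 LUNs
```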

Multipath

Multipath provides greater resiliency for SAN attached storage. In combination with RAID 5, RAID 6 or RAID 10 protection, DS8000 multipath provides protection of the data paths and the data itself without requiring additional LUNs. However, additional I/O adapters and changes to the SAN fabric configuration may be required.

The IBM i supports up to 8 paths to each LUN. In addition to the availability benefits, lab performance testing has shown that 2 or 3 paths provide performance improvements when compared to a single path. Typically, 2 paths to a LUN is the ideal balance of price and performance. The Disk Magic tool supports multipathing over 2 paths.

You might want to consider more than 2 paths for workloads where there is high wait time, or where high I/O rates are expected to the LUNs, for example SSD-backed LUNs.

It is important to plan for multipath so that the two or more paths to the same set of LUNs use different connection elements, such as DS6000 and DS8000 host adapters, SAN switches, IBM i I/O towers and HSL loops. Good planning for multipath includes:

Connections to the same set of LUNs via different DS host cards, in different I/O enclosures on the DS8000.

Connections to the same set of LUNs via different SAN switches.

The IOP/IOA adapter pairs in the IBM i I/O towers which connect to the same set of LUNs should ideally be in different expansion towers, located on different HSL or 12X loops wherever possible.

When an IBM i system IPLs, it discovers all paths to the disk. The first path discovered will be the preferred path for I/O. If multiple IBM i LPARs share the same DS8000 or DS6000 Host Adapters, each system may discover the same initial path to the disk. To avoid contention on SAN switch and HA ports, it is essential that you implement LUN masking in the SAN – specify a different range of ports and HAs for each LPAR to ensure that activity is balanced across all available paths. You can do LUN masking either in the DS8000, using the volume group construct, or by explicitly mapping IBM i IOAs to DS8000 HAs in the SAN switch.


When assigning LUNs to HAs, the recommendation is to isolate IBM i LUNs on their own HAs where possible. We recommend that you spread activity across the available HAs; since there is typically little skew in an IBM i workload, this is usually not difficult. We do not allow multiple IBM i HBAs to see multiple HAs, because in this case all of the IBM i HBAs could establish paths through the same HA, which would result in unbalanced I/O traffic between the DS8000 Host Adapters.

Host Attachments

IBM i I/O adapters should be defined as FC-AL connections for direct attached connections (without a switch). For all switched connections, use SCSI-FCP, which is the default for DS8000.

In a pre-IBM i 6.1 multipath environment, it is unlikely that any single host attachment is the bottleneck in an IBM i DS8000 configuration. It is common to combine multiple IBM i FC attachments onto a single DS8000 FC attachment through a SAN. In this case it is important to ensure that you do not over-configure the DS8000 attachment; Disk Magic can be used to model the DS8000 attachment. The normal range is to combine 2-4 IBM i IOAs onto a single DS8000 host attachment.

IBM i fibre adapters should be placed in accordance with the card placement guidelines found in the Redpaper “PCI, PCI-X, PCI-X DDR and PCIe Placement Rules for IBM System i Models”, available for download from http://www.redbooks.ibm.com/abstracts/redp4011.html?Open. A previous, now obsolete, version of this paper is available for releases of code prior to V5R2: “PCI Card Placement Rules for the IBM eServer iSeries Server, OS/400 Version 5 Release 2, September 2003”, located on the web at http://www.redbooks.ibm.com/abstracts/redp3638.html?Open.

You are encouraged to consider these additional guidelines:

For 0588, 5088, 5094, 5096, 5294 and 5296 style I/O towers, it is recommended to install no more than 1 fibre adapter per multi-adapter bridge (MAB), whether the attachment is for disk or tape. If the MAB is dedicated to disk attachment, then 2 adapters may be considered.

Balance the fibre adapters evenly across the HSL and 12X loops. Always place both the IOP and IOA in 64-bit card slots.

If you are not using multipath, you may find optimum performance is achieved by limiting the LUNs on each IBM i FC card to 20-22.

When spreading activity across the host attachments, you need to make sure that, in a multipath configuration, alternate paths are provided to each server of the DS8000.


Multipath connectivity is provided starting with i5/OS V5R3, and is recommended when connecting to the DS8000, for availability and concurrent maintenance.

Connections on the IBM i should be to the 2787 or 5760 cards (5760 cards require V5R3 or higher, and an 800, 810, 825, 870, 890 or i5 CPU). We recommend using the first two 2787 cards in each node in a tower, and ensuring that cards are balanced across all towers and HSL rings.

IOP-less Host Attachments

IBM i 6.1, together with POWER6 and the new Smart IOAs, introduces IOP-less IOAs for disk and tape. The Smart IOAs are:

– 5735 – 2-port 8Gb Fibre Smart IOA, PCIe
– 5749 – 2-port 4Gb Fibre Smart IOA, PCI-X DDR2, IBM i OS only
– 5774 – 2-port 4Gb Fibre Smart IOA, PCIe, IBM i OS, Linux and AIX

These new cards provide significant performance advantages and do not require IOPs, thus saving cost and slots. The maximum number of supported addresses is increased from 32 to 64 for each port. Typically you would configure 2 paths to each LUN for availability.

IBM i 6.1 or higher with POWER6 or POWER7 and the IOP-less adapters can support up to 64 LUNs on each port; however, with the move to configuring larger LUNs, for most workloads you should limit the total LUNs on a card to 64 (32 on each port). For workloads with a low I/O rate you may be able to support more than 32 LUNs on each port.
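The per-port sizing rule above can be captured in a small check. This is an illustrative sketch only; the function name and messages are invented, and the limits are taken directly from the text (64 LUNs per port supported, 32 recommended unless the I/O rate is low).

```python
def check_iopless_port(luns_on_port: int, io_rate_low: bool = False) -> str:
    """Sanity-check a planned LUN count on one IOP-less (Smart IOA) FC port
    against the limits described in the text."""
    if luns_on_port > 64:
        return "unsupported: over the 64-LUN port limit"
    if luns_on_port > 32 and not io_rate_low:
        return "over the recommended 32-LUN limit for busy workloads"
    return "ok"

check_iopless_port(40)                    # over the recommended limit
check_iopless_port(40, io_rate_low=True)  # acceptable for low-I/O workloads
```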

For IBM i 6.1 or higher with POWER6 or POWER7 configurations and the new IOP-less IOAs, you should plan on a 1:1 ratio between the IBM i IOA ports and the DS8000 I/O ports. For the highest performance configurations, where host attachments are stressed (not likely in an IBM i production workload), you should plan to use only 2 ports of the DS8000 4Gb 4-port HA card. The DS8700 and DS8800 have both 4-port and 8-port 8Gb Host Adapter cards: for maximum performance select the 4-port card; for increased connectivity select the 8-port card. The increased performance capabilities of these new cards can be modeled using Disk Magic.

Card placement guidelines are as follows:

• 6 Smart IOAs per 12X loop
• 4 Smart IOAs per HSL-2 loop


The IOP-less adapters support DS8000 and DS6000 (not ESS). The same adapters may be used for disk and tape, as well as for Boot from SAN; however, it is strongly recommended not to share IOAs between disk and tape.

Maximum resource names in IBM i

The System i Handbook and the System Builder document the maximum quantity of internal disk and external LUNs. For some of these maximum limits, this is not really the maximum quantity of LUNs as we normally think of them, but the maximum quantity that will be seen at the hardware level. For example:

Fibre card 1 will have the 1st path to 32 LUNs.
Fibre card 2 will have the 2nd path to the same 32 LUNs.
IBM i microcode and the operating system will see this as 64 resource names, and this count of 64 is what you use when trying to determine whether you are approaching the maximum supported number for the i5 model.
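The resource-name arithmetic in this example is simply LUNs multiplied by paths (a sketch only; the helper name is invented):

```python
def resource_names(luns: int, paths_per_lun: int) -> int:
    """Each path to each LUN appears as a separate hardware resource name,
    so two fibre cards each reaching the same 32 LUNs count as 64 resources
    against the model's maximum."""
    return luns * paths_per_lun

resource_names(32, 2)  # the worked example: 64 resource names
```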

Logical Configuration

When defining IBM i LUNs, you can define them as protected, unless you want to use i5/OS mirroring, in which case you need to ensure that you define the LUNs as unprotected when you create them. If you are using Boot from SAN at a release prior to IBM i 6.1, you will have to use mirroring to protect the Load Source; in this case you must define the Load Source LUNs as unprotected.

If you are planning to use Copy Services, either for high availability or just for migrations, it is important to note that the source and target in a Copy Services pair must have the same attributes. Once a LUN is defined with the protection attribute, this cannot be changed without deleting the LUN and re-defining it. Deleting a LUN deletes all the data on the LUN.

A host adapter port is identified to the DS through its World Wide Port Name (WWPN). A set of host ports can be associated into a port group and managed together. This port group is referred to as a host attachment in the GUI, and as a host connection in the DS CLI. A host attachment can be associated with a volume group to define which LUNs will be assigned to that host adapter.

A volume group is a group of logical volumes (LUNs) that are attached to a host adapter. The type of volume group used with open system hosts determines how the logical volume number is converted to the host-addressable LUN_ID on the Fibre Channel SCSI interface.


When associating a host attachment with a volume group, the host attachment contains attributes that define the logical block size and the address discovery method used by that host adapter. These attributes must be consistent with the type of volume group assigned to that host attachment. i5/OS LUNs are accessed by the host in 520-byte blocks. Of these 520 bytes, 8 are used by the operating system, so each block contains 512 usable bytes, as in LUNs for other open systems. So for an i5/OS host attachment, 520 is the correct block size to define. The correct address discovery method is Report LUN. The correct block size and address discovery method for IBM i are generated by specifying IBM i attachment when creating volume groups and host attachments.

The LUN ID on the DS8000 is a combination of the LSS number and an incrementing LUN number.

Adding LUNs to ASP

Adding a LUN to an ASP generates I/O activity on the rank as the LUN is formatted. If there is production work sharing the same rank, you may see a performance impact. For this reason it is recommended that you schedule adding LUNs to ASPs outside peak intervals.

Boot from SAN

IOP 2847, plus the appropriate Fibre Channel IOA, allows you to place an i5/OS load source on a Fibre Channel attached ESS Model 800, DS6000 or DS8000. The LUNs are attached using features 2766, 2787 or 5760, and apart from the Load Source you can have another 31 LUNs on the same attachment. The Load Source device does not support multipath at V5R4, but other LUNs on the 2847 can be multipath devices. To provide redundancy for the Load Source, you can have two 2847 IOPs and use i5/OS mirroring (with the Load Source LUN defined as unprotected). Starting with IBM i 6.1, the LUN containing the load source may participate in a multipath configuration.

Software

It is essential to ensure that you have up-to-date software levels installed. There are fixes that provide performance enhancements, correct performance reporting, and support for new functions. As always, call the support center before installation to verify that you are current with fixes for the hardware that you are installing. It is also important to maintain current software levels, to make sure that you benefit from new fixes as they are developed.

When updating storage subsystem LIC, it is also important to check whether any server software updates are required. Details of supported configurations and software levels are provided by the System Storage Interoperation Center: http://www-03.ibm.com/systems/support/storage/config/ssic/displayesssearchwithoutjs.wss?start_over=yes


Performance Monitoring

Once your storage subsystem is installed, it is essential that you continue to monitor its performance. IBM i Performance Tools reports provide information on I/O rates and on response times to the server. This allows you to track trends in increased workload and changes in response time. You should review these trends to ensure that your storage subsystem continues to meet your performance and capacity requirements. Make sure that you are current on fixes, to ensure that the Performance Tools reports reflect your external storage correctly.

IBM i 7.1 adds a new category, EXTSTG, to IBM i Collection Services, which collects new performance metrics from the DS8000. This function requires DS8000 R4 or later firmware. Data can be presented in graphs using iDoctor today, and will be incorporated into Performance Data Investigator (PDI) in a future release.

If you have multiple servers attached to a storage subsystem, particularly if you have other platforms attached in addition to IBM i, it is essential that you have a performance tool that enables you to monitor performance from the storage subsystem perspective.

IBM TPC for Disk provides a comprehensive tool for managing the performance of DS8000s. You should collect data from all attached storage subsystems at 15-minute intervals. In the event of a performance problem, IBM will ask for this data – without it, resolution of any problem may be prolonged.

Copy Services Considerations

Copy Services is an optional feature of the IBM System Storage DS8000. It brings powerful data copying and mirroring technologies, previously available only for mainframe storage, to open systems environments.

The following Copy Services functions are available on the DS8000 and are fully supported on IBM i:
– Metro Mirror (previously known as synchronous PPRC)
– Global Mirror (previously known as asynchronous PPRC)
– FlashCopy, including Space Efficient FlashCopy (SEFL)


Customers may use Copy Services on the entire disk space or on individual IASPs. Metro Mirror and Global Mirror provide business continuity and disaster recovery, while FlashCopy helps to minimize the backup window on a production system.

Space Efficient FlashCopy is designed for temporary copies. Copy duration should generally not exceed 24 hours, unless the source and target volumes have little write activity. FlashCopy SE is optimized for use cases where a small percentage of the source volume is updated during the life of the relationship. If much more than 20% of the source is expected to change, there may be tradeoffs between performance and space efficiency; in this case standard FlashCopy may be a good alternative. Since a background copy would update the entire target, it would not make much sense and is not permitted with FlashCopy SE. Likewise, establishing a FlashCopy SE relationship just prior to running applications that make widespread changes to the source volumes (e.g. database reorganizations, formats, full volume restores from tape, etc.) is not advisable. For the latest recommendations on using FlashCopy SE, please refer to this document: IBMers: http://w3-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/FLASH10617; Business Partners: http://partners.boulder.ibm.com/src/atsmastr.nsf/WebIndex/FLASH10617

Consulting services are available from IBM STG Lab Services to assist in the planning and implementation of DS8000 Copy Services in an IBM i environment.

Figure: DS8000 Copy Services functions
– FlashCopy: for backups and snapshots (copy, no-copy options and Space Efficient FlashCopy)
– Peer-to-Peer Remote Copy (continuous copy):
  – Metro Mirror (synchronous): for local availability
  – Global Copy (Extended Distance): for data migration only
  – Global Mirror (asynchronous, using Consistency Groups): for DR

http://www-03.ibm.com/systems/services/labservices/platforms/labservices_i.html

Further References

For further detailed information on implementing DS6000 and DS8000 in an IBM i environment, refer to the following Redbooks, which can be downloaded from www.redbooks.ibm.com:

IBM System Storage Copy Services on System i: A Guide to Planning and Implementation (SG24-7103)

iSeries and IBM TotalStorage: A Guide to Implementing External Disk on IBM eServer i5 (SG24-7120)

High Availability on IBM i, including PowerHA on i and 6.1 HA enhancements (SG24-7405)

Acknowledgements

Thanks to Nancy Roper and Jana Jamsek from the Advanced Technical Support Storage group, Sue Baker and Eric Hess from Advanced Technical Support Power Systems, and Selwyn Dickey and Tim Klubertanz from STG Lab Services, for their input to and review of this document.

- 21 ndashCopyright IBM CorporationSeptember 29th 2011httpwww-03ibmcomsupporttechdocsatsmastrnsf Document TD103095

Page 11: IBM - Hints and Tips for implementing DS6000 and … · Web viewTitle Hints and Tips for implementing DS6000 and DS8000 in an iSeries environment Author IBM_USER Last modified by

Hints and tips for implementing DS8000 in a IBM i environment

RAID Arrays

The DS8000 allows the choice of RAID 5 RAID 10 and RAID 6 RAID 6 is supported on DS8000 at LIC levels R4 and above and DA feature codes 3041 3051 and 3061

Disk Magic allows you to model different RAID options to find the configuration that best fits your performance and availability requirements If you are considering a configuration of lsquoshort-strokedrsquo RAID 5 arrays (under-using the capacity of a RAID 5 array for performance reasons) we recommend that you consider RAID 10 The benefit in this case is that there is no requirement to lsquofencersquo spare capacity to maintain performance

Extent Pools

The Extent Pool construct provides flexibility in data placement as well as ease of management To optimize performance it is important to balance workload activity across Extent Pools assigned to server0 and Extent Pools assigned to server1 Typically this means assigning an equal number of Ranks to Extent Pools assigned to server0 and Extent Pools assigned to server1 and an equal amount of workload activity to all Ranks

We recommend assigning even-numbered ranks to even-numbered extent pools; this ensures balance between server0 and server1. This requires that you configure the ranks in order on the array sites: array site 1 becomes array 1 and rank 1, and so on.
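As a sketch, the balanced layout described above might be configured through the DS CLI along these lines. The storage image ID, pool names and array numbering are placeholders, and the exact option set varies by DS CLI release, so treat this as an illustration rather than a definitive procedure:

```
dscli> mkextpool -dev IBM.2107-75ABC12 -rankgrp 0 -stgtype fb IBMi_P0
dscli> mkextpool -dev IBM.2107-75ABC12 -rankgrp 1 -stgtype fb IBMi_P1
dscli> mkrank -dev IBM.2107-75ABC12 -array A0 -stgtype fb -extpool P0
dscli> mkrank -dev IBM.2107-75ABC12 -array A1 -stgtype fb -extpool P1
```

Alternating the arrays between the even pool (rank group 0, server0) and the odd pool (rank group 1, server1) keeps the rank count, and therefore the workload, balanced between the two servers.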

The use of multi-rank extent pools allows you to define LUNs larger than the size of a single rank. At DS8000 licensed internal code levels prior to Licensed Internal Code (LIC) Release 3, a LUN is not 'striped' across all the ranks in the extent pool; the only time a LUN spans multiple ranks is when it does not fit in the original rank. Therefore a LUN will usually use no more than 6 or 7 disk arms. When allocating multiple LUNs into a multi-rank extent pool, each LUN is allocated on the rank with the most available free space; this results in a 'round robin' style of allocation and places LUNs onto the ranks in a roughly even fashion, assuming that the LUNs and ranks are the same size.
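The pre-R3 allocation behaviour described above can be sketched in a few lines of Python. This is an illustration of the described greedy algorithm, not DS8000 microcode:

```python
# Sketch: each new LUN goes to the rank with the most free space, which
# degenerates to a round-robin spread when LUNs and ranks are equal-sized.
def allocate(lun_sizes, rank_free):
    """Return the rank index chosen for each LUN in turn."""
    placements = []
    for size in lun_sizes:
        rank = max(range(len(rank_free)), key=lambda r: rank_free[r])
        rank_free[rank] -= size
        placements.append(rank)
    return placements

# Eight 70 GB LUNs over four equal 500 GB ranks spread round-robin style
print(allocate([70] * 8, [500.0] * 4))  # -> [0, 1, 2, 3, 0, 1, 2, 3]
```

With unequal LUN or rank sizes the spread is only approximately even, which matches the caveat in the text.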

DS8000 Licensed Internal Code (LIC) Release 3 introduced a new allocation algorithm, Storage Pool Striping (SPS). This allows a finer granularity of striping across all the ranks in an extent pool and provides substantial performance benefits for some workloads.

For IBM i attached subsystems, we recommend using multi-rank extent pools in combination with storage pool striping (rotate extents). Dedicating ranks or extent pools to a single workload will provide more predictable performance, but may cost more in terms of disk capacity to provide the desired level of performance.

Defining multiple ranks in an Extent Pool also provides efficiency in usable space. You can use Capacity Magic to estimate the usable capacity on the ranks for your chosen LUN size. For IBM i workloads, if you have a requirement to isolate workloads, you will need to define two Extent Pools (one for each server) for each workload.

Whichever configuration option you prefer, discuss it with your IBM representative or Business Partner, as our performance modeling tool Disk Magic needs to accurately reflect the configuration that you are planning. If you model a solution where all the disks are shared with all the workloads and then decide to isolate workloads, you may need more disks to achieve the same performance levels.

Data layout

Selecting an appropriate data layout strategy depends on your primary objectives.

Spreading workloads across all components maximizes the utilization of the hardware. This includes spreading workloads across all the available Host Adapters and Ranks. However, it is always possible when sharing resources that performance problems may arise due to contention on those resources.

To protect critical workloads, you should isolate them, minimizing the chance that non-critical workloads can impact the performance of critical workloads.

The greater the granularity of the resource, the more it can be shared. For example, there is only one cache per processor complex, so its use must be shared, although DS8000 intelligent cache management prevents one workload from dominating the cache. In contrast, there are frequently hundreds of DDMs, so workloads can easily be isolated on different DDMs.

To spread a workload across ranks, you need to balance I/Os for the workload across all the available ranks. SPS will achieve this when you use multi-rank extent pools.

Isolation of workloads is most easily accomplished where each ASP or LPAR has its own extent pool pair. This ensures that you can place data where you intend. I/O activity should be balanced between the two servers, or controllers, on the DS8000. This is achieved by balancing between odd and even extent pools and making sure that the number of ranks is balanced between odd and even extent pools.

Make sure that you isolate critical workloads. We strongly recommend placing only IBM i LUNs on any rank (rather than mixing them with non-IBM i LUNs). This is for performance management reasons; it will not in itself provide improved performance. If you mix production and development workloads on ranks, make sure that the customer understands which actions may impact production performance, for example adding LUNs to ASPs.

iASPs

When designing a DS8000 layout for an iASP configuration, you have the option to model the iASP LUNs on isolated Extent Pools. You may achieve more cost-effective performance by putting iASP and SYSBAS LUNs onto the same shared ranks and extent pools, for example when the SYSBAS activity would otherwise drive a requirement for more disks to maintain performance if the ranks were dedicated to SYSBAS. Remember when using Disk Magic to model your iASP configuration that you may need smaller LUNs for the SYSBAS requirement.

LUN Size

LUNs must be defined in specific sizes that emulate IBM i devices. The size of the LUNs defined is typically related to the wait time component of the response time: if there are insufficient LUNs, wait time typically increases. The sizing process determines the correct number of LUNs required to address the required capacity while meeting performance objectives.

The number of LUNs drives the requirement for more FC adapters on the IBM i, due to the addressing restrictions of IBM i. Remember that each path to a LUN counts towards the maximum addressable LUNs on each IBM i IOA. For example, if you have 64 LUNs and would like 2 paths to each LUN, this will require 4 IOAs for addressability on releases prior to IBM i 6.1.
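This addressability arithmetic can be sketched as follows. The 32-device limit used here applies to the IOP-based adapters discussed in this section; the IOP-less adapters covered later raise the per-port limit to 64:

```python
import math

# Each path to a LUN consumes one device address on an IOA, and a
# pre-IBM i 6.1 IOP-based FC IOA can address at most 32 devices.
def ioas_required(luns, paths_per_lun, max_devices_per_ioa=32):
    resources = luns * paths_per_lun  # every path appears as one device
    return math.ceil(resources / max_devices_per_ioa)

print(ioas_required(64, 2))  # the example from the text: -> 4 IOAs
```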

Disk Magic can be used to model the number of LUNs required. Disk Magic does not always accurately predict the effective capacity of the ranks, depending on the DDM size selected and the number of spares assigned. The IBM tool Capacity Magic can be used to verify capacity and space utilization plans.

IBM i 6.1 and higher enables the use of larger LUNs while maintaining performance. You can typically start modeling based on a 70 GB LUN size. A smaller number of larger LUNs will reduce the number of I/O ports required on both the IBM i and the DS8000. Remember that in an iASP environment you may exploit larger LUNs in the iASPs, but SYSBAS may require more, smaller LUNs to maintain performance.

Multipath

Multipath provides greater resiliency for SAN attached storage. In combination with RAID 5, RAID 6 or RAID 10 protection, DS8000 multipath provides protection of the data paths and the data itself without the requirement for additional LUNs. However, additional I/O adapters and changes to the SAN fabric configuration may be required.

IBM i supports up to 8 paths to each LUN. In addition to the availability considerations, lab performance testing has shown that 2 or 3 paths provide performance improvements when compared to a single path. Typically, 2 paths to a LUN is the ideal balance of price and performance. The Disk Magic tool supports multipathing over 2 paths.

You might want to consider more than 2 paths for workloads where there is high wait time, or where high I/O rates are expected to the LUNs, for example SSD-backed LUNs.

It is important to plan for multipath so that the two or more paths to the same set of LUNs use different elements of connection, such as DS6000 and DS8000 host adapters, SAN switches, IBM i I/O towers and HSL loops. Good planning for multipath includes:

- Connections to the same set of LUNs via different DS host cards in different I/O enclosures on the DS8000
- Connections to the same set of LUNs via different SAN switches
- The IOP/IOA adapter pairs in the IBM i I/O towers which connect to the same set of LUNs should ideally be in different expansion towers, located on different HSL or 12X loops wherever possible

When an IBM i system IPLs, it discovers all paths to the disk. The first path discovered will be the preferred path for I/O. If multiple IBM i LPARs are sharing the same DS8000 or DS6000 Host Adapters, each system may discover the same initial path to the disk. To avoid contention on SAN switch and HA ports, it is essential that you implement LUN masking in the SAN: specify a different range of ports and HAs for each LPAR to ensure that activity is balanced across all available paths. You can do LUN masking either in the DS8000, using the volume groups construct, or by explicitly mapping IBM i IOAs to DS8000 HAs in the SAN switch.


When assigning LUNs to HAs, the recommendation is to isolate IBM i LUNs on their own HAs where possible. We recommend that you spread activity across the available HAs; since there is typically little skew in an IBM i workload, this is usually not difficult. We do not allow multiple IBM i HBAs to see multiple HAs, because in this case all IBM i HBAs could establish paths through the same HA, which would result in unbalanced I/O traffic between the DS8000 Host Adapters.

Host Attachments

IBM i I/O adapters should be defined as FC-AL connections for direct attached connections (without a switch). For all switched connections, use SCSI-FCP, which is the default for DS8000.

In a pre-IBM i 6.1 multipath environment, it is unlikely that any single host attachment is the bottleneck in an IBM i DS8000 configuration. It is common to combine multiple IBM i FC attachments onto a single DS8000 FC attachment through a SAN. In this case, it is important to ensure that you do not over-configure the DS8000 attachment; Disk Magic can be used to model it. The normal range is to combine 2-4 IBM i IOAs onto a single DS8000 host attachment.

IBM i fibre adapters should be placed in accordance with the card placement guidelines that can be found in the Redpaper "PCI, PCI-X, PCI-X DDR and PCIe Placement Rules for IBM System i Models", available for download from http://www.redbooks.ibm.com/abstracts/redp4011.html?Open. A previous and obsolete version of this paper is available for releases of code prior to V5R2, "PCI Card Placement Rules for the IBM eServer iSeries Server, OS/400 Version 5 Release 2, September 2003", located on the web at http://www.redbooks.ibm.com/abstracts/redp3638.html?Open.

You are encouraged to consider these additional guidelines:

- For 0588, 5088, 5094, 5096, 5294 and 5296 style I/O towers, it is recommended to install no more than 1 fibre adapter per multi-adapter bridge (MAB), whether the attachment is for disk or tape. If the MAB is dedicated to disk attachment, then 2 adapters may be considered.

- Balance the fibre adapters evenly across the HSL and 12X loops. Always place both the IOP and IOA in 64-bit card slots.

- If you are not using multipath, you may find optimum performance is achieved by limiting the LUNs on each IBM i FC card to 20-22.

When spreading activity across the host attachments, you need to make sure that in a multipath configuration, alternate paths are provided to each server of the DS8000.


Multipath connectivity is provided starting with i5/OS V5R3 and is recommended when connecting to the DS8000, for availability and concurrent maintenance.

Connections on the IBM i should be to the 2787 or 5760 cards (5760 cards require V5R3 or higher, and they require an 800, 810, 825, 870, 890 or i5 CPU). We recommend using the first two 2787 cards in each node in a tower, and ensuring that cards are balanced across all towers and HSL rings.

IOPless Host Attachments

IBM i 6.1, together with POWER6 and the new Smart IOAs, introduces IOP-less IOAs for disk and tape. The Smart IOAs are:

- 5735: 2-port 8Gb Fibre Smart IOA, PCIe
- 5749: 2-port 4Gb Fibre Smart IOA, PCI-X DDR2, IBM i OS only
- 5774: 2-port 4Gb Fibre Smart IOA, PCIe; IBM i OS, Linux and AIX

These new cards provide significant performance advantages and do not require IOPs, thus saving cost and slots. The maximum number of supported addresses is increased from 32 to 64 for each port. Typically you would configure 2 paths to each LUN for availability.

IBM i 6.1 or higher with POWER6 or POWER7 and the IOP-less adapters can support up to 64 LUNs on each port; however, with the move to configure larger LUNs, for most workloads you should limit the total LUNs on a card to 64 (32 on each port). For workloads with a low I/O rate, you may be able to support more than 32 LUNs on each port.

For IBM i 6.1 or higher with POWER6 or POWER7 configurations and the new IOP-less IOAs, you should plan on a 1:1 ratio between the IBM i IOA ports and the DS8000 I/O ports. For the highest-performance configurations where host attachments are stressed (not likely in an IBM i production workload), you should plan to use only 2 ports of the DS8000 4Gb 4-port HA card. The DS8700 and DS8800 have both 4-port and 8-port 8Gb Host Adapter cards: for maximum performance, select the 4-port card; for increased connectivity, select the 8-port card. The increased performance capabilities of these new cards can be modeled using Disk Magic.

Card placement guidelines are as follows:

- 6 Smart IOAs per 12X loop
- 4 Smart IOAs per HSL-2 loop


The IOP-less adapters support DS8000 and DS6000 (not ESS). The same adapters may be used for disk and tape, as well as for Boot from SAN; however, it is strongly recommended not to share IOAs between disk and tape.

Maximum resource names in IBM i

The System i Handbook and the System Builder document the maximum quantity of internal disks and external LUNs. For some of these maximum limits, this is not really the maximum quantity of LUNs as we normally think of them, but the maximum quantity of device resources that will be seen at the hardware level. For example:

Fibre card 1 will have the 1st path to 32 LUNs.
Fibre card 2 will have the 2nd path to the same 32 LUNs.
IBM i microcode and the operating system will see this as 64 resource names, and this count of 64 is what you use when trying to determine whether you are approaching the maximum supported number for the i5 model.

Logical Configuration

When defining IBM i LUNs, you can define them as protected, unless you want to use i5/OS mirroring, in which case you need to ensure that you define the LUNs as unprotected when you create them. If you are using Boot from SAN at a release prior to IBM i 6.1, you will have to use mirroring to protect the Load Source; in this case you must define the Load Source LUNs as unprotected.

If you are planning to use Copy Services, either for high availability or just for migrations, it is important to note that the source and target in a Copy Services pair must have the same attributes. Once the LUN is defined with the protection attribute, this cannot be changed without deleting the LUN and re-defining it. Deleting a LUN deletes all the data on the LUN.

A host adapter port is identified to the DS through its World Wide Port Name (WWPN). A set of host ports can be associated into a port group and managed together. This port group is referred to as a Host attachment in the GUI and as a Host connection in the DS CLI. A host attachment can be associated with a volume group to define which LUNs will be assigned to that host adapter.

A volume group is a group of logical volumes (LUNs) that are attached to a host adapter. The type of volume group used with open system hosts determines how the logical volume number is converted to the host-addressable LUN_ID on the Fibre Channel SCSI interface.
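As an illustration of these constructs, a hedged DS CLI sketch follows. The device ID, volume range, WWPN and names are placeholders (mkvolgrp returns a volume group ID, shown here as V11), and the options should be verified against your DS CLI release:

```
dscli> mkvolgrp -dev IBM.2107-75ABC12 -type os400mask IBMi_VG1
dscli> chvolgrp -dev IBM.2107-75ABC12 -action add -volume 1000-101F V11
dscli> mkhostconnect -dev IBM.2107-75ABC12 -wwname 10000000C9123456 -hosttype iSeries -volgrp V11 IBMi_lpar1_fc0
```

The os400mask volume group type and the iSeries host type generate the IBM i specific attributes (block size and address discovery method) that the surrounding text describes.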


When associating a host attachment with a volume group, the host attachment contains attributes that define the logical block size and the address discovery method used by that host adapter. These attributes must be consistent with the type of volume group assigned to that host attachment. i5/OS LUNs are accessed by the host in 520-byte blocks. Of these 520 bytes, 8 are used by the operating system, so each block has 512 usable bytes, as in LUNs for other open systems. So for an i5/OS host attachment, 520 is the correct block size to define, and the correct address discovery method is Report LUN. The correct block size and address discovery method for IBM i are generated by specifying IBM i attachments when creating volume groups and host attachments.
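The 520-byte block arithmetic above can be illustrated as follows. This is a sketch of the ratio only, not an exact device capacity calculation:

```python
# Of each 520-byte block, 512 bytes hold data, so usable capacity is
# 512/520 of the raw formatted capacity.
def usable_bytes(raw_bytes, block_raw=520, block_usable=512):
    blocks = raw_bytes // block_raw   # whole blocks in the raw capacity
    return blocks * block_usable

print(usable_bytes(520 * 1000))  # 1000 blocks -> 512000 usable bytes
```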

The LUN ID on the DS8000 is a combination of the LSS number and an incremented LUN number.

Adding LUNs to ASP

Adding a LUN to an ASP generates I/O activity on the rank as the LUN is formatted. If there is production work sharing the same rank, you may see a performance impact. For this reason, it is recommended that you schedule adding LUNs to ASPs outside peak intervals.

Boot from SAN

IOP 2847 plus the appropriate Fibre IOA allows you to place an i5/OS load source on a Fibre Channel attached ESS Model 800, DS6000 or DS8000. The LUNs are attached using features 2766, 2787 or 5760, and apart from the Load Source you can have another 31 LUNs on the same attachment. The Load Source device does not support multipath at V5R4, but other LUNs on the 2847 can be multipath devices. To provide redundancy for the Load Source, you can use two 2847 IOPs and i5/OS mirroring (with the Load Source LUN defined as unprotected). Starting with IBM i 6.1, the LUN containing the load source may participate in a multipath configuration.

Software

It is essential that you have up-to-date software levels installed. There are fixes that provide performance enhancements, correct performance reporting, and support for new functions. As always, call the support center before installation to verify that you are current with fixes for the hardware that you are installing. It is also important to maintain current software levels to make sure that you get the benefit of new fixes as they are developed.

When updating storage subsystem LIC, it is also important to check whether any server software updates are required. Details of supported configurations and software levels are provided by the System Storage Interoperation Center: http://www-03.ibm.com/systems/support/storage/config/ssic/displayesssearchwithoutjs.wss?start_over=yes


Performance Monitoring

Once your storage subsystem is installed, it is essential that you continue to monitor its performance. IBM i Performance Tools reports provide information on I/O rates and on response times to the server. This allows you to track trends in increased workload and changes in response time. You should review these trends to ensure that your storage subsystem continues to meet your performance and capacity requirements. Make sure that you are current on fixes, so that Performance Tools reports your external storage correctly.

IBM i 7.1 adds a new category, EXTSTG, to IBM i Collection Services, which provides new performance metrics collected from the DS8000. This function requires DS8000 R4 or later firmware. Data can be presented in graphs using iDoctor today, and will be incorporated into Performance Data Investigator (PDI) in a future release.

If you have multiple servers attached to a storage subsystem, particularly if you have other platforms attached in addition to IBM i, it is essential that you have a performance tool that enables you to monitor performance from the storage subsystem perspective.

IBM TPC for Disk provides a comprehensive tool for managing the performance of DS8000s. You should collect data from all attached storage subsystems in 15-minute intervals. In the event of a performance problem, IBM will ask for this data; without it, resolution of any problem may be prolonged.

Copy Services Considerations

Copy Services is an optional feature of the IBM System Storage DS8000. It brings powerful data copying and mirroring technologies to open systems environments, previously available only for mainframe storage.

The following Copy Services functions are available on the DS8000 and are fully supported on IBM i:

- Metro Mirror (previously known as synchronous PPRC)
- Global Mirror (previously known as asynchronous PPRC)
- FlashCopy, including Space Efficient FlashCopy (FlashCopy SE)


Customers may use Copy Services on the entire disk space or on individual iASPs. Metro Mirror and Global Mirror provide business continuity and disaster recovery, while FlashCopy helps to minimize the backup window on a production system.

Space Efficient FlashCopy is designed for temporary copies. Copy duration should generally not last longer than 24 hours, unless the source and target volumes have little write activity. FlashCopy SE is optimized for use cases where a small percentage of the source volume is updated during the life of the relationship. If much more than 20% of the source is expected to change, there may be trade-offs in terms of performance versus space efficiency, and standard FlashCopy may be a good alternative. Since a background copy would update the entire target, it would not make much sense and is not permitted with FlashCopy SE. Likewise, establishing a FlashCopy SE relationship just prior to running applications that make widespread changes to the source volumes (e.g. database reorganizations, formats, full-volume restores from tape, etc.) is not advisable. For the latest recommendations on using FlashCopy SE, please refer to this document: IBMers: http://w3-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/FLASH10617; Business Partners: http://partners.boulder.ibm.com/src/atsmastr.nsf/WebIndex/FLASH10617
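A hedged DS CLI sketch of such a relationship follows. The device and volume IDs are placeholders, and the -tgtse option for space-efficient targets should be verified against your DS CLI release:

```
dscli> mkflash -dev IBM.2107-75ABC12 -nocp -tgtse 1000:1100
```

The -nocp option suppresses background copy, which, as noted above, is not permitted with FlashCopy SE in any case.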

Consulting services are available from IBM STG Lab Services to assist in the planning and implementation of DS8000 Copy Services in an IBM i environment:

http://www-03.ibm.com/systems/services/labservices/platforms/labservices_i.html

(Figure: overview of DS8000 Copy Services functions. FlashCopy for backups and snapshots (copy, no-copy and space-efficient options); Metro Mirror (synchronous) for local availability; Global Copy (extended distance) for data migration only; and Global Mirror (asynchronous), with Consistency Groups, for disaster recovery. The remote mirroring functions are forms of Peer-to-Peer Remote Copy, that is, continuous copy.)

Further References

For further detailed information on implementing DS6000 and DS8000 in an IBM i environment, refer to the following Redbooks, which can be downloaded from www.redbooks.ibm.com:

IBM System Storage Copy Services on System i: A Guide to Planning and Implementation (SG24-7103)

iSeries and IBM TotalStorage: A Guide to Implementing External Disk on IBM eServer i5 (SG24-7120)

High Availability on IBM i, including PowerHA on i and 6.1 HA enhancements (SG24-7405)

Acknowledgements

Thanks to Nancy Roper and Jana Jamsek from the Advanced Technical Support Storage group, Sue Baker and Eric Hess from Advanced Technical Support Power Systems, and Selwyn Dickey and Tim Klubertanz from STG Lab Services for their input to and review of this document.

Copyright IBM Corporation, September 29th 2011. http://www-03.ibm.com/support/techdocs/atsmastr.nsf, Document # TD103095

Page 12: IBM - Hints and Tips for implementing DS6000 and … · Web viewTitle Hints and Tips for implementing DS6000 and DS8000 in an iSeries environment Author IBM_USER Last modified by

Hints and tips for implementing DS8000 in a IBM i environment

(LIC) Release 3 any LUN will not be lsquostripedrsquo across all the ranks in the extent pool ndash the only time a LUN will span multiple ranks is when it did not fit in the original rank Therefore a LUN will usually use no more than 6 or 7 disk arms When allocating multiple LUNs into a multi-rank extent pool the LUNs will be allocated on the rank with the most available freespace ndash this results in a lsquoround robinrsquo style of allocation and will allocate LUNs onto the ranks in a roughly even fashion assuming that the LUNs and ranks are the same size

DS8000 Licensed Internal Code (LIC) Release 3 introduced a new allocation algorithm Storage Pool Striping (SPS) This allows a finer granularity striping across all the ranks in an extent pool and provides substantial performance benefits for some workloads

For IBM i attached subsystems we recommend using multi-rank extent pools in combination with storage pool striping (rotate extents) Dedicating ranks or extent pools to a single workload will provide more predicable performance but may cost more in terms of disk capacity to provide the desired level of performance

Defining multiple ranks in an Extent Pool also provides efficiency in usable space You can use Capacity Magic to estimate the usable capacity on the ranks for your chosen LUN size For IBM i workloads if you have a requirement to isolate workloads you will need to define two Extent Pools (one for each server) for each workload

Whichever configuration option you prefer discuss it with your IBM representative or Business Partner as our performance modeling tool Disk Magic needs to accurately reflect the configuration that you are planning If you model a solution where all the disks are shared with all the workloads then decide to isolate workloads you may need more disks to achieve the same performance levels

Data layout

Selecting an appropriate data layout strategy depends on your primary objectives

Spreading workloads across all components maximizes the utilization of the hardware components This includes spreading workloads across all the available Host Adapters and Ranks However it is always possible when sharing resources that performance problems may arise due to contention on these resources

To protect critical workloads you should isolate them minimizing the chance that non-critical workloads can impact the performance of critical workloads

The greater the granularity of the resource the more it can be shared For example there is only one cache per processor complex so its use must be shared although DS8000 intelligent cache management prevents one workload from dominating the cache In

- 12 ndashCopyright IBM CorporationSeptember 29th 2011httpwww-03ibmcomsupporttechdocsatsmastrnsf Document TD103095

Hints and tips for implementing DS8000 in a IBM i environment

contrast there are frequently hundreds of DDMs so workloads can easily be isolated on different DDMs

To spread a workload across ranks you need to balance IOs for any workload across all the available ranks SPS will achieve this when you use multi-rank extent pools

Isolation of workloads is most easily accomplished where each ASP or LPAR has itrsquos own extent pool pair This ensures that you can place data where you intend IO activity should be balanced between the two servers or controllers on the DS8000 This is achieved by balancing between odd and even extent pools and making sure that the number of ranks is balanced between odd and even extent pools

Make sure that you isolate critical workloads ndash We strongly recommend only IBM i LUNs on any rank (rather than mixed with non-IBM i) This is for performance management reasons - it will not provide improved performance If you mix production and development workloads on ranks make sure that the customer understands which actions may impact production performance for example adding LUNs to ASPs

iASPs

When designing a DS8000 layout for an iASP configuration you have the option to model the iASP LUNs on isolated Extent Pools You may achieve more cost-effective performance by putting iASP and SYSBAS LUNs onto the same shared ranks and extent pools for example when the SYSBAS activity would drive a requirement for more disks to maintain performance if the ranks were dedicated to SYSBAS Remember when using Disk Magic to model your iASP configuration that you may need smaller LUNs for the SYSBAS requirement

LUN Size

LUNs must be defined in specific sizes that emulate IBM i devices The size of the LUNs defined is typically related to the wait time component of the response time If there are insufficient LUNs wait time typically increases The sizing process determines the correct number of LUNs required to address the required capacity while meeting performance objectives

The number of LUNs drives the requirement for more FC adapters on the IBM i due to the addressing restrictions of IBM i Remember that each path to a LUN will count towards the maximum addressable LUNs on each IBM i IOA For example If you have 64 LUNs and would like 2 paths to each LUN this will require 4 IOAs on releases prior to IBM i 61 for addressability

Disk Magic can be used to model the number of LUNs required Disk Magic does not always accurately predict the effective capacity of the ranks depending on the DDM size

- 13 ndashCopyright IBM CorporationSeptember 29th 2011httpwww-03ibmcomsupporttechdocsatsmastrnsf Document TD103095

Hints and tips for implementing DS8000 in a IBM i environment

selected and the number of spares assigned The IBM tool Capacity Magic can be used to verify capacity and space utilization plans

IBM i 61 and higher enables the use of larger LUNs while maintaining performance You can typically start modeling based on a 70GB LUN size A smaller number of larger LUNs will reduce the number of IO ports required on both the IBM i and the DS8000 Remember that in an iASP environment you may exploit larger LUNs in the iASPs but SYSBAS may require more smaller LUNs to maintain performance

Multipath

Multipath provides greater resiliency for SAN attached storage With the combination of RAID5 RAID6 or RAID10 protection DS8000 multipath provides protection of the data paths and the data itself without the requirement of additional LUNs However additional IO adapters and changes to the SAN fabric configuration may be required

The IBM i supports up to 8 paths to each LUN In addition to the availability considerations lab performance testing has shown that 2 or 3 paths provide performance improvements when compared to a single path Typically 2 paths to a LUN is the ideal balance of price and performance The Disk Magic tool supports multipathing over 2 paths

You might want to consider more than 2 paths for workloads where there is high wait time or where high IO rates are expected to LUNs for example SSD-backed LUNs

It is important to plan for multipath so that the two or more paths to the same set of LUNs use different elements of connection such as DS6000 and DS8000 host adapters SAN switches IBM i IO towers and HSL loops Good planning for multipath includes

Connections to the same set of LUNs via different DS host cards in different IO enclosures on DS8000

Connections to the same set of LUNs via different SAN switches The IOPIOA adapter pairs in the IBM i IO tower which connects to the same set

of LUNs should ideally be in different expansion towers which are located on different HSL or 12x loops wherever possible

When an IBM i system IPLs it discovers all paths to the disk The first path discovered will be the preferred path for IO If multiple IBM i LPARs are sharing the same DS8000 or DS6000 Host Adapters each system may discover the same initial path to the disk To avoid contention on SAN switch and HA ports and it is essential that you implement LUN masking in the SAN ndash specify a different range or ports and HAs for each LPAR to ensure that activity is balanced across all available paths You can do LUN masking either in the DS8000 using the volume groups construct or you can explicitly map IBM i IOAs to DS8000 HAs in the SAN switch


When assigning LUNs to HAs, the recommendation is to isolate IBM i LUNs on their own HAs where possible. We recommend that you spread activity across the available HAs; since there is typically little skew in an IBM i workload, this is usually not difficult. Do not allow multiple IBM i HBAs to see multiple HAs, because in this case all IBM i HBAs could establish paths through the same HA, which results in unbalanced I/O traffic between the DS8000 host adapters.

Host Attachments

IBM i I/O adapters should be defined as FC-AL connections for direct-attached connections (without a switch). For all switched connections, use SCSI-FCP, which is the default for DS8000.

In a pre-IBM i 6.1 multipath environment, it is unlikely that any single host attachment is the bottleneck in an IBM i DS8000 configuration. It is common to combine multiple IBM i FC attachments onto a single DS8000 FC attachment through a SAN. In this case, it is important to ensure that you do not over-configure the DS8000 attachment; Disk Magic can be used to model it. The normal range is to combine 2-4 IBM i IOAs onto a single DS8000 host attachment.

IBM i fibre adapters should be placed in accordance with the card placement guidelines in the Redpaper "PCI, PCI-X, PCI-X DDR and PCIe Placement Rules for IBM System i Models", available for download from http://www.redbooks.ibm.com/abstracts/redp4011.html?Open. A previous, now obsolete version of this paper covers releases of code prior to V5R2: "PCI Card Placement Rules for the IBM eServer iSeries Server, OS/400 Version 5 Release 2, September 2003", at http://www.redbooks.ibm.com/abstracts/redp3638.html?Open.

You are encouraged to consider these additional guidelines:

For 0588, 5088, 5094, 5096, 5294, and 5296-style I/O towers, it is recommended to install no more than 1 fibre adapter per multi-adapter bridge (MAB), whether the attachment is for disk or tape. If the MAB is dedicated to disk attachment, then 2 adapters may be considered.

Balance the fibre adapters evenly across the HSL and 12X loops. Always place both the IOP and IOA in 64-bit card slots.

If you are not using multipath, you may find optimum performance is achieved by limiting the LUNs on each IBM i FC card to 20-22.

When spreading activity across the host attachments, make sure that in a multipath configuration alternate paths are provided to each server of the DS8000.


Multipath connectivity is provided starting with i5/OS V5R3 and is recommended when connecting to the DS8000, for availability and concurrent maintenance.

Connections on the IBM i should be to the 2787 or 5760 cards (5760 cards require V5R3 or higher, and an 800, 810, 825, 870, 890, or i5 CPU). We recommend using the first two 2787 card slots in each node in a tower and ensuring that cards are balanced across all towers and HSL rings.

IOP-less Host Attachments

IBM i 6.1, together with POWER6 and the new Smart IOAs, introduces IOP-less IOAs for disk and tape. The Smart IOAs are:

- 5735: 2-port 8Gb Fibre Smart IOA, PCIe
- 5749: 2-port 4Gb Fibre Smart IOA, PCI-X DDR2, IBM i OS only
- 5774: 2-port 4Gb Fibre Smart IOA, PCIe, IBM i OS, Linux, and AIX

These new cards provide significant performance advantages and do not require IOPs, saving cost and slots. The maximum number of supported addresses is increased from 32 to 64 for each port. Typically you would configure 2 paths to each LUN for availability.

IBM i 6.1 or higher with POWER6 or POWER7 and the IOP-less adapters can support up to 64 LUNs on each port; however, with the move to configure larger LUNs, for most workloads you should limit the total LUNs on a card to 64 (32 on each port). For workloads with a low I/O rate, you may be able to support more than 32 LUNs on each port.

For IBM i 6.1 or higher with POWER6 or POWER7 configurations and the new IOP-less IOAs, you should plan on a 1:1 ratio between the IBM i IOA ports and the DS8000 I/O ports. For the highest-performance configurations, where host attachments are stressed (not likely in an IBM i production workload), you should plan to use only 2 ports of the DS8000 4Gb 4-port HA card. The DS8700 and DS8800 have both 4-port and 8-port 8Gb host adapter cards: for maximum performance, select the 4-port card; for increased connectivity, select the 8-port card. The increased performance capabilities of these new cards can be modeled using Disk Magic.

Card placement guidelines are as follows:

• 6 Smart IOAs per 12X loop
• 4 Smart IOAs per HSL-2 loop


The IOP-less adapters support DS8000 and DS6000 (not ESS). The same adapters may be used for disk and tape, as well as Boot from SAN; however, it is strongly recommended not to share IOAs between disk and tape.

Maximum resource names in IBM i

The System i Handbook and the System Builder document the maximum quantities of internal disk and external LUNs. For some of these maximum limits, this is not really the maximum quantity of LUNs as we normally think of them, but the maximum quantity of resources that will be seen at the hardware level. For example:

Fibre card 1 will have the 1st path to 32 LUNs. Fibre card 2 will have the 2nd path to the same 32 LUNs. IBM i microcode and the operating system will see this as 64 resource names, and this count of 64 is what you use when determining whether you are approaching the maximum supported number for the i5 model.
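The counting rule above can be sketched in a few lines. This is our own illustrative helper, not from the document: each path to a LUN consumes one hardware resource name.

```python
def resource_names(luns: int, paths_per_lun: int) -> int:
    # Every path to every LUN appears as a separate resource name
    # at the hardware level.
    return luns * paths_per_lun

# The example from the text: 32 LUNs seen via two fibre cards (2 paths each)
print(resource_names(32, 2))  # 64 resource names count toward the model maximum
```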

Logical Configuration

When defining IBM i LUNs, you can define them as protected, unless you want to use i5/OS mirroring, in which case you need to ensure that you define the LUNs as unprotected when you create them. If you are using Boot from SAN at a release prior to IBM i 6.1, you will have to use mirroring to protect the Load Source; in this case you must define the Load Source LUNs as unprotected.
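The protected/unprotected choice maps to the OS/400 volume model specified when the LUN is created. The following DS CLI sketch is illustrative only; the device ID, extent pools, names, and volume ranges are placeholders, and the model codes should be verified against the DS CLI documentation for your release:

```shell
# Illustrative DS CLI sketch; IDs and ranges are placeholders.
# A05 is a 35.1 GB protected IBM i model; A85 is the unprotected
# equivalent, used when the LUN will be protected by i5/OS mirroring.
mkfbvol -dev IBM.2107-75ABCD1 -extpool P1 -os400 A05 -name PROD_#h 1000-100F
mkfbvol -dev IBM.2107-75ABCD1 -extpool P2 -os400 A85 -name LSRC_#h 1100-1101
```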

If you are planning to use Copy Services, either for high availability or just for migrations, it is important to note that the source and target in a Copy Services pair must have the same attributes. Once the LUN is defined with the protection attribute, this cannot be changed without deleting the LUN and re-defining it. Deleting a LUN deletes all the data on the LUN.

A host adapter port is identified to the DS through its World Wide Port Name (WWPN). A set of host ports can be associated into a port group and managed together. This port group is referred to as a host attachment in the GUI and as a host connection in the DS CLI. A host attachment can be associated with a volume group to define which LUNs will be assigned to that host adapter.

A volume group is a group of logical volumes (LUNs) that are attached to a host adapter. The type of volume group used with open-system hosts determines how the logical volume number is converted to the host-addressable LUN_ID on the Fibre Channel SCSI interface.
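The volume-group and host-connection flow described above might look like the following DS CLI sketch. The device ID, WWPN, volume range, and names are illustrative placeholders, not values from the original document:

```shell
# Create an IBM i volume group; the os400mask type sets the IBM i
# block size and address discovery behavior for the LUNs it contains.
mkvolgrp -dev IBM.2107-75ABCD1 -type os400mask -volume 1000-101F IBMI_VG1

# Create the host connection for one IBM i IOA port and bind it to
# the volume group (LUN masking in the DS8000).
mkhostconnect -dev IBM.2107-75ABCD1 -wwname 10000000C9123456 \
    -hosttype iSeries -volgrp V10 IBMI_LPAR1_P1
```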


When associating a host attachment with a volume group, the host attachment contains attributes that define the logical block size and the address discovery method used by that host adapter. These attributes must be consistent with the type of volume group assigned to that host attachment. i5/OS LUNs are accessed by the host in 520-byte blocks. Of these 520 bytes, 8 are used by the operating system, so each block has 512 usable bytes, as in LUNs for other open systems. So for an i5/OS host attachment, 520 is the correct block size to define, and the correct address discovery method is Report LUN. The correct block size and address discovery for IBM i are generated by specifying IBM i attachment types when creating volume groups and host attachments.
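The 520-byte block layout is simple arithmetic, shown here as an illustrative sketch (the names are ours, not from the document):

```python
BLOCK_BYTES = 520        # bytes per sector as seen by IBM i
OS_HEADER_BYTES = 8      # reserved by the operating system in each block
USABLE_BYTES = BLOCK_BYTES - OS_HEADER_BYTES  # 512 usable bytes, as on other open systems

# Fraction of raw block capacity available for data:
usable_ratio = USABLE_BYTES / BLOCK_BYTES
print(f"{usable_ratio:.4f}")
```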

The LUN ID on the DS8000 is a combination of the LSS number and an incremented LUN number.
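As a small illustrative sketch (our own helper, not from the document): in a four-hex-digit DS8000 volume ID, the first two digits identify the LSS and the last two the volume number within that LSS.

```python
def split_volume_id(volume_id: str) -> tuple[str, str]:
    # First two hex digits: LSS number; last two: volume number in that LSS.
    lss, lun = volume_id[:2], volume_id[2:]
    return lss, lun

print(split_volume_id("1A03"))  # → ('1A', '03')
```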

Adding LUNs to ASP

Adding a LUN to an ASP generates I/O activity on the rank as the LUN is formatted. If production work shares the same rank, you may see a performance impact. For this reason, it is recommended that you schedule adding LUNs to ASPs outside peak intervals.

Boot from SAN

IOP 2847 plus the appropriate Fibre IOA allows you to place an i5/OS load source on a Fibre Channel attached ESS Model 800, DS6000, or DS8000. The LUNs are attached using features 2766, 2787, or 5760; apart from the Load Source, you can have another 31 LUNs on the same attachment. The Load Source device does not support multipath at V5R4, but other LUNs on the 2847 can be multipath devices. To provide redundancy for the Load Source, you can have two 2847 IOPs and use i5/OS mirroring (with the Load Source LUN defined as unprotected). Starting with IBM i 6.1, the LUN containing the load source may participate in a multipath configuration.

Software

It is essential to ensure that you have up-to-date software levels installed. There are fixes that provide performance enhancements, correct performance reporting, and support for new functions. As always, call the support center before installation to verify that you are current with fixes for the hardware that you are installing. It is also important to maintain current software levels to make sure that you benefit from new fixes as they are developed.

When updating storage subsystem LIC, it is also important to check whether any server software updates are required. Details of supported configurations and software levels are provided by the System Storage Interoperation Center: http://www-03.ibm.com/systems/support/storage/config/ssic/displayesssearchwithoutjs.wss?start_over=yes


Performance Monitoring

Once your storage subsystem is installed, it is essential that you continue to monitor its performance. IBM i Performance Tools reports provide information on I/O rates and on response times to the server, allowing you to track trends in workload growth and changes in response time. You should review these trends to ensure that your storage subsystem continues to meet your performance and capacity requirements. Make sure that you are current on fixes so that Performance Tools reports your external storage correctly.

IBM i 7.1 adds a new category, EXTSTG, to IBM i Collection Services, which collects new performance metrics from the DS8000. This function requires DS8000 R4 or later firmware. Data can be presented in graphs using iDoctor today, and will be incorporated into Performance Data Investigator (PDI) in a future release.

If you have multiple servers attached to a storage subsystem, particularly if other platforms are attached in addition to IBM i, it is essential to have a performance tool that enables you to monitor performance from the storage subsystem perspective.

IBM TPC for Disk provides a comprehensive tool for managing the performance of DS8000s. You should collect data from all attached storage subsystems at 15-minute intervals. In the event of a performance problem, IBM will ask for this data; without it, resolution of any problem may be prolonged.

Copy Services Considerations

Copy Services is an optional feature of the IBM System Storage DS8000. It brings powerful data copying and mirroring technologies, previously available only for mainframe storage, to open systems environments.

The following Copy Services functions are available on the DS8000 and are fully supported on IBM i:
• Metro Mirror (previously known as synchronous PPRC)
• Global Mirror (previously known as asynchronous PPRC)
• FlashCopy, including Space Efficient FlashCopy (FlashCopy SE)


Customers may use Copy Services on the entire disk space or on individual IASPs. Metro Mirror and Global Mirror provide business continuity and disaster recovery, while FlashCopy helps to minimize the backup window on a production system.

Space Efficient FlashCopy is designed for temporary copies. Copy duration should generally not exceed 24 hours unless the source and target volumes have little write activity. FlashCopy SE is optimized for use cases where a small percentage of the source volume is updated during the life of the relationship. If much more than 20% of the source is expected to change, there may be tradeoffs between performance and space efficiency; in this case, standard FlashCopy may be a good alternative. Since a background copy would update the entire target, it would not make much sense and is not permitted with FlashCopy SE. Likewise, establishing a FlashCopy SE relationship just prior to running applications that make widespread changes to the source volumes (e.g. database reorganizations, formats, full volume restores from tape, etc.) is not advisable. For the latest recommendations on using FlashCopy SE, refer to this document. IBMers: http://w3-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/FLASH10617; Business Partners: http://partners.boulder.ibm.com/src/atsmastr.nsf/WebIndex/FLASH10617
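Because a FlashCopy SE repository only needs space for source tracks that change during the life of the relationship, a rough first-cut sizing follows directly from the expected change rate. This is our own illustrative sketch, not an IBM sizing formula; the headroom factor is an assumption:

```python
def repository_gb(source_gb: float, expected_change_rate: float,
                  headroom: float = 1.5) -> float:
    # Repository holds only changed data; headroom pads the estimate,
    # since an SE relationship that fills its repository fails.
    return source_gb * expected_change_rate * headroom

# 1 TB source with the ~20% change rate mentioned above:
print(repository_gb(1000, 0.20))  # → 300.0 GB
```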

Consulting services are available from IBM STG Lab Services to assist in the planning and implementation of DS8000 Copy Services in an IBM i environment.


[Figure: DS8000 Copy Services overview, Peer-to-Peer Remote Copy]
• FlashCopy: for backups and snapshots (copy, no-copy, and Space Efficient FlashCopy options)
• Metro Mirror (synchronous): for local availability
• Global Copy (Extended Distance, continuous copy): for data migration only
• Global Mirror (asynchronous): for DR, using Consistency Groups


http://www-03.ibm.com/systems/services/labservices/platforms/labservices_i.html

Further References

For further detailed information on implementing DS6000 and DS8000 in an IBM i environment, refer to the following Redbooks, which can be downloaded from www.redbooks.ibm.com:

• IBM System Storage Copy Services on System i: A Guide to Planning and Implementation (SG24-7103)
• iSeries and IBM TotalStorage: A Guide to Implementing External Disk on IBM eServer i5 (SG24-7120)
• High Availability on IBM i including PowerHA on i and 6.1 HA enhancements (SG24-7405)

Acknowledgements

Thanks to Nancy Roper and Jana Jamsek from the Advanced Technical Support Storage group, Sue Baker and Eric Hess from Advanced Technical Support Power Systems, and Selwyn Dickey and Tim Klubertanz from STG Lab Services for their input to and review of this document.


contrast, there are frequently hundreds of DDMs, so workloads can easily be isolated on different DDMs.

To spread a workload across ranks, you need to balance I/Os for any workload across all the available ranks. Storage Pool Striping (SPS) achieves this when you use multi-rank extent pools.

Isolation of workloads is most easily accomplished where each ASP or LPAR has its own extent pool pair. This ensures that you can place data where you intend. I/O activity should be balanced between the two servers, or controllers, on the DS8000; this is achieved by balancing between odd and even extent pools and making sure that the number of ranks is balanced between odd and even extent pools.

Make sure that you isolate critical workloads. We strongly recommend placing only IBM i LUNs on any rank (rather than mixing with non-IBM i LUNs). This is for performance management reasons; it will not in itself provide improved performance. If you mix production and development workloads on ranks, make sure that the customer understands which actions may impact production performance, for example adding LUNs to ASPs.

iASPs

When designing a DS8000 layout for an iASP configuration, you have the option to model the iASP LUNs on isolated extent pools. You may achieve more cost-effective performance by putting iASP and SYSBAS LUNs onto the same shared ranks and extent pools, for example when the SYSBAS activity would otherwise drive a requirement for more disks to maintain performance if the ranks were dedicated to SYSBAS. Remember, when using Disk Magic to model your iASP configuration, that you may need smaller LUNs for the SYSBAS requirement.

LUN Size

LUNs must be defined in specific sizes that emulate IBM i devices. The size of the LUNs defined is typically related to the wait-time component of the response time: if there are insufficient LUNs, wait time typically increases. The sizing process determines the correct number of LUNs required to provide the required capacity while meeting performance objectives.

The number of LUNs drives the requirement for more FC adapters on the IBM i, due to the addressing restrictions of IBM i. Remember that each path to a LUN counts towards the maximum addressable LUNs on each IBM i IOA. For example, if you have 64 LUNs and would like 2 paths to each LUN, this will require 4 IOAs on releases prior to IBM i 6.1 for addressability.
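The adapter-count arithmetic can be sketched as follows. This is our own illustrative helper; the 32-address default reflects the pre-IBM i 6.1 limit mentioned above:

```python
import math

def ioas_required(luns: int, paths_per_lun: int,
                  addresses_per_ioa: int = 32) -> int:
    # Each path to each LUN consumes one address on an IOA;
    # round up to whole adapters.
    return math.ceil(luns * paths_per_lun / addresses_per_ioa)

# The example from the text: 64 LUNs, 2 paths each, pre-6.1 addressing
print(ioas_required(64, 2))  # → 4 IOAs
```

With the IOP-less adapters' 64 addresses per port, the same workload needs half as many ports: `ioas_required(64, 2, addresses_per_ioa=64)` gives 2.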

Disk Magic can be used to model the number of LUNs required. Disk Magic does not always accurately predict the effective capacity of the ranks, depending on the DDM size selected and the number of spares assigned. The IBM tool Capacity Magic can be used to verify capacity and space utilization plans.

IBM i 6.1 and higher enable the use of larger LUNs while maintaining performance. You can typically start modeling based on a 70GB LUN size. A smaller number of larger LUNs will reduce the number of I/O ports required on both the IBM i and the DS8000. Remember that in an iASP environment you may exploit larger LUNs in the iASPs, but SYSBAS may require more, smaller LUNs to maintain performance.
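A first-cut LUN count for modeling can be derived from the 70GB starting point above. This sketch is our own illustration; Disk Magic should still validate the result against performance objectives:

```python
import math

def starting_lun_count(required_capacity_gb: float,
                       lun_size_gb: float = 70.0) -> int:
    # Round up so the configuration meets or exceeds the capacity target.
    return math.ceil(required_capacity_gb / lun_size_gb)

# Hypothetical example: ~4.2 TB of usable capacity at the 70 GB default
print(starting_lun_count(4200))  # → 60 LUNs
```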


Consulting services are available from the IBM STG Lab Services to assist in the planning and implementation of DS8000 Copy Services in an IBM i environment

- 20 ndashCopyright IBM CorporationSeptember 29th 2011httpwww-03ibmcomsupporttechdocsatsmastrnsf Document TD103095

FlashCopy

For backups and snapshots (copy no-copy options and space efficient FlashCopy)

Metro Mirror (Synchronous)

For local availability

Global Copy ( Extended Distance)

For data migration only

Peer

To

Peer

Rem

ote

Copy

(c

ontin

uous

cop

y)

Global Mirror (Asynchronous)

For DR

Consistency Group

Hints and tips for implementing DS8000 in a IBM i environment

httpwww-03ibmcomsystemsserviceslabservicesplatformslabservices_ihtml

Further References

For further detailed information on implementing DS6000 and DS8000 in an IBM i environment refer to the following redbooks These can be downloaded from wwwredbooksibmcom

IBM System Storage Copy Services on System i A Guide to Planning and Implementation (SG24-7103)

iSeries and IBM TotalStorage A guide to implementing external disk on IBM eServer i5 (SG24-7120)

High Availability on IBM i including PowerHA on i and 61 HA enhancements (SG24-7405)

Acknowledgements

Thanks to Nancy Roper and Jana Jamsek from the Advanced Technical Support Storage group Sue Baker and Eric Hess from the Advanced Technical Support Power Systems and Selwyn Dickey and Tim Klubertanz from STG Lab Services for their input to and review of this document

- 21 ndashCopyright IBM CorporationSeptember 29th 2011httpwww-03ibmcomsupporttechdocsatsmastrnsf Document TD103095

Page 14: IBM - Hints and Tips for implementing DS6000 and … · Web viewTitle Hints and Tips for implementing DS6000 and DS8000 in an iSeries environment Author IBM_USER Last modified by

Hints and tips for implementing DS8000 in an IBM i environment

selected and the number of spares assigned. The IBM tool Capacity Magic can be used to verify capacity and space utilization plans.

IBM i 6.1 and higher enables the use of larger LUNs while maintaining performance. You can typically start modeling based on a 70 GB LUN size. A smaller number of larger LUNs reduces the number of I/O ports required on both the IBM i and the DS8000. Remember that in an iASP environment you may exploit larger LUNs in the iASPs, but SYSBAS may require more, smaller LUNs to maintain performance.
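The trade-off between LUN size and resource count is simple arithmetic. As a rough illustration (a sketch with invented capacity figures, not an official sizing method; use Capacity Magic and Disk Magic for real planning):

```python
import math

def luns_needed(usable_capacity_gb: float, lun_size_gb: float) -> int:
    """Number of LUNs required to provide the requested usable capacity."""
    return math.ceil(usable_capacity_gb / lun_size_gb)

# Hypothetical 4200 GB ASP: fewer, larger LUNs mean fewer resources to manage
# and fewer I/O ports needed on both the IBM i and the DS8000.
print(luns_needed(4200, 35))   # 35 GB LUNs -> 120 LUNs
print(luns_needed(4200, 70))   # 70 GB LUNs -> 60 LUNs
```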

Multipath

Multipath provides greater resiliency for SAN attached storage. Combined with RAID-5, RAID-6, or RAID-10 protection, DS8000 multipath protects both the data paths and the data itself without requiring additional LUNs. However, additional I/O adapters and changes to the SAN fabric configuration may be required.

The IBM i supports up to 8 paths to each LUN. In addition to the availability benefits, lab performance testing has shown that 2 or 3 paths provide performance improvements when compared to a single path. Typically, 2 paths to a LUN is the ideal balance of price and performance. The Disk Magic tool supports multipathing over 2 paths.

You might want to consider more than 2 paths for workloads with high wait time, or where high I/O rates are expected to the LUNs, for example SSD-backed LUNs.

It is important to plan for multipath so that the two or more paths to the same set of LUNs use different connection elements, such as DS6000 and DS8000 host adapters, SAN switches, IBM i I/O towers, and HSL loops. Good planning for multipath includes:

• Connections to the same set of LUNs via different DS host cards in different I/O enclosures on the DS8000
• Connections to the same set of LUNs via different SAN switches
• The IOP/IOA adapter pairs in the IBM i I/O towers which connect to the same set of LUNs should ideally be in different expansion towers, located on different HSL or 12X loops wherever possible

When an IBM i system IPLs, it discovers all paths to the disk. The first path discovered will be the preferred path for I/O. If multiple IBM i LPARs share the same DS8000 or DS6000 host adapters, each system may discover the same initial path to the disk. To avoid contention on SAN switch and HA ports, it is essential that you implement LUN masking in the SAN: specify a different range of ports and HAs for each LPAR to ensure that activity is balanced across all available paths. You can do LUN masking either in the DS8000, using the volume groups construct, or by explicitly mapping IBM i IOAs to DS8000 HAs in the SAN switch.
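The balancing idea can be sketched in a few lines. This is purely illustrative: the LPAR, IOA, and port names below are invented, and real LUN masking is done in DS8000 volume groups or SAN switch zoning, not in Python.

```python
from itertools import cycle

def assign_ports(lpar_ioas: dict, ds_ha_ports: list) -> dict:
    """Round-robin each LPAR's IOAs across the DS8000 HA ports so that no
    two IOAs of one LPAR (or first paths of several LPARs) pile onto one port."""
    port_cycle = cycle(ds_ha_ports)
    return {ioa: next(port_cycle) for lpar in lpar_ioas for ioa in lpar_ioas[lpar]}

# Hypothetical names: two LPARs with two IOAs each, two DS8000 HA ports.
lpars = {"LPAR1": ["ioa1a", "ioa1b"], "LPAR2": ["ioa2a", "ioa2b"]}
mapping = assign_ports(lpars, ["I0000", "I0101"])
print(mapping)
```

Each LPAR ends up with its two paths on different HA ports, which is the balanced outcome the LUN masking recommendation aims for.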

- 14 – Copyright IBM Corporation, September 29th 2011, http://www-03.ibm.com/support/techdocs/atsmastr.nsf, Document # TD103095


When assigning LUNs to HAs, the recommendation is to isolate IBM i LUNs on their own HA where possible. We recommend spreading activity across the available HAs; since there is typically little skew in an IBM i workload, this is usually not difficult. Do not allow multiple IBM i HBAs to see multiple HAs, because in that case all IBM i HBAs could establish paths through the same HA, resulting in unbalanced I/O traffic between the DS8000 host adapters.

Host Attachments

IBM i I/O adapters should be defined as FC-AL connections for direct attached configurations (without a switch). For all switched connections, use SCSI-FCP, which is the default for the DS8000.

In a pre-IBM i 6.1 multipath environment, it is unlikely that any single host attachment is the bottleneck in an IBM i DS8000 configuration. It is common to combine multiple IBM i FC attachments onto a single DS8000 FC attachment through a SAN. In this case it is important to ensure that you do not over-configure the DS8000 attachment; Disk Magic can be used to model it. The normal range is to combine 2-4 IBM i IOAs onto a single DS8000 host attachment.

IBM i fibre adapters should be placed in accordance with the card placement guidelines in the Redpaper "PCI, PCI-X, PCI-X DDR, and PCIe Placement Rules for IBM System i Models", available for download from http://www.redbooks.ibm.com/abstracts/redp4011.html?Open. A previous and obsolete version of this paper is available for releases of code prior to V5R2: "PCI Card Placement Rules for the IBM eServer iSeries Server, OS/400 Version 5 Release 2, September 2003", located on the web at http://www.redbooks.ibm.com/abstracts/redp3638.html?Open.

You are encouraged to consider these additional guidelines:

• For 0588, 5088, 5094, 5096, 5294, and 5296 style I/O towers, it is recommended to install no more than 1 fibre adapter per multi-adapter bridge (MAB), whether the attachment is for disk or tape. If the MAB is dedicated to disk attachment, then 2 adapters may be considered.

• Balance the fibre adapters evenly across the HSL and 12X loops. Always place both the IOP and IOA in 64-bit card slots.

• If you are not using multipath, you may find optimum performance is achieved by limiting the LUNs on each IBM i FC card to 20-22.

When spreading activity across the host attachments, make sure that in a multipath configuration alternate paths are provided to each server of the DS8000.


Multipath connectivity is provided starting with i5/OS V5R3 and is recommended when connecting to the DS8000, for availability and concurrent maintenance.

Connections on the IBM i should be to 2787 or 5760 cards (5760 cards require V5R3 or later and an 800, 810, 825, 870, 890, or i5 CPU). We recommend using the first two 2787 cards in each node in a tower and ensuring that cards are balanced across all towers and HSL rings.

IOPless Host Attachments

IBM i 6.1, together with POWER6 and the new Smart IOAs, introduces IOP-less IOAs for disk and tape. The Smart IOAs are:

– 5735 – 2-port 8 Gb Fibre Smart IOA, PCIe
– 5749 – 2-port 4 Gb Fibre Smart IOA, PCI-X DDR2, IBM i OS only
– 5774 – 2-port 4 Gb Fibre Smart IOA, PCIe, IBM i OS, Linux, and AIX

These new cards provide significant performance advantages and do not require IOPs, thus saving cost and slots. The maximum number of supported addresses is increased from 32 to 64 for each port. Typically you would configure 2 paths to each LUN for availability.

IBM i 6.1 or higher with POWER6 or POWER7 and the IOP-less adapters can support up to 64 LUNs on each port; however, with the move to larger LUNs, for most workloads you should limit the total LUNs on a card to 64 (32 on each port). For workloads with a low I/O rate you may be able to support more than 32 LUNs on each port.

For IBM i 6.1 or higher with POWER6 or POWER7 configurations and the new IOP-less IOAs, you should plan on a 1:1 ratio between the IBM i IOA ports and the DS8000 I/O ports. For the highest performance configurations, where host attachments are stressed (not likely in an IBM i production workload), you should plan to use only 2 ports of the DS8000 4 Gb 4-port HA card. The DS8700 and DS8800 have both 4-port and 8-port 8 Gb host adapter cards: for maximum performance, select the 4-port card; for increased connectivity, select the 8-port card. The increased performance capabilities of these new cards can be modeled using Disk Magic.

Card placement guidelines are as follows

• 6 Smart IOAs per 12X loop
• 4 Smart IOAs per HSL-2 loop
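The rules of thumb above (32 LUNs per port for most workloads, and the per-loop Smart IOA limits) lend themselves to a simple plan check. This is an illustrative sketch of those guidelines only, not an IBM validation tool:

```python
def check_iopless_plan(luns_per_port: int, ioas_per_loop: int, loop_type: str = "12X") -> list:
    """Return a list of guideline violations for a proposed IOP-less layout."""
    issues = []
    if luns_per_port > 32:
        issues.append("more than 32 LUNs per port; acceptable only for low I/O-rate workloads")
    # Guideline: 6 Smart IOAs per 12X loop, 4 per HSL-2 loop.
    limit = 6 if loop_type == "12X" else 4
    if ioas_per_loop > limit:
        issues.append(f"more than {limit} Smart IOAs on a {loop_type} loop")
    return issues

print(check_iopless_plan(luns_per_port=32, ioas_per_loop=6))                      # within guidelines
print(check_iopless_plan(luns_per_port=40, ioas_per_loop=5, loop_type="HSL-2"))   # two violations
```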


The IOP-less adapters support DS8000 and DS6000 (not ESS). The same adapters may be used for disk and tape, as well as Boot from SAN; however, it is strongly recommended not to share IOAs between disk and tape.

Maximum resource names in IBM i

The System i Handbook and the System Builder document the maximum quantity of internal disk and external LUNs. For some of these maximum limits, this is not really the maximum quantity of LUNs as we normally think of them, but the maximum quantity of resources seen at the hardware level. For example:

Fibre card 1 will have the 1st path to 32 LUNs.
Fibre card 2 will have the 2nd path to the same 32 LUNs.
IBM i microcode and the operating system will see this as 64 resource names, and this count of 64 is what you use when determining whether you are approaching the maximum supported number for the i5 model.
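The counting rule above can be written down directly: every path to a LUN consumes one hardware resource name.

```python
def resource_names(luns: int, paths_per_lun: int) -> int:
    """Each path to a LUN appears as a separate hardware resource name."""
    return luns * paths_per_lun

# The two-fibre-card example above: 32 LUNs, each reached via 2 cards.
print(resource_names(32, 2))  # 64 resource names counted against the model maximum
```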

Logical Configuration

When defining IBM i LUNs, you can define them as protected unless you want to use i5/OS mirroring, in which case you need to ensure that you define the LUNs as unprotected when you create them. If you are using Boot from SAN at a release prior to IBM i 6.1, you will have to use mirroring to protect the Load Source; in this case you must define the Load Source LUNs as unprotected.

If you are planning to use Copy Services, either for high availability or just for migrations, it is important to note that the source and target in a Copy Services pair must have the same attributes. Once a LUN is defined with the protection attribute, this cannot be changed without deleting the LUN and re-defining it. Deleting a LUN deletes all the data on it.

A host adapter port is identified to the DS through its World Wide Port Name (WWPN). A set of host ports can be associated into a port group and managed together. This port group is referred to as a host attachment in the GUI and as a host connection in the DS CLI. A host attachment can be associated with a volume group to define which LUNs will be assigned to that host adapter.

A volume group is a group of logical volumes (LUNs) that are attached to a host adapter. The type of volume group used with open system hosts determines how the logical volume number is converted to the host-addressable LUN_ID on the Fibre Channel SCSI interface.


When associating a host attachment with a volume group, the host attachment contains attributes that define the logical block size and the address discovery method used by that host adapter. These attributes must be consistent with the type of volume group assigned to that host attachment. i5/OS LUNs are accessed by the host in 520-byte blocks. Of these 520 bytes, 8 are used by the operating system, so each block has 512 usable bytes, as in LUNs for other open systems. So for an i5/OS host attachment, 520 is the correct block size to define, and the correct address discovery method is Report LUN. The correct block size and address discovery method for IBM i are generated by specifying IBM i attachment types when creating volume groups and host attachments.
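The 520/512 split explains why a LUN's usable data capacity is slightly less than its raw block capacity. A minimal sketch of that arithmetic:

```python
BLOCK_SIZE = 520   # bytes per block on an i5/OS LUN
USABLE = 512       # bytes available to data; 8 bytes reserved by the OS

def usable_fraction() -> float:
    """Fraction of each 520-byte block available for data."""
    return USABLE / BLOCK_SIZE

# Roughly 98.5% of the raw block capacity is usable for data.
print(round(usable_fraction(), 4))
```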

The LUN ID on the DS8000 is a combination of the LSS number and an incremented LUN number.
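For illustration, the four-hex-digit DS8000 volume ID can be sketched as the LSS byte followed by the volume number within that LSS (a sketch of the numbering scheme, not a DS8000 API):

```python
def volume_id(lss: int, volume_number: int) -> str:
    """Compose a DS8000-style volume ID: LSS byte then volume number, in hex."""
    return f"{lss:02X}{volume_number:02X}"

print(volume_id(0x10, 3))  # '1003' -> volume 3 in LSS 10
```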

Adding LUNs to ASP

Adding a LUN to an ASP generates I/O activity on the rank as the LUN is formatted. If production work shares the same rank, you may see a performance impact. For this reason, it is recommended that you schedule adding LUNs to ASPs outside peak intervals.

Boot from SAN

IOP 2847 plus the appropriate Fibre IOA allows you to place an i5/OS load source on a Fibre Channel attached ESS Model 800, DS6000, or DS8000. The LUNs are attached using features 2766, 2787, or 5760, and apart from the Load Source you can have another 31 LUNs on the same attachment. The Load Source device does not support multipath at V5R4, but other LUNs on the 2847 can be multipath devices. To provide redundancy for the Load Source, you can use two 2847 IOPs and i5/OS mirroring (with the Load Source LUN defined as unprotected). Starting with IBM i 6.1, the LUN containing the load source may participate in a multipath configuration.

Software

It is essential that you have all up-to-date software levels installed. There are fixes that provide performance enhancements, correct performance reporting, and support for new functions. As always, call the support center before installation to verify that you are current with fixes for the hardware you are installing. It is also important to maintain current software levels to make sure that you benefit from new fixes as they are developed.

When updating storage subsystem LIC, it is also important to check whether any server software updates are required. Details of supported configurations and software levels are provided by the System Storage Interoperation Center: http://www-03.ibm.com/systems/support/storage/config/ssic/displayesssearchwithoutjs.wss?start_over=yes


Performance Monitoring

Once your storage subsystem is installed, it is essential that you continue to monitor its performance. IBM i Performance Tools reports provide information on I/O rates and on response times to the server, allowing you to track trends in workload growth and changes in response time. Review these trends to ensure that your storage subsystem continues to meet your performance and capacity requirements. Make sure that you are current on fixes so that Performance Tools reports cover your external storage correctly.

IBM i 7.1 adds a new category, EXTSTG, to IBM i Collection Services, which collects new performance metrics from the DS8000. This function requires DS8000 R4 or later firmware. Data can be presented in graphs using iDoctor today and will be incorporated into Performance Data Investigator (PDI) in a future release.

If you have multiple servers attached to a storage subsystem, particularly if other platforms are attached in addition to IBM i, it is essential to have a performance tool that enables you to monitor performance from the storage subsystem perspective.

IBM TPC for Disk provides a comprehensive tool for managing the performance of DS8000s. You should collect data from all attached storage subsystems at 15-minute intervals. In the event of a performance problem, IBM will ask for this data; without it, resolution of any problem may be prolonged.

Copy Services Considerations

Copy Services is an optional feature of the IBM System Storage DS8000. It brings powerful data copying and mirroring technologies, previously available only for mainframe storage, to open systems environments.

The following Copy Services functions are available on the DS8000 and are fully supported on IBM i:
• Metro Mirror (previously known as synchronous PPRC)
• Global Mirror (previously known as asynchronous PPRC)
• FlashCopy, including Space Efficient FlashCopy


Customers may use Copy Services on the entire disk space or on individual IASPs. Metro Mirror and Global Mirror provide business continuity and disaster recovery, while FlashCopy helps to minimize the backup window on a production system.

Space Efficient FlashCopy is designed for temporary copies. Copy duration should generally not exceed 24 hours unless the source and target volumes have little write activity. FlashCopy SE is optimized for use cases where a small percentage of the source volume is updated during the life of the relationship. If much more than 20% of the source is expected to change, there may be trade-offs between performance and space efficiency, and standard FlashCopy may be the better alternative. Since a background copy would update the entire target, it would not make much sense and is not permitted with FlashCopy SE. Likewise, establishing a FlashCopy SE relationship just prior to running applications that make widespread changes to the source volumes (e.g. database reorganizations, formats, full volume restores from tape, etc.) is not advisable. For the latest recommendations on using FlashCopy SE, please refer to this document:
IBMers: http://w3-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/FLASH10617
Business Partners: http://partners.boulder.ibm.com/src/atsmastr.nsf/WebIndex/FLASH10617
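The roughly-20% rule of thumb above can be captured as a simple planning check. This is a sketch of that guidance only, with an assumed hard threshold; real decisions should weigh write activity, copy duration, and repository sizing as described above:

```python
def flashcopy_choice(expected_change_pct: float) -> str:
    """Suggest a FlashCopy variant based on the expected source change rate."""
    # Guidance above: FlashCopy SE suits relationships where well under
    # ~20% of the source changes; beyond that, standard FlashCopy fits better.
    return "FlashCopy SE" if expected_change_pct <= 20 else "standard FlashCopy"

print(flashcopy_choice(5))    # typical overnight backup snapshot
print(flashcopy_choice(60))   # e.g. a database reorg touching most of the source
```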

Consulting services are available from IBM STG Lab Services to assist in the planning and implementation of DS8000 Copy Services in an IBM i environment.


[Figure: DS8000 Copy Services functions]
• FlashCopy – for backups and snapshots (copy, no-copy, and Space Efficient FlashCopy options)
• Metro Mirror (synchronous) – for local availability
• Global Copy (Extended Distance) – for data migration only
• Global Mirror (asynchronous, with Consistency Groups) – for disaster recovery
Metro Mirror, Global Copy, and Global Mirror are forms of Peer-to-Peer Remote Copy (continuous copy).


http://www-03.ibm.com/systems/services/labservices/platforms/labservices_i.html

Further References

For further detailed information on implementing DS6000 and DS8000 in an IBM i environment, refer to the following Redbooks, which can be downloaded from www.redbooks.ibm.com:

IBM System Storage Copy Services on System i: A Guide to Planning and Implementation (SG24-7103)

iSeries and IBM TotalStorage: A Guide to Implementing External Disk on IBM eServer i5 (SG24-7120)

High Availability on IBM i, including PowerHA on i and 6.1 HA enhancements (SG24-7405)

Acknowledgements

Thanks to Nancy Roper and Jana Jamsek from the Advanced Technical Support Storage group, Sue Baker and Eric Hess from Advanced Technical Support Power Systems, and Selwyn Dickey and Tim Klubertanz from STG Lab Services for their input to and review of this document.



- 19 ndashCopyright IBM CorporationSeptember 29th 2011httpwww-03ibmcomsupporttechdocsatsmastrnsf Document TD103095

Hints and tips for implementing DS8000 in a IBM i environment

Customers may use Copy Services on the entire disk space or on individual IASPs Metro Mirror and Global Mirror provide business continuity and disaster recovery while FlashCopy helps to minimize the backup window on a production system

Space Efficient FlashCopy is designed for temporary copies Copy duration should generally not last longer than 24 hours unless the source and target volumes have little write activity FlashCopy SE is optimized for use cases where a small percentage of the source volume is updated during the life of the relationship If much more than 20 of the source is expected to change there may be tradeoffs in terms of performance versus space efficiency In this case standard FlashCopy may be considered as a good alternative Since background copy would update the entire target it would not make much sense and is not permitted with Flashcopy SE Likewise establishing a FlashCopy SE relationship just prior to running applications that make widespread changes to the source volumes (eg database reorgs formats full volume restores from tape etc) is not advisable For the latest recommendations on using FlashCopy SE please refer to this document IBMers httpw3-03ibmcomsupporttechdocsatsmastrnsfWebIndexFLASH10617Business Partners httppartnersboulderibmcomsrcatsmastrnsfWebIndexFLASH10617

Consulting services are available from the IBM STG Lab Services to assist in the planning and implementation of DS8000 Copy Services in an IBM i environment

- 20 ndashCopyright IBM CorporationSeptember 29th 2011httpwww-03ibmcomsupporttechdocsatsmastrnsf Document TD103095

FlashCopy

For backups and snapshots (copy no-copy options and space efficient FlashCopy)

Metro Mirror (Synchronous)

For local availability

Global Copy ( Extended Distance)

For data migration only

Peer

To

Peer

Rem

ote

Copy

(c

ontin

uous

cop

y)

Global Mirror (Asynchronous)

For DR

Consistency Group

Hints and tips for implementing DS8000 in a IBM i environment

httpwww-03ibmcomsystemsserviceslabservicesplatformslabservices_ihtml

Further References

For further detailed information on implementing DS6000 and DS8000 in an IBM i environment refer to the following redbooks These can be downloaded from wwwredbooksibmcom

IBM System Storage Copy Services on System i A Guide to Planning and Implementation (SG24-7103)

iSeries and IBM TotalStorage A guide to implementing external disk on IBM eServer i5 (SG24-7120)

High Availability on IBM i including PowerHA on i and 61 HA enhancements (SG24-7405)

Acknowledgements

Thanks to Nancy Roper and Jana Jamsek from the Advanced Technical Support Storage group Sue Baker and Eric Hess from the Advanced Technical Support Power Systems and Selwyn Dickey and Tim Klubertanz from STG Lab Services for their input to and review of this document

- 21 ndashCopyright IBM CorporationSeptember 29th 2011httpwww-03ibmcomsupporttechdocsatsmastrnsf Document TD103095

Page 16: IBM - Hints and Tips for implementing DS6000 and … · Web viewTitle Hints and Tips for implementing DS6000 and DS8000 in an iSeries environment Author IBM_USER Last modified by


Multipath connectivity is provided starting with i5/OS V5R3 and is recommended when connecting to the DS8000, for availability and concurrent maintenance.

Connections on the IBM i should be to the 2787 or 5760 cards (5760 cards require V5R3 or higher, and an 800, 810, 825, 870, 890, or i5 CPU). We recommend using the first two 2787 cards in each node in a tower, and ensuring that cards are balanced across all towers and HSL rings.

IOP-less Host Attachments

IBM i 6.1, together with POWER6 and the new Smart IOAs, introduces IOP-less IOAs for disk and tape. The Smart IOAs are:

- 5735 – 2-port 8 Gb Fibre Channel Smart IOA, PCIe
- 5749 – 2-port 4 Gb Fibre Channel Smart IOA, PCI-X DDR2, IBM i OS only
- 5774 – 2-port 4 Gb Fibre Channel Smart IOA, PCIe; IBM i OS, Linux, and AIX

These new cards provide significant performance advantages and do not require IOPs, thus saving cost and slots. The maximum number of supported addresses is increased from 32 to 64 for each port. Typically you would configure two paths to each LUN for availability.

IBM i 6.1 or higher with POWER6 or POWER7 and the IOP-less adapters can support up to 64 LUNs on each port; however, with the move to configure larger LUNs, for most workloads you should limit the total LUNs on a card to 64 (32 on each port). For workloads with a low I/O rate, you may be able to support more than 32 LUNs on each port.
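The per-port and per-card limits above can be sketched as a small planning check. This is an illustrative helper only; the function name and thresholds are assumptions taken from the guidance in the text, not from any IBM tool.

```python
# Illustrative planning check for IOP-less Smart IOA LUN counts:
# up to 64 LUN addresses per port architecturally, but the suggested
# planning limit is 32 per port (64 per 2-port card), relaxed only
# for low-I/O workloads.

def luns_per_card_ok(luns_port1: int, luns_port2: int,
                     low_io_workload: bool = False) -> bool:
    """Return True if the per-port counts fit the planning guidance."""
    hard_limit = 64                                  # max addresses per port
    planning_limit = 64 if low_io_workload else 32   # suggested per-port limit
    if luns_port1 > hard_limit or luns_port2 > hard_limit:
        return False
    return luns_port1 <= planning_limit and luns_port2 <= planning_limit

print(luns_per_card_ok(32, 32))                        # True: 64 LUNs per card
print(luns_per_card_ok(48, 40))                        # False at normal I/O rates
print(luns_per_card_ok(48, 40, low_io_workload=True))  # True for low-I/O work
```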

For IBM i 6.1 or higher with POWER6 or POWER7 configurations and the new IOP-less IOAs, you should plan on a 1:1 ratio between the IBM i IOA ports and the DS8000 I/O ports. For the highest-performance configurations where host attachments are stressed (not likely in an IBM i production workload), you should plan to use only two ports of the DS8000 4 Gb 4-port HA card. The DS8700 and DS8800 have both 4-port and 8-port 8 Gb host adapter cards: for maximum performance, select the 4-port card; for increased connectivity, select the 8-port card. The increased performance capabilities of these new cards can be modeled using Disk Magic.

Card placement guidelines are as follows:

- 6 Smart IOAs per 12X loop
- 4 Smart IOAs per HSL-2 loop



The IOP-less adapters support DS8000 and DS6000 (not ESS). The same adapters may be used for disk and tape, as well as for Boot from SAN; however, it is strongly recommended not to share IOAs between disk and tape.

Maximum resource names in IBM i

The System i Handbook and the System Builder document the maximum quantity of internal disk and external LUNs. For some of these maximum limits, this isn't really the maximum quantity of LUNs as we normally think of them, but the maximum quantity that will be seen at the hardware level. For example:

Fibre card 1 will have the 1st path to 32 LUNs. Fibre card 2 will have the 2nd path to the same 32 LUNs. IBM i microcode and operating system will see this as 64 resource names, and this count of 64 is what you use when trying to determine whether you're approaching the maximum supported number for the i5 model.
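The resource-name arithmetic above can be sketched as follows; `resource_names` is an illustrative helper for the worked example, not an IBM i API.

```python
# Each path to a LUN consumes one hardware resource name, so a multipath
# configuration multiplies the LUN count by the number of paths.

def resource_names(luns: int, paths_per_lun: int) -> int:
    """Resource names seen by IBM i microcode for a multipath LUN set."""
    return luns * paths_per_lun

# The example from the text: two Fibre Channel cards, each providing one
# path to the same 32 LUNs.
print(resource_names(32, 2))   # 64 resource names count toward the model limit
```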

Logical Configuration

When defining IBM i LUNs, you can define them as protected unless you want to use i5/OS mirroring, in which case you need to ensure that you define the LUNs as unprotected when you create them. If you are using Boot from SAN at a release prior to IBM i 6.1, you will have to use mirroring to protect the Load Source; in this case you must define the Load Source LUNs as unprotected.

If you are planning to use Copy Services, either for high availability or just for migrations, it is important to note that the source and target in a Copy Services pair must have the same attributes. Once the LUN is defined with the protection attribute, this cannot be changed without deleting the LUN and re-defining it. Deleting a LUN deletes all the data on the LUN.

A host adapter port is identified to the DS through its World Wide Port Name (WWPN). A set of host ports can be associated into a port group and managed together. This port group is referred to as a host attachment in the GUI and as a host connection in the DS CLI. A host attachment can be associated with a volume group to define which LUNs will be assigned to that host adapter.

A volume group is a group of logical volumes (LUNs) that are attached to a host adapter. The type of volume group used with open system hosts determines how the logical volume number is converted to the host-addressable LUN_ID on the Fibre Channel SCSI interface.



When associating a host attachment with a volume group, the host attachment contains attributes that define the logical block size and the address discovery method used by that host adapter. These attributes must be consistent with the type of volume group assigned to that host attachment. i5/OS LUNs are accessed by the host in 520-byte blocks. Of these 520 bytes, 8 are used by the operating system, so each block has 512 usable bytes, as in LUNs for other open systems. So for an i5/OS host attachment, 520 is the correct block size to define, and the correct address discovery method is Report LUN. The correct block size and address discovery method for IBM i are generated by specifying IBM i attachments when creating volume groups and host attachments.
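As a worked illustration of the 520/512-byte block arithmetic above: usable capacity is 512/520 of the raw allocation. The function name and the sample allocation size are made up for the example.

```python
# IBM i LUNs are addressed in 520-byte blocks, of which 512 bytes hold
# user data (8 bytes are reserved for the operating system).

BLOCK_RAW = 520     # bytes per block as addressed on the DS8000
BLOCK_USABLE = 512  # bytes available to user data per block

def usable_bytes(raw_bytes: int) -> int:
    """Usable data capacity for a raw allocation made of 520-byte blocks."""
    blocks = raw_bytes // BLOCK_RAW
    return blocks * BLOCK_USABLE

raw = 35_000_000_000  # a hypothetical ~35 GB raw allocation
print(usable_bytes(raw))          # usable portion of the raw allocation
print(BLOCK_USABLE / BLOCK_RAW)   # usable fraction per block, about 0.985
```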

The LUN ID on the DS8000 is a combination of the LSS number and an incremented LUN number.

Adding LUNs to ASP

Adding a LUN to an ASP generates I/O activity on the rank as the LUN is formatted. If there is production work sharing the same rank, you may see a performance impact. For this reason, it is recommended that you schedule adding LUNs to ASPs outside peak intervals.

Boot from SAN

IOP 2847, plus the appropriate Fibre Channel IOA, allows you to place an i5/OS load source on a Fibre Channel attached ESS Model 800, DS6000, or DS8000. The LUNs are attached using features 2766, 2787, or 5760, and apart from the Load Source you can have another 31 LUNs on the same attachment. The Load Source device does not support multipath at V5R4, but other LUNs on the 2847 can be multipath devices. To provide redundancy for the Load Source, you can have two 2847 IOPs and use i5/OS mirroring (with the Load Source LUN defined as unprotected). Starting with IBM i 6.1, the LUN containing the load source may participate in a multipath configuration.

Software

It is essential to ensure that you have up-to-date software levels installed. There are fixes that provide performance enhancements, correct performance reporting, and support for new functions. As always, call the support center before installation to verify that you are current with fixes for the hardware that you are installing. It is also important to maintain current software levels to make sure that you benefit from new fixes as they are developed.

When updating storage subsystem LIC, it is also important to check whether any server software updates are required. Details of supported configurations and software levels are provided by the System Storage Interoperation Center: http://www-03.ibm.com/systems/support/storage/config/ssic/displayesssearchwithoutjs.wss?start_over=yes



Performance Monitoring

Once your storage subsystem is installed, it is essential that you continue to monitor its performance. IBM i Performance Tools reports provide information on I/O rates and on response times to the server. This allows you to track trends in increased workload and changes in response time. You should review these trends to ensure that your storage subsystem continues to meet your performance and capacity requirements. Make sure that you are current on fixes, to ensure that Performance Tools reports are reporting your external storage correctly.

IBM i 7.1 adds a new category, EXTSTG, to IBM i Collection Services, which collects new performance metrics from the DS8000. This function requires DS8000 R4 or later firmware. The data can be presented in graphs using iDoctor today, and will be incorporated into Performance Data Investigator (PDI) in a future release.

If you have multiple servers attached to a storage subsystem, particularly if you have other platforms attached in addition to IBM i, it is essential to have a performance tool that enables you to monitor performance from the storage subsystem perspective.

IBM TPC for Disk provides a comprehensive tool for managing the performance of DS8000s. You should collect data from all attached storage subsystems at 15-minute intervals. In the event of a performance problem, IBM will ask for this data; without it, resolution of any problem may be prolonged.

Copy Services Considerations

Copy Services is an optional feature of the IBM System Storage DS8000. It brings powerful data copying and mirroring technologies, previously available only for mainframe storage, to open systems environments.

The following Copy Services functions are available on the DS8000 and are fully supported on IBM i:

- Metro Mirror (previously known as synchronous PPRC)
- Global Mirror (previously known as asynchronous PPRC)
- FlashCopy, including Space Efficient FlashCopy (FlashCopy SE)



Customers may use Copy Services on the entire disk space or on individual IASPs. Metro Mirror and Global Mirror provide business continuity and disaster recovery, while FlashCopy helps to minimize the backup window on a production system.

Space Efficient FlashCopy is designed for temporary copies. Copy duration should generally not last longer than 24 hours unless the source and target volumes have little write activity. FlashCopy SE is optimized for use cases where a small percentage of the source volume is updated during the life of the relationship. If much more than 20% of the source is expected to change, there may be trade-offs between performance and space efficiency; in this case, standard FlashCopy may be a good alternative. Since a background copy would update the entire target, it would not make much sense and is not permitted with FlashCopy SE. Likewise, establishing a FlashCopy SE relationship just prior to running applications that make widespread changes to the source volumes (e.g. database reorganizations, formats, full volume restores from tape, etc.) is not advisable. For the latest recommendations on using FlashCopy SE, refer to this document:
IBMers: http://w3-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/FLASH10617
Business Partners: http://partners.boulder.ibm.com/src/atsmastr.nsf/WebIndex/FLASH10617
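The suitability guidance above can be sketched as a simple decision helper. This is a hedged illustration only: the function name is hypothetical, and the thresholds (roughly 20% change, 24-hour duration) are the rules of thumb from the text, simplified to ignore the low-write-activity exception.

```python
# Illustrative FlashCopy SE suitability check based on the rules of thumb
# in the text: SE suits short-lived relationships where only a small
# fraction of the source changes; beyond that, standard FlashCopy is safer.

def flashcopy_advice(expected_change_fraction: float,
                     duration_hours: float) -> str:
    """Suggest SE vs. standard FlashCopy for a planned relationship."""
    if expected_change_fraction > 0.20:
        return "standard FlashCopy (too much change for SE)"
    if duration_hours > 24:
        return "standard FlashCopy (SE copies should be short-lived)"
    return "FlashCopy SE"

print(flashcopy_advice(0.05, 4))   # prints "FlashCopy SE"
print(flashcopy_advice(0.35, 4))   # prints "standard FlashCopy (too much change for SE)"
```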

Consulting services are available from IBM STG Lab Services to assist in the planning and implementation of DS8000 Copy Services in an IBM i environment.


Copy Services functions overview (figure):

- FlashCopy: for backups and snapshots (copy and no-copy options, and space efficient FlashCopy)
- Peer-to-Peer Remote Copy (continuous copy):
  - Metro Mirror (synchronous): for local availability
  - Global Copy (extended distance): for data migration only
  - Global Mirror (asynchronous): for DR, with Consistency Groups


http://www-03.ibm.com/systems/services/labservices/platforms/labservices_i.html

Further References

For further detailed information on implementing DS6000 and DS8000 in an IBM i environment, refer to the following Redbooks, which can be downloaded from www.redbooks.ibm.com:

IBM System Storage Copy Services on System i: A Guide to Planning and Implementation (SG24-7103)

iSeries and IBM TotalStorage: A Guide to Implementing External Disk on IBM eServer i5 (SG24-7120)

High Availability on IBM i, including PowerHA on i and 6.1 HA enhancements (SG24-7405)

Acknowledgements

Thanks to Nancy Roper and Jana Jamsek from the Advanced Technical Support Storage group, Sue Baker and Eric Hess from Advanced Technical Support Power Systems, and Selwyn Dickey and Tim Klubertanz from STG Lab Services, for their input to and review of this document.


Page 17: IBM - Hints and Tips for implementing DS6000 and … · Web viewTitle Hints and Tips for implementing DS6000 and DS8000 in an iSeries environment Author IBM_USER Last modified by

Hints and tips for implementing DS8000 in a IBM i environment

The IOPless adapters support DS8000 and DS6000 (not ESS) The same adapters may be used for disk and tape as well as Boot from SAN however it is strongly recommended not to share IOAs between disk and tape

Maximum resource names in IBM i

The System i Handbook and the System Builder documents the maximum quantity of internal disk and external LUNs For some of these maximum limits this isnt really the maximum quantity of LUNs as we normally think of them but is the maximum quantity that will be seen at the hardware level For example

Fibre card 1 will have the 1st path to 32 LUNsFibre card 2 will have the 2nd path to the same 32 LUNsIBM i microcode and operating systems will see this as 64 resource names and this 64 count is what you use when trying to determine whether youre approaching the maximum supported number for the i5 model

Logical Configuration

When defining IBM i LUNs you can define them as protected unless you want to use i5OS mirroring in which case you need to ensure that you define the LUNs as unprotected when you create the LUNs If you are using Boot from San at a release prior to i61 you will have to use mirroring to protect the Load Source in this case you must define the Load Source LUNs as unprotected

If you are planning to use copy services either for high availability or just for migrations it is important to note that the source and target in a copy services pair must have the same attributes Once the LUN is defined with the protection attribute this cannot be changed without deleting the LUN and re-defining Deleting a LUN deletes all the data on the LUN

A host adapter port is identified to DS through its World Wide Port Name (WWPN) A set of host ports can be associated to a port group and managed together This port group is referred to as Host attachment within the GUI and is referred to as Host connection in DSC CLI A host attachment can be associated with a volume group to define which LUNs will be assigned to that host adapter

A volume group is a group of logical volumes (LUNs) that are attached to a host adapter The type of volume groups used with open system hosts determines how the logical volume number is converted to the host-addressable LUN_ID on the Fibre Channel SCSI interface

- 17 ndashCopyright IBM CorporationSeptember 29th 2011httpwww-03ibmcomsupporttechdocsatsmastrnsf Document TD103095

Hints and tips for implementing DS8000 in a IBM i environment

When associating a host attachment to the volume group the host attachment containsattributes that define the logical blocksize and the address discovery method that is used by that host adapter These attributes must be consistent with the type of volume group that is assigned to that host attachment i5OS LUNs are accessed by the host in 520 byte blocks Of these 520 bytes 8 are used by the operating system so in each block there are 512 usable bytes like in LUNs for other open systems So for an i5OS host attachment 520 is the correct blocksize to define The correct address discovery method is Report LUN The correct blocksize and address discovery for IBM i is generated by specifying IBM i attachments when creating volume groups and host attachments

The LUN id on the DS8000 is a combination of the LSS number and an incremented LUN number

Adding LUNs to ASP

Adding a LUN to an ASP generates IO activity on the rank as the LUN is formatted If there is production work sharing the same rank you may see a performance impact For this reason it is recommended that you schedule adding LUNs to ASPs outside peak intervals

Boot from SAN

IOP 2847 plus the appropriate Fibre IOA allows you to place an i5os load source on a Fibre Channel attached ESS Model 800 DS6000 or DS8000 The LUNs are attached using features 2766 2787 or 5760 and apart from the Load Source you can have another 31 LUNs on the same attachment The Load Source device does not support multipath at V5R4 but other LUNs on the 2847 can be multipath devices To provide redundancy for the Load Source you can have 2 2847 and use i5OS mirroring (with the Load Source LUN definied as unprotected) Starting with IBM i 61 the LUN containing load source may participate in a multipath configuration

Software

It is essential that you ensure that you have all up to date software levels installed There are fixes that provide performance enhancements correct performance reporting and support for new functions As always call the support center before installation to verify that you are current with fixes for the hardware that you are installing It is also important to maintain current software levels to make sure that you get the benefit from new fixes that are developed

When updating storage subsystem LIC it is also important to check whether there are any server software updates required Details of supported configurations and software levels are provided by the System Storage Interoperation Center httpwww-03ibmcomsystemssupportstorageconfigssicdisplayesssearchwithoutjswssstart_over=yes

- 18 ndashCopyright IBM CorporationSeptember 29th 2011httpwww-03ibmcomsupporttechdocsatsmastrnsf Document TD103095

Hints and tips for implementing DS8000 in a IBM i environment

Performance Monitoring

Once your storage subsystem is installed it is essential that you continue to monitor the performance of the subsystem IBM i Performance Tools reports provide information on IO rates and on response times to the server This allows you to track trends in increased workload and changes in response time You should review these trends to ensure that your storage subsystem continues to meet your performance and capacity requirements Make sure that you are current on fixes to ensure that Performance Tools reports are reporting your external storage correctly

IBM i 71 adds a new category to IBM i Collection Services EXTSTG provides new collection performance metrics from DS8000 This function requires DS8000 R4 or later firmware Data can be presented in graphs using iDoctor today and will be incorporated into Performance Data Investigator (PDI) in a future release

If you have multiple servers attached to a storage subsystem particularly if you have other platforms attached in addition to IBM i it is essential that you have a performance tool that enables you to monitor the performance from the storage subsystem perspective

IBM TPC for Disk provides a comprehensive tool for managing the performance of DS8000s You should collect data from all attached storage subsystems in 15 minute intervals in the event of a performance problem IBM will ask for this data ndash without it resolution of any problem may be prolonged

Copy Services Considerations

Copy Services is an optional feature of the IBM Systems Storage DS8000 It brings powerful data copying and mirroring technologies to open systems environments previously available only for mainframe storage

The following Copy Services functions are available on the DS8000 and are fully supported on IBM i_ Metro Mirror (previously known as synchronous PPRC)_ Global Mirror (previously known as asynchronous PPRC)_ FlashCopy including Space Efficient Flash Copy (SEFL)

- 19 ndashCopyright IBM CorporationSeptember 29th 2011httpwww-03ibmcomsupporttechdocsatsmastrnsf Document TD103095

Hints and tips for implementing DS8000 in a IBM i environment

Customers may use Copy Services on the entire disk space or on individual IASPs Metro Mirror and Global Mirror provide business continuity and disaster recovery while FlashCopy helps to minimize the backup window on a production system

Space Efficient FlashCopy is designed for temporary copies Copy duration should generally not last longer than 24 hours unless the source and target volumes have little write activity FlashCopy SE is optimized for use cases where a small percentage of the source volume is updated during the life of the relationship If much more than 20 of the source is expected to change there may be tradeoffs in terms of performance versus space efficiency In this case standard FlashCopy may be considered as a good alternative Since background copy would update the entire target it would not make much sense and is not permitted with Flashcopy SE Likewise establishing a FlashCopy SE relationship just prior to running applications that make widespread changes to the source volumes (eg database reorgs formats full volume restores from tape etc) is not advisable For the latest recommendations on using FlashCopy SE please refer to this document IBMers httpw3-03ibmcomsupporttechdocsatsmastrnsfWebIndexFLASH10617Business Partners httppartnersboulderibmcomsrcatsmastrnsfWebIndexFLASH10617

Consulting services are available from the IBM STG Lab Services to assist in the planning and implementation of DS8000 Copy Services in an IBM i environment

- 20 ndashCopyright IBM CorporationSeptember 29th 2011httpwww-03ibmcomsupporttechdocsatsmastrnsf Document TD103095

FlashCopy

For backups and snapshots (copy no-copy options and space efficient FlashCopy)

Metro Mirror (Synchronous)

For local availability

Global Copy ( Extended Distance)

For data migration only

Peer

To

Peer

Rem

ote

Copy

(c

ontin

uous

cop

y)

Global Mirror (Asynchronous)

For DR

Consistency Group

Hints and tips for implementing DS8000 in a IBM i environment

httpwww-03ibmcomsystemsserviceslabservicesplatformslabservices_ihtml

Further References

For further detailed information on implementing DS6000 and DS8000 in an IBM i environment refer to the following redbooks These can be downloaded from wwwredbooksibmcom

IBM System Storage Copy Services on System i A Guide to Planning and Implementation (SG24-7103)

iSeries and IBM TotalStorage A guide to implementing external disk on IBM eServer i5 (SG24-7120)

High Availability on IBM i including PowerHA on i and 61 HA enhancements (SG24-7405)

Acknowledgements

Thanks to Nancy Roper and Jana Jamsek from the Advanced Technical Support Storage group Sue Baker and Eric Hess from the Advanced Technical Support Power Systems and Selwyn Dickey and Tim Klubertanz from STG Lab Services for their input to and review of this document

- 21 ndashCopyright IBM CorporationSeptember 29th 2011httpwww-03ibmcomsupporttechdocsatsmastrnsf Document TD103095

Page 18: IBM - Hints and Tips for implementing DS6000 and … · Web viewTitle Hints and Tips for implementing DS6000 and DS8000 in an iSeries environment Author IBM_USER Last modified by

Hints and tips for implementing DS8000 in a IBM i environment

When associating a host attachment to the volume group the host attachment containsattributes that define the logical blocksize and the address discovery method that is used by that host adapter These attributes must be consistent with the type of volume group that is assigned to that host attachment i5OS LUNs are accessed by the host in 520 byte blocks Of these 520 bytes 8 are used by the operating system so in each block there are 512 usable bytes like in LUNs for other open systems So for an i5OS host attachment 520 is the correct blocksize to define The correct address discovery method is Report LUN The correct blocksize and address discovery for IBM i is generated by specifying IBM i attachments when creating volume groups and host attachments

The LUN id on the DS8000 is a combination of the LSS number and an incremented LUN number

Adding LUNs to ASP

Adding a LUN to an ASP generates IO activity on the rank as the LUN is formatted If there is production work sharing the same rank you may see a performance impact For this reason it is recommended that you schedule adding LUNs to ASPs outside peak intervals

Boot from SAN

IOP 2847 plus the appropriate Fibre IOA allows you to place an i5os load source on a Fibre Channel attached ESS Model 800 DS6000 or DS8000 The LUNs are attached using features 2766 2787 or 5760 and apart from the Load Source you can have another 31 LUNs on the same attachment The Load Source device does not support multipath at V5R4 but other LUNs on the 2847 can be multipath devices To provide redundancy for the Load Source you can have 2 2847 and use i5OS mirroring (with the Load Source LUN definied as unprotected) Starting with IBM i 61 the LUN containing load source may participate in a multipath configuration

Software

It is essential that you ensure that you have all up to date software levels installed There are fixes that provide performance enhancements correct performance reporting and support for new functions As always call the support center before installation to verify that you are current with fixes for the hardware that you are installing It is also important to maintain current software levels to make sure that you get the benefit from new fixes that are developed

When updating storage subsystem LIC it is also important to check whether there are any server software updates required Details of supported configurations and software levels are provided by the System Storage Interoperation Center httpwww-03ibmcomsystemssupportstorageconfigssicdisplayesssearchwithoutjswssstart_over=yes

- 18 ndashCopyright IBM CorporationSeptember 29th 2011httpwww-03ibmcomsupporttechdocsatsmastrnsf Document TD103095

Hints and tips for implementing DS8000 in a IBM i environment

Performance Monitoring

Once your storage subsystem is installed it is essential that you continue to monitor the performance of the subsystem IBM i Performance Tools reports provide information on IO rates and on response times to the server This allows you to track trends in increased workload and changes in response time You should review these trends to ensure that your storage subsystem continues to meet your performance and capacity requirements Make sure that you are current on fixes to ensure that Performance Tools reports are reporting your external storage correctly

IBM i 7.1 adds a new category, EXTSTG, to IBM i Collection Services, which collects performance metrics from the DS8000. This function requires DS8000 R4 or later firmware. Data can be presented in graphs using iDoctor today and will be incorporated into Performance Data Investigator (PDI) in a future release.

If you have multiple servers attached to a storage subsystem, particularly if other platforms are attached in addition to IBM i, it is essential to have a performance tool that enables you to monitor performance from the storage subsystem perspective.

IBM TPC for Disk provides a comprehensive tool for managing the performance of DS8000s. You should collect data from all attached storage subsystems at 15-minute intervals. In the event of a performance problem, IBM will ask for this data; without it, resolution of any problem may be prolonged.

Copy Services Considerations

Copy Services is an optional feature of the IBM System Storage DS8000. It brings powerful data copying and mirroring technologies, previously available only for mainframe storage, to open systems environments.

The following Copy Services functions are available on the DS8000 and are fully supported on IBM i:
- Metro Mirror (previously known as synchronous PPRC)
- Global Mirror (previously known as asynchronous PPRC)
- FlashCopy, including Space Efficient FlashCopy (FlashCopy SE)



Customers may use Copy Services on the entire disk space or on individual IASPs. Metro Mirror and Global Mirror provide business continuity and disaster recovery, while FlashCopy helps to minimize the backup window on a production system.

Space Efficient FlashCopy is designed for temporary copies. Copy duration should generally not last longer than 24 hours unless the source and target volumes have little write activity. FlashCopy SE is optimized for use cases where a small percentage of the source volume is updated during the life of the relationship. If much more than 20% of the source is expected to change, there may be tradeoffs in terms of performance versus space efficiency; in this case, standard FlashCopy may be a good alternative. Since a background copy would update the entire target, it would not make much sense and is not permitted with FlashCopy SE. Likewise, establishing a FlashCopy SE relationship just prior to running applications that make widespread changes to the source volumes (e.g. database reorgs, formats, full volume restores from tape, etc.) is not advisable. For the latest recommendations on using FlashCopy SE, please refer to this document:
IBMers: http://w3-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/FLASH10617
Business Partners: http://partners.boulder.ibm.com/src/atsmastr.nsf/WebIndex/FLASH10617
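Because FlashCopy SE only consumes repository space for tracks that are written during the relationship, a first-cut repository estimate follows from the expected change rate. The sketch below is a rough illustration under assumed numbers: the 20% guideline comes from the text above, but the sizing margin is an assumption, not an official IBM formula.

```python
# Hypothetical sizing sketch - the margin factor is an assumption.

def se_repository_estimate(source_gb, change_rate, margin=1.5):
    """Rough space-efficient FlashCopy repository estimate.

    Repository capacity ~ source size * expected fraction of tracks
    written during the relationship, plus a safety margin.
    Returns (estimated_gb, advisable), where advisable is False when
    the change rate exceeds the ~20% guideline and standard FlashCopy
    may be the better choice.
    """
    estimated = source_gb * change_rate * margin
    return estimated, change_rate <= 0.20

est, ok = se_repository_estimate(1000, 0.10)  # 1 TB source, 10% change
print(f"Repository estimate: {est:.0f} GB, FlashCopy SE advisable: {ok}")
```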

Consulting services are available from IBM STG Lab Services to assist in the planning and implementation of DS8000 Copy Services in an IBM i environment.


[Figure: DS8000 Copy Services functions, Peer-to-Peer Remote Copy (continuous copy)]
- FlashCopy: for backups and snapshots (copy, no-copy options, and space efficient FlashCopy)
- Metro Mirror (synchronous): for local availability
- Global Copy (Extended Distance): for data migration only
- Global Mirror (asynchronous): for DR, using a Consistency Group


http://www-03.ibm.com/systems/services/labservices/platforms/labservices_i.html

Further References

For further detailed information on implementing DS6000 and DS8000 in an IBM i environment, refer to the following Redbooks, which can be downloaded from www.redbooks.ibm.com:

IBM System Storage Copy Services on System i: A Guide to Planning and Implementation (SG24-7103)

iSeries and IBM TotalStorage: A Guide to Implementing External Disk on IBM eServer i5 (SG24-7120)

High Availability on IBM i, including PowerHA on i and 6.1 HA enhancements (SG24-7405)

Acknowledgements

Thanks to Nancy Roper and Jana Jamsek from the Advanced Technical Support Storage group, Sue Baker and Eric Hess from Advanced Technical Support Power Systems, and Selwyn Dickey and Tim Klubertanz from STG Lab Services for their input to and review of this document.

