• IBM i-based virtualization
– IBM i partition uses I/O resources from another IBM i partition (see the host-side sketch below)
– Eliminates the requirement to buy adapters and disk drives for each IBM i partition
– Supports simple creation of additional partitions, e.g. for test and development
– Requires a POWER6 server with IBM i 6.1
– Can mix virtual and direct I/O in the client partition
• Platform support
– Most IBM POWER6 servers (except blades)
• Storage support
– Determined by the host IBM i partition (EXP24, EXP 12S, other integrated disk, and natively attached external storage)
• LPAR management
– HMC
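As a minimal sketch of the host-side setup, assuming hypothetical object names (VDSK01, CLIENT1) and an illustrative size: the host IBM i partition creates a network server storage space, links it to the network server description that represents the client partition, and varies the NWSD on to serve the disk.

    /* Create a 20 GB storage space formatted for a guest partition */
    CRTNWSSTG NWSSTG(VDSK01) NWSSIZE(20480) FORMAT(*OPEN) +
              TEXT('Virtual disk for client LPAR')
    /* Describe the client partition connection (names are hypothetical) */
    CRTNWSD NWSD(CLIENT1) RSRCNAME(CTL01) TYPE(*GUEST) +
            PARTITION('CLIENT1') ONLINE(*NO)
    /* Link the storage space to the NWSD and vary it on */
    ADDNWSSTGL NWSSTG(VDSK01) NWSD(CLIENT1)
    VRYCFG CFGOBJ(CLIENT1) CFGTYPE(*NWS) STATUS(*ON)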
[Diagram: host and client IBM i partitions running on the PowerVM Hypervisor on a POWER6 server]
* All statements regarding IBM's future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only.
• Same technology as IBM i hosting AIX, Linux, and iSCSI x86 servers
• Leverage existing hardware investment
– Create new IBM i 6.1 LPARs using only virtual hardware (no IOAs, IOPs, disk units, or I/O slots necessary for client partitions); client partitions may also use physical I/O
• Rapidly deploy new workloads
– Virtual disk created with one command or a few clicks in System i Navigator
– New LPAR and virtual resources deployed dynamically
• Create test environments without hardware provisioning
– Virtual resources allow new test environments of exactly the right size to be created and deleted without moving hardware
– Test new applications, tools, and fixes in a virtual test LPAR
– Test the next release in the client partition
• Single client adapter per physical port per partition
– Intended to avoid a single point of failure
– Documentation only; not enforced
• Maximum of 64 active client connections per physical port
– It is possible to map more than 64 clients to a single adapter port
– May be fewer due to other VIOS resource constraints
• 32K unique WWPN pairs per system platform
– Removing an adapter does not reclaim its WWPNs
• Can be manually reclaimed through the HMC CLI (mksyscfg, chhwres, …) via the "virtual_fc_adapters" attribute (see the sketch after this list)
– If exhausted, an activation code must be purchased for more
• Device limitations
– Maximum of 128 visible target ports
• Not all visible target ports will necessarily be active
• Redundant paths to a single DS8000 node
• Device-level port configuration
• Inactive target ports still require client adapter resources
– Maximum of 64 target devices
• Any combination of disk and tape
• Tape libraries and tape drives are counted separately
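A minimal sketch of inspecting and reusing WWPN pairs through the HMC CLI, assuming hypothetical names (managed system SYS1, partition IBMI1, hosting VIOS1); the virtual_fc_adapters value follows the documented slot/type/remote-lpar-id/remote-lpar-name/remote-slot/wwpns/required layout, and the slot numbers and WWPNs shown are illustrative.

    # List the virtual FC adapters (including their WWPN pairs) in the profile
    lssyscfg -r prof -m SYS1 --filter "lpar_names=IBMI1" -F virtual_fc_adapters

    # Re-create the client adapter, explicitly supplying the WWPN pair to reuse
    chsyscfg -r prof -m SYS1 -i 'name=default,lpar_name=IBMI1,virtual_fc_adapters="4/client/2/VIOS1/14/c05076012345670a,c05076012345670b/0"'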
• With VSCSI
– All logical replication solutions supported, including iCluster
– PowerHA for i: geographic mirroring
– PowerHA for i: Storwize V7000 Metro and Global Mirror support (4Q2011)
• With NPIV
– All logical replication solutions supported, including iCluster
– PowerHA for i: geographic mirroring
– PowerHA for i: Storwize V7000 Metro and Global Mirror support (4Q2011)
– Plus:
• DS8000 Metro Mirror
• DS8000 Global Mirror
• DS8000 LUN-level switching
Three categories of storage attachment to IBM i through VIOS:
1) Supported (IBM storage)
– Tested by IBM; IBM supports the solution and owns problem resolution
– IBM will deliver the fix
2) Tested / Recognized (3rd-party storage, including EMC and Hitachi)
– IBM / storage vendor collaboration; the solution was tested (by the vendor, IBM, or both)
– A CSA is in place stating that IBM and the storage vendor will work together to resolve the issue
– IBM or the storage vendor will deliver the fix
3) Other
– Not tested by IBM, and may not have been tested at all
– No commitment or obligation to provide a fix
Category #3 (Other) was introduced in the last few years; "other" storage used to invalidate the VIOS warranty. IBM Service has committed to provide a limited level of problem determination for service requests involving "other" storage, to the extent of isolating the problem to being within VIOS or IBM i, or external to them (i.e., a storage problem). There is no guarantee that a fix will be provided, even if the problem is identified as a VIOS or IBM i issue.
Notes
– This table does not list more detailed considerations, for example required firmware levels or PTFs, or configuration performance considerations
– POWER7 servers require IBM i 6.1 or later
– This table can change over time as additional hardware/software capabilities and options are added
# DS3200 only supports SAS connection; not supported on Rack/Tower servers, which use only Fibre Channel connections; supported on Blades with SAS
## DS3500 has either SAS or Fibre Channel connection. Rack/Tower uses only Fibre Channel. Blades support either SAS or Fibre Channel (either BCS or BCH)
### Not supported on IBM i 7.1, but see SCORE System RPQ 846-15284 for exception support
* Supported with Smart Fibre Channel adapters; NOT supported with IOP-based Fibre Channel adapters
** NPIV requires Machine Code Level 6.1.1 or later and requires NPIV-capable HBAs (FC adapters) and switches
@ BCH supports DS3400, DS3500, DS3950; BCS supports DS3200, DS3500
@@ N Series can only be used as a file server: no load source/boot support, support only through IFS, no IBM i database support
% NPIV support for DS5100/DS5300 requires IBM i 7.1 TR2 and POWER7 firmware Ax730_xxx or the POWER6 firmware Service Pack to be released in 2Q2011
Table as of April 5, 2011. For each platform the columns are: IBM i Attach; IBM i Version; Hardware.

DS3200 / DS3400 / DS3500 / DS3950
– Rack/Tower Systems: VIOS (Not DS3200#, Yes DS3500##); 6.1 / 7.1; POWER6/7
– Power Blades: VIOS (BCH or BCS); 6.1 / 7.1; POWER6/7 @, #, ##
DS4700 / DS4800 / DS5020
– Rack/Tower Systems: VIOS; 6.1 / 7.1; POWER6/7
– Power Blades: VIOS (BCH); 6.1 / 7.1; POWER6/7
DS5100 / DS5300
– Rack/Tower Systems: Direct* or VIOS (VSCSI and NPIV%); 6.1 / 7.1; POWER6/7
– Power Blades: VIOS (BCH); 6.1 / 7.1; POWER6/7
DS6800
– Rack/Tower Systems: Direct; 5.4 / 6.1 (Not 7.1 ###); POWER5/6/7
– Power Blades: n/a (not supported)
DS8100 / DS8300
– Rack/Tower Systems: Direct or VIOS (VSCSI and NPIV**); 5.4 / 6.1 / 7.1; POWER5/6/7
– Power Blades: VIOS (VSCSI and NPIV**) (BCH); 6.1 / 7.1; POWER6/7
DS8700 / DS8800
– Rack/Tower Systems: Direct or VIOS (VSCSI and NPIV**); 5.4 / 6.1 / 7.1; POWER5/6/7
– Power Blades: VIOS (VSCSI and NPIV**) (BCH); 6.1 / 7.1; POWER6/7
SVC
– Rack/Tower Systems: VIOS; 6.1 / 7.1; POWER6/7
– Power Blades: VIOS (BCH); 6.1 / 7.1; POWER6/7
Storwize V7000
– Rack/Tower Systems: VIOS; 6.1 / 7.1; POWER6/7
– Power Blades: VIOS (BCH); 6.1 / 7.1; POWER6/7
XIV
– Rack/Tower Systems: VIOS; 6.1 / 7.1; POWER6/7
– Power Blades: VIOS (BCH); 6.1 / 7.1; POWER6/7
N Series@@
– Rack/Tower Systems: IFS / NFS (NAS); 5.4 / 6.1 / 7.1; POWER5/6/7
– Power Blades: IFS / NFS (NAS); 6.1 / 7.1; POWER6/7
For more details, use the System Storage Interoperability Center: www.ibm.com/systems/support/storage/config/ssic/
Note: there are currently some differences between the above table and the SSIC; the SSIC should be updated to reflect the above information.
• Supports over-commitment of logical memory, with overflow going to a paging device
• Intelligently flows memory from one partition to another for increased utilization and flexibility
• Memory from a shared physical memory pool is dynamically allocated among logical partitions as needed to optimize overall memory usage
• Designed for partitions with variable memory requirements
• PowerVM Enterprise Edition on POWER6 and POWER7 processor-based systems
– Partitions must use VIOS for I/O virtualization (see the sketch below)
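A minimal sketch of defining the shared memory pool from the HMC CLI, assuming a hypothetical managed system SYS1 and paging VIOS VIOS1, with illustrative sizes in MB; paging space devices are then added to the pool with the same command's --rsubtype pgdev operations.

    # Create an 8 GB shared memory pool with a 16 GB ceiling, paged by VIOS1
    chhwres -r mempool -m SYS1 -o a -a "pool_mem=8192,max_pool_mem=16384,paging_vios_names=VIOS1"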
[Diagram: POWER server running the PowerVM Hypervisor with AMS: partitions using dedicated memory/CPU alongside partitions using shared memory/shared CPU, with a Virtual I/O Server providing the paging device]
• Reduce memory costs by improving memory utilization on Power servers
• Source and destination systems must be mobility capable and compatible
– Enhanced hardware virtualization capabilities
– Identical or compatible processors
– Compatible firmware levels
• Source and destination must be LAN connected, on the same subnet
• All resources (CPU, memory, I/O adapters) must be virtualized prior to migration
– The hypervisor handles CPU and memory automatically, as required; virtual I/O adapters are pre-configured, and SAN-attached disks are accessed through the Virtual I/O Server (VIOS)
• Source and destination VIOS must have symmetrical access to the partition's disks (see the sketch below)
– e.g., no internal or VIOS LVM-based disks
• OS is migration enabled/aware
– Certain tools/middleware can benefit from being migration aware
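A minimal sketch of verifying symmetric disk access from the VIOS command line, with a hypothetical hdisk4 standing in for one of the partition's SAN LUNs; the no_reserve policy is what lets both the source and destination VIOS open the same LUN.

    $ lsmap -all                                          # confirm the client's current vSCSI mappings
    $ lsdev -dev hdisk4 -attr reserve_policy              # must report no_reserve for mobility
    $ chdev -dev hdisk4 -attr reserve_policy=no_reserve   # change it if necessary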
IBM i restrictions
• The logical partition must have all disks backed by physical volumes
• The logical partition must not be assigned a virtual SCSI optical or tape device, or an NPIV-attached tape device
• The logical partition cannot be activated with a partition profile which has a virtual SCSI server adapter: it cannot be hosting another partition
• The logical partition cannot be activated with a partition profile which has a virtual SCSI client adapter hosted by another IBM i logical partition: it cannot be a hosted partition
• No virtual SCSI server adapters can be dynamically added to the logical partition
• No virtual SCSI client adapters hosted by another IBM i logical partition can be dynamically added to the logical partition being moved
• The logical partition must not be an alternative error-logging partition
– An alternative error-logging partition is a target from the HMC for error logs
• The logical partition cannot collect physical I/O statistics
• The logical partition must not be a time reference partition
– Used to synchronize time between partitions
• The VIOS partitions will do this automatically as part of the migration
HMC validation
• Checks the source and destination systems, POWER Hypervisor, Virtual I/O Servers, and mover service partitions for active partition migration capability and compatibility
• Checks that the RMC connections to the mobile partition, the source and destination Virtual I/O Servers, and the connection between the source and destination mover service partitions are established
• Checks that there are no required physical adapters in the mobile partition and that there are no required virtual serial slots higher than slot 2
• Checks that no client virtual SCSI disks on the mobile partition are backed by logical volumes and that no disks map to internal disks
• Checks the mobile partition, its OS, and its applications for active migration capability
• Checks that the logical memory block size is the same on the source and destination systems
• Checks that the mobile partition is not configured with barrier synchronization registers
• Checks that the mobile partition is not configured with huge pages
• Checks that the partition state is active or running
• Checks that the mobile partition is not in a partition workload group
• Checks the uniqueness of the mobile partition's virtual MAC addresses
• Checks that the mobile partition's name is not already in use on the destination server
• Checks the number of current active migrations against the number of supported active migrations
• During validation, the HMC sends a command to the partition to prepare for hibernation (see the sketch below)
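A minimal sketch of driving these checks, and the migration itself, from the HMC CLI, with hypothetical system names SRC and DST and partition name IBMI1.

    # Run the validation phase only
    migrlpar -o v -m SRC -t DST -p IBMI1

    # Perform the active migration once validation passes
    migrlpar -o m -m SRC -t DST -p IBMI1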
• Work Management has three exit points for suspend/resume and mobility
– The first exit program is called to ask whether it is OK to proceed
– The exit program is called again for any action required before the operation
– The exit for resume is called after the partition is resumed or moved, and it allows for any necessary cleanup (see the sketch below)
• Current functions that will prevent suspend/resume
– The partition is a member of an active cluster
– A tape resource is varied on*
• Current functions that will prevent a migration
– A tape resource is varied on
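A minimal sketch of registering such an exit program from CL; the exit point and format names (QIBM_QWC_SUSPEND, SUSP0100) are assumptions for illustration, and MYLIB/CHKSUSP is a hypothetical program.

    /* Register a program at the (assumed) suspend exit point */
    ADDEXITPGM EXITPNT(QIBM_QWC_SUSPEND) FORMAT(SUSP0100) +
               PGMNBR(1) PGM(MYLIB/CHKSUSP)
    /* Browse registered exit points and formats to confirm the names */
    WRKREGINF EXITPNT(QIBM_QWC*)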
• Active partition migration involves moving the state of a partition from one system to another while the partition is still running
– Partition memory state is tracked while transferring memory state to the destination system
– Multiple memory transfers are done until a sufficient number of clean pages have been moved
• Memory updates on the source system affect transfer time
– Reduce the partition's memory update activity prior to the migration
• Network speed affects the transfer time
– Use a dedicated network, if possible
– At least 1 Gb speed
– Possibly use link-aggregated ports for more bandwidth (see the sketch after this list)
• In general, applications and the operating system are unaware that the partition has moved from one system to another
• There are some exceptions to this:
– Collection Services: when the partition starts to run on the target system, the Collection Services collector job will cycle the collection so correct hardware information is recorded on the target system
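A minimal sketch of aggregating two VIOS Ethernet ports for the mobility network; the adapter names ent0 and ent1 are hypothetical, and 802.3ad mode requires a matching configuration on the attached switch.

    $ mkvdev -lnagg ent0,ent1 -attr mode=8023ad   # create the link aggregation device
    $ lsdev -type adapter                         # the new aggregated entX device is listed here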
Workload (virtual server) resilience within a system pool
• Relocate virtual servers between hosts within the pool
– Determine best host placement within the pool
– Supports single virtual servers and host evacuation
• Move virtual servers away from a failing host system
– Automate relocation and placement of virtual servers in response to predicted host system failures, with no disruption
• Restart virtual servers when a host system fails
– Automate remote restart and placement of virtual servers in response to host system failures, with minimal disruption
– From a checkpoint, in the future
• Resilience policy associated with the workload
– Provide workload resilience: yes/no
– Enables host system monitoring for failures and predictive failures
– Automates recovery action based on the desired level of automation
• Automation policy associated with the workload
– Automate = Advise / Automate
– Advise: VMControl recommends actions and requires confirmation
– Automate: VMControl automates actions
Performance and Scalability Services
• The IBM i Performance & Scalability Services Center in Rochester can provide facilities, hardware, and technical expertise to assist you in testing hardware or software changes
• "Traditional" benchmarks
• Proofs of concept (e.g., HA alternatives, SSD analysis, external storage, etc.)
• Stress test your system
• Evaluate application scalability
• Performance optimization and tuning
• Assess application performance when migrating to a new release of IBM i
The following are trademarks of the International Business Machines Corporation in the United States, other countries, or both.
The following are trademarks or registered trademarks of other companies.
* All other products may be trademarks or registered trademarks of their respective companies.
Notes: Performance is in Internal Throughput Rate (ITR) ratio based on measurements and projections using standard IBM benchmarks in a controlled environment. The actual throughput that any user will experience will vary depending upon considerations such as the amount of multiprogramming in the user's job stream, the I/O configuration, the storage configuration, and the workload processed. Therefore, no assurance can be given that an individual user will achieve throughput improvements equivalent to the performance ratios stated here.

IBM hardware products are manufactured from new parts, or new and serviceable used parts. Regardless, our warranty terms apply.

All customer examples cited or described in this presentation are presented as illustrations of the manner in which some customers have used IBM products and the results they may have achieved. Actual environmental costs and performance characteristics will vary depending on individual customer configurations and conditions.

This publication was produced in the United States. IBM may not offer the products, services or features discussed in this document in other countries, and the information may be subject to change without notice. Consult your local IBM business contact for information on the product or services available in your area.

All statements regarding IBM's future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only.

Information about non-IBM products is obtained from the manufacturers of those products or their published announcements. IBM has not tested those products and cannot confirm the performance, compatibility, or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.

Prices subject to change without notice. Contact your IBM representative or Business Partner for the most current pricing in your geography.
Adobe, the Adobe logo, PostScript, and the PostScript logo are either registered trademarks or trademarks of Adobe Systems Incorporated in the United States, and/or other countries.

Cell Broadband Engine is a trademark of Sony Computer Entertainment, Inc. in the United States, other countries, or both and is used under license therefrom.

Java and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the United States, other countries, or both.

Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both.

Intel, Intel logo, Intel Inside, Intel Inside logo, Intel Centrino, Intel Centrino logo, Celeron, Intel Xeon, Intel SpeedStep, Itanium, and Pentium are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries.

UNIX is a registered trademark of The Open Group in the United States and other countries.

Linux is a registered trademark of Linus Torvalds in the United States, other countries, or both.

ITIL is a registered trademark, and a registered community trademark of the Office of Government Commerce, and is registered in the U.S. Patent and Trademark Office.

IT Infrastructure Library is a registered trademark of the Central Computer and Telecommunications Agency, which is now part of the Office of Government Commerce.
For a complete list of IBM Trademarks, see www.ibm.com/legal/copytrade.shtml:
*, AS/400®, e business(logo)®, DBE, ESCO, eServer, FICON, IBM®, IBM (logo)®, iSeries®, MVS, OS/390®, pSeries®, RS/6000®, S/30, VM/ESA®, VSE/ESA, WebSphere®, xSeries®, z/OS®, zSeries®, z/VM®, System i, System i5, System p, System p5, System x, System z, System z9®, BladeCenter®
Not all common law marks used by IBM are listed on this page. Failure of a mark to appear does not mean that IBM does not use the mark nor does it mean that the product is not actively marketed or is not significant within its relevant market.
Those trademarks followed by ® are registered trademarks of IBM in the United States; all others are trademarks or common law marks of IBM in the United States.