© 2008 IBM Corporation Live Partition Mobility Viraf Patel [email protected]
Transcript
Page 1: lpm

© 2008 IBM Corporation

Live Partition Mobility

Viraf Patel, [email protected]

Page 2: lpm

IBM Training - 2008 Systems Technical Conference

© 2008 IBM Corporation

Agenda

Overview

Prerequisites

Validation

Migration

Effects

Demo

Supplemental Material

Page 3: lpm


Overview

Live Partition Mobility moves a running logical partition from one POWER6 server to another one without disrupting the operation of the operating system or applications

Network applications may see a brief (~2 sec) suspension toward the end of the migration, but connectivity will not be lost

Page 4: lpm


Overview

Live Partition Mobility is useful for

– Server consolidation

– Workload balancing

– Preparing for planned maintenance

• e.g., planned hardware maintenance or upgrades
• In response to a warning of an impending hardware failure

Page 5: lpm


Overview

Inactive partition migration moves a powered-off partition from one system to another

Less restrictive validation process because the migrated partition will boot on the target machine; no running state needs to be transferred

Page 6: lpm


Overview

Live Partition Mobility is not a replacement for HACMP

– Planned moves only – everything functional

– It is not automatic on a failure event

– Partitions cannot be migrated from failed machines

– It moves a single OS; there is no redundant failover OS in which an HACMP resource group is restarted

It is not a disaster recovery solution

– Migration across long distances is not supported in the first release because of SAN and LAN considerations

Page 7: lpm


Prerequisites

From the Fix Central website, Partition Mobility:
http://www14.software.ibm.com/webapp/set2/sas/f/pm/component.html

Page 8: lpm


Prerequisites

Two POWER6 systems managed by a single HMC or IVM on each server

Advanced POWER Virtualization Enterprise Edition

VIOS 1.5.1.1 (VIO 1.5.0.0 plus Fixpack 10.1) plus interim fixes:
– IZ08861.071116.epkg.Z – Partition Mobility fix
– 642758_vio.080208.epkg.Z – VIO MPIO fix
– AX059907_3.080314.epkg.Z – USB Optical Drive fix
– IZ16430.080327.epkg.Z – various QLogic/Emulex FC fixes

Retrieve the interim fixes and place them in the VIO server at /home/padmin/interim_fix

# emgr -d -e IZ16430.080327.epkg.Z -v3    (as root, to see the description)
$ updateios -dev /home/padmin/interim_fix -install -accept    (install as padmin)

VIOS 1.5.2.1 (VIO 1.5.0.0 plus Fixpack 11.1) rolls up all interim fixes – Preferred

Virtualized SAN Storage (rootvg and application vgs)

Virtualized Ethernet (Shared Ethernet Adapter)
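Since VIOS 1.5.2.1 is the preferred level, it is worth checking before anything else; a small sketch, where the hard-coded `level` stands in for the output of the real `ioslevel` command on the VIO server:

```shell
# Sketch: confirm the VIO server is at or above the preferred 1.5.2.1 level.
# On a live VIOS you would use: level=$(ioslevel)
required="1.5.2.1"
level="1.5.2.1"
# Lexicographic sort is sufficient here because the dotted fields line up.
lowest=$(printf '%s\n%s\n' "$required" "$level" | sort | head -1)
if [ "$lowest" = "$required" ]; then
  echo "VIOS level $level OK"
else
  echo "VIOS level $level below $required - apply Fixpack 11.1"
fi
```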

Page 9: lpm


Prerequisites

All systems that will host a mobile partition must be on the same subnet and managed by a single HMC

– POWER6 Blades are managed by IVM instances

All systems must be connected to shared physical disks (LUNs) in a SAN subsystem with no scsi reserve

SDDPCM, SVC, or RDAC based LUN:
$ chdev -dev hdisk8 -attr reserve_policy=no_reserve

PowerPATH CLARiiON LUN:
$ chdev -dev hdiskpower8 -attr reserve_lock=no

No LVM-based virtual disks – i.e., no virtual-disk logical volumes carved in the VIO server

All resources must be shared or virtualized prior to migration (e.g., vscsi, virtual Ethernet)
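The two reservation commands above differ only in the attribute name, which depends on the multipath driver. A small helper (the function name is ours, not a VIO command) makes the mapping explicit:

```shell
# Sketch: pick the "disable SCSI reservation" chdev attribute by driver type,
# per the slides. The driver labels are illustrative keys, not device output.
reserve_attr() {
  case "$1" in
    sddpcm|svc|rdac) echo "reserve_policy=no_reserve" ;;
    powerpath)       echo "reserve_lock=no" ;;
    *)               echo "unknown driver: $1" >&2; return 1 ;;
  esac
}

# On the VIO server (as padmin), e.g.:
#   chdev -dev hdisk8 -attr $(reserve_attr sddpcm)
#   chdev -dev hdiskpower8 -attr $(reserve_attr powerpath)
```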

Page 10: lpm


Prerequisites

The pHypervisor will automatically manage migration of CPU and memory

Dedicated IO adapters must be de-allocated before migration

cd0 in the VIO server may not be attached to the mobile LPAR as a virtual optical device

The operating system and applications must be migration-aware or migration-enabled

Page 11: lpm


Validation – High Level

Active partition migration capability and compatibility check

Resource Monitoring and Control (RMC) check

Partition readiness

System resource availability

Virtual adapter mapping

Operating system and application readiness check

Page 12: lpm


Validation

System Properties support Partition Mobility

– Inactive and Active Partition Mobility Capable = True

Mover Service Partitions on both systems
– VIO Servers with VASI device defined, and MSP enabled

Page 13: lpm


Migration

If validation passes, the “Finish” button starts the migration

From this point, all state changes are rolled back if an error occurs
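For reference, the same validate-then-migrate flow can also be driven from the HMC command line with migrlpar; an invocation sketch using this deck's server and partition names (verify the options against your HMC's migrlpar documentation before relying on them):

```shell
# Validate only (no state is moved): -m names the source managed system,
# -t the target, -p the mobile partition. Run on the HMC, not the VIOS.
migrlpar -o v -m mercury -t zeus -p bmark26

# Perform the actual migration once validation is clean.
migrlpar -o m -m mercury -t zeus -p bmark26
```

These commands require a live HMC session, so they are shown as a fragment only.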

[Diagram: Partition State Transfer Flow – the source and target systems each run a POWER Hypervisor; the mobile partition, an MSP, and a VASI device appear on both sides, with numbered arrows (1–5) showing the state transfer path.]

Page 14: lpm


Migration Steps

The HMC creates a shell partition on the destination system

The HMC configures the source and destination Mover Service Partitions (MSP)

– MSPs connect to the PHYP through the Virtual Asynchronous Services Interface (VASI)

The MSPs set up a private, full-duplex channel to transfer partition state data

The HMC sends a Resource Monitoring and Control (RMC) event to the mobile partition so it can prepare for migration

The HMC creates the virtual target devices and virtual SCSI adapters in the destination MSP

The MSP on the source system starts sending the partition state to the MSP on the destination server

Page 15: lpm


Migration Steps

The source MSP keeps copying memory pages to the target in successive phases until modified pages have been reduced to near zero

The MSP on the source instructs the PHYP to suspend the mobile partition

The mobile partition confirms the suspension by suspending threads

The source MSP copies the latest modified memory pages and state data

Execution is resumed on the destination server and the partition re-establishes the operating environment

The mobile partition recovers I/O on the destination server and retries all I/O operations that were in flight during the suspension

– It also sends gratuitous ARP requests to all VLAN adapters (MAC address(es) are preserved)
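The pre-copy phase in the steps above can be sketched as a simple convergence loop; this is a conceptual illustration only (the page counts and re-dirty ratio are invented), not VIOS code:

```shell
# Conceptual sketch of the pre-copy loop: keep re-copying pages that were
# modified during the previous pass until the dirty set is near zero, then
# suspend the partition and ship the final delta.
dirty=1000     # pages modified since the last copy pass (invented number)
passes=0
while [ "$dirty" -gt 10 ]; do
  # Copy the current dirty set; assume ~1/4 of it is re-dirtied meanwhile.
  dirty=$((dirty / 4))
  passes=$((passes + 1))
done
echo "suspend after $passes passes; final delta: $dirty pages"
```

The loop converges because each pass shrinks the dirty set; only the small final delta is copied while the partition is suspended, which is why the outage is on the order of seconds.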

Page 16: lpm


Migration Steps

When the destination server receives the last modified pages, the migration is complete

In the final steps, all resources are returned to the source and destination systems and the mobile partition is restored to its fully functional state

The channel between MSPs is closed

The VASI channel between MSP and PHYP is closed

VSCSI adapters on the source MSP are removed

The HMC informs the MSPs that the migration is complete and all migration data can be removed from their memory tables

The mobile partition and all its profiles are deleted from the source server

You can now add dedicated adapters to the mobile partition via DLPAR as needed, or put it in an LPAR workload group

Page 17: lpm


Effects

Server properties

• The affinity characteristics of the logical memory blocks may change

• The maximum number of potential and installed physical processors may change

• The L1 and/or L2 cache size and association may change

• This is not a functional issue, but may affect performance characteristics

Console

• Any active console sessions will be closed when the partition is migrated

• Console sessions must be re-opened on the target system by the user after migration

LPAR

• The machine identity reported by uname will change. The partition ID may change. The IP address and MAC address will not change.

Page 18: lpm


Effects

Network

– A temporary network outage of seconds is expected to occur as part of suspending the partition
• Temporary network outages may be visible to application clients, but it is assumed that these are inherently recoverable

VSCSI Server Adapters

– Adapters that are configured with the remote partition set to the migrating partition will be removed
• Adapters that are configured to allow any partition to connect will be left configured after the migration
• Any I/O operations that were in progress at the time of the migration will be retried once the partition is resumed
– As long as unused virtual slots exist on the target VIO server, the necessary VSCSI controllers and target devices will be automatically created

Page 19: lpm


Effects

Error logs

– When a partition migrates, all of the error logs that the partition had received will appear on the target system

– All of the error logs contain the machine type, model, and serial number, so it is possible to correlate an error with the system that detected it

Partition time

– When a partition is migrated, the Time of Day and timebase values of the partition are migrated.

– The partition's Time of Day is recalculated to ensure that the partition's timebase value increases monotonically, accounting for any delays in the migration.

Page 20: lpm


DEMO

Page 21: lpm


Environment

Two POWER6 servers
– 8-way Mercury
  • 01EM320_31
– 16-way Zeus
  • 01EM320_31

Single HMC managing both servers
– HMC V7.3.3.0

Mobile partition

– bmark26

• OS: AIX 6.1 6100-00-01-0748
• Shared processor pool: Test1
• CPU entitlement: Min 0.20, Des 0.20, Max 2.00
• Mode: Uncapped
• Virtual Processors: Min 1, Des 2, Max 4
• Disks: SAN LUN

Page 22: lpm


Supplemental Material

Page 23: lpm


Initial Configuration

Client hdisk0: set hcheck_interval to 300 before reboot

Client sees one hdisk with two MPIO paths:
lspath -l hdisk0

Paths are fail_over only. No load balancing in client MPIO.

hdisk6 and 7 in each VIO server, attached to a vscsi server adapter as a raw disk

No SCSI reserve set on hdisk6 and 7 in each VIO server. Also, with two fcs adapters in a VIO server, change the algorithm to round_robin for hdisk1. SDDPCM, RDAC, or PowerPATH driver installed in each VIO server.

LUNs appear in each VIO server as hdisk6 and 7

RAID5 LUNs carved in storage, zoned to 4 FC adapters in the two VIO servers

SDDPCM SDDPCM

This LUN is also zoned into the two VIO LPARs on the other Power6 server

Page 24: lpm


Initial Configuration (continued)

“Source” Power6 server mercury has dual VIO LPARs, ec01 and ec02. SEA Failover primary is ec01, backup is ec02.

“Destination” Power6 server zeus has dual VIO LPARs, sq17 and sq18. SEA Failover primary is sq17, backup is sq18

Profile for client partition bmark29_mobile has virtual scsi client adapter IDs 8 and 9 connecting to ec01 (39) and ec02 (39) respectively. Do NOT expect server adapter IDs to remain the same after partition move.

Page 25: lpm


Initial Configuration (continued)

In VIO LPARs ec01 and ec02, hdisk6 and hdisk7 are the LUNs we use for the bmark26 and bmark29 mobile LPARs.

$ lspv
NAME    PVID              VG        STATUS
hdisk0  00c23c9f9a1f1da3  rootvg    active
hdisk1  00c23c9f9f5993e5  clientvg  active
hdisk2  00c23c9f2fb9e5a9  clientvg  active
hdisk3  00c23c9fb60af645  None
hdisk4  none              None
hdisk5  none              None
hdisk6  00c23c9f291cc30b  None
hdisk7  00c23c9f291cc438  None

Without putting LUN hdisk7 in a volume group, we put a PVID on it:
$ chdev -dev hdisk7 -attr pv=yes -perm
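That PVID is the thread we can follow from the VIO server into the client. A sketch of the check, with canned lspv lines standing in for live output on the VIO server and in the client LPAR:

```shell
# Sketch: confirm the PVID stamped on the LUN in the VIO server is the same
# one the client LPAR reports for its rootvg disk. The two lines below are
# canned lspv output for illustration.
vio_line="hdisk7 00c23c9f291cc438 None"
client_line="hdisk0 00c23c9f291cc438 rootvg active"

vio_pvid=$(echo "$vio_line" | awk '{print $2}')
client_pvid=$(echo "$client_line" | awk '{print $2}')

if [ "$vio_pvid" = "$client_pvid" ]; then
  echo "PVID match: $vio_pvid"
else
  echo "PVID mismatch: VIO=$vio_pvid client=$client_pvid"
fi
```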

Page 26: lpm


Initial Configuration (continued)

DS4300, RDAC LUNs can be identified by IEEE Volume Name

$ cat sk_lsdisk
for d in `ioscli lspv | awk '{print $1}'`
do
echo $d `ioscli lsdev -dev $d -attr | grep ieee | awk '{print $1" "$2}'`
done

$ sk_lsdisk
NAME
hdisk0
hdisk1
hdisk2
hdisk3
hdisk4
hdisk5
hdisk6 ieee_volname 600A0B800016954000001C7646F142A6
hdisk7 ieee_volname 600A0B8000170BC10000142846F124AD

We have found that the ieee_volname is not visible in the client LPAR

Page 27: lpm


Initial Configuration (continued)

CLARiiON PowerPATH LUNs can be identified by Universal Identifier (UI)

$ cat sk_clariion
for d in `ioscli lspv | grep hdiskpower | awk '{print $1}'`
do
ioscli lsdev -dev $d -vpd | grep UI | awk '{print $1" "$2}'
done

Page 28: lpm


Initial Configuration (continued)

In both VIO LPARs on “Source” Power6 server mercury, hdisk7 is attached to virtual scsi server adapter ID 39

$ cat sk_lsmap
#!/usr/bin/rksh
# sk_lsmap
#
PATH=/usr/ios/cli:/usr/ios/utils:/home/padmin:
for v in `ioscli lsdev -virtual | grep vhost | awk '{print $1}'`
do
ioscli lsmap -vadapter $v -fmt : | awk -F: '{print $1" "$2" "$4" "$7" "$10}'
done

$ sk_lsmap
vhost0 U9117.MMA.1023C9F-V1-C11 vt_ec04 client2lv
vhost1 U9117.MMA.1023C9F-V1-C12 vt_ec03 nimclientlv
vhost2 U9117.MMA.1023C9F-V1-C15 vt_ec05 client3lv
vhost3 U9117.MMA.1023C9F-V1-C32 vt_ec07 hdisk3
vhost4 U9117.MMA.1023C9F-V1-C20 vt_bmark26 hdisk6
vhost5 U9117.MMA.1023C9F-V1-C13
vhost6 U9117.MMA.1023C9F-V1-C14 vtscsi0 hdisk6
vhost7 U9117.MMA.1023C9F-V1-C16
vhost8 U9117.MMA.1023C9F-V1-C21
vhost9 U9117.MMA.1023C9F-V1-C39 vt_bmark29 hdisk7

Page 29: lpm


Initial Configuration (continued)

The client LPAR was activated and booted to SMS; Remote IPL was set up to boot on the virtual Ethernet adapter from the NIM master.

Target disk selection – Option 77, alternative disk attributes…
>>> 1 hdisk0 00c23c9f291cc438

Option 77 again…
>>> 1 hdisk0 U9117.MMA.1023C9F-V9-C8-T1-L8100000000000

The PVID from the VIO server shows up in the client during netboot.

There is no MPIO in the network boot image, so the disk only shows up on the first vscsi client adapter, ID 8.

Page 30: lpm


Initial Configuration (continued)

NIM install completes. One command is included in the NIM script resource, running at the end of install and before boot:
chdev -l hdisk0 -a hcheck_interval=300 -P

This sets MPIO to test failed and non-active paths every 5 minutes and bring them online if available.

The newly installed and booted LPAR has two vscsi client adapters:
# lsdev -Cc adapter -F "name physloc" | grep vscsi
vscsi0 U9117.MMA.1023C9F-V9-C8-T1
vscsi1 U9117.MMA.1023C9F-V9-C9-T1

Two MPIO paths to hdisk0:
# lspath -l hdisk0
Enabled hdisk0 vscsi0
Enabled hdisk0 vscsi1

The PVID we expected does come through from the VIO server to the client LPAR:
# lspv
hdisk0 00c23c9f291cc438 rootvg active

The table is now set for Live Partition Mobility
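The two-path check above is easy to script as a pre-migration sanity test; a sketch where a here-string stands in for live `lspath -l hdisk0` output:

```shell
# Sketch: confirm hdisk0 really has two Enabled MPIO paths before migrating.
# The canned string below stands in for: lspath -l hdisk0
lspath_out="Enabled hdisk0 vscsi0
Enabled hdisk0 vscsi1"

enabled=$(printf '%s\n' "$lspath_out" | awk '$1 == "Enabled"' | wc -l)

if [ "$enabled" -eq 2 ]; then
  echo "both paths Enabled"
else
  echo "WARNING: only $enabled Enabled path(s) to hdisk0"
fi
```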

Page 31: lpm


Starting Mobility

Page 32: lpm


Starting Mobility

Page 33: lpm


Starting Mobility

If you specify a new profile name, your initial profile will be saved. But do NOT assume it is bootable or usable on return to the “source” server. VIO mappings will change.

Page 34: lpm


Starting Mobility

There might be more than one destination server to choose from.

Page 35: lpm


Starting Mobility

… then …

Page 36: lpm


Starting Mobility

I selected the pair that were both SEA Failover primary, but any pair should do here.

Page 37: lpm


Starting Mobility

Verify that the required (possibly tagged) VLAN is available.

Page 38: lpm


Starting Mobility

These are my client LPAR vscsi adapter IDs, matched to the destination VIO LPARs.

Page 39: lpm


Starting Mobility

You may select from different shared pools on the destination server.

Page 40: lpm


Starting Mobility

Left to default

Page 41: lpm


Starting Mobility

The moment we’ve waited for…

Page 42: lpm


As migration starts, in the “All Partitions” view we see the LPAR residing on both Power6 servers.

Page 43: lpm


Further along in the migration, we see the LPAR in “Migrating-Running” status.

Page 44: lpm


Migration Complete

Migrated LPAR resides solely on the new server.

Page 45: lpm


Migration Complete

Migration preserved my old profile, and created a new one

Same client adapter IDs, but different VIO server adapter IDs

Page 46: lpm


Device Mapping after Migration

Migration used new VIO server adapter IDs, even when same adapter IDs were available

$ hostname
sq17
$ sk_lsmap
vhost0 U9117.MMA.109A4AF-V1-C15
vhost1 U9117.MMA.109A4AF-V1-C16
vhost2 U9117.MMA.109A4AF-V1-C39
vhost3 U9117.MMA.109A4AF-V1-C14 vtscsi0 hdisk7

When you migrate back, do not expect to be back on your original VIO server adapter IDs. Your old client LPAR profile is historical and will likely not be usable without some reconfiguration. It is best to create a new profile on the way back over.

Migration did not use ID 39 in the destination VIO LPARs.

Page 47: lpm


Device Mapping after Migration

Back on the “source” server, device mappings for your client LPAR have been completely removed from the VIO LPARs

$ hostname
ec01
$ sk_lsmap
vhost0 U9117.MMA.1023C9F-V1-C11 vt_ec04 client2lv
vhost1 U9117.MMA.1023C9F-V1-C12 vt_ec03 nimclientlv
vhost2 U9117.MMA.1023C9F-V1-C15 vt_ec05 client3lv
vhost3 U9117.MMA.1023C9F-V1-C32 vt_ec07 hdisk3
vhost4 U9117.MMA.1023C9F-V1-C20 vt_bmark26 hdisk6
vhost5 U9117.MMA.1023C9F-V1-C13
vhost6 U9117.MMA.1023C9F-V1-C14 vtscsi0 hdisk6
vhost7 U9117.MMA.1023C9F-V1-C16
vhost8 U9117.MMA.1023C9F-V1-C21

There is no longer a vhost adapter ID 39 (compare with page 30).

Page 48: lpm


Interpartition Logical LAN, inside one Power6

Migration can preserve an internal, LPAR to LPAR network

The LPAR to migrate has virtual Ethernet adapter

Added this adapter to the Profile

DLPAR same adapter into the running LPAR

We added Ethernet adapter ID 5, on a different VLAN - 5

New adapter is on VLAN 5

Page 49: lpm


Interpartition Logical LAN, inside one Power6

cfgmgr in the running AIX LPAR; the DLPAR'd adapter is in:
# lsdev -Cc adapter -F "name physloc" | grep ent[0-9]
ent0 U9117.MMA.109A4AF-V9-C2-T1
ent1 U9117.MMA.109A4AF-V9-C5-T1

smitty chinet, configure the en1 interface:
# netstat -in
Name Mtu   Network    Address          Ipkts  Ierrs Opkts Oerrs Coll
en0  1500  link#2     4e.c4.31.a8.cf.2 540066 0     46426 0     0
en0  1500  9.19.51    9.19.51.229      540066 0     46426 0     0
en1  1500  link#3     4e.c4.31.a8.cf.5 0      0     3     0     0
en1  1500  192.168.16 192.168.16.1     0      0     3     0     0
lo0  16896 link#1                      301    0     318   0     0
lo0  16896 127        127.0.0.1        301    0     318   0     0
lo0  16896 ::1                         301    0     318   0     0

Perform the Migration again, back to “source” server mercury

Page 50: lpm


Interpartition Logical LAN, inside one Power6

We do get an “error” reported: there is no support in the source VIO servers for VLAN 5.

The VIO LPARs on the source and destination servers must have a virtual adapter on VLAN 5, and this adapter must be “joined” into the SEA.

Page 51: lpm


DLPAR new virtual Ethernet adapter into VIO LPARs

Do the DLPAR of adapter into both source VIO LPARs, and both destination LPARs

The new VLAN ID MUST be trunked to join the SEA.

The trunk priority MUST match the existing trunked SEA virtual adapter.

Page 52: lpm


Adapter DLPAR’d into VIOs, but not joined to the SEA

Slightly different error – mkvdev the new virtual adapter onto the SEA

Page 53: lpm


Which adapter to join?

Do this in each of the four VIO LPARs – the adapter numbers might not be the same

$ lsdev -type adapter -field name physloc | grep ent[0-9]
ent0 U789D.001.DQDXYCW-P1-C10-T1
ent1 U9117.MMA.109A4AF-V2-C11-T1
ent2 U9117.MMA.109A4AF-V2-C12-T1
ent3 U9117.MMA.109A4AF-V2-C13-T1
ent4

$ cfgdev

$ lsdev -type adapter -field name physloc | grep ent[0-9]
ent0 U789D.001.DQDXYCW-P1-C10-T1
ent1 U9117.MMA.109A4AF-V2-C11-T1
ent2 U9117.MMA.109A4AF-V2-C12-T1
ent3 U9117.MMA.109A4AF-V2-C13-T1
ent4
ent5 U9117.MMA.109A4AF-V2-C18-T1

$ chdev -dev ent4 -attr virt_adapters=ent1,ent5
ent4 changed

ent5 is the newly DLPAR’d-in virtual adapter; the virt_adapters value lists both trunked virtual adapters.
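The important detail in the chdev above is that virt_adapters must list the existing trunk adapter and the new one, or the old one is dropped from the SEA. A sketch of composing the command (the hard-coded `current` stands in for reading the SEA's virt_adapters attribute, e.g. with lsdev; verify on your VIOS level):

```shell
# Sketch: build the chdev that joins the newly DLPAR'd trunk adapter to the
# SEA while keeping the existing one.
current="ent1"   # existing trunked virtual adapter (read from the SEA)
new="ent5"       # newly DLPAR'd-in virtual adapter

cmd="chdev -dev ent4 -attr virt_adapters=${current},${new}"
echo "$cmd"
```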

Page 54: lpm


Which adapter to join? Possible errors on chdev

Forgot to check the “external access” box on the new virtual adapter:
chgsea: Ioctl NDD_SEA_MODIFY returned error 64 for device ent4

Trunk priority on the new virtual adapter did not match the existing trunked virtual adapter:
chgsea: Ioctl NDD_SEA_MODIFY returned error 22 for device ent4

Page 55: lpm


Now in the Validation before Migration…

Both VLAN ids show up in both destination VIO servers

Page 56: lpm


Ready to Finish…

Page 57: lpm


Another potential error…

Error configuring virtual adapter in slot 23 – we had no vhost in slot 23.

The virtual optical device vtopt0 (cd0) cannot be attached to a vhost adapter of the migrating LPAR – not obvious.

rmdev -l cd0 -d      (in the client LPAR)
rmdev -dev vtopt0    (in the VIO server)

Repeat the validation.

Page 58: lpm


Reference

Live Partition Mobility Redbook:
http://www.redbooks.ibm.com/redbooks/pdfs/sg247460.pdf

Page 59: lpm


Trademarks

The following are trademarks of the International Business Machines Corporation in the United States, other countries, or both.

The following are trademarks or registered trademarks of other companies.

* All other products may be trademarks or registered trademarks of their respective companies.

Notes:

Performance is in Internal Throughput Rate (ITR) ratio based on measurements and projections using standard IBM benchmarks in a controlled environment. The actual throughput that any user will experience will vary depending upon considerations such as the amount of multiprogramming in the user's job stream, the I/O configuration, the storage configuration, and the workload processed. Therefore, no assurance can be given that an individual user will achieve throughput improvements equivalent to the performance ratios stated here. IBM hardware products are manufactured from new parts, or new and serviceable used parts. Regardless, our warranty terms apply.

All customer examples cited or described in this presentation are presented as illustrations of the manner in which some customers have used IBM products and the results they may have achieved. Actual environmental costs and performance characteristics will vary depending on individual customer configurations and conditions.

This publication was produced in the United States. IBM may not offer the products, services or features discussed in this document in other countries, and the information may be subject to change without notice. Consult your local IBM business contact for information on the product or services available in your area.

All statements regarding IBM's future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only.

Information about non-IBM products is obtained from the manufacturers of those products or their published announcements. IBM has not tested those products and cannot confirm the performance, compatibility, or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.

Prices subject to change without notice. Contact your IBM representative or Business Partner for the most current pricing in your geography.

Adobe, the Adobe logo, PostScript, and the PostScript logo are either registered trademarks or trademarks of Adobe Systems Incorporated in the United States, and/or other countries.

Cell Broadband Engine is a trademark of Sony Computer Entertainment, Inc. in the United States, other countries, or both and is used under license therefrom.

Java and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the United States, other countries, or both.

Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both.

Intel, Intel logo, Intel Inside, Intel Inside logo, Intel Centrino, Intel Centrino logo, Celeron, Intel Xeon, Intel SpeedStep, Itanium, and Pentium are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries.

UNIX is a registered trademark of The Open Group in the United States and other countries.

Linux is a registered trademark of Linus Torvalds in the United States, other countries, or both.

ITIL is a registered trademark, and a registered community trademark of the Office of Government Commerce, and is registered in the U.S. Patent and Trademark Office.

IT Infrastructure Library is a registered trademark of the Central Computer and Telecommunications Agency, which is now part of the Office of Government Commerce.

For a complete list of IBM Trademarks, see www.ibm.com/legal/copytrade.shtml:

*, AS/400®, e business(logo)®, DBE, ESCO, eServer, FICON, IBM®, IBM (logo)®, iSeries®, MVS, OS/390®, pSeries®, RS/6000®, S/30, VM/ESA®, VSE/ESA, WebSphere®, xSeries®, z/OS®, zSeries®, z/VM®, System i, System i5, System p, System p5, System x, System z, System z9®, BladeCenter®

Not all common law marks used by IBM are listed on this page. Failure of a mark to appear does not mean that IBM does not use the mark nor does it mean that the product is not actively marketed or is not significant within its relevant market.

Those trademarks followed by ® are registered trademarks of IBM in the United States; all others are trademarks or common law marks of IBM in the United States.