IBM Systems Enterprise Architecture – GPFS Solution
Advanced GPFS Solution Design on Cross-Platform Infrastructure
Residency date: 2009/10/26 (5 days); modification started 2009/12/15; last updated 2010/04/29
Modified by [email protected], reviewed by [email protected]
IBM System x Technical Sales Team
This residency is one of the business partner skill-enhancement programs in Korea; 17 companies joined this session, and first of all I want to say thanks for joining. The documentation was written by the partner engineers; I only translated it from Korean into English. Before the residency started, many teams helped with the preparation: the education department, the technical sales managers, and the system administration team. Preparing the demo systems was not easy, but the support team built all of them, including the System p6 servers, the storage boxes, and the BladeCenter systems. I can assure you that every attendee gained a great deal of configuration experience and technical knowledge; it is a very helpful program for our business partners.

Demo system description:
System p6 570: 2
DS storage units: 4
BladeCenter chassis: 2
Blade servers: 12
SAN switches: 2
Network switches: 4
The Business Partner Residency Program is one of the education programs in Korea. It is usually held outside Seoul, for example in YangPyung or ChungPyung, where the attendees stay in a resort for five days. Instruction on the chosen topic comes first; after the base education, the attendees start testing systems and writing up the results. It is an irregular education program, because the topic is set each year through team discussion and changing requirements from the business partners, and one of the rules is that each topic is run only once.
The objective of this residency, and this year's topic, is advanced GPFS solution design on a cross-platform infrastructure. Recently, customer requirements are not for single-platform GPFS configurations: they want mixed configurations such as Linux, pLinux, AIX, and Windows. The cross-mount function mounts a remote cluster's file system for collaboration. The attendees wanted to know which configuration limits and consideration points apply to a mixed cluster, and how to configure the storage box for optimized performance.
1. Preparing Hardware
This is the node configuration assigned to each residency team; every team used the same hardware:
System p6 570 (9117-MMA), 50% of the system (one partition)
BladeCenter HS21, 3 nodes
SAN switch, 1
Network switch, 1
DS3000- or DS4000-series storage, 1
Before configuration and OS installation, check the list below. Every item on it matters for a GPFS configuration, because current levels give better stability and higher performance. I recommend using the latest version of the system firmware and drivers.
Firmware checklist                      Software checklist
System BIOS (System p, x)               AIX version and patch level
Internal disk firmware                  Linux version and update level
Onboard network firmware                Onboard network driver for Linux
AMM firmware for BCH                    Multipath driver for Linux (RDAC)
Ethernet switch module for BCH          HBA driver for Linux
External switch firmware                Windows version (64-bit only)
SAN switch module for BCH               Onboard network driver for Windows
External SAN switch firmware            Multipath driver for Windows
SAN switch ID (license)                 Storage Manager
Storage controller firmware             IBM GPFS software
Disk firmware for storage               CIFS server
2. Installing Red Hat Enterprise Linux Server 5.4 x64
Boot the RHEL 5.4 media and skip the media test.
Choose the language and keyboard.
Partitioning: /boot 100 MB, swap 4095 MB, / 50 GB.
Set up the boot loader; configure the IP address and hostname.
Set the time zone and the root password.
Choose the packages and start the package installation.
You must include the development packages on at least the first node, in order to build the GPFS portability layer. Usually the first installation step is to build the portability layer on the first installed system; "make rpm" then produces an RPM version of the portability layer for the remaining nodes.
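As a sketch, the build usually looks like the following on GPFS 3.3 for Linux. The make targets are the stock portability-layer targets shipped in /usr/lpp/mmfs/src; the function is only defined here, not executed, because it needs a real GPFS node with kernel headers and compilers installed.

```shell
# Sketch of the portability-layer build on the first Linux node (GPFS 3.3
# install paths); defined as a function, not executed here.
build_portability_layer() {
    cd /usr/lpp/mmfs/src || return 1
    make Autoconfig        # probe the running kernel and generate the config
    make World             # compile the portability-layer kernel modules
    make InstallImages     # install the built modules for this kernel
    # "make rpm" then packages the layer so identical nodes need no compiler
}
type build_portability_layer >/dev/null && echo defined
```

Only the first node needs the development packages; the packaged layer is copied to the other nodes of the same kernel level.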
Start Installation
Post-installation steps:
1. Disable unused daemons.
2. Disable SELinux and reboot the system.
3. Stop the iptables daemon, or configure the firewall for the ports the GPFS daemon needs: 22 and 1191.
4. Update the network driver.
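Step 2 above can be sketched with a one-line edit. It is shown here against a temporary copy of /etc/selinux/config so the change is visible; on the real node you edit the file itself and reboot.

```shell
# Flip SELinux to disabled, demonstrated on a throwaway copy of the config.
cfg=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$cfg"
sed -i 's/^SELINUX=.*/SELINUX=disabled/' "$cfg"
grep '^SELINUX=' "$cfg"
# If iptables stays enabled, GPFS needs TCP ports 22 (ssh) and 1191 (mmfsd) open.
```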
3. Installing Windows Server 2008 R2 x64 Enterprise
Choose the language and location, then choose the edition: Windows Server 2008 R2 Enterprise (full installation).
Start Installation Rebooting
Complete Installation
After Installation and Login Screen
At this point the Team 2 and Team 3 members were trying to install GPFS v3.3 on a Windows Server 2008 R2 system, but they could not get it configured on that operating system. In the end they reinstalled the systems with Windows Server 2008 SP2. The GPFS v3.3 documentation states that the current version supports Windows Server 2008 SP2 only. There are many differences between the 2008 and 2008 R2 cores: Windows Server 2008 R2 is based on the Windows 7 core.
According to the worldwide GPFS development team, GPFS v3.4 will support Windows Server 2008 R2; the announced plan for that product is 2H 2011. That version will also support Windows-based GPFS server systems, whereas the current version (v3.3) supports the GPFS client side only on Windows. In other words, you must configure a mixed cluster of Linux (or AIX) and Windows.
4. Preparing the VIO Client
Usually, before installing AIX on the system, you must configure the partition on the p570.
Create Logical partition
Input the partition ID and Name
Input the profile name
Choose the type of processor resource allocation.
Enter the processor resource information.
Choose the type of memory resource.
Input Memory Size
Choose PCI Adapter (Network and HBA)
To create a virtual SCSI adapter, open the Action drop-down menu and click Create SCSI Adapter. There is no problem using the default SCSI ID. The important thing here is assigning the adapter to the correct virtual system or partition, and the target partition then needs to choose the matching adapter ID. A vscsi device can be created on both the server and the client partition, and you decide the mapping IDs between them.
Virtual adapter configuration: the virtual SCSI adapters configured previously are assigned on both the server and the client.
The applied virtual SCSI adapter.
Do not use Logical Host Ethernet Adapters (just click Next).
Choose the same options and click Next.
The summary table for the partition is displayed.
This completes creating a logical partition on the System p6 570.
5. Configuring the NIM Server
After the logical partition is built, set up the NIM server and client.
Connect to the NIM server and edit /etc/hosts; this IP address and hostname will be used on the VIO client side.
Run smitty nim and choose Perform NIM Administration Tasks.
Choose Manage Machines.
Choose Define a Machine.
Enter the NIM client hostname.
Choose the cable type.
Back at the NIM main menu, choose Perform NIM Software Installation and Maintenance Tasks.
Choose Install and Update Software.
Choose Install the Base Operating System on Standalone Clients.
Select the target system.
A system image was already prepared with a mksysb backup, so the client target system will be installed from the mksysb image.
Select the mksysb image version.
Select the installation SPOT.
The ACCEPT new license agreements field must be set to yes, and the Initiate reboot and installation now field must be set to no; if it is set to yes, the client is rebooted by the NIM server as soon as the setup is complete.
6. Creating the VIO Client Logical Volume
Before starting the installation, configure a logical volume and assign it as the target volume for the OS installation. When connecting to the VIO server, do not use the root account; this is the recommendation.
Default account information: ID padmin, password padmin.
The padmin account has limited rights, so you must escalate with the oem_setup_env or license -accept command; after that there is no limit on using admin commands.
Add a logical volume for the VIO client: run smitty lv and choose Add a Logical Volume.
Choose the volume group for the new LV (rootvg), then enter the information for the LV.
You can check the allocation status of the logical volume with "lsvg -l rootvg".
7. Installing AIX on the Partition
This step configures the network boot: enter the SMS menu and choose Setup Remote IPL.
Choose the Ethernet device and the IP range.
Choose BOOTP. This menu sets the IP address on the NIC adapter so it can load the mksysb image from the NIM server.
After the IP address setup is complete, exit this menu. This is the boot screen of the client network-booting from the NIM server.
Choose 1, then choose 1 again.
Choose 2 to copy from the mksysb image. Do not change any other option for copying from the image.
This screen shows the installation progress. The AIX installation is complete.
8. Configuring the Storage System (DS3400)
The first step is downloading and installing the latest version of Storage Manager.
Check the initialization status of the HBA card.
Create the host group and the hosts; modify the host topology.
Check the defined host type, then map the volumes.
Volume Mapping Status
DS3400 Firmware Update
Disk Firmware Update
Almost all storage systems follow a similar procedure for attaching servers and storage:
1. Hardware configuration and complete cabling
2. SAN switch configuration, such as the domain ID and the timeout values
3. Volume configuration as recommended for the GPFS file system
4. Host type and host group configuration
5. Volume mapping
6. HBA driver update and installation on each server system
7. Checking the volumes on each system
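Step 7 can be sketched as below for a Linux host running the RDAC (mpp) multipath driver; the commands are standard Linux query paths, and the function is only defined here because it needs a host with mapped LUNs.

```shell
# Defined-but-not-run sketch: verify mapped LUNs on a Linux host with RDAC.
verify_luns() {
    cat /proc/scsi/scsi    # one entry per LUN per path at the raw SCSI level
    ls /dev/sd*            # block devices the kernel created for those paths
    ls /proc/mpp           # RDAC's consolidated per-controller/per-LUN view
}
type verify_luns >/dev/null && echo defined
```

On AIX the equivalent check is lsdev/lspv after cfgmgr, as shown in the DS4300 section.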
9. Configuring the Storage System and Initializing Volumes on Each OS (DS4300)
Check the current firmware level of the installed system box. As of 2009/10/26 the latest firmware level is 06.60.22.00.
Storage volume configuration and SAN switch firmware update.
p1:/#>lscfg -v -l fcs0 | grep Net Network Address.............10000000C963E03A p1:/#>lscfg -v -l fcs1 | grep Net Network Address.............10000000C963E03B p2:/#>lscfg -v -l fcs1 | grep Net Network Address.............10000000C967B415 p2:/#>lscfg -v -l fcs0 | grep Net Network Address.............10000000C967B416
Check the WWNs on the AIX servers (above) for the zoning configuration.
Check the WWNs on the Linux servers by installing the QLogic HBA CLI command tool.
Configure zoning on the SAN switch, then assign the volumes to each node system.
Check the initialized volumes on the AIX server.
Install the RDAC driver on the Linux server.
Boot Loader Configuration
Check Initialized Volume on Linux Box.
10. SAN Switch Configuration Guide
This is the key point when configuring a multi-switch SAN infrastructure. The recommended configuration uses SAN switches from the same vendor running the same Fabric OS; choose one vendor, such as Brocade. If you want to attach heterogeneous SAN switches, you must consult the interoperability guide. Even with switches from the same vendor, you must check the domain IDs and the timeout values.
Check list:
1. SAN switch domain ID
2. SAN switch timeout values:
   a. R_A_TOV = 10 seconds (the setting is 10000)
   b. E_D_TOV = 2 seconds (the setting is 2000)
3. ISL license for the SAN switch
If both SAN switches have the same domain ID, first set the external SAN switch to disabled, then apply the ISL license on the switch and change the domain ID, and finally set the external SAN switch back to enabled. Extended Fabric (the ISL license) is now enabled.
If zoning is already configured, delete the whole zone configuration first, then configure the ISL on the IBM BladeCenter SAN switch module; starting from the factory default settings makes this easy. Connect to 192.168.70.129 via HTTP. Be careful: before the ISL configuration, remove the external cable. If the external cable is not removed, the domain IDs conflict between the two SAN switches.
Click Admin, then check the domain ID.
Set the switch to Disable.
Change the domain ID from 1 to 3 and click Apply.
Check the changed domain ID, then configure the zones on each management interface.
Configure Zone.
11. GPFS Pre-Installation - SSH Key Generation
Configure the host information. The following steps are run on all of the GPFS server and client nodes.
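As a minimal sketch, the host information boils down to consistent /etc/hosts entries for the GPFS interconnect on every node. The two entries below reuse the illustrative t1_gpfs/t2_gpfs names and 10.10.10.x addresses from this session; substitute your own.

```shell
# Sample /etc/hosts entries for the GPFS interconnect (illustrative values);
# on a real cluster, append the same entries to /etc/hosts on every node.
gpfs_hosts='10.10.10.1   t1_gpfs
10.10.10.2   t2_gpfs'
printf '%s\n' "$gpfs_hosts"
```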
t1:/#>ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (//.ssh/id_rsa):
Created directory '//.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in //.ssh/id_rsa.
Your public key has been saved in //.ssh/id_rsa.pub.
The key fingerprint is:
bd:40:09:86:b7:89:a9:ae:40:a7:ed:51:3d:ae:18:7c root@T1
t1:/#>cd /.ssh
t1:/.ssh#>cp -rp id_rsa.pub authorized_keys
t1:/.ssh#>ls
authorized_keys  id_rsa  id_rsa.pub
t1:/.ssh#>cat authorized_keys
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA5nZUpuqDXCgQ5OEp1GzD5PTH0qjZufrLbUWPPMsfYVPBJsRxAyTQIDluaYQXVz+pCer4p87/HZNenqI9kgf9tJHC9RPhPLZxjyUauVgADvCmkzHm1TbKltwwnjawhZ1Oj8gY2FEhZPhSf7YEp5ysrNLQvR12li8VosDSSRuqNp3nBS5G5PYmMB0h0OGO48ZxB3Gf6R3QUZqaoX4SZl9SinG8lF5sze9x8t/l0GKBQ3RtcHBjx7iHdSrOaETEaFhco/1QLcjBPtSKK7jT4FDi7dD0XEHN4k0B5IdJYtx2Nl6Y6g1a5SpnTTm5n0QKe2buznMgD0TmML1PaaXnNDIUbw== root@t1
t2:/#>ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (//.ssh/id_rsa):
Created directory '//.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in //.ssh/id_rsa.
Your public key has been saved in //.ssh/id_rsa.pub.
The key fingerprint is:
19:33:82:5c:15:e5:60:fb:f2:8b:ce:50:5c:2d:03:6d root@T2
t2:/#>ls
id_rsa  id_rsa.pub
t2:/.ssh#>cp -rp id_rsa.pub authorized_keys
t2:/.ssh#>cat authorized_keys
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAIEAusPjMndj2JRzHaseb7/9/d8AdOsvtDBr8pZIQ/Aac48F/2iepmuogJjdxohbCYSSRjfTz35No+hNuLpYZpgvS/2+uco9dXnHZv7HJV+4rdwTREqJplLKZvPMrBNEkKLkHiP1NJ3hq5bHeMEDyCKt/LYGcwl/VN3+nGXcJ2b5lsE= root@T1
t2:/.ssh#>
t1:/.ssh#>scp id_rsa.pub t2_gpfs:/home
The authenticity of host 't2_gpfs (10.10.10.2)' can't be established.
RSA key fingerprint is 0b:01:ad:da:58:5d:eb:40:71:f9:40:c3:d1:a0:8e:14.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 't2_gpfs,10.10.10.2' (RSA) to the list of known hosts.
root@t2_gpfs's password:
id_rsa.pub                              100%  389     0.4KB/s   00:00
t2:/.ssh#>scp id_rsa.pub t1_gpfs:/home
The authenticity of host 't1_gpfs (10.10.10.1)' can't be established.
RSA key fingerprint is 40:ff:29:0b:fb:b6:68:79:ee:5c:63:b5:ab:b9:f7:f2.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 't1_gpfs,10.10.10.1' (RSA) to the list of known hosts.
root@t1_gpfs's password:
id_rsa.pub                              100%  389     0.4KB/s   00:00
t1:/home#>cat id_rsa.pub >> /.ssh/authorized_keys
t1:/home#>cat /.ssh/authorized_keys
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAww2LlIJZxfAgLiIm8dPq+glByIziJC8L3294c3lTgvPDswNPlzf4PBB8+cz/hGoehuQMBP4l8tYONFABOxsMLFkYpxjv9EKL9SQ4PTiqPV+FJwaaWEK9fg/FD+JXwL1KHetyaYHAmgFzJFrAF7XIO+1303sRkOSOzYUSWMgPG5X8cH22sSchUgwed6xsxBkcx3oknirJp24mvfRmG+WFQB84FN04e0dSdcrsU3BMOYq0QZCqGQsHdGOak70legxHI4njq7DPJFM9vTiYVRsl2ylPzi65a3bWwT3XjwyHA2s+QNVYBftVfCe5wfPHmsu/arS3zyimcM+nCYxpkUs69Q== root@t1
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAu3rX612XoGaOBdvD5TpgjpfXZCx6SiXA6A+5n/AAt3Av6ilVelZ40mMK07qg2/l+586yjrkAdyUKKJ+GstGovGWZHqKnLOSiSpmkYMRHplKArW4nyrK7MPMn6YL8WDz/lF8HNd157usesqzFA3R1IpiDKfTdd22z/4EQXJzljbkblZCZTJ/QrlfksXw2XrrmcPfl8g35od3Cid4rOm7UyWiIYHNMZGCxYHlFxdw9Z+o/I85Mu6mbZOlP8AGeoq4QmjvGFeOv/WM95nDymXebB3OT9XPgKV/8HFRaMXlh+9aBBsKctxYixswzjOuuMpZohMqwbp1yaFHScWYoxsa3rQ== root@t2
t2:/home#>cat id_rsa.pub >> /.ssh/authorized_keys
t2:/home#>cat /.ssh/authorized_keys
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAu3rX612XoGaOBdvD5TpgjpfXZCx6SiXA6A+5n/AAt3Av6ilVelZ40mMK07qg2/l+586yjrkAdyUKKJ+GstGovGWZHqKnLOSiSpmkYMRHplKArW4nyrK7MPMn6YL8WDz/lF8HNd157usesqzFA3R1IpiDKfTdd22z/4EQXJzljbkblZCZTJ/QrlfksXw2XrrmcPfl8g35od3Cid4rOm7UyWiIYHNMZGCxYHlFxdw9Z+o/I85Mu6mbZOlP8AGeoq4QmjvGFeOv/WM95nDymXebB3OT9XPgKV/8HFRaMXlh+9aBBsKctxYixswzjOuuMpZohMqwbp1yaFHScWYoxsa3rQ== root@t2
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAww2LlIJZxfAgLiIm8dPq+glByIziJC8L3294c3lTgvPDswNPlzf4PBB8+cz/hGoehuQMBP4l8tYONFABOxsMLFkYpxjv9EKL9SQ4PTiqPV+FJwaaWEK9fg/FD+JXwL1KHetyaYHAmgFzJFrAF7XIO+1303sRkOSOzYUSWMgPG5X8cH22sSchUgwed6xsxBkcx3oknirJp24mvfRmG+WFQB84FN04e0dSdcrsU3BMOYq0QZCqGQsHdGOak70legxHI4njq7DPJFM9vTiYVRsl2ylPzi65a3bWwT3XjwyHA2s+QNVYBftVfCe5wfPHmsu/arS3zyimcM+nCYxpkUs69Q== root@t1
t2:/home#>
Finally, the id_rsa.pub files from every node are collected and appended to authorized_keys, so that file contains the RSA key of every node, and the same authorized_keys is copied to all nodes. The Windows GPFS client side will need the same operation.
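The distribution pattern is simply "concatenate every node's public key into one authorized_keys and push that same file everywhere". A self-contained sketch with placeholder key material (the real per-node id_rsa.pub files come from ssh-keygen, as shown above):

```shell
# Simulate the key mesh: two nodes' public keys merged into one authorized_keys.
tmp=$(mktemp -d)
printf 'ssh-rsa AAAA...placeholder1 root@t1\n' > "$tmp/t1_id_rsa.pub"   # fake key
printf 'ssh-rsa AAAA...placeholder2 root@t2\n' > "$tmp/t2_id_rsa.pub"   # fake key
cat "$tmp"/t*_id_rsa.pub >> "$tmp/authorized_keys"   # merged file goes to all nodes
grep -c '^ssh-rsa' "$tmp/authorized_keys"            # one line per node's key
```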
12. AIX and Linux GPFS Server Installation
Update the host information on the AIX server. Check the package list, run smitty, and choose Install and Update from ALL Available Software.
Define the location of the installation files.
Press F4, choose the packages to install, and press F7.
Change ACCEPT new license agreements to yes and press Enter.
You must update GPFS to the latest level; if the update is incomplete, the GPFS daemon will not start. The update follows the same procedure. Then update the user profile.
Check the installed status.
Update the host information on the Linux server. Install the base package, install the update package, and check the installation status.
Update the user profile.
Build and install the portability layer on the Linux systems. This step applies only to Linux: it builds the GPFS module layer for the Linux kernel.
GPFS: 6027-531 The following disks of gpfs02 will be formatted on node team4_2:
nsd_03: size 52428800 KB
nsd_04: size 52428800 KB
GPFS: 6027-540 Formatting file system ...
GPFS: 6027-535 Disks up to size 710 GB can be added to storage pool 'system'.
Creating Inode File
Creating Allocation Maps
Clearing Inode Allocation Map
Clearing Block Allocation Map
Formatting Allocation Map for storage pool 'system'
GPFS: 6027-572 Completed creation of file system /dev/gpfs02.
mmcrfs: 6027-1371 Propagating the cluster configuration data to all
affected nodes. This is an asynchronous process.
Mount the file systems.
team4_1:/tmp/gpfs#>mmmount /gpfs01
Wed Oct 28 21:33:01 KORST 2009: 6027-1623 mmmount: Mounting file systems ...
team4_1:/tmp/gpfs#>mmmount /gpfs02
Wed Oct 28 21:33:06 KORST 2009: 6027-1623 mmmount: Mounting file systems ...
team4_1:/gpfs02#>df -gt
Filesystem GB blocks Used Free %Used Mounted on
...
/dev/gpfs01 100.00 0.06 99.94 1% /gpfs01
/dev/gpfs02 100.00 0.07 99.93 1% /gpfs02
team4_2:/#>df -gt
Filesystem GB blocks Used Free %Used Mounted on
...
/dev/gpfs01 100.00 0.75 99.25 1% /gpfs01
/dev/gpfs02 100.00 0.19 99.81 1% /gpfs02
team4_3:/#>df -gt
Filesystem GB blocks Used Free %Used Mounted on
...
/dev/gpfs01 100.00 0.06 99.94 1% /gpfs01
/dev/gpfs02 100.00 0.07 99.93 1% /gpfs02
14. pLinux GPFS Client Installation
First, run ssh-keygen and sync the key file across all of the GPFS nodes. Then disable SELinux and the iptables service; one system reboot is needed afterwards. Check the network configuration: [root@plinux ~]# ifconfig
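The prep steps can be sketched as one function (chkconfig/service names as on RHEL 5; defined but not run here, since it needs root on the actual node):

```shell
# pLinux/Linux GPFS client prep: stop the firewall, disable SELinux, reboot once.
prep_gpfs_client() {
    chkconfig iptables off && service iptables stop   # or open TCP 22 and 1191
    sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
    echo "reboot once so the SELinux change takes effect"
}
type prep_gpfs_client >/dev/null && echo defined
```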
15. Windows 2008 SP2 GPFS Client Installation
Check the OS version of the Windows server, and change the locale and keyboard to English (US).
The change to the English (US) locale is complete; next is the user configuration.
Choose the root account and change it to an administrator account type.
Configure the firewall: set it to disabled.
Disable UAC.
Reboot the system, then install the Subsystem for UNIX-based Applications (SUA) utility.
Choose the SUA package.
SUA Installation
Download the SUA package from: http://www.microsoft.com/downloads/details.aspx?familyid=93ff2201-325e-487f-a398-efde5758c47f&displaylang=en&Hash=IKXVxKqCKZcIPQFORRixLddbWfc2mSSt9JKcfApD6FwVpzi2%2f5oT4sIDTlhxY30lEcYD3MS9v1GgYwfy%2fUazew%3d%3d
Setup
The SUA package installation is complete.
Open a Korn shell in the Subsystem for UNIX-based Applications and log in as root using the Windows administrator password (su -). Then install the additional packages for SUA.
Update Hosts File
Add the c5 node, which is the Windows GPFS cluster client node. Install the GPFS v3.3 base package.
This installs the initial package. After the base package installation completes, reboot the system, then uninstall the base package and install the latest version.
Control Panel
Generate the SSH key and share it from the Windows Korn shell. After creating the id_rsa.pub file, you must update authorized_keys on the AIX server; this file must be synced to all of the GPFS cluster nodes.
Generate the SSH key and copy the key file to the other node. Open the key file and copy the key information.
Paste the key information and test the remote shell. Accept the GPFS license:
- mmchlicense server --accept -N [windows server hostname] on Other GPFS Server
Mapping Volume Name
- mmchfs device -t [windows Drive Letter]
Check the drive letter of the mounted volume.
Mount all of the GPFS volumes.
Limitations of GPFS v3.3 support for Windows Server 2008:
GPFS v3.3 multicluster configurations that include Windows clients should not upgrade Windows machines to 3.3.0-3 or -4. You must install 3.3.0-5 when upgrading beyond 3.3.0-2, due to an issue with OpenSSL introduced in 3.3.0-3. Download the update package from http://www14.software.ibm.com/webapp/set2/sas/f/gpfs/home.html
Windows nodes do not support directly accessing disks or operating as an NSD server. This function is covered in the GPFS documentation for planning purposes only. The FAQ will be updated with the tested disk device support information when it is generally available.
Support for Windows Server 2008 R2 is not yet available; this FAQ will be updated when that support is available (planned for GPFS v3.4).
There is no migration path from Windows Server 2003 R2 (GPFS V3.2.1-5 or later) to Windows Server 2008 SP2 (GPFS V3.3).
To move Windows nodes from GPFS V3.2.1-5 or later to GPFS V3.3:
1. Remove all the Windows nodes from your cluster.
2. Uninstall GPFS 3.2.1-5 from your Windows nodes. This step is not necessary if you are reinstalling Windows Server 2008 from scratch (next step below) and not upgrading from Server 2003 R2.
3. Install Windows Server 2008 and the required prerequisites on the nodes.
4. Install GPFS 3.3 on the Windows Server 2008 nodes.
5. Migrate your AIX and Linux nodes from GPFS 3.2.1-5 or later to GPFS V3.3.
6. Add the Windows nodes back to your cluster.
User exits defined by the mmaddcallback command and the three specialized user exits provided by GPFS are not currently supported on Windows nodes.
The following GPFS commands are not supported on Windows:
The Tivoli® Storage Manager (TSM) Backup Archive client for Windows does not support unique features of GPFS file systems. TSM backup and archiving operations are supported on AIX and Linux nodes in a cluster that contains Windows. For information on TSM backup archive client support for GPFS, see:
The GPFS Application Programming Interfaces (APIs) are not supported on Windows.
The native Windows backup utility is not supported.
Symbolic links that are created on UNIX-based nodes are specially handled by GPFS Windows nodes; they appear as regular files with a size of 0 and their contents cannot be accessed or modified.
GPFS on Windows nodes attempts to preserve data integrity between memory-mapped I/O and other forms of I/O on the same computation node. However, if the same file is memory mapped on more than one Windows node, data coherency is not guaranteed between the memory-mapped sections on these multiple nodes. In other words, GPFS on Windows does not provide distributed shared memory semantics. Therefore, applications that require data coherency between memory-mapped files on more than one node might not function as expected.
16. Rolling Upgrade from v3.2 to v3.3
Check the GPFS cluster configuration status.
Unmount the file systems.
Shut down the GPFS daemon.
Upgrade the base package.
Check the installation status.
Install the update package.
Compile the portability layer.
Install the GPFS module.
These steps are repeated identically on each node, and they do not require shutting down the whole GPFS cluster file service while the daemon is upgraded: rolling upgrade supports operating with mixed GPFS versions from v3.x onward, which is very useful for keeping a customer's service live. Each node upgrades its GPFS daemon separately; afterwards the file system version must be raised to the latest level.
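A hedged per-node sketch of that rolling sequence (node names and package steps are placeholders; mmumount, mmshutdown, mmstartup, and mmmount all accept -N to act on a single node, and the format commit at the end is a separate cluster-wide step):

```shell
# One node at a time: unmount, stop, upgrade, restart -- the rest keep serving.
upgrade_one_node() {
    node=$1
    mmumount all -N "$node"    # unmount GPFS file systems on this node only
    mmshutdown -N "$node"      # stop the GPFS daemon on this node only
    # ...install the v3.3 base and update packages, rebuild the portability layer...
    mmstartup -N "$node"       # rejoin the cluster at the new code level
    mmmount all -N "$node"
}
# After every node runs v3.3, commit the new format: mmchfs <device> -V full
type upgrade_one_node >/dev/null && echo defined
```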
Check GPFS Configuration
This warning means the license acceptance information needs to be updated; follow the commands below.
# mmchlicense client --accept -N w1
# mmchlicense server --accept -N l1,l2
Check File System Version
Update the file system format, then check the updated file system version.
p2:/#>mmlsdisk TEAM02_AIX -m
Disk name     IO performed on node     Device        Availability
------------  -----------------------  ------------  ------------
gpfs1nsd      localhost                /dev/hdisk1   up
gpfs2nsd      localhost                /dev/hdisk2   up
gpfs3nsd      localhost                /dev/hdisk3   up
gpfs4nsd      localhost                /dev/hdisk4   up
gpfs5nsd      localhost                /dev/hdisk5   up
Check NSD for TEAM02_AIX Volume

p1:/TEAM02_AIX#>mmdf TEAM02_AIX
disk             disk size  failure  holds     holds  free KB              free KB
name             in KB      group    metadata  data   in full blocks       in fragments
---------------  ---------  -------  --------  -----  -------------------  ------------
Disks in storage pool: system (Maximum disk size allowed is 281 GB)
gpfs1nsd          52428800        1  yes       yes     48869632 ( 93%)      1376 ( 0%)
gpfs2nsd          52428800        1  yes       yes     48869888 ( 93%)      1072 ( 0%)
gpfs3nsd          52428800        1  yes       yes     48869376 ( 93%)      4256 ( 0%)
gpfs4nsd          52428800        1  yes       yes     48869376 ( 93%)      4952 ( 0%)
gpfs5nsd          52428800        1  yes       yes     48869632 ( 93%)      4840 ( 0%)
                 ---------                            -------------------  ------------
(pool total)     262144000                            244347904 ( 93%)     16496 ( 0%)
                 =========                            ===================  ============
(total)          262144000                            244347904 ( 93%)     16496 ( 0%)
Inode Information
-----------------
Number of used inodes:        4069
Number of free inodes:      254491
Number of allocated inodes: 258560
Maximum number of inodes:   258560
p1:/TEAM02_AIX#>
Check File System Usage
p1:/tmp/gpfs#>cat disk.desc
hdisk6:p1,p2::dataAndMetadata:1:
p1:/tmp/gpfs#>mmcrnsd -F /tmp/gpfs/disk.desc
mmcrnsd: Processing disk hdisk6
mmcrnsd: 6027-1371 Propagating the cluster configuration data to all
  affected nodes. This is an asynchronous process.
p1:/tmp/gpfs#>mmadddisk TEAM02_AIX -F /tmp/gpfs/disk.desc
GPFS: 6027-531 The following disks of TEAM02_AIX will be formatted on node p2:
    gpfs9nsd: size 52428800 KB
Extending Allocation Map
Checking Allocation Map for storage pool 'system'
  20 % complete on Wed Oct 28 10:51:42 2009
  39 % complete on Wed Oct 28 10:51:47 2009
  59 % complete on Wed Oct 28 10:51:52 2009
  78 % complete on Wed Oct 28 10:51:58 2009
  98 % complete on Wed Oct 28 10:52:03 2009
 100 % complete on Wed Oct 28 10:52:03 2009
GPFS: 6027-1503 Completed adding disks to file system TEAM02_AIX.
mmadddisk: 6027-1371 Propagating the cluster configuration data to all
  affected nodes. This is an asynchronous process.
p1:/tmp/gpfs#>
Add New NSD to TEAM02_AIX Volume

p1:/tmp/gpfs#>mmlsdisk TEAM02_AIX -M
Disk name     IO performed on node     Device        Availability
------------  -----------------------  ------------  ------------
gpfs1nsd      localhost                /dev/hdisk1   up
gpfs2nsd      localhost                /dev/hdisk2   up
gpfs3nsd      localhost                /dev/hdisk3   up
gpfs4nsd      localhost                /dev/hdisk4   up
gpfs5nsd      localhost                /dev/hdisk5   up
gpfs9nsd      localhost                /dev/hdisk6   up
p1:/tmp/gpfs#>

Check Added NSD on Configured Volume
p1:/tmp/gpfs#>mmdf TEAM02_AIX
disk             disk size  failure  holds     holds  free KB              free KB
name             in KB      group    metadata  data   in full blocks       in fragments
---------------  ---------  -------  --------  -----  -------------------  ------------
Disks in storage pool: system (Maximum disk size allowed is 281 GB)
gpfs1nsd          52428800        1  yes       yes     44486912 ( 85%)      6680 ( 0%)
gpfs2nsd          52428800        1  yes       yes     44487936 ( 85%)      6368 ( 0%)
gpfs3nsd          52428800        1  yes       yes     44487680 ( 85%)     10216 ( 0%)
gpfs4nsd          52428800        1  yes       yes     44487936 ( 85%)     11304 ( 0%)
gpfs5nsd          52428800        1  yes       yes     44488960 ( 85%)      7688 ( 0%)
gpfs9nsd          52428800        1  yes       yes     51213056 ( 98%)       376 ( 0%)
                 ---------                            -------------------  ------------
(pool total)     314572800                            273652480 ( 87%)     42632 ( 0%)
                 =========                            ===================  ============
(total)          314572800                            273652480 ( 87%)     42632 ( 0%)

Inode Information
-----------------
Number of used inodes:        4082
Number of free inodes:      254478
Number of allocated inodes: 258560
Maximum number of inodes:   258560
p1:/tmp/gpfs#>

Check NSD Status

p1:/tmp/gpfs#>mmrestripefs TEAM02_AIX -b
GPFS: 6027-589 Scanning file system metadata, phase 1 ...
   3 % complete on Wed Oct 28 10:54:55 2009
   7 % complete on Wed Oct 28 10:54:58 2009
   9 % complete on Wed Oct 28 10:55:01 2009
  13 % complete on Wed Oct 28 10:55:05 2009
  16 % complete on Wed Oct 28 10:55:09 2009
  20 % complete on Wed Oct 28 10:55:13 2009
  78 % complete on Wed Oct 28 10:56:04 2009
  82 % complete on Wed Oct 28 10:56:08 2009
  86 % complete on Wed Oct 28 10:56:11 2009
  90 % complete on Wed Oct 28 10:56:14 2009
  93 % complete on Wed Oct 28 10:56:18 2009
  97 % complete on Wed Oct 28 10:56:21 2009
 100 % complete on Wed Oct 28 10:56:23 2009
GPFS: 6027-552 Scan completed successfully.
GPFS: 6027-589 Scanning file system metadata, phase 2 ...
   1 % complete on Wed Oct 28 10:56:31 2009
  34 % complete on Wed Oct 28 10:56:34 2009
  59 % complete on Wed Oct 28 10:56:37 2009
  95 % complete on Wed Oct 28 10:56:41 2009
 100 % complete on Wed Oct 28 10:56:41 2009
GPFS: 6027-552 Scan completed successfully.
GPFS: 6027-589 Scanning file system metadata, phase 3 ...
  31 % complete on Wed Oct 28 10:56:46 2009
  59 % complete on Wed Oct 28 10:56:50 2009
  87 % complete on Wed Oct 28 10:56:54 2009
 100 % complete on Wed Oct 28 10:56:55 2009
GPFS: 6027-552 Scan completed successfully.
GPFS: 6027-589 Scanning file system metadata, phase 4 ...
GPFS: 6027-552 Scan completed successfully.
GPFS: 6027-565 Scanning user file metadata ...
  99 % complete on Tue Oct 27 21:25:37 2009
 100 % complete on Tue Oct 27 21:42:01 2009
GPFS: 6027-552 Scan completed successfully.
The mmrestripefs -b command rebalances (restripes) the file system's existing data across all disks, including the newly added NSD.
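The add-and-rebalance workflow shown above can be summarized as the following command sequence. This is a sketch that follows the file system and disk names used in this document; the descriptor file path and its contents are assumptions based on the earlier examples, and the commands require a running GPFS cluster.

```shell
# Describe the new disk in a descriptor file (assumed path /tmp/gpfs/disk.desc).
# GPFS 3.x descriptor format: DiskName:PrimaryServer:BackupServer:DiskUsage:FailureGroup
cat > /tmp/gpfs/disk.desc <<'EOF'
hdisk6:::dataAndMetadata:1
EOF

# Add the new disk (NSD) to the existing file system
mmadddisk TEAM02_AIX -F /tmp/gpfs/disk.desc

# Verify the disk was added, and check the free-space distribution
mmlsdisk TEAM02_AIX -M
mmdf TEAM02_AIX

# Rebalance existing data across all disks, including the new one
mmrestripefs TEAM02_AIX -b
```

Note that mmrestripefs -b reads and rewrites a large part of the file system, so it is best run during a low-I/O window.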
p1:/tmp/gpfs#>mmdf TEAM02_AIX
disk                disk size  failure holds    holds           free KB             free KB
name                    in KB    group metadata data     in full blocks        in fragments
--------------- ------------- -------- -------- ----- -------------------- -------------------
Disks in storage pool: system (Maximum disk size allowed is 281 GB)
gpfs1nsd             52428800        1 yes      yes      29172480 ( 56%)       10984 ( 0%)
gpfs2nsd             52428800        1 yes      yes      29171200 ( 56%)       12704 ( 0%)
gpfs3nsd             52428800        1 yes      yes      29164544 ( 56%)       15296 ( 0%)
gpfs4nsd             52428800        1 yes      yes      29163008 ( 56%)       15352 ( 0%)
gpfs5nsd             52428800        1 yes      yes      29160960 ( 56%)       10720 ( 0%)
gpfs9nsd             52428800        1 yes      yes      29792256 ( 57%)        6592 ( 0%)
                -------------                        -------------------- -------------------
(pool total)        314572800                           175624448 ( 56%)       71648 ( 0%)
                =============                        ==================== ===================
(total)             314572800                           175624448 ( 56%)       71648 ( 0%)

Inode Information
-----------------
Number of used inodes:            4096
Number of free inodes:          254464
Number of allocated inodes:     258560
Maximum number of inodes:       258560
p1:/tmp/gpfs#>
Remove an NSD

To remove gpfs1nsd, you must first suspend it to block new disk I/O (block allocations) to it. The command for this is mmchdisk.
Ex) command: mmchdisk TEAM02_AIX suspend -d gpfs1nsd

The suspend option makes the assigned NSD read-only for new allocations. After confirming that the NSD is in suspended mode, run the restripe command with the -r option to migrate its data to the remaining disks.
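Putting the removal steps together, the whole sequence looks like the following sketch. It uses the file system and disk names from this document's examples and requires a running GPFS cluster; it is an outline of the procedure above, not a copy of the exact session.

```shell
# 1. Suspend the disk: blocks new block allocations to gpfs1nsd
mmchdisk TEAM02_AIX suspend -d gpfs1nsd

# 2. Confirm the disk now shows status "suspended"
mmlsdisk TEAM02_AIX

# 3. Migrate data off suspended disks onto the remaining ready disks
mmrestripefs TEAM02_AIX -r

# 4. Delete the (now empty) disk from the file system
mmdeldisk TEAM02_AIX gpfs1nsd

# 5. Verify that gpfs1nsd no longer appears in the disk list
mmlsdisk TEAM02_AIX
```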
p1:/tmp/gpfs#>mmlsdisk TEAM02_AIX
disk         driver   sector failure holds    holds                                storage
name         type       size   group metadata data  status        availability    pool
------------ -------- ------ ------- -------- ----- ------------- ------------ ------------
gpfs1nsd     nsd         512       1 yes      yes   suspended     up           system
gpfs2nsd     nsd         512       1 yes      yes   ready         up           system
gpfs3nsd     nsd         512       1 yes      yes   ready         up           system
gpfs4nsd     nsd         512       1 yes      yes   ready         up           system
gpfs5nsd     nsd         512       1 yes      yes   ready         up           system
gpfs9nsd     nsd         512       1 yes      yes   ready         up           system
GPFS: 6027-741 Attention: Due to an earlier configuration change the file system
may contain data that is at risk of being lost.
p1:/tmp/gpfs#>
p1:/tmp/gpfs#>mmlsdisk TEAM02_AIX
disk         driver   sector failure holds    holds                                storage
name         type       size   group metadata data  status        availability    pool
------------ -------- ------ ------- -------- ----- ------------- ------------ ------------
gpfs1nsd     nsd         512       1 yes      yes   suspended     down         system
gpfs2nsd     nsd         512       1 yes      yes   ready         up           system
gpfs3nsd     nsd         512       1 yes      yes   ready         up           system
gpfs4nsd     nsd         512       1 yes      yes   ready         up           system
gpfs5nsd     nsd         512       1 yes      yes   ready         up           system
gpfs9nsd     nsd         512       1 yes      yes   ready         up           system
GPFS: 6027-739 Attention: Due to an earlier configuration change the file system
is no longer properly balanced.
Check that gpfs1nsd is down

p1:/tmp/gpfs#>mmdf TEAM02_AIX
disk                disk size  failure holds    holds           free KB             free KB
name                    in KB    group metadata data     in full blocks        in fragments
--------------- ------------- -------- -------- ----- -------------------- -------------------
Disks in storage pool: system (Maximum disk size allowed is 281 GB)
gpfs1nsd             52428800        1 yes      yes      52359936 (100%)         248 ( 0%)
gpfs2nsd             52428800        1 yes      yes      24581376 ( 47%)       15920 ( 0%)
gpfs3nsd             52428800        1 yes      yes      24487168 ( 47%)       16696 ( 0%)
gpfs4nsd             52428800        1 yes      yes      24495104 ( 47%)       17984 ( 0%)
gpfs5nsd             52428800        1 yes      yes      24557824 ( 47%)       14816 ( 0%)
gpfs9nsd             52428800        1 yes      yes      25102592 ( 48%)       11360 ( 0%)
                -------------                        -------------------- -------------------
(pool total)        262144000                           123224064 ( 47%)       76776 ( 0%)
                =============                        ==================== ===================
(total)             262144000                           123224064 ( 47%)       76776 ( 0%)

Inode Information
-----------------
Number of used inodes:            4096
Number of free inodes:          254464
Number of allocated inodes:     258560
Maximum number of inodes:       258560
GPFS: 6027-565 Scanning user file metadata ...
 100 % complete on Tue Oct 27 22:08:19 2009
GPFS: 6027-552 Scan completed successfully.
GPFS: 6027-379 Could not invalidate disk(s).
Checking Allocation Map for storage pool 'system'
GPFS: 6027-370 tsdeldisk64 completed.
mmdeldisk: 6027-1371 Propagating the cluster configuration data to all
  affected nodes.  This is an asynchronous process.
Remove the NSD disk
p1:/tmp/gpfs#>mmlsdisk TEAM02_AIX
disk         driver   sector failure holds    holds                                storage
name         type       size   group metadata data  status        availability    pool
------------ -------- ------ ------- -------- ----- ------------- ------------ ------------
gpfs2nsd     nsd         512       1 yes      yes   ready         up           system
gpfs3nsd     nsd         512       1 yes      yes   ready         up           system
gpfs4nsd     nsd         512       1 yes      yes   ready         up           system
gpfs5nsd     nsd         512       1 yes      yes   ready         up           system
gpfs9nsd     nsd         512       1 yes      yes   ready         up           system
GPFS: 6027-739 Attention: Due to an earlier configuration change the file system
is no longer properly balanced.
19. Failure group and GPFS Replication
Do you want to use replication on GPFS? Then, before creating the file system, prepare the NSD configuration for replication. GPFS writes each replicated block to NSDs in different failure groups.

# hdisk1:c1:c2:dataAndMetadata:1:team3_aix_nsd1
team3_aix_nsd1:::dataAndMetadata:1::
This is the NSD configuration for replication. Create the file system, then copy a single 7 GB file into it. The file system consumes about 14 GB (450 - 14 = 436 GB free) because every block is written twice.
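A replicated file system of this kind could be created with a sequence like the following sketch. The descriptor file path, the second disk, the device name team3_fs, and the mount point are assumptions for illustration; only the first descriptor line comes from this document, and the commands require a running GPFS cluster.

```shell
# NSD descriptor file: data/metadata disks split across failure groups 1 and 2
# (hdisk2 / failure group 2 is a hypothetical second disk added for the example)
cat > /tmp/gpfs/repl.desc <<'EOF'
hdisk1:c1:c2:dataAndMetadata:1:team3_aix_nsd1
hdisk2:c1:c2:dataAndMetadata:2:team3_aix_nsd2
EOF

# Create the NSDs; mmcrnsd rewrites the file for use by mmcrfs
mmcrnsd -F /tmp/gpfs/repl.desc

# Create the file system with default and maximum replication of 2:
# -m/-r = default metadata/data replicas, -M/-R = maximum replicas
mmcrfs /gpfs_repl team3_fs -F /tmp/gpfs/repl.desc -m 2 -M 2 -r 2 -R 2 -A yes
```

With two replicas, every data and metadata block is written once to each failure group, which is why the 7 GB file in the example consumes roughly 14 GB of space.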
Check the current NSD configuration and disk usage
Remove the second failure group
Check the removal status
Do not use the rebalance command in this case; simply adding the new NSD updates the failure group information of the file system. To configure replication, the minimum configuration is three sets of storage boxes, or two sets of storage boxes plus a disk-descriptor-only disk, which works similarly to a tiebreaker option.
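A two-storage-box replicated layout with a third, descriptor-only disk could be described with NSD stanzas like the following. This is a sketch: the disk names, NSD names, and failure group numbers are hypothetical, and descOnly marks a disk that holds only a file system descriptor copy, not data or metadata.

```
# failure group 1: data and metadata on storage box 1
hdisk10:::dataAndMetadata:1:box1_nsd1
# failure group 2: data and metadata on storage box 2
hdisk20:::dataAndMetadata:2:box2_nsd1
# failure group 3: descriptor-only disk, acting like a tiebreaker
hdisk30:::descOnly:3:desc_nsd1
```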
Considerations for failover time on a GPFS cluster during I/O service:
- HBA driver options, such as timeout values and failure-recognition time
- Multipath driver options and configuration, such as RDAC or MPIO
- The storage cache policy must be disabled for volume integrity
- Cabling design with the SAN switch, and the AVT function of the storage box
- All component driver modules should be at the latest level
20. End of This BP Residency
Education System
The topic of the next GPFS residency program may be a GPFS/ILM solution with IBM Tivoli products. This diagram shows the integrated GPFS/TSM architecture.
Trademarks IBM, the IBM Logo, BladeCenter, DS4000, eServer, and System x are trademarks of International Business Machines Corporation in the United States, other countries, or both. For a complete list of IBM Trademarks, see http://www.ibm.com/legal/copytrade.shtml. Microsoft and Windows are trademarks of Microsoft Corporation in the United States, other countries, or both. Other company, product, or service names may be trademarks or service marks of others.