pureScale 014 revision 2 – DUGI
dugi.molaro.be/wp-content/uploads/2009/10/pureScale_DB2_RUG_site…
THE INFORMATION CONTAINED IN THIS PRESENTATION IS PROVIDED FOR INFORMATIONAL PURPOSES ONLY. WHILE EFFORTS WERE MADE TO VERIFY THE COMPLETENESS AND ACCURACY OF THE INFORMATION CONTAINED IN THIS PRESENTATION, IT IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED. IN ADDITION, THIS INFORMATION IS BASED ON IBM’S CURRENT PRODUCT PLANS AND STRATEGY, WHICH ARE SUBJECT TO CHANGE BY IBM WITHOUT NOTICE. THE AUTHOR SHALL NOT BE RESPONSIBLE FOR ANY DAMAGES ARISING OUT OF THE USE OF, OR OTHERWISE RELATED TO, THIS PRESENTATION OR ANY OTHER DOCUMENTATION
The content of this presentation is based on information provided by IBM to the general public or through the IBM Information Champion program → there is no guarantee of the correctness of the contents or of the comments expressed during this presentation. My opinions are my own.
At the date of this presentation, pureScale is not GA. Initial release: December 2009
– Limited to AIX and IBM Power hardware
– Some features not supported (XML, MDC…)
– Focus on OLTP and ERP
– Tools integration, PE: 1st or 2nd quarter 2010
– Linux may be supported in the future
Much effort went into isolating the DBA from the technical implementation details
– PowerHA pureScale policies are predefined, no DBA intervention required
– Single-command addition or removal of cluster members
Easy migration: does not require data movement
– 2 paths: GPFS and non-GPFS
Not exactly the same as z/OS data sharing
– Sysplex timer functionality is implemented in software
– Members cannot run different DB2 versions
– pureScale is NOT a replacement for z/OS data sharing!
Architected for no single point of failure
– Automatic workload balancing
– Duplexed global lock and memory manager
– Tivoli System Automation automatically handles all component failures → DB2 pureScale stays up even with multiple node failures
– Shared disk failure handled using disk replication technology
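The duplexing mentioned above can be pictured as two copies of the same state that every update reaches before it counts, so losing one copy loses no state. A minimal sketch of that idea only; the class and method names below are invented for illustration, and the real CF duplexing involves far more machinery (RDMA interconnects, castout, etc.):

```python
# Toy model of "duplexed" state: every write lands on both a primary and a
# secondary copy, so a primary failure loses nothing. Illustration only;
# all names here are invented, not DB2 APIs.

class DuplexedStore:
    def __init__(self):
        self.primary = {}     # primary CF copy of the state
        self.secondary = {}   # secondary CF copy of the state
        self.primary_up = True

    def write(self, key, value):
        # An update is applied to both copies before it is considered done.
        if self.primary_up:
            self.primary[key] = value
        self.secondary[key] = value

    def read(self, key):
        # After a primary failure the secondary already holds the full state.
        source = self.primary if self.primary_up else self.secondary
        return source[key]

    def fail_primary(self):
        self.primary_up = False


store = DuplexedStore()
store.write("lock:row42", "member0")   # e.g. a global lock entry
store.fail_primary()
print(store.read("lock:row42"))        # prints "member0": state survived
```

The point of the sketch is only the invariant: because no write is acknowledged with a single copy, a single failure never forces recovery from scratch.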
Capacity
– DB2 pureScale has been designed to grow with business requirements
– Flexible licensing designed to minimize the cost of peak times
– Only pay for additional capacity when you use it
GPFS: General Parallel File System
– Provides file system services to parallel and serial applications running on multiple nodes
– Allows parallel applications simultaneous access to the same files, or different files, from any node that has the GPFS file system
– Each node that has a GPFS file system mounted must be able to communicate with all storage devices that are part of this file system
– Prerequisite for DB2 pureScale
– The DB2 installation configures GPFS but does not install it
– 2 migration paths, depending on whether GPFS is already in use
More information:
CS: DB2 Cluster Services
– Integrated DB2 component
– Single install as part of the DB2 installation
– Upgrades and maintenance through DB2 fixpacks
DB2 Cluster Services:
– Reliable Scalable Cluster Technology (RSCT)
– Tivoli Systems Automation for Multi-Platforms
– IBM General Parallel File System
– DB2 CS tightly integrates these IBM products into DB2 pureScale
– DB2 instance creation creates RSCT and GPFS domains across hosts
– Single command used to add hosts to the instance: db2iupdt -add -m newhost.toto.be db2inst1
– Install includes DB2, PowerHA pureScale and DB2 Cluster Services
– Cluster Manager: RSCT
– Cluster Automation: Tivoli SA MP
Runtime load information used to balance load across members
– Load information of all members kept on each member
– Information sent to the client regularly
– Support for transaction-level routing for selected SQL
Failover: the load of a failed member is evenly distributed to the other members
Fallback: once the failed member is back → inverse process
Optional affinity to a host: set via client configuration
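The balancing, failover and fallback behaviour above can be sketched as a toy client-side router: members report a load figure, the client routes each new transaction to the least-loaded live member, and a failed member simply drops out of the candidate set until it returns. All names below are invented for this sketch; the real DB2 client uses server-supplied weight lists, not this logic:

```python
# Toy client-side workload balancer. Illustration only; names are invented.

class Router:
    def __init__(self, loads):
        self.loads = dict(loads)   # member -> last reported load figure
        self.failed = set()        # members currently marked down

    def route(self):
        # Route the next transaction to the least-loaded live member.
        alive = {m: l for m, l in self.loads.items() if m not in self.failed}
        target = min(alive, key=alive.get)
        self.loads[target] += 1    # account for the transaction just routed
        return target

    def fail(self, member):
        # Failover: the member leaves the candidate set; its share of new
        # work naturally spreads over the survivors.
        self.failed.add(member)

    def recover(self, member):
        # Fallback: the inverse process, the member rejoins the candidates.
        self.failed.discard(member)


r = Router({"member0": 5, "member1": 2, "member2": 2})
print(r.route())       # prints "member1" (a least-loaded member)
r.fail("member1")
print(r.route())       # prints "member2": new work avoids the failed member
```

The sketch also shows why fallback needs no special redistribution step: once the member reappears with a low reported load, the least-loaded rule routes work back to it by itself.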
– DBA verifies prerequisites: AIX, hosts on the network, access to shared disks enabled, etc.
– DBA copies the pureScale install image to the Install Initiating Host
– DB2 installs the code on the specified hosts using a response file
– DB2 creates the instance, members and CFs as directed via the GUI
– DB2 adds all members, CFs, hosts, HCA cards, etc. to the domain resources
– DB2 creates the cluster file system and sets up each member's access to it
Add a member
– DBA verifies prerequisites for the new host
– DBA adds the member:
db2iupdt -add -m <MemHostName> InstName
– DB2 does all tasks to add the member to the cluster:
• Copies the image and response file to the new member
• Runs the install
• Adds the new member to the resources for the instance
• Sets up access to the cluster file system for the new member
Support of PE expected for 1st/2nd quarter 2010
Commands:
> db2start
12/13/2009 09:52:59     0   0   SQL1063N  DB2START processing was successful.
12/13/2009 09:53:00     1   0   SQL1063N  DB2START processing was successful.
12/13/2009 09:53:01     2   0   SQL1063N  DB2START processing was successful.
12/13/2009 09:53:01     3   0   SQL1063N  DB2START processing was successful.
SQL1063N  DB2START processing was successful.
> db2instance -list
ID  TYPE    STATE    HOME_HOST  CURRENT_HOST  ALERT
0   MEMBER  STARTED  host0      host0         NO
1   MEMBER  STARTED  host1      host1         NO
2   MEMBER  STARTED  host2      host2         NO
3   MEMBER  STARTED  host3      host3         NO
4   CF      PRIMARY  host4      host4         NO
5   CF      PEER     host5      host5         NO
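The listing above is plain whitespace-separated columns, so it is straightforward to post-process, for example to spot alerts or locate the primary CF across many instances. A sketch against the sample output shown; real output may of course differ between releases and fixpacks:

```python
# Parse the sample `db2instance -list` output shown above into records.
# The column layout is taken from the sample; treat it as an assumption.

SAMPLE = """\
ID TYPE STATE HOME_HOST CURRENT_HOST ALERT
0 MEMBER STARTED host0 host0 NO
1 MEMBER STARTED host1 host1 NO
2 MEMBER STARTED host2 host2 NO
3 MEMBER STARTED host3 host3 NO
4 CF PRIMARY host4 host4 NO
5 CF PEER host5 host5 NO
"""

def parse(listing):
    # First line is the header; each following line is one member or CF.
    lines = listing.strip().splitlines()
    header = lines[0].split()
    return [dict(zip(header, row.split())) for row in lines[1:]]

rows = parse(SAMPLE)
primary_cf = next(r for r in rows if r["TYPE"] == "CF" and r["STATE"] == "PRIMARY")
alerts = [r["ID"] for r in rows if r["ALERT"] != "NO"]
print(primary_cf["CURRENT_HOST"], alerts)   # prints "host4 []"
```

Splitting on whitespace is enough here because none of the sample's column values contain spaces; a stricter parser would use the header's column offsets instead.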