Grid Control Playbook GDMS
Version: 1.00
Date: 9/11/07

Adding a Node to a 10gR2 RAC Cluster

    Rikesh Dewangan

    Database Administrator

Summary
=============================================================

This document provides detailed steps for an Oracle DBA and a Linux engineer to add a new node to an existing 10gR2 (10.2.0.3) RAC database.

    The most critical steps that need to be followed are:

    Pre-install checking

Adding an Oracle Clusterware home to the new node using OUI in interactive mode

Adding an Oracle home to the new node using OUI in interactive mode

Reconfiguring the listener on the new node using NETCA

Adding an ASM instance to the new node manually


Part I: Pre-Install Checking
=============================================================

1. Verify cluster health on the existing nodes (ocrcheck, cluvfy)
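A minimal sketch of these checks, run as the oracle user from $ORA_CRS_HOME/bin on an existing node:

$ ./ocrcheck
$ ./cluvfy stage -post crsinst -n all -verbose
$ ./crs_stat -t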

2. Check the OS version, kernel parameters, and the /etc/hosts file, and ensure they are identical on all nodes
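For example (a sketch; run on each node and compare the output, the exact kernel-parameter list depends on your build standard):

$ uname -r
$ /sbin/sysctl -a | egrep 'kernel.sem|kernel.shm|fs.file-max'
$ md5sum /etc/hosts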

3. Cross-check ssh login among all nodes

4. Cross-ping hostname and hostname-pn on all nodes
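Steps 3 and 4 can be looped over the full node list; a sketch using the host names of this cluster:

for h in ausdfsgriddb01 ausdfsgriddb02 ausdfsgriddb10; do
    ssh $h date        # must return without a password prompt
    ping -c 1 $h
    ping -c 1 ${h}-pn  # private interconnect name
done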

5. Copy .profile from an existing node to the new node

6. Ensure the OCR, CSS files and the shared ASM disks are visible from the new node and have the right permissions
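A sketch, using the OCR locations shown in the root.sh output later in this document (the raw-device path below is an assumption; adjust it to your storage layout):

$ ls -l /u02/oradata/ocr1 /u02/oradata/ocr2
$ ls -l /dev/raw/raw*    # assumed device naming for the CSS voting disk and ASM disks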

7. Back up CRS, OCR and oraInventory on the existing nodes
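A sketch of the backups (ocrconfig -export must be run as root; the target paths are examples):

# as root, on an existing node
$ORA_CRS_HOME/bin/ocrconfig -export /tmp/ocr_backup.dmp
tar czf /tmp/crs_home.tar.gz $ORA_CRS_HOME
tar czf /tmp/oraInventory.tar.gz /u01/app/oracle/oraInventory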

8. Change the ownership of $ORA_CRS_HOME/bin/vipca.orig and $ORA_CRS_HOME/bin/srvctl.orig on the existing nodes

from

    $ ls -lrat *.orig

    -rwxr-x--x 1 root users 5380 Feb 27 2007 vipca.orig

    -rwxr-x--x 1 root users 5800 Feb 27 2007 srvctl.orig

    to

$ ls -lrat *.orig

    -rwxr-x--x 1 oracle oinstall 5380 2007-02-27 18:47 vipca.orig

    -rwxr-x--x 1 oracle oinstall 5800 2007-02-27 18:47 srvctl.orig
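The ownership change above can be made as root:

# as root, on each existing node
cd $ORA_CRS_HOME/bin
chown oracle:oinstall vipca.orig srvctl.orig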

9. Comment out the two lines below in $ORA_CRS_HOME/bin/srvctl on the existing nodes

    #LD_ASSUME_KERNEL=2.4.19

    #export LD_ASSUME_KERNEL

10. Add "unset LD_ASSUME_KERNEL" at line 126 of $ORA_CRS_HOME/bin/vipca

Steps 9 and 10 are specific to 10.2.0.3; the issue is fixed in 10.2.0.4. Please refer to Oracle Metalink Note 414163.1, "10gR2 RAC Install issues on Oracle EL5 or RHEL5 or SLES10 (VIPCA Failures)".
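After the step 10 edit, the relevant block in vipca looks roughly like this (a sketch; see Note 414163.1 for the exact context):

if [ "$arch" = "i686" -o "$arch" = "ia64" -o "$arch" = "x86_64" ]
then
    LD_ASSUME_KERNEL=2.4.19
    export LD_ASSUME_KERNEL
fi
unset LD_ASSUME_KERNEL   # line added by step 10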

11. Ensure the $ORACLE_BASE directory exists on the new node with the correct permissions
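A sketch, assuming the ORACLE_BASE used elsewhere in this document:

# as root, on the new node
mkdir -p /u01/app/oracle
chown oracle:oinstall /u01/app/oracle
chmod 775 /u01/app/oracle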


Part II: Adding an Oracle Clusterware Home to a New Node Using OUI in Interactive Mode
=============================================================
Ensure that you have successfully installed Oracle Clusterware on at least one node in your cluster environment. To use these procedures as shown, your $ORA_CRS_HOME environment variable must identify your successfully installed Oracle Clusterware home.

1. Set the DISPLAY environment variable and run the addNode.sh script from an existing node (node1)

. ./.profile

    DISPLAY=ipaddress:0.0; export DISPLAY

    cd $ORA_CRS_HOME/oui/bin

    ./addNode.sh


2. The Oracle Universal Installer (OUI) displays the Node Selection Page.

3. Enter the node that you want to add, verify the entries that OUI displays on the Summary Page, and click Next.


4. Monitor the progress of the copy of the CRS home to the new node and verify the total size of the CRS directory.
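For example, once the copy finishes, compare the CRS home size on the new node against an existing node:

$ du -sh /u01/app/oracle/product/10.2.0/crs_1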


5. Execute the configuration scripts as the root user.


    a. Run the orainstRoot.sh script on the new node if OUI prompts you to do so.

    ausdfsgriddb10:/u01/app/oracle/oraInventory # ./orainstRoot.sh

    Changing permissions of /u01/app/oracle/oraInventory to 770.

    Changing groupname of /u01/app/oracle/oraInventory to oinstall.

    The execution of the script is complete

b. Run the rootaddnode.sh script from the $ORA_CRS_HOME/install directory on the node from which you are running OUI.

    ausdfsgriddb01:/u01/app/oracle/product/10.2.0/crs_1/install # ./rootaddnode.sh

    clscfg: EXISTING configuration version 3 detected.


    clscfg: version 3 is 10G Release 2.

    Attempting to add 1 new nodes to the configuration

    Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.

    node :

    node 10: ausdfsgriddb10 ausdfsgriddb10-pn ausdfsgriddb10

    Creating OCR keys for user 'root', privgrp 'root'..

    Operation successful.

/u01/app/oracle/product/10.2.0/crs_1/bin/srvctl add nodeapps -n ausdfsgriddb10 -A ausdfsgriddb10-vip.us.dell.com/255.255.240.0/eth0 -o /u01/app/oracle/product/10.2.0/crs_1

c. Run the root.sh script on the new node from $ORA_CRS_HOME to start Oracle Clusterware on the new node.

    ausdfsgriddb10:/u01/app/oracle/product/10.2.0/crs_1 # ./root.sh

    WARNING: directory '/u01/app/oracle/product/10.2.0' is not owned by root

    WARNING: directory '/u01/app/oracle/product' is not owned by root

    WARNING: directory '/u01/app/oracle' is not owned by root

    Checking to see if Oracle CRS stack is already configured

    /etc/oracle does not exist. Creating it now.

    OCR LOCATIONS = /u02/oradata/ocr1,/u02/oradata/ocr2

OCR backup directory '/u01/app/oracle/product/10.2.0/crs_1/cdata/dfsddeamercrs' does not exist. Creating now

    Setting the permissions on OCR backup directory

    Setting up NS directories

    Oracle Cluster Registry configuration upgraded successfully

    WARNING: directory '/u01/app/oracle/product/10.2.0' is not owned by root

    WARNING: directory '/u01/app/oracle/product' is not owned by root

    WARNING: directory '/u01/app/oracle' is not owned by root

    clscfg: EXISTING configuration version 3 detected.

    clscfg: version 3 is 10G Release 2.

    Successfully accumulated necessary OCR keys.

    Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.

    node :

    node 1: ausdfsgriddb01 ausdfsgriddb01-pn ausdfsgriddb01

    node 2: ausdfsgriddb02 ausdfsgriddb02-pn ausdfsgriddb02

    node 3: ausdfsgriddb03 ausdfsgriddb03-pn ausdfsgriddb03


    node 4: ausdfsgriddb04 ausdfsgriddb04-pn ausdfsgriddb04

    node 5: ausdfsgriddb05 ausdfsgriddb05-pn ausdfsgriddb05

    node 6: ausdfsgriddb06 ausdfsgriddb06-pn ausdfsgriddb06

    node 7: ausdfsgriddb07 ausdfsgriddb07-pn ausdfsgriddb07

    node 8: ausdfsgriddb08 ausdfsgriddb08-pn ausdfsgriddb08

    clscfg: Arguments check out successfully.

    NO KEYS WERE WRITTEN. Supply -force parameter to override.

-force is destructive and will destroy any previous cluster configuration.

    Oracle Cluster Registry for cluster has already been initialized

    Startup will be queued to init within 30 seconds.

    Adding daemons to inittab

    Expecting the CRS daemons to be up within 600 seconds.

    CSS is active on these nodes.

    ausdfsgriddb01

    ausdfsgriddb02

    ausdfsgriddb03

    ausdfsgriddb04

    ausdfsgriddb05

    ausdfsgriddb06

    ausdfsgriddb07

    ausdfsgriddb08

    ausdfsgriddb09

    ausdfsgriddb10

    CSS is active on all nodes.

    Waiting for the Oracle CRSD and EVMD to start

    Oracle CRS stack installed and running under init(1M)

    Running vipca(silent) for configuring nodeapps

    Creating VIP application resource on (0) nodes.

    Creating GSD application resource on (0) nodes.

    Creating ONS application resource on (0) nodes.

    Starting VIP application resource on (8) nodes.........

    Starting GSD application resource on (8) nodes.........

    Starting ONS application resource on (8) nodes.........


    Done.

6. Verify that CRS is started on the new node and that the nodeapps are started (except for the listener), and then exit OUI.
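A sketch of the verification, run from $ORA_CRS_HOME/bin on any node:

$ ./crs_stat -t
$ ./srvctl status nodeapps -n ausdfsgriddb10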


7. Obtain the remote port identifier, which you need to know for the next step, by running the following command on an existing node from the $ORA_CRS_HOME/opmn/conf directory:

    $ cat $ORA_CRS_HOME/opmn/conf/ons.config

    localport=6113

    remoteport=6201

    loglevel=3

    useocr=on

8. From the $ORA_CRS_HOME/bin directory on an existing node, run the Oracle Notification Service (RACGONS) utility as in the following example, where Remote_Port is the port number from the previous step and New_Node is the name of the node that you are adding:

./racgons add_config New_Node:Remote_Port

    Ex:

    $ ./racgons add_config ausdfsgriddb10:6201

9. Move the Oracle-created S96init.crs script to S11, and chkconfig init.crs off and then back on, to ensure CRS will start properly during a reboot. Finally, reboot the node and confirm. (SE)


    mv /etc/init.d/rc3.d/S96init.crs /etc/init.d/rc3.d/S11init.crs

    chkconfig -e init.crs (Now within vi change to on and :wq)

    chkconfig -e init.crs (Now within vi change to off and :wq)

    chkconfig -e init.crs (Now within vi change to on and :wq)

Part III: Adding an Oracle Home to a New Node Using OUI in Interactive Mode
================================================
Ensure that you have successfully installed Oracle with the Oracle RAC software on at least one node in your cluster environment. To use these procedures as shown, your $ORACLE_HOME environment variable must identify your successfully installed Oracle home.

    1. Go to $ORACLE_HOME/oui/bin and run the addNode.sh script on node1.

. ./.profile

    DISPLAY=ipaddress:0.0; export DISPLAY

    cd $ORACLE_HOME/oui/bin

    ./addNode.sh

    2. When OUI displays the Node Selection Page, select the node to be added and click Next.


4. Run the root.sh script on the new node from $ORACLE_HOME when OUI prompts you to do so.


    ausdfsgriddb10:/u01/app/oracle/product/10.2.0/db_1 # ./root.sh

    Running Oracle10 root.sh script...

    The following environment variables are set as:

    ORACLE_OWNER= oracle

    ORACLE_HOME= /u01/app/oracle/product/10.2.0/db_1

    Enter the full pathname of the local bin directory: [/usr/local/bin]:

    Copying dbhome to /usr/local/bin ...

    Copying oraenv to /usr/local/bin ...

    Copying coraenv to /usr/local/bin ...

    Creating /etc/oratab file...

    Entries will be added to the /etc/oratab file as needed by

    Database Configuration Assistant when a database is created

    Finished running generic part of root.sh script.


    Now product-specific root actions will be performed.

Part IV: Reconfigure Listener on New Node
=============================================================

    On the new node, run the Oracle Net Configuration Assistant (NETCA) to add a Listener.

export DISPLAY=ipaddress:0.0
cd $ORACLE_HOME/bin
./netca &
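After NETCA completes, the new listener resource can be confirmed (a sketch; resource names follow the 10g defaults):

$ $ORA_CRS_HOME/bin/crs_stat -t | grep -i lsnr
$ lsnrctl status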


Part V: Adding ASM Instance to New Node Manually
=============================================================

Rename the ASM password file copied from an existing node (the password file lives in $ORACLE_HOME/dbs):

mv orapw+ASM1 orapw+ASM10

c. Create the admin directories for the ASM instance

mkdir -p /u01/app/oracle/admin/+ASM/udump
mkdir -p /u01/app/oracle/admin/+ASM/cdump
mkdir -p /u01/app/oracle/admin/+ASM/hdump
mkdir -p /u01/app/oracle/admin/+ASM/bdump
mkdir -p /u01/app/oracle/admin/+ASM/pfile

d. Copy init.ora to the new node

cd /u01/app/oracle/admin/+ASM/pfile
scp ausdfsgriddb01:/u01/app/oracle/admin/+ASM/pfile/init.ora .

e. Add the new ASM instance in /u01/app/oracle/admin/+ASM/pfile/init.ora on all RAC nodes

+ASM10.instance_number=10

f. Add ASM to the cluster

srvctl add asm -n ausdfsgriddb10 -i +ASM10 -o /u01/app/oracle/product/10.2.0/db_1

g. Start the ASM instance

srvctl start asm -n ausdfsgriddb10
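A quick check that the new ASM instance registered and started (a sketch):

srvctl status asm -n ausdfsgriddb10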

Some steps in this document are specific to the Dell standard Linux environment setup and may not apply to your environment.

    Reference

    =================

Oracle Metalink: Adding a Node to a 10g RAC Cluster (Note 270512.1)

Oracle Metalink: Cluster Verification Utility (Doc ID 316817.1)

Oracle Metalink: 10gR2 RAC Install issues on Oracle EL5 or RHEL5 or SLES10 (VIPCA Failures) (Note 414163.1)
