Transcript
Page 1: Upgrade 11gR2 to 12cR1 Clusterware

Upgrade11gR2 to12cR1 Clusterware

Presenter : Nikhil Kumar

Page 2: Upgrade 11gR2 to 12cR1 Clusterware

WHO AM I? Nikhil Kumar (DBA Manager)

6 years of experience with Oracle Databases and Apps.

Oracle Certified Professional, Oracle 9i and 11g.

Worked on mission-critical databases (telecom, financial ERP, manufacturing, and government domains).

Member and Speaker of AIOUG-North India Chapter (http://www.aioug.org/aiougnichapter.php)

Contact me at [email protected]

http://nikhildbasavvy.wordpress.com/

https://www.facebook.com/groups/OracleTechSavvy/

Twitter: @nikhil0028us

http://www.slideshare.net/nikhil0028us

in.linkedin.com/pub/nikhil-kumar/1b/a74/350/

Page 3: Upgrade 11gR2 to 12cR1 Clusterware

AGENDA

Introduction to Clusterware

Pros and Cons of Upgrade

Prerequisites

Traditional Cluster vs. Flex Cluster

Clusterware Upgrade

Recovering from rootupgrade.sh failure

Clusterware Downgrade

Tips to monitor and improve the RAC environment

Page 4: Upgrade 11gR2 to 12cR1 Clusterware

Oracle Clusterware (Platform on Platform)

Oracle Clusterware is the infrastructure that provides the platform for an Oracle database to run in shared mode (active-active).

Oracle Clusterware acts as a platform on top of the OS platform to provide database availability in shared mode.

Page 5: Upgrade 11gR2 to 12cR1 Clusterware

Traditional Cluster vs. Flex Cluster

Page 6: Upgrade 11gR2 to 12cR1 Clusterware

Pros and Cons of Upgrade

Pros (Requirements)

New Features and components

Support from Oracle

Bug fixes

Sometimes required by advanced applications

Database upgrade requirement (the database version cannot be higher than the Clusterware version).

Cons

Risk of failure during upgrade

Risk of long outage in 24/7 applications

Lack of skills on the newer version

Legacy applications

Page 7: Upgrade 11gR2 to 12cR1 Clusterware

Prerequisites

Backup of existing Clusterware and OCR

Sufficient space in the installation mount point

New Grid home directory (out-of-place upgrade)

Unset Oracle environment variables (ORACLE_HOME, ORACLE_BASE, TNS_ADMIN, etc.)

OS Groups

Kernel Parameters and Packages

Root or sudo user access

A minimum Clusterware version of 11.2.0.2.3 is required to upgrade to 12.1.0.1
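Two of the prerequisites above can be sketched as shell checks. This is a sketch only, not from the slides: the variable names match the list above, and the active Clusterware version is hard-coded for illustration (on a real cluster it would come from crsctl query crs activeversion).

```shell
# Sketch: clear the Oracle environment variables before launching the installer.
unset ORACLE_HOME ORACLE_BASE TNS_ADMIN
env | grep -E '^(ORACLE_HOME|ORACLE_BASE|TNS_ADMIN)=' || echo "environment clean"

# Sketch: compare the active Clusterware version against the 11.2.0.2.3 minimum.
# "active" is hard-coded here for illustration only.
active="11.2.0.4.0"
minimum="11.2.0.2.3"
lowest=$(printf '%s\n%s\n' "$active" "$minimum" | sort -V | head -1)
if [ "$lowest" = "$minimum" ]; then
  echo "OK to upgrade to 12.1.0.1"
else
  echo "active version $active is below the $minimum minimum; patch first"
fi
```

The sort -V trick compares dotted version strings numerically per component, so 11.2.0.10.0 would correctly sort above 11.2.0.2.3.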

Page 8: Upgrade 11gR2 to 12cR1 Clusterware

Cluster Overview

Two-node cluster

Operating system: RHEL 6.4

Cluster and database software version: 11.2.0.4.0

Cluster Name: NIOUG

SCAN: racnode.linuxdc.com

Raw disks: 10 LUNs

Disk group: OCR

Page 9: Upgrade 11gR2 to 12cR1 Clusterware

Cluster Overview

Page 10: Upgrade 11gR2 to 12cR1 Clusterware

Upgrading Clusterware from 11.2.0.4 to 12.1.0.1

Page 11: Upgrade 11gR2 to 12cR1 Clusterware

Select Installation Option

Page 12: Upgrade 11gR2 to 12cR1 Clusterware

Cluster Overview

Page 13: Upgrade 11gR2 to 12cR1 Clusterware

Grid Infrastructure Node Selection

Page 14: Upgrade 11gR2 to 12cR1 Clusterware

Grid Infrastructure Management Repository Option

Page 15: Upgrade 11gR2 to 12cR1 Clusterware

The Grid Management database (-MGMTDB) is a new database instance used to store Cluster Health Monitor (CHM) data.

The instance is created on the first node only.

It fails over to a surviving node if its current node crashes.

Datafiles for the repository database are stored in the same location as the OCR/voting disks.

[oracle@racnode1 ~]$ srvctl config mgmtdb

[oracle@racnode1 ~]$ srvctl status mgmtdb
Database is enabled
Instance -MGMTDB is running on node racnode1

[oracle@racnode1 ~]$ oclumon manage -get MASTER

Master = racnode1

[oracle@racnode1 ~]$ oclumon manage -get reppath

CHM Repository Path = +OCR/_MGMTDB/DATAFILE/sysmgmtdata.259.85381046

[oracle@racnode1 ~]$ oclumon version

Cluster Health Monitor (OS), Version 12.1.0.1.0 - Production Copyright 2007, 2013 Oracle. All rights reserved.

For command reference

http://docs.oracle.com/database/121/CWADD/troubleshoot.htm#CWADD92340

Page 16: Upgrade 11gR2 to 12cR1 Clusterware

Group Information

Page 17: Upgrade 11gR2 to 12cR1 Clusterware

Installation Location

Page 18: Upgrade 11gR2 to 12cR1 Clusterware

Root Script Execution

Page 19: Upgrade 11gR2 to 12cR1 Clusterware

Root Script Execution

Page 20: Upgrade 11gR2 to 12cR1 Clusterware

Select batches for nodes

Page 21: Upgrade 11gR2 to 12cR1 Clusterware

Summary of configuration

Page 22: Upgrade 11gR2 to 12cR1 Clusterware

Upgrade in Progress

Page 23: Upgrade 11gR2 to 12cR1 Clusterware

Upgrade in Progress

Page 24: Upgrade 11gR2 to 12cR1 Clusterware

Running rootupgrade.sh in background

Page 25: Upgrade 11gR2 to 12cR1 Clusterware

Click “Continue” to run rootupgrade.sh on first node

Page 26: Upgrade 11gR2 to 12cR1 Clusterware

rootupgrade.sh on first node

Page 27: Upgrade 11gR2 to 12cR1 Clusterware

Press “Ok” to run rootupgrade.sh on second node

Page 28: Upgrade 11gR2 to 12cR1 Clusterware

Recovering from rootupgrade.sh failure

Changing the First Node for Upgrade

If the first node becomes inaccessible, you can force another node to act as the first node for installation or upgrade. Run the following command on another node using the -force option:

rootupgrade.sh -force -first

Problems before rootupgrade.sh failure: MOS ID 1056322.1

Problem running rootupgrade.sh : MOS ID 1364947.1

Page 29: Upgrade 11gR2 to 12cR1 Clusterware

Upgrade in Progress

Page 30: Upgrade 11gR2 to 12cR1 Clusterware

Management database creation issue

Page 31: Upgrade 11gR2 to 12cR1 Clusterware

Upgrade Complete

Page 32: Upgrade 11gR2 to 12cR1 Clusterware

Checking Clusterware Services

ps -ef | grep d.bin

crsctl stat res -t
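The ps check above can be wrapped so each Clusterware daemon is reported explicitly rather than eyeballed in the raw grep output. A sketch only; the daemon names (ohasd.bin, ocssd.bin, crsd.bin, evmd.bin) are the usual background processes, not taken from the slides.

```shell
# Sketch: report each expected Clusterware daemon as running / NOT running.
report=""
for d in ohasd.bin ocssd.bin crsd.bin evmd.bin; do
  if ps -ef | grep "$d" | grep -v grep > /dev/null 2>&1; then
    state="running"
  else
    state="NOT running"
  fi
  echo "$d: $state"
  report="$report$d: $state
"
done
```

Any "NOT running" line after the upgrade is a prompt to check the stack with crsctl before letting users back on.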

Page 33: Upgrade 11gR2 to 12cR1 Clusterware

Downgrade Clusterware from 12.1.0.1 to 11.2.0.4

Page 34: Upgrade 11gR2 to 12cR1 Clusterware

Downgrade Clusterware from 12.1.0.1 to 11.2.0.4

1. On all remote nodes, run:

# /u01/app/12.1.0/grid/crs/install/rootcrs.sh -downgrade (This command shuts down the Grid Clusterware stack and restores the old Clusterware home files.)

2. After rootcrs.sh -downgrade has completed on all remote nodes, run on the local node:

# /u01/app/12.1.0/grid/crs/install/rootcrs.sh -downgrade -lastnode (This command restores the old OCR file.)

This script downgrades the OCR. If you want to stop a partial or failed Oracle Grid Infrastructure 12c Release 1 (12.1) installation and restore the previous release Oracle Clusterware, then use the -force flag with this command.

3. Update the registry:

$ cd /u01/app/12.1.0/grid/oui/bin
$ ./runInstaller -nowait -waitforcompletion -ignoreSysPrereqs -updateNodeList -silent CRS=false ORACLE_HOME=/u01/app/12.1.0/grid

$ cd /u01/app/12.1.0/grid/oui/bin
$ ./runInstaller -nowait -waitforcompletion -ignoreSysPrereqs -updateNodeList -silent CRS=true ORACLE_HOME=/u01/app/crs

4. Start the Clusterware from the old Clusterware home on all nodes:

# /u01/app/11.2.0/grid/bin/crsctl start crs

Page 35: Upgrade 11gR2 to 12cR1 Clusterware

Downgrade Clusterware from 12.1.0.1 to 11.2.0.4

Page 36: Upgrade 11gR2 to 12cR1 Clusterware

Downgrade… Behind the Scenes

2014-07-24 18:25:02: Successfully execute 'acfsroot install' from older home

2014-07-24 18:25:02: Restore init files

2014-07-24 18:25:02: restore init scripts

2014-07-24 18:25:02: copy "/u01/app/11.2.0/grid/crs/init/init.ohasd" => "/etc/init.d/init.ohasd"

2014-07-24 18:25:02: copy "/u01/app/11.2.0/grid/crs/init/ohasd" => "/etc/init.d/ohasd"

2014-07-24 18:25:02: leftVersion=11.2.0.4.0; rightVersion=11.2.0.3.0

2014-07-24 18:25:02: [11.2.0.4.0] is higher than [11.2.0.3.0]

2014-07-24 18:25:02: Remove all new version related stuff from /etc/oratab

2014-07-24 18:25:02: Copying file /etc/oratab.new.racnode1 to /etc/oratab

2014-07-24 18:25:02: copy "/etc/oratab.new.racnode1" => "/etc/oratab"

2014-07-24 18:25:02: Removing file /etc/oratab.new.racnode1

2014-07-24 18:25:02: Removing file /etc/oratab.new.racnode1

2014-07-24 18:25:02: Successfully removed file: /etc/oratab.new.racnode1

2014-07-24 18:25:02: Removing the checkpoint file /u01/app/grid/crsdata/racnode1/crsconfig/ckptGridHA_racnode1.xml

2014-07-24 18:25:02: Removing file /u01/app/grid/crsdata/racnode1/crsconfig/ckptGridHA_racnode1.xml

2014-07-24 18:25:02: Successfully removed file: /u01/app/grid/crsdata/racnode1/crsconfig/ckptGridHA_racnode1.xml

2014-07-24 18:25:02: Successfully downgraded Oracle Clusterware stack on this node

Page 37: Upgrade 11gR2 to 12cR1 Clusterware

Be Careful While Downgrading the Clusterware

Running rootupgrade.sh in the wrong sequence during a cluster downgrade can lead to the error below.

Page 38: Upgrade 11gR2 to 12cR1 Clusterware

Tips to Monitor and Improve the Cluster Environment

There are many tools that can improve your RAC configuration.

1. ORAchk tool: proactively scans for the most impactful problems across the various layers of your stack. It performs checks at the OS, cluster, database, and network levels and suggests solutions accordingly.

Also refer to ORAchk - Oracle Configuration Audit Tool (Doc ID 1268927.2)

2. OSWatcher and/or CHM (Cluster Health Monitor): these tools monitor operating system and Clusterware level processes and record them in logs according to your retention policy.

Also refer to OSWatcher (Includes: [Video]) (Doc ID 301137.1)

3. RAC-specific AWR reports: RAC comes with additional AWR reporting functionality.

awrgrpt: gives detailed information for all nodes of the cluster in a single report.

awrgdrpt: gives a difference report between two AWR periods to help diagnose performance issues.

Page 39: Upgrade 11gR2 to 12cR1 Clusterware

Page 40: Upgrade 11gR2 to 12cR1 Clusterware
