
Table of Contents

Document Control
Change Record
Document Authors and Contributors
1. Introduction
2. Platform and Version
3. AIX Setup
   O/S Settings
   User Accounts
   .Profile
4. File Systems Layout
   Mount Points
   OraInventory Location
   ORACLE_BASE
5. Setting up SSH keys (for RAC only)
6. Host Names and Network Interface (for RAC only)
7. Cluster and Scan names (for RAC only)
8. ASM Configuration
   ASM DiskGroups
   ASM Disks
9. Grid Infrastructure Software Installation
   9.1. Method 1: Clone from non-RAC Grid Infrastructure Gold Image
   9.2. Method 2: Fresh Install of Grid Infrastructure
      Download Software Updates
      Select Installation Option
      Select Installation Type (for RAC only)
      Grid Plug and Play Information (for RAC only)
      Cluster Node Information
      Specify Network Interface Usage (for RAC only)
      Storage Option Information (for RAC only)
      Create ASM Disk Group
      Specify ASM Password
      Privileged Operating System Groups
      Specify Installation Location
      Create Inventory
      Perform Prerequisite Checks
      Summary
      Execute Configuration scripts
      Applying 11.2.0.3 Grid Infrastructure PSU2
10. Oracle Database Software Installation
   10.1. Method 1: Clone from non-RAC Database Software Gold Image
   10.2. Method 2: Fresh Install of Database Software
      Download Software Updates
      Select Installation Option
      Grid Installation Options
      Select Database Edition
      Specify Installation Location
      Privileged Operating System Groups
      Perform Prerequisite Check
      Summary
      Execute Configuration scripts
      Applying 11.2.0.3 RDBMS PSU2
11. Adjusting ASM Parameters
12. Create ASM Diskgroup
13. TNS_ADMIN
14. Register Listener and ASM to Grid Infrastructure
15. Post-Install Database Setup


1. Introduction

This document describes the installation standard for Oracle Grid Infrastructure, the cluster software for Oracle RAC. Given that more projects are considering migrating standalone databases to Oracle RAC, a common implementation standard helps ensure a smooth rollout and better ongoing support from other DBAs. The upcoming new data center will have a significant number of new Oracle Grid Infrastructure installations. This document will serve as a blueprint for both existing and newly hired DBAs to follow in the course of Oracle Grid Infrastructure installation. Since the target audience is DBAs performing the software installation, the language used in this document tends to be technical and product specific.

2. Platform and Version

The latest version of Oracle Grid Infrastructure is 11.2.0.3 PSU2. It will be installed on IBM P7 LPARs running AIX 6.1 TL6 SP6 with patch IV10539. Grid Infrastructure is required not just for Oracle RAC nodes, but also for stand-alone database servers planning to use ASM for storage. Starting with Oracle 11g, ASM is an integral part of Grid Infrastructure.

To verify the AIX version and patch level:
$ oslevel -s
6100-06-06-1140

To verify the existence of AIX patch IV10539:
$ instfix -ik IV10539
All filesets for IV10539 were found.

3. AIX Setup

O/S Settings

The /tmp file system needs to have at least 1 GB of free space.
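As a quick check, the free space can be verified with df; on AIX the -g flag reports sizes in GB (output will vary by system):
$ df -g /tmp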

The system block size allocation value ncargs (measured in 4 KB blocks) needs to be at least 128. Run the following command to verify:
$ lsattr -E -l sys0 -a ncargs
ncargs 256 ARG/ENV list size in 4K byte blocks True
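If the value is below the minimum, it can be raised with chdev; a sketch, to be run as root by the sysadmin:
$ chdev -l sys0 -a ncargs=256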

The maximum number of PROCESSES allowed per user needs to be at least 16384. Run the following command to verify.


$ lsattr -E -l sys0 -a maxuproc
maxuproc 16384 Maximum number of PROCESSES allowed per user True
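If maxuproc is lower than 16384, it can be raised the same way; a sketch, to be run as root:
$ chdev -l sys0 -a maxuproc=16384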

User Accounts

The Grid Infrastructure software will be installed under the Unix account ‘oragrid’. Here are the profiles of the oragrid and oracle accounts. You can verify them by running the id command and checking the passwd file. For consistency, make sure the user IDs and group IDs are the same as listed below.

$ id oragrid
uid=501(oragrid) gid=501(dba) groups=502(oinstall)

$ id oracle
uid=500(oracle) gid=501(dba) groups=502(oinstall)

$ grep oragrid /etc/passwd
oragrid:!:501:501:Oracle GRID Administration:/u01/home/oragrid:/bin/ksh

$ grep oracle /etc/passwd
oracle:!:500:501:Oracle owner:/u01/home/oracle:/bin/ksh

The AIX ulimit should be set to unlimited in all categories for the oragrid, oracle, and root users.
$ ulimit -a
time(seconds)        unlimited
file(blocks)         unlimited
data(kbytes)         unlimited
stack(kbytes)        unlimited
memory(kbytes)       unlimited
coredump(blocks)     unlimited
nofiles(descriptors) unlimited
threads(per process) unlimited
processes(per user)  unlimited
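On AIX these limits normally live in /etc/security/limits, where -1 means unlimited. A sketch of the oragrid stanza (repeat for oracle and root; only the standard attributes are shown, and changes take effect at the next login):
oragrid:
        fsize = -1
        core = -1
        cpu = -1
        data = -1
        rss = -1
        stack = -1
        nofiles = -1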

Get the passwords of oragrid and oracle from the sysadmin. Change the passwords and keep them safe.

.Profile

Update the .profile of users oragrid and oracle to include the following lines. This sets the command line prompt to include the username, hostname, and current directory. Additional information can be added to the prompt as desired. For user oragrid, DB_NAME will be ASM, i.e. ASMenv. For user oracle, DB_NAME will be the name of the primary database to be hosted.


export PS1="[${LOGNAME}@`hostname -s`] \$PWD $ "
. $HOME/<DB_NAME>env

For example, ASMenv has the following contents.
export GRID_HOME=/u01/grid/11.2.0/grid
export CRS_HOME=/u01/grid/11.2.0/grid
export ASM_HOME=/u01/grid/11.2.0/grid
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/grid/11.2.0/grid
export ADR_HOME=/u01/app/oracle/diag/asm/+asm/+ASM
export TNS_ADMIN=/u01/grid/11.2.0/grid/network/admin
export PATH=$ORACLE_HOME/bin:$PATH
export ORACLE_SID=+ASM

For example, EIRS1Denv has the following contents.
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1
export ADR_HOME=/u01/app/oracle/diag/rdbms/eirs1d/EIRS1D
export TNS_ADMIN=/u01/grid/11.2.0/grid/network/admin
export PATH=$ORACLE_HOME/bin:$PATH
export ORACLE_SID=EIRS1D

4. File Systems Layout

Mount Points

The home directory and the code trees will have separate mount points.

Mount Point   Owner         Permission   Size     Volume Group   Description
/u01          oracle:dba    775          10 GB    orabinvg       Home directories of user oracle (/u01/home/oracle) and user oragrid (/u01/home/oragrid)
/u01/app      oracle:dba    775          100 GB   orabinvg       Oracle RDBMS binary and centralized Oracle inventory
/u01/grid     oragrid:dba   775          150 GB   orabinvg       Oracle Grid Infrastructure binary
/u02          oracle:dba    775


OraInventory Location

The inventory directory location should be /u01/app/oraInventory. Since this location is shared by both the Grid Infrastructure and RDBMS installations, change its permission setting to allow both the oragrid and oracle users full privileges.

As user oragrid, create the oraInventory. This directory needs to be owned by oragrid; otherwise, the Grid Infrastructure installation or cloning will fail.
$ mkdir /u01/app/oraInventory
$ chmod 770 /u01/app/oraInventory
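For reference, the central inventory pointer file /etc/oraInst.loc should then reference this location; a sketch (the inst_group value is an assumption based on the oinstall group shown earlier):
inventory_loc=/u01/app/oraInventory
inst_group=oinstall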

ORACLE_BASE

The ORACLE_BASE will be set to /u01/app/oracle for both oragrid and oracle. In order for it to be shared by both users, we have to set the permissions of certain directories for both.
As user oracle,
$ cd /u01/app
$ mkdir oracle
$ chmod 775 oracle
$ cd /u01/app/oracle
$ mkdir diag cfgtoollogs
$ chmod 775 diag cfgtoollogs

5. Setting up SSH keys (for RAC only)

On each RAC node, do the following for both oracle and oragrid users, taking all the defaults when prompted:

Assumption: no authorized keys file exists in $HOME/.ssh

$ /usr/bin/ssh-keygen -b 2048 -t rsa
$ /usr/bin/ssh-keygen -t dsa

On the first node, do the following:

$ cd $HOME/.ssh
$ cat id_rsa.pub >> authorized_keys
$ cat id_dsa.pub >> authorized_keys
$ /usr/bin/scp -p authorized_keys <hostname#2>:.ssh/authorized_keys

Then from each remaining node, run these commands:
$ cd $HOME/.ssh
$ cat id_rsa.pub >> authorized_keys
$ cat id_dsa.pub >> authorized_keys

Do the following on each node except the last one.

$ /usr/bin/scp -p authorized_keys <hostname#N+1>:.ssh/authorized_keys


From the last node, do the following:

$ /usr/bin/scp -p authorized_keys <hostname#1>:.ssh/authorized_keys
$ /usr/bin/scp -p authorized_keys <hostname#2>:.ssh/authorized_keys

etc.

From every node, run the following command for ALL nodes (including the node itself) to verify that the SSH keys work without prompting for a password:

$ ssh <hostname> date
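To speed this up, a simple ksh loop can be run from each node (node names here are hypothetical):
$ for h in node1 node2; do ssh $h date; done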

6. Host Names and Network Interface (for RAC only)

Check the /etc/hosts file. It should contain host information for all RAC nodes. For each node, it should list the host, virtual host, and dual private host entries. The contents should be identical across all RAC nodes. The host entry should have the long name (with domain) before the short name.

<IP> <host>.comp.pge.com <host>
<IP> <host>-vip.comp.pge.com <host>-vip
<IP> <host>-priv.comp.pge.com <host>-priv1
<IP> <host>-priv.comp.pge.com <host>-priv2
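A concrete, purely hypothetical example for one node (placeholder addresses):
192.0.2.11  ccbtst01.comp.pge.com       ccbtst01
192.0.2.21  ccbtst01-vip.comp.pge.com   ccbtst01-vip
10.0.0.11   ccbtst01-priv.comp.pge.com  ccbtst01-priv1
10.0.1.11   ccbtst01-priv.comp.pge.com  ccbtst01-priv2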

All the network names for this server except for <hostname>-priv should be defined in the Domain Name Server (DNS). The RAC private IP addresses (xxxxx-priv) should NOT be defined in DNS. You should be able to successfully ping all the <hostname> and <hostname>-priv<n> names. The <hostname>-vip and <scan_name> addresses are only pingable when the cluster is running.

Run the following commands to check the network names. Repeat for each server in the cluster. For nslookup, verify that the /etc/hosts entry matches the IP address found.

$ ping <hostname>
$ ping <hostname>-priv1
$ ping <hostname>-priv2
$ nslookup <hostname>
$ nslookup <hostname>-vip
$ nslookup <scan_name>
$ nslookup <hostname>-priv1 (should return “not found”)
$ nslookup <hostname>-priv2 (should return “not found”)
$ nslookup <IP address of <hostname>-priv1> (should return “not found”)
$ nslookup <IP address of <hostname>-priv2> (should return “not found”)


During the Grid Infrastructure installation, when prompted for network interface names, use en0 as public, mark en1 as not used, and use en2 as private #1 and en3 as private #2.

7. Cluster and Scan names (for RAC only)

During the Grid Infrastructure installation when prompted for the cluster name, use the naming convention of <app>-<type>n in lower case. Type can be ‘prod’, ‘fste’, ‘qa’, ‘dev’, ‘test’ depending on purpose. For example, if this is the second test cluster for the CC&B application, use cluster name ccb-test2.

The SCAN name should be in the format <cluster_name>-scan.comp.pge.com, for example ccb-test2-scan.comp.pge.com.
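Once DNS is set up, the SCAN name can be sanity-checked with nslookup; it should resolve to the SCAN addresses defined in DNS (Oracle recommends three):
$ nslookup ccb-test2-scan.comp.pge.com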

8. ASM Configuration

ASM DiskGroups

In the RAC configuration, all voting disks, datafiles, redo logs, archived logs, flashback logs, control files, and spfiles should reside on the shared ASM storage. Here is the naming convention for the various ASM diskgroups. Note that the convention specifies only the suffix of the diskgroup name; DBAs can determine the rest of the name according to the application and the nature of the diskgroup. Some diskgroups may be shared by multiple applications, in which case it may not be reasonable to tie the diskgroup name to an application name. We recommend using the default allocation unit, which is 1 MB. Let ASM manage its directory structure. Do not manually create sub-directories in ASM diskgroups. When creating datafiles, for example, just specify the file name as ‘+<diskgroup>’ (see the sketch after the table below); ASM will assign a name and put the file in the appropriate sub-directory.

Diskgroup Name   ASM Redundancy   LUN Size                 Description
*_GRID           High (5 disks)   5G                       Voting disk, OCR
*_DATA_nn        External         50G, 250G, 500G, 1000G   Datafiles, tempfiles, control files, spfile
*_REDO_nn        External         5G, 50G                  Online redo logs, control files, spfile
*_FRA            External         50G, 250G, 500G, 1000G   Archived logs, flashback logs, incremental backups
*_IMAGE          External         50G, 250G, 500G, 1000G   RMAN image copies
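As noted above, when creating a datafile you specify only the diskgroup; a sketch (the tablespace and diskgroup names are illustrative):
SQL> create tablespace app_data datafile '+MDM_DATA_01' size 10g;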

ASM Disks

When system administrators present the disks for ASM, they have to change the ownership to oragrid and the permission setting to 660.
$ chown oragrid:dba /dev/rhdisk<nnn>
$ chmod 660 /dev/rhdisk<nnn>
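A quick check of one of the devices (output is illustrative):
$ ls -l /dev/rhdisk25
crw-rw---- 1 oragrid dba ... /dev/rhdisk25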


9. Grid Infrastructure Software Installation

9.1. Method 1: Clone from non-RAC Grid Infrastructure Gold Image

The gold image file we will use is the standalone non-RAC version of 11.2.0.3 Grid Infrastructure with PSU2.

First, copy the gold image file to /u02/software/gold_images from oragctst21: /u02/data01/dbmaint/gold_images/AIX/AIX_nonRAC_gi_binaries_11.2.0.3.2.tar.gz

As root, unzip the gold image to the GRID_HOME.
$ cd /u01/grid/11.2.0/grid (create the directory if it does not exist)
$ gunzip -c /u02/software/gold_images/AIX_nonRAC_gi_binaries_11.2.0.3.2.tar.gz | tar xf -

As root, run the following.
$ /u01/grid/11.2.0/grid/clone/rootpre.sh

As oragrid, run the Oracle cloning script.
$ cd /u01/grid/11.2.0/grid/clone/bin

# The following command is all on one line
$ /usr/bin/perl clone.pl ORACLE_HOME="/u01/grid/11.2.0/grid" ORACLE_BASE="/u01/app/oracle" INVENTORY_LOCATION="/u01/app/oraInventory"

As root, run the following.
$ /u01/app/oraInventory/orainstRoot.sh
$ /u01/grid/11.2.0/grid/root.sh

# The following command is all on one line
$ /u01/grid/11.2.0/grid/perl/bin/perl -I/u01/grid/11.2.0/grid/perl/lib -I/u01/grid/11.2.0/grid/crs/install /u01/grid/11.2.0/grid/crs/install/roothas.pl

# Output will be like the following. Ignore the ACFS driver error.
Using configuration parameter file: /u01/grid/11.2.0/grid/crs/install/crsconfig_params
LOCAL ADD MODE
Creating OCR keys for user 'oragrid', privgrp 'dba'..
Operation successful.
LOCAL ONLY MODE
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'system'..
Operation successful.
CRS-4664: Node eirsdboratst01 successfully pinned.
Adding Clusterware entries to inittab
ACFS drivers installation failed

Please ignore the “ACFS drivers installation failed” warning.


As oragrid, configure ASM.
$ crsctl start resource -all
CRS-5702: Resource 'ora.evmd' is already running on 'mdmdboradev01'
CRS-2501: Resource 'ora.ons' is disabled
CRS-2672: Attempting to start 'ora.diskmon' on 'mdmdboradev01'
CRS-2672: Attempting to start 'ora.cssd' on 'mdmdboradev01'
CRS-2676: Start of 'ora.diskmon' on 'mdmdboradev01' succeeded
CRS-2676: Start of 'ora.cssd' on 'mdmdboradev01' succeeded
CRS-4000: Command Start failed, or completed with errors.

Please ignore the errors.

# Have to register the ASM in OCR
$ srvctl add asm -p $ORACLE_HOME/dbs/init+ASM.ora

$ sqlplus / as sysasm
SQL> startup
ASM instance started
ORA-15110: no diskgroups mounted

SQL> create spfile='/u01/grid/11.2.0/grid/dbs/spfile+ASM.ora' from pfile='/u01/grid/11.2.0/grid/dbs/init+ASM.ora';

SQL> shutdown immediate

Now that the spfile has been created, remove the $ORACLE_HOME/dbs/init+ASM.ora file and start up the ASM.
$ srvctl start asm
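For reference, the removal step above resolves to the following under this layout:
$ rm /u01/grid/11.2.0/grid/dbs/init+ASM.ora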

Create the Oracle password file.
$ orapwd file=/u01/grid/11.2.0/grid/dbs/orapw+ASM password=<password>

Create the ASMSNMP user.
$ sqlplus / as sysasm
SQL> create user asmsnmp identified by <password>;
SQL> grant sysdba to asmsnmp;

Update the host in the $GRID_HOME/network/admin/listener.ora file.
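The entry to update typically looks like the following; this is a sketch of standard listener.ora syntax, with <hostname> to be replaced by this server's name:
LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = <hostname>)(PORT = 1521))
    )
  )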

9.2. Method 2: Fresh Install of Grid Infrastructure

Copy the downloaded 11203 software zip files from oragctst21:/u02/data01/dbmaint/cd_images/AIX/11203/* to the /u02/software/11203cd directory.

Unzip all the zip files in /u02/software/11203cd.


Request the sysadmin to run the rootpre.sh script as root on each node.
/u02/software/11203cd/grid/rootpre.sh

For a RAC installation, run the Oracle cluvfy utility as oragrid to verify that the nodes are ready for the cluster installation. You may need to unset CV_HOME first.
/u02/software/11203cd/grid/runcluvfy.sh stage -pre crsinst -n <node1>,<node2>,… -fixup -verbose

Run the Oracle Installer as user oragrid.
$ export DISPLAY=<your_PC_name>:0.0
$ cd /u02/software/11203cd/grid
$ ./runInstaller -ignoreInternalDriverError

Download Software Updates

Select “Skip software updates”.

Select Installation Option

For RAC, select “Install and Configure Oracle Grid Infrastructure for a Cluster”.
For non-RAC, select “Configure Oracle Grid Infrastructure for a Standalone Server”.

Select Installation Type (for RAC only)

Select “Advanced Installation”.

Grid Plug and Play Information (for RAC only)

Specify the cluster name and SCAN name. Set SCAN port to 1521. Do not configure GNS.

Cluster Node Information

Include all the RAC nodes. Click the “Add” button to add the other nodes.


Specify Network Interface Usage (for RAC only)

Interface Name   Subnet                Interface Type
en0              Refer to /etc/hosts   Public
en1              Refer to /etc/hosts   Do Not Use
en2              Refer to /etc/hosts   Private
en3              Refer to /etc/hosts   Private

Storage Option Information (for RAC only)

Choose Automatic Storage Management (ASM)

Create ASM Disk Group

For RAC only, create a disk group for GRID using High redundancy. Choose five disks with a 5 GB LUN size.

Create the other diskgroups for data, redo logs, FRA, etc. using External redundancy.

Specify ASM Password

Use different passwords for the SYS and ASMSNMP accounts. Keep the passwords in a safe place.

Privileged Operating System Groups

The privileged O/S groups to be specified during installation are as follows:

ASM Database Administrator (OSDBA) Group: dba
ASM Instance Administrator Operator (OSOPER) Group: dba
ASM Instance Administrator (OSASM) Group: dba

Specify Installation Location

Set Oracle Base to /u01/app/oracle


Set Software Location to /u01/grid/11.2.0/grid

When warned that the selected Oracle home is outside of Oracle base, click Yes to continue.

Create Inventory

Set Inventory Location to /u01/app/oraInventory

Perform Prerequisite Checks

Ignore the following errors if detected:
OS Kernel Parameter: tcp_ephemeral_low
OS Kernel Parameter: tcp_ephemeral_high
OS Kernel Parameter: udp_ephemeral_low
OS Kernel Parameter: udp_ephemeral_high

Summary

Click the “Save Response File” button.
Click the “Install” button.

Execute Configuration scripts

After installation is completed, request the sysadmin to run the following scripts as root on each cluster node.
/u01/app/oraInventory/orainstRoot.sh
/u01/grid/11.2.0/grid/root.sh

Applying 11.2.0.3 Grid Infrastructure PSU2

The Oracle Grid Infrastructure 11.2.0.3 PSU2 is 13696251 while the RDBMS PSU2 is 13696216. Both are included in the single zip file p13696251_112030_AIX64-5L.zip. Copy the zip file from oragctst21:/u02/data01/dbmaint/cd_images/AIX/11203/PSU/APR2012 to the /u02/software/11203cd/PSU/APR2012 directory.

Unzip all the zip files in /u02/software/11203cd/PSU/APR2012.

Do the following on each node. Login as user oragrid.
$ . ./ASMenv

Ask the sysadmin to run this command as root on each node.
For RAC, run
$ $GRID_HOME/crs/install/rootcrs.pl -unlock

For non-RAC standalone, run
$ $GRID_HOME/crs/install/roothas.pl -unlock

You will need a new version of OPatch to apply the patch.
As oragrid,
$ cd /u02/software/11203cd/PSU/APR2012
$ scp -p oragctst21:/u02/data01/dbmaint/cd_images/AIX/11203/OPATCH/*.zip .
$ cd /u01/grid/11.2.0/grid
$ mv OPatch OLD_OPatch
$ cp /u02/software/11203cd/OPATCH/p6880880_112000_AIX64-5L.zip .
$ unzip p6880880_112000_AIX64-5L.zip

On each node, login as user oragrid and run the following.
$ . ./ASMenv

$ $GRID_HOME/OPatch/opatch napply -oh /u01/grid/11.2.0/grid -local /u02/software/11203cd/PSU/APR2012/13696251

Ask the sysadmin to run this command as root on each node.
$ $GRID_HOME/rdbms/install/rootadd_rdbms.sh

For RAC, run
$ $GRID_HOME/crs/install/rootcrs.pl -patch

For non-RAC standalone, run
$ $GRID_HOME/crs/install/roothas.pl -patch


10. Oracle Database Software Installation

10.1. Method 1: Clone from non-RAC Database Software Gold Image

The gold image file we will use is the standalone non-RAC version of 11.2.0.3 Database Enterprise Edition with PSU2.

First, copy the gold image file to /u02/software/gold_images from oragctst21: /u02/data01/dbmaint/gold_images/AIX/AIX_nonRAC_db_binaries_11.2.0.3.2.tar.gz

As oracle, unzip the gold image to the ORACLE_HOME.
$ cd /u01/app/oracle/product/11.2.0/db_1 (create the directory if it does not exist)
$ gunzip -c /u02/software/gold_images/AIX_nonRAC_db_binaries_11.2.0.3.2.tar.gz | tar xf -

As root, run the following. Ignore the warning about aborting the pre-installation procedure.
$ /u01/app/oracle/product/11.2.0/db_1/clone/rootpre.sh

As oracle, run the Oracle cloning script. Ignore any warning about configTool.
$ cd /u01/app/oracle/product/11.2.0/db_1/clone/bin

# The following command is all on one line
$ /usr/bin/perl clone.pl ORACLE_HOME="/u01/app/oracle/product/11.2.0/db_1" ORACLE_BASE="/u01/app/oracle"

As root, run the following.
$ /u01/app/oraInventory/orainstRoot.sh -- only if GI not installed
$ /u01/app/oracle/product/11.2.0/db_1/root.sh

If Grid Infrastructure has been installed, point the SQL*Net files to the Grid Infrastructure location.

As oracle,
$ cd $ORACLE_HOME/network
$ rm -rf admin
$ ln -s /u01/grid/11.2.0/grid/network/admin admin


10.2. Method 2: Fresh Install of Database Software

Run the following script as user root.
$ cd /u02/software/11203cd/database
$ ./rootpre.sh

Run the Oracle Installer as user oracle.
$ export DISPLAY=<your_PC_name>:0.0
$ cd /u02/software/11203cd/database
$ ./runInstaller -ignoreInternalDriverError

Download Software Updates

Select “Skip software updates”

Select Installation Option

Select “Install database software only”

Grid Installation Options

For RAC, select “Oracle Real Application Clusters database installation”. Check the boxes for all the nodes. There is no need to test SSH Connectivity since it has already been verified.

For non-RAC, select “Single instance database installation”.

Select Database Edition

Select “Enterprise Edition”

Specify Installation Location

Oracle Base: /u01/app/oracle


Software Location: /u01/app/oracle/product/11.2.0/db_1

Privileged Operating System Groups

Database Administrator (OSDBA) Group: dba
Database Operator (OSOPER) Group (Optional): dba

Perform Prerequisite Check

Check the “Ignore All” box if the only check failures are the following:
OS Kernel Parameter: tcp_ephemeral_low
OS Kernel Parameter: tcp_ephemeral_high
OS Kernel Parameter: udp_ephemeral_low
OS Kernel Parameter: udp_ephemeral_high
Task resolv.conf Integrity

Summary

Click the “Save Response File” button.
Click the “Install” button.

Execute Configuration scripts

After installation is completed, request the sysadmin to run the following script as root on each cluster node.
/u01/app/oracle/product/11.2.0/db_1/root.sh

Applying 11.2.0.3 RDBMS PSU2

The Oracle Grid Infrastructure 11.2.0.3 PSU2 is 13696251 while the RDBMS PSU2 is 13696216. Both are included in the single zip file p13696251_112030_AIX64-5L.zip.


Copy the zip file from oragctst21:/u02/data01/dbmaint/cd_images/AIX/11203/PSU/APR2012 to the /u02/software/11203cd/PSU/APR2012 directory.

Unzip all the zip files in /u02/software/11203cd/PSU/APR2012.

On each node, login as user oracle and run the following.

$ /u02/software/11203cd/PSU/APR2012/13696251/custom/scripts/prepatch.sh -dbhome /u01/app/oracle/product/11.2.0/db_1

$ $ORACLE_HOME/OPatch/opatch napply -oh /u01/app/oracle/product/11.2.0/db_1 -local /u02/software/11203cd/PSU/APR2012/13696216

$ $ORACLE_HOME/OPatch/opatch apply -oh /u01/app/oracle/product/11.2.0/db_1 -local /u02/software/11203cd/PSU/APR2012/13696216

$ /u02/software/11203cd/PSU/APR2012/13696251/custom/scripts/postpatch.sh -dbhome /u01/app/oracle/product/11.2.0/db_1

11. Adjusting ASM Parameters

The default values of the ASM init parameters need to be scaled up to avoid potential node eviction in a RAC environment. The ASM load is tied to the number of connections to the RAC databases. ASM parameters are stored in an spfile inside ASM; to change them, update the spfile and then bounce the ASM instances by bouncing the cluster.

For a large system with plenty of memory, as user oragrid,
$ sqlplus / as sysasm
SQL> alter system set memory_target=2560m scope=spfile sid='*';
SQL> alter system set memory_max_target=2560m scope=spfile sid='*';
SQL> alter system set processes=300 scope=spfile sid='*';
SQL> alter system set sga_max_size=2g scope=spfile sid='*';
SQL> alter system set large_pool_size=208m scope=spfile sid='*';
SQL> alter system set shared_pool_size=1g scope=spfile sid='*';
SQL> shutdown immediate
SQL> startup

For a smaller system with limited memory, as user oragrid,
$ sqlplus / as sysasm
SQL> alter system set memory_target=1g scope=spfile sid='*';
SQL> alter system set memory_max_target=1g scope=spfile sid='*';
SQL> alter system set processes=300 scope=spfile sid='*';
SQL> alter system set sga_max_size=1g scope=spfile sid='*';
SQL> alter system set large_pool_size=100m scope=spfile sid='*';
SQL> alter system set shared_pool_size=500m scope=spfile sid='*';
SQL> shutdown immediate
SQL> startup

12. Create ASM Diskgroup

As oragrid, identify the CANDIDATE disks available to be assigned to ASM.
SQL> select path, OS_MB, HEADER_STATUS from v$asm_disk where HEADER_STATUS='CANDIDATE' order by 1;

Based on the space requirements in the SAN storage section of the database server specification, create the scripts to create the ASM diskgroups. Here is an example; change the diskgroup names and disk numbers as needed.

create diskgroup MDM_DATA_01 external redundancy disk
  '/dev/rhdisk25', '/dev/rhdisk26', '/dev/rhdisk27', '/dev/rhdisk28', '/dev/rhdisk29'
/

create diskgroup MDM_FRA external redundancy disk
  '/dev/rhdisk30', '/dev/rhdisk31', '/dev/rhdisk32', '/dev/rhdisk33', '/dev/rhdisk34', '/dev/rhdisk35', '/dev/rhdisk36', '/dev/rhdisk37'
/

create diskgroup MDM_REDO_01 external redundancy disk
  '/dev/rhdisk15', '/dev/rhdisk16', '/dev/rhdisk17', '/dev/rhdisk18', '/dev/rhdisk19'
/

create diskgroup MDM_REDO_02 external redundancy disk
  '/dev/rhdisk20', '/dev/rhdisk21', '/dev/rhdisk22', '/dev/rhdisk23', '/dev/rhdisk24'
/

Verify the ASM diskgroup status.
SQL> select name, state, total_mb, free_mb from v$asm_diskgroup;
NAME                           STATE       TOTAL_MB   FREE_MB
------------------------------ ----------- ---------- ----------
MDM_DATA_01                    MOUNTED        1280000    1279932
MDM_FRA                        MOUNTED         409600     409536
MDM_REDO_01                    MOUNTED          25600      25542
MDM_REDO_02                    MOUNTED          25600      25542

For each diskgroup, change the attributes.
SQL> alter diskgroup MDM_FRA set attribute 'COMPATIBLE.ASM'='11.2.0.0.0';
SQL> alter diskgroup MDM_FRA set attribute 'COMPATIBLE.RDBMS'='11.2.0.0.0';
SQL> alter diskgroup MDM_FRA set attribute 'DISK_REPAIR_TIME'='12 H';

13. TNS_ADMIN

The TNS_ADMIN environment variable should point to /u01/grid/11.2.0/grid/network/admin. This is the central location for the tnsnames.ora, listener.ora, and sqlnet.ora files. Do not put any of these files in the RDBMS code tree, as that causes unnecessary confusion. Instead, replace the /u01/app/oracle/product/11.2.0/db_1/network/admin directory with a soft link pointing to the TNS_ADMIN location.

$ cd /u01/app/oracle/product/11.2.0/db_1/network
$ rmdir admin
$ ln -s /u01/grid/11.2.0/grid/network/admin admin

14. Register Listener and ASM to Grid Infrastructure

By default, the listeners and ASM will not always be started automatically after a server reboot. We have to run the following commands to add the listener and the ASM instance to the Grid Infrastructure so that they are started automatically.

As user oragrid, shut down the listener first, and then run the following command.
$ srvctl add listener -l <listener_name> -o $ORACLE_HOME

For example,
$ srvctl add listener -l LISTENER -o $ORACLE_HOME

Now, start the listener.

As user oragrid, configure ASM to always start automatically after the server starts up.
$ crsctl modify resource ora.asm -attr AUTO_START=always

As user oragrid, run the following crsctl command to verify. You should see the ora.<listener_name>.lsnr and ora.asm resources.

$ crsctl status res -t

Cluster Resources
---------------------------------------------------------------------
ora.LISTENER.lsnr
      ONLINE  ONLINE       etodboratst01
ora.asm
      ONLINE  ONLINE       etodboratst01            Started


15. Post-Install Database Setup

Enforce archive logging in the database.
SQL> alter database force logging;

Set database passwords to be case-insensitive.
SQL> alter system set sec_case_sensitive_logon=FALSE scope=both sid='*';

Move the SYS.AUD$ table from the dictionary managed SYSTEM tablespace to SYSAUX.
SQL> exec DBMS_AUDIT_MGMT.SET_AUDIT_TRAIL_LOCATION ( audit_trail_type => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD, audit_trail_location_value => 'SYSAUX');

Enable block change tracking, for example using the EIRS_DATA_01 diskgroup.
SQL> alter database enable block change tracking using file '+EIRS_DATA_01';

Enable flashback database if needed (optional).
SQL> alter database flashback on;

Configure RMAN.
$ rman target /
RMAN> CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 15 DAYS;
RMAN> CONFIGURE DEFAULT DEVICE TYPE TO 'SBT_TAPE';
RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;
RMAN> CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' PARMS 'ENV=(TDPO_OPTFILE=/etc/tdpo.opt)';
RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 1 TIMES TO 'SBT_TAPE';

If this is a primary database with a standby configured,
RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON STANDBY;
