Install Grid Infrastructure 11gR2 on Oracle VM
Created by : Hans Camu Date : 19 February 2011 http://camoraict.wordpress.com
This paper is the fourth in a series describing how to install Oracle VM Server and several Oracle VM guests. In this paper I will describe how to install Grid Infrastructure 11gR2 (GI) on 2 Oracle Virtual Machines.
The steps described in this paper will be:
Create 2 virtual machines on the command line using an installation directory and a kickstart file
Configure the virtual machines to be able to successfully install GI
Install Grid Infrastructure 11gR2
Install Oracle 11g RDBMS Software
Patch GI and database software with latest GI bundle and PSU
Create an Oracle 11g RAC database
The installation will take place on virtual machines with 4GB of memory. This guide is for testing purposes only. It is not supported to run a production environment with a setup like the one described in this paper.
1. Create the virtual machines
In the next few steps I will make some preparations for creating the virtual machines. First I will create the directories to store the files for the virtual machines:
[root@oraovs01 /]# mkdir /OVS/running_pool/indy
[root@oraovs01 /]# mkdir /OVS/running_pool/dean
As you can see, I will name the virtual machines indy and dean. A 2-node cluster also behaves like a twin, so I named the nodes after my sister's twins. Now create the files for the virtual machines. I chose to create sparse files, which do not immediately occupy all space; over time they grow up to the maximum defined size. For the local file system and /u01 this is not really a problem and will not lead to any performance problems. [root@oraovs01 /]# dd if=/dev/zero of=/OVS/running_pool/indy/system.img bs=1G
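The sparse-file technique can be sketched with dd by seeking past the end of the file without writing any data. The local /tmp path and the 12 GB size below are assumptions for illustration; the paper stores its images under /OVS/running_pool/indy.

```shell
# Sketch: create a 12 GB sparse image. count=0 writes no data; seek=12
# (in bs units of 1 GiB) makes dd truncate the file at the 12 GiB mark,
# so the file reports its full size while allocating almost no blocks.
IMG=/tmp/system.img
dd if=/dev/zero of=$IMG bs=1G count=0 seek=12
ls -ls $IMG   # first column shows blocks actually allocated (near 0)
```

Comparing du -h with ls -lh on the file makes the difference visible: the apparent size is 12 GB, while actual disk usage stays near zero until the guest writes data.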
To create a virtual machine manually you must define the ramdisk and kernel needed for the initial boot. Copy the boot ramdisk and kernel to the /boot directory. [root@oraovs01::/root]# cp /mount/OEL5u5_x86_64/images/xen/vmlinuz
The last step before we can actually create the virtual machines is to create a configuration file for each virtual machine. This is the vm.cfg for virtual machine indy:
[root@oraovs01::/root]# vi /OVS/running_pool/indy/vm.cfg
kernel = "/boot/vmlinuz_OEL5_x86_64"
ramdisk = "/boot/initrd_OEL5_x86_64.img"
extra = "text ks=nfs:192.168.0.200:/software/kickstart/OEL5u5_x86_64_GI.cfg"
#bootloader = '/usr/bin/pygrub'
disk = ['file:/OVS/running_pool/indy/system.img,xvda,w',
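For reference, a complete vm.cfg for the install boot might look as follows. The kernel, ramdisk, extra and first disk lines come from the paper; name, memory and vcpus match the xm list output shown later, while the vif bridge names and the second /u01 disk are assumptions:

```text
kernel  = "/boot/vmlinuz_OEL5_x86_64"
ramdisk = "/boot/initrd_OEL5_x86_64.img"
extra   = "text ks=nfs:192.168.0.200:/software/kickstart/OEL5u5_x86_64_GI.cfg"
#bootloader = '/usr/bin/pygrub'
name    = "indy"
memory  = 4096
vcpus   = 1
vif     = ['bridge=xenbr0', 'bridge=xenbr1']
disk    = ['file:/OVS/running_pool/indy/system.img,xvda,w',
           'file:/OVS/running_pool/indy/u01.img,xvdb,w']
```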
Now we are ready to create the first virtual machine: [root@oraovs01::/root]# xm create -c /OVS/running_pool/indy/vm.cfg
Because of the -c option a console is opened in which I can perform the actions to create the virtual machine.
Action: Select eth0 as the network device to install through. Click OK.
Action: Enable Manual configuration for the IPv4 support. Disable IPv6 support. Click OK.
Action: Specify the TCP/IP configuration for the virtual machine. Click OK.
Action: Select eth1 and click Edit. Eth1 will be configured as the private network for the clusterware communication.
Action: Select Activate on boot and Enable IPv4 support. Click OK.
Action: Select Manual address configuration and specify the TCP/IP configuration for the private network. Click OK.
Action: All network devices are now configured. Click OK.
Action: Accept the default values for the Miscellaneous Network Settings. Click OK.
Action: Specify the Hostname Configuration for the virtual machine. Click OK.
Based on the specifications in the kickstart file all dependencies for the installation will be checked.
OEL 5.5 64-bit is now installed. This only takes a few minutes.
Action: The installation is now finished and the virtual machine is rebooted. Check if the virtual machine is running again: [root@oraovs01::/root]# xm list
Name ID Mem VCPUs State Time(s)
Domain-0 0 564 8 r----- 2516.4
indy 1 4096 1 -b---- 1.2
Open the console to check what is happening: [root@oraovs01::/root]# xm console indy
Action: This is default behavior: the virtual machine restarts and will start the installation procedure again. To stop this, close your console session with Ctrl+] (Control + ]). Now stop the virtual machine: [root@oraovs01::/root]# xm destroy indy
OR
[root@oraovs01::/root]# xm shutdown indy
Now modify the virtual machine's configuration file. You must deactivate the kernel, ramdisk and extra lines and activate the bootloader line: [root@oraovs01::/root]# vi /OVS/running_pool/indy/vm.cfg
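After the edit, the boot-related part of vm.cfg should look like this (the three install-boot lines commented out, the bootloader line active):

```text
#kernel = "/boot/vmlinuz_OEL5_x86_64"
#ramdisk = "/boot/initrd_OEL5_x86_64.img"
#extra = "text ks=nfs:192.168.0.200:/software/kickstart/OEL5u5_x86_64_GI.cfg"
bootloader = '/usr/bin/pygrub'
```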
Now start the virtual machine again: [root@oraovs01::/root]# xm create -c /OVS/running_pool/indy/vm.cfg
The virtual machine is being started. Just wait a short time until it has started completely.
Action: You can now log in and check whether the installation was performed as expected.
You now have one node for your cluster. Now repeat the steps to create the second virtual machine called dean.
2. Create shared ASM disks
To be able to install Grid Infrastructure 11gR2 you must have disks that can be shared between the nodes in the cluster. In this chapter I will create these shared disks. A new feature in GI 11gR2 is that you can now store the OCR and voting disks in ASM. The files created in the next steps will be used to store the OCR and voting disk and to create an ASM diskgroup to store database files in. Unlike the previously created files, it is recommended not to create sparse files but to fully allocate the files for ASM usage. This will definitely improve the performance of the virtual machines. [root@oraovs01 /]# dd if=/dev/zero of=/OVS/sharedDisk/asmocrvote.img bs=1M
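In contrast to the sparse system images, the ASM files are written out in full. A sketch with dd follows; the local /tmp path and the demo size of 64 MB are assumptions (a real OCR/voting file would be much larger):

```shell
# Sketch: fully allocate an image by actually writing zero blocks.
# Every block is written, so disk usage equals the file size from the start.
IMG=/tmp/asmocrvote.img
dd if=/dev/zero of=$IMG bs=1M count=64
ls -ls $IMG   # blocks column now corresponds to the full file size
```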
To be able to use these newly created shared disks the configuration file vm.cfg of both virtual machines must be modified: [root@oraovs01::/root]# vi /OVS/running_pool/indy/vm.cfg
disk = ['file:/OVS/running_pool/indy/system.img,xvda,w',
The shared disks can be attached to the virtual machines online; there is no need to stop the virtual machines first. This can be accomplished with the xm block-attach command: [root@oraovs01::/root]# xm block-attach indy file:/OVS/sharedDisk/asmocrvote.img
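The command line above shows only the backend file; the full xm block-attach syntax also takes the frontend device name and an access mode. A sketch (the frontend device name is an assumption; 'w!' is the shareable-write mode needed for a disk attached to multiple guests):

```shell
# Attach the shared image to guest indy as xvdc, shareable-writable.
xm block-attach indy file:/OVS/sharedDisk/asmocrvote.img xvdc 'w!'
```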
Repeat this step for virtual machine dean! After attaching the shared disks check if the devices are available: root@indy::/root
$ ls -l /dev/xvd*
brw-r----- 1 root disk 202, 0 Feb 19 14:13 /dev/xvda
brw-r----- 1 root disk 202, 1 Feb 19 14:13 /dev/xvda1
brw-r----- 1 root disk 202, 2 Feb 19 14:13 /dev/xvda2
brw-r----- 1 root disk 202, 16 Feb 19 14:13 /dev/xvdb
brw-r----- 1 root disk 202, 17 Feb 19 14:13 /dev/xvdb1
brw-r----- 1 root disk 202, 32 Feb 19 14:49 /dev/xvdc
brw-r----- 1 root disk 202, 48 Feb 19 14:50 /dev/xvdd
brw-r----- 1 root disk 202, 64 Feb 19 14:50 /dev/xvde
brw-r----- 1 root disk 202, 80 Feb 19 14:50 /dev/xvdf
The new disks are now available as devices /dev/xvdc through /dev/xvdf. Before you can use the devices they must be partitioned first. Partition the devices on only 1 virtual machine: root@indy::/root
$ fdisk /dev/xvdc
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-130, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-130, default 130):
Using default value 130
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
Repeat this for devices /dev/xvdd, /dev/xvde and /dev/xvdf. Perform a quick check to see if the partitions are available: root@indy::/root
$ ls -l /dev/xvd*
brw-r----- 1 root disk 202, 0 Feb 19 14:17 /dev/xvda
brw-r----- 1 root disk 202, 1 Feb 19 14:18 /dev/xvda1
brw-r----- 1 root disk 202, 2 Feb 19 14:17 /dev/xvda2
brw-r----- 1 root disk 202, 16 Feb 19 14:17 /dev/xvdb
brw-r----- 1 root disk 202, 17 Feb 19 14:17 /dev/xvdb1
brw-r----- 1 root disk 202, 32 Feb 19 14:55 /dev/xvdc
brw-r----- 1 root disk 202, 33 Feb 19 14:55 /dev/xvdc1
brw-r----- 1 root disk 202, 48 Feb 19 14:58 /dev/xvdd
brw-r----- 1 root disk 202, 49 Feb 19 14:58 /dev/xvdd1
brw-r----- 1 root disk 202, 64 Feb 19 14:58 /dev/xvde
brw-r----- 1 root disk 202, 65 Feb 19 14:58 /dev/xvde1
brw-r----- 1 root disk 202, 80 Feb 19 14:58 /dev/xvdf
brw-r----- 1 root disk 202, 81 Feb 19 14:58 /dev/xvdf1
Now run the partprobe command to update the kernel with the modified partition table.
root@indy::/root
$ partprobe /dev/xvdc
root@indy::/root
$ partprobe /dev/xvdd
root@indy::/root
$ partprobe /dev/xvde
root@indy::/root
$ partprobe /dev/xvdf
Repeat this step for virtual machine dean!
Now that the devices are ready, they must be given the correct permissions; otherwise they cannot be used while installing Grid Infrastructure. There are multiple ways to accomplish this, such as UDEV rules, ASMLib and multipath rules. I will use UDEV rules, both to set the permissions and to give the devices a logical name. First create the UDEV permissions file for the ASM disk devices: root@indy::/root
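A sketch of such a rules file, consistent with the device listing shown afterwards (minor numbers 33, 49, 65 and 81 map xvdc1, xvdd1, xvde1 and xvdf1 to the asm* names with owner oracle, group asmadmin, mode 0660); the file name is an assumption:

```text
# /etc/udev/rules.d/99-oracle-asmdevices.rules (file name is an assumption)
KERNEL=="xvdc1", NAME="asmocrvote1p1", OWNER="oracle", GROUP="asmadmin", MODE="0660"
KERNEL=="xvdd1", NAME="asmdisk1p1",    OWNER="oracle", GROUP="asmadmin", MODE="0660"
KERNEL=="xvde1", NAME="asmdisk2p1",    OWNER="oracle", GROUP="asmadmin", MODE="0660"
KERNEL=="xvdf1", NAME="asmdisk3p1",    OWNER="oracle", GROUP="asmadmin", MODE="0660"
```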
Now activate the UDEV rules (on both nodes indy and dean): root@indy::/root
$ /sbin/udevcontrol reload_rules
root@indy::/root
$ /sbin/start_udev
Starting udev: [ OK ]
Check if the permissions are set correctly and if the devices are created: $ ls -l /dev/asm*
brw-rw---- 1 oracle asmadmin 202, 49 Feb 20 12:41 /dev/asmdisk1p1
brw-rw---- 1 oracle asmadmin 202, 65 Feb 20 12:41 /dev/asmdisk2p1
brw-rw---- 1 oracle asmadmin 202, 81 Feb 20 12:41 /dev/asmdisk3p1
brw-rw---- 1 oracle asmadmin 202, 33 Feb 20 12:41 /dev/asmocrvote1p1
If you want to verify that the configuration from the steps above also survives a node reboot, this is the time to test it.
3. Install some additional OS packages
In this chapter I will install some additional OS packages. The first is the command line wrapper rlwrap. With this tool it is possible to recall previous commands in command line tools like sqlplus, rman and so on. Download the rlwrap package and install it: root@indy::/root
4. Configure NTP
For an Oracle cluster to function correctly it is of the utmost importance that some kind of time synchronization is in place. This is possible with the new CTSS (Cluster Time Synchronization Service) daemon, but I prefer to configure NTP on the hosts. First make sure the guest will not synchronize its clock with dom0. This is done by adding the xen.independent_wallclock parameter to the /etc/sysctl.conf file: root@indy::/root
$ vi /etc/sysctl.conf
xen.independent_wallclock = 1
To activate the parameter: root@indy::/root
$ sysctl -p
The NTP slewing option (-x) must be configured. Also prevent syncing the hardware clock to avoid NTP start errors: root@indy::/root
$ vi /etc/sysconfig/ntpd
OPTIONS="-u ntp:ntp -p /var/run/ntpd.pid -x"
SYNC_HWCLOCK=no
root@indy::/root
$ chmod -x /sbin/hwclock
Now the NTP daemon can be started: root@indy::/root
$ service ntpd start
ntpd: Synchronizing with time server: [ OK ]
Starting ntpd: [ OK ]
NTP must also be started when the node is rebooted. This can be accomplished with the chkconfig utility: root@indy::/root
$ chkconfig ntpd on
With the same chkconfig utility you can verify the modification: root@indy::/root
$ chkconfig --list ntpd
ntpd 0:off 1:off 2:on 3:on 4:on 5:on 6:off
5. Install Grid Infrastructure 11gR2
We are now almost ready to start installing Grid Infrastructure. First we have to download the software. In this paper I use version 11.2.0.2, which can only be downloaded as a patch from My Oracle Support (patch 10098816).
For this installation you only need to download:
p10098816_112020_Linux-x86-64_1of7.zip - Database binaries part 1
p10098816_112020_Linux-x86-64_2of7.zip - Database binaries part 2
p10098816_112020_Linux-x86-64_3of7.zip - Grid Infrastructure binaries
Unzip the files in your staging area after downloading them. To prepare the OS to install GI without additional steps during the installation, the cvuqdisk package must be installed before the installation. This package is available as part of p10098816_112020_Linux-x86-64_3of7.zip.
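The unpacking and cvuqdisk installation can be sketched as follows. The staging path matches the one used later in the paper; the exact rpm file name is an assumption, so a wildcard is used (the package sits in the rpm directory of the unzipped grid stage):

```shell
cd /software/Database/11.2.0.2
unzip p10098816_112020_Linux-x86-64_3of7.zip
# cvuqdisk must be installed as root on both nodes before running the OUI.
# CVUQDISK_GRP names the OS group that owns the Oracle software (dba here).
export CVUQDISK_GRP=dba
rpm -ivh grid/rpm/cvuqdisk-*.rpm
```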
Now we can start installing GI 11gR2. Set the DISPLAY parameter and start runInstaller. oracle@indy::/home/oracle
$ export DISPLAY=192.168.0.105:0.0
oracle@indy::/software
$ cd /software/Database/11.2.0.2/grid/
oracle@indy::/software/Database/11.2.0.2/grid
$ ./runInstaller
Action: Select Skip software updates. Click Next.
Action: Select Install and Configure Oracle Grid Infrastructure for a Cluster. Click Next.
Action: Select Advanced Installation. Click Next.
Action: Select the language of your choice. Click Next.
Action: SCAN is a new 11gR2 feature. SCAN (Single Client Access Name) makes it possible to resolve up to 3 IP addresses with a single name. Best practice is to configure your SCAN IP addresses in DNS.
For this paper I will use the local /etc/hosts file to resolve the SCAN address. Because of this choice it is only possible to resolve 1 IP address. At this point of the installation you don't have to take additional steps to configure the /etc/hosts file, because this was already taken care of while installing the virtual machine; it was one of the steps defined in the kickstart file. We will not use the GNS (Grid Naming Service) feature in this paper. Action: Specify the Cluster Name, SCAN Name and SCAN Port. Deselect Configure GNS. Click Next.
Action: Click Edit.
Action: Remove domain name from the entries. Click OK.
Action: Click Add to add the 2nd virtual machine to the cluster.
Action: Specify the Hostname and the Virtual IP Name. Click OK.
Action: At this point it is possible to let the installer configure SSH connectivity between the nodes. Click SSH Connectivity.
Action: Specify the password of the oracle OS user. Click Setup.
Action: Wait until the SSH Connectivity is setup.
Action: Click OK.
Action: Click Next.
Action: The network interfaces are configured correctly. Click Next.
Action: At this point no disks are displayed, but they are there! Click Change Discovery Path.
Action: Specify the Disk Discovery Path as /dev/asm*. Click OK.
Action: Specify the Disk Group Name, set the Redundancy to External and select the Candidate Disk used for the ASM diskgroup that will store the OCR and voting disk. Click Next.
Action: Specify the passwords for the SYS and ASMSNMP accounts. Click Next.
Action: Select Do not use Intelligent Platform Management Interface (IPMI). Click Next.
Action: Specify dba as the Oracle ASM Operator (OSOPER for ASM) Group. Click Next.
Action: Click Yes.
Action: Specify the Oracle Base and Software Location for the GI home. Click Next.
Action: Specify the Inventory Directory. Click Next.
Action: Click Yes.
Action: Wait while the prerequisite checks are performed.
2 checks return errors. The Device Checks for ASM point to Bug 10357213 (ASM DEVICE CHECK FAILS WITH PRVF-5184 DURING GI INSTALL) and can be ignored. This has no impact on the installation.
Because the OS memory is greater than 4GB, Oracle recommends configuring Huge Pages. But I also want to use the Automatic Memory Management feature for the database, and this feature is not compatible with Huge Pages, so I will ignore this error. Action: Select Ignore All and click Next.
Action: Click Install.
Action: Wait while GI gets installed.
Action: Execute the scripts as user root on the local node first and then on the second node.
First node (indy): root@indy::/root
$ /u01/app/oracle/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oracle/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oracle/oraInventory to dba.
The execution of the script is complete.
root@indy::/root
$ /u01/app/grid/11.2.0.2/root.sh
Running Oracle 11g root script...
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/grid/11.2.0.2
Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS
daemon on node indy, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the
cluster
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
Now click OK.
Action: Wait while the last configuration steps are performed.
This error is returned because I did not set up DNS for the SCAN name but added it to the hosts file instead. For this reason the error can safely be ignored. INFO: Checking Single Client Access Name (SCAN)...
INFO: Checking TCP connectivity to SCAN Listeners...
INFO: TCP connectivity to SCAN Listeners exists on all cluster nodes
INFO: Checking name resolution setup for "gridcl01-scan"...
INFO: ERROR:
INFO: PRVF-4664 : Found inconsistent name resolution entries for SCAN name
"gridcl01-scan"
INFO: ERROR:
INFO: PRVF-4657 : Name resolution setup check for "gridcl01-scan" (IP address:
192.168.0.212) failed
INFO: ERROR:
INFO: PRVF-4664 : Found inconsistent name resolution entries for SCAN name
"gridcl01-scan"
INFO: Verification of SCAN VIP and Listener setup failed
Action: Click OK.
Action: Click Skip.
Action: Click Next.
Action: Click Yes.
Action: Click Close. Perform a quick check to see if all GI processes are available: oracle@indy::/home/oracle
$ . oraenv
ORACLE_SID = [oracle] ? +ASM1
The Oracle base remains unchanged with value /u01/app/oracle
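Once the environment is set with oraenv, the clusterware stack can be verified with crsctl; both commands below are standard 11gR2 syntax:

```shell
# Verify the CRS, CSS and EVM daemons on every node of the cluster...
crsctl check cluster -all
# ...and list all clusterware-managed resources with their current state.
crsctl stat res -t
```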
At this point the base installation of the GI software is completed.
6. Install Oracle 11gR2 RDBMS software
We are now ready to continue with the installation of the Oracle 11g RDBMS software so we can create a RAC database in a next step. Like the GI software, I will use Oracle RDBMS version 11.2.0.2.
In a previous step I already downloaded the software needed for this step. Set the DISPLAY parameter and start runInstaller. oracle@indy::/home/oracle
$ export DISPLAY=192.168.0.105:0.0
oracle@indy::/software
$ cd /software/Database/11.2.0.2/database
oracle@indy::/software/Database/11.2.0.2/database
$ ./runInstaller
Action: Deselect I wish to receive security updates via My Oracle Support. Click Next.
Repeat this for $ORACLE_HOME /u01/app/oracle/product/11.2.0.2/db_000. Now I will install GI Bundle #1. The next steps must be performed on all nodes: oracle@indy::/home/oracle
$ . oraenv
ORACLE_SID = [oracle] ? +ASM1
The Oracle base remains unchanged with value /u01/app/oracle
OK, an error, but this one we can ignore: it only tells us that a subset of the patch is already installed, because it was applied earlier when installing the database section of GI Bundle #1.
8. Create ASM diskgroup for database files
We are almost ready to create a RAC database, but first we have to create an ASM diskgroup to store the database files. I will use the asmca utility for this purpose. Set the DISPLAY parameter and environment and start asmca: oracle@indy::/home/oracle
$ export DISPLAY=192.168.0.105:0.0
oracle@indy::/home/oracle
$ . oraenv
ORACLE_SID = [oracle] ? +ASM1
The Oracle base remains unchanged with value /u01/app/oracle
oracle@indy:+ASM1:/home/oracle
$ asmca
Action: Click Create.
Action: Specify the Disk Group Name, Redundancy as External (None) and select the Candidate Disks to be part of the ASM diskgroup. Click OK.
Action: Wait while the ASM diskgroup is being created.
Action: Click OK.
Action: Click Exit.
Action: Click Yes.
9. Create RAC Database
And finally we are able to create an Oracle RAC database. I will give an example by creating a database using the dbca utility and some options. Which options you choose depends on your needs. Set the DISPLAY parameter and environment and start dbca: oracle@indy::/home/oracle
Action: Specify the Passwords for the SYS and SYSTEM accounts. Click Next.
Action: Select Use Oracle-Managed Files and +DGDATA as Database Area. Click Next.
Action: Deselect Specify Fast Recovery Area. Select Enable Archiving and click Edit Archive Mode Parameters.
Action: Specify +DGDATA as Archive Log Destination. Click OK.
Action: Click Next.
Action: Select all the options you want to install in your database. Click Next.
Action: Under Typical, specify the Memory Size (SGA and PGA). A minimum size of 1024 MB is recommended; this will also avoid ORA-04031 errors while creating the database. Select Use Automatic Memory Management. Click the Character Sets tab.
Action: Select Use Unicode (AL32UTF8) as Database Character Set and UTF8 as National Character Set. Click Next.
Action: Click Next.
Action: If you are curious about the scripts generated by the dbca utility then select Generate Database Creation Scripts. Click Finish to start creating the RAC database.
Action: Click OK.
Action: Click OK.
Action: Wait while the RAC database is being created.
Action: Click Exit.
Check if all instances are running: $ srvctl status database -d odba1
Instance ODBA12 is running on node dean
Instance ODBA11 is running on node indy
Once the database is created, edit the /etc/oratab file and add the instance.
First node (indy):
ODBA11:/u01/app/oracle/product/11.2.0.2/db_000:N
Second node (dean):
ODBA12:/u01/app/oracle/product/11.2.0.2/db_000:N
10. Loading Modified SQL Files into the Database
In chapter 7 (Install Oracle 11gR2 RDBMS and GI patches) we installed the latest PSU. To load the modified SQL files into the database, follow the next steps: