Oracle 11g R2 Real Application Cluster Implementation on RHEL 5.0
Please download the presentation from http://www.slideshare.net/slideshow/embed_code/14108797 - I have covered all the concepts in that presentation.
Specification:
1) 2-node RAC setup
2) OS: each node runs Red Hat Enterprise Linux Server release 5 (x86_64, kernel 2.6.18-8.el5)
3) Storage server: 73 GB, Ubuntu 12.04 (x86_64)
4) Oracle 11gR2 Database
5) Oracle 11gR2 Grid Infrastructure
Pre-Installation:
1) All the nodes must have the same OS, kernel version and architecture (x86 or x86_64).
On Linux run $ uname -a; it should show the same kernel and architecture on every node.
2) Each of the servers should have 2 NIC cards, where each group of two (one from each machine) has the same interface name (say eth0/eth1) and IP subnet mask. Ex:
1) Each machine has eth0 with an IP on one subnet, say 10.88.33.X (public network).
2) Each machine has eth1 with an IP on another subnet, say 10.88.32.X (private network for the interconnect).
Keep the two groups on different VLANs as above; they can all co-exist on the same physical switch, that is not an issue.
3) Configure the VIPs by adding entries in the /etc/hosts file (the VIPs must be on the same subnet as the public IPs).
4) Stop the ntpd service: Linux syncs the system time using the NTP protocol, but Oracle RAC has a cluster time synchronization service built in. If ntpd is running, Oracle will use it; otherwise Oracle activates its own cluster synchronization service. It is recommended to use Oracle's cluster sync service, so disable ntpd on all the nodes:
[root@PTS0009 ~]# service ntpd stop
ntpd is stopped
[root@PTS0009 ~]# chkconfig ntpd off
[root@PTS0009 ~]# chkconfig --list ntpd
ntpd    0:off 1:off 2:off 3:off 4:off 5:off 6:off
[root@PTS0009 ~]# chkconfig ntpd --del
[root@PTS0009 ~]# mv /etc/ntp.conf /etc/ntp.conf.org
[root@PTS0009 ~]# mv /var/run/ntpd.pid /var/run/ntpd.pid.org
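For step 3, the /etc/hosts entries on each node might look like the sketch below. The hostnames and most addresses are illustrative, not taken from this setup; the point is that the VIPs sit on the public subnet while the interconnect has its own:

```shell
# Illustrative /etc/hosts fragment for a 2-node cluster -- hostnames
# and addresses are examples only; substitute your own.
# public
10.88.33.20   pts0009
10.88.33.21   pts0006
# private interconnect
10.88.32.20   pts0009-priv
10.88.32.21   pts0006-priv
# virtual IPs (same subnet as the public addresses)
10.88.33.27   pts0009-vip
10.88.33.28   pts0006-vip
```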
5) Configure SCAN (Single Client Access Name: http://www.oracle.com/technetwork/products/clustering/overview/scan-129069.pdf). I have a DNS setup in my network so I will use it. If you don't have a DNS setup you can hardcode these IP addresses in the /etc/hosts file.
SCAN using DNS:
1) Add a SCAN entry on the DNS server with 3 IP addresses for the same host name, and enable round robin on the DNS (demo: http://www.oraclemasters.in/?p=1296#dns). Ex:
ORARAC-SCAN: 10.88.33.24
ORARAC-SCAN: 10.88.33.25
ORARAC-SCAN: 10.88.33.26
2) Once the entry is added in DNS, test it from a local client:
root@PTS0009:~# nslookup orarac-scan
Server:     10.77.224.101
Address:    10.77.224.101#53
Name: orarac-scan
Address: 10.88.33.26
Name: orarac-scan
Address: 10.88.33.24
Name: orarac-scan
Address: 10.88.33.25
If you execute this command multiple times, you will notice that the order of the IP addresses varies in round-robin fashion. If not, ask your DNS admin to enable round robin on the server.
3) These 3 IP addresses are virtual IPs, so pinging each of them should fail until we install the Grid clusterware.
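A quick way to confirm round robin from the client is to take the first resolved address of several consecutive lookups and count the distinct values. This sketch uses the orarac-scan name registered above:

```shell
#!/bin/sh
# Take the first resolved address from one lookup, skipping the
# "Address: ...#53" line, which is the DNS server itself.
first_addr() {
    nslookup orarac-scan | awk '/^Address:/ && $2 !~ /#/ { print $2; exit }'
}

# With round robin enabled, repeated lookups rotate the first address.
distinct=$(for i in 1 2 3 4 5 6; do first_addr; done | sort -u | wc -l)
if [ "$distinct" -gt 1 ]; then
    echo "Round robin looks enabled ($distinct distinct first addresses)"
else
    echo "Always the same first address -- ask the DNS admin about round robin" >&2
fi
```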
<< Execute the following steps 6-9 on all the nodes >>
6) Make sure each of the nodes has the following packages installed (Oracle software prerequisites):
binutils-2.17.50.0.6
compat-libstdc++-33-3.2.3
elfutils-libelf-0.125
elfutils-libelf-devel-0.125
elfutils-libelf-devel-static-0.125
gcc-4.1.2
gcc-c++-4.1.2
glibc-2.5-24
glibc-common-2.5
glibc-devel-2.5
glibc-headers-2.5
kernel-headers-2.6.18
ksh-20060214
libaio-0.3.106
libaio-devel-0.3.106
libgcc-4.1.2
libgomp-4.1.2
libstdc++-4.1.2
libstdc++-devel-4.1.2
make-3.81
numactl-devel-0.9.8.i386
sysstat-7.0.2
unixODBC-2.2.11
unixODBC-devel-2.2.11
iscsi-initiator-utils-6.2.0.868-0.7.el5  <-- for iSCSI-based SAN configuration
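To avoid hunting down failures during the installer's prerequisite checks, a loop like this can verify the list on each node. It checks by package name only; your exact versions may differ from the ones listed above:

```shell
#!/bin/sh
# Check each prerequisite RPM by name; print any that are missing.
PKGS="binutils compat-libstdc++-33 elfutils-libelf elfutils-libelf-devel
gcc gcc-c++ glibc glibc-common glibc-devel glibc-headers kernel-headers
ksh libaio libaio-devel libgcc libgomp libstdc++ libstdc++-devel make
numactl-devel sysstat unixODBC unixODBC-devel iscsi-initiator-utils"

missing=0
for p in $PKGS; do
    if ! rpm -q "$p" >/dev/null 2>&1; then
        echo "MISSING: $p"
        missing=$((missing + 1))
    fi
done
echo "$missing package(s) missing"
```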
7) Modify the kernel parameters:
Log in as root and append the following entries to the /etc/sysctl.conf file:
kernel.shmmax = 2147483648
kernel.shmmni = 4096
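The values can be appended and loaded without a reboot, as sketched below. Only the two parameters from this document are shown; the 11gR2 installation guide lists further semaphore, file-handle and network settings as well:

```shell
# As root: append the settings and re-read the file so they take
# effect immediately; sysctl -p echoes every value it applies.
cat >> /etc/sysctl.conf <<'EOF'
kernel.shmmax = 2147483648
kernel.shmmni = 4096
EOF
sysctl -p
```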
10) SAN storage setup as shown in the PPT.
Device names not persistent after a reboot of the RAC nodes:
I have seen the iSCSI device names change after a reboot of the RAC nodes; for example, device /dev/sda1 becomes /dev/sdb1. This caused very serious issues for the OCR and voting disks as well as for the disks formatted with OCFS2: they do not get mounted automatically because the names are not persistent across reboots. A utility called "devlabel", developed by Dell Inc., is available as a free download from Dell's official website. It creates a symlink to the device name keyed on the physical device's UUID. Since the UUID of a device stays the same across reboots, the symlink you create with devlabel always points to the right device.
1) Create symlinks:
[root@PTS0006 Libs]# rpm -ivhU devlabel-0.48.03-10.x86_64.rpm
warning: devlabel-0.48.03-10.x86_64.rpm: Header V3 DSA signature: NOKEY, key ID 025e513b
Preparing...      ########################################### [100%]
   1:devlabel     ########################################### [100%]
[root@PTS0006 Libs]# ls -l /dev/sd*
brw-r----- 1 root disk 8,  0 Aug 9 16:57 /dev/sda
brw-r----- 1 root disk 8,  1 Aug 9 16:57 /dev/sda1
brw-r----- 1 root disk 8, 16 Aug 9 16:57 /dev/sdb
brw-r----- 1 root disk 8, 17 Aug 9 16:58 /dev/sdb1
brw-r----- 1 root disk 8, 32 Aug 9 16:55 /dev/sdc
brw-r----- 1 root disk 8, 33 Aug 9 16:55 /dev/sdc1
[root@PTS0006 Libs]# devlabel add -d /dev/sda1 -s /dev/ocr
SYMLINK: /dev/ocr -> /dev/sda1
Added /dev/ocr to /etc/sysconfig/devlabel
[root@PTS0006 Libs]# devlabel add -d /dev/sdb1 -s /dev/oradata
SYMLINK: /dev/oradata -> /dev/sdb1
Added /dev/oradata to /etc/sysconfig/devlabel
[root@PTS0006 Libs]# devlabel add -d /dev/sdc1 -s /dev/ocfs2
[root@PTS0006 Libs]# cat /etc/sysconfig/devlabel
# devlabel configuration file
#
# This file should generally not be edited by hand.
# Instead, use the /sbin/devlabel program to make changes.
# devlabel by Gary Lerhaupt <[email protected]>
#
# format: <SYMLINK> <DEVICE> <UUID>
# or format: <RAWDEVICE> <DEVICE> <UUID>
/dev/ocr /dev/sda1 S83.1:49455400000000006c756e30000000000000000000000000IETVIRTUAL-DISKsector32
2) Update the /etc/rc.local file (all RAC nodes): after a reboot of the RAC nodes, devlabel does not get started automatically, so add the reload command in /etc/rc.local:
[root@PTS0006 Libs]# devlabel reload
SYMLINK: /dev/ocr -> /dev/sda1
SYMLINK: /dev/oradata -> /dev/sdb1
[root@PTS0006 Libs]# vi /etc/rc.local
#!/bin/sh
#
# This script will be executed *after* all the other init scripts.
# You can put your own initialization stuff in here if you don't
# want to do the full Sys V style init stuff.
touch /var/lock/subsys/local
service iscsi restart
devlabel reload
echo "Just to check if all Ethernet are up"
ifup eth0
ifup eth1
Steps for Oracle RAC:
1) Create the groups dba, oinstall, asmadmin, asmdba and asmoper on all the nodes for the Oracle software and ASM.
2) Create the users oracle (Oracle software owner) and grid (Oracle cluster owner) so that:
$ id grid
uid=501(grid) gid=501(oinstall) groups=501(oinstall),502(asmadmin),503(asmdba),504(asmoper)
$ id oracle
uid=500(oracle) gid=501(oinstall) groups=501(oinstall),500(dba),503(asmdba)
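A sketch of the commands that produce the id output above, run as root on every node. The numeric UIDs/GIDs are the ones shown in this document and are otherwise a site choice:

```shell
#!/bin/sh
# Groups for role separation between the database and grid/ASM owners.
groupadd -g 500 dba
groupadd -g 501 oinstall
groupadd -g 502 asmadmin
groupadd -g 503 asmdba
groupadd -g 504 asmoper

# oracle owns the database software; grid owns the clusterware/ASM stack.
useradd -u 500 -g oinstall -G dba,asmdba oracle
useradd -u 501 -g oinstall -G asmadmin,asmdba,asmoper grid
passwd oracle
passwd grid
```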
3) Configure Oracle ASM:
[root@PTS0009]# oracleasm configure
ORACLEASM_ENABLED=false
ORACLEASM_UID=
ORACLEASM_GID=
ORACLEASM_SCANBOOT=true
ORACLEASM_SCANORDER=""
ORACLEASM_SCANEXCLUDE=""
[root@PTS0009]# oracleasm configure -i
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.

Default user to own the driver interface []: grid
Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
[root@PTS0009]# oracleasm configure
ORACLEASM_ENABLED=true
Cleaning any stale ASM disks...
Scanning system for ASM disks...
Instantiating disk "DATA1"
[root@PTS0009 systemfiles]# oracleasm listdisks
DATA1
VOTE1
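For reference, the DATA1 and VOTE1 labels listed above would have been stamped once, from a single node, on the shared partitions. This is a sketch that assumes the devlabel symlinks created earlier are the intended data and OCR/voting partitions; substitute your actual devices:

```shell
# Run on ONE node only: label the shared partitions for ASM.
oracleasm createdisk DATA1 /dev/oradata
oracleasm createdisk VOTE1 /dev/ocr

# On every other node, just pick the labels up:
oracleasm scandisks
oracleasm listdisks    # should list DATA1 and VOTE1
```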
Install Oracle 11gR2 Clusterware: log in as the grid user and execute the runInstaller script. The following screen dumps show the details.
If you notice above, ora.gsd and ora.oc4j are offline. That should be OK, as these are required only for legacy support of Oracle 9i. Now all the VIP and SCAN IPs will respond to ping. [grid@PTS0009 ~]$ ping 10.88.33.27
PING 10.88.33.27 (10.88.33.27) 56(84) bytes of data.
64 bytes from 10.88.33.27: icmp_seq=1 ttl=64 time=1.07 ms
64 bytes from 10.88.33.27: icmp_seq=2 ttl=64 time=0.096 ms
64 bytes from 10.88.33.27: icmp_seq=3 ttl=64 time=0.096 ms
--- 10.88.33.27 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1998ms
rtt min/avg/max/mdev = 0.096/0.423/1.077/0.462 ms
[grid@PTS0009 ~]$ ping 10.88.33.28
PING 10.88.33.28 (10.88.33.28) 56(84) bytes of data.
64 bytes from 10.88.33.28: icmp_seq=1 ttl=64 time=0.028 ms
64 bytes from 10.88.33.28: icmp_seq=2 ttl=64 time=0.018 ms
64 bytes from 10.88.33.28: icmp_seq=3 ttl=64 time=0.013 ms
--- 10.88.33.28 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1998ms
rtt min/avg/max/mdev = 0.013/0.019/0.028/0.008 ms
[grid@PTS0009 ~]$ ping 10.88.33.24
PING 10.88.33.24 (10.88.33.24) 56(84) bytes of data.
64 bytes from 10.88.33.24: icmp_seq=1 ttl=64 time=0.025 ms
To resize the shared memory I referred to http://www.walkernews.net/2010/05/04/how-to-resize-devshm-filesystem-in-linux/. But after this the ASM instance on the 2nd node (PTS0006) went down. I fought with it for 3 hours and then found that the fix was a silly, simple pair of commands:
$ /grid_home/bin/srvctl status asm
$ /grid_home/bin/srvctl start asm
Here it is... You can verify all the resources with /grid_home/bin/crsctl status resource -t