
Build Your Own Oracle RAC 11g Cluster on Oracle Linux and iSCSI

by Jeffrey Hunter

Learn how to set up and configure an Oracle RAC 11g Release 2 development cluster on Oracle Linux for less than US$2,700.

The information in this guide is not validated by Oracle, is not supported by Oracle, and should only be used at your own risk; it is for

educational purposes only.

Updated November 2009

Contents

1. Introduction
2. Oracle RAC 11g Overview
3. Shared-Storage Overview
4. iSCSI Technology
5. Hardware and Costs
6. Install the Linux Operating System
7. Install Required Linux Packages for Oracle RAC
8. Network Configuration
9. Cluster Time Synchronization Service
10. Install Openfiler
11. Configure iSCSI Volumes using Openfiler
12. Configure iSCSI Volumes on Oracle RAC Nodes
13. Create Job Role Separation Operating System Privileges Groups, Users, and Directories
14. Logging In to a Remote System Using X Terminal
15. Configure the Linux Servers for Oracle
16. Configure RAC Nodes for Remote Access using SSH - (Optional)
17. All Startup Commands for Both Oracle RAC Nodes
18. Install and Configure ASMLib 2.0
19. Download Oracle RAC 11g Release 2 Software
20. Preinstallation Tasks for Oracle Grid Infrastructure for a Cluster
21. Install Oracle Grid Infrastructure for a Cluster
22. Postinstallation Tasks for Oracle Grid Infrastructure for a Cluster
23. Create ASM Disk Groups for Data and Fast Recovery Area
24. Install Oracle Database 11g with Oracle Real Application Clusters
25. Install Oracle Database 11g Examples (formerly Companion)
26. Create the Oracle Cluster Database
27. Post Database Creation Tasks - (Optional)
28. Create / Alter Tablespaces
29. Verify Oracle Grid Infrastructure and Database Configuration
30. Starting / Stopping the Cluster
31. Troubleshooting
32. Conclusion
33. Acknowledgements

Downloads for this guide:
Oracle Enterprise Linux Release 5 Update 4 - (Available for x86 and x86_64)
Oracle Database 11g Release 2, Grid Infrastructure, Examples (11.2.0.1.0) - (Available for x86 and x86_64)
Openfiler 2.3 Respin (21-01-09) - (openfiler-2.3-x86-disc1.iso -OR- openfiler-2.3-x86_64-disc1.iso)
ASMLib 2.0 Library RHEL5 - (2.0.4-1) - (oracleasmlib-2.0.4-1.el5.i386.rpm -OR- oracleasmlib-2.0.4-1.el5.x86_64.rpm)

1. Introduction

One of the most efficient ways to become familiar with Oracle Real Application Clusters (RAC) 11g technology is to have access to an actual Oracle RAC

11g cluster. There's no better way to understand its benefits—including fault tolerance, security, load balancing, and scalability—than to experience them

directly.

Unfortunately, for many shops, the price of the hardware required for a typical production RAC configuration makes this goal impossible. A small

two-node cluster can cost from US$10,000 to well over US$20,000. This cost does not even include the heart of a production RAC environment, the shared storage. In most cases, this would be a Storage Area Network (SAN), which generally starts at US$10,000.

For those who want to become familiar with Oracle RAC 11g without a major cash outlay, this guide provides a low-cost alternative to configuring an

Oracle RAC 11g Release 2 system using commercial off-the-shelf components and downloadable software at an estimated cost of US$2,200 to

US$2,700. The system will consist of a two node cluster, both running Oracle Enterprise Linux (OEL) Release 5 Update 4 for x86_64, Oracle RAC 11g

Release 2 for Linux x86_64, and ASMLib 2.0. All shared disk storage for Oracle RAC will be based on iSCSI using Openfiler release 2.3 x86_64 running on a third node (known in this article as the Network Storage Server).

Although this article should work with Red Hat Enterprise Linux, Oracle Enterprise Linux (available for free) will provide the same if not better stability and will already include the ASMLib software packages (with the exception of the ASMLib userspace libraries, which are a separate download).

This guide is provided for educational purposes only, so the setup is kept simple to demonstrate ideas and concepts. For example, the shared Oracle Clusterware files (OCR and voting files) and all physical database files in this article will be set up on only one physical disk, while in practice they should be configured on multiple physical drives. In addition, each Linux node will be configured with only two network interfaces: one for the public network (eth0) and one that will be used for both the Oracle RAC private interconnect and the network storage server for shared iSCSI access (eth1). For a production RAC implementation, the private interconnect should be at least Gigabit (or more) with redundant paths and should only be used by Oracle to transfer Cluster Manager and Cache Fusion related data. A third dedicated network interface (eth2, for example) should be configured on another redundant Gigabit network for access to the network storage server (Openfiler).

Oracle Documentation

While this guide provides detailed instructions for successfully installing a complete Oracle RAC 11g system, it is by no means a substitute for the official Oracle documentation (see the list below). In addition to this guide, users should also consult the following Oracle documents to gain a full understanding of alternative configuration options, installation, and administration with Oracle RAC 11g. Oracle's official documentation site is docs.oracle.com.



Oracle Grid Infrastructure Installation Guide - 11g Release 2 (11.2) for Linux

Clusterware Administration and Deployment Guide - 11g Release 2 (11.2)

Oracle Real Application Clusters Installation Guide - 11g Release 2 (11.2) for Linux and UNIX

Real Application Clusters Administration and Deployment Guide - 11g Release 2 (11.2)

Oracle Database 2 Day + Real Application Clusters Guide - 11g Release 2 (11.2)

Oracle Database Storage Administrator's Guide - 11g Release 2 (11.2)

Network Storage Server

Powered by rPath Linux, Openfiler is a free browser-based network storage management utility that delivers file-based Network Attached Storage (NAS) and block-based Storage Area Networking (SAN) in a single framework. The entire software stack interfaces with open source applications such as Apache, Samba, LVM2, ext3, Linux NFS and iSCSI Enterprise Target. Openfiler combines these ubiquitous technologies into a small, easy to manage solution fronted by a powerful web-based management interface.

Openfiler supports CIFS, NFS, HTTP/DAV, and FTP; however, we will only be making use of its iSCSI capabilities to implement an inexpensive SAN for the shared storage components required by Oracle RAC 11g. The operating system and Openfiler application will be installed on one internal SATA disk. A second internal 73GB 15K SCSI hard disk will be configured as a single "Volume Group" that will be used for all shared disk storage requirements. The Openfiler server will be configured to use this volume group for iSCSI based storage and will be used in our Oracle RAC 11g configuration to store the shared files required by Oracle grid infrastructure and the Oracle RAC database.
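In this guide the volume group and logical volumes are created later through the Openfiler web interface; under the hood, that interface drives standard LVM2 commands. The following is only a rough sketch of the equivalent command-line steps, assuming the second SCSI disk appears as /dev/sdb and using a volume group name of rac1 (both are placeholders; the actual device and names on your system may differ):

    # Initialize the 73GB SCSI disk as an LVM physical volume (hypothetical device name)
    pvcreate /dev/sdb

    # Create a volume group to hold all shared iSCSI storage
    vgcreate rac1 /dev/sdb

    # Carve out logical volumes matching the storage layout used in this guide
    lvcreate -L 2G  -n racdb-crs1  rac1
    lvcreate -L 32G -n racdb-data1 rac1
    lvcreate -L 32G -n racdb-fra1  rac1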

Oracle Grid Infrastructure 11g Release 2 (11.2)

With Oracle grid infrastructure 11g Release 2 (11.2), the Automatic Storage Management (ASM) and Oracle Clusterware software are packaged together in a single binary distribution and installed into a single home directory, which is referred to as the Grid Infrastructure home. You must install the grid infrastructure in order to use Oracle RAC 11g Release 2. Configuration assistants that configure ASM and Oracle Clusterware start after the installer interview process. While the installation of the combined products is called Oracle grid infrastructure, Oracle Clusterware and Automatic Storage Management remain separate products.

After Oracle grid infrastructure is installed and configured on both nodes in the cluster, the next step will be to install the Oracle RAC software on both Oracle RAC nodes.

In this article, the Oracle grid infrastructure and Oracle RAC software will be installed on both nodes using the optional Job Role Separation configuration. One OS user will be created to own each Oracle software product: "grid" for the Oracle grid infrastructure owner and "oracle" for the Oracle RAC software. Throughout this article, the user created to own the Oracle grid infrastructure binaries is called the grid user. This user will own both the Oracle Clusterware and Oracle Automatic Storage Management binaries. The user created to own the Oracle database binaries (Oracle RAC) will be called the oracle user. Both Oracle software owners must have the Oracle Inventory group (oinstall) as their primary group, so that each Oracle software installation owner can write to the central inventory (oraInventory), and so that OCR and Oracle Clusterware resource permissions are set correctly. The Oracle RAC software owner must also have the OSDBA group and the optional OSOPER group as secondary groups.
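The groups and users themselves are created in the "Create Job Role Separation Operating System Privileges Groups, Users, and Directories" section later in this guide. As a preview, the commands look roughly like the sketch below; the group and user ID numbers are placeholders chosen for illustration, not values mandated by this guide:

    # Oracle Inventory, OSASM, OSDBA for ASM, OSOPER for ASM, OSDBA and OSOPER groups
    groupadd -g 1000 oinstall
    groupadd -g 1200 asmadmin
    groupadd -g 1201 asmdba
    groupadd -g 1202 asmoper
    groupadd -g 1300 dba
    groupadd -g 1301 oper

    # Grid infrastructure owner and Oracle RAC software owner
    useradd -u 1100 -g oinstall -G asmadmin,asmdba,asmoper grid
    useradd -u 1101 -g oinstall -G dba,oper,asmdba oracle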

Automatic Storage Management and Oracle Clusterware Files

As previously mentioned, Automatic Storage Management (ASM) is now fully integrated with Oracle Clusterware in the Oracle grid infrastructure. Oracle ASM and Oracle Database 11g Release 2 provide a more enhanced storage solution than previous releases. Part of this solution is the ability to store the Oracle Clusterware files, namely the Oracle Cluster Registry (OCR) and the Voting Files (VF, also known as the Voting Disks), on ASM. This feature enables ASM to provide a unified storage solution, storing all the data for the clusterware and the database, without the need for third-party volume managers or cluster file systems.

Just like database files, Oracle Clusterware files are stored in an ASM disk group and therefore utilize the ASM disk group configuration with respect to redundancy. For example, a Normal Redundancy ASM disk group will hold a two-way-mirrored OCR. A failure of one disk in the disk group will not

prevent access to the OCR. With a High Redundancy ASM disk group (three-way-mirrored), two independent disks can fail without impacting access to

the OCR. With External Redundancy, no protection is provided by Oracle.

Oracle only allows one OCR per disk group in order to protect against physical disk failures. When configuring Oracle Clusterware files on a production system, Oracle recommends using either normal or high redundancy ASM disk groups. If disk mirroring is already occurring at either the OS or hardware level, you can use external redundancy.

The Voting Files are managed in a similar way to the OCR. They follow the ASM disk group configuration with respect to redundancy, but are not managed as normal ASM files in the disk group. Instead, each voting disk is placed on a specific disk in the disk group. The disk and the location of the Voting Files on the disks are stored internally within Oracle Clusterware.

The following example describes how the Oracle Clusterware files are stored in ASM after installing Oracle grid infrastructure using this guide. To view the OCR, use ASMCMD:

    [grid@racnode1 ~]$ asmcmd
    ASMCMD> ls -l +CRS/racnode-cluster/OCRFILE
    Type     Redund  Striped  Time             Sys  Name
    OCRFILE  UNPROT  COARSE   NOV 22 12:00:00  Y    REGISTRY.255.703024853

To view the voting disk, use CRSCTL:

    [grid@racnode1 ~]$ crsctl query css votedisk
    ##  STATE    File Universal Id                 File Name       Disk group
    --  -----    -----------------                 ---------       ----------
     1. ONLINE   4cbbd0de4c694f50bfd3857ebd8ad8c4  (ORCL:CRSVOL1)  [CRS]
    Located 1 voting disk(s).

If you decide against using ASM for the OCR and voting disk files, Oracle Clusterware still allows these files to be stored on a cluster file system like Oracle Cluster File System release 2 (OCFS2) or an NFS system. Please note that installing Oracle Clusterware files on raw or block devices is no longer supported, unless an existing system is being upgraded.

Previous versions of this guide used OCFS2 for storing the OCR and voting disk files. This guide will store the OCR and voting disk files on ASM in an ASM disk group named +CRS using external redundancy, which means one OCR location and one voting disk location. The ASM disk group should be created on shared storage and be at least 2GB in size.
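In this guide the +CRS disk group is actually created later, during the Oracle grid infrastructure installation. For reference only, an equivalent SQL sketch run as the grid user against the ASM instance would look roughly like the following; the disk name ORCL:CRSVOL1 matches the ASMLib label shown in the query output above:

    [grid@racnode1 ~]$ sqlplus / as sysasm

    SQL> CREATE DISKGROUP CRS EXTERNAL REDUNDANCY
      2  DISK 'ORCL:CRSVOL1';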


The Oracle physical database files (data, online redo logs, control files, archived redo logs) will be installed on ASM in an ASM disk group named +RACDB_DATA while the Fast Recovery Area will be created in a separate ASM disk group named +FRA.

The two Oracle RAC nodes and the network storage server will be configured as follows:

Nodes

Node Name    Instance Name   Database Name             Processor                            RAM   Operating System
racnode1     racdb1          racdb.idevelopment.info   1 x Dual Core Intel Xeon, 3.00 GHz   4GB   OEL 5.4 - (x86_64)
racnode2     racdb2          racdb.idevelopment.info   1 x Dual Core Intel Xeon, 3.00 GHz   4GB   OEL 5.4 - (x86_64)
openfiler1   -               -                         2 x Intel Xeon, 3.00 GHz             6GB   Openfiler 2.3 - (x86_64)

Network Configuration

Node Name    Public IP       Private IP      Virtual IP      SCAN Name              SCAN IP
racnode1     192.168.1.151   192.168.2.151   192.168.1.251   racnode-cluster-scan   192.168.1.187
racnode2     192.168.1.152   192.168.2.152   192.168.1.252   racnode-cluster-scan   192.168.1.187
openfiler1   192.168.1.195   192.168.2.195   -               -                      -
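The full name resolution setup is covered in the Network Configuration section later in this guide. As a preview only, the public and private addresses above end up in /etc/hosts entries roughly like the sketch below (the fully qualified host names shown here are illustrative; in a production cluster the SCAN name should instead be resolved by DNS to multiple addresses):

    # Public network - (eth0)
    192.168.1.151   racnode1.idevelopment.info     racnode1
    192.168.1.152   racnode2.idevelopment.info     racnode2
    192.168.1.195   openfiler1.idevelopment.info   openfiler1

    # Private interconnect / storage network - (eth1)
    192.168.2.151   racnode1-priv.idevelopment.info     racnode1-priv
    192.168.2.152   racnode2-priv.idevelopment.info     racnode2-priv
    192.168.2.195   openfiler1-priv.idevelopment.info   openfiler1-priv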

Oracle Software Components

Software Component    OS User   Primary Group   Supplementary Groups        Home Directory   Oracle Base / Oracle Home
Grid Infrastructure   grid      oinstall        asmadmin, asmdba, asmoper   /home/grid       /u01/app/grid
                                                                                             /u01/app/11.2.0/grid
Oracle RAC            oracle    oinstall        dba, oper, asmdba           /home/oracle     /u01/app/oracle
                                                                                             /u01/app/oracle/product/11.2.0/dbhome_1

Storage Components

Storage Component    File System   Volume Size   ASM Volume Group Name   ASM Redundancy   Openfiler Volume Name
OCR/Voting Disk      ASM           2GB           +CRS                    External         racdb-crs1
Database Files       ASM           32GB          +RACDB_DATA             External         racdb-data1
Fast Recovery Area   ASM           32GB          +FRA                    External         racdb-fra1

This article is only designed to work as documented with absolutely no substitutions. The only exception here is the choice of vendor hardware (i.e. machines, networking equipment, and internal / external hard drives). Ensure that the hardware you purchase from the vendor is supported on Enterprise Linux 5 and Openfiler 2.3 (Final Release).

If you are looking for an example that takes advantage of Oracle RAC 10g Release 2 with Oracle Enterprise Linux 5.3 using iSCSI, see the author's earlier guide on OTN.

2. Oracle RAC 11g Overview

Before introducing the details for building a RAC cluster, it might be helpful to first clarify what a cluster is. A cluster is a group of two or more interconnected computers or servers that appear as if they are one server to end users and applications and generally share the same set of physical disks. The key benefit of clustering is to provide a highly available framework where the failure of one node (for example a database server running an instance of Oracle) does not bring down an entire application. In the case of failure with one of the servers, the other surviving server (or servers) can take over the workload from the failed server and the application continues to function normally as if nothing has happened.

The concept of clustering computers actually started several decades ago. The first successful cluster product, named ARCnet, was developed by DataPoint in 1977. The ARCnet product enjoyed much success in academic research labs, but didn't really take off in the commercial market. It wasn't until the 1980s that Digital Equipment Corporation (DEC) released its VAX cluster product for the VAX/VMS operating system.

With the release of Oracle 6 for the Digital VAX cluster product, Oracle was the first commercial database to support clustering at the database level. It wasn't long, however, before Oracle realized the need for a more efficient and scalable distributed lock manager (DLM), as the one included with the VAX/VMS cluster product was not well suited for database applications. Oracle decided to design and write its own DLM for the VAX/VMS cluster product which provided the fine-grain block level locking required by the database. Oracle's own DLM was included in Oracle 6.2, which gave birth to Oracle Parallel Server (OPS) - the first database to run in a parallel server configuration.

By Oracle 7, OPS was extended to include support for not only the VAX/VMS cluster product but also most flavors of UNIX. This framework required vendor-supplied clusterware which worked well, but made for a complex environment to set up and manage given the multiple layers involved. By Oracle8, Oracle introduced a generic lock manager that was integrated into the Oracle kernel. In later releases of Oracle, this became known as the Integrated Distributed Lock Manager (IDLM) and relied on an additional layer known as the Operating System Dependent (OSD) layer. This new model paved the way for Oracle to not only have its own DLM, but to also create its own clusterware product in future releases.

Oracle Real Application Clusters (RAC), introduced with Oracle9i, is the successor to Oracle Parallel Server. Using the same IDLM, Oracle 9i could still rely on external clusterware but was the first release to include its own clusterware product named Cluster Ready Services (CRS). With Oracle 9i, CRS was only available for Windows and Linux. By Oracle 10g Release 1, Oracle's clusterware product was available for all operating systems and was the required cluster technology for Oracle RAC. With the release of Oracle Database 10g Release 2 (10.2), Cluster Ready Services was renamed to Oracle Clusterware. When using Oracle 10g or higher, Oracle Clusterware is the only clusterware that you need for most platforms on which Oracle RAC operates (except for Tru cluster, in which case you need vendor clusterware). You can still use clusterware from other vendors if the clusterware is certified, but keep in mind that Oracle RAC still requires Oracle Clusterware as it is fully integrated with the database software. This guide uses Oracle Clusterware which, as of 11g Release 2 (11.2), is now a component of Oracle grid infrastructure.

Like OPS, Oracle RAC allows multiple instances to access the same database (storage) simultaneously. RAC provides fault tolerance, load balancing, and performance benefits by allowing the system to scale out, and at the same time, since all instances access the same database, the failure of one node will not cause the loss of access to the database.

At the heart of Oracle RAC is a shared disk subsystem. Each instance in the cluster must be able to access all of the data, redo log files, control files and parameter file for all other instances in the cluster. The data disks must be globally available in order to allow all instances to access the database. Each instance has its own redo log files and UNDO tablespace that are locally read-writeable. The other instances in the cluster must be able to access them (read-only) in order to recover that instance in the event of a system failure. The redo log files for an instance are only writeable by that instance and will only be read from another instance during system failure. The UNDO, on the other hand, is read all the time during normal database operation (e.g. for CR fabrication).

A big difference between Oracle RAC and OPS is the addition of Cache Fusion. With OPS, a request for data from one instance to another required the data to be written to disk first; then the requesting instance could read that data (after acquiring the required locks). This process was called disk pinging.


With cache fusion, data is passed along a high-speed interconnect using a sophisticated locking algorithm.

Not all database clustering solutions use shared storage. Some vendors use an approach known as a Federated Cluster, in which data is spread across several machines rather than shared by all. With Oracle RAC, however, multiple instances use the same set of disks for storing data. Oracle's approach to clustering leverages the collective processing power of all the nodes in the cluster and at the same time provides failover security.

Pre-configured Oracle RAC solutions are available from vendors such as Dell, IBM and HP for production environments. This article, however, focuses on putting together your own Oracle RAC 11g environment for development and testing by using Linux servers and a low cost shared disk solution: iSCSI.

For more background about Oracle RAC, visit the Oracle RAC Product Center on OTN.

3. Shared-Storage Overview

Today, fibre channel is one of the most popular solutions for shared storage. As mentioned earlier, fibre channel is a high-speed serial-transfer interface that is used to connect systems and storage devices in either point-to-point (FC-P2P), arbitrated loop (FC-AL), or switched topologies (FC-SW). Protocols supported by Fibre Channel include SCSI and IP. Fibre channel configurations can support as many as 127 nodes and have a throughput of up to 2.12 Gigabits per second in each direction, with 4.25 Gbps expected.

Fibre channel, however, is very expensive. Just the fibre channel switch alone can start at around US$1,000. This does not even include the fibre channel storage array and high-end drives, which can reach prices of about US$300 for a single 36GB drive. A typical fibre channel setup which includes fibre channel cards for the servers is roughly US$10,000, which does not include the cost of the servers that make up the cluster.

A less expensive alternative to fibre channel is SCSI. SCSI technology provides acceptable performance for shared storage, but for administrators and developers who are used to GPL-based Linux prices, even SCSI can come in over budget, at around US$2,000 to US$5,000 for a two-node cluster.

Another popular solution is the Sun NFS (Network File System) found on a NAS. It can be used for shared storage but only if you are using a network appliance or something similar. Specifically, you need servers that guarantee direct I/O over NFS, TCP as the transport protocol, and read/write block sizes of 32K. See the Certify page on Oracle Metalink for supported Network Attached Storage (NAS) devices that can be used with Oracle RAC. One of the key drawbacks that has limited the benefits of using NFS and NAS for database storage has been performance degradation and complex configuration requirements. Standard NFS client software (client systems that use the operating system provided NFS driver) is not optimized for Oracle database file I/O access patterns. With the introduction of Oracle 11g, a new feature known as Direct NFS Client integrates the NFS client functionality directly in the Oracle software. Through this integration, Oracle is able to optimize the I/O path between the Oracle software and the NFS server resulting in significant performance gains. Direct NFS Client can simplify, and in many cases automate, the performance optimization of the NFS client configuration for database workloads. To learn more about Direct NFS Client, see the Oracle White Paper entitled "Oracle Database 11g Direct NFS Client".

The shared storage that will be used for this article is based on iSCSI technology using a network storage server installed with Openfiler. This solution offers a low-cost alternative to fibre channel for testing and educational purposes, but given the low-end hardware being used, it should not be used in a production environment.

4. iSCSI Technology

For many years, the only technology that existed for building a network based storage solution was a Fibre Channel Storage Area Network (FC SAN). Based on an earlier set of ANSI protocols called Fiber Distributed Data Interface (FDDI), Fibre Channel was developed to move SCSI commands over a

storage network.

Several of the advantages to FC SAN include greater performance, increased disk utilization, improved availability, better scalability, and most important to us, support for server clustering! Still today, however, FC SANs suffer from three major disadvantages. The first is price. While the costs involved in building a FC SAN have come down in recent years, the cost of entry still remains prohibitive for small companies with limited IT budgets. The second is incompatible hardware components. Since its adoption, many product manufacturers have interpreted the Fibre Channel specifications differently from each other, which has resulted in scores of interconnect problems. When purchasing Fibre Channel components from a common manufacturer, this is usually not a problem. The third disadvantage is the fact that a Fibre Channel network is not Ethernet! It requires a separate network technology along with a second set of skills that need to exist with the data center staff.

With the popularity of Gigabit Ethernet and the demand for lower cost, Fibre Channel has recently been given a run for its money by iSCSI-based storage systems. Today, iSCSI SANs remain the leading competitor to FC SANs.

Ratified on February 11, 2003 by the Internet Engineering Task Force (IETF), the Internet Small Computer System Interface, better known as iSCSI, is an Internet Protocol (IP)-based storage networking standard for establishing and managing connections between IP-based storage devices, hosts, and clients. iSCSI is a data transport protocol defined in the SCSI-3 specifications framework and is similar to Fibre Channel in that it is responsible for carrying block-level data over a storage network. Block-level communication means that data is transferred between the host and the client in chunks called blocks. Database servers depend on this type of communication (as opposed to the file level communication used by most NAS systems) in order to work properly. Like a FC SAN, an iSCSI SAN should be a separate physical network devoted entirely to storage; however, its components can be much the same as in a typical IP network (LAN).

While iSCSI has a promising future, many of its early critics were quick to point out some of its inherent shortcomings with regards to performance. The beauty of iSCSI is its ability to utilize an already familiar IP network as its transport mechanism. The TCP/IP protocol, however, is very complex and CPU intensive. With iSCSI, most of the processing of the data (both TCP and iSCSI) is handled in software and is much slower than Fibre Channel, which is handled completely in hardware. The overhead incurred in mapping every SCSI command onto an equivalent iSCSI transaction is excessive. For many, the solution is to do away with iSCSI software initiators and invest in specialized cards that can offload TCP/IP and iSCSI processing from a server's CPU. These specialized cards are sometimes referred to as an iSCSI Host Bus Adaptor (HBA) or a TCP Offload Engine (TOE) card. Also consider that 10-Gigabit Ethernet is a reality today!

As with any new technology, iSCSI comes with its own set of acronyms and terminology. For the purpose of this article, it is only important to understand the difference between an iSCSI initiator and an iSCSI target.

iSCSI Initiator

Basically, an iSCSI initiator is a client device that connects and initiates requests to some service offered by a server (in this case an iSCSI target). The iSCSI initiator software will need to exist on each of the Oracle RAC nodes (racnode1 and racnode2).

An iSCSI initiator can be implemented using either software or hardware. Software iSCSI initiators are available for most major operating system platforms. For this article, we will be using the free Linux Open-iSCSI software driver found in the iscsi-initiator-utils RPM. The iSCSI software initiator is generally used with a standard network interface card (NIC), a Gigabit Ethernet card in most cases. A hardware initiator is an iSCSI HBA (or a TCP Offload Engine (TOE) card), which is basically just a specialized Ethernet card with a SCSI ASIC on-board to offload all the work (TCP and SCSI commands) from the system CPU. iSCSI HBAs are available from a number of vendors, including Adaptec, Alacritech, Intel, and QLogic.
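The software initiator is installed and configured on both Oracle RAC nodes in the "Configure iSCSI Volumes on Oracle RAC Nodes" section later in this guide. As a rough preview only (the exact steps and target names come later), it boils down to something like the following sketch, run as root on each RAC node:

    # Install the Open-iSCSI initiator software
    yum install iscsi-initiator-utils

    # Start the iSCSI service and configure it to start on boot
    service iscsid start
    chkconfig iscsi on

    # Discover the iSCSI targets presented by the Openfiler server (private network)
    iscsiadm -m discovery -t sendtargets -p 192.168.2.195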

iSCSI Target


An iSCSI target is the "server" component of an iSCSI network. This is typically the storage device that contains the information you want and answers requests from the initiator(s). For the purpose of this article, the node openfiler1 will be the iSCSI target.

So with all of this talk about iSCSI, does this mean the death of Fibre Channel anytime soon? Probably not. Fibre Channel has clearly demonstrated its capabilities over the years with its capacity for extremely high speeds, flexibility, and robust reliability. Customers who have strict requirements for high performance storage, large complex connectivity, and mission critical reliability will undoubtedly continue to choose Fibre Channel.

Before closing out this section, I thought it would be appropriate to present the following chart that shows speed comparisons of the various types of disk interfaces and network technologies. For each interface, I provide the maximum transfer rates in kilobits (Kb), kilobytes (KB), megabits (Mb), megabytes (MB), gigabits (Gb), and gigabytes (GB) per second.

Disk Interface / Network / BUS                   Kb     KB        Mb       MB      Gb       GB
Serial                                           115    14.375    0.115    0.014
Parallel (standard)                              920    115       0.92     0.115
10Base-T Ethernet                                                 10       1.25
IEEE 802.11b wireless Wi-Fi (2.4 GHz band)                        11       1.375
USB 1.1                                                           12       1.5
Parallel (ECP/EPP)                                                24       3
SCSI-1                                                            40       5
IEEE 802.11g wireless WLAN (2.4 GHz band)                         54       6.75
SCSI-2 (Fast SCSI / Fast Narrow SCSI)                             80       10
100Base-T Ethernet (Fast Ethernet)                                100      12.5
ATA/100 (parallel)                                                100      12.5
IDE                                                               133.6    16.7
Fast Wide SCSI (Wide SCSI)                                        160      20
Ultra SCSI (SCSI-3 / Fast-20 / Ultra Narrow)                      160      20
Ultra IDE                                                         264      33
Wide Ultra SCSI (Fast Wide 20)                                    320      40
Ultra2 SCSI                                                       320      40
FireWire 400 - (IEEE1394a)                                        400      50
USB 2.0                                                           480      60
Wide Ultra2 SCSI                                                  640      80
Ultra3 SCSI                                                       640      80
FireWire 800 - (IEEE1394b)                                        800      100
Gigabit Ethernet                                                  1000     125     1
PCI - (33 MHz / 32-bit)                                           1064     133     1.064
Serial ATA I - (SATA I)                                           1200     150     1.2
Wide Ultra3 SCSI                                                  1280     160     1.28
Ultra160 SCSI                                                     1280     160     1.28
PCI - (33 MHz / 64-bit)                                           2128     266     2.128
PCI - (66 MHz / 32-bit)                                           2128     266     2.128
AGP 1x - (66 MHz / 32-bit)                                        2128     266     2.128
Serial ATA II - (SATA II)                                         2400     300     2.4
Ultra320 SCSI                                                     2560     320     2.56
FC-AL Fibre Channel                                               3200     400     3.2
PCI-Express x1 - (bidirectional)                                  4000     500     4
PCI - (66 MHz / 64-bit)                                           4256     532     4.256
AGP 2x - (133 MHz / 32-bit)                                       4264     533     4.264
Serial ATA III - (SATA III)                                       4800     600     4.8
PCI-X - (100 MHz / 64-bit)                                        6400     800     6.4
PCI-X - (133 MHz / 64-bit)                                                 1064    8.512    1
AGP 4x - (266 MHz / 32-bit)                                                1066    8.528    1
10G Ethernet - (IEEE 802.3ae)                                              1250    10       1.25
PCI-Express x4 - (bidirectional)                                           2000    16       2
AGP 8x - (533 MHz / 32-bit)                                                2133    17.064   2.1
PCI-Express x8 - (bidirectional)                                           4000    32       4
PCI-Express x16 - (bidirectional)                                          8000    64       8

5. Hardware and Costs

The hardware used to build our example Oracle RAC 11g environment consists of three Linux servers (two Oracle RAC nodes and one Network Storage

Server) and components that can be purchased at many local computer stores or over the Internet.

Oracle RAC Node 1 - (racnode1)


Dell PowerEdge T100
- Dual Core Intel(R) Xeon(R) E3110, 3.0 GHz, 6MB Cache, 1333MHz
- 4GB, DDR2, 800MHz
- 160GB 7.2K RPM SATA 3Gbps Hard Drive
- Integrated Graphics - (ATI ES1000)
- Integrated Gigabit Ethernet - (Broadcom(R) NetXtreme II(TM) 5722)
- 16x DVD Drive
- No Keyboard, Monitor, or Mouse - (Connected to KVM Switch)

US$450

1 - Ethernet LAN Card

Used for RAC interconnect to racnode2 and Openfiler networked storage.

Each Linux server for Oracle RAC should contain two NIC adapters. The Dell PowerEdge T100 includes an embedded Broadcom(R) NetXtreme II(TM) 5722 Gigabit Ethernet NIC that will be used to connect to the public network. A second NIC adapter will be used for the private network (RAC interconnect and Openfiler networked storage). Select the appropriate NIC adapter that is compatible with the maximum data transmission speed of the network switch to be used for the private network. For the purpose of this article, I used a Gigabit Ethernet switch (and a 1Gb Ethernet card) for the private network.

Gigabit Ethernet

Intel(R) PRO/1000 PT Server Adapter - (EXPI9400PT)

US$90

Oracle RAC Node 2 - (racnode2)

Dell PowerEdge T100
- Dual Core Intel(R) Xeon(R) E3110, 3.0 GHz, 6MB Cache, 1333MHz
- 4GB, DDR2, 800MHz
- 160GB 7.2K RPM SATA 3Gbps Hard Drive
- Integrated Graphics - (ATI ES1000)
- Integrated Gigabit Ethernet - (Broadcom(R) NetXtreme II(TM) 5722)
- 16x DVD Drive
- No Keyboard, Monitor, or Mouse - (Connected to KVM Switch)

US$450

1 - Ethernet LAN Card

Used for RAC interconnect to racnode1 and Openfiler networked storage.

Each Linux server for Oracle RAC should contain two NIC adapters. The Dell PowerEdge T100 includes an embedded Broadcom(R) NetXtreme II(TM) 5722 Gigabit Ethernet NIC that will be used to connect to the public network. A second NIC adapter will be used for the private network (RAC interconnect and Openfiler networked storage). Select the appropriate NIC adapter that is compatible with the maximum data transmission speed of the network switch to be used for the private network. For the purpose of this article, I used a Gigabit Ethernet switch (and a 1Gb Ethernet card) for the private network.

Gigabit Ethernet

Intel(R) PRO/1000 PT Server Adapter - (EXPI9400PT)

US$90

Network Storage Server - (openfiler1)

Dell PowerEdge 1800
- Dual 3.0GHz Xeon / 1MB Cache / 800FSB (SL7PE)
- 6GB of ECC Memory
- 500GB SATA Internal Hard Disk
- 73GB 15K SCSI Internal Hard Disk
- Integrated Graphics
- Single embedded Intel 10/100/1000 Gigabit NIC
- 16x DVD Drive
- No Keyboard, Monitor, or Mouse - (Connected to KVM Switch)

Note: The operating system and Openfiler application will be installed on the 500GB internal SATA disk. A second internal 73GB 15K SCSI hard disk will be configured for the database storage. The Openfiler server will be configured to use this second hard disk for iSCSI based storage and will be used in our Oracle RAC 11g configuration to store the shared files required by Oracle Clusterware as well as the clustered database files.

Please be aware that any type of hard disk (internal or external) should work for database storage as long as it can be recognized by the network storage server (Openfiler) and has adequate space. For example, I could have made an extra partition on the 500GB internal SATA disk for the iSCSI target, but decided to make use of the faster SCSI disk for this example.

US$800


1 - Ethernet LAN Card

Used for networked storage on the private network.

The Network Storage Server (Openfiler server) should contain two NIC adapters. The Dell PowerEdge 1800 machine included an integrated 10/100/1000 Ethernet adapter that will be used to connect to the public network. The second NIC adapter will be used for the private network (Openfiler networked storage). Select the appropriate NIC adapter that is compatible with the maximum data transmission speed of the network switch to be used for the private network. For the purpose of this article, I used a Gigabit Ethernet switch (and 1Gb Ethernet card) for the private network.

Gigabit Ethernet

Intel(R) PRO/1000 MT Server Adapter - (PWLA8490MT)

US$125

Miscellaneous Components

1 - Ethernet Switch

Used for the interconnect between racnode1-priv and racnode2-priv which will be on the 192.168.2.0 network. This switch will also be used for network storage traffic for Openfiler. For the purpose of this article, I used a Gigabit Ethernet switch (and 1Gb Ethernet cards) for the private network.

Gigabit Ethernet

D-Link 8-port 10/100/1000 Desktop Switch - (DGS-2208)

US$50

6 - Network Cables

- Category 6 patch cable - (Connect racnode1 to public network)
- Category 6 patch cable - (Connect racnode2 to public network)
- Category 6 patch cable - (Connect openfiler1 to public network)
- Category 6 patch cable - (Connect racnode1 to interconnect Ethernet switch)
- Category 6 patch cable - (Connect racnode2 to interconnect Ethernet switch)
- Category 6 patch cable - (Connect openfiler1 to interconnect Ethernet switch)

US$10 each (US$60 total)

Optional Components

KVM Switch

This guide requires access to the console of all nodes (servers) in order to install the operating system and perform several of the configuration tasks. When managing a very small number of servers, it might make sense to connect each server with its own monitor, keyboard, and mouse in order to access its console. However, as the number of servers to manage increases, this solution becomes unfeasible. A more practical solution would be to configure a dedicated computer which would include a single monitor, keyboard, and mouse that would have direct access to the console of each server. This solution is made possible using a Keyboard, Video, Mouse switch, better known as a KVM switch. A KVM switch is a hardware device that allows a user to control multiple computers from a single keyboard, video monitor and mouse. Avocent provides a high quality and economical 4-port switch which includes four 6' cables:

SwitchView 1000 - (4SV1000BND1-001)

For a detailed explanation and guide on the use of KVM switches, please see the article "KVM Switches For the Home and the Enterprise".

US$340

Total US$2,455

We are about to start the installation process. Now that we have talked about the hardware that will be used in this example, let's take a conceptual look at what the environment would look like after connecting all of the hardware components (see Figure 1):


Figure 1: Architecture

As we start to go into the details of the installation, note that most of the tasks within this document will need to be performed on both Oracle RAC nodes (racnode1 and racnode2). I will indicate at the beginning of each section whether or not the task(s) should be performed on both Oracle RAC nodes or on the network storage server (openfiler1).

6. Install the Linux Operating System

Perform the following installation on both Oracle RAC nodes in the cluster.

This section provides a summary of the screens used to install the Linux operating system. This guide is designed to work with Oracle Enterprise Linux release 5 update 4 for x86_64 and follows Oracle's suggestion of performing a "default RPMs" installation type to ensure all expected Linux O/S packages are present for a successful Oracle RDBMS installation.

Before installing the Oracle Enterprise Linux operating system on both Oracle RAC nodes, you should have both NIC interface cards installed that will be used for the public and private network.

Download the following ISO images for Oracle Enterprise Linux release 5 update 4 for either x86 or x86_64 depending on your hardware architecture.

Oracle Software Delivery Cloud for Oracle Enterprise Linux

32-bit (x86) Installations

V17787-01.zip (582 MB)
V17789-01.zip (612 MB)
V17790-01.zip (620 MB)
V17791-01.zip (619 MB)
V17792-01.zip (267 MB)

After downloading the Oracle Enterprise Linux operating system, unzip each of the files. You will then have the following ISO images which will need to be burned to CDs:

Enterprise-R5-U4-Server-i386-disc1.iso
Enterprise-R5-U4-Server-i386-disc2.iso
Enterprise-R5-U4-Server-i386-disc3.iso
Enterprise-R5-U4-Server-i386-disc4.iso
Enterprise-R5-U4-Server-i386-disc5.iso

Note: If the Linux RAC nodes have a DVD drive installed, you may find it more convenient to make use of the single DVD image:

V17793-01.zip (2.7 GB)

Unzip the single DVD image file and burn it to a DVD:


Enterprise-R5-U4-Server-i386-dvd.iso

64-bit (x86_64) Installations

V17795-01.zip (580 MB)
V17796-01.zip (615 MB)
V17797-01.zip (605 MB)
V17798-01.zip (616 MB)
V17799-01.zip (597 MB)
V17800-01.zip (198 MB)

After downloading the Oracle Enterprise Linux operating system, unzip each of the files. You will then have the following ISO images which will need to be burned to CDs:

Enterprise-R5-U4-Server-x86_64-disc1.iso
Enterprise-R5-U4-Server-x86_64-disc2.iso
Enterprise-R5-U4-Server-x86_64-disc3.iso
Enterprise-R5-U4-Server-x86_64-disc4.iso
Enterprise-R5-U4-Server-x86_64-disc5.iso
Enterprise-R5-U4-Server-x86_64-disc6.iso

Note: If the Linux RAC nodes have a DVD drive installed, you may find it more convenient to make use of the single DVD image:

V17794-01.zip (3.2 GB)

Unzip the single DVD image file and burn it to a DVD:

Enterprise-R5-U4-Server-x86_64-dvd.iso

If you are downloading the above ISO files to a MS Windows machine, there are many options for burning these images (ISO files) to a CD/DVD. You may already be familiar with and have the proper software to burn images to a CD/DVD. If you are not familiar with this process and do not have the required software to burn images to a CD/DVD, here are just two (of many) software packages that can be used:

UltraISO
Magic ISO Maker
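If you are downloading the ISO files to a Linux machine instead, the images can be burned from the command line. The following is only a sketch and assumes the wodim and growisofs utilities are installed and that the burner appears as /dev/cdrw (adjust the device and file names for your system):

    # Burn one of the CD images
    wodim -v dev=/dev/cdrw Enterprise-R5-U4-Server-x86_64-disc1.iso

    # Or burn the single DVD image
    growisofs -dvd-compat -Z /dev/cdrw=Enterprise-R5-U4-Server-x86_64-dvd.iso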

After downloading and burning the Oracle Enterprise Linux images (ISO files) to CD/DVD, insert OEL Disk #1 into the first server (racnode1 in this example), power it on, and answer the installation screen prompts as noted below. After completing the Linux installation on the first node, perform the same Linux installation on the second node while substituting the node name racnode2 for racnode1 and the different IP addresses where appropriate.

Boot Screen
The first screen is the Oracle Enterprise Linux boot screen. At the boot: prompt, hit [Enter] to start the installation process.

Media Test
When asked to test the CD media, tab over to [Skip] and hit [Enter]. If there were any errors, the media burning software would have warned us. After several seconds, the installer should then detect the video card, monitor, and mouse. The installer then goes into GUI mode.

Welcome to Oracle Enterprise Linux
At the welcome screen, click [Next] to continue.

Language / Keyboard Selection
The next two screens prompt you for the Language and Keyboard settings. Make the appropriate selections for your configuration.

Detect Previous Installation
Note that if the installer detects a previous version of Oracle Enterprise Linux, it will ask if you would like to "Install Enterprise Linux" or "Upgrade an existing Installation". Always select "Install Enterprise Linux".

Disk Partitioning Setup
Select [Remove all partitions on selected drives and create default layout] and check the option to [Review and modify partitioning layout]. Click [Next] to continue.

You will then be prompted with a dialog window asking if you really want to remove all Linux partitions. Click [Yes] to acknowledge this warning.

Partitioning
The installer will then allow you to view (and modify if needed) the disk partitions it automatically selected. For most automatic layouts, the installer will choose 100MB for /boot, double the amount of RAM (systems with <= 2,048MB RAM) or an amount equal to RAM (systems with > 2,048MB RAM) for swap, and the rest going to the root (/) partition. Starting with RHEL 4, the installer will create the same disk configuration as just noted but will create them using the Logical Volume Manager (LVM). For example, it will partition the first hard drive (/dev/sda for my configuration) into two partitions: one for the /boot partition (/dev/sda1) and the remainder of the disk dedicated to an LVM volume group named VolGroup00 (/dev/sda2). The LVM Volume Group (VolGroup00) is then partitioned into two LVM partitions - one for the root filesystem (/) and another for swap.

The main concern during the partitioning phase is to ensure enough swap space is allocated as required by Oracle (which is a multiple of the available RAM). The following is Oracle's minimum requirement for swap space:

Available RAM                  Swap Space Required
Between 1,024MB and 2,048MB    1.5 times the size of RAM
Between 2,049MB and 8,192MB    Equal to the size of RAM
More than 8,192MB              0.75 times the size of RAM
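A quick way to check how much RAM and swap a node ended up with after the install is the sketch below; compare the values reported against the table above:

    # Check the amount of installed RAM (in kilobytes)
    grep MemTotal /proc/meminfo

    # Check the amount of configured swap space (in kilobytes)
    grep SwapTotal /proc/meminfo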

For the purpose of this install, I will accept all automatically preferred sizes. (Including 5,952MB for swap since I have 4GB of RAM installed.)

If for any reason the automatic layout does not configure an adequate amount of swap space, you can easily change that from this screen. To increase the size of the swap partition, [Edit] the volume group VolGroup00. This will bring up the "Edit LVM Volume Group: VolGroup00" dialog. First, [Edit] and decrease the size of the root file system (/) by the amount you want to add to the swap partition. For example, to add another 512MB to swap, you would decrease the size of the root file system by 512MB (i.e. 36,032MB - 512MB = 35,520MB). Now add the space you decreased from the root file system (512MB) to the swap partition. When completed, click [OK] on the "Edit LVM Volume Group: VolGroup00" dialog.

Once you are satisfied with the disk layout, click [Next] to continue.

Boot Loader Configuration
The installer will use the GRUB boot loader by default. To use the GRUB boot loader, accept all default values and click [Next] to continue.

Network Configuration
I made sure to install both NIC interfaces (cards) in each of the Linux machines before starting the operating system installation. This screen should have successfully detected each of the network devices. Since we will be using this machine to host an Oracle database, there will be several changes that need to be made to the network configuration. The settings you make here will, of course, depend on your network configuration. The key point to make is that the machine should never be configured with DHCP since it will be used to host the Oracle database server. You will need to configure the machine with static IP addresses. You will also need to configure the server with a real host name.

First, make sure that each of the network devices is checked [Active on boot]. The installer may choose to not activate eth1 by default.

Second, [Edit] both eth0 and eth1 as follows. Verify that the option "Enable IPv4 support" is selected. Click off the option to use "Dynamic IP configuration (DHCP)" by selecting the "Manual configuration" radio button and configure a static IP address and Netmask for your environment. Click off the option to "Enable IPv6 support". You may choose to use different IP addresses for both eth0 and eth1 than I have documented in this guide and that is OK. Put eth1 (the interconnect) on a different subnet than eth0 (the public network):

eth0:
- Check ON the option to [Enable IPv4 support]
- Check OFF the option to use [Dynamic IP configuration (DHCP)] - (select Manual configuration)
      IPv4 Address: 192.168.1.151
      Prefix (Netmask): 255.255.255.0
- Check OFF the option to [Enable IPv6 support]

eth1:
- Check ON the option to [Enable IPv4 support]
- Check OFF the option to use [Dynamic IP configuration (DHCP)] - (select Manual configuration)
      IPv4 Address: 192.168.2.151
      Prefix (Netmask): 255.255.255.0
- Check OFF the option to [Enable IPv6 support]
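For reference, once the installer applies these settings on racnode1 the resulting interface configuration files should look roughly like the sketch below (the exact contents will vary slightly with your hardware; hardware-specific lines such as HWADDR are omitted here):

    # /etc/sysconfig/network-scripts/ifcfg-eth0 - public network
    DEVICE=eth0
    BOOTPROTO=static
    IPADDR=192.168.1.151
    NETMASK=255.255.255.0
    ONBOOT=yes

    # /etc/sysconfig/network-scripts/ifcfg-eth1 - private interconnect and iSCSI storage
    DEVICE=eth1
    BOOTPROTO=static
    IPADDR=192.168.2.151
    NETMASK=255.255.255.0
    ONBOOT=yes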

Continue by manually setting your hostname. I used " racnode1" for the first node and " racnode2" for the second. Finish this dialog off by supplying

your gateway and DNS servers.

Time Zone Selection
Select the appropriate time zone for your environment and click [Next] to continue.

Set Root Password
Select a root password and click [Next] to continue.

Package Installation Defaults
By default, Oracle Enterprise Linux installs most of the software required for a typical server. There are several other packages (RPMs), however, that are required to successfully install the Oracle software. The installer includes a "Customize software" selection that allows the addition of RPM groupings such as "Development Libraries" or "Legacy Library Support". The addition of such RPM groupings is not an issue. De-selecting any "default RPM" groupings or individual RPMs, however, can result in failed Oracle grid infrastructure and Oracle RAC installation attempts.

For the purpose of this article, select the radio button [Customize now] and click [Next] to continue.

This is where you pick the packages to install. Most of the packages required for the Oracle software are grouped into "Package Groups" (e.g. Applications -> Editors). Since these nodes will be hosting the Oracle grid infrastructure and Oracle RAC software, verify that at least the following package groups are selected for install. For many of the Linux package groups, not all of the packages associated with that group get selected for installation. (Note the "Optional packages" button after selecting a package group.) So although the package group gets selected for install, some of the packages required by Oracle do not get installed. In fact, there are some packages that are required by Oracle that do not belong to any of the available package groups (e.g. libaio-devel). Not to worry. A complete list of required packages for Oracle grid infrastructure 11g Release 2 and Oracle RAC 11g Release 2 for Oracle Enterprise Linux 5 will be provided in the next section. These packages will need to be manually installed from the Oracle Enterprise Linux CDs after the operating system install. For now, install the following package groups:

Desktop Environments
- GNOME Desktop Environment

Applications
- Editors
- Graphical Internet
- Text-based Internet

Development


Development
    Development Libraries
    Development Tools
    Legacy Software Development

Servers
    Server Configuration Tools

Base System
    Administration Tools
    Base
    Java
    Legacy Software Support
    System Tools
    X Window System

In addition to the above packages, select any additional packages you wish to install for this node, keeping in mind to NOT de-select any of the "default" RPM packages. After selecting the packages to install click [Next] to continue.

About to Install
This screen is basically a confirmation screen. Click [Next] to start the installation. If you are installing Oracle Enterprise Linux using CDs, you will be asked to switch CDs during the installation process depending on which packages you selected.

Congratulations
And that's it. You have successfully installed Oracle Enterprise Linux on the first node (racnode1). The installer will eject the CD/DVD from the CD-ROM drive. Take out the CD/DVD and click [Reboot] to reboot the system.

Post Installation Wizard Welcome Screen
When the system boots into Oracle Enterprise Linux for the first time, it will prompt you with another Welcome screen for the "Post Installation Wizard". The post installation wizard allows you to make final O/S configuration settings. On the "Welcome" screen, click [Forward] to continue.

License Agreement
Read through the license agreement. Choose "Yes, I agree to the License Agreement" and click [Forward] to continue.

Firewall
On this screen, make sure to select the [Disabled] option and click [Forward] to continue.

You will be prompted with a warning dialog about not setting the firewall. When this occurs, click [Yes] to continue.

SELinux
On the SELinux screen, choose the [Disabled] option and click [Forward] to continue.

You will be prompted with a warning dialog warning that changing the SELinux setting will require rebooting the system so the entire file system can be relabeled. When this occurs, click [Yes] to acknowledge that a reboot of the system will occur after firstboot (Post Installation Wizard) is completed.

Kdump

Accept the default setting on the Kdump screen (disabled) and click [Forward] to continue.

Date and Time Settings
Adjust the date and time settings if necessary and click [Forward] to continue.

Create User
Create any additional (non-oracle) operating system user accounts if desired and click [Forward] to continue. For the purpose of this article, I will not be creating any additional operating system accounts. I will be creating the "grid" and "oracle" user accounts later in this guide.

If you chose not to define any additional operating system user accounts, click [Continue] to acknowledge the warning dialog.

Sound Card
This screen will only appear if the wizard detects a sound card. On the sound card screen click [Forward] to continue.

Additional CDs
On the "Additional CDs" screen click [Finish] to continue.

Reboot System
Given we changed the SELinux option (to disabled), we are prompted to reboot the system. Click [OK] to reboot the system for normal use.

Login Screen
After rebooting the machine, you are presented with the login screen. Log in using the "root" user account and the password you provided during the installation.

Perform the same installation on the second node
After completing the Linux installation on the first node, repeat the above steps for the second node (racnode2). When configuring the machine name and networking, ensure to configure the proper values. For my installation, this is what I configured for racnode2:

First, make sure that each of the network devices is checked to [Active on boot]. The installer may choose to not activate eth1.

Second, [Edit] both eth0 and eth1 as follows. Verify that the option "Enable IPv4 support" is selected. Click off the option to use "Dynamic IP configuration (DHCP)" by selecting the "Manual configuration" radio button and configure a static IP address and Netmask for your environment. Click off the option to "Enable IPv6 support". You may choose to use different IP addresses for eth0 and eth1 than the ones I have documented in this guide and that is OK. Put eth1 (the interconnect) on a different subnet than eth0 (the public network):

eth0:
- Check ON the option to [Enable IPv4 support]
- Check OFF the option to use [Dynamic IP configuration (DHCP)] - (select Manual configuration)
  IPv4 Address: 192.168.1.152
  Prefix (Netmask): 255.255.255.0
- Check OFF the option to [Enable IPv6 support]

eth1:
- Check ON the option to [Enable IPv4 support]
- Check OFF the option to use [Dynamic IP configuration (DHCP)] - (select Manual configuration)
  IPv4 Address: 192.168.2.152
  Prefix (Netmask): 255.255.255.0
- Check OFF the option to [Enable IPv6 support]

Continue by setting your hostname manually. I used "racnode2" for the second node. Finish this dialog off by supplying your gateway and DNS servers.

7. Install Required Linux Packages for Oracle RAC

Install the following required Linux packages on both Oracle RAC nodes in the cluster.

After installing Enterprise Linux, the next step is to verify and install all packages (RPMs) required by both Oracle Clusterware and Oracle RAC. The Oracle Universal Installer (OUI) performs checks on your machine during installation to verify that it meets the appropriate operating system package requirements. To ensure that these checks complete successfully, verify the software requirements documented in this section before starting the Oracle installs.

Although many of the required packages for Oracle were installed during the Enterprise Linux installation, several will be missing either because they were considered optional within the package group or simply didn't exist in any package group!

The packages listed in this section (or later versions) are required for Oracle grid infrastructure 11g Release 2 and Oracle RAC 11g Release 2 running on the Enterprise Linux 5 platform.

32-bit (x86) Installations

binutils-2.17.50.0.6
compat-libstdc++-33-3.2.3
elfutils-libelf-0.125
elfutils-libelf-devel-0.125
elfutils-libelf-devel-static-0.125
gcc-4.1.2
gcc-c++-4.1.2
glibc-2.5-24
glibc-common-2.5
glibc-devel-2.5
glibc-headers-2.5
kernel-headers-2.6.18
ksh-20060214
libaio-0.3.106
libaio-devel-0.3.106
libgcc-4.1.2
libgomp-4.1.2
libstdc++-4.1.2
libstdc++-devel-4.1.2
make-3.81
sysstat-7.0.2
unixODBC-2.2.11
unixODBC-devel-2.2.11

Each of the packages listed above can be found on CD #1, CD #2, and CD #3 of the Enterprise Linux 5 - (x86) CDs. While it is possible to query each individual package to determine which ones are missing and need to be installed, an easier method is to run the rpm -Uvh PackageName command from the three CDs as follows. For packages that already exist and are up to date, the RPM command will simply ignore the install and print a warning message to the console that the package is already installed.

# From Enterprise Linux 5.4 (x86) - [CD #1]
mkdir -p /media/cdrom

mount -r /dev/cdrom /media/cdrom

cd /media/cdrom/Server

rpm -Uvh binutils-2.*

rpm -Uvh elfutils-libelf-0.*

rpm -Uvh glibc-2.*

rpm -Uvh glibc-common-2.*

rpm -Uvh kernel-headers-2.*

rpm -Uvh ksh-2*

rpm -Uvh libaio-0.*

rpm -Uvh libgcc-4.*

rpm -Uvh libstdc++-4.*

rpm -Uvh make-3.*

cd /

eject

# From Enterprise Linux 5.4 (x86) - [CD #2]
mount -r /dev/cdrom /media/cdrom

cd /media/cdrom/Server

rpm -Uvh elfutils-libelf-devel-*

rpm -Uvh gcc-4.*

rpm -Uvh gcc-c++-4.*

rpm -Uvh glibc-devel-2.*

rpm -Uvh glibc-headers-2.*

rpm -Uvh libgomp-4.*

rpm -Uvh libstdc++-devel-4.*

rpm -Uvh unixODBC-2.*

cd /

eject

# From Enterprise Linux 5.4 (x86) - [CD #3]
mount -r /dev/cdrom /media/cdrom

cd /media/cdrom/Server

rpm -Uvh compat-libstdc++-33*


rpm -Uvh libaio-devel-0.*

rpm -Uvh sysstat-7.*

rpm -Uvh unixODBC-devel-2.*

cd /

eject

64-bit (x86_64) Installations

binutils-2.17.50.0.6
compat-libstdc++-33-3.2.3
compat-libstdc++-33-3.2.3 (32 bit)
elfutils-libelf-0.125
elfutils-libelf-devel-0.125
elfutils-libelf-devel-static-0.125
gcc-4.1.2
gcc-c++-4.1.2
glibc-2.5-24
glibc-2.5-24 (32 bit)
glibc-common-2.5
glibc-devel-2.5
glibc-devel-2.5 (32 bit)
glibc-headers-2.5
ksh-20060214
libaio-0.3.106
libaio-0.3.106 (32 bit)
libaio-devel-0.3.106
libaio-devel-0.3.106 (32 bit)
libgcc-4.1.2
libgcc-4.1.2 (32 bit)
libstdc++-4.1.2
libstdc++-4.1.2 (32 bit)
libstdc++-devel-4.1.2
make-3.81
sysstat-7.0.2
unixODBC-2.2.11
unixODBC-2.2.11 (32 bit)
unixODBC-devel-2.2.11
unixODBC-devel-2.2.11 (32 bit)

Each of the packages listed above can be found on CD #1, CD #2, CD #3, and CD #4 of the Enterprise Linux 5 - (x86_64) CDs. While it is possible to query each individual package to determine which ones are missing and need to be installed, an easier method is to run the rpm -Uvh PackageName command from the four CDs as follows. For packages that already exist and are up to date, the RPM command will simply ignore the install and print a warning message to the console that the package is already installed.

# From Enterprise Linux 5.4 (x86_64) - [CD #1]
mkdir -p /media/cdrom

mount -r /dev/cdrom /media/cdrom

cd /media/cdrom/Server

rpm -Uvh binutils-2.*

rpm -Uvh elfutils-libelf-0.*

rpm -Uvh glibc-2.*

rpm -Uvh glibc-common-2.*

rpm -Uvh ksh-2*

rpm -Uvh libaio-0.*

rpm -Uvh libgcc-4.*

rpm -Uvh libstdc++-4.*

rpm -Uvh make-3.*

cd /

eject

# From Enterprise Linux 5.4 (x86_64) - [CD #2]

mount -r /dev/cdrom /media/cdrom

cd /media/cdrom/Server

rpm -Uvh elfutils-libelf-devel-*

rpm -Uvh gcc-4.*

rpm -Uvh gcc-c++-4.*

rpm -Uvh glibc-devel-2.*

rpm -Uvh glibc-headers-2.*

rpm -Uvh libstdc++-devel-4.*

rpm -Uvh unixODBC-2.*

cd /

eject

# From Enterprise Linux 5.4 (x86_64) - [CD #3]
mount -r /dev/cdrom /media/cdrom

cd /media/cdrom/Server

rpm -Uvh compat-libstdc++-33*

rpm -Uvh libaio-devel-0.*

rpm -Uvh unixODBC-devel-2.*

cd /

eject

# From Enterprise Linux 5.4 (x86_64) - [CD #4]


mount -r /dev/cdrom /media/cdrom

cd /media/cdrom/Server

rpm -Uvh sysstat-7.*

cd /

eject
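Rather than waiting for the OUI prerequisite checks, a quick way to confirm nothing was missed is to query the RPM database for each required package by name. The loop below is only a convenience sketch and is not part of the original install steps; the package names follow the x86_64 list above (for x86, also add kernel-headers and libgomp):

# Sketch: report any required packages that are still missing (run as root on both nodes)
for pkg in binutils compat-libstdc++-33 elfutils-libelf elfutils-libelf-devel \
           elfutils-libelf-devel-static gcc gcc-c++ glibc glibc-common glibc-devel \
           glibc-headers ksh libaio libaio-devel libgcc libstdc++ libstdc++-devel \
           make sysstat unixODBC unixODBC-devel
do
    rpm -q ${pkg} > /dev/null 2>&1 || echo "MISSING: ${pkg}"
done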

8. Network Configuration

Perform the following network configuration on both Oracle RAC nodes in the cluster.

Although we configured several of the network settings during the Linux installation, it is important to not skip this section as it contains critical steps to check that you have the networking hardware and Internet Protocol (IP) addresses required for an Oracle grid infrastructure for a cluster installation.

Network Hardware Requirements

The following is a list of hardware requirements for network configuration:

Each Oracle RAC node must have at least two network adapters or network interface cards (NICs): one for the public network interface, and one for the private network interface (the interconnect). To use multiple NICs for the public network or for the private network, Oracle recommends that you use NIC bonding. Use separate bonding for the public and private networks (i.e. bond0 for the public network and bond1 for the private network), because during installation each interface is defined as a public or private interface. NIC bonding is not covered in this article.

The public interface names associated with the network adapters for each network must be the same on all nodes, and the private interface names associated with the network adapters should be the same on all nodes.

For example, with our two-node cluster, you cannot configure network adapters on racnode1 with eth0 as the public interface, but on

racnode2 have eth1 as the public interface. Public interface names must be the same, so you must configure eth0 as public on both nodes.

You should configure the private interfaces on the same network adapters as well. If eth1 is the private interface for racnode1, then eth1 must

be the private interface for racnode2.

For the public network, each network adapter must support TCP/IP.

For the private network, the interconnect must support the user datagram protocol (UDP) using high-speed network adapters and switches that support TCP/IP (minimum requirement 1 Gigabit Ethernet).

UDP is the default interconnect protocol for Oracle RAC, and TCP is the interconnect protocol for Oracle Clusterware. You must use a switch for the interconnect. Oracle recommends that you use a dedicated switch.

Oracle does not support token-rings or crossover cables for the interconnect.

For the private network, the endpoints of all designated interconnect interfaces must be completely reachable on the network. There should be no node that is not connected to every private network interface. You can test if an interconnect interface is reachable using ping.

During installation of Oracle grid infrastructure, you are asked to identify the planned use for each network interface that OUI detects on your cluster node. You must identify each interface as a public interface, a private interface, or not used, and you must use the same private interfaces for both Oracle Clusterware and Oracle RAC.

You can bond separate interfaces to a common interface to provide redundancy, in case of a NIC failure, but Oracle recommends that you do not create separate interfaces for Oracle Clusterware and Oracle RAC. If you use more than one NIC for the private interconnect, then Oracle recommends that you use NIC bonding. Note that multiple private interfaces provide load balancing but not failover, unless bonded.

Starting with Oracle Clusterware 11g Release 2, you no longer need to provide a private name or IP address for the interconnect. IP addresses on the subnet you identify as private are assigned as private IP addresses for cluster member nodes. You do not need to configure these addresses manually in a hosts directory. If you want name resolution for the interconnect, then you can configure private IP names in the hosts file or the DNS. However, Oracle Clusterware assigns interconnect addresses on the interface defined during installation as the private interface (eth1, for example), and to the subnet used for the private subnet. In practice, and for the purpose of this guide, I will continue to include a private name and IP address on each node for the RAC interconnect. It provides self-documentation and a set of end-points on the private network I can use for troubleshooting purposes:

192.168.2.151 racnode1-priv

192.168.2.152 racnode2-priv

In a production environment that uses iSCSI for network storage, it is highly recommended to configure a redundant third network interface (eth2, for example) for that storage traffic using a TCP/IP Offload Engine (TOE) card. For the sake of brevity, this article will configure the iSCSI network storage traffic on the same network as the RAC private interconnect (eth1). Combining the iSCSI storage traffic and cache fusion traffic for Oracle RAC on the same network interface works great for an inexpensive test system but should never be considered for production.

The basic idea of a TOE is to offload the processing of TCP/IP protocols from the host processor to the hardware on the adapter or in the system. A TOE is often embedded in a network interface card (NIC) or a host bus adapter (HBA) and is used to reduce the amount of TCP/IP processing handled by the CPU and server I/O subsystem and improve overall performance.

Assigning IP Address

Recall that each node requires at least two network interfaces configured: one for the private IP address and one for the public IP address. Prior to Oracle Clusterware 11g Release 2, all IP addresses needed to be manually assigned by the network administrator using static IP addresses; DHCP was never to be used. This would include the public IP address for the node, the RAC interconnect, the virtual IP address (VIP), and, new to 11g Release 2, the Single Client Access Name (SCAN) IP address(es). In fact, in all of my previous articles, I would emphatically state that DHCP should never be used to assign any of these IP addresses. Well, in 11g Release 2, you now have two options that can be used to assign IP addresses to each Oracle RAC node: Grid Naming Service (GNS), which uses DHCP, or the traditional method of manually assigning static IP addresses using DNS.

Grid Naming Service (GNS)

Starting with Oracle Clusterware 11g Release 2, a second method for assigning IP addresses named Grid Naming Service (GNS) was introduced that allows all private interconnect addresses, as well as most of the VIP addresses, to be assigned using DHCP. GNS and DHCP are key elements to Oracle's new Grid Plug and Play (GPnP) feature that, as Oracle states, eliminates per-node configuration data and the need for explicit add and delete nodes steps. GNS enables a dynamic grid infrastructure through the self-management of the network requirements for the cluster. While configuring IP addresses using GNS certainly has its benefits and offers more flexibility over manually defining static IP addresses, it does come at the cost of complexity and requires components not defined in this guide on building an inexpensive Oracle RAC. For example, activating GNS in a cluster requires a DHCP server on the public network, which I felt was out of the scope of this article.

To learn more about the benefits and how to configure GNS, please see Oracle Grid Infrastructure Installation Guide 11g Release 2 (11.2) for Linux .

Manually Assigning Static IP Address - (The DNS Method)

If you choose not to use GNS, manually defining static IP addresses is still available with Oracle Clusterware 11g Release 2 and will be the method used in this article to assign all required Oracle Clusterware networking components (public IP address for the node, RAC interconnect, virtual IP address, and SCAN).

Notice that the title of this section includes the phrase "The DNS Method". Oracle recommends that static IP addresses be manually configured in a domain name server (DNS) before starting the Oracle grid infrastructure installation. However, when building an inexpensive Oracle RAC, it is not always possible that you will have access to a DNS server. Previous to 11g Release 2, this would not present a huge obstacle as it was possible to define each IP address in the host file (/etc/hosts) on all nodes without the use of DNS. This would include the public IP address for the node, the RAC interconnect, and the virtual IP address (VIP).

Things, however, change a bit in Oracle grid infrastructure 11g Release 2.

Let's start with the RAC private interconnect. It is no longer a requirement to provide a private name or IP address for the interconnect during the Oracle grid infrastructure install (i.e. racnode1-priv or racnode2-priv). Oracle Clusterware now assigns interconnect addresses on the interface defined during installation as the private interface (eth1, for example), and to the subnet used for the private subnet, which for this article is 192.168.2.0. If you want name resolution for the interconnect, then you can configure private IP names in the hosts file or the DNS. In practice, and for the purpose of this guide, I will continue to include a private name and IP address on each node for the RAC interconnect. It provides self-documentation and a set of end-points on the private network I can use for troubleshooting purposes:

192.168.2.151 racnode1-priv

192.168.2.152 racnode2-priv

The public IP address for the node and the virtual IP address (VIP) remain the same in 11g Release 2. Oracle recommends defining the name and IP address for each to be resolved through DNS and included in the hosts file for each node. With the current release of Oracle grid infrastructure and previous releases, Oracle Clusterware has no problem resolving the public IP address for the node and the VIP using only a hosts file:

192.168.1.151 racnode1

192.168.1.251 racnode1-vip

192.168.1.152 racnode2

192.168.1.252 racnode2-vip

The Single Client Access Name (SCAN) virtual IP is new to 11g Release 2 and seems to be the one causing the most discussion! The SCAN must be configured in GNS or DNS for Round Robin resolution to three addresses (recommended) or at least one address. If you choose not to use GNS, then Oracle states the SCAN must be resolved through DNS and not through the hosts file. If the SCAN cannot be resolved through DNS (or GNS), the Cluster Verification Utility check will fail during the Oracle grid infrastructure installation. If you do not have access to a DNS, I provide an easy workaround in the section Configuring SCAN without DNS. The workaround involves modifying the nslookup utility and should be performed before installing Oracle grid infrastructure.
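The workaround, which is covered in detail in that section, boils down to renaming the real nslookup binary and dropping in a small wrapper script that answers for the SCAN name locally. Purely as an illustration of the idea (and not the exact script from that section), it looks roughly like the following; the DNS server address echoed by the wrapper is just a placeholder:

[root@racnode1 ~]# mv /usr/bin/nslookup /usr/bin/nslookup.original
[root@racnode1 ~]# cat > /usr/bin/nslookup <<'EOF'
#!/bin/bash
# Wrapper sketch: fake a DNS answer for the SCAN name, pass everything else through
HOSTNAME=${1}
if [[ $HOSTNAME = "racnode-cluster-scan" ]]; then
    echo "Server:         192.168.1.1"        # placeholder DNS server address
    echo "Address:        192.168.1.1#53"
    echo "Non-authoritative answer:"
    echo "Name:   racnode-cluster-scan"
    echo "Address: 192.168.1.187"
else
    /usr/bin/nslookup.original $HOSTNAME
fi
EOF
[root@racnode1 ~]# chmod 755 /usr/bin/nslookup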

Single Client Access Name (SCAN) for the Cluster

If you have ever been tasked with extending an Oracle RAC cluster by adding a new node (or shrinking a RAC cluster by removing a node), then you know the pain of going through a list of all clients and updating their SQL*Net or JDBC configuration to reflect the new or deleted node! To address this problem, Oracle 11g Release 2 introduced a new feature known as Single Client Access Name or SCAN for short. SCAN is a new feature that provides a single host name for clients to access an Oracle Database running in a cluster. Clients using SCAN do not need to change their TNS configuration if you add or remove nodes in the cluster. The SCAN resource and its associated IP address(es) provide a stable name for clients to use for connections, independent of the nodes that make up the cluster. You will be asked to provide the host name and up to three IP addresses to be used for the SCAN resource during the interview phase of the Oracle grid infrastructure installation. For high availability and scalability, Oracle recommends that you configure the SCAN name so that it resolves to three IP addresses. At a minimum, the SCAN must resolve to at least one address.

The SCAN virtual IP name is similar to the names used for a node's virtual IP addresses, such as racnode1-vip. However, unlike a virtual IP, the SCAN is associated with the entire cluster, rather than an individual node, and can be associated with multiple IP addresses, not just one address. Note that SCAN addresses, virtual IP addresses, and public IP addresses must all be on the same subnet.

The SCAN should be configured so that it is resolvable either by using Grid Naming Service (GNS) within the cluster, or by using Domain Name Service(DNS) resolution.

In this article, I will configure SCAN to resolve to only one, manually configured static IP address using the DNS method (but not actually defining it in DNS):

192.168.1.187 racnode-cluster-scan


Configuring Public and Private Network

In our two node example, we need to configure the network on both Oracle RAC nodes for access to the public network as well as their private interconnect.

The easiest way to configure network settings in Enterprise Linux is with the program "Network Configuration". Network Configuration is a GUI application that can be started from the command-line as the "root" user account as follows:

[root@racnode1 ~]# /usr/bin/system-config-network &

Using the Network Configuration application, you need to configure both NIC devices as well as the /etc/hosts file. Both of these tasks can be completed using the Network Configuration GUI. Notice that the /etc/hosts settings are the same for both nodes and that I removed any entry that has to do with IPv6. For example:

::1 localhost6.localdomain6 localhost6

Our example Oracle RAC configuration will use the following network settings:

Oracle RAC Node 1 - (racnode1)

Device IP Address Subnet Gateway Purpose

eth0 192.168.1.151 255.255.255.0 192.168.1.1 Connects racnode1 to the public network

eth1 192.168.2.151 255.255.255.0 Connects racnode1 (interconnect) to racnode2 (racnode2-priv)

/etc/hosts

# Do not remove the following line, or various programs

# that require network functionality will fail.

127.0.0.1 localhost.localdomain localhost

# Public Network - (eth0)

192.168.1.151 racnode1

192.168.1.152 racnode2

# Private Interconnect - (eth1)

192.168.2.151 racnode1-priv

192.168.2.152 racnode2-priv

# Public Virtual IP (VIP) addresses - (eth0:1)

192.168.1.251 racnode1-vip

192.168.1.252 racnode2-vip

# Single Client Access Name (SCAN)

192.168.1.187 racnode-cluster-scan

# Private Storage Network for Openfiler - (eth1)

192.168.1.195 openfiler1

192.168.2.195 openfiler1-priv

# Miscellaneous Nodes

192.168.1.1 router

192.168.1.105 packmule

192.168.1.106 melody

192.168.1.121 domo

192.168.1.122 switch1

192.168.1.125 oemprod

192.168.1.245 accesspoint

Oracle RAC Node 2 - (racnode2)

Device IP Address Subnet Gateway Purpose

eth0 192.168.1.152 255.255.255.0 192.168.1.1 Connects racnode2 to the public network

eth1 192.168.2.152 255.255.255.0 Connects racnode2 (interconnect) to racnode1 (racnode1-priv)

/etc/hosts

# Do not remove the following line, or various programs

# that require network functionality will fail.

127.0.0.1 localhost.localdomain localhost

# Public Network - (eth0)

192.168.1.151 racnode1

192.168.1.152 racnode2

# Private Interconnect - (eth1)

192.168.2.151 racnode1-priv

192.168.2.152 racnode2-priv

# Public Virtual IP (VIP) addresses - (eth0:1)

192.168.1.251 racnode1-vip

192.168.1.252 racnode2-vip

# Single Client Access Name (SCAN)

192.168.1.187 racnode-cluster-scan


# Private Storage Network for Openfiler - (eth1)

192.168.1.195 openfiler1

192.168.2.195 openfiler1-priv

# Miscellaneous Nodes

192.168.1.1 router

192.168.1.105 packmule

192.168.1.106 melody

192.168.1.121 domo

192.168.1.122 switch1

192.168.1.125 oemprod

192.168.1.245 accesspoint

In the screen shots below, only Oracle RAC Node 1 (racnode1) is shown. Be sure to apply all the proper network settings on both Oracle RAC nodes.

Figure 2: Network Configuration Screen, Node 1 (racnode1)

Figure 3: Ethernet Device Screen, eth0 (racnode1)


Figure 4: Ethernet Device Screen, eth1 (racnode1)

Figure 5: Network Configuration Screen, /etc/hosts (racnode1)

Once the network is configured, you can use the ifconfig command to verify everything is working. The following example is from racnode1:

[root@racnode1 ~]# /sbin/ifconfig -a


eth0 Link encap:Ethernet HWaddr 00:14:6C:76:5C:71

inet addr:192.168.1.151 Bcast:192.168.1.255 Mask:255.255.255.0

inet6 addr: fe80::214:6cff:fe76:5c71/64 Scope:Link

UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

RX packets:759780 errors:0 dropped:0 overruns:0 frame:0

TX packets:771948 errors:0 dropped:0 overruns:0 carrier:0

collisions:0 txqueuelen:1000

RX bytes:672708275 (641.5 MiB) TX bytes:727861314 (694.1 MiB)

Interrupt:177 Base address:0xcf00

eth1 Link encap:Ethernet HWaddr 00:0E:0C:64:D1:E5

inet addr:192.168.2.151 Bcast:192.168.2.255 Mask:255.255.255.0

inet6 addr: fe80::20e:cff:fe64:d1e5/64 Scope:Link

UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

RX packets:120 errors:0 dropped:0 overruns:0 frame:0

TX packets:48 errors:0 dropped:0 overruns:0 carrier:0

collisions:0 txqueuelen:1000

RX bytes:24544 (23.9 KiB) TX bytes:8634 (8.4 KiB)

Base address:0xddc0 Memory:fe9c0000-fe9e0000

lo Link encap:Local Loopback

inet addr:127.0.0.1 Mask:255.0.0.0

inet6 addr: ::1/128 Scope:Host

UP LOOPBACK RUNNING MTU:16436 Metric:1

RX packets:3191 errors:0 dropped:0 overruns:0 frame:0

TX packets:3191 errors:0 dropped:0 overruns:0 carrier:0

collisions:0 txqueuelen:0

RX bytes:4296868 (4.0 MiB) TX bytes:4296868 (4.0 MiB)

sit0 Link encap:IPv6-in-IPv4

NOARP MTU:1480 Metric:1

RX packets:0 errors:0 dropped:0 overruns:0 frame:0

TX packets:0 errors:0 dropped:0 overruns:0 carrier:0

collisions:0 txqueuelen:0

RX bytes:0 (0.0 b) TX bytes:0 (0.0 b)
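As an additional (optional) sanity check once both nodes are on the network and the /etc/hosts entries shown above are in place, a few pings from racnode1 confirm that the public and private paths to the other machines respond; run the equivalent checks from racnode2 as well. This is only a suggested spot check, not a step from the installer dialogs:

[root@racnode1 ~]# ping -c 3 racnode2          # public network (eth0)
[root@racnode1 ~]# ping -c 3 racnode2-priv     # private interconnect (eth1)
[root@racnode1 ~]# ping -c 3 openfiler1-priv   # storage network (eth1), once Openfiler is installed later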

Confirm the RAC Node Name is Not Listed in Loopback Address

Ensure that the node names (racnode1 or racnode2) are not included for the loopback address in the /etc/hosts file. If the machine name is listed in the loopback address entry as below:

127.0.0.1        racnode1 localhost.localdomain localhost

it will need to be removed as shown below:

127.0.0.1 localhost.localdomain localhost

If the RAC node name is listed for the loopback address, you will receive the following error during the RAC installation:

ORA-00603: ORACLE server session terminated by fatal error

or

ORA-29702: error occurred in Cluster Group Service operation
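A quick way to double-check this on each node is to look at the loopback entry directly; it should list only the localhost names, along these lines:

[root@racnode1 ~]# grep "^127.0.0.1" /etc/hosts
127.0.0.1       localhost.localdomain localhost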

Check and turn off UDP ICMP rejections

During the Linux installation process, I indicated to not configure the firewall option. By default the option to configure a firewall is selected by the installer. This has burned me several times so I like to do a double-check that the firewall option is not configured and to ensure udp ICMP filtering is turned off.

If UDP ICMP is blocked or rejected by the firewall, the Oracle Clusterware software will crash after several minutes of running. When the Oracle Clusterware process fails, you will have something similar to the following in the <machine_name>_evmocr.log file:

08/29/2005 22:17:19

oac_init:2: Could not connect to server, clsc retcode = 9

08/29/2005 22:17:19

a_init:12!: Client init unsuccessful : [32]

ibctx:1:ERROR: INVALID FORMAT

proprinit:problem reading the bootblock or superbloc 22

When experiencing this type of error, the solution is to remove the UDP ICMP (iptables) rejection rule - or to simply have the firewall option turned off. The Oracle Clusterware software will then start to operate normally and not crash. The following commands should be executed as the root user account:

1.

Check to ensure that the firewall option is turned off. If the firewall option is stopped (like it is in my example below) you do not have to proceed with the following steps.

[root@racnode1 ~]# /etc/rc.d/init.d/iptables status
Firewall is stopped.

2.


If the firewall option is operating you will need to first manually disable UDP ICMP rejections:

[root@racnode1 ~]# /etc/rc.d/init.d/iptables stop
Flushing firewall rules:                                   [  OK  ]
Setting chains to policy ACCEPT: filter                    [  OK  ]
Unloading iptables modules:                                [  OK  ]

3.

Then, to turn UDP ICMP rejections off for the next server reboot (which should always be turned off):

[root@racnode1 ~]# chkconfig iptables off
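If you want to double-check that the change stuck, chkconfig can list the runlevel settings; with the firewall disabled for future reboots you should see something along these lines:

[root@racnode1 ~]# chkconfig --list iptables
iptables        0:off   1:off   2:off   3:off   4:off   5:off   6:off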


9. Cluster Time Synchronization Service

Perform the following Cluster Time Synchronization Service configuration on both Oracle RAC nodes in the cluster.

Oracle Clusterware 11g Release 2 and later requires time synchronization across all nodes within a cluster where Oracle RAC is deployed. Oracle provides two options for time synchronization: an operating system configured network time protocol (NTP), or the new Oracle Cluster Time Synchronization Service (CTSS). Oracle Cluster Time Synchronization Service (ctssd) is designed for organizations whose Oracle RAC databases are unable to access NTP services.

Configuring NTP is outside the scope of this article; this guide will therefore rely on the Cluster Time Synchronization Service as the network time protocol.

Configure Cluster Time Synchronization Service - (CTSS)

If you want to use Cluster Time Synchronization Service to provide synchronization service in the cluster, then de-configure and de-install the Network Time Protocol (NTP).

To deactivate the NTP service, you must stop the existing ntpd service, disable it from the initialization sequences and remove the ntp.conf file. To

complete these steps on Oracle Enterprise Linux, run the following commands as the root user on both Oracle RAC nodes:

[root@racnode1 ~]# /sbin/service ntpd stop
[root@racnode1 ~]# chkconfig ntpd off
[root@racnode1 ~]# mv /etc/ntp.conf /etc/ntp.conf.original

Also remove the following file:

[root@racnode1 ~]# rm /var/run/ntpd.pid

This file maintains the pid for the NTP daemon.

When the installer finds that the NTP protocol is not active, the Cluster Time Synchronization Service is automatically installed in active mode and synchronizes the time across the nodes. If NTP is found configured, then the Cluster Time Synchronization Service is started in observer mode, and no active time synchronization is performed by Oracle Clusterware within the cluster.

To confirm that ctssd is active after installation, enter the following command as the Grid installation owner ( grid):

[grid@racnode1 ~]$ crsctl check ctss
CRS-4701: The Cluster Time Synchronization Service is in Active mode.
CRS-4702: Offset (in msec): 0

Configure Network Time Protocol - (only if not using CTSS as documented above)

Note: Please note that this guide will use Cluster Time Synchronization Service for time synchronization across both Oracle RAC nodes in the cluster. This section is provided for documentation purposes only and can be used by organizations already set up to use NTP within their domain.

If you are using NTP, and you prefer to continue using it instead of Cluster Time Synchronization Service, then you need to modify the NTP initialization file to set the -x flag, which prevents time from being adjusted backward. Restart the network time protocol daemon after you complete this task.

To do this on Oracle Enterprise Linux, Red Hat Linux, and Asianux systems, edit the /etc/sysconfig/ntpd file to add the -x flag, as in the following

example:

# Drop root to id 'ntp:ntp' by default.

OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"

# Set to 'yes' to sync hw clock after successful ntpdate

SYNC_HWCLOCK=no

# Additional options for ntpdate

NTPDATE_OPTIONS=""

Then, restart the NTP service.

# /sbin/service ntp restart

On SUSE systems, modify the configuration file /etc/sysconfig/ntp with the following settings:

NTPD_OPTIONS="-x -u ntp"


Restart the daemon using the following command:

# service ntp restart

10. Install Openfiler

Perform the following installation on the network storage server (openfiler1).

With the network configured on both Oracle RAC nodes, the next step is to install the Openfiler software to the network storage server (openfiler1). Later in this article, the network storage server will be configured as an iSCSI storage device for all Oracle Clusterware and Oracle RAC shared storage requirements.

Powered by rPath Linux, Openfiler is a free browser-based network storage management utility that delivers file-based Network Attached Storage (NAS) and block-based Storage Area Networking (SAN) in a single framework. The entire software stack interfaces with open source applications such as Apache, Samba, LVM2, ext3, Linux NFS and iSCSI Enterprise Target. Openfiler combines these ubiquitous technologies into a small, easy to manage solution fronted by a powerful web-based management interface.

Openfiler supports CIFS, NFS, HTTP/DAV and FTP; however, we will only be making use of its iSCSI capabilities to implement an inexpensive SAN for the shared storage components required by Oracle RAC 11g. The operating system and Openfiler application will be installed on one internal SATA disk. A second internal 73GB 15K SCSI hard disk will be configured as a single "Volume Group" that will be used for all shared disk storage requirements. The Openfiler server will be configured to use this volume group for iSCSI based storage and will be used in our Oracle RAC 11g configuration to store the shared files required by Oracle Clusterware and the Oracle RAC database.

Please be aware that any type of hard disk (internal or external) should work for database storage as long as it can be recognized by the network storage server (Openfiler) and has adequate space. For example, I could have made an extra partition on the 500GB internal SATA disk for the iSCSI target, but decided to make use of the faster SCSI disk for this example.

To learn more about Openfiler, please visit their website at http://www.openfiler.com/

Download Openfiler

Use the links below to download Openfiler NAS/SAN Appliance, version 2.3 (Final Release) for either x86 or x86_64 depending on your hardware architecture. This guide uses x86_64. After downloading Openfiler, you will then need to burn the ISO image to CD.

32-bit (x86) Installations

openfiler-2.3-x86-disc1.iso (322 MB)

64-bit (x86_64) Installations

openfiler-2.3-x86_64-disc1.iso (336 MB)

If you are downloading the above ISO file to a MS Windows machine, there are many options for burning these images (ISO files) to a CD. You may already be familiar with and have the proper software to burn images to CD. If you are not familiar with this process and do not have the required software to burn images to CD, here are just two (of many) software packages that can be used:

UltraISO
Magic ISO Maker

Install Openfiler

This section provides a summary of the screens used to install the Openfiler software. For the purpose of this article, I opted to install Openfiler with all default options. The only manual change required was for configuring the local network settings.

Once the install has completed, the server will reboot to make sure all required components, services and drivers are started and recognized. After the reboot, any external hard drives (if connected) will be discovered by the Openfiler server.

For more detailed installation instructions, please visit http://www.openfiler.com/learn/. I would suggest, however, that the instructions I have provided below be used for this Oracle RAC 11g configuration.

Before installing the Openfiler software to the network storage server, you should have both NIC interfaces (cards) installed and any external hard drives connected and turned on (if you will be using external hard drives).

After downloading and burning the Openfiler ISO image (ISO file) to CD, insert the CD into the network storage server ( openfiler1 in this example),

power it on, and answer the installation screen prompts as noted below.

Boot Screen
The first screen is the Openfiler boot screen. At the boot: prompt, hit [Enter] to start the installation process.

Media Test
When asked to test the CD media, tab over to [Skip] and hit [Enter]. If there were any errors, the media burning software would have warned us. After several seconds, the installer should then detect the video card, monitor, and mouse. The installer then goes into GUI mode.

Welcome to Openfiler NSA
At the welcome screen, click [Next] to continue.

Keyboard Configuration
The next screen prompts you for the Keyboard settings. Make the appropriate selection for your configuration.


Disk Partitioning Setup
The next screen asks whether to perform disk partitioning using "Automatic Partitioning" or "Manual Partitioning with Disk Druid". Although the official Openfiler documentation suggests using Manual Partitioning, I opted to use "Automatic Partitioning" given the simplicity of my example configuration.

Select [Automatically partition] and click [Next] to continue.

Automatic Partitioning
If there were a previous installation of Linux on this machine, the next screen will ask if you want to "remove" or "keep" old partitions. Select the option to [Remove all partitions on this system]. For my example configuration, I selected ONLY the 500GB SATA internal hard drive [sda] for the operating system and Openfiler application installation. I de-selected the 73GB SCSI internal hard drive since this disk will be used exclusively in the next section to create a single "Volume Group" that will be used for all iSCSI based shared disk storage requirements for Oracle Clusterware and Oracle RAC.

I also kept the checkbox [Review (and modify if needed) the partitions created] selected. Click [Next] to continue.

You will then be prompted with a dialog window asking if you really want to remove all partitions. Click [Yes] to acknowledge this warning.

Partitioning
The installer will then allow you to view (and modify if needed) the disk partitions it automatically chose for the hard disks selected in the previous screen. In almost all cases, the installer will choose 100MB for /boot, an adequate amount of swap, and the rest going to the root (/) partition for that disk (or disks). In this example, I am satisfied with the installer's recommended partitioning for /dev/sda.

The installer will also show any other internal hard disks it discovered. For my example configuration, the installer found the 73GB SCSI internal hard drive as /dev/sdb. For now, I will "Delete" any and all partitions on this drive (there was only one, /dev/sdb1). In the next section, I will create the required partition for this particular hard disk.

Network Configuration
I made sure to install both NIC interfaces (cards) in the network storage server before starting the Openfiler installation. This screen should have successfully detected each of the network devices.

First, make sure that each of the network devices is checked to [Active on boot]. The installer may choose to not activate eth1 by default.

Second, [Edit] both eth0 and eth1 as follows. You may choose to use different IP addresses for both eth0 and eth1 and that is OK. You must,

however, configure eth1 (the storage network) to be on the same subnet you configured for eth1 on racnode1 and racnode2:

eth0:
- Check off the option to [Configure using DHCP]
- Leave the [Activate on boot] checked
- IP Address: 192.168.1.195
- Netmask: 255.255.255.0

eth1:
- Check off the option to [Configure using DHCP]
- Leave the [Activate on boot] checked
- IP Address: 192.168.2.195
- Netmask: 255.255.255.0

Continue by setting your hostname manually. I used a hostname of " openfiler1". Finish this dialog off by supplying your gateway and DNS servers.

Time Zone Selection
The next screen allows you to configure your time zone information. Make the appropriate selection for your location.

Set Root Password
Select a root password and click [Next] to continue.

About to Install
This screen is basically a confirmation screen. Click [Next] to start the installation.

Congratulations
And that's it. You have successfully installed Openfiler on the network storage server. The installer will eject the CD from the CD-ROM drive. Take out the CD and click [Reboot] to reboot the system.

If everything was successful after the reboot, you should now be presented with a text login screen and the URL to use for administering the Openfiler server.

Modify /etc/hosts File on Openfiler Server
Although not mandatory, I typically copy the contents of the /etc/hosts file from one of the Oracle RAC nodes to the new Openfiler server. This allows convenient name resolution when testing the network for the cluster.
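For example, assuming root SSH access to the storage server using the root password set during the Openfiler install, copying the file from racnode1 over the public network would look something like this (a convenience step only, not a requirement):

[root@racnode1 ~]# scp /etc/hosts root@192.168.1.195:/etc/hosts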

11. Configure iSCSI Volumes using Openfiler

Perform the following configuration tasks on the network storage server (openfiler1).

Openfiler administration is performed using the Openfiler Storage Control Center, a browser-based tool accessed over an https connection on port 446. For example:

https://openfiler1.idevelopment.info:446/

From the Openfiler Storage Control Center home page, log in as an administrator. The default administration login credentials for Openfiler are:

Username: openfiler

Password: password

The first page the administrator sees is the [Status] / [System Information] screen.

To use Openfiler as an iSCSI storage server, we have to perform six major tasks: set up iSCSI services, configure network access, identify and partition the physical storage, create a new volume group, create all logical volumes, and finally, create new iSCSI targets for each of the logical volumes.

Services

To control services, we use the Openfiler Storage Control Center and navigate to [Services] / [Manage Services]:

Figure 6: Enable iSCSI Openfiler Service

To enable the iSCSI service, click on the 'Enable' link under the 'iSCSI target server' service name. After that, the 'iSCSI target server' status should change to 'Enabled'.

The ietd program implements the user level part of iSCSI Enterprise Target software for building an iSCSI storage system on Linux. With the iSCSI

target enabled, we should be able to SSH into the Openfiler server and see the iscsi-target service running:

[root@openfiler1 ~]# service iscsi-target status

ietd (pid 14243) is running...

Network Access Configuration

The next step is to configure network access in Openfiler to identify both Oracle RAC nodes (racnode1 and racnode2) that will need to access the iSCSI volumes through the storage (private) network. Note that iSCSI logical volumes will be created later on in this section. Also note that this step does not actually grant the appropriate permissions to the iSCSI volumes required by both Oracle RAC nodes. That will be accomplished later in this section by updating the ACL for each new logical volume.

As in the previous section, configuring network access is accomplished using the Openfiler Storage Control Center by navigating to [System] / [Network Setup]. The "Network Access Configuration" section (at the bottom of the page) allows an administrator to set up networks and/or hosts that will be allowed to access resources exported by the Openfiler appliance. For the purpose of this article, we will want to add both Oracle RAC nodes individually rather than allowing the entire 192.168.2.0 network to have access to Openfiler resources.

When entering each of the Oracle RAC nodes, note that the 'Name' field is just a logical name used for reference only. As a convention when entering nodes, I simply use the node name defined for that IP address. Next, when entering the actual node in the 'Network/Host' field, always use its IP address even though its host name may already be defined in your /etc/hosts file or DNS. Lastly, when entering actual hosts in our Class C network, use a subnet mask of 255.255.255.255.

It is important to remember that you will be entering the IP address of the private network ( eth1) for each of the RAC nodes in the cluster.

The following image shows the results of adding both Oracle RAC nodes:


Figure 7: Configure Openfiler Network Access for Oracle RAC Nodes

Physical Storage

In this section, we will be creating the three iSCSI volumes to be used as shared storage by both of the Oracle RAC nodes in the cluster. This involves multiple steps that will be performed on the internal 73GB 15K SCSI hard disk connected to the Openfiler server.

Storage devices like internal IDE/SATA/SCSI/SAS disks, storage arrays, external USB drives, external FireWire drives, or ANY other storage can be connected to the Openfiler server and served to the clients. Once these devices are discovered at the OS level, Openfiler Storage Control Center can be used to set up and manage all of that storage.

In our case, we have a 73GB internal SCSI hard drive for our shared storage needs. On the Openfiler server this drive is seen as /dev/sdb (MAXTOR ATLAS15K2_73SCA). To see this and to start the process of creating our iSCSI volumes, navigate to [Volumes] / [Block Devices] from the Openfiler Storage Control Center:

Figure 8: Openfiler Physical Storage - Block Device Management

Partitioning the Physical Disk

The first step we will perform is to create a single primary partition on the /dev/sdb internal hard disk. By clicking on the /dev/sdb link, we are presented with the options to 'Edit' or 'Create' a partition. Since we will be creating a single primary partition that spans the entire disk, most of the options can be left at their default setting where the only modification would be to change the 'Partition Type' from 'Extended partition' to 'Physical volume'. Here are the values I specified to create the primary partition on /dev/sdb:

Mode: Primary
Partition Type: Physical volume
Starting Cylinder: 1
Ending Cylinder: 8924

The size now shows 68.36 GB. To accept that, we click on the "Create" button. This results in a new partition (/dev/sdb1) on our internal hard disk:


Figure 9: Partition the Physical Volume

Volume Group Management

The next step is to create a Volume Group. We will be creating a single volume group named racdbvg that contains the newly created

primary partition.

From the Openfiler Storage Control Center, navigate to [Volumes] / [Volume Groups]. There we would see any existing volume groups, or none as in our case. Using the Volume Group Management screen, enter the name of the new volume group (racdbvg), click on the checkbox in front of /dev/sdb1 to select that partition, and finally click on the 'Add volume group' button. After that we are presented with the list that now shows our newly created volume group named "racdbvg":

Figure 10: New Volume Group Created

Logical Volumes

We can now create the three logical volumes in the newly created volume group ( racdbvg).

From the Openfiler Storage Control Center, navigate to [Volumes] / [Add Volume]. There we will see the newly created volume group (racdbvg) along with its block storage statistics. Also available at the bottom of this screen is the option to create a new volume in the selected volume group - (Create a volume in "racdbvg"). Use this screen to create the following three logical (iSCSI) volumes. After creating each logical volume, the application will point you to the "Manage Volumes" screen. You will then need to click back to the "Add Volume" tab to create the next logical volume until all three iSCSI volumes are created:


iSCSI / Logical Volumes

Volume Name Volume Description Required Space (MB) Filesystem Type

racdb-crs1 racdb - ASM CRS Volume 1 2,208 iSCSI

racdb-data1 racdb - ASM Data Volume 1 33,888 iSCSI

racdb-fra1 racdb - ASM FRA Volume 1 33,888 iSCSI

In effect we have created three iSCSI disks that can now be presented to iSCSI clients ( racnode1 and racnode2) on the network. The

"Manage Volumes" screen should look as follows:

Figure 11: New Logical (iSCSI) Volumes

iSCSI Targets

At this point, we have three iSCSI logical volumes. Before an iSCSI client can have access to them, however, an iSCSI target will need to be created for each of these three volumes. Each iSCSI logical volume will be mapped to a specific iSCSI target and the appropriate network access permissions to that target will be granted to both Oracle RAC nodes. For the purpose of this article, there will be a one-to-one mapping between an iSCSI logical volume and an iSCSI target.

There are three steps involved in creating and configuring an iSCSI target: create a unique Target IQN (basically, the universal name for the new iSCSI target), map one of the iSCSI logical volumes created in the previous section to the newly created iSCSI target, and finally, grant both of the Oracle RAC nodes access to the new iSCSI target. Please note that this process will need to be performed for each of the three iSCSI logical volumes created in the previous section.

For the purpose of this article, the following table lists the new iSCSI target names (the Target IQN) and which iSCSI logical volume it will be mapped to:

iSCSI Target / Logical Volume Mappings

Target IQN iSCSI Volume Name Volume Description

iqn.2006-01.com.openfiler:racdb.crs1 racdb-crs1 racdb - ASM CRS Volume 1

iqn.2006-01.com.openfiler:racdb.data1 racdb-data1 racdb - ASM Data Volume 1

iqn.2006-01.com.openfiler:racdb.fra1 racdb-fra1 racdb - ASM FRA Volume 1

We are now ready to create the three new iSCSI targets - one for each of the iSCSI logical volumes. The example below illustrates the three steps required to create a new iSCSI target by creating the Oracle Clusterware / racdb-crs1 target (iqn.2006-01.com.openfiler:racdb.crs1). This three step process will need to be repeated for each of the three new iSCSI targets listed in the table above.

Create New Target IQN

From the Openfiler Storage Control Center, navigate to [Volumes] / [iSCSI Targets]. Verify the grey sub-tab "Target Configuration" is selected. This page allows you to create a new iSCSI target. A default value is automatically generated for the name of the new iSCSI target (better known as the "Target IQN"). An example Target IQN is "iqn.2006-01.com.openfiler:tsn.ae4683b67fd3":


Figure 12: Create New iSCSI Target : Default Target IQN

I prefer to replace the last segment of the default Target IQN with something more meaningful. For the first iSCSI target (Oracle Clusterware / racdb-crs1), I will modify the default Target IQN by replacing the string " tsn.ae4683b67fd3" with " racdb.crs1" as shown in Figure 13 below:

Figure 13: Create New iSCSI Target : Replace Default Target IQN

Once you are satisfied with the new Target IQN, click the "Add" button. This will create a new iSCSI target and then bring up a page that allows you to modify a number of settings for the new iSCSI target. For the purpose of this article, none of the settings for the new iSCSI target need to be changed.

LUN Mapping

After creating the new iSCSI target, the next step is to map the appropriate iSCSI logical volumes to it. Under the "Target Configuration" sub-tab, verify the correct iSCSI target is selected in the section "Select iSCSI Target". If not, use the pull-down menu to select the correct iSCSI target and hit the "Change" button.

Next, click on the grey sub-tab named "LUN Mapping" (next to "Target Configuration" sub-tab). Locate the appropriate iSCSI logical volume (/dev/racdbvg/racdb-crs1 in this case) and click the "Map" button. You do not need to change any settings on this page. Your screen should look similar to Figure 14 after clicking the "Map" button for volume /dev/racdbvg/racdb-crs1:

Figure 14: Create New iSCSI Target : Map LUN

Network ACL

Before an iSCSI client can have access to the newly created iSCSI target, it needs to be granted the appropriate permissions. A while back, we configured network access in Openfiler for two hosts (the Oracle RAC nodes). These are the two nodes that will need to access the new iSCSI targets through the storage (private) network. We now need to grant both of the Oracle RAC nodes access to the new iSCSI target.

Click on the grey sub-tab named "Network ACL" (next to the "LUN Mapping" sub-tab). For the current iSCSI target, change the "Access" for both hosts from 'Deny' to 'Allow' and click the 'Update' button:


Figure 15: Create New iSCSI Target : Update Network ACL

Go back to the Create New Target IQN section and perform these three tasks for the remaining two iSCSI logical volumes, substituting the values found in the "iSCSI Target / Logical Volume Mappings" table.
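As an optional sanity check (this step is not part of the original workflow), the new targets and their LUN mappings can also be listed from the Openfiler server itself. The example assumes Openfiler 2.3's iSCSI Enterprise Target (IET) backend, which exposes its configuration under /proc/net/iet:

[root@openfiler1 ~]# cat /proc/net/iet/volume

Each of the three Target IQNs from the table above should appear with a lun:0 entry pointing at its corresponding logical volume path (for example, /dev/racdbvg/racdb-crs1).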

12. Configure iSCSI Volumes on Oracle RAC Nodes

Configure the iSCSI initiator on both Oracle RAC nodes in the cluster. Creating partitions, however, should only be executed on one of the nodes in the RAC cluster.

An iSCSI client can be any system (Linux, Unix, MS Windows, Apple Mac, etc.) for which iSCSI support (a driver) is available. In our case, the clients are two Linux servers, racnode1 and racnode2, running Oracle Enterprise Linux 5.4.

In this section we will be configuring the iSCSI software initiator on both of the Oracle RAC nodes. Oracle Enterprise Linux 5.4 includes the Open-iSCSI iSCSI software initiator which can be found in the iscsi-initiator-utils RPM. This is a change from previous versions of Oracle Enterprise Linux (4.x) which included the Linux iscsi-sfnet software driver developed as part of the Linux-iSCSI Project. All iSCSI management tasks like discovery and logins will use the command-line interface iscsiadm which is included with Open-iSCSI.

The iSCSI software initiator will be configured to automatically log in to the network storage server (openfiler1) and discover the iSCSI volumes created in the previous section. We will then go through the steps of creating persistent local SCSI device names (i.e. /dev/iscsi/crs1) for each of the iSCSI target names discovered, using udev. Having a consistent local SCSI device name, and knowing which iSCSI target it maps to, helps to differentiate between the three volumes when configuring ASM. Before we can do any of this, however, we must first install the iSCSI initiator software.

Note: This guide makes use of ASMLib 2.0 which is a support library for the Automatic Storage Management (ASM) feature of the Oracle Database. ASMLib will be used to label all iSCSI volumes used in this guide. By default, ASMLib already provides persistent paths and permissions for storage devices used with ASM. This feature eliminates the need for updating udev or devlabel files with storage device paths and permissions. For the purpose of this article and in practice, I still opt to create persistent local SCSI device names for each of the iSCSI target names discovered using udev. This provides a means of self-documentation which helps to quickly identify the name and location of each volume.

Installing the iSCSI (initiator) service

With Oracle Enterprise Linux 5.4, the Open-iSCSI iSCSI software initiator does not get installed by default. The software is included in the iscsi-initiator-utils package which can be found on CD #1. To determine if this package is installed (which in most cases, it will not be), perform the following on both Oracle RAC nodes:

[root@racnode1 ~]# rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n" | grep iscsi-initiator-utils

If the iscsi-initiator-utils package is not installed, load CD #1 into each of the Oracle RAC nodes and perform the following:

[root@racnode1 ~]# mount -r /dev/cdrom /media/cdrom
[root@racnode1 ~]# cd /media/cdrom/Server
[root@racnode1 ~]# rpm -Uvh iscsi-initiator-utils-*
[root@racnode1 ~]# cd /
[root@racnode1 ~]# eject

Verify the iscsi-initiator-utils package is now installed:

[root@racnode1 ~]# rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n" | grep iscsi-initiator-utils
iscsi-initiator-utils-6.2.0.871-0.10.el5 (x86_64)

Configure the iSCSI (initiator) service

After verifying that the iscsi-initiator-utils package is installed on both Oracle RAC nodes, start the iscsid service and enable it to automatically start when the system boots. We will also configure the iscsi service to start automatically, which logs in to the iSCSI targets needed at system startup.

[root@racnode1 ~]# service iscsid start
Turning off network shutdown. Starting iSCSI daemon:       [  OK  ]
                                                           [  OK  ]

[root@racnode1 ~]# chkconfig iscsid on
[root@racnode1 ~]# chkconfig iscsi on

Now that the iSCSI service is started, use the iscsiadm command-line interface to discover all available targets on the network storage server. This should be performed on both Oracle RAC nodes to verify the configuration is functioning properly:

[root@racnode1 ~]# iscsiadm -m discovery -t sendtargets -p openfiler1-priv
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.crs1
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.fra1
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.data1

Manually Log In to iSCSI Targets

At this point, the iSCSI initiator service has been started and each of the Oracle RAC nodes was able to discover the available targets from the network storage server. The next step is to manually log in to each of the available targets, which can be done using the iscsiadm command-line interface. This needs to be run on both Oracle RAC nodes. Note that I had to specify the IP address and not the host name of the network storage server (openfiler1-priv) - I believe this is required given the discovery (above) shows the targets using the IP address.

[root@racnode1 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.crs1 -p 192.168.2.195 -l
[root@racnode1 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.data1 -p 192.168.2.195 -l
[root@racnode1 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.fra1 -p 192.168.2.195 -l

Configure Automatic Log In

The next step is to ensure the client will automatically log in to each of the targets listed above when the machine is booted (or when the iSCSI initiator service is started/restarted). As with the manual log in process described above, perform the following on both Oracle RAC nodes:

[root@racnode1 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.crs1 -p 192.168.2.195 --op update -n node.startup -v automatic
[root@racnode1 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.data1 -p 192.168.2.195 --op update -n node.startup -v automatic
[root@racnode1 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.fra1 -p 192.168.2.195 --op update -n node.startup -v automatic
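As an optional verification (my own addition; it is not in the original article), the node record for a target can be dumped and filtered to confirm the setting took effect:

[root@racnode1 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.crs1 -p 192.168.2.195 | grep node.startup
node.startup = automatic

Repeat for the data1 and fra1 targets if desired.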

Create Persistent Local SCSI Device Names

In this section, we will go through the steps to create persistent local SCSI device names for each of the iSCSI target names. This will be done using udev. Having a consistent local SCSI device name, and knowing which iSCSI target it maps to, helps to differentiate between the three volumes when configuring ASM. Although this is not a strict requirement since we will be using ASMLib 2.0 for all volumes, it provides a means of self-documentation to quickly identify the name and location of each iSCSI volume.

When either of the Oracle RAC nodes boots and the iSCSI initiator service is started, it will automatically log in to each of the configured targets in a random fashion and map them to the next available local SCSI device name. For example, the target iqn.2006-01.com.openfiler:racdb.crs1 may get mapped to /dev/sdb. I can actually determine the current mappings for all targets by looking at the /dev/disk/by-path directory:

[root@racnode1 ~]# (cd /dev/disk/by-path; ls -l *openfiler* | awk '{FS=" "; print $9 " " $10 " " $11}')
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:racdb.crs1-lun-0 -> ../../sdb
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:racdb.data1-lun-0 -> ../../sdd
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:racdb.fra1-lun-0 -> ../../sdc

Using the output from the above listing, we can establish the following current mappings:

Current iSCSI Target Name to Local SCSI Device Name Mappings

iSCSI Target Name                        SCSI Device Name
iqn.2006-01.com.openfiler:racdb.crs1     /dev/sdb
iqn.2006-01.com.openfiler:racdb.data1    /dev/sdd
iqn.2006-01.com.openfiler:racdb.fra1     /dev/sdc

This mapping, however, may change every time the Oracle RAC node is rebooted. For example, after a reboot it may be determined that the iSCSI target iqn.2006-01.com.openfiler:racdb.crs1 gets mapped to the local SCSI device /dev/sdc. It is therefore impractical to rely on the local SCSI device name given there is no way to predict the iSCSI target mappings after a reboot.

What we need is a consistent device name we can reference (i.e. /dev/iscsi/crs1) that will always point to the appropriate iSCSI target through reboots. This is where the Dynamic Device Management tool named udev comes in. udev provides a dynamic device directory using symbolic links that point to the actual device using a configurable set of rules. When udev receives a device event (for example, the client logging in to an iSCSI target), it matches its configured rules against the available device attributes provided in sysfs to identify the device. Rules that match may provide additional device information or specify a device node name and multiple symlink names, and may instruct udev to run additional programs (a SHELL script for example) as part of the device event handling process.
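If you are curious which attributes udev actually has available to match against for one of these disks, EL5 includes the udevinfo utility. The device below is only an example taken from the mapping shown earlier; substitute whichever /dev/sd* device currently backs an iSCSI target on your node:

[root@racnode1 ~]# udevinfo -a -p /sys/block/sdb

The output walks the device chain and prints the KERNEL, BUS, and sysfs attributes that a rule such as the one created below can test.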

The first step is to create a new rules file. The file will be named /etc/udev/rules.d/55-openiscsi.rules and contain only a single line of name=value pairs used to receive the events we are interested in. It will also define a call-out SHELL script (/etc/udev/scripts/iscsidev.sh) to handle the event.

Create the following rules file /etc/udev/rules.d/55-openiscsi.rules on both Oracle RAC nodes:

..............................................

# /etc/udev/rules.d/55-openiscsi.rules

KERNEL=="sd*", BUS=="scsi", PROGRAM="/etc/udev/scripts/iscsidev.sh %b", SYMLINK+="iscsi/%c/part%n"


..............................................

We now need to create the UNIX SHELL script that will be called when this event is received. Let's first create a separate directory on both Oracle RAC nodes where udev scripts can be stored:

[root@racnode1 ~]# mkdir -p /etc/udev/scripts

Next, create the UNIX shell script /etc/udev/scripts/iscsidev.sh on both Oracle RAC nodes:

..............................................

#!/bin/sh

# FILE: /etc/udev/scripts/iscsidev.sh

BUS=${1}
HOST=${BUS%%:*}

[ -e /sys/class/iscsi_host ] || exit 1

file="/sys/class/iscsi_host/host${HOST}/device/session*/iscsi_session*/targetname"

target_name=$(cat ${file})

# This is not an Open-iSCSI drive
if [ -z "${target_name}" ]; then
    exit 1
fi

# Check if QNAP drive
check_qnap_target_name=${target_name%%:*}
if [ $check_qnap_target_name = "iqn.2004-04.com.qnap" ]; then
    target_name=`echo "${target_name%.*}"`
fi

echo "${target_name##*.}"

..............................................

After creating the UNIX SHELL script, make it executable:

[root@racnode1 ~]# chmod 755 /etc/udev/scripts/iscsidev.sh
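Although not required, the call-out script can be exercised by hand before restarting the iSCSI service. The SCSI bus id used below ("8:0:0:0") is purely illustrative; the leading number must correspond to one of the hostN directories that actually exists under /sys/class/iscsi_host on your node:

[root@racnode1 ~]# ls /sys/class/iscsi_host
[root@racnode1 ~]# /etc/udev/scripts/iscsidev.sh 8:0:0:0
crs1

The script should echo the short volume name (crs1, data1, or fra1), which is the %c value the udev rule uses to build the /dev/iscsi/<name>/part symlink.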

Now that udev is configured, restart the iSCSI service on both Oracle RAC nodes:

[root@racnode1 ~]# service iscsi stop
Logging out of session [sid: 6, target: iqn.2006-01.com.openfiler:racdb.crs1, portal: 192.168.2.195,3260]
Logging out of session [sid: 7, target: iqn.2006-01.com.openfiler:racdb.fra1, portal: 192.168.2.195,3260]
Logging out of session [sid: 8, target: iqn.2006-01.com.openfiler:racdb.data1, portal: 192.168.2.195,3260]
Logout of [sid: 6, target: iqn.2006-01.com.openfiler:racdb.crs1, portal: 192.168.2.195,3260]: successful
Logout of [sid: 7, target: iqn.2006-01.com.openfiler:racdb.fra1, portal: 192.168.2.195,3260]: successful
Logout of [sid: 8, target: iqn.2006-01.com.openfiler:racdb.data1, portal: 192.168.2.195,3260]: successful
Stopping iSCSI daemon:                                     [  OK  ]

[root@racnode1 ~]# service iscsi start
iscsid dead but pid file exists
Turning off network shutdown. Starting iSCSI daemon:       [  OK  ]
                                                           [  OK  ]
Setting up iSCSI targets: Logging in to [iface: default, target: iqn.2006-01.com.openfiler:racdb.crs1, portal: 192.168.2.195,3260]
Logging in to [iface: default, target: iqn.2006-01.com.openfiler:racdb.fra1, portal: 192.168.2.195,3260]
Logging in to [iface: default, target: iqn.2006-01.com.openfiler:racdb.data1, portal: 192.168.2.195,3260]
Login to [iface: default, target: iqn.2006-01.com.openfiler:racdb.crs1, portal: 192.168.2.195,3260]: successful
Login to [iface: default, target: iqn.2006-01.com.openfiler:racdb.fra1, portal: 192.168.2.195,3260]: successful
Login to [iface: default, target: iqn.2006-01.com.openfiler:racdb.data1, portal: 192.168.2.195,3260]: successful
                                                           [  OK  ]

Let's see if our hard work paid off:

[root@racnode1 ~]# ls -l /dev/iscsi/*
/dev/iscsi/crs1:
total 0
lrwxrwxrwx 1 root root 9 Nov  3 18:13 part -> ../../sdc

/dev/iscsi/data1:
total 0
lrwxrwxrwx 1 root root 9 Nov  3 18:13 part -> ../../sde

/dev/iscsi/fra1:
total 0
lrwxrwxrwx 1 root root 9 Nov  3 18:13 part -> ../../sdd

The listing above shows that udev did the job it was supposed to do! We now have a consistent set of local device names that can be used to reference the iSCSI targets. For example, we can safely assume that the device name /dev/iscsi/crs1/part will always reference the iSCSI target iqn.2006-01.com.openfiler:racdb.crs1. We now have a consistent iSCSI target name to local device name mapping which is described in the following table:


iSCSI Target Name to Local Device Name Mappings

iSCSI Target Name                        Local Device Name
iqn.2006-01.com.openfiler:racdb.crs1     /dev/iscsi/crs1/part
iqn.2006-01.com.openfiler:racdb.data1    /dev/iscsi/data1/part
iqn.2006-01.com.openfiler:racdb.fra1     /dev/iscsi/fra1/part

Create Partitions on iSCSI Volumes

We now need to create a single primary partition on each of the iSCSI volumes that spans the entire size of the volume. As mentioned earlier in this article, I will be using Automatic Storage Management (ASM) to store the shared files required for Oracle Clusterware, the physical database files (data/index files, online redo log files, and control files), and the Fast Recovery Area (FRA) for the clustered database.

The Oracle Clusterware shared files (OCR and voting disk) will be stored in an ASM disk group named +CRS which will be configured for external redundancy. The physical database files for the clustered database will be stored in an ASM disk group named +RACDB_DATA which will also be configured for external redundancy. Finally, the Fast Recovery Area (RMAN backups and archived redo log files) will be stored in a third ASM disk group named +FRA which too will be configured for external redundancy.

The following table lists the three ASM disk groups that will be created and which iSCSI volume they will contain:

Oracle Shared Drive Configuration

File Types                  ASM Diskgroup Name   iSCSI Target (short) Name   ASM Redundancy   Size   ASMLib Volume Name
OCR and Voting Disk         +CRS                 crs1                        External         2GB    ORCL:CRSVOL1
Oracle Database Files       +RACDB_DATA          data1                       External         32GB   ORCL:DATAVOL1
Oracle Fast Recovery Area   +FRA                 fra1                        External         32GB   ORCL:FRAVOL1

As shown in the table above, we will need to create a single Linux primary partition on each of the three iSCSI volumes. The fdisk command is used in Linux for creating (and removing) partitions. For each of the three iSCSI volumes, you can use the default values when creating the primary partition as the default action is to use the entire disk. You can safely ignore any warnings that may indicate the device does not contain a valid DOS partition (or Sun, SGI or OSF disklabel).

In this example, I will be running the fdisk command from racnode1 to create a single primary partition on each iSCSI target using the local device names created by udev in the previous section:

/dev/iscsi/crs1/part

/dev/iscsi/data1/part

/dev/iscsi/fra1/part

Note: Creating the single partition on each of the iSCSI volumes must only be run from one of the nodes in the Oracle RAC cluster! (i.e. racnode1)
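If you prefer not to answer the fdisk prompts by hand, the same single full-size partition can be created non-interactively by feeding the answers through a here-document. This is a convenience sketch of my own rather than part of the original article (the interactive sessions that were actually used follow below); the two blank lines accept the default first and last cylinder:

# Run from racnode1 only - scripted equivalent of the interactive fdisk sessions shown below
for disk in /dev/iscsi/crs1/part /dev/iscsi/data1/part /dev/iscsi/fra1/part; do
    echo "Partitioning ${disk}"
    fdisk ${disk} <<EOF
n
p
1


w
EOF
done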

# ---------------------------------------

[root@racnode1 ~]# fdisk /dev/iscsi/crs1/part
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1012, default 1): 1
Last cylinder or +size or +sizeM or +sizeK (1-1012, default 1012): 1012

Command (m for help): p

Disk /dev/iscsi/crs1/part: 2315 MB, 2315255808 bytes
72 heads, 62 sectors/track, 1012 cylinders
Units = cylinders of 4464 * 512 = 2285568 bytes

                Device Boot      Start         End      Blocks   Id  System
/dev/iscsi/crs1/part1                1        1012     2258753   83  Linux

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

# ---------------------------------------

[root@racnode1 ~]# fdisk /dev/iscsi/data1/part
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-33888, default 1): 1
Last cylinder or +size or +sizeM or +sizeK (1-33888, default 33888): 33888

Command (m for help): p

Disk /dev/iscsi/data1/part: 35.5 GB, 35534143488 bytes
64 heads, 32 sectors/track, 33888 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes

                 Device Boot      Start         End      Blocks   Id  System
/dev/iscsi/data1/part1                1       33888    34701296   83  Linux

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

# ---------------------------------------

[root@racnode1 ~]# fdisk /dev/iscsi/fra1/part
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-33888, default 1): 1
Last cylinder or +size or +sizeM or +sizeK (1-33888, default 33888): 33888

Command (m for help): p

Disk /dev/iscsi/fra1/part: 35.5 GB, 35534143488 bytes
64 heads, 32 sectors/track, 33888 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes

                Device Boot      Start         End      Blocks   Id  System
/dev/iscsi/fra1/part1                1       33888    34701296   83  Linux

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

Verify New Partitions

After creating all required partitions from racnode1, you should now inform the kernel of the partition changes by running the following command as the "root" user account from all remaining nodes in the Oracle RAC cluster (racnode2). Note that the mapping of iSCSI target names discovered from Openfiler to local SCSI device names will be different on each Oracle RAC node. This is not a concern and will not cause any problems since we will not be using the local SCSI device names but rather the local device names created by udev in the previous section.

From racnode2, run the following commands:

[root@racnode2 ~]# partprobe

[root@racnode2 ~]# fdisk -l

Disk /dev/sda: 160.0 GB, 160000000000 bytes

255 heads, 63 sectors/track, 19452 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System

/dev/sda1 * 1 13 104391 83 Linux

/dev/sda2 14 19452 156143767+ 8e Linux LVM

Disk /dev/sdb: 35.5 GB, 35534143488 bytes

64 heads, 32 sectors/track, 33888 cylinders

Units = cylinders of 2048 * 512 = 1048576 bytes

Device Boot Start End Blocks Id System

/dev/sdb1 1 33888 34701296 83 Linux

Disk /dev/sdc: 35.5 GB, 35534143488 bytes

64 heads, 32 sectors/track, 33888 cylinders

Units = cylinders of 2048 * 512 = 1048576 bytes

Device Boot Start End Blocks Id System

/dev/sdc1 1 33888 34701296 83 Linux

Disk /dev/sdd: 2315 MB, 2315255808 bytes

72 heads, 62 sectors/track, 1012 cylinders

Units = cylinders of 4464 * 512 = 2285568 bytes


Device Boot Start End Blocks Id System

/dev/sdd1 1 1012 2258753 83 Linux

As a final step you should run the following command on both Oracle RAC nodes to verify that udev created the new symbolic links for each new partition:

[root@racnode2 ~]# (cd /dev/disk/by-path; ls -l *openfiler* | awk '{FS=" "; print $9 " " $10 " " $11}')
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:racdb.crs1-lun-0 -> ../../sdd
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:racdb.crs1-lun-0-part1 -> ../../sdd1
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:racdb.data1-lun-0 -> ../../sdc
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:racdb.data1-lun-0-part1 -> ../../sdc1
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:racdb.fra1-lun-0 -> ../../sdb
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:racdb.fra1-lun-0-part1 -> ../../sdb1

The listing above shows that udev did indeed create new device names for each of the new partitions. We will be using these new device names when configuring the volumes for ASMLib later in this guide:

/dev/iscsi/crs1/part1

/dev/iscsi/data1/part1

/dev/iscsi/fra1/part1
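As a quick cross-check (illustrative; not in the original article), both nodes should now expose all three partitions through their udev names:

[root@racnode1 ~]# ls -l /dev/iscsi/*/part1
[root@racnode2 ~]# ls -l /dev/iscsi/*/part1

Each node should list a part1 symlink under /dev/iscsi/crs1, /dev/iscsi/data1, and /dev/iscsi/fra1, although the /dev/sd* device each symlink points to will likely differ between the two nodes.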


13. Create Job Role Separation Operating System Privileges Groups, Users, and Directories

Perform the following user, group, directory configuration, and shell limit tasks for the grid and oracle users on both Oracle RAC nodes in the cluster.

This section provides instructions on how to create the operating system users and groups needed to install all Oracle software using a Job Role Separation configuration. The commands in this section should be performed on both Oracle RAC nodes as root to create these groups, users, and directories. Note that the group and user IDs must be identical on both Oracle RAC nodes in the cluster. Check to make sure that the group and user IDs you want to use are available on each cluster member node, and confirm that the primary group for each grid infrastructure for a cluster installation owner has the same name and group ID, which for the purpose of this guide is oinstall (GID 1000).

A Job Role Separation privileges configuration of Oracle is a configuration with operating system groups and users that divide administrative access privileges to the Oracle grid infrastructure installation from other administrative privileges, users, and groups associated with other Oracle installations (e.g. the Oracle Database software). Administrative privileges access is granted by membership in separate operating system groups, and installation privileges are granted by using different installation owners for each Oracle installation.

One OS user will be created to own each Oracle software product: "grid" for the Oracle grid infrastructure owner and "oracle" for the Oracle RAC software. Throughout this article, the user created to own the Oracle grid infrastructure binaries is called the grid user. This user will own both the Oracle Clusterware and Oracle Automatic Storage Management binaries. The user created to own the Oracle Database binaries (Oracle RAC) will be called the oracle user. Both Oracle software owners must have the Oracle Inventory group (oinstall) as their primary group, so that each Oracle software installation owner can write to the central inventory (oraInventory), and so that OCR and Oracle Clusterware resource permissions are set correctly. The Oracle RAC software owner must also have the OSDBA group and the optional OSOPER group as secondary groups.

This type of configuration is optional but highly recommended by Oracle for organizations that need to restrict user access to Oracle software by responsibility areas for different administrator users. For example, a small organization could simply allocate operating system user privileges so that one administrative user and one group are used for operating system authentication for all system privileges on the storage and database tiers. With this type of configuration, you can designate the oracle user to be the sole installation owner for all Oracle software (Grid infrastructure and the Oracle database software), and designate oinstall to be the single group whose members are granted all system privileges for Oracle Clusterware, Automatic Storage Management, and all Oracle Databases on the servers, as well as all privileges as installation owners. Other organizations, however, have specialized system roles that are responsible for installing the Oracle software, such as system administrators, network administrators, or storage administrators. These different administrator users can configure a system in preparation for an Oracle grid infrastructure for a cluster installation, and complete all configuration tasks that require operating system root privileges. When grid infrastructure installation and configuration is completed successfully, a system administrator should only need to provide configuration information and grant access to the database administrator to run scripts as root during an Oracle RAC installation.

The following O/S groups will be created:

Description                                 OS Group Name   OS Users Assigned to this Group   Oracle Privilege   Oracle Group Name
Oracle Inventory and Software Owner         oinstall        grid, oracle
Oracle Automatic Storage Management Group   asmadmin        grid                              SYSASM             OSASM
ASM Database Administrator Group            asmdba          grid, oracle                      SYSDBA for ASM     OSDBA for ASM
ASM Operator Group                          asmoper         grid                              SYSOPER for ASM    OSOPER for ASM
Database Administrator                      dba             oracle                            SYSDBA             OSDBA
Database Operator                           oper            oracle                            SYSOPER            OSOPER

Oracle Inventory Group (typically oinstall)

Members of the OINSTALL group are considered the "owners" of the Oracle software and are granted privileges to write to the Oracle central inventory (oraInventory). When you install Oracle software on a Linux system for the first time, OUI creates the /etc/oraInst.loc file. This file identifies the name of the Oracle Inventory group (by default, oinstall) and the path of the Oracle Central Inventory directory.

By default, if an oraInventory group does not exist, then the installer lists the primary group of the installation owner for the grid infrastructure for a cluster as the oraInventory group. Ensure that this group is available as a primary group for all planned Oracle software installation owners. For the purpose of this guide, the grid and oracle installation owners must be configured with oinstall as their primary group.

The Oracle Automatic Storage Management Group (typically asmadmin)

This is a required group. Create this group as a separate group if you want to have separate administration privilege groups for Oracle ASM and Oracle Database administrators. In Oracle documentation, the operating system group whose members are granted these privileges is called the OSASM group, and in code examples, where there is a group specifically created to grant this privilege, it is referred to as asmadmin.

Members of the OSASM group can use SQL to connect to an Oracle ASM instance as SYSASM using operating system authentication. The SYSASM privilege that was introduced in Oracle ASM 11g release 1 (11.1) is now fully separated from the SYSDBA privilege in Oracle ASM 11g Release 2 (11.2). SYSASM privileges no longer provide access privileges on an RDBMS instance. Providing system privileges for the storage tier using the SYSASM privilege instead of the SYSDBA privilege provides a clearer division of responsibility between ASM administration and database administration, and helps to prevent different databases using the same storage from accidentally overwriting each other's files. The SYSASM privileges permit mounting and dismounting disk groups, and other storage administration tasks.
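As a concrete illustration (this connection will only work after the grid infrastructure has been installed later in this guide), a member of the asmadmin group such as the grid user can connect to the local ASM instance with nothing more than operating system authentication:

[grid@racnode1 ~]$ sqlplus / as sysasm

A user who is not a member of the OSASM group attempting the same connection would be rejected with an insufficient privileges error.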

The ASM Database Administrator group (OSDBA for ASM, typically asmdba)


Members of the ASM Database Administrator group (OSDBA for ASM) are granted a subset of the SYSASM privileges: read and write access to the files managed by Oracle ASM. The grid infrastructure installation owner (grid) and all Oracle Database software owners (oracle) must be members of this group, and all users with OSDBA membership on databases that have access to the files managed by Oracle ASM must be members of the OSDBA group for ASM.

Members of the ASM Operator Group (OSOPER for ASM, typically asmoper)

This is an optional group. Create this group if you want a separate group of operating system users to have a limited set of Oracle ASM instance administrative privileges (the SYSOPER for ASM privilege), including starting up and stopping the Oracle ASM instance. By default, members of the OSASM group also have all privileges granted by the SYSOPER for ASM privilege.

To use the ASM Operator group to create an ASM administrator group with fewer privileges than the default asmadmin group, you must choose the Advanced installation type to install the Grid infrastructure software. In this case, OUI prompts you to specify the name of this group. In this guide, this group is asmoper.

If you want to have an OSOPER for ASM group, then the grid infrastructure for a cluster software owner (grid) must be a member of this group.

Database Administrator (OSDBA, typically dba)

Members of the OSDBA group can use SQL to connect to an Oracle instance as SYSDBA using operating system authentication. Members of this group can perform critical database administration tasks, such as creating the database and performing instance startup and shutdown. The default name for this group is dba. The SYSDBA system privilege allows access to a database instance even when the database is not open. Control of this privilege is totally outside of the database itself.

The SYSDBA system privilege should not be confused with the database role DBA. The DBA role does not include the SYSDBA or SYSOPER system privileges.

Database Operator (OSOPER, typically oper)

Members of the OSOPER group can use SQL to connect to an Oracle instance as SYSOPER using operating system authentication. Members of this optional group have a limited set of database administrative privileges such as managing and running backups. The default name for this group is oper. The SYSOPER system privilege allows access to a database instance even when the database is not open. Control of this privilege is totally outside of the database itself. To use this group, choose the Advanced installation type to install the Oracle database software.

Create Groups and User for Grid Infrastructure

Let's start this section by creating the recommended OS groups and user for Grid Infrastructure on both Oracle RAC nodes:

[root@racnode1 ~]# groupadd -g 1000 oinstall
[root@racnode1 ~]# groupadd -g 1200 asmadmin
[root@racnode1 ~]# groupadd -g 1201 asmdba
[root@racnode1 ~]# groupadd -g 1202 asmoper
[root@racnode1 ~]# useradd -m -u 1100 -g oinstall -G asmadmin,asmdba,asmoper -d /home/grid -s /bin/bash -c "Grid Infrastructure Owner" grid

[root@racnode1 ~]# id grid
uid=1100(grid) gid=1000(oinstall) groups=1000(oinstall),1200(asmadmin),1201(asmdba),1202(asmoper)

Set the password for the grid account:

[root@racnode1 ~]# passwd grid
Changing password for user grid.
New UNIX password: xxxxxxxxxxx
Retype new UNIX password: xxxxxxxxxxx
passwd: all authentication tokens updated successfully.

Create Login Script for the grid User Account

Log in to both Oracle RAC nodes as the grid user account and create the following login script ( .bash_profile):

Note: When setting the Oracle environment variables for each Oracle RAC node, make certain to assign each RAC node a unique Oracle SID. For this example, I used:

racnode1 : ORACLE_SID=+ASM1

racnode2 : ORACLE_SID=+ASM2

[root@racnode1 ~]# su - grid

# ---------------------------------------------------

# .bash_profile

# ---------------------------------------------------

# OS User: grid

# Application: Oracle Grid Infrastructure

# Version: Oracle 11g release 2

# ---------------------------------------------------


# Get the aliases and functions

if [ -f ~/.bashrc ]; then

. ~/.bashrc

fi

alias ls="ls -FA"

# ---------------------------------------------------

# ORACLE_SID

# ---------------------------------------------------

# Specifies the Oracle system identifier (SID)

# for the Automatic Storage Management (ASM)instance

# running on this node.

#    Each RAC node must have a unique ORACLE_SID.
#    (i.e. +ASM1, +ASM2,...)

# ---------------------------------------------------

ORACLE_SID=+ASM1; export ORACLE_SID

# ---------------------------------------------------

# JAVA_HOME

# ---------------------------------------------------

# Specifies the directory of the Java SDK and Runtime

# Environment.

# ---------------------------------------------------

JAVA_HOME=/usr/local/java; export JAVA_HOME

# ---------------------------------------------------

# ORACLE_BASE

# ---------------------------------------------------

# Specifies the base of the Oracle directory structure

# for Optimal Flexible Architecture (OFA) compliant

# installations. The Oracle base directory for the

# grid installation owner is the location where

# diagnostic and administrative logs, and other logs

# associated with Oracle ASM and Oracle Clusterware

# are stored.

# ---------------------------------------------------

ORACLE_BASE=/u01/app/grid; export ORACLE_BASE

# ---------------------------------------------------

# ORACLE_HOME

# ---------------------------------------------------

# Specifies the directory containing the Oracle

# Grid Infrastructure software. For grid

# infrastructure for a cluster installations, the Grid

# home must not be placed under one of the Oracle base

# directories, or under Oracle home directories of

# Oracle Database installation owners, or in the home

# directory of an installation owner. During

# installation, ownership of the path to the Grid

# home is changed to root. This change causes

# permission errors for other installations.

# ---------------------------------------------------

ORACLE_HOME=/u01/app/11.2.0/grid; export ORACLE_HOME

# ---------------------------------------------------

# ORACLE_PATH

# ---------------------------------------------------

# Specifies the search path for files used by Oracle

# applications such as SQL*Plus. If the full path to

# the file is not specified, or if the file is not

# in the current directory, the Oracle application

# uses ORACLE_PATH to locate the file.

# This variable is used by SQL*Plus, Forms and Menu.

# ---------------------------------------------------

ORACLE_PATH=/u01/app/oracle/common/oracle/sql; export ORACLE_PATH

# ---------------------------------------------------

# SQLPATH

# ---------------------------------------------------

# Specifies the directory or list of directories that

# SQL*Plus searches for a login.sql file.

# ---------------------------------------------------

# SQLPATH=/u01/app/common/oracle/sql; export SQLPATH

# ---------------------------------------------------

# ORACLE_TERM

# ---------------------------------------------------

# Defines a terminal definition. If not set, it

# defaults to the value of your TERM environment

# variable. Used by all character mode products.

# ---------------------------------------------------

ORACLE_TERM=xterm; export ORACLE_TERM

# ---------------------------------------------------

# NLS_DATE_FORMAT

# ---------------------------------------------------

# Specifies the default date format to use with the

# TO_CHAR and TO_DATE functions. The default value of

# this parameter is determined by NLS_TERRITORY. The

# value of this parameter can be any valid date

# format mask, and the value must be surrounded by

# double quotation marks. For example:

#

# NLS_DATE_FORMAT = "MM/DD/YYYY"

#

# ---------------------------------------------------


NLS_DATE_FORMAT="DD-MON-YYYY HH24:MI:SS"; export NLS_DATE_FORMAT

# ---------------------------------------------------

# TNS_ADMIN

# ---------------------------------------------------

# Specifies the directory containing the Oracle Net

# Services configuration files like listener.ora,

# tnsnames.ora, and sqlnet.ora.

# ---------------------------------------------------

TNS_ADMIN=$ORACLE_HOME/network/admin; export TNS_ADMIN

# ---------------------------------------------------

# ORA_NLS11

# ---------------------------------------------------

# Specifies the directory where the language,

# territory, character set, and linguistic definition

# files are stored.

# ---------------------------------------------------

ORA_NLS11=$ORACLE_HOME/nls/data; export ORA_NLS11

# ---------------------------------------------------

# PATH

# ---------------------------------------------------

# Used by the shell to locate executable programs;

# must include the $ORACLE_HOME/bin directory.

# ---------------------------------------------------

PATH=.:${JAVA_HOME}/bin:${PATH}:$HOME/bin:$ORACLE_HOME/bin

PATH=${PATH}:/usr/bin:/bin:/usr/bin/X11:/usr/local/bin

PATH=${PATH}:/u01/app/common/oracle/bin

export PATH

# ---------------------------------------------------

# LD_LIBRARY_PATH

# ---------------------------------------------------

# Specifies the list of directories that the shared

# library loader searches to locate shared object

# libraries at runtime.

# ---------------------------------------------------

LD_LIBRARY_PATH=$ORACLE_HOME/lib

LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:$ORACLE_HOME/oracm/lib

LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/lib:/usr/lib:/usr/local/lib

export LD_LIBRARY_PATH

# ---------------------------------------------------

# CLASSPATH

# ---------------------------------------------------

# Specifies the directory or list of directories that

# contain compiled Java classes.

# ---------------------------------------------------

CLASSPATH=$ORACLE_HOME/JRE

CLASSPATH=${CLASSPATH}:$ORACLE_HOME/jlib

CLASSPATH=${CLASSPATH}:$ORACLE_HOME/rdbms/jlib

CLASSPATH=${CLASSPATH}:$ORACLE_HOME/network/jlib

export CLASSPATH

# ---------------------------------------------------

# THREADS_FLAG

# ---------------------------------------------------

# All the tools in the JDK use green threads as a

# default. To specify that native threads should be

# used, set the THREADS_FLAG environment variable to

# "native". You can revert to the use of green

# threads by setting THREADS_FLAG to the value

# "green".

# ---------------------------------------------------

THREADS_FLAG=native; export THREADS_FLAG

# ---------------------------------------------------

# TEMP, TMP, and TMPDIR

# ---------------------------------------------------

# Specify the default directories for temporary

# files; if set, tools that create temporary files

# create them in one of these directories.

# ---------------------------------------------------

export TEMP=/tmp

export TMPDIR=/tmp

# ---------------------------------------------------

# UMASK

# ---------------------------------------------------

# Set the default file mode creation mask

# (umask) to 022 to ensure that the user performing

# the Oracle software installation creates files

# with 644 permissions.

# ---------------------------------------------------

umask 022
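With the login script in place on both nodes, a quick spot check (illustrative only; this command and its output are not part of the original article) confirms the environment is picked up at login. On racnode2, the ORACLE_SID should of course come back as +ASM2:

[root@racnode1 ~]# su - grid -c "echo \$ORACLE_SID : \$ORACLE_HOME"
+ASM1 : /u01/app/11.2.0/grid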

Create Groups and User for Oracle Database Software

Next, create the recommended OS groups and user for the Oracle database software on both Oracle RAC nodes:

[root@racnode1 ~]# groupadd -g 1300 dba
[root@racnode1 ~]# groupadd -g 1301 oper
[root@racnode1 ~]# useradd -m -u 1101 -g oinstall -G dba,oper,asmdba -d /home/oracle -s /bin/bash -c "Oracle Software Owner" oracle

[root@racnode1 ~]# id oracle
uid=1101(oracle) gid=1000(oinstall) groups=1000(oinstall),1201(asmdba),1300(dba),1301(oper)

Set the password for the oracle account:

[root@racnode1 ~]# passwd oracle
Changing password for user oracle.
New UNIX password: xxxxxxxxxxx
Retype new UNIX password: xxxxxxxxxxx
passwd: all authentication tokens updated successfully.

Create Login Script for the oracle User Account

Log in to both Oracle RAC nodes as the oracle user account and create the following login script ( .bash_profile):

Note: When setting the Oracle environment variables for each Oracle RAC node, make certain to assign each RAC node a unique Oracle SID. For this example, I used:

racnode1 : ORACLE_SID=racdb1

racnode2 : ORACLE_SID=racdb2

[root@racnode1 ~]# su - oracle

# ---------------------------------------------------

# .bash_profile

# ---------------------------------------------------

# OS User: oracle

# Application: Oracle Database Software Owner

# Version: Oracle 11g release 2

# ---------------------------------------------------

# Get the aliases and functions

if [ -f ~/.bashrc ]; then

. ~/.bashrc

fi

alias ls="ls -FA"

# ---------------------------------------------------

# ORACLE_SID

# ---------------------------------------------------

# Specifies the Oracle system identifier (SID) for

# the Oracle instance running on this node.

#    Each RAC node must have a unique ORACLE_SID.
#    (i.e. racdb1, racdb2,...)

# ---------------------------------------------------

ORACLE_SID=racdb1; export ORACLE_SID

# ---------------------------------------------------

# ORACLE_UNQNAME

# ---------------------------------------------------

# In previous releases of Oracle Database, you were

# required to set environment variables for

# ORACLE_HOME and ORACLE_SID to start, stop, and

# check the status of Enterprise Manager. With

# Oracle Database 11g release 2 (11.2) and later, you

# need to set the environment variables ORACLE_HOME

# and ORACLE_UNQNAME to use Enterprise Manager.

# Set ORACLE_UNQNAME equal to the database unique

# name.

# ---------------------------------------------------

ORACLE_UNQNAME=racdb; export ORACLE_UNQNAME

# ---------------------------------------------------

# JAVA_HOME

# ---------------------------------------------------

# Specifies the directory of the Java SDK and Runtime

# Environment.

# ---------------------------------------------------

JAVA_HOME=/usr/local/java; export JAVA_HOME

# ---------------------------------------------------

# ORACLE_BASE

# ---------------------------------------------------

# Specifies the base of the Oracle directory structure

# for Optimal Flexible Architecture (OFA) compliant

# database software installations.

# ---------------------------------------------------

ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE

# ---------------------------------------------------

# ORACLE_HOME

# ---------------------------------------------------

# Specifies the directory containing the Oracle

# Database software.

# ---------------------------------------------------

ORACLE_HOME=$ORACLE_BASE/product/11.2.0/dbhome_1; export ORACLE_HOME


# ---------------------------------------------------

# ORACLE_PATH

# ---------------------------------------------------

# Specifies the search path for files used by Oracle

# applications such as SQL*Plus. If the full path to

# the file is not specified, or if the file is not

# in the current directory, the Oracle application

# uses ORACLE_PATH to locate the file.

# This variable is used by SQL*Plus, Forms and Menu.

# ---------------------------------------------------

ORACLE_PATH=/u01/app/common/oracle/sql; export ORACLE_PATH

# ---------------------------------------------------

# SQLPATH

# ---------------------------------------------------

# Specifies the directory or list of directories that

# SQL*Plus searches for a login.sql file.

# ---------------------------------------------------

# SQLPATH=/u01/app/common/oracle/sql; export SQLPATH

# ---------------------------------------------------

# ORACLE_TERM

# ---------------------------------------------------

# Defines a terminal definition. If not set, it

# defaults to the value of your TERM environment

# variable. Used by all character mode products.

# ---------------------------------------------------

ORACLE_TERM=xterm; export ORACLE_TERM

# ---------------------------------------------------

# NLS_DATE_FORMAT

# ---------------------------------------------------

# Specifies the default date format to use with the

# TO_CHAR and TO_DATE functions. The default value of

# this parameter is determined by NLS_TERRITORY. The

# value of this parameter can be any valid date

# format mask, and the value must be surrounded by

# double quotation marks. For example:

#

# NLS_DATE_FORMAT = "MM/DD/YYYY"

#

# ---------------------------------------------------

NLS_DATE_FORMAT="DD-MON-YYYY HH24:MI:SS"; export NLS_DATE_FORMAT

# ---------------------------------------------------

# TNS_ADMIN

# ---------------------------------------------------

# Specifies the directory containing the Oracle Net

# Services configuration files like listener.ora,

# tnsnames.ora, and sqlnet.ora.

# ---------------------------------------------------

TNS_ADMIN=$ORACLE_HOME/network/admin; export TNS_ADMIN

# ---------------------------------------------------

# ORA_NLS11

# ---------------------------------------------------

# Specifies the directory where the language,

# territory, character set, and linguistic definition

# files are stored.

# ---------------------------------------------------

ORA_NLS11=$ORACLE_HOME/nls/data; export ORA_NLS11

# ---------------------------------------------------

# PATH

# ---------------------------------------------------

# Used by the shell to locate executable programs;

# must include the $ORACLE_HOME/bin directory.

# ---------------------------------------------------

PATH=.:${JAVA_HOME}/bin:${PATH}:$HOME/bin:$ORACLE_HOME/bin

PATH=${PATH}:/usr/bin:/bin:/usr/bin/X11:/usr/local/bin

PATH=${PATH}:/u01/app/common/oracle/bin

export PATH

# ---------------------------------------------------

# LD_LIBRARY_PATH

# ---------------------------------------------------

# Specifies the list of directories that the shared

# library loader searches to locate shared object

# libraries at runtime.

# ---------------------------------------------------

LD_LIBRARY_PATH=$ORACLE_HOME/lib

LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:$ORACLE_HOME/oracm/lib

LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/lib:/usr/lib:/usr/local/lib

export LD_LIBRARY_PATH

# ---------------------------------------------------

# CLASSPATH

# ---------------------------------------------------

# Specifies the directory or list of directories that

# contain compiled Java classes.

# ---------------------------------------------------

CLASSPATH=$ORACLE_HOME/JRE

CLASSPATH=${CLASSPATH}:$ORACLE_HOME/jlib

CLASSPATH=${CLASSPATH}:$ORACLE_HOME/rdbms/jlib

CLASSPATH=${CLASSPATH}:$ORACLE_HOME/network/jlib

export CLASSPATH

# ---------------------------------------------------

# THREADS_FLAG

# ---------------------------------------------------

# All the tools in the JDK use green threads as a


# default. To specify that native threads should be

# used, set the THREADS_FLAG environment variable to

# "native". You can revert to the use of green

# threads by setting THREADS_FLAG to the value

# "green".

# ---------------------------------------------------

THREADS_FLAG=native; export THREADS_FLAG

# ---------------------------------------------------

# TEMP, TMP, and TMPDIR

# ---------------------------------------------------

# Specify the default directories for temporary

# files; if set, tools that create temporary files

# create them in one of these directories.

# ---------------------------------------------------

export TEMP=/tmp

export TMPDIR=/tmp

# ---------------------------------------------------

# UMASK

# ---------------------------------------------------

# Set the default file mode creation mask

# (umask) to 022 to ensure that the user performing

# the Oracle software installation creates files

# with 644 permissions.

# ---------------------------------------------------

umask 022

Verify That the User nobody Exists

Before installing the software, complete the following procedure to verify that the user nobody exists on both Oracle RAC nodes:

1. To determine if the user exists, enter the following command:

   # id nobody
   uid=99(nobody) gid=99(nobody) groups=99(nobody)

   If this command displays information about the nobody user, then you do not have to create that user.

2. If the user nobody does not exist, then enter the following command to create it:

   # /usr/sbin/useradd nobody

3. Repeat this procedure on all the other Oracle RAC nodes in the cluster.

Create the Oracle Base Directory Path

The final step is to configure an Oracle base path compliant with an Optimal Flexible Architecture (OFA) structure and correct permissions. This will need to be performed on both Oracle RAC nodes in the cluster as root.

This guide assumes that the /u01 directory is being created in the root file system. Please note that this is being done for the sake of brevity and is not recommended as a general practice. Normally, the /u01 directory would be provisioned as a separate file system with either hardware or software mirroring configured.

[root@racnode1 ~]# mkdir -p /u01/app/grid
[root@racnode1 ~]# mkdir -p /u01/app/11.2.0/grid
[root@racnode1 ~]# chown -R grid:oinstall /u01
[root@racnode1 ~]# mkdir -p /u01/app/oracle
[root@racnode1 ~]# chown oracle:oinstall /u01/app/oracle
[root@racnode1 ~]# chmod -R 775 /u01
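Given the commands above, it is worth a quick look (purely illustrative; this check is not in the original article) to confirm the ownership and permissions before continuing:

[root@racnode1 ~]# ls -ld /u01 /u01/app /u01/app/grid /u01/app/11.2.0/grid /u01/app/oracle

Every directory should show mode 775 (drwxrwxr-x); /u01/app/oracle should be owned by oracle:oinstall and the remaining directories by grid:oinstall.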

At the end of this section, you should have the following on both Oracle RAC nodes:

An Oracle central inventory group, or oraInventory group (oinstall), whose members have the central inventory group as their primary group and are granted permissions to write to the oraInventory directory.

A separate OSASM group (asmadmin), whose members are granted the SYSASM privilege to administer Oracle Clusterware and Oracle ASM.

A separate OSDBA for ASM group (asmdba), whose members include grid and oracle, and who are granted access to Oracle ASM.

A separate OSOPER for ASM group (asmoper), whose members include grid, and who are granted limited Oracle ASM administrator privileges, including the permissions to start and stop the Oracle ASM instance.

An Oracle grid installation for a cluster owner (grid), with the oraInventory group as its primary group, and with the OSASM (asmadmin), OSDBA for ASM (asmdba) and OSOPER for ASM (asmoper) groups as secondary groups.

A separate OSDBA group (dba), whose members are granted the SYSDBA privilege to administer the Oracle Database.

A separate OSOPER group (oper), whose members include oracle, and who are granted limited Oracle database administrator privileges.

An Oracle Database software owner (oracle), with the oraInventory group as its primary group, and with the OSDBA (dba), OSOPER (oper), and the OSDBA for ASM group (asmdba) as its secondary groups.

An OFA-compliant mount point /u01 owned by grid:oinstall before installation.

An Oracle base for the grid /u01/app/grid owned by grid:oinstall with 775 permissions, and changed during the installation process to 755 permissions. The grid installation owner Oracle base directory is the location where Oracle ASM diagnostic and administrative log files are placed.

A Grid home /u01/app/11.2.0/grid owned by grid:oinstall with 775 (drwxrwxr-x) permissions. These permissions are required for installation, and are changed during the installation process to root:oinstall with 755 permissions (drwxr-xr-x).

During installation, OUI creates the Oracle Inventory directory in the path /u01/app/oraInventory. This path remains owned by grid:oinstall, to enable other Oracle software owners to write to the central inventory.

An Oracle base /u01/app/oracle owned by oracle:oinstall with 775 permissions.

Set Resource Limits for the Oracle Software Installation Users

To improve the performance of the software on Linux systems, you must increase the following resource limits for the Oracle software owner users (grid, oracle):

Shell Limit                                               Item in limits.conf   Hard Limit
Maximum number of open file descriptors                   nofile                65536
Maximum number of processes available to a single user    nproc                 16384
Maximum size of the stack segment of the process          stack                 10240

To make these changes, run the following as root:

1. On each Oracle RAC node, add the following lines to the /etc/security/limits.conf file (the following example shows the software account owners oracle and grid):

   [root@racnode1 ~]# cat >> /etc/security/limits.conf <<EOF grid soft nproc 2047 grid hard nproc 16384 grid soft nofile 1024 grid har

2. On each Oracle RAC node, add or edit the following line in the /etc/pam.d/login file, if it does not already exist:

   [root@racnode1 ~]# cat >> /etc/pam.d/login <<EOF
   session required pam_limits.so
   EOF

3. Depending on your shell environment, make the following changes to the default shell startup file, to change the ulimit setting for all Oracle installation owners (note that these examples show the users oracle and grid):

   For the Bourne, Bash, or Korn shell, add the following lines to the /etc/profile file by running the following command:

   [root@racnode1 ~]# cat >> /etc/profile <<EOF if [ \$USER = "oracle" ] || [ \$USER = "grid" ]; then if [ \$SHELL = "/bi

   For the C shell (csh or tcsh), add the following lines to the /etc/csh.login file by running the following command:

   [root@racnode1 ~]# cat >> /etc/csh.login <<EOF if ( \$USER == "oracle" || \$USER == "grid" ) then limit maxproc 16384
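The cat commands for limits.conf, /etc/profile, and /etc/csh.login above are truncated at the right margin in this capture of the article. The following reconstruction is based on Oracle's standard 11g Release 2 preinstallation recommendations rather than on the original text, so treat the exact values as an assumption and adjust them to match your environment:

[root@racnode1 ~]# cat >> /etc/security/limits.conf <<EOF
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
EOF

[root@racnode1 ~]# cat >> /etc/profile <<EOF
if [ \$USER = "oracle" ] || [ \$USER = "grid" ]; then
    if [ \$SHELL = "/bin/ksh" ]; then
        ulimit -p 16384
        ulimit -n 65536
    else
        ulimit -u 16384 -n 65536
    fi
    umask 022
fi
EOF

[root@racnode1 ~]# cat >> /etc/csh.login <<EOF
if ( \$USER == "oracle" || \$USER == "grid" ) then
    limit maxproc 16384
    limit descriptors 65536
endif
EOF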


14. Logging In to a Remote System Using X Terminal

This guide requires access to the console of all machines (Oracle RAC nodes and Openfiler) in order to install the operating system and perform several of the configuration tasks. When managing a very small number of servers, it might make sense to connect each server to its own monitor, keyboard, and mouse in order to access its console. However, as the number of servers to manage increases, this solution becomes impractical. A more practical solution is to configure a dedicated computer with a single monitor, keyboard, and mouse that has direct access to the console of each machine. This solution is made possible using a Keyboard, Video, Mouse switch, better known as a KVM Switch.

After installing the Linux operating system, there are several applications which are needed to install and configure Oracle RAC which use a GraphicalUser Interface (GUI) and require the use of an X11 display server. The most notable of these GUI applications (or better known as an X application) is theOracle Universal Installer (OUI) although others like the Virtual IP Configuration Assistant (VIPCA) also require use of an X11 display server.

Given the fact that I created this article on a system that makes use of a KVM Switch, I am able to toggle to each node and rely on the native X11 displayserver for Linux in order to display X applications.

If you are not logged directly on to the graphical console of a node but rather you are using a remote client like SSH, PuTTY, or Telnet to connect to thenode, any X application will require an X11 display server installed on the client. For example, if you are making a terminal remote connection toracnode1 from a Windows workstation, you would need to install an X11 display server on that Windows client ( Xming for example). If you intend to

install the Oracle grid infrastructure and Oracle RAC software from a Windows workstation or other system with an X11 display server installed, thenperform the following actions:

1. Start the X11 display server software on the client workstation.

2. Configure the security settings of the X server software to permit remote hosts to display X applications on the local system.

3. From the client workstation, log in to the server where you want to install the software as the Oracle grid infrastructure for a cluster software owner (grid) or the Oracle RAC software owner (oracle).

4. As the software owner (grid, oracle), set the DISPLAY environment:

[root@racnode1 ~]# su - grid

[grid@racnode1 ~]$ DISPLAY=<your local workstation>:0.0
[grid@racnode1 ~]$ export DISPLAY

[grid@racnode1 ~]$ # TEST X CONFIGURATION BY RUNNING xterm
[grid@racnode1 ~]$ xterm &

Figure 16: Test X11 Display Server on Windows; Run xterm from Node 1 (racnode1)

15. Configure the Linux Servers for Oracle

Perform the following configuration procedures on both Oracle RAC nodes in the cluster.

The kernel parameters discussed in this section will need to be defined on both Oracle RAC nodes in the cluster every time the machine is booted. This section provides information about setting those kernel parameters required for Oracle. Instructions for placing them in a startup script (/etc/sysctl.conf) are included in Section 17 ("All Startup Commands for Both Oracle RAC Nodes").

Overview

This section focuses on configuring both Oracle RAC Linux servers - getting each one prepared for the Oracle 11g release 2 grid infrastructure and Oracle RAC 11g release 2 installations on the Oracle Enterprise Linux 5 platform. This includes verifying enough memory and swap space, setting shared memory and semaphores, setting the maximum number of file handles, setting the IP local port range, and finally how to activate all kernel parameters for the system.

There are several different ways to configure (set) these parameters. For the purpose of this article, I will be making all changes permanent (through reboots) by placing all values in the /etc/sysctl.conf file.

Memory and Swap Space Considerations

The minimum required RAM on RHEL/OEL is 1.5 GB for grid infrastructure for a cluster, or 2.5 GB for grid infrastructure for a cluster and Oracle RAC. In this guide, each Oracle RAC node will be hosting Oracle grid infrastructure and Oracle RAC and will therefore require at least 2.5 GB in each server. Each of the Oracle RAC nodes used in this article is equipped with 4 GB of physical RAM.

The minimum required swap space is 1.5 GB. Oracle recommends that you set swap space to 1.5 times the amount of RAM for systems with 2 GB of RAM or less. For systems with 2 GB to 16 GB RAM, use swap space equal to RAM. For systems with more than 16 GB RAM, use 16 GB of RAM for swap space.
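To make those sizing rules concrete, the following short script (not part of the original procedure; a sketch only, with rounding to whole gigabytes) reads MemTotal from /proc/meminfo and prints the recommended swap size:

#!/bin/bash
# Sketch: print the recommended swap size for this server based on the
# rules above (1.5 x RAM at 2 GB or less, RAM up to 16 GB, 16 GB beyond).
ram_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
ram_gb=$(( ram_kb / 1024 / 1024 ))

if [ "$ram_gb" -le 2 ]; then
    echo "Recommended swap: $(( ram_kb * 3 / 2 / 1024 )) MB (1.5 x RAM)"
elif [ "$ram_gb" -le 16 ]; then
    echo "Recommended swap: $(( ram_kb / 1024 )) MB (equal to RAM)"
else
    echo "Recommended swap: 16384 MB (16 GB cap)"
fi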

To check the amount of memory you have, type:

[root@racnode1 ~]# cat /proc/meminfo | grep MemTotal
MemTotal:       4038564 kB

To check the amount of swap you have allocated, type:

[root@racnode1 ~]# cat /proc/meminfo | grep SwapTotal
SwapTotal:      6094840 kB

If you have less than 4GB of memory (between your RAM and SWAP), you can add temporary swap space by creating a temporary swap file. This way you do not have to use a raw device or, even more drastic, rebuild your system.

As root, make a file that will act as additional swap space, let's say about 500MB:

# dd if=/dev/zero of=tempswap bs=1k count=500000

Now we should change the file permissions:

# chmod 600 tempswap

Finally we format the file as swap and add it to the swap space (mkswap initializes the file directly, so no other formatting step is needed):

# mkswap tempswap

# swapon tempswap
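If you want the temporary swap file to survive a reboot (not required for this guide), one option is to add an entry for it to /etc/fstab; a minimal sketch, assuming the file was created in root's home directory (/root):

# Optional: make the temporary swap file permanent across reboots.
# Assumes the file lives at /root/tempswap.
echo "/root/tempswap  swap  swap  defaults  0 0" >> /etc/fstab
swapon -a      # activate everything listed in /etc/fstab
swapon -s      # verify the swap file is now in use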

Configure Kernel Parameters

The kernel parameters presented in this section are recommended values only as documented by Oracle. For production database systems, Oracle recommends that you tune these values to optimize the performance of the system.

On both Oracle RAC nodes, verify that the kernel parameters described in this section are set to values greater than or equal to the recommended values. Also note that when setting the four semaphore values, all four values need to be entered on one line.

Oracle Database 11g release 2 on RHEL/OEL 5 requires the kernel parameter settings shown below. The values given are minimums, so if your system uses a larger value, do not change it.

kernel.shmmax = 4294967295

kernel.shmall = 2097152

kernel.shmmni = 4096

kernel.sem = 250 32000 100 128

fs.file-max = 6815744

net.ipv4.ip_local_port_range = 9000 65500

net.core.rmem_default=262144

net.core.rmem_max=4194304

net.core.wmem_default=262144

net.core.wmem_max=1048576

fs.aio-max-nr=1048576

RHEL/OEL 5 already comes configured with default values defined for the following kernel parameters:

kernel.shmall

kernel.shmmax

Use the default values if they are the same or larger than the required values.

This article assumes a fresh new install of Oracle Enterprise Linux 5 and as such, many of the required kernel parameters are already set (see above). This being the case, you can simply copy and paste the following to both Oracle RAC nodes while logged in as root:

[root@racnode1 ~]# cat >> /etc/sysctl.conf <<EOF
# Controls the maximum number of shared memory segments system wide
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 6815744
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default=262144
net.core.rmem_max=4194304
net.core.wmem_default=262144
net.core.wmem_max=1048576
fs.aio-max-nr=1048576
EOF

Activate All Kernel Parameters for the System

The above command persisted the required kernel parameters through reboots by inserting them in the /etc/sysctl.conf startup file. Linux allows modification of these kernel parameters on the current system while it is up and running, so there's no need to reboot the system after making kernel parameter changes. To activate the new kernel parameter values for the currently running system, run the following as root on both Oracle RAC nodes in the cluster:

[root@racnode1 ~]# sysctl -p
net.ipv4.ip_forward = 0

net.ipv4.conf.default.rp_filter = 1

net.ipv4.conf.default.accept_source_route = 0

kernel.sysrq = 0

kernel.core_uses_pid = 1

net.ipv4.tcp_syncookies = 1

kernel.msgmnb = 65536

kernel.msgmax = 65536

kernel.shmmax = 68719476736

kernel.shmall = 4294967296

kernel.shmmni = 4096

kernel.sem = 250 32000 100 128

fs.file-max = 6815744

net.ipv4.ip_local_port_range = 9000 65500

net.core.rmem_default = 262144

net.core.rmem_max = 4194304

net.core.wmem_default = 262144

net.core.wmem_max = 1048576

fs.aio-max-nr = 1048576

Verify the new kernel parameter values by running the following on both Oracle RAC nodes in the cluster:

[root@racnode1 ~]# /sbin/sysctl -a | grep shm
vm.hugetlb_shm_group = 0
kernel.shmmni = 4096
kernel.shmall = 4294967296
kernel.shmmax = 68719476736

[root@racnode1 ~]# /sbin/sysctl -a | grep sem
kernel.sem = 250 32000 100 128

[root@racnode1 ~]# /sbin/sysctl -a | grep file-max
fs.file-max = 6815744

[root@racnode1 ~]# /sbin/sysctl -a | grep ip_local_port_range
net.ipv4.ip_local_port_range = 9000 65500

[root@racnode1 ~]# /sbin/sysctl -a | grep 'core\.[rw]mem'
net.core.rmem_default = 262144
net.core.wmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_max = 1048576

16. Configure RAC Nodes for Remote Access using SSH - (Optional)

Perform the following optional procedures on both Oracle RAC nodes to manually configure passwordless SSH connectivity between the two cluster member nodes as the "grid" and "oracle" user.

One of the best parts about this section of the document is that it is completely optional! That's not to say configuring Secure Shell (SSH) connectivity between the Oracle RAC nodes is not necessary. To the contrary, the Oracle Universal Installer (OUI) uses the secure shell tools ssh and scp during installation to run remote commands on and copy files to the other cluster nodes. During the Oracle software installations, SSH must be configured so that these commands do not prompt for a password. The ability to run SSH commands without being prompted for a password is sometimes referred to as user equivalence.

The reason this section of the document is optional is that the OUI interface in 11g release 2 includes a new feature that can automatically configure SSH during the actual install phase of the Oracle software for the user account running the installation. The automatic configuration performed by OUI creates passwordless SSH connectivity between all cluster member nodes. Oracle recommends that you use the automatic procedure whenever possible.

In addition to installing the Oracle software, SSH is used after installation by configuration assistants, Oracle Enterprise Manager, OPatch, and other features that perform configuration operations from local to remote nodes.

Note: Configuring SSH with a passphrase is no longer supported for Oracle Clusterware 11g release 2 and later releases. Passwordless SSH is required for Oracle 11g release 2 and higher.

Since this guide uses grid as the Oracle grid infrastructure software owner and oracle as the owner of the Oracle RAC software, passwordless SSH must be configured for both user accounts.

Note: When SSH is not available, the installer attempts to use the rsh and rcp commands instead of ssh and scp. These services, however, are disabled by default on most Linux systems. The use of RSH will not be discussed in this article.

Verify SSH Software is Installed

The supported version of SSH for Linux distributions is OpenSSH. OpenSSH should be included in the Linux distribution minimal installation. To confirm that SSH packages are installed, run the following command on both Oracle RAC nodes:

[root@racnode1 ~]# rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n"| grep ssh
openssh-askpass-4.3p2-36.el5 (x86_64)

openssh-clients-4.3p2-36.el5 (x86_64)

openssh-4.3p2-36.el5 (x86_64)

openssh-server-4.3p2-36.el5 (x86_64)

If you do not see a list of SSH packages, then install those packages for your Linux distribution. For example, load CD #1 into each of the Oracle RAC nodes and perform the following to install the OpenSSH packages:

[root@racnode1 ~]# mount -r /dev/cdrom /media/cdrom
[root@racnode1 ~]# cd /media/cdrom/Server
[root@racnode1 ~]# rpm -Uvh openssh-*
[root@racnode1 ~]# cd /
[root@racnode1 ~]# eject

Why Configure SSH User Equivalence Using the Manual Method Option?

So, if the OUI already includes a feature that automates the SSH configuration between the Oracle RAC nodes, then why provide a section on how to manually configure passwordless SSH connectivity? In fact, for the purpose of this article, I decided to forgo manually configuring SSH connectivity in favor of Oracle's automatic methods included in the installer.

One reason to include this section on manually configuring SSH is to make mention of the fact that you must remove stty commands from the profiles of any Oracle software installation owners, and remove other security measures that are triggered during a login and that generate messages to the terminal. These messages, mail checks, and other displays prevent Oracle software installation owners from using the SSH configuration script that is built into the Oracle Universal Installer. If they are not disabled, then SSH must be configured manually before an installation can be run. Further documentation on preventing installation errors caused by stty commands can be found later in this section.

Another reason you may decide to manually configure SSH for user equivalence is to have the ability to run the Cluster Verification Utility (CVU) prior to installing the Oracle software. The CVU (runcluvfy.sh) is a valuable tool located in the Oracle Clusterware root directory that not only verifies all prerequisites have been met before software installation, it also has the ability to generate shell script programs, called fixup scripts, to resolve many incomplete system configuration requirements. The CVU does, however, have a prerequisite of its own, and that is that SSH user equivalency is configured correctly for the user account running the installation. If you intend to configure SSH connectivity using the OUI, know that the CVU utility will fail before having the opportunity to perform any of its critical checks:

[grid@racnode1 ~]$ /media/cdrom/grid/runcluvfy.sh stage -pre crsinst -fixup -n racnode1,racnode2 -verbose

Performing pre-checks for cluster services setup

Checking node reachability...

Check: Node reachability from node "racnode1"

Destination Node Reachable?

------------------------------------ ------------------------

racnode1 yes

racnode2 yes

Result: Node reachability check passed from node "racnode1"

Checking user equivalence...

Check: User equivalence for user "grid"

Node Name                             Comment
------------------------------------  ------------------------
racnode2                              failed
racnode1                              failed

Result: PRVF-4007 : User equivalence check failed for user "grid"

ERROR:

User equivalence unavailable on all the specified nodes

Verification cannot proceed

Pre-check for cluster services setup was unsuccessful on all the nodes.

Please note that it is not required to run the CVU utility before installing the Oracle software. Starting with Oracle 11g release 2, the installer detects when minimum requirements for installation are not completed and performs the same tasks done by the CVU to generate fixup scripts to resolve incomplete system configuration requirements.

Configure SSH Connectivity Manually on All Cluster Nodes

To reiterate, it is not required to manually configure SSH connectivity before running the OUI. The OUI in 11g release 2 provides an interface during the install for the user account running the installation to automatically create passwordless SSH connectivity between all cluster member nodes. This is the approach recommended by Oracle and the method used in this article. The tasks below to manually configure SSH connectivity between all cluster member nodes are included for documentation purposes only. Keep in mind that this guide uses grid as the Oracle grid infrastructure software owner and oracle as the owner of the Oracle RAC software. If you decide to manually configure SSH connectivity, it should be performed for both user accounts.

The goal in this section is to set up user equivalence for the grid and oracle OS user accounts. User equivalence enables the grid and oracle user accounts to access all other nodes in the cluster (running commands and copying files) without the need for a password. Oracle added support in 10g release 1 for using the SSH tool suite for setting up user equivalence. Before Oracle Database 10g, user equivalence had to be configured using remote shell (RSH).


In the examples that follow, the Oracle software owner listed is the grid user.

Checking Existing SSH Configuration on the System

To determine if SSH is installed and running, enter the following command:

[grid@racnode1 ~]$ pgrep sshd
2535
19852

If SSH is running, then the response to this command is a list of process ID number(s). Run this command on both Oracle RAC nodes in the cluster to verify the SSH daemons are installed and running.

You need either an RSA or a DSA key for the SSH protocol. RSA is used with the SSH 1.5 protocol, while DSA is the default for the SSH 2.0 protocol. With OpenSSH, you can use either RSA or DSA. The instructions that follow are for SSH1. If you have an SSH2 installation, and you cannot use SSH1, then refer to your SSH distribution documentation to configure SSH1 compatibility or to configure SSH2 with DSA.

Note: Automatic passwordless SSH configuration using the OUI creates RSA encryption keys on all nodes of the cluster.

Configuring Passwordless SSH on Cluster Nodes

To configure passwordless SSH, you must first create RSA or DSA keys on each cluster node, and then copy all the keys generated on all cluster node members into an authorized keys file that is identical on each node. Note that the SSH files must be readable only by root and by the software installation user (grid, oracle), as SSH ignores a private key file if it is accessible by others. In the examples that follow, the DSA key is used.

You must configure passwordless SSH separately for each Oracle software installation owner that you intend to use for installation ( grid, oracle).

To configure passwordless SSH, complete the following:

Create SSH Directory, and Create SSH Keys On Each Node

Complete the following steps on each node:

1. Log in as the software owner (in this example, the grid user).

[root@racnode1 ~]# su - grid

2. To ensure that you are logged in as grid, and to verify that the user ID matches the expected user ID you have assigned to the grid user, enter the commands id and id grid. Ensure that the Oracle user group and user and the user terminal window process you are using have identical group and user IDs. For example:

[grid@racnode1 ~]$ id
uid=1100(grid) gid=1000(oinstall) groups=1000(oinstall),1200(asmadmin),1201(asmdba),1202(asmoper)

[grid@racnode1 ~]$ id grid
uid=1100(grid) gid=1000(oinstall) groups=1000(oinstall),1200(asmadmin),1201(asmdba),1202(asmoper)

3. If necessary, create the .ssh directory in the grid user's home directory, and set permissions on it to ensure that only the grid user has read and write permissions:

[grid@racnode1 ~]$ mkdir ~/.ssh
[grid@racnode1 ~]$ chmod 700 ~/.ssh

Note: SSH configuration will fail if the permissions are not set to 700.

4. Enter the following command to generate a DSA key pair (public and private key) for the SSH protocol. At the prompts, accept the default key file location and no passphrase (press [Enter]):

[grid@racnode1 ~]$ /usr/bin/ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/home/grid/.ssh/id_dsa): [Enter]
Enter passphrase (empty for no passphrase): [Enter]
Enter same passphrase again: [Enter]
Your identification has been saved in /home/grid/.ssh/id_dsa.
Your public key has been saved in /home/grid/.ssh/id_dsa.pub.
The key fingerprint is:
7b:e9:e8:47:29:37:ea:10:10:c6:b6:7d:d2:73:e9:03 grid@racnode1

Note: SSH with passphrase is not supported for Oracle Clusterware 11g release 2 and later releases. Passwordless SSH is required for Oracle 11g release 2 and higher.

This command writes the DSA public key to the ~/.ssh/id_dsa.pub file and the private key to the ~/.ssh/id_dsa file.


Never distribute the private key to anyone not authorized to perform Oracle software installations.


5. Repeat steps 1 through 4 for all remaining nodes that you intend to make a member of the cluster, using the DSA key (racnode2).

Add All Keys to a Common authorized_keys File

Now that both Oracle RAC nodes contain a public and private key for DSA, you will need to create an authorized key file (authorized_keys) on one of the nodes. An authorized key file is nothing more than a single file that contains a copy of everyone's (every node's) DSA public key. Once the authorized key file contains all of the public keys, it is then distributed to all other nodes in the cluster.

Note: The grid user's ~/.ssh/authorized_keys file on every node must contain the contents from all of the ~/.ssh/id_dsa.pub files that you generated on all cluster nodes.

Complete the following steps on one of the nodes in the cluster to create and then distribute the authorized key file. For the purpose of this article, I am using the primary node in the cluster, racnode1:

1. From racnode1 (the local node), determine if the authorized key file ~/.ssh/authorized_keys already exists in the .ssh directory of the owner's home directory. In most cases this will not exist since this article assumes you are working with a new install. If the file doesn't exist, create it now:

[grid@racnode1 ~]$ touch ~/.ssh/authorized_keys
[grid@racnode1 ~]$ ls -l ~/.ssh
total 8
-rw-r--r-- 1 grid oinstall    0 Nov 12 12:34 authorized_keys
-rw------- 1 grid oinstall  668 Nov 12 09:24 id_dsa
-rw-r--r-- 1 grid oinstall  603 Nov 12 09:24 id_dsa.pub

In the .ssh directory, you should see the id_dsa.pub keys that you have created, and the blank file authorized_keys.

2. On the local node (racnode1), use SCP (Secure Copy) or SFTP (Secure FTP) to copy the content of the ~/.ssh/id_dsa.pub public key from both Oracle RAC nodes in the cluster to the authorized key file just created (~/.ssh/authorized_keys). Again, this will be done from racnode1. You will be prompted for the grid OS user account password for both Oracle RAC nodes accessed.

The following example is being run from racnode1 and assumes a two-node cluster, with nodes racnode1 and racnode2:

[grid@racnode1 ~]$ ssh racnode1 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
The authenticity of host 'racnode1 (192.168.1.151)' can't be established.
RSA key fingerprint is 2f:0d:2c:da:9f:d4:3d:2e:ea:e9:98:20:2c:b9:e8:f5.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'racnode1,192.168.1.151' (RSA) to the list of known hosts.
grid@racnode1's password: xxxxx

[grid@racnode1 ~]$ ssh racnode2 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
The authenticity of host 'racnode2 (192.168.1.152)' can't be established.
RSA key fingerprint is 97:ab:db:26:f6:01:20:cc:e0:63:d0:d1:73:7e:c2:0a.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'racnode2,192.168.1.152' (RSA) to the list of known hosts.
grid@racnode2's password: xxxxx

The first time you use SSH to connect to a node from a particular system, you will see a message similar to the following:

The authenticity of host 'racnode1 (192.168.1.151)' can't be established.
RSA key fingerprint is 2f:0d:2c:da:9f:d4:3d:2e:ea:e9:98:20:2c:b9:e8:f5.
Are you sure you want to continue connecting (yes/no)? yes

Enter yes at the prompt to continue. The public hostname will then be added to the known_hosts file in the ~/.ssh directory and you will not see this message again when you connect from this system to the same node.
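As an aside (not part of the original procedure), if you want to avoid the interactive host-key prompts entirely, the host keys can be collected up front with ssh-keyscan; a minimal sketch, assuming the node names resolve as shown in /etc/hosts:

# Optional shortcut: pre-populate known_hosts for both nodes so the first
# SSH connection does not prompt for host-key confirmation.
[grid@racnode1 ~]$ ssh-keyscan -t rsa racnode1 racnode2 >> ~/.ssh/known_hosts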

3. At this point, we have the DSA public key from every node in the cluster in the authorized key file (~/.ssh/authorized_keys) on racnode1:

[grid@racnode1 ~]$ ls -l ~/.ssh
total 16
-rw-r--r-- 1 grid oinstall 1206 Nov 12 12:45 authorized_keys
-rw------- 1 grid oinstall  668 Nov 12 09:24 id_dsa
-rw-r--r-- 1 grid oinstall  603 Nov 12 09:24 id_dsa.pub
-rw-r--r-- 1 grid oinstall  808 Nov 12 12:45 known_hosts

We now need to copy it to the remaining nodes in the cluster. In our two-node cluster example, the only remaining node is racnode2. Use the scp command to copy the authorized key file to all remaining nodes in the cluster:

[grid@racnode1 ~]$ scp ~/.ssh/authorized_keys racnode2:.ssh/authorized_keys
grid@racnode2's password: xxxxx
authorized_keys                               100% 1206     1.2KB/s   00:00

4. Change the permission of the authorized key file for both Oracle RAC nodes in the cluster by logging into the node and running the following:

[grid@racnode1 ~]$ chmod 600 ~/.ssh/authorized_keys

Enable SSH User Equivalency on Cluster Nodes

After you have copied the authorized_keys file that contains all public keys to each node in the cluster, complete the steps in this section to ensure passwordless SSH connectivity between all cluster member nodes is configured correctly. In this example, the Oracle grid infrastructure software owner will be used, which is named grid.

When running the test SSH commands in this section, if you see any other messages or text apart from the date and host name, then the Oracle installation will fail. If any of the nodes prompt for a password or pass phrase, then verify that the ~/.ssh/authorized_keys file on that node contains the correct public keys and that you have created an Oracle software owner with identical group membership and IDs. Make any changes required to ensure that only the date and host name are displayed when you enter these commands. You should ensure that any part of a login script that generates any output, or asks any questions, is modified so it acts only when the shell is an interactive shell.

1. On the system where you want to run OUI from (racnode1), log in as the grid user.

[root@racnode1 ~]# su - grid

2. If SSH is configured correctly, you will be able to use the ssh and scp commands without being prompted for a password or pass phrase from the terminal session:

[grid@racnode1 ~]$ ssh racnode1 "date;hostname"
Fri Nov 13 09:46:56 EST 2009
racnode1

[grid@racnode1 ~]$ ssh racnode2 "date;hostname"
Fri Nov 13 09:47:34 EST 2009
racnode2

3. Perform the same actions above from the remaining nodes in the Oracle RAC cluster (racnode2) to ensure they too can access all other nodes without being prompted for a password or pass phrase and get added to the known_hosts file:

[grid@racnode2 ~]$ ssh racnode1 "date;hostname"
The authenticity of host 'racnode1 (192.168.1.151)' can't be established.
RSA key fingerprint is 2f:0d:2c:da:9f:d4:3d:2e:ea:e9:98:20:2c:b9:e8:f5.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'racnode1,192.168.1.151' (RSA) to the list of known hosts.
Fri Nov 13 10:19:57 EST 2009
racnode1

[grid@racnode2 ~]$ ssh racnode1 "date;hostname"
Fri Nov 13 10:20:58 EST 2009
racnode1

[grid@racnode2 ~]$ ssh racnode2 "date;hostname"
The authenticity of host 'racnode2 (192.168.1.152)' can't be established.
RSA key fingerprint is 97:ab:db:26:f6:01:20:cc:e0:63:d0:d1:73:7e:c2:0a.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'racnode2,192.168.1.152' (RSA) to the list of known hosts.
Fri Nov 13 10:22:00 EST 2009
racnode2

[grid@racnode2 ~]$ ssh racnode2 "date;hostname"
Fri Nov 13 10:22:01 EST 2009
racnode2

4. The Oracle Universal Installer is a GUI interface and requires the use of an X Server. From the terminal session enabled for user equivalence (the node you will be performing the Oracle installations from), set the environment variable DISPLAY to a valid X Windows display:

Bourne, Korn, and Bash shells:

[grid@racnode1 ~]$ DISPLAY=<Any X-Windows Host>:0
[grid@racnode1 ~]$ export DISPLAY


C shell:

[grid@racnode1 ~]$ setenv DISPLAY <Any X-Windows Host>:0

After setting the DISPLAY variable to a valid X Windows display, you should perform another test of the current terminal session to ensure that X11 forwarding is not enabled:

[grid@racnode1 ~]$ ssh racnode1 hostname
racnode1

[grid@racnode1 ~]$ ssh racnode2 hostname
racnode2

Note: If you are using a remote client to connect to the node performing the installation, and you see a message similar to "Warning: No xauth data; using fake authentication data for X11 forwarding.", then this means that your authorized keys file is configured correctly; however, your SSH configuration has X11 forwarding enabled. For example:

[grid@racnode1 ~]$ export DISPLAY=melody:0
[grid@racnode1 ~]$ ssh racnode2 hostname
Warning: No xauth data; using fake authentication data for X11 forwarding.
racnode2

Note that having X11 Forwarding enabled will cause the Oracle installation to fail. To correct this problem, create a user-level SSH client configuration file for the oracle OS user account that disables X11 Forwarding:

1. Using a text editor, edit or create the file ~/.ssh/config.

2. Make sure that the ForwardX11 attribute is set to no. For example, insert the following into the ~/.ssh/config file:

Host *
   ForwardX11 no
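If you prefer to create the file from the command line rather than a text editor, a minimal sketch (assuming ~/.ssh/config does not already exist for that user) would be:

[grid@racnode1 ~]$ cat > ~/.ssh/config <<EOF
Host *
   ForwardX11 no
EOF
[grid@racnode1 ~]$ chmod 600 ~/.ssh/config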

Preventing Installation Errors Caused by stty Commands

During an Oracle grid infrastructure or Oracle RAC software installation, OUI uses SSH to run commands and copy files to the other nodes. During the installation, hidden files on the system (for example, .bashrc or .cshrc) will cause makefile and other installation errors if they contain stty commands.

To avoid this problem, you must modify these files in each Oracle installation owner user home directory to suppress all output on STDERR, as in the following examples:

Bourne, Bash, or Korn shell:

if [ -t 0 ]; then

stty intr ^C

fi

C shell:

test -t 0

if ($status == 0) then

stty intr ^C

endif

Note: If there are hidden files that contain stty commands that are loaded by the remote shell, then OUI indicates an error and stops the installation.


17. All Startup Commands for Both Oracle RAC Nodes

Verify that the following startup commands are included on both of the Oracle RAC nodes in the cluster.

Up to this point, we have talked in great detail about the parameters and resources that need to be configured on both nodes in the Oracle RAC 11g configuration. This section will review those parameters, commands, and entries from previous sections that need to occur on both Oracle RAC nodes when they are booted.

For each of the startup files below, the entries shown should be included in each startup file.

/etc/sysctl.conf

We wanted to adjust the default and maximum send buffer size as well as the default and maximum receive buffer size for the interconnect. This file also contains those parameters responsible for configuring shared memory, semaphores, file handles, and the local IP range used by the Oracle instance.

.................................................................

# Kernel sysctl configuration file for Red Hat Linux

#

# For binary values, 0 is disabled, 1 is enabled. See sysctl(8) and

# sysctl.conf(5) for more details.

# Controls IP packet forwarding

net.ipv4.ip_forward = 0

# Controls source route verification

net.ipv4.conf.default.rp_filter = 1

# Do not accept source routing

net.ipv4.conf.default.accept_source_route = 0

# Controls the System Request debugging functionality of the kernel

kernel.sysrq = 0

# Controls whether core dumps will append the PID to the core filename

# Useful for debugging multi-threaded applications

kernel.core_uses_pid = 1

# Controls the use of TCP syncookies

net.ipv4.tcp_syncookies = 1

# Controls the maximum size of a message, in bytes

kernel.msgmnb = 65536

# Controls the default maximum size of a message queue

kernel.msgmax = 65536

# Controls the maximum shared segment size, in bytes

kernel.shmmax = 68719476736

# Controls the maximum number of shared memory segments, in pages

kernel.shmall = 4294967296

# Controls the maximum number of shared memory segments system wide

kernel.shmmni = 4096

# Sets the following semaphore values:

# SEMMSL_value SEMMNS_value SEMOPM_value SEMMNI_value

kernel.sem = 250 32000 100 128

# Sets the maximum number of file-handles that the Linux kernel will allocate

fs.file-max = 6815744

# Defines the local port range that is used by TCP and UDP

# traffic to choose the local port

net.ipv4.ip_local_port_range = 9000 65500

# Default setting in bytes of the socket "receive" buffer which

# may be set by using the SO_RCVBUF socket option

net.core.rmem_default=262144

# Maximum setting in bytes of the socket "receive" buffer which

# may be set by using the SO_RCVBUF socket option

net.core.rmem_max=4194304

# Default setting in bytes of the socket "send" buffer which

# may be set by using the SO_SNDBUF socket option

net.core.wmem_default=262144

# Maximum setting in bytes of the socket "send" buffer which

# may be set by using the SO_SNDBUF socket option

net.core.wmem_max=1048576

# Maximum number of allowable concurrent asynchronous I/O requests

fs.aio-max-nr=1048576

.................................................................

Verify that each of the required kernel parameters is configured in the /etc/sysctl.conf file. Then, ensure that each of these parameters is truly in effect by running the following command on both Oracle RAC nodes in the cluster:

[root@racnode1 ~]# sysctl -p
net.ipv4.ip_forward = 0

net.ipv4.conf.default.rp_filter = 1

net.ipv4.conf.default.accept_source_route = 0


kernel.sysrq = 0

kernel.core_uses_pid = 1

net.ipv4.tcp_syncookies = 1

kernel.msgmnb = 65536

kernel.msgmax = 65536

kernel.shmmax = 68719476736

kernel.shmall = 4294967296

kernel.shmmni = 4096

kernel.sem = 250 32000 100 128

fs.file-max = 6815744

net.ipv4.ip_local_port_range = 9000 65500

net.core.rmem_default = 262144

net.core.rmem_max = 4194304

net.core.wmem_default = 262144

net.core.wmem_max = 1048576

fs.aio-max-nr = 1048576

/etc/hosts

All machine/IP entries for nodes in our RAC cluster.

.................................................................

# Do not remove the following line, or various programs

# that require network functionality will fail.

127.0.0.1 localhost.localdomain localhost

# Public Network - (eth0)

192.168.1.151 racnode1

192.168.1.152 racnode2

# Private Interconnect - (eth1)

192.168.2.151 racnode1-priv

192.168.2.152 racnode2-priv

# Public Virtual IP (VIP) addresses - (eth0:1)

192.168.1.251 racnode1-vip

192.168.1.252 racnode2-vip

# Single Client Access Name (SCAN)

192.168.1.187 racnode-cluster-scan

# Private Storage Network for Openfiler - (eth1)

192.168.1.195 openfiler1

192.168.2.195 openfiler1-priv

# Miscellaneous Nodes

192.168.1.1 router

192.168.1.105 packmule

192.168.1.106 melody

192.168.1.121 domo

192.168.1.122 switch1

192.168.1.125 oemprod

192.168.1.245 accesspoint

.................................................................

/etc/udev/rules.d/55-openiscsi.rules

Rules file to be used by udev to mount iSCSI volumes. This file contains all name=value pairs used to receive events and the call-out SHELL script to handle the event.

.................................................................

# /etc/udev/rules.d/55-openiscsi.rules

KERNEL=="sd*", BUS=="scsi", PROGRAM="/etc/udev/scripts/iscsidev.sh %b",SYMLINK+="iscsi/%c/part%n"

.................................................................

/etc/udev/scripts/iscsidev.sh

Call-out SHELL script that handles the events passed to it from the udev rules file (above) and used to mount iSCSI volumes.

.................................................................

#!/bin/sh

# FILE: /etc/udev/scripts/iscsidev.sh

BUS=${1}

HOST=${BUS%%:*}

[ -e /sys/class/iscsi_host ] || exit 1

file="/sys/class/iscsi_host/host${HOST}/device/session*/iscsi_session*/targetname"

target_name=$(cat ${file})

# This is not an open-scsi drive


if [ -z "${target_name}" ]; then

exit 1

fi

# Check if QNAP drive

check_qnap_target_name=${target_name%%:*}

if [ $check_qnap_target_name = "iqn.2004-04.com.qnap" ]; then

target_name=`echo "${target_name%.*}"`

fi

echo "${target_name##*.}"

.................................................................
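To see what the call-out script produces, you can run it by hand (a hypothetical check only; the host number 3 is an illustration and will differ on your system, and it assumes an active iSCSI session). The value printed is the short volume name that udev uses when building the /dev/iscsi/<name>/part<n> symlinks:

# Hypothetical manual test of the call-out script. Replace "3" with the
# SCSI host number of an active iSCSI session on your system.
[root@racnode1 ~]# sh /etc/udev/scripts/iscsidev.sh 3
crs1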

18. Install and Configure ASMLib 2.0

The installation and configuration procedures in this section should be performed on both of the Oracle RAC nodes in the cluster. Creating the ASM disks, however, will only need to be performed on a single node within the cluster (racnode1).

In this section, we will install and configure ASMLib 2.0, which is a support library for the Automatic Storage Management (ASM) feature of the Oracle Database. In this article, ASM will be used as the shared file system and volume manager for Oracle Clusterware files (OCR and voting disk), Oracle Database files (data, online redo logs, control files, archived redo logs), and the Fast Recovery Area.

Automatic Storage Management simplifies database administration by eliminating the need for the DBA to directly manage potentially thousands of Oracle database files, requiring only the management of groups of disks allocated to the Oracle Database. ASM is built into the Oracle kernel and can be used for both single and clustered instances of Oracle. All of the files and directories to be used for Oracle will be contained in a disk group (or, for the purpose of this article, three disk groups). ASM automatically performs load balancing in parallel across all available disk drives to prevent hot spots and maximize performance, even with rapidly changing data usage patterns. ASMLib gives an Oracle Database using ASM more efficient and capable access to the disk groups it is using.

Keep in mind that ASMLib is only a support library for the ASM software. The ASM software will be installed as part of Oracle grid infrastructure later in this guide. Starting with Oracle grid infrastructure 11g release 2 (11.2), the Automatic Storage Management and Oracle Clusterware software is packaged together in a single binary distribution and installed into a single home directory, which is referred to as the Grid Infrastructure home. The Oracle grid infrastructure software will be owned by the user grid.

So, is ASMLib required for ASM? Not at all. In fact, there are two different methods to configure ASM on Linux:

ASM with ASMLib I/O: This method creates all Oracle database files on raw block devices managed by ASM using ASMLib calls. RAW devices are not required with this method as ASMLib works with block devices.

ASM with Standard Linux I/O: This method does not make use of ASMLib. Oracle database files are created on raw character devices managed by ASM using standard Linux I/O system calls. You will be required to create RAW devices for all disk partitions used by ASM.

In this article, I will be using the "ASM with ASMLib I/O" method. Oracle states in Metalink Note 275315.1 that "ASMLib was provided to enable ASM I/O to Linux disks without the limitations of the standard UNIX I/O API". I plan on performing several tests in the future to identify the performance gains in using ASMLib. Those performance metrics and testing details are out of scope of this article and therefore will not be discussed.

If you would like to learn more about Oracle ASMLib 2.0, visit http://www.oracle.com/technology/tech/linux/asmlib/

Install ASMLib 2.0 Packages

In previous editions of this article, here would be the time where you would need to download the ASMLib 2.0 software from Oracle ASMLib Downloads for Red Hat Enterprise Linux Server 5. This is no longer necessary since the ASMLib software is included with Oracle Enterprise Linux (with the exception of the Userspace Library, which is a separate download). The ASMLib 2.0 software stack includes the following packages:

32-bit (x86) Installations

ASMLib Kernel Driver
   oracleasm-x.x.x-x.el5-x.x.x-x.el5.i686.rpm - (for default kernel)
   oracleasm-x.x.x-x.el5xen-x.x.x-x.el5.i686.rpm - (for xen kernel)

Userspace Library
   oracleasmlib-x.x.x-x.el5.i386.rpm

Driver Support Files
   oracleasm-support-x.x.x-x.el5.i386.rpm

64-bit (x86_64) Installations

ASMLib Kernel Driver
   oracleasm-x.x.x-x.el5-x.x.x-x.el5.x86_64.rpm - (for default kernel)
   oracleasm-x.x.x-x.el5xen-x.x.x-x.el5.x86_64.rpm - (for xen kernel)

Userspace Library
   oracleasmlib-x.x.x-x.el5.x86_64.rpm

Driver Support Files
   oracleasm-support-x.x.x-x.el5.x86_64.rpm

With Oracle Enterprise Linux 5, the ASMLib 2.0 software packages do not get installed by default. The ASMLib 2.0 kernel drivers can be found on CD #5 while the Driver Support File can be found on CD #3. The Userspace Library will need to be downloaded as it is not included with Enterprise Linux. To determine if the Oracle ASMLib packages are installed (which in most cases, they will not be), perform the following on both Oracle RAC nodes:

[root@racnode1 ~]#

rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n"| grep oracleasm | sort

If the Oracle ASMLib 2.0 packages are not installed, load the Enterprise Linux CD #3 and then CD #5 into each of the Oracle RAC nodes and perform the following:

From Enterprise Linux 5.4 (x86_64) - [CD #3]

mount -r /dev/cdrom /media/cdrom
cd /media/cdrom/Server
rpm -Uvh oracleasm-support-2.1.3-1.el5.x86_64.rpm
cd /
eject

From Enterprise Linux 5.4 (x86_64) - [CD #5]

mount -r /dev/cdrom /media/cdrom
cd /media/cdrom/Server
rpm -Uvh oracleasm-2.6.18-164.el5-2.0.5-1.el5.x86_64.rpm
cd /
eject

After installing the ASMLib packages, verify from both Oracle RAC nodes that the software is installed:

[root@racnode1 ~]# rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n"| grep oracleasm | sort
oracleasm-2.6.18-164.el5-2.0.5-1.el5 (x86_64)

oracleasm-support-2.1.3-1.el5 (x86_64)

Download Oracle ASMLib Userspace Library

As mentioned in the previous section, the ASMLib 2.0 software is included with Enterprise Linux with the exception of the Userspace Library (a.k.a. the ASMLib support library). The Userspace Library is required and can be downloaded for free at:

32-bit (x86) Installations

oracleasmlib-2.0.4-1.el5.i386.rpm

64-bit (x86_64) Installations

oracleasmlib-2.0.4-1.el5.x86_64.rpm

After downloading the Userspace Library to both Oracle RAC nodes in the cluster, install it using the following:

[root@racnode1 ~]#

rpm -Uvh oracleasmlib-2.0.4-1.el5.x86_64.rpm

Preparing... ########################################### [100%]

1:oracleasmlib ########################################### [100%]

For information on obtaining the ASMLib support library through the Unbreakable Linux Network (which is not a requirement for this article), please visit Getting Oracle ASMLib via the Unbreakable Linux Network.

Configure ASMLib

Now that you have installed the ASMLib packages for Linux, you need to configure and load the ASM kernel module. This task needs to be run on both Oracle RAC nodes as the root user account.

Note: The oracleasm command by default is in the path /usr/sbin. The /etc/init.d path, which was used in previous releases, is not deprecated, but the oracleasm binary in that path is now used typically for internal commands. If you enter the command oracleasm configure without the -i flag, then you are shown the current configuration. For example:

[root@racnode1 ~]# /usr/sbin/oracleasm configure
ORACLEASM_ENABLED=false

ORACLEASM_UID=

ORACLEASM_GID=

ORACLEASM_SCANBOOT=true

ORACLEASM_SCANORDER=""

ORACLEASM_SCANEXCLUDE=""


1. Enter the following command to run the oracleasm initialization script with the configure option:

[root@racnode1 ~]# /usr/sbin/oracleasm configure -i
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.

Default user to own the driver interface []: grid
Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done

The script completes the following tasks:

Creates the /etc/sysconfig/oracleasm configuration file
Creates the /dev/oracleasm mount point
Mounts the ASMLib driver file system

Note: The ASMLib driver file system is not a regular file system. It is used only by the Automatic Storage Management library to communicate with the Automatic Storage Management driver.

2. Enter the following command to load the oracleasm kernel module:

[root@racnode1 ~]# /usr/sbin/oracleasm init
Creating /dev/oracleasm mount point: /dev/oracleasm
Loading module "oracleasm": oracleasm
Mounting ASMlib driver filesystem: /dev/oracleasm

3. Repeat this procedure on all nodes in the cluster (racnode2) where you want to install Oracle RAC.

Create ASM Disks for Oracle

Creating the ASM disks only needs to be performed from one node in the RAC cluster as the root user account. I will be running these commands on racnode1. On the other Oracle RAC node(s), you will need to perform a scandisk to recognize the new volumes. When that is complete, you should then run the oracleasm listdisks command on both Oracle RAC nodes to verify that all ASM disks were created and available.

In the section "Create Partitions on iSCSI Volumes", we configured (partitioned) three iSCSI volumes to be used by ASM. ASM will be used for storing Oracle Clusterware files, Oracle database files like online redo logs, database files, control files, archived redo log files, and the Fast Recovery Area. Use the local device names that were created by udev when configuring the three ASM volumes.

To create the ASM disks using the iSCSI target names to local device name mappings, type the following:

[root@racnode1 ~]# /usr/sbin/oracleasm createdisk CRSVOL1 /dev/iscsi/crs1/part1
Writing disk header: done
Instantiating disk: done

[root@racnode1 ~]# /usr/sbin/oracleasm createdisk DATAVOL1 /dev/iscsi/data1/part1
Writing disk header: done
Instantiating disk: done

[root@racnode1 ~]# /usr/sbin/oracleasm createdisk FRAVOL1 /dev/iscsi/fra1/part1
Writing disk header: done
Instantiating disk: done

To make the disk available on the other nodes in the cluster ( racnode2), enter the following command as root on each node:

[root@racnode2 ~]# /usr/sbin/oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
Instantiating disk "FRAVOL1"
Instantiating disk "DATAVOL1"
Instantiating disk "CRSVOL1"

We can now test that the ASM disks were successfully created by using the following command on both nodes in the RAC cluster as the root user account. This command identifies shared disks attached to the node that are marked as Automatic Storage Management disks:

[root@racnode1 ~]# /usr/sbin/oracleasm listdisks
CRSVOL1
DATAVOL1
FRAVOL1

[root@racnode2 ~]# /usr/sbin/oracleasm listdisks
CRSVOL1
DATAVOL1
FRAVOL1

19. Download Oracle RAC 11g Release 2 Software

The following download procedures only need to be performed on one node in the cluster.

The next step is to download and extract the required Oracle software packages from the Oracle Technology Network (OTN):

Note: If you do not currently have an account with Oracle OTN, you will need to create one. This is a FREE account!

Oracle offers a development and testing license free of charge. No support, however, is provided and the license does not permit production use. A full description of the license agreement is available on OTN.

32-bit (x86) Installations

http://www.oracle.com/technology/software/products/database/oracle11g/112010_linuxsoft.html

64-bit (x86_64) Installations

http://www.oracle.com/technology/software/products/database/oracle11g/112010_linx8664soft.html

You will be downloading and extracting the required software from Oracle to only one of the Linux nodes in the cluster — namely, racnode1. You will perform all Oracle software installs from this machine. The Oracle installer will copy the required software packages to all other nodes in the RAC configuration using remote access (scp).

Log in to the node that you will be performing all of the Oracle installations from (racnode1) as the appropriate software owner. For example, log in and download the Oracle grid infrastructure software to the directory /home/grid/software/oracle as the grid user. Next, log in and download the Oracle Database and Oracle Examples (optional) software to the /home/oracle/software/oracle directory as the oracle user.

Download and Extract the Oracle Software

Download the following software packages:

Oracle Database 11g Release 2 Grid Infrastructure (11.2.0.1.0) for Linux

Oracle Database 11g Release 2 (11.2.0.1.0) for Linux

Oracle Database 11g Release 2 Examples (optional)

All downloads are available from the same page.

Extract the Oracle grid infrastructure software as the grid user:

[grid@racnode1 ~]$ mkdir -p /home/grid/software/oracle
[grid@racnode1 ~]$ mv linux.x64_11gR2_grid.zip /home/grid/software/oracle
[grid@racnode1 ~]$ cd /home/grid/software/oracle
[grid@racnode1 oracle]$ unzip linux.x64_11gR2_grid.zip

Extract the Oracle Database and Oracle Examples software as the oracle user:

[oracle@racnode1 ~]$ mkdir -p /home/oracle/software/oracle
[oracle@racnode1 ~]$ mv linux.x64_11gR2_database_1of2.zip /home/oracle/software/oracle
[oracle@racnode1 ~]$ mv linux.x64_11gR2_database_2of2.zip /home/oracle/software/oracle
[oracle@racnode1 ~]$ mv linux.x64_11gR2_examples.zip /home/oracle/software/oracle
[oracle@racnode1 ~]$ cd /home/oracle/software/oracle
[oracle@racnode1 oracle]$ unzip linux.x64_11gR2_database_1of2.zip
[oracle@racnode1 oracle]$ unzip linux.x64_11gR2_database_2of2.zip
[oracle@racnode1 oracle]$ unzip linux.x64_11gR2_examples.zip

20. Preinstallation Tasks for Oracle Grid Infrastructure for a Cluster

Perform the following checks on both Oracle RAC nodes in the cluster.

This section contains any remaining preinstallation tasks for Oracle grid infrastructure that have not already been discussed. Please note that manually running the Cluster Verification Utility (CVU) before running the Oracle installer is not required. The CVU is run automatically at the end of the Oracle grid infrastructure installation as part of the Configuration Assistants process.

Install the cvuqdisk Package for Linux

Install the operating system package cvuqdisk to both Oracle RAC nodes. Without cvuqdisk, Cluster Verification Utility cannot discover shared disks, and you receive the error message "Package cvuqdisk not installed" when the Cluster Verification Utility is run (either manually or at the end of the Oracle grid infrastructure installation). Use the cvuqdisk RPM for your hardware architecture (for example, x86_64 or i386).

The cvuqdisk RPM can be found on the Oracle grid infrastructure installation media in the rpm directory. For the purpose of this article, the Oracle grid infrastructure media was extracted to the /home/grid/software/oracle/grid directory on racnode1 as the grid user.

To install the cvuqdisk RPM, complete the following procedures:

1. Locate the cvuqdisk RPM package, which is in the directory rpm on the installation media from racnode1:

[racnode1]: /home/grid/software/oracle/grid/rpm/cvuqdisk-1.0.7-1.rpm

2. Copy the cvuqdisk package from racnode1 to racnode2 as the grid user account:

[racnode2]: /home/grid/software/oracle/grid/rpm/cvuqdisk-1.0.7-1.rpm

3. Log in as root on both Oracle RAC nodes:

[grid@racnode1 rpm]$ su

[grid@racnode2 rpm]$ su

4. Set the environment variable CVUQDISK_GRP to point to the group that will own cvuqdisk, which for this article is oinstall:

[root@racnode1 rpm]# CVUQDISK_GRP=oinstall; export CVUQDISK_GRP

[root@racnode2 rpm]# CVUQDISK_GRP=oinstall; export CVUQDISK_GRP

5. In the directory where you have saved the cvuqdisk RPM, use the following command to install the cvuqdisk package on both Oracle RAC nodes:

[root@racnode1 rpm]# rpm -iv cvuqdisk-1.0.7-1.rpm
Preparing packages for installation...
cvuqdisk-1.0.7-1

[root@racnode2 rpm]# rpm -iv cvuqdisk-1.0.7-1.rpm
Preparing packages for installation...
cvuqdisk-1.0.7-1

Verify Oracle Clusterware Requirements with CVU - (optional)

As stated earlier in this section, running the Cluster Verification Utility before running the Oracle installer is not required. Starting with Oracle Clusterware 11g release 2, Oracle Universal Installer (OUI) detects when the minimum requirements for an installation are not met, and creates shell scripts, called fixup scripts, to finish incomplete system configuration steps. If OUI detects an incomplete task, then it generates fixup scripts (runfixup.sh). You can run the fixup script after you click the Fix and Check Again button during the Oracle grid infrastructure installation.

You also can have CVU generate fixup scripts before installation.

If you decide that you want to run the CVU, please keep in mind that it should be run as the grid user from the node you will be performing the Oracle installation from (racnode1). In addition, SSH connectivity with user equivalence must be configured for the grid user. If you intend to configure SSH connectivity using the OUI, the CVU utility will fail before having the opportunity to perform any of its critical checks and generate the fixup scripts:

Checking user equivalence...


Check: User equivalence for user "grid"

Node Name                             Comment
------------------------------------  ------------------------
racnode2                              failed
racnode1                              failed

Result: PRVF-4007 : User equivalence check failed for user "grid"

ERROR:

User equivalence unavailable on all the specified nodes

Verification cannot proceed

Pre-check for cluster services setup was unsuccessful on all the nodes.

Once all prerequisites for running the CVU utility have been met, you can now manually check your cluster configuration before installation and generate a fixup script to make operating system changes before starting the installation.

[grid@racnode1 ~]$ cd /home/grid/software/oracle/grid
[grid@racnode1 grid]$ ./runcluvfy.sh stage -pre crsinst -n racnode1,racnode2 -fixup -verbose

Review the CVU report. The only failure that should be found given the configuration described in this article is:

Check: Membership of user "grid" in group "dba"

Node Name User Exists Group Exists User in Group Comment

---------------- ------------ ------------ ------------ ----------------

racnode2 yes yes no failed

racnode1 yes yes no failed

Result: Membership check for user "grid" in group "dba" failed

The check fails because this guide creates role-allocated groups and users by using a Job Role Separation configuration which is not accurately recognized by the CVU. Creating a Job Role Separation configuration was described in the section Create Job Role Separation Operating System Privileges Groups, Users, and Directories. The CVU fails to recognize this type of configuration and assumes the grid user should always be part of the dba group. This failed check can be safely ignored. All other checks performed by CVU should be reported as "passed" before continuing with the Oracle grid infrastructure installation.

Verify Hardware and Operating System Setup with CVU

The next CVU check to run will verify the hardware and operating system setup. Again, run the following as the grid user account from racnode1 with

user equivalence configured:

[grid@racnode1 ~]$ cd /home/grid/software/oracle/grid
[grid@racnode1 grid]$ ./runcluvfy.sh stage -post hwos -n racnode1,racnode2 -verbose

Review the CVU report. All checks performed by CVU should be reported as "passed" before continuing with the Oracle grid infrastructure installation.


21. Install Oracle Grid Infrastructure for a Cluster

Perform the following installation procedures from only one of the Oracle RAC nodes in the cluster (racnode1). The Oracle grid infrastructure

software (Oracle Clusterware and Automatic Storage Management) will be installed to both of the Oracle RAC nodes in the cluster by the Oracle

Universal Installer.

You are now ready to install the "grid" part of the environment: Oracle Clusterware and Automatic Storage Management. Complete the following steps to install Oracle grid infrastructure on your cluster.

At any time during installation, if you have a question about what you are being asked to do, click the Help button on the OUI page.

Typical and Advanced Installation

Starting with 11g release 2, Oracle now provides two options for installing the Oracle grid infrastructure software:

Typical Installation

The typical installation option is a simplified installation with a minimal number of manual configuration choices. This new option provides streamlined cluster installations, especially for those customers who are new to clustering. Typical installation defaults as many options as possible to those recommended as best practices.

Advanced Installation

The advanced installation option is an advanced procedure that requires a higher degree of system knowledge. It enables you to select particular configuration choices, including additional storage and network choices, use of operating system group authentication for role-based administrative privileges, integration with IPMI, or more granularity in specifying Automatic Storage Management roles.

Because this article makes use of role-based administrative privileges and high granularity in specifying Automatic Storage Management roles, we will be using the "Advanced Installation" option.

Configuring SCAN without DNS

For the purpose of this article, although I indicated I will be manually assigning IP addresses using the DNS method for name resolution (as opposed to GNS), I will not actually be defining the SCAN in any DNS server (or GNS for that matter). Instead, I will only be defining the SCAN host name and IP address in the hosts file (/etc/hosts) on each Oracle RAC node and any clients attempting to connect to the database cluster. Although Oracle strongly discourages this practice and highly recommends the use of GNS or DNS resolution, I felt it beyond the scope of this article to configure DNS. This section includes a workaround (OK, a total hack) to the nslookup binary that allows the Cluster Verification Utility to finish successfully during the Oracle grid infrastructure install. Please note that the workaround documented in this section is only for the sake of brevity and should not be considered for a production implementation.
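For reference, the hosts file entry for the SCAN used throughout this article would look something like the following (the name and address are the ones used in this guide; adjust them for your environment):

# /etc/hosts entry on each Oracle RAC node (and any client)
192.168.1.187   racnode-cluster-scan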

Defining the SCAN in only the hosts file and not in either Grid Naming Service (GNS) or DNS is an invalid configuration and will cause the Cluster Verification Utility to fail during the Oracle grid infrastructure installation:

Figure 17: Oracle Grid Infrastructure / CVU Error - (Configuring SCAN without DNS)


INFO: Checking Single Client Access Name (SCAN)...

INFO: Checking name resolution setup for "racnode-cluster-scan"...

INFO: ERROR:

INFO: PRVF-4657 : Name resolution setup check for "racnode-cluster-scan" (IP address: 216.24.138.153) failed

INFO: ERROR:

INFO: PRVF-4657 : Name resolution setup check for "racnode-cluster-scan" (IP address: 192.168.1.187) failed

INFO: ERROR:

INFO: PRVF-4664 : Found inconsistent name resolution entries for SCAN name "racnode-cluster-scan"

INFO: Verification of SCAN VIP and Listener setup failed

Provided this is the only error reported by the CVU, it would be safe to ignore this check and continue by clicking the [Next] button in OUI and

move forward with the Oracle grid infrastructure installation. This is documented in Doc ID: 887471.1 on the My Oracle Support web site.

If on the other hand you want the CVU to complete successfully while still only defining the SCAN in the hosts file, simply modify the nslookup

utility as root on both Oracle RAC nodes as follows.

First, rename the original nslookup binary to nslookup.original on both Oracle RAC nodes:

[root@racnode1 ~]#

mv /usr/bin/nslookup /usr/bin/nslookup.original

Next, create a new shell script named /usr/bin/nslookup as shown below while replacing 24.154.1.34 with your primary DNS, racnode-cluster-scan with your SCAN host name, and 192.168.1.187 with your SCAN IP address:

#!/bin/bash

HOSTNAME=${1}

if [[ $HOSTNAME = "racnode-cluster-scan" ]]; then
    echo "Server:         24.154.1.34"
    echo "Address:        24.154.1.34#53"
    echo "Non-authoritative answer:"
    echo "Name:   racnode-cluster-scan"
    echo "Address: 192.168.1.187"
else
    /usr/bin/nslookup.original $HOSTNAME
fi

Finally, make the new nslookup shell script executable:

[root@racnode1 ~]#

chmod 755 /usr/bin/nslookup

Remember to perform these actions on both Oracle RAC nodes.

The new nslookup shell script simply echoes back your SCAN IP address whenever the CVU calls nslookup with your SCAN host name; otherwise, it calls the original nslookup binary.
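You can quickly sanity-check the wrapper by calling it directly with the SCAN name; the values echoed below come straight from the script above (any other host name falls through to the original binary):

[root@racnode1 ~]# nslookup racnode-cluster-scan
Server:         24.154.1.34
Address:        24.154.1.34#53
Non-authoritative answer:
Name:   racnode-cluster-scan
Address: 192.168.1.187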

The CVU will now pass during the Oracle grid infrastructure installation when it attempts to verify your SCAN:

[grid@racnode1 ~]$

cluvfy comp scan -verbose

Verifying scan

Checking Single Client Access Name (SCAN)...

SCAN VIP name Node Running? ListenerName Port Running?

---------------- ------------ ------------ ------------ ------------ ------------

racnode-cluster-scan racnode1 true LISTENER 1521 true

Checking name resolution setup for "racnode-cluster-scan"...

SCAN Name IP Address Status Comment

------------ ------------------------ ------------------------ ----------

racnode-cluster-scan      192.168.1.187             passed


Verification of SCAN VIP and Listener setup passed

Verification of scan was successful.

===============================================================================

[grid@racnode2 ~]$

cluvfy comp scan -verbose

Verifying scan

Checking Single Client Access Name (SCAN)...

SCAN VIP name Node Running? ListenerName Port Running?

---------------- ------------ ------------ ------------ ------------ ------------

racnode-cluster-scan racnode1 true LISTENER 1521 true

Checking name resolution setup for "racnode-cluster-scan"...

SCAN Name IP Address Status Comment

------------ ------------------------ ------------------------ ----------

racnode-cluster-scan      192.168.1.187             passed

Verification of SCAN VIP and Listener setup passed

Verification of scan was successful.

Verify Terminal Shell Environment

Before starting the Oracle Universal Installer, log in to racnode1 as the owner of the Oracle grid infrastructure software, which for this article is grid. Next, if you are using a remote client to connect to the Oracle RAC node performing the installation (SSH or Telnet to racnode1 from a workstation configured with an X Server), verify your X11 display server settings, which were described in the section Logging In to a Remote System Using X Terminal.

Install Oracle Grid Infrastructure

Perform the following tasks as the grid user to install Oracle grid infrastructure:

[grid@racnode1 ~]$

id

uid=1100(grid) gid=1000(oinstall) groups=1000(oinstall),1200(asmadmin),1201(asmdba),1202(asmoper)

[grid@racnode1 ~]$

DISPLAY=<your local workstation>:0.0

[grid@racnode1 ~]$

export DISPLAY

[grid@racnode1 ~]$

cd /home/grid/software/oracle/grid

[grid@racnode1 grid]$

./runInstaller

Screen Name Response Screen Shot

Select Installation Option Select " Install and Configure Grid Infrastructure for a Cluster"

Select Installation Type Select " Advanced Installation"

Select Product Languages Make the appropriate selection(s) for your environment.

Grid Plug and Play Information

Instructions on how to configure Grid Naming Service (GNS) are beyond the scope of this article. Un-check the option to "Configure GNS".

Cluster Name SCAN Name SCAN Port

racnode-cluster racnode-cluster-scan 1521

After clicking [Next], the OUI will attempt to validate the SCAN information:

Cluster Node Information

Use this screen to add the node racnode2 to the cluster and to configure SSH

connectivity.

Click the "Add" button to add " racnode2" and its virtual IP address "racnode2-vip" according to the table below:

Build Your Own Oracle RAC 11g Cluster on Oracle Enterprise Linux and... http://www.oracle.com/technetwork/articles/hunter-rac11gr2-iscsi-3-088...

3 of 26 3/6/2012 9:55 AM

Page 61: 11gR2 RAC Openfiler Install Page1 2 3

Public Node Name Virtual Host Name

racnode1 racnode1-vip

racnode2 racnode2-vip

Next, click the [SSH Connectivity] button. Enter the "OS Password" for the

grid user and click the [Setup] button. This will start the "SSH Connectivity"

configuration process:

After the SSH configuration process successfully completes, acknowledge the dialog box.

Finish off this screen by clicking the [Test] button to verify passwordless

SSH connectivity.

Specify Network Interface Usage

Identify the network interfaces to be used for the "Public" and "Private" networks. Make any changes necessary to match the values in the table below:

Interface Name Subnet Interface Type

eth0 192.168.1.0 Public

eth1 192.168.2.0 Private

Storage Option Information Select " Automatic Storage Management (ASM)".

Create ASM Disk Group

Create an ASM Disk Group that will be used to store the Oracle Clusterware files according to the values in the table below:

Disk Group Name Redundancy Disk Path

CRS External ORCL:CRSVOL1

Specify ASM Password For the purpose of this article, I chose to "Use same passwords for these accounts".

Failure Isolation Support Configuring Intelligent Platform Management Interface (IPMI) is beyond the scope of this article. Select "Do not use Intelligent Platform Management Interface (IPMI)".

Privileged Operating System Groups

This article makes use of role-based administrative privileges and high granularity in specifying Automatic Storage Management roles using a Job Role Separation configuration.

Make any changes necessary to match the values in the table below:

OSDBA for ASM OSOPER for ASM OSASM

asmdba asmoper asmadmin

Specify Installation Location

Set the "Oracle Base" ( $ORACLE_BASE) and "Software Location" (

$ORACLE_HOME) for the Oracle grid infrastructure installation:

Oracle Base: /u01/app/grid

Software Location: /u01/app/11.2.0/grid

Create Inventory

Since this is the first install on the host, you will need to create the Oracle Inventory. Use the default values provided by the OUI: Inventory Directory: /u01/app/oraInventory

oraInventory Group Name: oinstall

Prerequisite Checks

The installer will run through a series of checks to determine if both Oracle RAC nodes meet the minimum requirements for installing and configuring the Oracle Clusterware and Automatic Storage Management software.

Starting with Oracle Clusterware 11g release 2 (11.2), if any checks fail, the installer (OUI) will create shell script programs, called fixup scripts, to resolve many incomplete system configuration requirements. If OUI detects an incomplete task that is marked "fixable", then you can easily fix the issue by generating the fixup script by clicking the [Fix & Check Again] button.

The fixup script is generated during installation. You will be prompted to run the script as root in a separate terminal session. When you run the script, it raises kernel values to required minimums, if necessary, and completes other operating system configuration tasks.

If all prerequisite checks pass (as was the case for my install), the OUI continues to the Summary screen.

Summary Click [Finish] to start the installation.

Setup The installer performs the Oracle grid infrastructure setup process on both Oracle RAC nodes.

Execute Configuration scripts

After the installation completes, you will be prompted to run the /u01/app/oraInventory/orainstRoot.sh and /u01/app/11.2.0/grid/root.sh scripts. Open a new console window on both Oracle RAC nodes in the cluster (starting with the node you are performing the install from) as the root user account.


Run the orainstRoot.sh script on both nodes in the RAC cluster:

[root@racnode1 ~]#

/u01/app/oraInventory/orainstRoot.sh

[root@racnode2 ~]#

/u01/app/oraInventory/orainstRoot.sh

Within the same new console window on both Oracle RAC nodes in the cluster (starting with the node you are performing the install from), stay logged in as the root user account. Run the root.sh script on both nodes in the RAC cluster, one at a time, starting with the node you are performing the install from:

[root@racnode1 ~]#

/u01/app/11.2.0/grid/root.sh

[root@racnode2 ~]#

/u01/app/11.2.0/grid/root.sh

The root.sh script can take several minutes to run. When running root.sh on the last node, you will receive output similar to the following, which signifies a successful install:

...

The inventory pointer is located at /etc/oraInst.loc

The inventory is located at /u01/app/oraInventory

'UpdateNodeList' was successful.

Go back to OUI and acknowledge the "Execute Configuration scripts" dialog window.

Configure Oracle Grid Infrastructure for a Cluster

The installer will run configuration assistants for Oracle Net Services (NETCA), Automatic Storage Management (ASMCA), and Oracle Private Interconnect (VIPCA). The final step performed by OUI is to run the Cluster Verification Utility (CVU). If the configuration assistants and CVU run successfully, you can exit OUI by clicking [Next] and then [Close].

As described earlier in this section, if you configured SCAN "only" in your hosts file (/etc/hosts) and not in either Grid Naming Service (GNS) or manually using DNS, this is considered an invalid configuration and will cause the Cluster Verification Utility to fail.

Provided this is the only error reported by the CVU, it would be safe to ignore this check and continue by clicking [Next] and then the [Close] button to exit the OUI. This is documented in Doc ID: 887471.1 on the My Oracle Support web site.

If on the other hand you want the CVU to complete successfully while still only defining the SCAN in the hosts file, do not click the [Next] button in OUI to bypass the error. Instead, follow the instructions in the section Configuring SCAN without DNS to modify the nslookup utility. After completing the steps documented in that section, return to the OUI and click the [Retry] button. The CVU should now finish with no errors. Click [Next] and then [Close] to exit the OUI.

Finish At the end of the installation, click the [Close] button to exit the OUI.

Caution: After installation is complete, do not manually remove, or run cron jobs that remove, /tmp/.oracle or /var/tmp/.oracle or their files while Oracle Clusterware is up. If you remove these files, then Oracle Clusterware could encounter intermittent hangs, and you will encounter the error CRS-0184: Cannot communicate with the CRS daemon.

22. Postinstallation Tasks for Oracle Grid Infrastructure for a Cluster

Perform the following postinstallation procedures on both Oracle RAC nodes in the cluster.

Verify Oracle Clusterware Installation

After the installation of Oracle grid infrastructure, you should run through several tests to verify the install was successful. Run the following commands on both nodes in the RAC cluster as the grid user.

Check CRS Status

[grid@racnode1 ~]$

crsctl check crs

CRS-4638: Oracle High Availability Services is online

CRS-4537: Cluster Ready Services is online

CRS-4529: Cluster Synchronization Services is online

CRS-4533: Event Manager is online


Check Clusterware Resources

Note: The crs_stat command is deprecated in Oracle Clusterware 11g release 2 (11.2).

[grid@racnode1 ~]$

crs_stat -t -v

Name Type R/RA F/FT Target State Host

----------------------------------------------------------------------

ora.CRS.dg ora....up.type 0/5 0/ ONLINE ONLINE racnode1

ora....ER.lsnr ora....er.type 0/5 0/ ONLINE ONLINE racnode1

ora....N1.lsnr ora....er.type 0/5 0/0 ONLINE ONLINE racnode1

ora.asm ora.asm.type 0/5 0/ ONLINE ONLINE racnode1

ora.eons ora.eons.type 0/3 0/ ONLINE ONLINE racnode1

ora.gsd ora.gsd.type 0/5 0/ OFFLINE OFFLINE

ora....network ora....rk.type 0/5 0/ ONLINE ONLINE racnode1

ora.oc4j ora.oc4j.type 0/5 0/0 OFFLINE OFFLINE

ora.ons ora.ons.type 0/3 0/ ONLINE ONLINE racnode1

ora....SM1.asm application 0/5 0/0 ONLINE ONLINE racnode1

ora....E1.lsnr application 0/5 0/0 ONLINE ONLINE racnode1

ora....de1.gsd application 0/5 0/0 OFFLINE OFFLINE

ora....de1.ons application 0/3 0/0 ONLINE ONLINE racnode1

ora....de1.vip ora....t1.type 0/0 0/0 ONLINE ONLINE racnode1

ora....SM2.asm application 0/5 0/0 ONLINE ONLINE racnode2

ora....E2.lsnr application 0/5 0/0 ONLINE ONLINE racnode2

ora....de2.gsd application 0/5 0/0 OFFLINE OFFLINE

ora....de2.ons application 0/3 0/0 ONLINE ONLINE racnode2

ora....de2.vip ora....t1.type 0/0 0/0 ONLINE ONLINE racnode2

ora....ry.acfs ora....fs.type 0/5 0/ ONLINE ONLINE racnode1

ora.scan1.vip ora....ip.type 0/0 0/0 ONLINE ONLINE racnode1
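Since crs_stat is deprecated in 11.2, the same resource inventory can also be produced with the cluster-aware crsctl command (the same command is used later in this guide to verify the clustered database):

[grid@racnode1 ~]$ crsctl status resource -t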

Check Cluster Nodes

[grid@racnode1 ~]$

olsnodes -n

racnode1 1

racnode2 2

Check Oracle TNS Listener Process on Both Nodes

[grid@racnode1 ~]$

ps -ef | grep lsnr | grep -v 'grep' | grep -v 'ocfs' | awk '{print $9}'

LISTENER_SCAN1

LISTENER

[grid@racnode2 ~]$

ps -ef | grep lsnr | grep -v 'grep' | grep -v 'ocfs' | awk '{print $9}'

LISTENER
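As an alternative to scanning the process list, the listeners can also be checked with SRVCTL from the Grid home (a brief sketch, run as the grid user); both commands should report the listeners enabled and running on the expected nodes:

[grid@racnode1 ~]$ srvctl status listener
[grid@racnode1 ~]$ srvctl status scan_listener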

Confirming Oracle ASM Function for Oracle Clusterware Files

If you installed the OCR and voting disk files on Oracle ASM, then use the following command syntax as the Grid Infrastructure installation owner to confirm that your Oracle ASM installation is running:

[grid@racnode1 ~]$

srvctl status asm -a

ASM is running on racnode1,racnode2

ASM is enabled.

Check Oracle Cluster Registry (OCR)

[grid@racnode1 ~]$

ocrcheck

Status of Oracle Cluster Registry is as follows :

Version : 3

Total space (kbytes) : 262120

Used space (kbytes) : 2404

Available space (kbytes) : 259716

ID : 1259866904

Device/File Name : +CRS

Device/File integrity check succeeded

Device/File not configured

Device/File not configured


Device/File not configured

Device/File not configured

Cluster registry integrity check succeeded

Logical corruption check bypassed due to non-privileged user

Check Voting Disk

[grid@racnode1 ~]$

crsctl query css votedisk

## STATE File Universal Id File Name Disk group

-- ----- ----------------- --------- ---------

1. ONLINE 4cbbd0de4c694f50bfd3857ebd8ad8c4 (ORCL:CRSVOL1) [CRS]

Located 1 voting disk(s).

Note: To manage Oracle ASM or Oracle Net 11g release 2 (11.2) or later installations, use the srvctl binary in the Oracle grid infrastructure home for a cluster (Grid home). Once we install Oracle Real Application Clusters (the Oracle Database software), you cannot use the srvctl binary in the database home to manage Oracle ASM or Oracle Net, which reside in the Oracle grid infrastructure home.

Voting Disk Management

In prior releases, it was highly recommended to back up the voting disk using the dd command after installing the Oracle Clusterware software. With Oracle Clusterware release 11.2 and later, backing up and restoring a voting disk using dd is not supported and may result in the loss of the voting disk.

Backing up the voting disks in Oracle Clusterware 11g release 2 is no longer required. The voting disk data is automatically backed up in OCR as part of any configuration change and is automatically restored to any voting disk added.

To learn more about managing the voting disks, Oracle Cluster Registry (OCR), and Oracle Local Registry (OLR), please refer to the Oracle Clusterware Administration and Deployment Guide 11g Release 2 (11.2).

Back Up the root.sh Script

Oracle recommends that you back up the root.sh script after you complete an installation. If you install other products in the same Oracle

home directory, then the installer updates the contents of the existing root.sh script during the installation. If you require information

contained in the original root.sh script, then you can recover it from the root.sh file copy.

Back up the root.sh file on both Oracle RAC nodes as root:

[root@racnode1 ~]#

cd /u01/app/11.2.0/grid

[root@racnode1 grid]#

cp root.sh root.sh.racnode1.AFTER_INSTALL_NOV-20-2009

[root@racnode2 ~]#

cd /u01/app/11.2.0/grid

[root@racnode2 grid]#

cp root.sh root.sh.racnode2.AFTER_INSTALL_NOV-20-2009

Install Cluster Health Management Software - (Optional)

To address troubleshooting issues, Oracle recommends that you install the Instantaneous Problem Detection OS Tool (IPD/OS) if you are using Linux kernel 2.6.9 or higher. This article was written using Oracle Enterprise Linux 5 update 4, which uses the 2.6.18 kernel:

[root@racnode1 ~]#

uname -a

Linux racnode1 2.6.18-164.el5 #1 SMP Thu Sep 3 04:15:13 EDT 2009 x86_64 x86_64 x86_64 GNU/Linux

If you are using a Linux kernel earlier than 2.6.9, then you would use OS Watcher and RACDDT, which are available through the My Oracle Support website (formerly Metalink).

The IPD/OS tool is designed to detect and analyze operating system and cluster resource-related degradation and failures. The tool can provide better explanations for many issues that occur in clusters where Oracle Clusterware, Oracle ASM and Oracle RAC are running, such as node evictions. It continuously tracks operating system resource consumption at the node, process, and device level, and it collects and analyzes cluster-wide data. In real time mode, when thresholds are reached, an alert is shown to the operator. For root cause analysis, historical data can be replayed to understand what was happening at the time of failure.

Instructions for installing and configuring the IPD/OS tool are beyond the scope of this article and will not be discussed. You can download the IPD/OS tool along with a detailed installation and configuration guide at the following URL:


http://www.oracle.com/technology/products/database/clustering/ipd_download_homepage.html

23. Create ASM Disk Groups for Data and Fast Recovery Area

Run the ASM Configuration Assistant (asmca) as the grid user from only one node in the cluster (racnode1) to create the additional ASM disk

groups which will be used to create the clustered database.

During the installation of Oracle grid infrastructure, we configured one ASM disk group named +CRS which was used to store the Oracle

clusterware files (OCR and voting disk).

In this section, we will create two additional ASM disk groups using the ASM Configuration Assistant ( asmca). These new ASM disk groups will

be used later in this guide when creating the clustered database.

The first ASM disk group will be named +RACDB_DATA and will be used to store all Oracle physical database files (data, online redo logs, control

files, archived redo logs). A second ASM disk group will be created for the Fast Recovery Area named +FRA.

Verify Terminal Shell Environment

Before starting the ASM Configuration Assistant, log in to racnode1 as the owner of the Oracle grid infrastructure software, which for this article is grid. Next, if you are using a remote client to connect to the Oracle RAC node performing the installation (SSH or Telnet to racnode1 from a workstation configured with an X Server), verify your X11 display server settings, which were described in the section Logging In to a Remote System Using X Terminal.

Create Additional ASM Disk Groups using ASMCA

Perform the following tasks as the grid user to create two additional ASM disk groups:

[grid@racnode1 ~]$

asmca &

Screen Name Response Screen Shot

Disk Groups From the "Disk Groups" tab, click the " Create" button.

Create Disk Group

The "Create Disk Group" dialog should show two of the ASMLib volumes we created earlier inthis guide.

If the ASMLib volumes we created earlier in this article do not show up in the "Select MemberDisks" window as eligible ( ORCL:DATAVOL1 and ORCL:FRAVOL1) then click on the "Change

Disk Discovery Path" button and input " ORCL:*".

When creating the "Data" ASM disk group, use " RACDB_DATA" for the "Disk Group Name". Inthe "Redundancy" section, choose " External (none)". Finally, check the ASMLib volume "ORCL:DATAVOL1" in the "Select Member Disks" section.

After verifying all values in this dialog are correct, click the " [OK]" button.

Disk Groups After creating the first ASM disk group, you will be returned to the initial dialog. Click the "Create" button again to create the second ASM disk group.

Create Disk Group

The "Create Disk Group" dialog should now show the final remaining ASMLib volume.

When creating the "Fast Recovery Area" disk group, use " FRA" for the "Disk Group Name". Inthe "Redundancy" section, choose " External (none)". Finally, check the ASMLib volume "ORCL:FRAVOL1" in the "Select Member Disks" section.

After verifying all values in this dialog are correct, click the " [OK]" button.

Disk Groups Exit the ASM Configuration Assistant by clicking the [Exit] button.
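Before moving on, you can optionally confirm that all three disk groups (CRS, RACDB_DATA, and FRA) are mounted by querying ASM with the asmcmd utility from the Grid home (a brief sketch, run as the grid user with the Grid Infrastructure environment set); each disk group should be reported in the MOUNTED state:

[grid@racnode1 ~]$ asmcmd lsdg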

24. Install Oracle Database 11g with Oracle Real Application Clusters

Perform the Oracle Database software installation from only one of the Oracle RAC nodes in the cluster (racnode1)! The Oracle Database software will be installed to both of the Oracle RAC nodes in the cluster by the Oracle Universal Installer using SSH.

Now that the grid infrastructure software is functional, you can install the Oracle Database software on one node in your cluster (racnode1) as the oracle user. OUI copies the binary files from this node to the other nodes in the cluster during the installation process.

For the purpose of this guide, we will forgo the "Create Database" option when installing the Oracle Database software. The clustered database will be created later in this guide using the Database Configuration Assistant (DBCA) after all installs have been completed.

Verify Terminal Shell Environment

Before starting the Oracle Universal Installer (OUI), log in to racnode1 as the owner of the Oracle Database software, which for this article is oracle. Next, if you are using a remote client to connect to the Oracle RAC node performing the installation (SSH or Telnet to racnode1 from a workstation configured with an X Server), verify your X11 display server settings, which were described in the section Logging In to a Remote System Using X Terminal.

Install Oracle Database 11g Release 2 Software

Perform the following tasks as the oracle user to install the Oracle Database software:

[oracle@racnode1 ~]$

id

uid=1101(oracle) gid=1000(oinstall) groups=1000(oinstall),1201(asmdba),1300(dba),1301(oper)

Build Your Own Oracle RAC 11g Cluster on Oracle Enterprise Linux and... http://www.oracle.com/technetwork/articles/hunter-rac11gr2-iscsi-3-088...

8 of 26 3/6/2012 9:55 AM

Page 66: 11gR2 RAC Openfiler Install Page1 2 3

[oracle@racnode1 ~]$

DISPLAY=<your local workstation>:0.0

[oracle@racnode1 ~]$

export DISPLAY

[oracle@racnode1 ~]$

cd /home/oracle/software/oracle/database

[oracle@racnode1 database]$

./runInstaller

Screen Name Response Screen Shot

Configure Security Updates

For the purpose of this article, un-check the security updates checkbox and click the [Next] button to continue. Acknowledge the warning dialog

indicating you have not provided an email address by clicking the [Yes]

button.

Installation Option Select " Install database software only".

Grid Options

Select the " Real Application Clusters database installation" radio button(default) and verify that both Oracle RAC nodes are checked in the "NodeName" window.

Next, click the [SSH Connectivity] button. Enter the "OS Password" for

the oracle user and click the [Setup] button. This will start the "SSH

Connectivity" configuration process:

After the SSH configuration process successfully completes, acknowledge the dialog box.

Finish off this screen by clicking the [Test] button to verify passwordless

SSH connectivity.

Product Languages Make the appropriate selection(s) for your environment.

Database Edition Select " Enterprise Edition".

Installation Location

Specify the Oracle base and Software location (Oracle_home) as follows: Oracle Base: /u01/app/oracle

Software Location: /u01/app/oracle/product/11.2.0/dbhome_1

Operating System Groups

Select the OS groups to be used for the SYSDBA and SYSOPER privileges: Database Administrator (OSDBA) Group: dba

Database Operator (OSOPER) Group: oper

Prerequisite Checks

The installer will run through a series of checks to determine if both Oracle RAC nodes meet the minimum requirements for installing and configuring the Oracle Database software.

Starting with 11g release 2 (11.2), if any checks fail, the installer (OUI) will create shell script programs, called fixup scripts, to resolve many incomplete system configuration requirements. If OUI detects an incomplete task that is marked "fixable", then you can easily fix the issue by generating the fixup script by clicking the [Fix & Check Again] button.

The fixup script is generated during installation. You will be prompted to run the script as root in a separate terminal session. When you run the script, it raises kernel values to required minimums, if necessary, and completes other operating system configuration tasks.

If all prerequisite checks pass (as was the case for my install), the OUI continues to the Summary screen.

Summary Click [Finish] to start the installation.

Install Product The installer performs the Oracle Database software installation process on both Oracle RAC nodes.

Execute Configuration scripts

After the installation completes, you will be prompted to run the /u01/app/oracle/product/11.2.0/dbhome_1/root.sh script on both Oracle RAC nodes. Open a new console window on both Oracle RAC nodes in the cluster (starting with the node you are performing the install from) as the root user account.

Run the root.sh script on all nodes in the RAC cluster:

[root@racnode1 ~]#

/u01/app/oracle/product/11.2.0/dbhome_1/root.sh

[root@racnode2 ~]#

/u01/app/oracle/product/11.2.0/dbhome_1/root.sh


Go back to OUI and acknowledge the "Execute Configuration scripts" dialog window.

Finish At the end of the installation, click the [Close] button to exit the OUI.

25. Install Oracle Database 11g Examples (formerly Companion)

Perform the Oracle Database 11g Examples software installation from only one of the Oracle RAC nodes in the cluster (racnode1)! The Oracle Database Examples software will be installed to both of the Oracle RAC nodes in the cluster by the Oracle Universal Installer using SSH.

Now that the Oracle Database 11g software is installed, you have the option to install the Oracle Database 11g Examples. Like the Oracle Database software install, the Examples software is only installed from one node in your cluster (racnode1) as the oracle user. OUI copies the binary files from this node to the other nodes in the cluster during the installation process.

Verify Terminal Shell Environment

Before starting the Oracle Universal Installer (OUI), log in to racnode1 as the owner of the Oracle Database software, which for this article is oracle. Next, if you are using a remote client to connect to the Oracle RAC node performing the installation (SSH or Telnet to racnode1 from a workstation configured with an X Server), verify your X11 display server settings, which were described in the section Logging In to a Remote System Using X Terminal.

Install Oracle Database 11g Release 2 Examples

Perform the following tasks as the oracle user to install the Oracle Database Examples:

[oracle@racnode1 ~]$

cd /home/oracle/software/oracle/examples

[oracle@racnode1 examples]$

./runInstaller

Screen Name Response Screen Shot

Installation Location

Specify the Oracle base and Software location (Oracle_home) as follows: Oracle Base: /u01/app/oracle

Software Location: /u01/app/oracle/product/11.2.0/dbhome_1

Prerequisite Checks

The installer will run through a series of checks to determine if both Oracle RAC nodes meet the minimum requirements for installing and configuring the Oracle Database Examples software.

Starting with 11g release 2 (11.2), if any checks fail, the installer (OUI) will create shell script programs, called fixup scripts, to resolve many incomplete system configuration requirements. If OUI detects an incomplete task that is marked "fixable", then you can easily fix the issue by generating the fixup script by clicking the [Fix & Check Again] button.

The fixup script is generated during installation. You will be prompted to run the script as root in a separate terminal session. When you run the script, it raises kernel values to required minimums, if necessary, and completes other operating system configuration tasks.

If all prerequisite checks pass (as was the case for my install), the OUI continues to the Summary screen.

Summary Click [Finish] to start the installation.

Install Product The installer performs the Oracle Database Examples software installation process on both Oracle RAC nodes.

Finish At the end of the installation, click the [Close] button to exit the OUI.

26. Create the Oracle Cluster Database

The database creation process should only be performed from one of the Oracle RAC nodes in the cluster (racnode1).

Use the Oracle Database Configuration Assistant (DBCA) to create the clustered database.

Before executing the DBCA, make certain that $ORACLE_HOME and $PATH are set appropriately for the $ORACLE_BASE/product/11.2.0/dbhome_1 environment. Setting environment variables in the login script for the oracle user account was covered in Section 13.
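A quick way to confirm that the oracle user's environment points at the correct database home before launching DBCA (a brief sketch; the values shown are the paths used in this guide):

[oracle@racnode1 ~]$ echo $ORACLE_HOME
/u01/app/oracle/product/11.2.0/dbhome_1

[oracle@racnode1 ~]$ which dbca
/u01/app/oracle/product/11.2.0/dbhome_1/bin/dbca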

You should also verify that all services we have installed up to this point (Oracle TNS listener, Oracle Clusterware processes, etc.) are running before attempting to start the clustered database creation process:

[oracle@racnode1 ~]$

su - grid -c "crs_stat -t -v"

Password:

*********

Name Type R/RA F/FT Target State Host

----------------------------------------------------------------------

ora.CRS.dg ora....up.type 0/5 0/ ONLINE ONLINE racnode1

ora.FRA.dg ora....up.type 0/5 0/ ONLINE ONLINE racnode1

ora....ER.lsnr ora....er.type 0/5 0/ ONLINE ONLINE racnode1


ora....N1.lsnr ora....er.type 0/5 0/0 ONLINE ONLINE racnode1

ora....DATA.dg ora....up.type 0/5 0/ ONLINE ONLINE racnode1

ora.asm ora.asm.type 0/5 0/ ONLINE ONLINE racnode1

ora.eons ora.eons.type 0/3 0/ ONLINE ONLINE racnode1

ora.gsd ora.gsd.type 0/5 0/ OFFLINE OFFLINE

ora....network ora....rk.type 0/5 0/ ONLINE ONLINE racnode1

ora.oc4j ora.oc4j.type 0/5 0/0 OFFLINE OFFLINE

ora.ons ora.ons.type 0/3 0/ ONLINE ONLINE racnode1

ora....SM1.asm application 0/5 0/0 ONLINE ONLINE racnode1

ora....E1.lsnr application 0/5 0/0 ONLINE ONLINE racnode1

ora....de1.gsd application 0/5 0/0 OFFLINE OFFLINE

ora....de1.ons application 0/3 0/0 ONLINE ONLINE racnode1

ora....de1.vip ora....t1.type 0/0 0/0 ONLINE ONLINE racnode1

ora....SM2.asm application 0/5 0/0 ONLINE ONLINE racnode2

ora....E2.lsnr application 0/5 0/0 ONLINE ONLINE racnode2

ora....de2.gsd application 0/5 0/0 OFFLINE OFFLINE

ora....de2.ons application 0/3 0/0 ONLINE ONLINE racnode2

ora....de2.vip ora....t1.type 0/0 0/0 ONLINE ONLINE racnode2

ora....ry.acfs ora....fs.type 0/5 0/ ONLINE ONLINE racnode1

ora.scan1.vip ora....ip.type 0/0 0/0 ONLINE ONLINE racnode1

Verify Terminal Shell Environment

Before starting the Database Configuration Assistant (DBCA), log in to racnode1 as the owner of the Oracle Database software which for this

article is oracle. Next, if you are using a remote client to connect to the Oracle RAC node performing the installation (SSH or Telnet to

racnode1 from a workstation configured with an X Server), verify your X11 display server settings which were described in the section,

Logging In to a Remote System Using X Terminal.

Create the Clustered Database

To start the database creation process, run the following as the oracle user:

[oracle@racnode1 ~]$

dbca &

Screen Name Response Screen Shot

Welcome Screen Select Oracle Real Application Clusters database.

Operations Select Create a Database.

Database Templates Select Custom Database.

Database Identification

Cluster database configuration. Configuration Type: Admin-Managed

Database naming. Global Database Name: racdb.idevelopment.info

SID Prefix: racdb

Note: I used idevelopment.info for the database domain. You may use any database domain. Keep in mind that this domain does not have to be a valid DNS domain.

Node Selection. Click the [Select All] button to select all servers: racnode1 and

racnode2.

Management Options Leave the default options here, which are to Configure Enterprise Manager / Configure Database Control for local management.

Database Credentials I selected to Use the Same Administrative Password for All Accounts. Enter the password (twice) and make sure the password does not start with a digit.

Database File Locations

Specify storage type and locations for database files. Storage Type: Automatic Storage Management (ASM)

Storage Locations: Use Oracle-Managed Files

Database Area: +RACDB_DATA

Specify ASMSNMP Password Specify the ASMSNMP password for the ASM instance.

Recovery Configuration

Check the option for Specify Fast Recovery Area.

For the Fast Recovery Area, click the [Browse] button and select the disk

group name +FRA.

My disk group has a size of about 33GB. When defining the Fast Recovery Area size, use the entire volume minus roughly 10% for overhead, which works out to about 30 GB. I used a Fast Recovery Area Size of 30 GB (30413 MB).

Database Content

I left all of the Database Components (and destination tablespaces) set to their default values, although it is perfectly OK to select the Sample Schemas. This option is available since we installed the Oracle Database 11g Examples.

Initialization Parameters Change any parameters for your environment. I left them all at their default settings.

Database Storage Change any parameters for your environment. I left them all at their default settings.


Creation Options

Keep the default option Create Database selected. I also always select to Generate Database Creation Scripts. Click Finish to start the database creation process. After acknowledging the database creation report and script generation dialog, the database creation will start.

Click OK on the "Summary" screen.

End of Database Creation At the end of the database creation, exit from the DBCA.

When the DBCA has completed, you will have a fully functional Oracle RAC cluster running!

Verify Clustered Database is Open

[oracle@racnode1 ~]$

su - grid -c "crsctl status resource -w \"TYPE co 'ora'\" -t"

Password:

*********

--------------------------------------------------------------------------------

NAME TARGET STATE SERVER STATE_DETAILS

--------------------------------------------------------------------------------

Local Resources

--------------------------------------------------------------------------------

ora.CRS.dg

ONLINE ONLINE racnode1

ONLINE ONLINE racnode2

ora.FRA.dg

ONLINE ONLINE racnode1

ONLINE ONLINE racnode2

ora.LISTENER.lsnr

ONLINE ONLINE racnode1

ONLINE ONLINE racnode2

ora.RACDB_DATA.dg

ONLINE ONLINE racnode1

ONLINE ONLINE racnode2

ora.asm

ONLINE ONLINE racnode1 Started

ONLINE ONLINE racnode2 Started

ora.eons

ONLINE ONLINE racnode1

ONLINE ONLINE racnode2

ora.gsd

OFFLINE OFFLINE racnode1

OFFLINE OFFLINE racnode2

ora.net1.network

ONLINE ONLINE racnode1

ONLINE ONLINE racnode2

ora.ons

ONLINE ONLINE racnode1

ONLINE ONLINE racnode2

ora.registry.acfs

ONLINE ONLINE racnode1

ONLINE ONLINE racnode2

--------------------------------------------------------------------------------

Cluster Resources

--------------------------------------------------------------------------------

ora.LISTENER_SCAN1.lsnr

1 ONLINE ONLINE racnode1

ora.oc4j

1 OFFLINE OFFLINE

ora.racdb.db

1 ONLINE ONLINE racnode1 Open

2 ONLINE ONLINE racnode2 Open

ora.racnode1.vip

1 ONLINE ONLINE racnode1

ora.racnode2.vip

1 ONLINE ONLINE racnode2

ora.scan1.vip

1 ONLINE ONLINE racnode1
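The same can be confirmed at the database level with SRVCTL (a brief sketch, run as the oracle user); the command should report both database instances running, one on each node:

[oracle@racnode1 ~]$ srvctl status database -d racdb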

Oracle Enterprise Manager

If you configured Oracle Enterprise Manager (Database Control), it can be used to view the database configuration and current status of the database.

The URL for this example is: https://racnode1:1158/em

[oracle@racnode1 ~]$

emctl status dbconsole

Oracle Enterprise Manager 11g Database Control Release 11.2.0.1.0

Copyright (c) 1996, 2009 Oracle Corporation. All rights reserved.

https://racnode1:1158/em/console/aboutApplication

Oracle Enterprise Manager 11g is running.

------------------------------------------------------------------

Logs are generated in directory /u01/app/oracle/product/11.2.0/dbhome_1/racnode1_racdb/sysman/log


Figure 18: Oracle Enterprise Manager - (Database Console)

27. Post Database Creation Tasks - (Optional)

This section offers several optional tasks that can be performed on your new Oracle 11g environment in order to enhance availability as well as database management.

Re-compile Invalid Objects

Run the utlrp.sql script to recompile all invalid PL/SQL packages now instead of when the packages are accessed for the first time. This step

is optional but recommended.

[oracle@racnode1 ~]$

sqlplus / as sysdba

SQL>

@?/rdbms/admin/utlrp.sql

Enabling Archive Logs in a RAC Environment

Whether a single instance or clustered database, Oracle tracks and logs all changes to database blocks in online redolog files. In an Oracle RAC environment, each instance will have its own set of online redolog files known as a thread. Each Oracle instance will use its group of online redologs in a circular manner. Once an online redolog fills, Oracle moves to the next one. If the database is in "Archive Log Mode", Oracle will make a copy of the online redo log before it gets reused. A thread must contain at least two online redologs (or online redolog groups). The same holds true for a single instance configuration. The single instance must contain at least two online redologs (or online redolog groups).

The size of an online redolog file is completely independent of another instance's redolog size. Although in most configurations the size is the same, it may be different depending on the workload and backup / recovery considerations for each node. It is also worth mentioning that each instance has exclusive write access to its own online redolog files. In a correctly configured RAC environment, however, each instance can read another instance's current online redolog file to perform instance recovery if that instance was terminated abnormally. It is therefore a requirement that online redo logs be located on a shared storage device (just like the database files).

As already mentioned, Oracle writes to its online redolog files in a circular manner. When the current online redolog fills, Oracle will switch to the next one. To facilitate media recovery, Oracle allows the DBA to put the database into "Archive Log Mode", which makes a copy of the online redolog after it fills (and before it gets reused). This is a process known as archiving.

The Database Configuration Assistant (DBCA) allows users to configure a new database to be in archive log mode; however, most DBAs opt to bypass this option during initial database creation. In cases like this where the database is in no archive log mode, it is a simple task to put the database into archive log mode. Note however that this will require a short database outage. From one of the nodes in the Oracle RAC configuration, use the following steps to put a RAC enabled database into archive log mode. For the purpose of this article, I will use the node racnode1, which runs the racdb1 instance:

1. Log in to one of the nodes (i.e., racnode1) as oracle and disable the cluster instance parameter by setting cluster_database to FALSE from the current instance:

[oracle@racnode1 ~]$

sqlplus / as sysdba

SQL>

alter system set cluster_database=false scope=spfile sid='racdb1';


System altered.

2. Shutdown all instances accessing the clustered database as the oracle user:

[oracle@racnode1 ~]$

srvctl stop database -d racdb

3. Using the local instance, MOUNT the database:

[oracle@racnode1 ~]$

sqlplus / as sysdba

SQL*Plus: Release 11.2.0.1.0 Production on Sat Nov 21 19:26:47 2009

Copyright (c) 1982, 2009, Oracle. All rights reserved.

Connected to an idle instance.

SQL>

startup mount

ORACLE instance started.

Total System Global Area 1653518336 bytes

Fixed Size 2213896 bytes

Variable Size 1073743864 bytes

Database Buffers 570425344 bytes

Redo Buffers 7135232 bytes

4. Enable archiving:

SQL>

alter database archivelog;

Database altered.

5. Re-enable support for clustering by modifying the instance parameter cluster_database to TRUE from the current instance:

SQL>

alter system set cluster_database=true scope=spfile sid='racdb1';

System altered.

6. Shutdown the local instance:

SQL>

shutdown immediate

ORA-01109: database not open

Database dismounted.

ORACLE instance shut down.

7. Bring all instances back up as the oracle account using srvctl:

[oracle@racnode1 ~]$

srvctl start database -d racdb

8. Log in to the local instance and verify Archive Log Mode is enabled:

[oracle@racnode1 ~]$

sqlplus / as sysdba

SQL*Plus: Release 11.2.0.1.0 Production on Sat Nov 21 19:33:38 2009

Copyright (c) 1982, 2009, Oracle. All rights reserved.

Connected to:

Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production

With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,

Data Mining and Real Application Testing options

SQL>

archive log list


Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     69
Next log sequence to archive   70
Current log sequence           70

After enabling Archive Log Mode, each instance in the RAC configuration can automatically archive redologs!

Download and Install Custom Oracle Database Scripts

DBAs rely on Oracle's data dictionary views and dynamic performance views in order to support and better manage their databases. Although these views provide a simple and easy mechanism to query critical information regarding the database, it helps to have a collection of accurate and readily available SQL scripts to query these views.

In this section you will download and install a collection of Oracle DBA scripts that can be used to manage many aspects of your database, including space management, performance, backups, security, and session management. The Oracle DBA scripts archive can be downloaded using the following link: http://www.idevelopment.info/data/Oracle/DBA_scripts/common.zip. As the oracle user account, download the common.zip archive to the $ORACLE_BASE directory of each node in the cluster. For the purpose of this example, the common.zip archive will be copied to /u01/app/oracle. Next, unzip the archive file to the $ORACLE_BASE directory.

For example, perform the following on both nodes in the Oracle RAC cluster as the oracle user account:

[oracle@racnode1 ~]$

mv common.zip /u01/app/oracle

[oracle@racnode1 ~]$

cd /u01/app/oracle

[oracle@racnode1 ~]$

unzip common.zip

The final step is to verify (or set) the appropriate environment variable for the current UNIX shell to ensure the Oracle SQL scripts can be run from within SQL*Plus while in any directory. For UNIX, verify the following environment variable is set and included in your login shell script:

ORACLE_PATH=$ORACLE_BASE/common/oracle/sql:.:$ORACLE_HOME/rdbms/admin
export ORACLE_PATH

Note: The ORACLE_PATH environment variable should already be set in the .bash_profile login script that was created in the section Create

Login Script for the oracle User Account.

Now that the Oracle DBA scripts have been unzipped and the UNIX environment variable ($ORACLE_PATH) has been set to the appropriate directory, you should be able to run any of the SQL scripts in $ORACLE_BASE/common/oracle/sql while logged into SQL*Plus. For example, to query tablespace information while logged into the Oracle database as a DBA user:

SQL>

@dba_tablespaces

Status Tablespace Name TS Type Ext. Mgt. Seg. Mgt. Tablespace Size Used (in bytes) Pct. Used

------- ----------------- ------------ ---------- --------- ---------------- ---------------- ---------

ONLINE SYSAUX PERMANENT LOCAL AUTO 629,145,600 511,967,232 81

ONLINE UNDOTBS1 UNDO LOCAL MANUAL 1,059,061,760 948,043,776 90

ONLINE USERS PERMANENT LOCAL AUTO 5,242,880 1,048,576 20

ONLINE SYSTEM PERMANENT LOCAL MANUAL 734,003,200 703,135,744 96

ONLINE EXAMPLE PERMANENT LOCAL AUTO 157,286,400 85,131,264 54

ONLINE UNDOTBS2 UNDO LOCAL MANUAL 209,715,200 20,840,448 10

ONLINE TEMP TEMPORARY LOCAL MANUAL 75,497,472 66,060,288 88

---------------- ---------------- ---------

avg 63

sum 2,869,952,512 2,336,227,328

7 rows selected.

To obtain a list of all available Oracle DBA scripts while logged into SQL*Plus, run the help.sql script:

SQL>

@help.sql

========================================

Automatic Shared Memory Management

========================================

asmm_components.sql

========================================

Automatic Storage Management

========================================


asm_alias.sql

asm_clients.sql

asm_diskgroups.sql

asm_disks.sql

asm_disks_perf.sql

asm_drop_files.sql

asm_files.sql

asm_files2.sql

asm_templates.sql

< --- SNIP --- >

perf_top_sql_by_buffer_gets.sql

perf_top_sql_by_disk_reads.sql

========================================

Workspace Manager

========================================

wm_create_workspace.sql

wm_disable_versioning.sql

wm_enable_versioning.sql

wm_freeze_workspace.sql

wm_get_workspace.sql

wm_goto_workspace.sql

wm_merge_workspace.sql

wm_refresh_workspace.sql

wm_remove_workspace.sql

wm_unfreeze_workspace.sql

wm_workspaces.sql

28. Create / Alter Tablespaces

When creating the clustered database, we left all tablespaces set to their default size. If you are using a large drive for the shared storage, you may want to make a sizable testing database.

Below are several optional SQL commands for modifying and creating all tablespaces for the test database. Please keep in mind that the database file names (OMF files) used in this example may differ from what the Oracle Database Configuration Assistant (DBCA) creates for your environment. When working through this section, substitute the data file names that were created in your environment where appropriate. The following query can be used to determine the file names for your environment:

SQL> select tablespace_name, file_name
  2  from dba_data_files
  3  union
  4  select tablespace_name, file_name
  5  from dba_temp_files;

TABLESPACE_NAME FILE_NAME

--------------- --------------------------------------------------

EXAMPLE +RACDB_DATA/racdb/datafile/example.263.703530435

SYSAUX +RACDB_DATA/racdb/datafile/sysaux.260.703530411

SYSTEM +RACDB_DATA/racdb/datafile/system.259.703530397

TEMP +RACDB_DATA/racdb/tempfile/temp.262.703530429

UNDOTBS1 +RACDB_DATA/racdb/datafile/undotbs1.261.703530423

UNDOTBS2 +RACDB_DATA/racdb/datafile/undotbs2.264.703530441

USERS +RACDB_DATA/racdb/datafile/users.265.703530447

7 rows selected.

[oracle@racnode1 ~]$

sqlplus "/ as sysdba"

SQL>

create user scott identified by tiger default tablespace users;

User created.

SQL>

grant dba, resource, connect to scott;

Grant succeeded.

SQL>

alter database datafile '+RACDB_DATA/racdb/datafile/users.265.703530447' resize 1024m;

Database altered.


SQL>

alter tablespace users add datafile '+RACDB_DATA' size 1024m autoextend off;

Tablespace altered.

SQL>

create tablespace indx datafile '+RACDB_DATA' size 1024m

2

autoextend on next 100m maxsize unlimited

3

extent management local autoallocate

4

segment space management auto;

Tablespace created.

SQL>

alter database datafile '+RACDB_DATA/racdb/datafile/system.259.703530397' resize 1024m;

Database altered.

SQL>

alter database datafile '+RACDB_DATA/racdb/datafile/sysaux.260.703530411' resize 1024m;

Database altered.

SQL>

alter database datafile '+RACDB_DATA/racdb/datafile/undotbs1.261.703530423' resize 1024m;

Database altered.

SQL>

alter database datafile '+RACDB_DATA/racdb/datafile/undotbs2.264.703530441' resize 1024m;

Database altered.

SQL>

alter database tempfile '+RACDB_DATA/racdb/tempfile/temp.262.703530429' resize 1024m;

Database altered.

Here is a snapshot of the tablespaces I have defined for my test database environment:

Status Tablespace Name TS Type Ext. Mgt. Seg. Mgt. Tablespace Size Used (in bytes) Pct. Used

------- ----------------- ------------ ---------- --------- ---------------- ---------------- ---------

ONLINE SYSAUX PERMANENT LOCAL AUTO 1,073,741,824 512,098,304 48

ONLINE UNDOTBS1 UNDO LOCAL MANUAL 1,073,741,824 948,043,776 88

ONLINE USERS PERMANENT LOCAL AUTO 2,147,483,648 2,097,152 0

ONLINE SYSTEM PERMANENT LOCAL MANUAL 1,073,741,824 703,201,280 65

ONLINE EXAMPLE PERMANENT LOCAL AUTO 157,286,400 85,131,264 54

ONLINE INDX PERMANENT LOCAL AUTO 1,073,741,824 1,048,576 0

ONLINE UNDOTBS2 UNDO LOCAL MANUAL 1,073,741,824 20,840,448 2

ONLINE TEMP TEMPORARY LOCAL MANUAL 1,073,741,824 66,060,288 6

---------------- ---------------- ---------

avg 33

sum 8,747,220,992 2,338,521,088

8 rows selected.
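The script that produced this report is not included in the article. The following is a rough equivalent that I put together (it is not the author's report script); it assumes the oracle user's environment (ORACLE_HOME, PATH, and ORACLE_SID pointing at the local instance, racdb1 on racnode1) is already set, derives the Used column from allocated segment space, and does not report temporary tablespace usage:

[oracle@racnode1 ~]$ sqlplus -s "/ as sysdba" <<'EOF'
-- Approximate the tablespace report above: size comes from the data files,
-- used space from allocated segments. TEMP is excluded because it has no
-- rows in DBA_DATA_FILES.
set linesize 200 pagesize 100
column tablespace_name format a15
select ts.tablespace_name,
       ts.contents                              ts_type,
       ts.extent_management                     ext_mgt,
       ts.segment_space_management              seg_mgt,
       df.bytes                                 tablespace_size,
       nvl(sg.bytes, 0)                         used_bytes,
       round(nvl(sg.bytes, 0) / df.bytes * 100) pct_used
  from dba_tablespaces ts
  join (select tablespace_name, sum(bytes) bytes
          from dba_data_files
         group by tablespace_name) df
    on df.tablespace_name = ts.tablespace_name
  left join (select tablespace_name, sum(bytes) bytes
               from dba_segments
              group by tablespace_name) sg
    on sg.tablespace_name = ts.tablespace_name
 order by ts.tablespace_name;
EOF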

29. Verify Oracle Grid Infrastructure and Database Configuration

The following Oracle Clusterware and Oracle RAC verification checks can be performed on any of the Oracle RAC nodes in the cluster. For the purpose of this article, I will only be performing checks from racnode1 as the oracle OS user.

Most of the checks described in this section use the Server Control Utility (SRVCTL) and can be run as either the oracle or grid OS user.

There are five node-level tasks defined for SRVCTL:

Adding and deleting node-level applications
Setting and un-setting the environment for node-level applications
Administering node applications (see the examples below)
Administering ASM instances
Starting and stopping a group of programs that includes virtual IP addresses, listeners, Oracle Notification Services, and Oracle Enterprise Manager agents (for maintenance purposes)
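A couple of these tasks can be illustrated with the commands below. They are not taken from this article; to the best of my knowledge the syntax is valid for 11g release 2 (the environment example is shown at the database level, and the TZ value is purely illustrative):

[oracle@racnode1 ~]$ srvctl setenv database -d racdb -t "TZ=US/Eastern"
[oracle@racnode1 ~]$ srvctl getenv database -d racdb
[oracle@racnode1 ~]$ srvctl unsetenv database -d racdb -t TZ
[oracle@racnode1 ~]$ srvctl stop nodeapps -n racnode1
[oracle@racnode1 ~]$ srvctl start nodeapps -n racnode1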

Oracle also provides the Oracle Clusterware Control (CRSCTL) utility. CRSCTL is an interface between you and Oracle Clusterware, parsing and calling Oracle Clusterware APIs for Oracle Clusterware objects.

Oracle Clusterware 11g release 2 (11.2) introduces cluster-aware commands with which you can perform check, start, and stop operations on the cluster. You can run these commands from any node in the cluster on another node in the cluster, or on all nodes in the cluster, depending on the operation.

You can use CRSCTL commands to perform several operations on Oracle Clusterware, such as:

Starting and stopping Oracle Clusterware resources
Enabling and disabling Oracle Clusterware daemons
Checking the health of the cluster
Managing resources that represent third-party applications
Integrating Intelligent Platform Management Interface (IPMI) with Oracle Clusterware to provide failure isolation support and to ensure cluster integrity
Debugging Oracle Clusterware components

For the purpose of this article (and this section), we will only make use of the "Checking the health of the cluster" operation, which uses the Clusterized (Cluster Aware) Command:

crsctl check cluster

Many subprograms and commands were deprecated in Oracle Clusterware 11g release 2 (11.2):

crs_stat

crs_register

crs_unregister

crs_start

crs_stop

crs_getperm

crs_profile

crs_relocate

crs_setperm

crsctl check crsd

crsctl check cssd

crsctl check evmd

crsctl debug log

crsctl set css votedisk

crsctl start resources

crsctl stop resources
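As an aside, the resource listing formerly provided by the deprecated crs_stat command can be produced with crsctl itself. The command below is not used elsewhere in this article; to the best of my knowledge it is the 11g release 2 replacement syntax:

[grid@racnode1 ~]$ crsctl status resource -t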

Check the Health of the Cluster - (Clusterized Command)

Run as the grid user.

[grid@racnode1 ~]$

crsctl check cluster

CRS-4537: Cluster Ready Services is online

CRS-4529: Cluster Synchronization Services is online

CRS-4533: Event Manager is online

All Oracle Instances - (Database Status)

[oracle@racnode1 ~]$

srvctl status database -d racdb

Instance racdb1 is running on node racnode1

Instance racdb2 is running on node racnode2

Single Oracle Instance - (Status of Specific Instance)

[oracle@racnode1 ~]$

srvctl status instance -d racdb -i racdb1

Instance racdb1 is running on node racnode1

Node Applications - (Status)

[oracle@racnode1 ~]$

srvctl status nodeapps

VIP racnode1-vip is enabled

VIP racnode1-vip is running on node: racnode1

VIP racnode2-vip is enabled

VIP racnode2-vip is running on node: racnode2

Network is enabled

Network is running on node: racnode1

Network is running on node: racnode2

GSD is disabled

GSD is not running on node: racnode1

GSD is not running on node: racnode2

ONS is enabled

ONS daemon is running on node: racnode1

ONS daemon is running on node: racnode2

eONS is enabled

eONS daemon is running on node: racnode1

eONS daemon is running on node: racnode2


Node Applications - (Configuration)

[oracle@racnode1 ~]$

srvctl config nodeapps

VIP exists.:racnode1

VIP exists.: /racnode1-vip/192.168.1.251/255.255.255.0/eth0

VIP exists.:racnode2

VIP exists.: /racnode2-vip/192.168.1.252/255.255.255.0/eth0

GSD exists.

ONS daemon exists. Local port 6100, remote port 6200

eONS daemon exists. Multicast port 24057, multicast IP address 234.194.43.168, listening port 2016

List all Configured Databases

[oracle@racnode1 ~]$

srvctl config database

racdb

Database - (Configuration)

[oracle@racnode1 ~]$

srvctl config database -d racdb -a

Database unique name: racdb

Database name: racdb

Oracle home: /u01/app/oracle/product/11.2.0/dbhome_1

Oracle user: oracle

Spfile: +RACDB_DATA/racdb/spfileracdb.ora

Domain: idevelopment.info

Start options: open

Stop options: immediate

Database role: PRIMARY

Management policy: AUTOMATIC

Server pools: racdb

Database instances: racdb1,racdb2

Disk Groups: RACDB_DATA,FRA

Services:

Database is enabled

Database is administrator managed

ASM - (Status)

[oracle@racnode1 ~]$

srvctl status asm

ASM is running on racnode1,racnode2

ASM - (Configuration)

$

srvctl config asm -a

ASM home: /u01/app/11.2.0/grid

ASM listener: LISTENER

ASM is enabled.

TNS listener - (Status)

[oracle@racnode1 ~]$

srvctl status listener

Listener LISTENER is enabled

Listener LISTENER is running on node(s): racnode1,racnode2

TNS listener - (Configuration)

[oracle@racnode1 ~]$

srvctl config listener -a

Name: LISTENER

Network: 1, Owner: grid

Home: <crs>

/u01/app/11.2.0/grid on node(s) racnode2,racnode1

End points: TCP:1521

SCAN - (Status)

[oracle@racnode1 ~]$

srvctl status scan

SCAN VIP scan1 is enabled

SCAN VIP scan1 is running on node racnode1

SCAN - (Configuration)


[oracle@racnode1 ~]$

srvctl config scan

SCAN name: racnode-cluster-scan, Network: 1/192.168.1.0/255.255.255.0/eth0

SCAN VIP name: scan1, IP: /racnode-cluster-scan/192.168.1.187

VIP - (Status of Specific Node)

[oracle@racnode1 ~]$

srvctl status vip -n racnode1

VIP racnode1-vip is enabled

VIP racnode1-vip is running on node: racnode1

[oracle@racnode1 ~]$

srvctl status vip -n racnode2

VIP racnode2-vip is enabled

VIP racnode2-vip is running on node: racnode2

VIP - (Configuration of Specific Node)

[oracle@racnode1 ~]$

srvctl config vip -n racnode1

VIP exists.:racnode1

VIP exists.: /racnode1-vip/192.168.1.251/255.255.255.0/eth0

[oracle@racnode1 ~]$

srvctl config vip -n racnode2

VIP exists.:racnode2

VIP exists.: /racnode2-vip/192.168.1.252/255.255.255.0/eth0

Configuration for Node Applications - (VIP, GSD, ONS, Listener)

[oracle@racnode1 ~]$

srvctl config nodeapps -a -g -s -l

-l option has been deprecated and will be ignored.

VIP exists.:racnode1

VIP exists.: /racnode1-vip/192.168.1.251/255.255.255.0/eth0

VIP exists.:racnode2

VIP exists.: /racnode2-vip/192.168.1.252/255.255.255.0/eth0

GSD exists.

ONS daemon exists. Local port 6100, remote port 6200

Name: LISTENER

Network: 1, Owner: grid

Home: <crs>

/u01/app/11.2.0/grid on node(s) racnode2,racnode1

End points: TCP:1521

Verifying Clock Synchronization across the Cluster Nodes

[oracle@racnode1 ~]$

cluvfy comp clocksync -verbose

Verifying Clock Synchronization across the cluster nodes

Checking if Clusterware is installed on all nodes...

Check of Clusterware install passed

Checking if CTSS Resource is running on all nodes...

Check: CTSS Resource running on all nodes

Node Name Status

------------------------------------ ------------------------

racnode1                              passed

Result: CTSS resource check passed

Querying CTSS for time offset on all nodes...

Result: Query of CTSS for time offset passed

Check CTSS state started...

Check: CTSS state

Node Name State

------------------------------------ ------------------------

racnode1 Active

CTSS is in Active state. Proceeding with check of clock time offsets on all nodes...

Reference Time Offset Limit: 1000.0 msecs

Check: Reference Time Offset

Node Name Time Offset Status

------------ ------------------------ ------------------------

racnode1     0.0                      passed

Time offset is within the specified limits on the following set of nodes:

"[racnode1]"

Result: Check of clock time offsets passed

Oracle Cluster Time Synchronization Services check passed

Verification of Clock Synchronization across the cluster nodes was successful.
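Oracle Clusterware can also report the CTSS mode directly. The command below is not shown in the article, but to the best of my knowledge it is valid 11g release 2 syntax; it indicates whether CTSS is running in Active mode (actively adjusting the clocks) or Observer mode (a vendor time service such as NTP was detected):

[grid@racnode1 ~]$ crsctl check ctss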

All running instances in the cluster - (SQL)

SELECT

inst_id

, instance_number inst_no

, instance_name inst_name

, parallel

, status

, database_status db_status

, active_state state

, host_name host

FROM gv$instance

ORDER BY inst_id;

INST_ID INST_NO INST_NAME PAR STATUS DB_STATUS STATE HOST

-------- -------- ---------- --- ------- ------------ --------- -------

1 1 racdb1 YES OPEN ACTIVE NORMAL racnode1

2 2 racdb2 YES OPEN ACTIVE NORMAL racnode2

All database files and the ASM disk group they reside in - (SQL)

select name from v$datafile

union

select member from v$logfile

union

select name from v$controlfile

union

select name from v$tempfile;

NAME

-------------------------------------------

+FRA/racdb/controlfile/current.256.703530389

+FRA/racdb/onlinelog/group_1.257.703530391

+FRA/racdb/onlinelog/group_2.258.703530393

+FRA/racdb/onlinelog/group_3.259.703533497

+FRA/racdb/onlinelog/group_4.260.703533499

+RACDB_DATA/racdb/controlfile/current.256.703530389

+RACDB_DATA/racdb/datafile/example.263.703530435

+RACDB_DATA/racdb/datafile/indx.270.703542993

+RACDB_DATA/racdb/datafile/sysaux.260.703530411

+RACDB_DATA/racdb/datafile/system.259.703530397

+RACDB_DATA/racdb/datafile/undotbs1.261.703530423

+RACDB_DATA/racdb/datafile/undotbs2.264.703530441

+RACDB_DATA/racdb/datafile/users.265.703530447

+RACDB_DATA/racdb/datafile/users.269.703542943

+RACDB_DATA/racdb/onlinelog/group_1.257.703530391

+RACDB_DATA/racdb/onlinelog/group_2.258.703530393

+RACDB_DATA/racdb/onlinelog/group_3.266.703533497

+RACDB_DATA/racdb/onlinelog/group_4.267.703533499

+RACDB_DATA/racdb/tempfile/temp.262.703530429

19 rows selected.

ASM Disk Volumes - (SQL)

SELECT path

FROM v$asm_disk;

PATH

----------------------------------

ORCL:CRSVOL1

ORCL:DATAVOL1

ORCL:FRAVOL1

30. Starting / Stopping the Cluster

At this point, everything has been installed and configured for Oracle RAC 11g release 2. Oracle Grid Infrastructure was installed by the grid user while the Oracle RAC software was installed by oracle. We also have a fully functional clustered database running named racdb.

After all of that hard work, you may ask, "OK, so how do I start and stop services?" If you have followed the instructions in this guide, all services (including Oracle Clusterware, ASM, network, SCAN, VIP, the Oracle Database, and so on) should start automatically on each reboot of the Linux nodes.
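If you want to confirm that the Oracle High Availability Services stack really is configured for automatic startup on a node, the following command can be used. It is not part of the original article, but to the best of my knowledge it is valid 11g release 2 syntax; run it as root from the Grid Infrastructure home:

[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/crsctl config crs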

There are times, however, when you might want to take down the Oracle services on a node for maintenance purposes and restart the Oracle Clusterware stack at a later time. Or you may find that Enterprise Manager is not running and need to start it. This section provides the commands necessary to stop and start the Oracle Clusterware stack on a local server (racnode1).

The following stop/start actions need to be performed as root.

Stopping the Oracle Clusterware Stack on the Local Server

Use the " crsctl stop cluster" command on racnode1 to stop the Oracle Clusterware stack:

[root@racnode1 ~]#

/u01/app/11.2.0/grid/bin/crsctl stop cluster

CRS-2673: Attempting to stop 'ora.crsd' on 'racnode1'

CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'racnode1'

CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'racnode1'

CRS-2673: Attempting to stop 'ora.CRS.dg' on 'racnode1'

CRS-2673: Attempting to stop 'ora.racdb.db' on 'racnode1'

CRS-2673: Attempting to stop 'ora.registry.acfs' on 'racnode1'

CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'racnode1'

CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'racnode1' succeeded

CRS-2673: Attempting to stop 'ora.racnode1.vip' on 'racnode1'

CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'racnode1' succeeded

CRS-2673: Attempting to stop 'ora.scan1.vip' on 'racnode1'

CRS-2677: Stop of 'ora.scan1.vip' on 'racnode1' succeeded

CRS-2672: Attempting to start 'ora.scan1.vip' on 'racnode2'

CRS-2677: Stop of 'ora.racnode1.vip' on 'racnode1' succeeded

CRS-2672: Attempting to start 'ora.racnode1.vip' on 'racnode2'

CRS-2677: Stop of 'ora.registry.acfs' on 'racnode1' succeeded

CRS-2676: Start of 'ora.racnode1.vip' on 'racnode2' succeeded

<-- Notice racnode1 VIP moved to racnode2

CRS-2676: Start of 'ora.scan1.vip' on 'racnode2' succeeded

<-- Notice SCAN moved to racnode2

CRS-2672: Attempting to start 'ora.LISTENER_SCAN1.lsnr' on 'racnode2'

CRS-2676: Start of 'ora.LISTENER_SCAN1.lsnr' on 'racnode2' succeeded

<-- Notice LISTENER_SCAN1 moved to racnode2

CRS-2677: Stop of 'ora.CRS.dg' on 'racnode1' succeeded

CRS-2677: Stop of 'ora.racdb.db' on 'racnode1' succeeded

CRS-2673: Attempting to stop 'ora.FRA.dg' on 'racnode1'

CRS-2673: Attempting to stop 'ora.RACDB_DATA.dg' on 'racnode1'

CRS-2677: Stop of 'ora.RACDB_DATA.dg' on 'racnode1' succeeded

CRS-2677: Stop of 'ora.FRA.dg' on 'racnode1' succeeded

CRS-2673: Attempting to stop 'ora.asm' on 'racnode1'

CRS-2677: Stop of 'ora.asm' on 'racnode1' succeeded

CRS-2673: Attempting to stop 'ora.ons' on 'racnode1'

CRS-2673: Attempting to stop 'ora.eons' on 'racnode1'

CRS-2677: Stop of 'ora.ons' on 'racnode1' succeeded

CRS-2673: Attempting to stop 'ora.net1.network' on 'racnode1'

CRS-2677: Stop of 'ora.net1.network' on 'racnode1' succeeded

CRS-2677: Stop of 'ora.eons' on 'racnode1' succeeded

CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'racnode1' has completed

CRS-2677: Stop of 'ora.crsd' on 'racnode1' succeeded

CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'racnode1'

CRS-2673: Attempting to stop 'ora.ctssd' on 'racnode1'

CRS-2673: Attempting to stop 'ora.evmd' on 'racnode1'

CRS-2673: Attempting to stop 'ora.asm' on 'racnode1'

CRS-2677: Stop of 'ora.cssdmonitor' on 'racnode1' succeeded

CRS-2677: Stop of 'ora.evmd' on 'racnode1' succeeded

CRS-2677: Stop of 'ora.ctssd' on 'racnode1' succeeded

CRS-2677: Stop of 'ora.asm' on 'racnode1' succeeded

CRS-2673: Attempting to stop 'ora.cssd' on 'racnode1'

CRS-2677: Stop of 'ora.cssd' on 'racnode1' succeeded

CRS-2673: Attempting to stop 'ora.diskmon' on 'racnode1'

CRS-2677: Stop of 'ora.diskmon' on 'racnode1' succeeded

Note: If any resources that Oracle Clusterware manages are still running after you run the "crsctl stop cluster" command, then the entire command fails. Use the -f option to unconditionally stop all resources and stop the Oracle Clusterware stack.

Also note that you can stop the Oracle Clusterware stack on all servers in the cluster by specifying -all. The following will bring down the Oracle Clusterware stack on both racnode1 and racnode2:

[root@racnode1 ~]#

/u01/app/11.2.0/grid/bin/crsctl stop cluster -all

Starting the Oracle Clusterware Stack on the Local Server

Use the " crsctl start cluster" command on racnode1 to start the Oracle Clusterware stack:

[root@racnode1 ~]#

/u01/app/11.2.0/grid/bin/crsctl start cluster

CRS-2672: Attempting to start 'ora.cssdmonitor' on 'racnode1'

CRS-2676: Start of 'ora.cssdmonitor' on 'racnode1' succeeded

CRS-2672: Attempting to start 'ora.cssd' on 'racnode1'

CRS-2672: Attempting to start 'ora.diskmon' on 'racnode1'


CRS-2676: Start of 'ora.diskmon' on 'racnode1' succeeded

CRS-2676: Start of 'ora.cssd' on 'racnode1' succeeded

CRS-2672: Attempting to start 'ora.ctssd' on 'racnode1'

CRS-2676: Start of 'ora.ctssd' on 'racnode1' succeeded

CRS-2672: Attempting to start 'ora.evmd' on 'racnode1'

CRS-2672: Attempting to start 'ora.asm' on 'racnode1'

CRS-2676: Start of 'ora.evmd' on 'racnode1' succeeded

CRS-2676: Start of 'ora.asm' on 'racnode1' succeeded

CRS-2672: Attempting to start 'ora.crsd' on 'racnode1'

CRS-2676: Start of 'ora.crsd' on 'racnode1' succeeded

Note: You can choose to start the Oracle Clusterware stack on all servers in the cluster by specifying -all:

[root@racnode1 ~]#

/u01/app/11.2.0/grid/bin/crsctl start cluster -all

You can also start the Oracle Clusterware stack on one or more named servers in the cluster by listing the servers separated by a space:

[root@racnode1 ~]#

/u01/app/11.2.0/grid/bin/crsctl start cluster -n racnode1 racnode2

Start/Stop All Instances with SRVCTL

Finally, you can start/stop all instances and associated services using the following:

[oracle@racnode1 ~]$

srvctl stop database -d racdb

[oracle@racnode1 ~]$

srvctl start database -d racdb
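Putting the pieces together, a typical single-node maintenance window might look like the sketch below. This sequence is my own composite of the commands shown above rather than a procedure prescribed by the article: stop the local database instance as oracle, stop the Clusterware stack as root, perform the maintenance, and then bring everything back up in reverse order.

[oracle@racnode1 ~]$ srvctl stop instance -d racdb -i racdb1
[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/crsctl stop cluster
    (... perform maintenance on racnode1 ...)
[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/crsctl start cluster
[oracle@racnode1 ~]$ srvctl start instance -d racdb -i racdb1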

31. Troubleshooting

Confirm the RAC Node Name is Not Listed in Loopback Address

Ensure that the node names (racnode1 or racnode2) are not included for the loopback address in the /etc/hosts file. If the machine name is listed in the loopback address entry as below:

127.0.0.1 racnode1 localhost.localdomain localhost

it will need to be removed as shown below:

127.0.0.1 localhost.localdomain localhost

If the RAC node name is listed for the loopback address, you will receive the following error during the RAC installation:

ORA-00603: ORACLE server session terminated by fatal error

or

ORA-29702: error occurred in Cluster Group Service operation
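A quick way to spot this condition on each node is a simple grep of the loopback entry. This check is my own suggestion rather than part of the original article; any output from it means a RAC node name is still listed on the 127.0.0.1 line and should be removed:

[root@racnode1 ~]# grep '^127\.0\.0\.1' /etc/hosts | grep -E 'racnode1|racnode2'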

Openfiler - Logical Volumes Not Active on Boot

One issue that I have run into several times occurs when using a USB drive connected to the Openfiler server. When the Openfiler server is rebooted, the system is able to recognize the USB drive; however, it is not able to load the logical volumes and writes the following message to /var/log/messages (also available through dmesg):

iSCSI Enterprise Target Software - version 0.4.14

iotype_init(91) register fileio

iotype_init(91) register blockio

iotype_init(91) register nullio

open_path(120) Can't open /dev/rac1/crs -2

fileio_attach(268) -2

open_path(120) Can't open /dev/rac1/asm1 -2

fileio_attach(268) -2

open_path(120) Can't open /dev/rac1/asm2 -2

fileio_attach(268) -2

open_path(120) Can't open /dev/rac1/asm3 -2

fileio_attach(268) -2

open_path(120) Can't open /dev/rac1/asm4 -2

fileio_attach(268) -2

Please note that I am not suggesting that this only occurs with USB drives connected to the Openfiler server. It may occur with other types of drives; however, I have only seen it with USB drives.

If you do receive this error, you should first check the status of all logical volumes using the lvscan command from the Openfiler server:

#

lvscan

  inactive            '/dev/rac1/crs' [2.00 GB] inherit
  inactive            '/dev/rac1/asm1' [115.94 GB] inherit
  inactive            '/dev/rac1/asm2' [115.94 GB] inherit
  inactive            '/dev/rac1/asm3' [115.94 GB] inherit
  inactive            '/dev/rac1/asm4' [115.94 GB] inherit

Notice that the status for each of the logical volumes is set to inactive (the status for each logical volume on a working system would be set to ACTIVE).

I currently know of two methods to get Openfiler to automatically load the logical volumes on reboot, both of which are described below.

Method 1

One of the first steps is to shut down both of the Oracle RAC nodes in the cluster (racnode1 and racnode2). Then, from the Openfiler server, manually set each of the logical volumes to ACTIVE after each reboot:

#

lvchange -a y /dev/rac1/crs

#

lvchange -a y /dev/rac1/asm1

#

lvchange -a y /dev/rac1/asm2

#

lvchange -a y /dev/rac1/asm3

#

lvchange -a y /dev/rac1/asm4

Another method to set the status to active for all logical volumes is to use the Volume Group change command as follows:

#

vgscan

Reading all physical volumes. This may take a while...

Found volume group "rac1" using metadata type lvm2

#

vgchange -ay

5 logical volume(s) in volume group "rac1" now active

After setting each of the logical volumes to active, use the lvscan command again to verify the status:

#


lvscan

  ACTIVE            '/dev/rac1/crs' [2.00 GB] inherit
  ACTIVE            '/dev/rac1/asm1' [115.94 GB] inherit
  ACTIVE            '/dev/rac1/asm2' [115.94 GB] inherit
  ACTIVE            '/dev/rac1/asm3' [115.94 GB] inherit
  ACTIVE            '/dev/rac1/asm4' [115.94 GB] inherit

As a final test, reboot the Openfiler server to ensure each of the logical volumes will be set to ACTIVE after the boot process. After you have verified that each of the logical volumes will be active on boot, check that the iSCSI target service is running:

#

service iscsi-target status

ietd (pid 2668) is running...

Finally, restart each of the Oracle RAC nodes in the cluster - ( racnode1 and racnode2).

Method 2

This method was kindly provided by Martin Jones. His workaround includes amending the /etc/rc.sysinit script to basically wait for the USB disk (/dev/sda in my example) to be detected. After making the changes to the /etc/rc.sysinit script (described below), verify the external drives are powered on and then reboot the Openfiler server.

The following is a small portion of the /etc/rc.sysinit script on the Openfiler server with the changes proposed by Martin (delimited by the "MJONES - Customisation" comment markers):

..............................................................

# LVM2 initialization, take 2

if [ -c /dev/mapper/control ]; then

if [ -x /sbin/multipath.static ]; then

modprobe dm-multipath >/dev/null 2>&1

/sbin/multipath.static -v 0

if [ -x /sbin/kpartx ]; then

/sbin/dmsetup ls --target multipath --exec "/sbin/kpartx -a"

fi

fi

if [ -x /sbin/dmraid ]; then

modprobe dm-mirror > /dev/null 2>&1

/sbin/dmraid -i -a y

fi

#-----

#----- MJONES - Customisation Start

#-----

# Check if /dev/sda is ready

while [ ! -e /dev/sda ]

do

echo "Device /dev/sda for first USB Drive is not yet ready."

echo "Waiting..."

sleep 5

done

echo "INFO - Device /dev/sda for first USB Drive is ready."

#-----

#----- MJONES - Customisation END

#-----

if [ -x /sbin/lvm.static ]; then

if /sbin/lvm.static vgscan > /dev/null 2>&1 ; then

action $"Setting up Logical Volume

Management:" /sbin/lvm.static vgscan --mknodes --ignorelockingfailure &&

/sbin/lvm.static vgchange -a y --ignorelockingfailure

fi

fi

fi

# Clean up SELinux labels


if [ -n "$SELINUX" ]; then

for file in /etc/mtab /etc/ld.so.cache ; do

[ -r $file ] && restorecon $file >/dev/null 2>&1

done

fi

..............................................................

Finally, restart each of the Oracle RAC nodes in the cluster - ( racnode1 and racnode2).

32. Conclusion

Oracle RAC 11g allows the DBA to configure a database solution with superior fault tolerance and load balancing. DBAs who want to become more familiar with the features and benefits of Oracle RAC 11g, however, will find that even a small RAC cluster can cost in the range of US$15,000 to US$20,000.

This article has hopefully given you an economical solution to setting up and configuring an inexpensive Oracle 11g release 2 RAC cluster using Oracle Enterprise Linux and iSCSI technology. The RAC solution presented in this article can be put together for around US$2,700 and will provide the DBA with a fully functional Oracle 11g release 2 RAC cluster. While the hardware used for this article should be stable enough for educational purposes, it should never be considered for a production environment.

33. Acknowledgements

An article of this magnitude and complexity is generally not the work of one person alone. Although I was able to author and successfully demonstrate the validity of the components that make up this configuration, there are several other individuals that deserve credit in making this article a success.

First, I would like to thank Bane Radulovic from the Server BDE Team at Oracle. Bane not only introduced me to Openfiler, but shared with me his experience and knowledge of the product and how to best utilize it for Oracle RAC. His research and hard work made the task of configuring Openfiler seamless. Bane was also involved with hardware recommendations and testing.

A special thanks to K Gopalakrishnan for his assistance in delivering the Oracle RAC 11g Overview section of this article. Much of the content in that section regarding the history of Oracle RAC is drawn from his very popular book, Oracle Database 10g Real Application Clusters Handbook. This book comes highly recommended for both DBAs and developers wanting to successfully implement Oracle RAC and fully understand how many of the advanced services like Cache Fusion and Global Resource Directory operate.

Lastly, I would like to express my appreciation to the following vendors for generously supplying the hardware for this article: Seagate, Avocent Corporation, and Intel.

Jeffrey M. Hunter [www.idevelopment.info] is an Oracle Certified Professional, Java Development Certified Professional, Author, and an Oracle ACE. Jeff currently works as a Senior Database Administrator for The DBA Zone, Inc., located in Pittsburgh, Pennsylvania.

His work includes advanced performance tuning, Java and PL/SQL programming, capacity planning, database security, and physical / logical database design in a UNIX, Linux, and Windows server environment. Jeff's other interests include mathematical encryption theory, programming language processors (compilers and interpreters) in Java and C, LDAP, writing web-based database administration tools, and of course Linux.

Jeff has been a Sr. Database Administrator and Software Engineer for over 16 years and maintains his own website at http://www.iDevelopment.info. Jeff graduated from Stanislaus State University in Turlock, California, with a Bachelor's degree in Computer Science.
