© Copyright IBM Corporation, 2014. All Rights Reserved. All trademarks or registered trademarks mentioned herein are the property of their respective holders
Oracle Database 12c Release 1 Enterprise Edition and Oracle Real Application Clusters on IBM Power
Systems with AIX 7.1
Ravisankar Shanmugam IBM Oracle International Competency Center
August 2014
Oracle Database 12c Release 1 Enterprise Edition and Oracle Real Application Clusters on IBM Power Systems with AIX7.1 http://www.ibm.com/support/techdocs © Copyright 2014, IBM Corporation
Table of contents
Abstract
Prerequisites
Introduction
    Oracle Database 12c Release 1 and features
        High Availability
        Performance
        Security
        Manageability
    About Oracle Real Application Clusters 12c Release 1
    About IBM Power Systems
    About IBM System Storage DS5300
Hardware requirements
    Oracle Real Application Clusters requirements
        Server CPU
        Server memory
        Network
        Shared storage
    High availability considerations
Software requirements
    Operating system
    Storage System Manager
    Subsystem Device Driver Path Control Module (SDDPCM)
    Oracle Database 12c Release 1
    Automatic Storage Management
Configuring the system environment
    Virtual IO Server (VIOS) and Logical Partitions (LPARs)
    Hardware Management Console (HMC)
    Installing the AIX operating system
Installing Oracle Grid Infrastructure 12.1.0.1
    Pre-installation tasks
        Checking security resource limit
        Configuring OS kernel parameters
        Configuring network parameters
        Network time protocol
        Creating users and groups
        Setting Oracle inventory location
        Setting up network files
        Configuring SSH on all cluster nodes
        Configuring shared disks for OCR, voting and database
        Running Cluster Verification Utility (CVU)
    Performing Oracle Clusterware installation and Automatic Storage Management installation
Performing post-installation tasks
Installing Oracle Database 12c Release 1 (12.1.0.1)
    Pre-installation tasks
        Running Cluster Verification Utility
        Preparing Oracle home and its path
    Performing database installation
    Creating Oracle Real Application Cluster Database – Container database
    Creating Oracle Real Application Cluster Database – Pluggable database
    Post-installation tasks
        Connecting to the pluggable database
    Monitoring and managing database using Enterprise Manager Database Express
Summary
References
    Oracle documentation
    IBM documentation
    IBM and Oracle Web sites
About the author
Appendix A: List of common abbreviations and acronyms
Trademarks and special notices
Abstract The purpose of this white paper is to assist those who are installing the newly introduced Oracle Database 12c Release 1 with Oracle Real Application Clusters (RAC) on IBM Power Systems™ servers with AIX® 7.1. The information provided here is based on experience with test environments at the IBM lab and on documentation available from the IBM and Oracle web sites.
This paper does not cover the installation of AIX, the Virtual I/O Server (VIOS), or the IBM Systems Storage™ management software used to configure the IBM System Storage server in our tests.
Prerequisites Good knowledge of Oracle Database
Knowledge of AIX, Virtual I/O Server and IBM System Storage.
Introduction This white paper discusses the steps necessary to prepare AIX nodes with shared disks for installing
Oracle Grid Infrastructure and Oracle Database 12c Release 1 with RAC.
An implementation of Oracle Real Application Clusters consists of three main steps:
1. Planning the hardware for Oracle Real Application Clusters implementation
2. Configuring the servers and storage disk systems
3. Installing and configuring Oracle Grid Infrastructure 12c and Oracle Database 12c Release 1 with
RAC
Oracle Database 12c Release 1 and features
Oracle Database 12c is a next-generation database introduced in 2013 that includes many new
features over previous versions. The letter “c” in 12c stands for “cloud”. Oracle
Database 12c is based on a multitenant architecture that simplifies the process of consolidating
databases into a private cloud model, and it allows each database plugged into the multitenant
architecture to look and feel like a standard database to applications.
Oracle Multitenant is a new architecture that allows a container database (CDB) to hold zero, one or
many pluggable databases (PDBs). Every container database has a root container, which holds a
collection of schemas, schema objects and non-schema objects to which all pluggable databases
belong. The root container stores the system metadata required to manage pluggable databases.
A pluggable database is a user-created set of schemas, objects and related structures that appears
to applications as a logically separate database. Every pluggable database in a container database is
owned by the SYS user, regardless of which user created it. A PDB is a self-contained unit that
stores application data, and it can be moved and plugged into another container database. Each PDB
must be uniquely named and has its own unique service name.
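The multitenant workflow described above can be sketched in SQL. This is an illustrative sketch only: the PDB name, admin user, password and file paths below are placeholders, not values from this paper's test environment.

```sql
-- Create a new pluggable database by cloning the seed PDB
-- (hrpdb, pdbadmin and all paths are illustrative placeholders).
CREATE PLUGGABLE DATABASE hrpdb
  ADMIN USER pdbadmin IDENTIFIED BY Welcome1
  FILE_NAME_CONVERT = ('/u01/oradata/cdb1/pdbseed/',
                       '/u01/oradata/cdb1/hrpdb/');

-- Open the new PDB and list the containers the CDB now holds.
ALTER PLUGGABLE DATABASE hrpdb OPEN;
SELECT con_id, name, open_mode FROM v$pdbs;

-- Because a PDB is self-contained, it can be unplugged to an XML
-- manifest and later plugged into another container database.
ALTER PLUGGABLE DATABASE hrpdb CLOSE IMMEDIATE;
ALTER PLUGGABLE DATABASE hrpdb UNPLUG INTO '/u01/stage/hrpdb.xml';
```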
For more information on Oracle Database 12c, refer to the “Oracle Database 12c Concepts Guide”.
There are many new features in Oracle Database 12c Release 1 over the previous release of
Oracle Database. They are described in the Oracle Database New Features Guide 12c Release 1,
available on the Oracle web site. Some of the main highlights are as follows:
High Availability
Many enhancements and new features for high availability are introduced in Oracle Database 12c.
The following are some of them:
Masking of unplanned and planned outages, with in-flight work replayed on recoverable errors when the replay is successful.
Transaction Guard – returns the outcome of the last in-flight transaction after an outage, so that the
application or user knows the exact status of the transaction. This avoids logical data corruption
caused by duplicate transactions.
Application Continuity – allows applications to continue through outages that result in
recoverable errors. This improves the user experience in both planned and unplanned
outage situations. Application Continuity is available for the Oracle JDBC-Thin driver, Oracle
Universal Connection Pool and Oracle WebLogic Server.
Global Data Services (GDS) is a new scalability and availability feature which offers better
performance, scalability and availability of application workloads running on replicated databases.
Data Guard enhancement – Active Data Guard Far Sync, which enables zero data loss protection to
the database when the replication site is located far away from the primary location.
Other Data Guard enhancements include Active Data Guard Real-Time Cascading, Data Guard
Fast Sync, rolling upgrades and more.
RMAN enhancements – fine-grained table recovery from backup, cross-platform backup and
restore, and backup and recovery of a specific pluggable database.
Flex ASM – eliminates the one-to-one mapping of an Oracle database instance to an ASM
instance. When an ASM instance fails, Flex ASM protects the database instances that were
relying on it by letting them reconnect to a surviving ASM instance running on a different
server.
Performance
Advanced network compression allows compression of the data transmitted over Oracle Net Services
between client and server. This reduces network latency in local area network
(LAN) and wide area network (WAN) environments and increases performance.
Very large network buffers – allow larger packets in the Oracle Net layer, which increases
application throughput and makes better use of the available network bandwidth.
Asynchronous I/O control for Direct NFS Client – controls the number of asynchronous I/O
requests that can be queued by an Oracle process when Direct NFS Client is enabled. This can be
tuned to the capacity of the NFS server to handle the clients' I/O requests; otherwise, too many
outstanding I/O requests could overwhelm the NFS server.
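As an illustrative sketch, this queue depth is controlled by the DNFS_BATCH_SIZE initialization parameter; the value shown below is an arbitrary example, not a recommendation from this paper's tests.

```sql
-- Limit the number of asynchronous I/O requests an Oracle process may
-- queue when Direct NFS Client is enabled; 2048 is an illustrative
-- value to be tuned to the NFS server's capacity.
ALTER SYSTEM SET dnfs_batch_size = 2048 SCOPE=SPFILE SID='*';
-- The parameter is not dynamic; restart the instance for it to apply.
```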
Tracking I/O outliers – Oracle Database maintains a V$IO_OUTLIER view that shows I/O operations
that took a long time to complete. DBAs can monitor this view to watch I/O latencies and
investigate the behavior of the I/O components and the operating system.
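A minimal way to inspect these outliers is simply to query the view; the exact column list varies by release, so describe the view first if a specific projection is needed.

```sql
-- List recent I/O operations the database flagged as taking
-- unusually long to complete (the view holds a bounded number
-- of recent outlier records).
SELECT * FROM v$io_outlier;
```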
Multi-process multi-threaded Oracle – provides a new execution model that uses system and
processor resources more efficiently.
Support of IPv6 based IP addresses for RAC client connectivity – Cluster nodes can be configured to
use either IPv4 or IPv6 based IP addresses for the Virtual IPs on the public network while more than
one public network can be defined for the cluster.
Security
New data encryption, hashing and redaction – prevents vital, sensitive data columns from
being displayed.
Database auditing is enabled by default. The new auditing features enable audit policies to be created and
enabled in the database without database downtime.
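The policy-based shape of this feature can be sketched as follows; the policy name and the audited schema objects are illustrative placeholders.

```sql
-- Create a named audit policy (hr_updates_pol and hr.employees are
-- illustrative) and enable it online, with no database downtime.
CREATE AUDIT POLICY hr_updates_pol
  ACTIONS UPDATE ON hr.employees, DELETE ON hr.employees;
AUDIT POLICY hr_updates_pol;

-- Captured events can then be read from the unified audit trail.
SELECT event_timestamp, dbusername, action_name
  FROM unified_audit_trail;
```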
Code-based security – enables roles to be associated with PL/SQL packages, functions and
procedures.
Separation of duty in database administration – the database provides new roles for different
database administration activities such as backup and recovery, high availability and key
management. This avoids granting the powerful SYSDBA privilege to users who perform common
day-to-day operations. The new SYSBACKUP privilege allows RMAN users to connect to the target
database without requiring the SYSDBA privilege.
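For example, a dedicated backup operator can be given only SYSBACKUP; the user name and password below are illustrative placeholders.

```sql
-- Create a backup operator and grant only the narrower
-- SYSBACKUP administrative privilege, not SYSDBA.
CREATE USER bkpadm IDENTIFIED BY Welcome1;
GRANT sysbackup TO bkpadm;
```

The operator can then connect from RMAN or SQL*Plus with AS SYSBACKUP instead of AS SYSDBA (quoting of the connect string varies by shell and tool).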
There are many more enhancements in the database security area, such as encryption key management
and protecting the database server from outside access using restricted service registration for Oracle
RAC. For more information on the security-related features and enhancements, refer to the Oracle
Database 12c New Features Guide.
Manageability
Oracle Enterprise Manager Database Express – embedded inside the database and auto-
configured at installation time. It uses only 20 MB of disk space and consumes no system
resources when it is not invoked. It can manage both single-instance and RAC databases.
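Because EM Database Express is served from inside the database, its port can be checked and set from SQL*Plus; 5500 below is the conventional default, shown as an illustration.

```sql
-- Check, and if necessary set, the HTTPS port that EM Database
-- Express listens on (5500 is an illustrative, conventional value).
SELECT dbms_xdb_config.gethttpsport() FROM dual;
EXEC dbms_xdb_config.sethttpsport(5500);
-- EM Express is then reached at https://<db-host>:5500/em
```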
Automatic Workload Repository (AWR) – enhanced to include reports from Real-Time SQL
Monitoring, Real-Time Automatic Database Diagnostics Monitor (ADDM) and database operations
monitoring. Real-time monitoring works at both the CDB and PDB level.
Oracle Enterprise Manager Database Express includes Performance Hub, which provides a single-
pane-of-glass view of database performance, with access to ADDM, SQL Tuning, Real-Time SQL
Monitoring and Active Session History (ASH) analytics in one place.
Database Replay and stress testing – Database Replay captures actual production workloads
and can run the exact same workload on a test environment, simulating production for
performance tuning or for validating a new environment.
The new Enterprise Manager GUI can monitor and manage the full lifecycle of Oracle Clusterware
resources. It also introduces procedures to scale up or scale down Oracle Clusterware and Oracle
Real Application Clusters easily.
Complete deinstallation and deconfiguration of Oracle RAC databases and listeners can be done by
Database Configuration Assistant (DBCA), Database Upgrade Assistant (DBUA), and Net
Configuration Assistant (NETCA).
Oracle Universal Installer can help clean up a failed Oracle Clusterware installation by advising you
what to clean up and what to change before reattempting the installation. The
installation also provides several recovery points, so once the problem has been fixed you can
roll back to the closest recovery point and retry.
A database administrator can limit an Oracle instance's CPU usage by setting the CPU_COUNT
initialization parameter. This is called Instance Caging.
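A sketch of enabling Instance Caging follows; the CPU cap and the choice of resource manager plan are illustrative, not values from this paper's tests.

```sql
-- Instance Caging requires an active resource manager plan for the
-- CPU cap to be enforced; DEFAULT_PLAN is used here as an example.
ALTER SYSTEM SET resource_manager_plan = 'DEFAULT_PLAN' SCOPE=BOTH;
-- Cap this instance at 4 CPUs (an illustrative value).
ALTER SYSTEM SET cpu_count = 4 SCOPE=BOTH;
```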
E-mail notifications can be sent to users on any job activities.
For more manageability features, refer to the Oracle white paper “Manageability with Oracle Database
12c”.
About Oracle Real Application Clusters 12c Release 1
Oracle Real Application Clusters (RAC) is an option of Oracle Database that allows a database to be
installed across multiple servers (RAC nodes). RAC uses the shared-disk method of clustering
databases: Oracle processes running on each node access the same data residing on shared disk
storage. Oracle RAC uses a “shared everything” data architecture. This means that all data storage
needs to be globally available to all RAC nodes. First introduced with Oracle Database 9i, RAC provides
high availability and flexible scalability. If one of the clustered nodes fails, Oracle continues processing
on the other nodes. If additional capacity is needed, nodes can be dynamically added without taking
down the cluster.
An Oracle RAC database requires Oracle Clusterware to be installed on the nodes prior to installing
the Oracle RAC database binaries. Oracle Grid Infrastructure (which includes Oracle Clusterware and
Oracle ASM / Oracle Cloud File System (CloudFS)) and Oracle Database with RAC constitute the
Oracle RAC stack.
Based on the deployment requirements, Oracle Clusterware 12c can be installed in one of three
ways:
a) Standard Cluster (configured like pre-12c versions)
b) Oracle Flex Cluster (combines traditional tightly coupled nodes and loosely coupled nodes in a
single cluster)
c) Application Cluster (used for non-database applications)
Oracle ASM 12c introduces a new feature called Oracle Flex ASM. It increases database instance
availability and reduces Oracle ASM-related resource consumption. Some of the additional features of
Flex ASM are as follows:
The maximum number of ASM disk groups is increased to 511.
Oracle Flex ASM supports larger LUN sizes for Oracle Database 12c clients (increased to
32 PB).
An ASM disk in an ASM disk group can be renamed.
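The disk-rename feature can be sketched as below; the disk group and disk names are illustrative, and our understanding is that the disk group must be mounted in restricted mode for the rename to be allowed.

```sql
-- Rename an ASM disk (disk group DATA and the disk names are
-- illustrative placeholders). Mount the disk group in restricted
-- mode on one instance, rename, then remount normally.
ALTER DISKGROUP data DISMOUNT;
ALTER DISKGROUP data MOUNT RESTRICTED;
ALTER DISKGROUP data RENAME DISK 'DATA_0001' TO 'DATA_0001_NEW';
ALTER DISKGROUP data DISMOUNT;
ALTER DISKGROUP data MOUNT;
```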
Oracle RAC supports the newly introduced Oracle Multitenant option, which is used for
consolidation of databases, provisioning, easy software upgrades, and more. This new architecture
allows a multitenant container database to hold many pluggable databases.
In general, there are many benefits to using Oracle RAC, particularly for business continuity, high
availability, scalability, agility, workload management, standardized deployment and system
management.
For more information on Oracle RAC features, refer to the “Oracle Database New Features Guide 12c
Release 1 (12.1)”.
About IBM Power Systems
For testing the installation of Oracle Database 12c RAC on AIX 7.1, IBM Power® 750 Express server was
used.
The Power 750 Express server (8233-E8B) supports up to four 6-core 3.3 GHz or four 8-core 3.0 GHz,
3.3 GHz, and 3.55 GHz POWER7 processor cards in a rack-mount drawer configuration. The POWER7
processors in this server are 64-bit, 6-core and 8-core modules that are packaged on dedicated
processor cards with 4 MB of L3 cache per core and 256 KB of L2 cache per core. The Power 750
Express server supports a maximum of 32 DDR3 DIMM slots, eight DIMM slots per processor card.
Memory features (two memory DIMMs per feature) supported are 8 GB, 16 GB, and 32 GB, and run at
speeds of 1066 MHz. A system with four processor cards installed has a maximum memory capacity of
512 GB. The Power 750 Express server provides great I/O expandability. For example, with 12X-attached
I/O drawers, the system can have up to 50 PCI-X slots or up to 41 PCIe slots. This combination can
provide over 100 LAN ports or up to 576 disk drives (over 240 TB of disk storage). Extensive quantities of
externally attached storage and tape drives and libraries can also be attached.
The Power 750 Express system unit without I/O drawers can contain a maximum of either eight small
form factor (SFF) SAS disks or eight SFF SAS solid state drives (SSDs), providing up to 2.4 TB of disk
storage. All disks and SSDs are direct dock and hot pluggable. The eight SAS bays can be split into two
sets of four bays for additional AIX or Linux® configuration flexibility.
The system unit also contains a slim line DVD-RAM, plus a half-high media bay for an optional tape drive
or removable disk drive. Also available in the Power 750 Express system unit is a choice of quad-gigabit
or dual-10 Gb integrated host Ethernet adapters. These native ports can be selected at the time of initial
order. Virtualization of these integrated Ethernet adapters is supported.
Simultaneous multithreading, which enables up to four threads to be executed at the same time on a
single processor core, is a standard feature of POWER7™ technology. The
POWER7 processor also includes VSX (Vector Scalar Extension) accelerator, which helps to improve the
performance of high performance computing (HPC) workloads.
All Power Systems servers can utilize logical partitioning (LPAR) technology implemented using System p
virtualization technologies, the operating system (OS), and a hardware management console (HMC).
Dynamic LPAR allows clients to dynamically allocate many system resources to application partitions
without rebooting, allowing many dedicated processor partitions on a fully configured system. In addition
to the base virtualization that is standard on every System p server, two optional virtualization features
are available on the server: PowerVM™ Standard Edition (formerly Advanced POWER Virtualization
(APV) Standard) and PowerVM Enterprise Edition (formerly APV Enterprise).
PowerVM Standard Edition includes IBM Micro-Partitioning® and Virtual I/O Server (VIOS) capabilities.
Micro-partitions can be defined as small as 1/10th of a processor and be changed in increments as small
as 1/100th of a processor. VIOS allows for the sharing of disk and optical devices and communications
and Fibre Channel adapters. Also included is support for Multiple Shared Processor Pools and Shared
Dedicated Capacity. PowerVM Enterprise Edition includes all features of PowerVM Standard Edition plus
Live Partition Mobility. It is designed to allow a partition to be relocated from one server to another while
end users are using applications running in the partition.
Active Memory Expansion enablement is an optional feature of POWER7 processor-based servers.
Active Memory Expansion is an innovative POWER7 technology that allows the effective maximum
memory capacity to be much larger than the true physical memory maximum. Compression and
decompression of memory content can allow memory expansion up to 100%. This can allow a partition to
do significantly more work or support more users with the same physical amount of memory. Similarly, it
can allow a server to run more partitions and do more work for the same physical amount of memory.
The IBM POWER7 based server supports N_Port ID Virtualization (NPIV). The NPIV allows multiple
logical partitions to access independent physical storage through the same physical Fibre Channel
adapter. This adapter is attached to a Virtual I/O Server partition, which acts only as a pass-through
managing the data transfer through the POWER Hypervisor.
The POWER Hypervisor provides virtual I/O adapters such as virtual SCSI, virtual Ethernet, virtual Fibre
Channel and virtual console.
Figure 1: IBM Power System p750 Express (one module)
About IBM System Storage DS5300
For testing the installation of Oracle Database 12c RAC on AIX 7.1, IBM Systems Storage DS5300 was
used to provide SAN Storage for both Oracle Grid Infrastructure and Oracle RAC database.
The IBM System Storage DS5300 is part of the DS5000 series of storage servers, offering flexible,
high-performance storage for medium and large enterprises. It is an innovative storage system
designed to provide high availability and high performance in a small, space-saving, power-efficient
modular package.
It provides balanced performance of up to 700,000 input/output operations per second (IOPS) and
6,400 MB/s of throughput, well suited for virtualization and consolidation. It scales up to 448 drives
(1.34 PB) using the EXP5000 enclosure and up to 480 drives (1.44 PB) of high-density storage with
the EXP5060 enclosure.
It allows intermixing of drive types (Fibre Channel, FC-SAS, FC-SAS nearline, self-encrypting drives
(SEDs), SATA and solid-state drives (SSDs)) and host interfaces (Fibre Channel and iSCSI) for
investment protection and cost-effective tiered storage.
The IBM DS5000 series supports high availability with hot-swappable components and non-disruptive firmware
upgrades. DS5000 storage systems are equally adept at supporting transactional applications such as
databases and online transaction processing (OLTP), throughput-intensive applications such as high-
performance computing (HPC) and rich media, and concurrent workloads for consolidation and
virtualization. With relentless performance and superior reliability and availability, DS5000 series storage
systems can support the most demanding service level agreements (SLAs) for the most common
operating systems, including Microsoft Windows, UNIX, Linux and Apple Macintosh. When requirements
change, you can add or replace host interfaces, grow capacity, add cache and reconfigure the system on
the fly—ensuring that it will keep pace with your growing organization.
The IBM System Storage DS5000 series has dual controllers, with two field-replaceable host interface
cards (HICs) per controller. The current release supports four-port 8 Gbps Fibre Channel HICs or
dual-port 10 Gbps iSCSI HICs (sixteen total host ports). Sixteen 4 Gbps Fibre Channel drive
interfaces support up to 28 EXP5000 drive enclosures or 8 EXP5060 drive enclosures. The system
supports 448 Fibre Channel, FC-SAS/SATA, FC-SAS nearline, SSD or SED drives when using
twenty-eight EXP5000 drive enclosures, or 480 SATA drives when using eight EXP5060 drive
enclosures. Cache options are 4 GB, 8 GB, 16 GB or 32 GB per controller, up to a total of 64 GB,
with dedicated cache mirroring channels and persistent cache backup in the event of a power outage.
IBM Systems Storage DS5000 supports RAID 6, 5, 3, 10, 1, 0.
For more information on the IBM System Storage DS5000 series, go to www.ibm.com/systems/storage/disk/ds5000/index.html
Figure 2: IBM Systems Storage DS5300
The DS5300 can help simplify IT infrastructure by supporting a wide range of servers, both mainframe
and open systems, including IBM Power Systems, System x®, System z®, and non-IBM platforms
running UNIX®, Linux®, and Windows® operating systems.
Hardware requirements
Oracle Real Application Clusters requirements
An Oracle Real Application Clusters Database environment consists of the following components:
Cluster nodes - 2 to n nodes or hosts, running Oracle Database server(s)
Network interconnect - a private network used for cluster communications and Cache Fusion
Shared storage - used to hold database’s system and data files and accessed by the cluster nodes
Production network - used by clients and application servers to access the database
Figure 3 below is the high level architecture diagram for Oracle Real Application Clusters:
[Figure: clients and application servers reach the cluster nodes over the production network; the nodes communicate over a high-speed interconnect, sharing their caches through Oracle Cache Fusion, and access shared storage through a SAN fabric.]
Figure 3: Oracle Real Application Clusters architecture
Server CPU
There should be enough server CPU capacity, in terms of speed and number of CPUs, to handle the
workload. Generally speaking, there should be enough CPU capacity to keep average CPU
utilization around 65%. This allows the server to absorb peak activity more easily.
Server memory
The amount of memory an Oracle Database requires depends on the activity level of users and the nature of the application or workload. As a rule of thumb, the server should have more memory than it actually uses: when memory is insufficient, performance degrades severely, and heavy disk swapping and node eviction may occur.
It is important to select servers that are available with the amount of memory required, plus room for growth. Memory utilization should stay below roughly 75-85% of physical memory in a production environment; otherwise, heavy disk swapping may occur and server performance will decrease.
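A minimal check of that 75-85% guideline might look like the following (a sketch; the helper name, inputs, and the 85% threshold are taken from the guideline above, not from any Oracle or IBM tool):

```shell
# Hypothetical check: warn when memory utilization exceeds 85% of
# physical memory.
#   $1 = used memory (MB), $2 = physical memory (MB)
mem_check() {
    pct=$(( $1 * 100 / $2 ))
    if [ "$pct" -le 85 ]; then
        echo "OK: ${pct}% of physical memory in use"
    else
        echo "WARNING: ${pct}% in use - risk of heavy swapping"
    fi
}

mem_check 6144 8192     # 75% of 8 GB in use -> OK
```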
Network
Servers in an Oracle Real Application Clusters configuration need at least two separate networks: a public network and a private network. The public network is used for communication between the clients or application servers and the database. The private network, also referred to as the “network interconnect”, is used for cluster node communication: it carries the cluster heartbeat and is used by Oracle Real Application Clusters for Cache Fusion. At least two physical, logical, or virtual Ethernet adapters are needed on each of the RAC nodes: one for the public network and one for the private RAC interconnect.
IBM Power Systems POWER7 processor-based systems offer the Integrated Virtual Ethernet adapter (IVE), which provides integrated high-speed Ethernet adapter ports (Host Ethernet Adapter (HEA)) with hardware-assisted virtualization capabilities. IVE also includes special hardware features that provide logical Ethernet adapters, otherwise called Logical Host Ethernet Adapters (LHEA). These LHEA adapters can be assigned directly to LPARs without being configured through the POWER Hypervisor (PHYP). This eliminates the need to move packets between Logical Partitions (LPARs) through a Shared Ethernet Adapter (SEA). IVE replaces the need for virtual Ethernet and SEA in a Virtual I/O Server (VIOS) environment, and LPARs can share the HEA ports with improved performance.
InfiniBand networking for the Oracle RAC interconnect has been supported on AIX since Oracle Database 11g.
Shared storage
Shared storage for Oracle Real Application Clusters devices can be logical drives or LUNs from a
Storage Area Network (SAN) controller or a Network File System (NFS) from a supported Network
Attached Storage (NAS) device. IBM sells NAS products such as IBM System Storage N3000,
N3700, N5000 and N7000.
For SAN products, IBM offers enterprise, mid-range, and entry-level disk systems. Check to ensure that the System Storage product you are using is supported for Oracle Real Application Clusters implementations. Third-party storage subsystems can also be used with AIX servers; refer to the third-party documentation or contact a third-party representative for product certification information.
To use a shared file system for Oracle Clusterware, Oracle ASM, and Oracle RAC Database files, the file system must comply with the following requirements:
1. A certified cluster file system is required.
This is a file system that will be accessed (read and write) by all members in a cluster at the same
time, with all cluster members having the same view of the file system. It allows all nodes in a cluster
to access a device concurrently via the standard file system interface. IBM General Parallel File
System (GPFS version 3.2.1.8 or later) is an example. GPFS can be used for placing shared Oracle
Home for Grid Infrastructure software files (Clusterware and ASM) and database software and
database files.
2. Oracle Automatic Storage Management (ASM)
ASM is a simplified database storage management and provisioning system that provides file system and volume management capabilities in Oracle. It allows database administrators (DBAs) to reference disk groups instead of the individual disks and files, which ASM manages internally. ASM is installed as part of Oracle Grid Infrastructure and is designed to handle the Oracle Cluster Registry (OCR), voting files, database files, control files, and log files.
Oracle Automatic Storage Management Cluster File System (Oracle ACFS) was introduced in Oracle 11g Release 2. It is a multi-platform, scalable file system which supports database and application files such as executables, database trace files, database alert logs, application reports, BFILEs, and configuration files. However, it cannot be used to store the Oracle Cluster Registry (OCR), voting files, or the Oracle Clusterware binaries.
In the lab test for Oracle Database 12c Release 1 with RAC on AIX, the voting files, OCR disks, and database files were created in ASM disk groups. The Oracle Grid Infrastructure software files and Oracle Database software files were placed in local file systems (JFS2).
The following table shows the storage options supported for the OCR and voting files, the Oracle Clusterware, RAC, and Database binaries, and database files.
Storage Option | OCR and Voting Files | Oracle Clusterware Binaries | Oracle RAC Binaries | Oracle Database Files | Oracle Recovery Files
Oracle Automatic Storage Management (Oracle ASM) (Note: loopback devices are not supported for use with Oracle ASM) | Yes | No | No | Yes | Yes
Oracle Automatic Storage Management Cluster File System (Oracle ACFS) | No | No | Yes for running Oracle Database on Hub Nodes for Oracle Database 11g Release 2 (11.2) and later; No for running Oracle Database on Leaf Nodes | Yes (Oracle Database 12c Release 1 (12.1) and later) | Yes (Oracle Database 12c Release 1 (12.1) and later)
Local file system | No | Yes | Yes | No | No
IBM General Parallel File System (GPFS) (Note: you cannot place ASM files on GPFS; Oracle does not recommend the use of GPFS for voting disks if HACMP is used) | Yes | Yes | Yes | Yes | Yes
Network file system (NFS) on a certified network-attached storage (NAS) filer (Note: requires a certified NAS device; Oracle does not recommend the use of NFS for voting disks if HACMP is used) | Yes | Yes | Yes | Yes | Yes
Shared disk partitions (block devices or raw devices), including raw logical volumes managed by HACMP | Not supported by OUI or ASMCA, but supported by the software; they can be added or removed after installation | No | No | Not supported by OUI or ASMCA, but supported by the software; they can be added or removed after installation | No
High availability considerations
High availability (HA) is a key requirement for many clients. From a hardware configuration standpoint, this means eliminating single points of failure. IBM products are designed for high availability, with standard features such as redundant power supplies and cooling fans, hot-swappable components, and so on.
For high availability environments, one of the following suggestions (1 or 2, together with 3) should also be taken into consideration when selecting the server:
1. With Oracle Database 11g Release 2, Patch Set 1, Oracle introduced the “Integrated Redundant Interconnect Usage” feature, which provides highly available IP (HAIP) network functionality for the Oracle interconnect. The Redundant Interconnect Usage feature does not operate on the network interfaces directly. Instead, it is based on a multiple-listening-endpoint architecture, in which a highly available virtual IP is assigned to each private network (up to four interfaces).
Oracle RAC and Oracle ASM instances use these redundant interface addresses to ensure highly available, load-balanced interface communication between nodes.
2. Configure additional network interfaces and use AIX EtherChannel to combine at least two network interfaces for each of the two Oracle RAC networks. This reduces downtime due to a network interface card (NIC) failure or network component failure. Multi-port adapters provide network path redundancy; however, the adapter itself remains a single point of failure. In this case, redundant multi-port adapters are the best solution. In addition, the NICs used for an EtherChannel should be on separate physical network cards and connected to different network switches.
3. There should be at least two Fibre Channel host bus adapters (HBAs) on each node to provide redundant I/O paths to the storage subsystem. Multi-port HBAs and a Storage Area Network (SAN) with redundant components, such as SAN switches and cabling, will provide higher availability of the servers.
Finally, an Oracle RAC implementation requires at least two network interfaces; however, four network interfaces are recommended: two for the public network and two for the private network. The more redundancy in the hardware architecture and software components, the less downtime databases and applications will experience.
Software requirements
In an Oracle Real Application Clusters implementation, there are additional AIX filesets that need to be installed on the cluster nodes. A few of them are optional and may not be required on the RAC nodes. If the optional filesets are missing, the Cluster Verification tool may report a failure, which can be ignored.
Operating system
IBM AIX 7.1 (7100-02-03-1334) is the operating system used for the tests described in this paper.
Storage System Manager
The IBM System Storage DS Storage Manager software is used to configure, manage, and troubleshoot
the DS5000 storage subsystems. It is used primarily to configure RAID arrays and logical drives, assign
logical drives to hosts, replace and rebuild failed disk drives, expand the size of the arrays and logical
drives, and convert from one RAID level to another. It allows for troubleshooting and management tasks,
such as checking the status of the storage server components, updating the firmware of the RAID
controllers, and managing the storage server. Finally, it offers advanced functions, such as IBM
FlashCopy®, Volume Copy, Enhanced Remote Mirroring, and Disk Encryption.
The Storage Manager software is packaged as several components for different uses. The Storage Manager Client (SMclient) component provides the graphical user interface (GUI) for managing storage systems through the Ethernet network or from the host computer. The command-line interface (SMcli) is
also packaged with the SMclient and provides command-line access to perform all management
functions.
The IBM System Storage DS Storage Manager software is available for AIX, Linux, HP-UX and
Microsoft® Windows (32-bit and 64-bit version) operating systems.
Subsystem Device Driver Path Control Module (SDDPCM)
SDDPCM is a loadable path control module designed to support multipath configuration environments on the IBM System Storage Enterprise Storage Server, the IBM System Storage SAN Volume Controller, and the IBM System Storage DS family. When the supported devices are configured as MPIO-capable devices, SDDPCM is loaded and becomes part of the AIX MPIO FCP (Fibre Channel Protocol) device driver. The AIX MPIO device driver with the SDDPCM module enhances data availability and I/O load balancing.
SDDPCM manages paths to provide high availability and load balancing of storage I/O and automatic path-failover protection, preventing a single point of failure caused by a host bus adapter (HBA), Fibre Channel cable, or host-interface adapter on supported storage.
To download the SDDPCM driver for AIX and its documentation, follow this link:
http://www-01.ibm.com/support/docview.wss?uid=ssg1S4000201#DS6K
Oracle Database 12c Release 1
Oracle Database 12c Release 1 (12.1.0.1) is the current release of Oracle’s database product. For both RAC and non-RAC installations, AIX should be running in 64-bit kernel mode only. For the latest
information on Oracle product certifications, please visit My Oracle Support web site:
https://support.oracle.com/CSP/ui/flash.html
The Oracle Database software can be downloaded from the Oracle Technology Network (OTN) or the
DVDs can be requested from Oracle Support. Oracle RAC is a separately licensed option of Oracle
Enterprise and Standard Editions. For additional information on pricing, please refer to:
http://www.oracle.com/corporate/pricing/technology-price-list.pdf
Automatic Storage Management
Automatic Storage Management (ASM) provides volume and cluster file system management where the
I/O subsystem is directly handled by the Oracle kernel. Oracle ASM will have each LUN mapped as a
disk. Disks are then grouped together into disk groups. Each disk group can be segmented in one or
more fail groups. ASM automatically performs load balancing in parallel across all available disk drives to
prevent hot spots and maximize performance.
Starting with Oracle Database 11g Release 2, Oracle Clusterware OCR and voting disk files can be
stored in Oracle ASM disk group.
Starting with Oracle Database 11g Release 2, ASM became a complete storage management solution for both Oracle Database and non-database files, with many extended functions for storing not only database files but also binary files, report files, trace files, alert logs, and other application data files.
ASM Cluster File System (ACFS) extends ASM by providing a cluster file system that scales to a large number of nodes and uses extent-based storage allocation for improved performance. ACFS can be exported to remote clients through NFS and CIFS.
ASM Dynamic Volume Manager (DVM), ASM FS Snapshot, ASM Intelligent Data Placement, ASM
Storage Management Configuration Assistant (ASMCA), ASM File Access Control and ASMCMD are
some of the extended functions of ASM.
For more information on new ASM features, refer to the Oracle document “Oracle Database New Features Guide 12c Release 1 (12.1)”.
Configuring the system environment
Virtual IO Server (VIOS) and Logical Partitions (LPARs)
The VIOS is part of the IBM Power Systems server machine’s Advanced Power Virtualization hardware
feature. VIOS allows sharing of physical resources between LPARs including virtual SCSI, virtual
networking and virtual fibre channel adapters (N-Port ID Virtualization). This allows more efficient
utilization of physical resources through sharing between LPARs and facilitates server consolidation. This
allows a single machine to run multiple operating system (AIX or Linux on POWER) images at the same
time while each is isolated from the others.
VIOS is itself a logical partition (LPAR) on an IBM Power Systems machine, with its own operating system and command line for managing hardware resources, and it is controlled by the Hardware Management Console (HMC). VIOS owns hardware adapters such as SCSI disks, Fibre Channel disks, Ethernet adapters, or CD/DVD optical devices, but allows other LPARs to access them, or a part of them, so the devices can be shared. The LPAR with the resources is called the VIO Server, and the other LPARs using them are called VIO Clients. For example, instead of each LPAR (VIO client) having a physical SCSI or Fibre Channel adapter and a SCSI or SAN disk to boot from, the clients can share one disk or have separate disks on the VIO Server. This reduces costs by eliminating adapters, adapter slots, and disks. This client-server access is implemented over memory within the machine for speed. There are two ways the VIO Clients can access local and SAN disks: one is the virtual SCSI method, and the other is PowerVM N-Port ID Virtualization (NPIV). With NPIV, a physical Fibre Channel adapter can be virtualized and shared among multiple VIO clients. NPIV provides the capability to assign multiple unique worldwide port names (WWPNs) to a physical Fibre Channel adapter. The VIO clients can then access independent SAN storage through the same physical Fibre Channel adapter by assigning the SAN LUNs to the WWPNs of the virtual Fibre Channel adapters in the SAN storage server.
In the lab test for installing Oracle Database 12c Release 1 with RAC, three LPARs were used as RAC nodes. One virtual Ethernet adapter was used for public connectivity, and two Gigabit Ethernet ports were used for the interconnect between LPARs.
Each LPAR has four virtual Fibre Channel adapters and is connected to a switched SAN for storage from the IBM System Storage DS5300. The disks for installing AIX, the Oracle Database RAC binaries, and the Oracle Database on each LPAR were supplied directly from the IBM System Storage DS5300.
The following diagram shows the setup of LPARs for the Oracle Database 12c Release 1 with RAC
environment.
Figure 5: Three-node Oracle RAC setup in the test lab.
Hardware Management Console (HMC)
The HMC is based on the IBM System x hardware architecture running dedicated applications to provide
partition management for one or more IBM Power Systems servers called managed systems. The HMC is
used for creating and maintaining a multiple partition (LPAR) environment. The HMC acts as a virtual
operating system session terminal for each partition. It is used for detecting, reporting, and storing changes in hardware conditions; managing system power on/off for the server; acting as a service focal point; upgrading the Power Systems server microcode; and activating Capacity on Demand.
The major functions that the HMC provides are server hardware management and virtualization
management. Using the HMC, dynamic LPAR operations can be done to change resource allocation
such as processor, memory, physical I/O and virtual I/O for a specific partition.
The HMC can be accessed through a web-based client using web browsers and command line
interfaces. The web interface uses a tree-style navigation model that provides hierarchical views of
system resources and tasks using drill-down and launch-in-context techniques to enable direct access to
hardware resources and task management capabilities. The HMC manages advanced PowerVM
virtualization features of POWER5, POWER6, and POWER7 servers.
Three LPARs were created on the p750 servers for the three-node Oracle Database with RAC testing described in this paper.
Installing the AIX operating system
Installation of the operating systems will not be discussed in detail in this paper.
AIX 7.1 TL02 (7100-02-03-1334) was installed on the Oracle Database 12c RAC nodes.
Prior to Oracle software installation, please make note of the following:
Be sure to create sufficient swap space for the amount of physical memory on your servers (use the “lsps -a” command). Oracle suggests that the amount of swap space be equal to the amount of RAM if the RAM size is between 4 GB and 16 GB. For more than 16 GB of RAM, the swap space should be 16 GB.
To list the real memory and the available swap space, use the following commands:
# /usr/sbin/lsattr -E -l sys0 -a realmem
# lsps -s
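Oracle's swap rule of thumb above can be expressed as a small helper (a sketch, not an Oracle-supplied tool; RAM sizes below 4 GB are outside the quoted rule and are treated here the same as the 4-16 GB band):

```shell
# Hypothetical helper implementing the rule of thumb quoted above:
# swap = RAM for RAM between 4 GB and 16 GB; 16 GB for anything larger.
#   $1 = physical memory in GB; prints the recommended swap size in GB
recommended_swap_gb() {
    if [ "$1" -le 16 ]; then
        echo "$1"
    else
        echo 16
    fi
}

recommended_swap_gb 8     # prints 8
recommended_swap_gb 64    # prints 16
```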
To find out the disk size, use:
# bootinfo -s hdisk<#>
The above command displays the size in MB.
It is strongly recommended that every node of the cluster have an identical hardware configuration,
although it is not mandatory.
Oracle publishes the following as a minimal set of hardware requirements for each server.
Hardware | Minimum | Recommended
Physical memory | 4 GB | Depends on applications and usage
CPU | 1 CPU per node | 2 or more CPUs per node
Interconnect network | 1 Gb | 2 teamed 1 Gb, or 10 Gb
External network | 100 Mb | 1 Gb
Backup network | 100 Mb | 1 Gb
HBA or NIC for SAN, iSCSI, or NAS | 1 Gb HBA | Dual-path storage vendor certified HBA
Oracle home for Database | 5.8 GB | 15 GB or more
Oracle Grid home (includes the binary files for Oracle Clusterware and Oracle ASM and their associated log files and future patches) | 8 GB | 100 GB
Oracle Base | 3.5 GB | 10 GB
Temporary disk space (/tmp) | 6 GB | 6 GB or more (and less than 2 TB)
Table 1: Hardware requirements
Prior to installing the Oracle products, you should install the required OS packages; otherwise, the Oracle Universal Installer will present the list of packages that you need to install before you can proceed.
Check that the following filesets are installed on the AIX node or Logical Partition (LPAR) using the command “lslpp -l <fileset name(s)>”:
openssh & openssl (These filesets are included in the AIX Base Operating System (BOS) media
or can be downloaded from http://sourceforge.net/projects/openssh-aix/ ).
bos.adt.base
bos.adt.lib
bos.adt.libm
bos.perf.libperfstat
bos.perf.perfstat
bos.perf.proctools
rsct.basic.rte
rsct.compat.clients.rte
xlC.aix61.rte 11.1.0.4 (or later)
xlC.rte.11.1.0.4 or later
bash-4.2-1 (This can be downloaded from
http://www-03.ibm.com/systems/power/software/aix/linux/toolbox/date.html)
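The bos and rsct checks above can be scripted. The sketch below compares the required fileset names against a saved listing of installed filesets, one name per line; the helper name and input format are illustrative, not from this paper:

```shell
# Hypothetical pre-install check: report required filesets that do not
# appear in a file listing the installed fileset names (one per line).
required_filesets="bos.adt.base bos.adt.lib bos.adt.libm
bos.perf.libperfstat bos.perf.perfstat bos.perf.proctools
rsct.basic.rte rsct.compat.clients.rte"

missing_filesets() {    # $1 = file containing installed fileset names
    for fs in $required_filesets; do
        # -x matches the whole line, -F treats the name literally
        grep -qxF "$fs" "$1" || echo "missing: $fs"
    done
}
```

On a RAC node, the input file could be produced from the colon-separated output of "lslpp -Lc" reduced to its fileset-name field (exact field position assumed from typical AIX usage).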
Note: Oracle Database 12c needs the “bash” tool installed before starting the Oracle Grid Infrastructure installation. Otherwise, an error window will pop up with the message “INS-06001 Failed to perform operation due to internal driver error”. This error is also covered on the “My Oracle Support” site under Document ID 1270620.1.
You must have the IBM XL C/C++ runtime filesets for the installation, but you do not need the C/C++ compilers. No license is required for the XL C/C++ runtime filesets.
Fixes for Authorized Problem Analysis Reports (APARs) IV45072 and IV45073 should be installed on AIX 7.1 TL02 SP03. At the time of writing, the fixes for these APARs (IV45072 and IV45073) were not publicly available for download from the IBM Fix Central web site. Customers need to open a Problem Management Record (PMR), either by calling IBM support (1-800-IBM-SERV) or through https://www-947.ibm.com/support/servicerequest/Home.action, to get interim fixes (iFixes) for those APARs.
Make sure these APARs are applied. If you are using the latest AIX TL and SP, such as 7100-03-01, the above issues are already fixed; however, the Cluster Verification tool will still report these specific APARs as not applied. Such failures can safely be ignored.
The Program Temporary Fix (PTF) files can be downloaded from IBM Fix Central: http://www-933.ibm.com/support/fixcentral/.
Oracle 12c products support the 64-bit AIX kernel only and do not support the 32-bit kernel.
To check the AIX kernel mode, execute the following command and make sure it shows “64”:
# getconf KERNEL_BITMODE
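Wrapped as a guard for a pre-install script, the check might look like this (a sketch; the function name is an assumption, and the value tested is the output of "getconf KERNEL_BITMODE" on AIX):

```shell
# Hypothetical guard: refuse to continue unless the kernel is 64-bit.
#   $1 = value reported by `getconf KERNEL_BITMODE`
check_bitmode() {
    if [ "$1" = "64" ]; then
        echo "64-bit kernel: OK"
    else
        echo "ERROR: Oracle 12c requires the 64-bit AIX kernel" >&2
        return 1
    fi
}

# On an AIX node this would be: check_bitmode "$(getconf KERNEL_BITMODE)"
check_bitmode 64    # prints "64-bit kernel: OK"
```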
Installing Oracle Grid Infrastructure 12.1.0.1
Before installing Oracle Grid Infrastructure 12.1.0.1 on the nodes, there are several important tasks that need to be done on all of the cluster nodes.
Pre-Installation tasks
Checking security resource limits
To prevent denial-of-service attacks, Oracle recommends changing the default security resource limits.
Edit the file “/etc/security/login.cfg” to change auth_type under the “usw” stanza from “STD_AUTH” to “PAM_AUTH”.
Make sure the line “login session required pam_aix” is present in the /etc/pam.conf file, and add the following lines:
sshd auth required pam_aix
sshd account required pam_aix
sshd password required pam_aix
sshd session required pam_aix
Set the OpenSSH parameter LoginGraceTime to “0” using the following steps:
Open the file /etc/ssh/sshd_config in an editor, locate the line “#LoginGraceTime 2m”, remove the “#”, and change “2m” to “0”, so that the line reads
“LoginGraceTime 0”.
Save and exit from the file.
Restart the SSH daemon:
o /usr/bin/stopsrc -s sshd
o /usr/bin/startsrc -s sshd
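The manual edit above can also be done non-interactively. A sed sketch follows (it writes the edited copy to stdout rather than editing in place, since in-place editing is not portable; the pattern assumes the default commented line is present):

```shell
# Hypothetical non-interactive version of the LoginGraceTime edit.
#   $1 = path to an sshd_config file; edited copy goes to stdout
set_login_grace() {
    sed 's/^#LoginGraceTime 2m$/LoginGraceTime 0/' "$1"
}

# Usage on a node (after taking a backup):
#   cp /etc/ssh/sshd_config /etc/ssh/sshd_config.bak
#   set_login_grace /etc/ssh/sshd_config.bak > /etc/ssh/sshd_config
```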
Configuring OS kernel parameters
Make sure “aio_maxreqs” is set to 65536 (64K) by issuing “ioo -a | grep aio_maxreqs”. If it is not 64K, set it with “ioo -o aio_maxreqs=65536”.
Keep the default values of the virtual memory parameters in AIX 7.1 and make sure the following values are set, using the command “vmo -aF”:
minperm%=3
maxperm%=90
maxclient%=90
lru_file_repage=0 # This parameter cannot be changed in AIX7.1
strict_maxclient=1
strict_maxperm=0
If these values are not set, use the command “vmo -p -o <parameter>=<new value>”.
Edit the /etc/security/limits file to contain the following lines, where -1 represents “unlimited”:
fsize = -1
core = 2097151
cpu = -1
data = -1
rss = -1
stack = -1
nofiles = -1
Verify that the maximum number of processes allowed for each user is set to 2048 or greater, using the command
“# smitty chgsys” or /usr/sbin/chdev -l sys0 -a maxuproc='2048'
Oracle recommends increasing the space allocated for the ARG/ENV list to 128 or greater. The size is specified as a number of 4 KB blocks. The default value for “ncargs” in AIX 7.1 is 256; it can range between 128 and 1024.
/usr/sbin/chdev -l sys0 -a ncargs='1024'
Configuring network parameters
The recommended values for the network parameters in AIX when running the Oracle Database are:
ipqmaxlen=512
rfc1323=1
sb_max=4194304
tcp_recvspace=65536
tcp_sendspace=65536
udp_recvspace=655360
udp_sendspace=65536
Find the current values of the above parameters using the “no -a” command. To set the values, first determine whether the system is running in compatibility mode by using the command “# lsattr -E -l sys0 -a pre520tune”.
If the output is “pre520tune disable Pre-520 tuning compatibility mode True”, the system is not running in compatibility mode. In that case you can set the values using the following commands:
For setting ipqmaxlen use “/usr/sbin/no -r -o ipqmaxlen=512”
For setting other parameters use “/usr/sbin/no -p -o parameter=value”
If the system is running in compatibility mode, then the output is similar to the following, showing that
the value of the pre520tune attribute is enabled: “pre520tune enable Pre-520 tuning compatibility
mode True”.
For compatibility mode, set the values by using the command “no -o parameter_name=value” and add the following entries to the /etc/rc.net file.
if [ -f /usr/sbin/no ] ; then
/usr/sbin/no -o udp_sendspace=65536
/usr/sbin/no -o udp_recvspace=655360
/usr/sbin/no -o tcp_sendspace=65536
/usr/sbin/no -o tcp_recvspace=65536
/usr/sbin/no -o rfc1323=1
/usr/sbin/no -o sb_max=4194304
/usr/sbin/no -o ipqmaxlen=512
fi
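For a system that is not in compatibility mode, the same recommended settings can be driven from one loop. This sketch only prints the commands so they can be reviewed before applying (ipqmaxlen takes "no -r", the rest take "no -p", per the text above; the helper name is an assumption):

```shell
# Hypothetical helper: emit the `no` commands for the recommended values.
# Pipe the output to sh (as root, on AIX) to actually apply them.
emit_no_cmds() {
    for kv in ipqmaxlen=512 rfc1323=1 sb_max=4194304 \
              tcp_recvspace=65536 tcp_sendspace=65536 \
              udp_recvspace=655360 udp_sendspace=65536; do
        case $kv in
            ipqmaxlen=*) echo "/usr/sbin/no -r -o $kv" ;;  # reboot-persistent
            *)           echo "/usr/sbin/no -p -o $kv" ;;
        esac
    done
}

emit_no_cmds
```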
If you are planning to run heavy parallel queries or use high node counts, broadening the UDP and TCP ephemeral port ranges beyond the AIX defaults is recommended. The default ephemeral port range can be found in the output of the following command:
# /usr/sbin/no -a | fgrep ephemeral
It will show a range between 32768 and 65536. The recommended broader range of ports is between 9000 and 65500. To set these new values, use the following commands:
# /usr/sbin/no -p -o tcp_ephemeral_low=9000 -o tcp_ephemeral_high=65500
# /usr/sbin/no -p -o udp_ephemeral_low=9000 -o udp_ephemeral_high=65500
Network time protocol
Oracle Clusterware 12c Release 1 (12.1) requires time synchronization across all Oracle RAC nodes within a cluster when Oracle RAC is deployed. Time synchronization can be configured in two ways: an operating-system-configured Network Time Protocol (NTP), or the Oracle Cluster Time Synchronization Service (CTSS). Oracle Cluster Time Synchronization Service is designed for organizations whose cluster servers are unable to access NTP services.
In the lab test setup, NTP was disabled with the following command, and the NTP configuration file “/etc/ntp.conf” was renamed to “/etc/ntp.bak”.
# stopsrc -s xntpd
Creating users and groups
Oracle recommends creating the following operating system groups and users for all installations where each software installation needs its own user. The operating system groups are oinstall, dba, oper, asmadmin, and asmdba. The users are grid and oracle.
The group ID and user ID numbers should be the same on each of the Oracle RAC nodes.
# mkgroup -'A' id='1000' adms='root' oinstall
# mkgroup -'A' id='2000' adms='root' dba
# mkgroup -'A' id='3000' adms='root' oper
# mkgroup -'A' id='4000' adms='root' asmadmin
# mkgroup -'A' id='5000' adms='root' asmdba
# mkuser id='1100' pgrp='oinstall' groups='dba,oper,asmadmin,asmdba' home='/home/grid'
grid
# mkuser id='1101' pgrp='oinstall' groups='dba,oper,asmdba' home='/home/oracle' oracle
For the lab test, the user grid is used for installing the Oracle Grid Infrastructure software and the user
oracle is used for installing the Oracle RAC software.
The following capability settings for the grid user are required for the Oracle Grid Infrastructure installation to succeed:
# /usr/bin/chuser capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,
CAP_PROPAGATE grid
Create a separate file system (for example, /u01) and separate directories (ORACLE_HOME) for the Oracle Grid Infrastructure software and the Oracle Database software:
# mkdir -p /u01/app/12.1.0/grid
# mkdir -p /u01/app/grid
# mkdir -p /u01/app/oracle
# chown -R grid:oinstall /u01
# chown oracle:oinstall /u01/app/oracle
# chmod -R 775 /u01/
With 12c Release 1 of both products, there are two separate ORACLE_HOME directories: one home for Oracle Grid Infrastructure and the other for Oracle Database. To execute commands like “asmca” for Oracle ASM configuration or DBCA for database configuration, you will need to change the ORACLE_HOME environment variable to point to the appropriate home directory.
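As a sketch of that environment switch (the Grid home path matches the directory created above; the database home path is a typical layout, not one stated in this paper):

```shell
# Grid Infrastructure home, as created earlier in this paper:
export ORACLE_HOME=/u01/app/12.1.0/grid
export PATH=$ORACLE_HOME/bin:$PATH      # for asmca, crsctl, etc.

# Before running DBCA, point ORACLE_HOME at the database home instead.
# (Hypothetical path - substitute the directory chosen at install time.)
export ORACLE_HOME=/u01/app/oracle/product/12.1.0/dbhome_1
export PATH=$ORACLE_HOME/bin:$PATH
```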
Setting Oracle inventory location
When you install Oracle software on the system for the first time, a file called oraInst.loc is created under the /etc directory. The file records where the Oracle inventory directory is and the name of the Oracle Inventory group.
inventory_loc=/u01/app/oracle/oraInventory
inst_group=oinstall
If a previous inventory directory exists, please make sure that the same Oracle inventory directory is
used and all Oracle software users have write permissions to this directory.
Setting up network files
The following network addresses are required for each node:
Public network address
Private network address
Virtual IP network address (VIP)
Single Client Access Name (SCAN) address for the cluster
The interfaces and IP addresses for both the public and private networks need to be set up. After setting up the public and private IP addresses, /etc/hosts will look like the following:
129.40.71.192 p128n192.pbm.ihost.com p128n192 # RAC node 1
129.40.71.193 p128n193.pbm.ihost.com p128n193 # RAC node 2
129.40.71.194 p128n194.pbm.ihost.com p128n194 # RAC node 3
129.40.71.185 p128n185.pbm.ihost.com p128n185 # VIP set in p128n192
129.40.71.186 p128n186.pbm.ihost.com p128n186 # VIP set in p128n193
129.40.71.187 p128n187.pbm.ihost.com p128n187 # VIP set in p128n194
10.10.10.1 rac1-pvt # Private network 1 for RAC node1
10.10.20.1 rac1-pvt # Private network 2 for RAC node1
10.10.10.2 rac2-pvt # Private network 1 for RAC node2
10.10.20.2 rac2-pvt # Private network 2 for RAC node2
10.10.10.3 rac3-pvt # Private network 1 for RAC node3
10.10.20.3 rac3-pvt # Private network 2 for RAC node3
For the private interconnect between the Oracle RAC nodes, two interfaces were configured on each of the nodes. These interfaces are used automatically by the Oracle Highly Available IP (HAIP) network functionality for load balancing and high availability.
The Single Client Access Name (SCAN) was introduced in Oracle Database 11g Release 2. It requires static IP addresses that are resolved by the Domain Name Server (DNS); the SCAN addresses should not be placed in /etc/hosts. Oracle recommends three IP addresses for the SCAN: a single DNS entry with three IP addresses attached to one name and set to round robin. For the lab test, three IP addresses were used:
SCAN IP addresses:
RAC12c-scan 129.40.71.183
RAC12c-scan 129.40.71.184
RAC12c-scan 129.40.71.189
All of the public IP, VIP and the SCAN IP addresses should be in the same subnet.
To make sure the SCAN IPs are set up with the round-robin method in the DNS server, run the command "nslookup RAC12c-scan" several times and verify that the order of the returned IP addresses rotates on each execution.
$ nslookup RAC12c-scan
Server: 129.40.71.195
Address: 129.40.71.195#53
Name: RAC12c-scan.pbm.ihost.com Address: 129.40.71.183
Name: RAC12c-scan.pbm.ihost.com Address: 129.40.71.184
Name: RAC12c-scan.pbm.ihost.com Address: 129.40.71.189
$ nslookup RAC12c-scan
Server: 129.40.71.195
Address: 129.40.71.195#53
Name: RAC12c-scan.pbm.ihost.com Address: 129.40.71.189
Name: RAC12c-scan.pbm.ihost.com Address: 129.40.71.183
Name: RAC12c-scan.pbm.ihost.com Address: 129.40.71.184
Configuring SSH on all cluster nodes
Oracle Database 12c Release 1 requires SSH, which must be set up so that the Oracle RAC nodes can log in to each other without a password. This can be done manually, with the Oracle-provided script "sshUserSetup.sh", or by the Oracle Universal Installer (OUI).
In the lab test, the OUI was used to configure SSH on the nodes while installing Oracle Grid Infrastructure.
Configuring shared disks for OCR, voting and database
Starting with version 11g Release 2, the Oracle Clusterware voting disk and OCR can be stored in ASM.
Oracle strongly recommends storing the Oracle Clusterware disks in ASM. However, Oracle Clusterware binaries and files cannot be stored in an Oracle ASM Cluster File System (ACFS). Oracle recommends at least 300 MB for each voting disk and 400 MB for each OCR file. The total space required is cumulative and depends on the level of redundancy chosen during the installation.
Redundancy Level | Minimum Number of Disks | Oracle Cluster Registry (OCR) Files | Voting Files | Both File Types
External         | 1                       | 400 MB                              | 300 MB       | 700 MB
Normal           | 3                       | 800 MB                              | 900 MB       | 1.7 GB *
High             | 5                       | 1.2 GB                              | 1.5 GB       | 2.7 GB
Table 2: Minimum number of disks for Oracle OCR and voting files
* If an ASM disk group is planned, it must be at least 2 GB in size.
In our example, the Oracle Clusterware disks (OCR and voting disks) will be stored in an Oracle ASM disk group. The Oracle ASM disks need to be created prior to installation, with the correct ownership and permissions, and all of the disks should be shared across the Oracle RAC nodes.
The OCR and voting disks are created in one ASM disk group, and another ASM disk group is created for the database files.
Set the following ownership and permissions on the disks used in the ASM disk groups for the OCR, voting disks and database:
#chown grid:asmadmin /dev/rhdisk<#>
#chmod 660 /dev/rhdisk<#>
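With several shared disks, the two commands above can be applied in one pass. The disk numbers below are placeholders for illustration; this sketch collects the commands as a dry run rather than executing them, since the real chown/chmod calls must be run as root against your actual hdisks on every node:

```shell
# Dry run: assemble the ownership/permission commands for a set of
# shared disks. Disk numbers 5-7 are placeholders; substitute the
# hdisks presented to your cluster nodes and run the real commands
# as root once the list is confirmed.
cmds=""
for n in 5 6 7; do
  cmds="$cmds
chown grid:asmadmin /dev/rhdisk$n
chmod 660 /dev/rhdisk$n"
done
echo "$cmds"
```

Because the ownership must be identical on every node, the same loop (without the dry run) is repeated on each cluster member.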
If the disk names to be used for creating an ASM disk group are not consistent across the cluster nodes, ASM can still identify the storage devices and create the disk group. However, it is useful to make the disk names consistent across the cluster nodes. Using the command "rendev", inconsistent shared device names can be given a meaningful name. When rendev is used to rename a disk, both the block and character mode devices are renamed. If the device being renamed is in the Available state, the rendev command must unconfigure the device before renaming it. If the unconfigure operation fails, the renaming will also fail. If it succeeds, the rendev command reconfigures the device after renaming it to restore it to the Available state.
In the process of unconfiguring and reconfiguring the device, the ownership and permissions are reset to their default values. So after renaming a disk device, check the ownership and permissions and, if necessary, change them to the values required by your Oracle RAC installation. Device settings stored in the AIX ODM, for example reserve_policy, are not changed by the renaming process.
Some disk multipathing solutions may have problems with device renaming. At least some versions of EMC PowerPath and some IBM SDDPCM tools (IBM storage MPIO tools) have dependencies on disk names, which can lead to problems when disks are renamed. For this reason, device renaming should only be used with the AIX native MPIO device driver, unless the storage vendor confirms that your storage solution is compatible with renaming the disks.
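The rename-and-restore sequence described above can be sketched as a dry run. The old and new device names are illustrative examples; the commands are only assembled here, since rendev and the permission changes must be executed as root on a real AIX node:

```shell
# Dry run of the rename-and-restore sequence. hdisk5 and ocr_disk1
# are example names. rendev renames both the block and character
# devices; because the device is unconfigured and reconfigured in
# the process, ownership and permissions fall back to defaults and
# must be reapplied for Oracle RAC.
OLD=hdisk5
NEW=ocr_disk1
steps="rendev -l $OLD -n $NEW
chown grid:asmadmin /dev/r$NEW
chmod 660 /dev/r$NEW"
echo "$steps"
```

Running the three real commands in this order on each node leaves the renamed disk in the Available state with the ownership Oracle RAC expects.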
Running Cluster Verification Utility (CVU)
The Cluster Verification Utility (CVU) can be used to verify that the systems are ready to install Oracle
Clusterware 12c Release 1. The Oracle Universal Installer will use CVU to perform all pre-requisite
checks during the installation interview. Login as user grid and run the following command:
$./runcluvfy.sh stage -pre crsinst -n p128n192,p128n193,p128n194 -verbose
The cluster verification test might report a failure because the required OS patches are not found on the nodes. This does not mean that the currently installed OS level lacks the fixes: the listed OS patches apply to levels lower (7100-01 or less) than the currently installed AIX level (7100-02-03). The current level already contains most of the fixes; two APARs need i-fixes and three APARs are not applicable to the current version, so the failure report can be ignored.
AIX7.1 TL03 and later includes fixes for all of these (APARs).
The following table lists in column 1 the APARs reported in the output of "runcluvfy.sh" and "runInstaller" for Oracle Grid Infrastructure. Column 2 shows the corresponding APARs that are available and included as part of AIX 7.1 TL02 SP03.
Oracle GI "runInstaller" warning for AIX APARs | Fix included in AIX 7.1 TL02 SP03 | Description of the APAR | Comments
IV19836 | IV38845 | "ifconfig" MONITOR option stopped working |
IV39136 | IV40005 | LINK FAILS WITH UNDOCUMENTED COMPILER FLAG AND THREAD-LOCAL STG |
IV41415 | IV39987 | RUNTIME LINKING FAILED TO BIND THE BSS SYMBOL EXPORTED FROM MAIN |
IV34869 | IV30318 | THREAD_CPUTIME() RETURNS INCORRECT VALUES |
IV35057 | IV30320 | LOADING 5.3 TLS ENABLED LIBS BY 5.2 APPS CAUSED CORE DUMP IN 32B |
IV21116 | IV21878 | SYSTEM HANGS OR CRASHES WHEN APP USES SHARED SYMTAB CAPABILITY |
IV21235 | IV19357 | SYSTEM CRASH DUE TO FREED SOCKET WHEN SOCKETPAIR() CALL USED |
IV16737 | IV38857 | JAVA WON'T INSTANTIATE IF PROT_NONE USED FOR SHARED MMAP REGION |
IV28925 | IV27105 | SHLAP PROCESS FAILS WHEN SHARED SYMBOL TABLE FEATURE IS USED |
IV45072 | Get iFix | A SPECIAL-PURPOSE LINKER FLAG WORKS INCORRECTLY | Get a temporary iFix by opening a PMR
IV45073 | Get iFix | ADD ABILITY TO REORDER TOC SYMBOLS IN LIMITED CIRCUMSTANCES | Get a temporary iFix by opening a PMR
IV33857 | N/A | NSLOOK SHOULD NOT MAKE REVERSE NAME RESOLUTION OF NAME SERVERS | IV33857 is specific to AIX 7.1 TL01 SP03; this is fixed in SP04 and later releases
IV45070 | N/A | NEW FUNCTION | Fix included in AIX 7.1 TL03
IV51534 | N/A | SUPPORT FOR FUTURE TL/SP - United States | Fix included in AIX 7.1 TL03
IV30579 | N/A | SUPPORT FOR FUTURE TL/SP - United States | This APAR warning shows in the OUI of Oracle Database 12c; the fix is included in AIX 7.1 TL03
Table 3: Warnings in the Oracle OUI of Grid Infrastructure and fixes in AIX 7.1 TL02 SP03
Performing Oracle Clusterware installation and Automatic Storage Management installation
To install Oracle Clusterware 12c Release 1, download Oracle Database 12c Release 1 Grid Infrastructure (12.1.0.1) for AIX. Then unzip "aix.ppc64_12c_grid_1of2.zip" and "aix.ppc64_12c_grid_2of2.zip" and run the Oracle Universal Installer (OUI) from one of the nodes. For the most part, the OUI handles the installation on the other cluster nodes.
Running the installation from the system console requires an X Window session. Alternatively, you can run it through VNC viewer software on a remote host that connects to a VNC server running on one of the nodes to start the OUI.
1. Execute "rootpre.sh" as the root user. This script is located in the "grid" directory that was created when you unzipped the Oracle Grid Infrastructure files.
# ./rootpre.sh
2. Run "./runInstaller" as the grid user. In the first screen you can choose the option "Skip software updates" since this is a fresh install.
3. The next screen will ask you to select one of the installation options. In this example, we select "Install and Configure Grid Infrastructure for a Cluster".
4. The next screen will ask if this is a typical or advanced installation. We select "Typical Installation".
The next screen shows the product languages; by default, "English" is selected.
5. The next screen asks for the SCAN name, the cluster node names and the virtual IP addresses. If SSH was not set up before starting the Oracle Grid Infrastructure installation, enter the OS password for the user grid and click "Setup". Since SSH was already set up on the Oracle RAC nodes using the script, we only enter the password for the user grid; while moving to the next screen, the OUI automatically tests the SSH setup. You can also click "Test" to make sure that SSH works properly between the nodes. Since we have chosen "Typical Installation", a SCAN name that can be resolved by the DNS is needed. If you choose "Advanced Installation", a Grid Naming Service (GNS) and its associated information will be required.
After the SSH setup is done, a pop-up window is displayed.
Click "Identify network interfaces" to see and select the network interfaces for the public and private networks. For our test, one network interface (en0) was selected for the public network and two network interfaces (en1 and en2) for the private network of the cluster.
After clicking "Next", the OUI validates the nodes, the SSH setup and the public and private interfaces.
6. The next screen will ask you for the Oracle base and software directories. In this example, all Oracle Clusterware files (OCR and voting) are going to be stored in an ASM disk group. Then, enter the password for SYSASM. Oracle expects the password to conform to specific rules; if it does not, errors are shown at the bottom of the screen.
For an Oracle 12c Grid Infrastructure installation, Oracle recommends allocating 100 GB for the Grid home to allow space for patches; otherwise a warning message is shown, as below. In a production environment, it is recommended to allocate an additional 100 GB of space for the Grid ORACLE_HOME. Click "Next".
Since ASM was chosen as the storage type for the Clusterware files, the install process then asks for the names of the ASM disks and creates a disk group with the selected ASM disks to store the OCR and voting disks. The number of disks needed depends on the redundancy level you picked: high redundancy requires five disks, normal redundancy three, and external redundancy one. If you do not select enough disks, an error message is given. The minimum size of each disk is 280 MB. For this example, high redundancy was chosen.
Click "Next" and enter the inventory location if the default one is not used.
Click "Next".
Starting with Oracle Database 12c Release 1, the Oracle Universal Installer provides options for automatically running the root configuration scripts required during the Oracle Grid Infrastructure installation. We have chosen the option "Automatically run configuration scripts", which executes the root configuration scripts automatically. On this screen, provide the password if the "root" user is used directly, or provide the corresponding information for "sudo" users. Click "Next".
7. Next, the Cluster Verification Utility runs to check whether the cluster nodes have met all of the prerequisites. If not, the installation stops and shows you the errors. You can fix the errors and run the check again. At the bottom of the screen, you can click on more details, where suggestions on how to fix the errors are shown.
The failure seen earlier by the Cluster Verification Utility for the AIX OS patches will also be
shown by the OUI at this point. Again we can ignore it since the OS patches are included in the
installed OS level.
After fixing all the errors and passing the prerequisite tests, the installation summary is shown.
You can save the response file for future silent installation if desired.
8. This screen shows the installation process; a window pops up to confirm the user who is going to run the root configuration scripts as a privileged user. In our case, the "root" user is the privileged user. Click "Yes".
9. After Oracle has installed the binary files on all cluster nodes, root configuration scripts are
automatically executed as user root on all of the nodes.
10. OUI will continue to configure the Oracle Grid Infrastructure for a cluster.
11. After the successful installation of Oracle Grid Infrastructure, the following screen is shown. As part of the installation, the cluster resources are automatically started and brought online on all of the nodes.
After you press "OK" and continue, the Oracle Grid Infrastructure installation is complete.
Please check the configuration log file for more details if there are any other failures during the installation
and configuration process. The configuration log file is located in the Oracle Inventory location.
Performing post-installation tasks
To confirm that Oracle Clusterware is running correctly, use this command as the grid user:
$CRS_HOME/bin/crsctl status resource -w "TYPE co 'ora'" -t
$ crsctl status resource -w "TYPE co 'ora'" -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.CRS_VOTE.dg
               ONLINE  ONLINE       p128n192                 STABLE
               ONLINE  ONLINE       p128n193                 STABLE
               ONLINE  ONLINE       p128n194                 STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       p128n192                 STABLE
               ONLINE  ONLINE       p128n193                 STABLE
               ONLINE  ONLINE       p128n194                 STABLE
ora.asm
               ONLINE  ONLINE       p128n192                 STABLE
               ONLINE  ONLINE       p128n193                 Started,STABLE
               ONLINE  ONLINE       p128n194                 Started,STABLE
ora.net1.network
               ONLINE  ONLINE       p128n192                 STABLE
               ONLINE  ONLINE       p128n193                 STABLE
               ONLINE  ONLINE       p128n194                 STABLE
ora.ons
               ONLINE  ONLINE       p128n192                 STABLE
               ONLINE  ONLINE       p128n193                 STABLE
               ONLINE  ONLINE       p128n194                 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       p128n194                 STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       p128n193                 STABLE
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       p128n192                 STABLE
ora.cvu
      1        ONLINE  ONLINE       p128n192                 STABLE
ora.oc4j
      1        OFFLINE OFFLINE                               STABLE
ora.p128n192.vip
      1        ONLINE  ONLINE       p128n192                 STABLE
ora.p128n193.vip
      1        ONLINE  ONLINE       p128n193                 STABLE
ora.p128n194.vip
      1        ONLINE  ONLINE       p128n194                 STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       p128n194                 STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       p128n193                 STABLE
ora.scan3.vip
      1        ONLINE  ONLINE       p128n192                 STABLE
--------------------------------------------------------------------------------
Another command, "crsctl check cluster -all", can also be used for a cluster-wide check.
$ crsctl check cluster -all
**************************************************************
p128n192:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
p128n193:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
p128n194:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
*************************************************************
Finally, the command, “crsctl check crs”, can also be used for a less detailed system check.
$ crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
To check the status of ASM:
$ srvctl status asm
ASM is running on p128n192,p128n193,p128n194
Always look at the Oracle Grid Infrastructure log file, located at $CRS_HOME/log/<hostname>/alert<hostname>.log, for any errors and warnings.
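The alert log path can be built from the Grid home and the node name. The CRS_HOME value below reuses the Grid home from this paper's directory layout; the grep filter at the end is only an illustrative way to scan the log:

```shell
# Build the path of the Grid Infrastructure alert log for this node.
CRS_HOME=/u01/app/12.1.0/grid       # Grid home from this paper's layout
HOST=$(hostname)
ALERT_LOG=$CRS_HOME/log/$HOST/alert$HOST.log
echo "$ALERT_LOG"

# On a live node, scan the log for problems, for example:
#   grep -E 'ERROR|WARNING' "$ALERT_LOG" | tail -20
```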
Installing Oracle Database 12c Release 1 (12.1.0.1)
Pre-Installation tasks
All of the pre-installation tasks for Oracle Database 12c Release 1 were done before installing the Oracle Grid Infrastructure software. No other specific tasks are needed except the cluster verification test for the pre-database configuration.
If you have decided to use ASM for storing the database files, create the disk group using the asmca utility, which is part of the clusterware, before starting to install and create the database. The ASM disk group name for the database files will be asked for in one of the following screens.
Running Cluster Verification Utility
The Cluster Verification Utility can be used to verify if the systems are ready to install Oracle
Database 12c Release 1 with Oracle RAC.
The command "cluvfy.sh stage -pre dbcfg -n nodelist -d $ORACLE_HOME" is used to pre-check the requirements for an Oracle Database with Oracle RAC installation. Log in as the user oracle and run the cluvfy command.
$./runcluvfy.sh stage -pre dbcfg -n p128n192,p128n193,p128n194 -d /u01/app/oracle
The cluster verification could be unsuccessful for the same reason seen in the Oracle Grid Infrastructure installation. Most of the required OS patches are included in the currently installed higher OS level, AIX 7.1 TL02 SP03. Only two APARs, IV45072 and IV45073, need interim fixes (i-fixes); after applying the i-fixes, the failures can be ignored. These fixes are not needed if the AIX version is 7100-03-01 or later.
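Whether the two APARs are already present on a node can be checked with the AIX instfix command. This sketch only assembles the commands as a dry run, since instfix must be run on a real AIX node:

```shell
# Dry run: build the instfix queries for the two APARs that still
# need i-fixes on AIX 7.1 TL02 SP03. On a node, "instfix -ik <APAR>"
# reports whether that APAR is installed.
checks=""
for apar in IV45072 IV45073; do
  checks="$checks
instfix -ik $apar"
done
echo "$checks"
```

Running the two real instfix commands on every cluster node confirms the i-fixes before the verification failures are dismissed.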
Preparing Oracle home and its path
The Oracle home path must be different from the Oracle Clusterware home. In other words, Oracle Database 12c Release 1 with RAC cannot be installed into the same home as the Oracle Clusterware software.
Performing database installation
1. Download and unzip "aix.ppc64_12c_database_1of2.zip" and "aix.ppc64_12c_database_2of2.zip" from http://www.oracle.com/technetwork/database/enterprise-edition/downloads/index.html, then go to the database directory and execute ./rootpre.sh as the root user.
2. Log in as the user oracle. The installation needs to be run in X Windows or through RealVNC.
3. Execute ./runInstaller. The first screen asks for your email address. You can choose to provide your email address in order to receive security updates from My Oracle Support; in that case you will also need to provide your My Oracle Support password for that account.
4. The next screen shows the options to download the software updates. Since this is a fresh install, you can skip this screen.
5. The next screen provides the user different installation options. In this example, we will be
installing database software only.
6. The next screen asks for the type of installation you want to perform. For this example, "Oracle Real Application Cluster Database Installation" is selected.
7. The next screen shows a list of nodes in the cluster; select all of them to be RAC nodes. Click "SSH connectivity" and provide the OS password for the "oracle" user, then click "Setup" to set up SSH connectivity between the nodes for the user "oracle".
8. The next screen asks you to select the language in which the product will run.
9. The next screen shows the options for editions. For our test, “Enterprise Edition” is selected.
10. The next screen asks for the installation location; enter the "Software Location" and "Oracle base".
11. The next screen prompts you to specify the names of the OSDBA, OSOPER, OSBACKUPDBA, OSDGDBA and OSKMDBA groups. Members of these groups are granted operating system authentication for the set of database system privileges each group authorizes. Providing a different group for each administration purpose restricts user access to the Oracle software by responsibility area for the different administrator users. This configuration is optional. For our test, "dba" is given for all groups except "Database operator".
12. Click "Next" to verify that the target environment meets the minimum installation and configuration requirements for an Oracle Real Application Clusters database.
As seen in the Oracle Grid Infrastructure installation and the output of the Cluster Verification tool, the OUI also shows the missing OS patches. This can be safely ignored if all of the APARs are installed as explained in the clusterware installation section, with Table 3. All of the AIX patch warnings were already displayed by the clusterware OUI except IV30579, which is the same as IV51534.
13. The next screen shows a summary of the inputs given for the database binaries installation.
14. This screen shows the installation progress of the Oracle RAC database binaries.
15. This is the last step of the database binaries installation process. Execute root.sh from the
software location mentioned in the pop-up window on all cluster nodes as user root.
The output of “root.sh” from all the cluster nodes should be the same.
16. This is the end of the database installation process.
Creating Oracle Real Application Cluster Database – Container database
Oracle Database 12c introduces a multitenant architecture: a multitenant container database in which many pluggable databases can be created. All of the pluggable databases in a single multitenant container database share the same memory and background processes.
In our test lab, a container database will be created, and one pluggable database will then be created in that container database using the Database Configuration Assistant (DBCA) tool.
Use the RealVNC client tool to connect to the AIX LPAR where the Oracle Database binaries are installed, go to the directory $ORACLE_HOME/bin and execute the "dbca" command as the oracle user.
$ dbca
1. This command opens a window; select "Create Database" and click "Next".
2. Select the "Advanced Mode" option in the following screen. The other option creates a database with default settings.
3. Select the database type Oracle Real Application Clusters (RAC) database. For our test, we selected "Policy-Managed" for the configuration type and the template "General Purpose or Transaction Processing". Click "Next".
4. Provide the global database name and select "Create As Container Database" and "Create an Empty Container Database" to create only the container database; we create a pluggable database later. Click "Next".
5. The following screen asks for a server pool name for the database. Server pools were introduced in Oracle Clusterware 11g Release 2 (11.2). A server pool is a logical group of servers managed by Oracle Clusterware. Resources are not defined to a specific instance or node; instead, the priority of the resource requirement is defined.
A policy-managed database is defined by cardinality, which is the number of database instances you want running during normal operations. A policy-managed database runs in one or more server pools and can run on different servers at different times. A database instance starts on all servers that are in the server pools defined for the database.
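Besides letting DBCA create the server pool, a pool can be defined up front with srvctl. The pool name and sizes below are illustrative; this dry run only prints the command, which would be run as the grid user on a cluster node:

```shell
# Dry run: assemble an srvctl command that defines a server pool for a
# policy-managed database. rac12c_pool and the min/max cardinality are
# example values, not from this paper.
POOL=rac12c_pool
MIN=2
MAX=3
cmd="srvctl add srvpool -serverpool $POOL -min $MIN -max $MAX"
echo "$cmd"
```

With -min 2 and -max 3, Oracle Clusterware keeps at least two servers in the pool when possible, so at least two database instances run during normal operations.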
6. In the following screen, Enterprise Manager (EM) Database Express is selected. The database can also be managed through Enterprise Manager (EM) Cloud Control. Click "Next".
7. Provide passwords for the administrative user accounts and click "Next".
8. Provide the storage options for creating the database files. For our test, an ASM disk group is used. Click "Next".
9. If you want Oracle Database Vault configured for database security, configure the following options; otherwise click "Next".
10. Configure the memory, sizing, character sets and connection mode as required for the nature of the workload and application. Click "Next".
11. Select “Create Database” and click “Next”.
12. The next screen shows the progress of the prerequisite checks. After the checking is done, the
results are shown in the following screen.
13. The result of the prerequisite check is shown in the following screen. It shows warnings for the
cluster validation checks.
14. The warnings shown here are for AIX patches. They were all taken care of while installing the
clusterware and database software, so they can be safely ignored by clicking “Ignore All”. Click “Next”.
15. This is the summary page for the input given for creating the container database.
16. This screen shows the progress of the database creation.
The following screen shows the status of the database creation, and a pop-up window gives an option
to unlock the necessary user accounts based on the requirement. Click “Exit”.
Click “Close” to close the DBCA tool.
Creating Oracle Real Application Clusters Database – Pluggable database
After the container database is created as shown in the previous section, a pluggable database can be
created using the DBCA tool. Start DBCA as the “oracle” user:
1. $ORACLE_HOME/bin/dbca
Select “Manage Pluggable Databases” and click “Next”.
2. Select “Create a Pluggable Database” and click “Next”.
3. The container database “rac_cdb” is selected automatically, since it is the only database created
so far.
4. Select the “Create a new Pluggable Database” option and click “Next”.
5. Provide the pluggable database name and specify a common storage location for the database files.
In our test, an ASM disk group is selected. Select the “Create Default User Tablespace” option and
provide the PDB username and password in the respective fields for them to be created. Click “Next”.
This is the final summary page, which shows all the input given to the DBCA tool.
Click “Finish”. The following screen shows the progress of the pluggable database creation.
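The same pluggable database creation can also be scripted with DBCA in silent mode, which is convenient for repeatable deployments. This is a sketch only; the parameters should be verified against the DBCA command-line reference for your release, and the admin user name and password shown are placeholders:

```shell
# Create a PDB in the existing container database without the GUI
# (admin user name and password are placeholders)
$ $ORACLE_HOME/bin/dbca -silent -createPluggableDatabase \
    -sourceDB rac_cdb \
    -pdbName racdb_pdb1 \
    -createPDBFrom DEFAULT \
    -pdbAdminUserName pdbadmin \
    -pdbAdminPassword <password>
```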
Post installation tasks
Check the status of the newly created databases.
$ export ORACLE_SID=raccdb_1
$ sqlplus /nolog
SQL*Plus: Release 12.1.0.1.0 Production on Thu Jan 23 18:40:12 2014
Copyright (c) 1982, 2013, Oracle. All rights reserved.
SQL> connect /as sysdba
Connected.
SQL> select name,open_mode from v$containers;
NAME                           OPEN_MODE
------------------------------ ----------
CDB$ROOT                       READ WRITE
PDB$SEED                       READ ONLY
RACDB_PDB1                     READ WRITE
SQL> select name , open_mode from v$pdbs;
NAME                           OPEN_MODE
------------------------------ ----------
PDB$SEED                       READ ONLY
RACDB_PDB1                     READ WRITE
SQL>
The open mode of the pluggable database is shown as “READ WRITE”.
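If a pluggable database shows MOUNTED rather than READ WRITE (for example, after a CDB restart), it can be opened manually. In a RAC environment the INSTANCES clause controls where it opens; this is a sketch using our PDB name:

```shell
# Open the PDB on all RAC instances and verify its open mode
# (run as the oracle user on any cluster node)
$ sqlplus / as sysdba <<'EOF'
alter pluggable database RACDB_PDB1 open read write instances=all;
select name, open_mode from v$pdbs;
EOF
```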
Connecting to the pluggable database
While creating a pluggable database, a default service is created automatically with the same name as
the database. In our testing, creation of the pluggable database “racdb_pdb1” automatically created a
service with the same name.
Add an entry for the pluggable database to tnsnames.ora, located at $ORACLE_HOME/network/admin,
alongside the entry that was automatically created for the CDB.
RAC_CDB =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = RAC12c-scan)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = rac_cdb)
)
)
racdb_pdb1 =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = RAC12c-scan)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = racdb_pdb1)
)
)
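Both entries above follow the same template, so entries for additional services can be generated rather than hand-edited. The helper below is our own sketch (the function name and the SCAN host/port defaults are ours, not part of any Oracle tool):

```shell
#!/bin/sh
# Emit a tnsnames.ora entry for a given alias/service on the SCAN listener.
# Usage: tns_entry ALIAS SERVICE_NAME [HOST] [PORT]
# (helper name and defaults are illustrative)
tns_entry() {
    alias=$1; service=$2; host=${3:-RAC12c-scan}; port=${4:-1521}
    printf '%s =\n' "$alias"
    printf '  (DESCRIPTION =\n'
    printf '    (ADDRESS = (PROTOCOL = TCP)(HOST = %s)(PORT = %s))\n' "$host" "$port"
    printf '    (CONNECT_DATA =\n'
    printf '      (SERVER = DEDICATED)\n'
    printf '      (SERVICE_NAME = %s)\n' "$service"
    printf '    )\n'
    printf '  )\n'
}

# Print an entry for the PDB default service
tns_entry racdb_pdb1 racdb_pdb1
```

Redirect the output into a scratch file and append it to tnsnames.ora after reviewing it.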
Check the connection to the pluggable database using the service added in tnsnames.ora file.
$ sqlplus sys/<password>@racdb_pdb1 as sysdba
SQL*Plus: Release 12.1.0.1.0 Production on Fri Jan 24 13:41:43 2014
Copyright (c) 1982, 2013, Oracle. All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options
SQL>
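Where editing tnsnames.ora is not desirable, the same connection can be made with the EZConnect syntax, which names the SCAN host, port, and service directly in the connect string:

```shell
# EZConnect: no tnsnames.ora entry is needed
$ sqlplus sys/<password>@//RAC12c-scan:1521/racdb_pdb1 as sysdba
```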
Another way to create a new service for the pluggable database is by using the SRVCTL utility.
$ srvctl add service -db rac_cdb -service racdb_pdb1_oe -pdb racdb_pdb1 -serverpool rac_db_srv_pool
After adding the service “racdb_pdb1_oe”, add an entry for it to tnsnames.ora:
racdb_pdb1_oe =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = RAC12c-scan)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = racdb_pdb1_oe)
)
)
The newly created service needs to be started before it can be used.
To check the status of the newly created service:
$ srvctl status service -db rac_cdb -service racdb_pdb1_oe
Service racdb_pdb1_oe is not running.
To start the service:
$ srvctl start service -db rac_cdb -service racdb_pdb1_oe
Check the status of the service again:
$ srvctl status service -db rac_cdb -service racdb_pdb1_oe
Service racdb_pdb1_oe is running on nodes: p128n192,p128n193,p128n194
To list the services created for a pluggable database, connect to the database using one of its services and query the “all_services” view.
$ sqlplus sys/<password>@racdb_pdb1_oe as sysdba
SQL*Plus: Release 12.1.0.1.0 Production on Fri Jan 24 14:43:44 2014
Copyright (c) 1982, 2013, Oracle. All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options
SQL> select name, pdb from all_services;
NAME                           PDB
------------------------------ ------------------------------
racdb_pdb1                     RACDB_PDB1
racdb_pdb1_oe                  RACDB_PDB1
Monitoring and Managing database using Enterprise Manager Database Express
Oracle Enterprise Manager Database Express is a web-based tool for managing Oracle Database 12c.
It is configured while creating the RAC CDB and supports administrative tasks such as user and
storage management. It also provides comprehensive solutions for database performance diagnostics
and tuning.
To access Enterprise Manager Database Express, point a web browser at:
https://<RACnodeIP>:5500/em/
Log in as “SYS” with its password and select “as sysdba”.
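If the EM Database Express page does not answer on port 5500, the HTTPS port actually configured for the database can be confirmed from SQL*Plus. The DBMS_XDB_CONFIG package is the 12c interface for this; the session below is a sketch run as SYSDBA:

```shell
# Confirm the EM Database Express HTTPS port
$ sqlplus / as sysdba <<'EOF'
select dbms_xdb_config.gethttpsport() from dual;
EOF
```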
For comprehensive and complete database management, use Oracle Enterprise Manager Cloud
Control 12c.
Summary
Oracle Database 12c Release 1 offers many new features. Many of them help consolidate databases
and further optimize the performance, scalability, and failover mechanisms of Oracle Real Application
Clusters. These new features make implementing Oracle RAC easier and give you the flexibility to
add nodes.
It is important to make sure that the Oracle Clusterware installation is successful and functional before
proceeding to the Oracle Database installation. This is because the Oracle Clusterware daemons ensure
that all applications start during system startup and that any failed applications are restarted
automatically to maintain the high availability of the cluster.
Last but not least, choosing the hardware, operating system, and storage for the Oracle RAC deployment
is a very significant step. The right combination of all of these will contribute to the success of the
installation and implementation on the IBM Power Systems, AIX, and IBM System Storage platforms.
References
Oracle documentation
All of the Oracle documents are available at the following link:
http://www.oracle.com/pls/db121/portal.all_books
IBM documentation
IBM Power Systems servers:
http://www-03.ibm.com/systems/power/hardware/
For more information on the HMC, refer to the “Hardware Management Console V7 Handbook”.
IBM and Oracle Web sites
These Web sites provide useful references to supplement the information contained in this document:
IBM Power Systems p570:
http://www-03.ibm.com/systems/power/hardware/570/index.html
IBM System Storage product offerings:
http://www-03.ibm.com/systems/storage/disk
IBM NAS offerings such as IBM System Storage N3000, N3700, N5000 and N7000:
http://www-03.ibm.com/systems/storage/nas
IBM SDDPCM multipath driver:
http://www-01.ibm.com/support/docview.wss?uid=ssg1S4000201#DS6K
IBM Redbooks:
http://www.redbooks.ibm.com
IBM Techdocs (White Papers):
http://www-03.ibm.com/support/techdocs/atsmastr.nsf/Web/WhitePapers
IBM ISV Solutions for Oracle:
http://www-03.ibm.com/systems/storage/solutions/isv/#oracle
Oracle Real Application Clusters:
http://www.oracle.com/technology/products/database/clustering
For technologies supported by Oracle with Oracle Real Application Clusters, visit:
http://www.oracle.com/technology/products/database/clustering/certify/tech_generic_unix_new.html
My Oracle Support (formerly Oracle MetaLink):
https://support.oracle.com/CSP/ui/flash.html
About the author
Ravisankar Shanmugam is a Senior IT Specialist with the IBM Oracle International Competency Center
based in Foster City, CA. He provides Power Systems and System x platform support for projects at the
Competency Center and for enablement activities at Oracle Corporation in Redwood Shores, CA.
Appendix A: List of common abbreviations and acronyms
ASM Automatic Storage Management
A feature of Oracle Database that provides integrated cluster file system and volume management capabilities.
FC Fibre Channel
A gigabit-speed network technology primarily used for storage networking.
GHz Gigahertz
A unit of frequency commonly used to express computer processor speed.
HBA Host bus adapter
An adapter that connects a host system to other network and storage devices.
HDD Hard Disk Drive
A non-volatile storage device which stores digitally encoded data on rapidly rotating platters with magnetic surfaces.
I/O Input / Output
The communication between an information processing system and the outside world.
iSCSI Internet Small Computer System Interface
An Internet Protocol (IP)-based storage networking standard for linking data storage facilities developed by the Internet Engineering Task Force (IETF).
LUN Logical Unit Number
A subset of a larger physical disk or disk volume. It can be a single disk drive, a partition of a single disk drive, or a disk volume from a RAID controller. It represents a logical abstraction or virtualization layer between the physical disk device/volume and the applications.
MB Megabyte
For processor storage, real and virtual storage, and channel volume, 2 to the 20th power or 1,048,576 bytes. For disk storage capacity and communications volume, 1 000 000 bytes.
Mb Megabit
For processor storage, real and virtual storage, and channel volume, 2 to the 20th power or 1 048 576 bits. For disk storage capacity and communications volume, 1 000 000 bits.
NAS Network-attached storage
File-level data storage connected to a computer network providing data access to heterogeneous network clients.
NIC Network interface controller
Hardware that provides the interface control between system main storage and external high-speed link (HSL) ports.
OCFS Oracle Cluster File System
A consistent file system image across the servers in a cluster.
OCFS2 Oracle Cluster File System Release 2
The next generation of the Oracle Cluster File System for Linux. It is a general-purpose file system that can be used for shared Oracle home installations.
OCR Oracle Cluster Registry
A file that contains information pertaining to instance-to-node mapping, node list and resource profiles for customized applications in the Clusterware.
RAC Real Application Clusters
A cluster database with a shared cache architecture that supports the transparent deployment of a single database across a cluster of servers.
RDAC Redundant Disk Array Controller
It provides redundant failover/failback support for the logical drives of the storage server.
RHEL5 Red Hat Enterprise Linux 5
A Linux operating system released in March 2007, based on the Linux 2.6.18 kernel.
SAN Storage area network
A dedicated storage network tailored to a specific environment, combining servers, storage products, networking products, software, and services.
SAS Serial Attached SCSI
A communication protocol for direct attached storage (DAS) devices. It uses SCSI commands for interacting with SAS End devices.
SCSI Small Computer System Interface
An ANSI-standard electronic interface that allows personal computers to communicate with peripheral hardware, such as disk drives, tape drives, CD-ROM drives, printers, and scanners, faster and more flexibly than previous interfaces.
SLES SUSE Linux Enterprise Server
A Linux distribution supplied by Novell.
Trademarks and special notices
© Copyright IBM Corporation 1994-2014. All rights reserved.
References in this document to IBM products or services do not imply that IBM intends to make them
available in every country.
IBM, the IBM logo, ibm.com, AIX, Micro-Partitioning, Power, Power Systems, PowerVM, System Storage
and System x are trademarks or registered trademarks of International Business Machines Corporation in
the United States, other countries, or both.
Java and all Java-based trademarks are trademarks of Oracle Corporation in the United States,
other countries, or both.
Linux is a trademark of Linus Torvalds in the United States, other countries, or both.
Microsoft, Windows, Windows Server, and the Windows logo are trademarks of Microsoft Corporation in
the United States, other countries, or both.
Other company, product, or service names may be trademarks or service marks of others.
The information provided in this document is distributed “AS IS” without any warranty, either express or
implied.
The information in this document may include technical inaccuracies or typographical errors.
Information concerning non-IBM products was obtained from a supplier of these products, published
announcement material, or other publicly available sources and does not constitute an endorsement of
such products by IBM. Sources for non-IBM list prices and performance numbers are taken from publicly
available information, including vendor announcements and vendor worldwide homepages. IBM has not
tested these products and cannot confirm the accuracy of performance, capability, or any other claims
related to non-IBM products. Questions on the capability of non-IBM products should be addressed to the
supplier of those products.
All statements regarding IBM future direction and intent are subject to change or withdrawal without
notice, and represent goals and objectives only. Contact your local IBM office or IBM authorized reseller
for the full text of the specific Statement of Direction.
Some information addresses anticipated future capabilities. Such information is not intended as a
definitive statement of a commitment to specific levels of performance, function or delivery schedules with
respect to any future products. Such commitments are only made in IBM product announcements. The
information is presented here to communicate IBM's current investment and development activities as a
good faith effort to help with our customers' future planning.
Performance is based on measurements and projections using standard IBM benchmarks in a controlled
environment. The actual throughput or performance that any user will experience will vary depending
upon considerations such as the amount of multiprogramming in the user's job stream, the I/O
configuration, the storage configuration, and the workload processed. Therefore, no assurance can be
given that an individual user will achieve throughput or performance improvements equivalent to the
ratios stated here.
Photographs shown are of engineering prototypes. Changes may be incorporated in production models.
Any references in this information to non-IBM Web sites are provided for convenience only and do not in
any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part
of the materials for this IBM product and use of those Web sites is at your own risk.