How to determine the CRS version?

The active version and the software version of the Oracle Clusterware (CRS) can be verified with the following commands. These version details are required when upgrading a cluster.

To check the active version, run the following command on the local node:

$ crsctl query crs activeversion
CRS active version on the cluster is [10.2.0.3.0]

Note: The active version is the lowest software version running in the cluster.

To check the software version, run the following command on the local node:

$ crsctl query crs softwareversion
CRS software version on node [racnod01] is [10.2.0.3.0]

Note: The software version is the binary version of the software on a particular cluster node.

What is Oracle RAC One Node?
Oracle introduced a new option called RAC One Node with the release of 11gR2 in late 2009. This option is available with Enterprise Edition only. Essentially, it provides a cold-failover solution for Oracle databases: a single instance of Oracle RAC runs on one node of the cluster while the second node sits in cold standby. If the instance fails for some reason, RAC One Node detects the failure and first tries to restart the instance on the same node. If there is a failure or fault in the first node and the instance cannot be restarted there, the instance is relocated to the second node. The benefit of this feature is that it automates instance relocation without downtime and without manual intervention, using a technology called Omotion that facilitates the instance migration/relocation. RAC One Node is Oracle's answer to OS clustering solutions such as Veritas Storage Foundation, Sun Solaris Cluster, IBM HACMP, and HP Serviceguard.
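The detect-restart-relocate behaviour described above can be sketched as a small decision routine. This is a conceptual illustration only, not Oracle's actual implementation; the callables and retry count are hypothetical stand-ins for what Clusterware does internally.

```python
# Conceptual sketch of RAC One Node failure handling (not Oracle's code):
# on instance failure, first retry a local restart; only if that fails is
# the instance relocated to the cold-standby node.
def handle_instance_failure(restart_locally, relocate, max_retries=1):
    """restart_locally and relocate are callables returning True on success."""
    for _ in range(max_retries):
        if restart_locally():
            return "restarted on same node"
    if relocate():
        return "relocated to standby node"
    return "manual intervention required"
```

The point of the sketch is the ordering: relocation to the second node is only attempted after local restart has failed, which is what distinguishes this from an immediate failover.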
Purpose

This is Oracle's attempt to tie customers to a single vendor by eliminating the need to buy third-party OS cluster solutions. First, Oracle introduced Oracle Clusterware with 10g and removed the need to rely on third-party cluster software; now it intends to win over those who still use HACMP, Sun Solaris Cluster, and the like for cold failover.
Benefits

Oracle RAC One Node provides the following benefits:

- Built-in cluster failover for high availability
- Rolling patches for a single-instance database
- Proactive migration/failover of the instance
- Live migration of instances across servers
- Online upgrade to full RAC

The rolling upgrade is really useful: upgrades to the OS and database can be done without downtime, unless the upgrade requires scripts to be run against the database. With RAC One Node, DBAs and sysadmins can be proactive and migrate/fail over the instance to another node before performing any critical maintenance activity.
What it's not suited for

In my view, RAC One Node is not a viable or recommended solution in the following scenarios:

- Load balancing, unlike regular RAC
- As a true high-availability solution
- As a DR solution; Data Guard better fits the bill
- For mission-critical applications
Cost

It is definitely not free. Oracle has priced RAC One Node on par with Active Data Guard. RAC One Node is priced separately and costs $10,000 per processor, as against $23,000 for regular RAC. The license is required for one node only (in a two-node setup). RAC One Node is also eligible for the 10-day rule, which allows a customer to run on another node without buying an additional license for up to 10 days in a calendar year. People who object to paying license fees for resources they are not using will still lament.
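The licensing arithmetic above can be made concrete with a worked example. The per-processor list prices are the ones quoted in this post; the 4-processors-per-node figure is a made-up example, and real Oracle licensing involves core factors and terms not modelled here.

```python
# Illustrative license arithmetic for a 2-node cluster, using the list
# prices quoted above. procs_per_node is a hypothetical example figure.
RAC_ONE_PER_PROC = 10_000   # RAC One Node, per processor
RAC_PER_PROC = 23_000       # regular RAC, per processor
procs_per_node = 4

# RAC One Node: only the active node is licensed (per the one-node rule above).
rac_one_cost = 1 * procs_per_node * RAC_ONE_PER_PROC

# Regular RAC: every node in the 2-node cluster is licensed.
full_rac_cost = 2 * procs_per_node * RAC_PER_PROC
```

Under these assumptions the gap is large: $40,000 versus $184,000 for the same two-node hardware.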
Conclusion

I am still not fully convinced of the usefulness of RAC One Node. Customers invest in RAC for their mission-critical applications, achieving high availability and load balancing at the same time; those who don't go for RAC rely on Data Guard and now, with 11g, on Active Data Guard. So I don't see a huge requirement for RAC One Node beyond seamless failover within a data center. The licensing is a bit disappointing; clients are still made to pay $10K per processor. Moreover, RAC is free with Standard Edition, although one doesn't get the Enterprise features and is limited to 4 CPU sockets. So it would be wrong to assume RAC One Node will be popular among customers who currently use Standard Edition and want to switch to Enterprise. However, this is still a very new feature, and as more people adopt it we will get more clarity on its usability. I am planning to do a POC on it and will publish the installation steps and any findings (the good things and the not-so-good) from my POC.
A cluster comprises multiple interconnected servers or computers that appear to end users and applications as one single server.

What is RAC?

RAC stands for Real Application Clusters. It allows multiple nodes in a clustered system to mount and open a single database that resides on shared disk storage.
Should a single system (node) fail, the database service is still available on the remaining nodes. A RAC database comprises multiple instances, but there is only one database. A non-RAC database is available on a single system only; if that system fails, the database service goes down (a single point of failure).

Oracle Database 10g Real Application Clusters (RAC) enables the clustering of the Oracle Database. A RAC database comprises multiple instances residing on different computers that access a common database residing on shared storage.

Why Real Application Clusters?

The basic principle behind Real Application Clusters is greater throughput and scalability through the combined power of multiple instances running on multiple servers. RAC provides high availability and scalability for all application types, and the RAC infrastructure is also a key component for implementing the Oracle enterprise grid computing architecture. Having multiple instances access a single physical database prevents the server from being a single point of failure.
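The "many instances, one database" idea can be modelled in a few lines. This is a toy illustration of the availability argument, not any Oracle API; the class and node names are invented for the example.

```python
# Toy model: one database on shared storage, one instance per node.
# The database service survives as long as at least one instance is OPEN.
class RacDatabase:
    def __init__(self, nodes):
        # one instance per node, all open initially
        self.instances = {node: "OPEN" for node in nodes}

    def node_failure(self, node):
        self.instances[node] = "DOWN"

    def service_available(self):
        return any(state == "OPEN" for state in self.instances.values())

db = RacDatabase(["racnod01", "racnod02"])
db.node_failure("racnod01")   # service still available via racnod02
```

A single-instance (non-RAC) database is the degenerate case with one node: the first failure takes the service down.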
Smaller servers can be combined into a cluster to create a scalable environment that supports mission-critical business applications.

RAC uses Oracle Clusterware as the infrastructure to bind multiple servers so that they operate as a single system. Oracle Clusterware is a portable cluster management solution that is integrated with the Oracle database, and it is a required component for running RAC. In Oracle Database 10g, Oracle provides an integrated software solution that addresses cluster management, event management, application management, connection management, storage management, load balancing, and availability, while hiding the complexity behind simple-to-use management tools and automation. Oracle Real Application Clusters 10g thus provides an integrated clusterware layer that delivers a complete environment for applications.

Main components of Oracle Real Application Clusters 10g

RAC 10g comprises two main components:

1. Oracle Clusterware
2. The Oracle software

In RAC 10g, the Clusterware layer (called CRS) resides below the Oracle software layer; the second layer is the Oracle software itself. RAC is the Oracle Database option that provides a single system image for multiple servers accessing one Oracle database; in RAC, each Oracle instance usually runs on a separate server.

Oracle Clusterware is the software that enables the servers to be bound together and operate as if they were one server. It comprises two clusterware components: a voting disk to record node membership information and the Oracle Cluster Registry (OCR) to record cluster configuration information. Each node is connected to a private network by way of a private interconnect. Oracle Clusterware also comprises several background processes that facilitate cluster operations, such as Cluster Synchronization Services (CSS) and Event Management (EVM).

What are the main Real Application Clusters processes?

The main processes involved in Oracle RAC are primarily used to maintain database coherency among the instances. They manage what are called global resources:

- LMON: Global Enqueue Service Monitor
- LMD0: Global Enqueue Service Daemon
- LMSx: Global Cache Service Processes, where x can range from 0 to j
- LCK0: Lock Process
- DIAG: Diagnosability Process

Several tools are used to manage the various resources available on the cluster at a global level, including Server Control (SRVCTL), DBCA, and Enterprise Manager. Oracle Clusterware also enables you to create a clustered pool of storage to be used by any combination of single-instance and RAC databases, and it is the only clusterware that you need for most platforms on which RAC operates.
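The RAC background processes listed above make a handy quick-reference lookup. The mapping below simply restates the list; the helper function and its fallback text are my own additions for the example.

```python
# Quick-reference for the RAC-specific background processes listed above.
RAC_PROCESSES = {
    "LMON": "Global Enqueue Service Monitor",
    "LMD0": "Global Enqueue Service Daemon",
    "LMSx": "Global Cache Service Processes (x ranges from 0 to j)",
    "LCK0": "Lock Process",
    "DIAG": "Diagnosability Process",
}

def describe(process_name):
    """Return the role of a RAC background process, if it is one."""
    return RAC_PROCESSES.get(process_name, "not a RAC-specific process")
```

Note that ordinary single-instance background processes (SMON, PMON, LGWR, and so on) also run on each RAC instance; the table covers only the RAC additions.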
You can also use clusterware from other vendors, provided it is certified for RAC. The combined processing power of multiple servers can provide greater throughput and scalability than is available from a single server, and RAC lets you combine smaller commodity servers into a cluster to create scalable environments that support mission-critical business applications.

What are the storage principles for the RAC software and CRS?

The Oracle Database 10g Real Application Clusters installation is a two-phase installation: in the first phase you install CRS; in the second phase you install the Oracle Database software with the RAC components and create a cluster database. The Oracle home that you use for the CRS software must be different from the one used for the RAC software. The CRS and RAC software can be installed on shared cluster storage.

Note: The Clusterware and Oracle software are, however, usually installed on a regular file system that is local to each node. This permits online patch upgrades without shutting down the database, and it also eliminates the software as a single point of failure.

Do you need special hardware to run RAC?

RAC requires the following hardware components: a dedicated network interconnect, which might be as simple as a fast network connection between the nodes, and a shared disk subsystem.

RAC and shared storage technologies

1. Supported shared storage for Oracle grids: network-attached storage; storage area network
2. Supported file systems for Oracle grids: raw volumes; cluster file system; ASM
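Given those options, the storage decision for RAC datafiles can be sketched as a tiny rule of thumb. This is my reading of the guidance in this post (ASM where available, a certified cluster file system otherwise, raw devices as the fallback), not an official Oracle decision procedure.

```python
# Sketch of the shared-storage decision for RAC datafiles described above.
# The preference ordering is an assumption drawn from this post.
def choose_shared_storage(asm_available, cfs_certified):
    if asm_available:
        return "ASM disk group"
    if cfs_certified:
        return "cluster file system"
    return "shared raw devices"
```

Whatever the choice, the storage must be accessible to every node: local, single-host file systems are not an option for RAC datafiles.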
Oracle Real Application Clusters & High Availability

Oracle Real Application Clusters (RAC) is a cluster database. Compared to the traditional shared-nothing architecture of a single-instance database, Oracle RAC uses a shared-cache, shared-disk approach to provide a highly available and scalable solution for business applications. RAC is one of the key components of Oracle's enterprise grid architecture.

Oracle RAC provides high availability and scalability for all application types. Having multiple instances access a single database prevents the server from being a single point of failure, and combining smaller commodity servers into a cluster creates scalable environments that support mission-critical business applications. Applications deployed on Oracle RAC databases can operate without code changes.

A feature called Fast Application Notification (FAN) helps with load balancing when the status of an available service changes. Service status can change during a scheduled outage for patching or other regular maintenance, or due to unexpected faults such as a node reboot or database service unavailability.

Oracle RAC also has cost-effective options for scaling your application usage up or down. In the past, businesses normally had to plan the scale of their applications a few years in advance: even though they did not use the full capacity at the initial stage of deployment, the hardware had to be in place in anticipation of high growth in the future.
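The over-provisioning cost described above can be illustrated with made-up numbers: buying peak capacity on day one versus adding commodity nodes only as demand materializes. All figures below are invented for the example.

```python
# Illustrative (made-up numbers): up-front capacity planning vs scale-out.
def upfront_cost(peak_nodes, cost_per_node):
    # all hardware bought in advance, sized for anticipated peak growth
    return peak_nodes * cost_per_node

def scale_out_cost(nodes_needed_per_year, cost_per_node):
    # only the nodes that were ever actually needed are purchased
    return max(nodes_needed_per_year) * cost_per_node

demand = [2, 2, 3]                       # nodes actually needed, years 1-3
upfront = upfront_cost(6, 10_000)        # planned for growth that never came
incremental = scale_out_cost(demand, 10_000)
```

In this toy scenario the up-front plan costs twice what was ever used; the scale-out model defers that spend until the growth is real.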
This kind of planning added cost for businesses, both in hardware and in software licensing. A miscalculation of capacity meant further cost in moving the application to higher-capacity servers at a later stage. Once an application is deployed and running, migrating it to a different server carries additional overhead in terms of building servers, outages, manpower, and so on. Migrating an application is not an easy exercise, and if your business is cost conscious (all businesses are cost conscious, I guess), this type of migration for scaling up or down should be avoided. So what is the cost-effective solution to these overheads? That is where Oracle Real Application Clusters comes to your rescue.

Why Oracle Real Application Clusters?

To accommodate unplanned, unanticipated growth in any business application, an Oracle RAC cluster can be built from standard, commodity-priced processors with standard network and storage components. Since Oracle RAC is built on the very foundation of grid computing, when you require more processing power or wish to scale up, you simply add another similar commodity-priced server. Adding a new server does not require bringing down the database; it can be done without interrupting the service, with users still accessing the database. Oracle RAC supports up to 100 nodes in any given cluster configuration. If your business decides to scale the application down for whatever reason, you can decommission some of the servers, again without bringing down the database and with users still accessing it without any interruption.

Some of the main benefits of Oracle RAC:

- Scalability: service capacity can be expanded simply by adding servers to the existing cluster.
- Round-the-clock (24/7) availability: zero downtime for database applications.
- Relatively lower computing cost: costs can be reduced by using low-cost commodity hardware.
- Grid computing: Oracle RAC is the very foundation of Oracle grid computing.

However, when it comes to managing and looking after a production RAC system, it may not be practical to find the commands you need at the right time, and you don't want to keep searching for the right command and tips when you have a major production issue and users cannot access the database. The book "Oracle Real Application Cluster Field DBA Admin Handbook" describes how to administer the Oracle Clusterware and Oracle Real Application Clusters (Oracle RAC) architecture and provides an overview of these products. It covers services and storage, and how to use RAC scalability features to add and delete instances and nodes in RAC environments. It also describes how to use the Server Control (SRVCTL) utility to start and stop the database and instances, manage configuration information, and delete, add, or move instances and services. A troubleshooting section describes how to interpret the contents of various RAC-specific log files and how to search on Metalink, and a useful reference section lists relevant Metalink document references and web links.

Storage in Oracle Real Application Clusters

Storage for RAC databases must be
shared. In other words, datafiles must reside in a cluster file system, on shared raw devices, or in an Automatic Storage Management (ASM) disk group. The shared storage must include the datafiles, an undo tablespace for each instance, and the online redo log files. Oracle highly recommends using a server parameter file (SPFILE) instead of a parameter file (PFILE).

Shared storage technologies and RAC

1. Supported shared storage for Oracle grids: network-attached storage; storage area network
2. Supported file systems for Oracle grids: raw volumes; cluster file system; ASM

The storage area network (SAN) represents the evolution of data storage technology. Traditionally, on client-server systems, data was stored on devices either inside or directly attached to the server.
Next in the evolution came network-attached storage (NAS), which took the storage devices away from the server and connected them directly to the network.

In a RAC deployment, choosing the appropriate file system is critical. Because traditional file systems do not support simultaneous mounting by more than one system, you must store files either in raw volumes without any file system, or on a file system that supports concurrent access by multiple systems.

Oracle Cluster File System

Oracle Cluster File System (OCFS) is a shared file system designed specially for Oracle Real Application Clusters.

Automatic Storage Management (ASM)

- Automatic, high-performance cluster file system
- Manages Oracle Database files
- Spreads data across disks to balance load
- Integrated mirroring across disks
- Solves many storage management challenges

Automatic Storage Management (ASM) is a new feature in Oracle Database 10g. It provides a vertical integration of the file system and the volume manager, built specifically for Oracle Database files. ASM distributes the I/O load across all available resources to optimize performance while removing the need for manual I/O tuning.
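The even-distribution idea can be sketched in a few lines: a file is cut into fixed-size allocation units (ASM uses 1 MB AUs, as described later in this post) that are dealt round-robin across the disks of a disk group, with an index recording where each piece went. This is a conceptual illustration, not ASM's actual placement algorithm, which also handles rebalancing and mirroring.

```python
# Conceptual sketch of ASM-style striping: a file is split into fixed-size
# allocation units dealt round-robin across the disk group, and an index
# maps each AU to its disk -- which is what spreads the I/O load evenly.
AU_SIZE = 1 * 1024 * 1024  # 1 MB allocation unit

def stripe_file(file_size, disks):
    """Return an index mapping AU number -> disk name, round-robin."""
    num_aus = -(-file_size // AU_SIZE)  # ceiling division: last AU may be partial
    return {au: disks[au % len(disks)] for au in range(num_aus)}

extent_map = stripe_file(5 * AU_SIZE, ["disk1", "disk2", "disk3"])
```

Because every file is spread across every disk in the group, no single disk becomes a hot spot, and reads and writes for one file are served by all spindles in parallel.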
ASM facilitates the management of a dynamic database environment by allowing DBAs to increase the database size without having to shut down the database to adjust storage allocations. ASM can also maintain redundant copies of data to provide fault tolerance.

Note: ASM is Oracle's strategic and stated direction for where Oracle database files should be stored. However, OCFS will continue to be developed and supported for those who are using it.

Comparison between raw devices and CFS

Using CFS:
1. Simple management
2. Use of OMF with RAC
3. Single Oracle software installation
4. Autoextend

Using raw devices:
1. Performance
2. Use when CFS is not available
3. Cannot be used for archive log files

You can use a cluster file system or place files on raw devices. Cluster file systems provide the following advantages:

- Greatly simplified installation and administration of RAC
- Use of Oracle Managed Files with RAC
- A single Oracle software installation
- Autoextend enabled on Oracle datafiles
- Uniform accessibility to archive logs in case of physical node failure

Raw device implications:

- Raw devices are used when a CFS is not available or not supported by Oracle
- Raw devices offer the best performance, with no intermediate layer between Oracle and the disk

What is Automatic Storage Management?

Automatic Storage Management (ASM) is a new feature in Oracle Database 10g. It integrates the file system and the logical volume manager (LVM), with the volume manager built specifically for Oracle database files. ASM can provide management for single SMP machines, or across multiple nodes of a cluster for Oracle Real Application Clusters support. ASM simplifies the administration of Oracle-related files by allowing the administrator to reference disk groups, rather than the individual disks and files, which are managed by ASM. Manual I/O tuning is eliminated: ASM distributes the input/output (I/O) load across all available resources to optimize performance. ASM has the flexibility to maintain redundant copies of data to provide fault tolerance, or it can be built on top of vendor-supplied reliable storage mechanisms. Data management in ASM is done by choosing the desired reliability and performance characteristics for classes of data, rather than by human interaction on a per-file basis. ASM gives DBAs time back by increasing their ability to manage larger databases, and more of them, with increased efficiency. It provides the database administrator with a simple storage management interface that is consistent across all server and storage platforms, and it delivers the performance of async I/O with the easy management of a file system.

Some of the key features of ASM:

- Stripes files rather than logical volumes
- Enables online disk reconfiguration and dynamic rebalancing
- Provides adjustable rebalancing speed
- Provides file-based redundancy
- Supports only Oracle files
- Is cluster aware

Why ASM?

Some of the storage management features ASM provides: striping, mirroring, asynchronous I/O, direct I/O, SAME (stripe and mirror everything) and load balancing. It is automatically installed as part of the base code set. ASM's striping and mirroring provide balanced and secure storage, and the level of redundancy and the granularity of the striping can be controlled using templates. The ASM functionality can be used in combination with existing raw and cooked file systems, along with OMF and manually managed files.

Direct I/O

By making use of direct I/O, a higher cache hit ratio can be achieved. Buffered I/O consumes important resources such as CPU and memory: with buffered I/O, Oracle blocks are cached both in the SGA and in the file system buffer cache. Buffered I/O fills the file system cache with Oracle data, whereas direct I/O leaves the file system cache free to hold non-Oracle data much more efficiently.

Key features and benefits of ASM

The ASM functionality is controlled by an ASM instance. The main components of ASM are disk groups, each of which comprises several physical disks controlled as a single unit. The physical disks are known as ASM disks, while the files that reside on the disks are known as ASM files. ASM divides a file into pieces and spreads them evenly across all the disks, using an index technique to track the placement of each piece; traditional striping techniques, by contrast, use mathematical functions to stripe complete logical volumes. ASM includes mirroring protection without the need to purchase a third-party logical volume manager. One unique advantage of ASM is that mirroring is applied on a per-file basis rather than a per-volume basis: the same disk group can contain a combination of files protected by mirroring and files not protected at all. ASM supports datafiles, log files, control files, archive logs, Recovery Manager (RMAN) backup sets, and other Oracle database file types, and it eliminates the need for a cluster logical volume manager or a cluster file system for Real Application Clusters (RAC).

Note: ASM is shipped with the database and is available as part of the base code set; there is no need for a separate installation in the custom tree installation. It is available in both Enterprise Edition and Standard Edition.

One of the flexible aspects of ASM is that it does not eliminate any existing database functionality that uses non-ASM files. Existing databases are able to operate as they always have; new files may be created as ASM files, while existing ones are administered in the old way or migrated to ASM.

At the top of the ASM hierarchy are the disk groups. Any single ASM file is contained in only one disk group; however, a disk group may contain files belonging to several databases, and a single database may use storage from multiple disk groups.

ASM files are always spread across all ASM disks in the disk
group. The ASM disks are partitioned into allocation units (AUs) of one megabyte each. An AU is the smallest contiguous disk space that ASM allocates, and ASM does not allow physical blocks to split across AUs.

ASM general architecture

To use ASM, you must start a special instance, called an ASM instance, before you start your database
instance. ASM instances manage the metadata needed to make ASM files available to ordinary database instances. Both ASM instances and database instances have access to a common set of disks called disk groups. Database instances access the contents of ASM files directly, communicating with an ASM instance only to obtain information about the layout of these files.

An ASM instance is like any other database instance except that it contains two new background processes. The first coordinates rebalance activity for disk groups and is called RBAL. The second performs the actual rebalance activity for AU movements; at any given time there can be many of these, and they are called ARB0, ARB1, and so on. An ASM instance also has most of the same background processes as an ordinary database instance (SMON, PMON, LGWR, and so on).

Each database instance using ASM has two new background processes called ASMB and RBAL. RBAL performs global opens of the disks in the disk groups. At database instance startup, ASMB connects as a foreground process to the ASM instance. All communication between the database and ASM instances is performed via this bridge, including physical file changes such as datafile creation and deletion. Over this connection, periodic messages are exchanged to update statistics and to verify that both instances are healthy.

It is quite possible to cluster ASM instances and run them as RAC, using the existing Global Cache Services (GCS) infrastructure. There is one ASM instance per node in a cluster.

Storage in Oracle Real Application Clusters

Storage for RAC databases must be shared: datafiles must reside in an Automatic Storage Management (ASM) disk group, on a cluster file system, or on shared raw devices. This must include space for an undo tablespace for each instance if you are using automatic undo management. Additionally, for each instance you must create at least two redo log files that reside on shared storage. Oracle recommends one shared server parameter file (SPFILE) with instance-specific entries; alternatively, you can use a local file system to store client-side parameter files (PFILEs). If you do not use ASM, if your platform does not support a cluster file system, or if you do not want to use a cluster file system to store datafiles, then create additional raw devices as described in your platform-specific Oracle Real Application Clusters installation and configuration guide.

Automatic Storage Management in Real Application Clusters

ASM automatically optimizes storage to maximize performance by managing the storage configuration across the disks: it evenly distributes the storage load across all available storage within your cluster database environment, and it partitions your total disk space requirements into uniformly sized units across all disks in a disk group. ASM can also automatically mirror data to prevent data loss. With these added features, ASM significantly reduces administrative overhead. As in single-instance Oracle databases, to use ASM in RAC, select ASM as your storage option when you create your database with the Database Configuration Assistant (DBCA).

Note: Using ASM in RAC does not require I/O tuning.

Automatic Storage Management components in RAC

When you create your database, Oracle creates one ASM instance on each node in your RAC environment if one does not already exist. Each ASM instance has either an SPFILE or a PFILE type parameter file. The shared disk requirement is the only substantial difference between using ASM in a RAC database and using it in a single-instance Oracle database. ASM automatically rebalances the storage load after you add or delete a disk or disk group. In a cluster, each ASM instance manages its node's metadata updates to the disk groups, and each ASM instance coordinates disk group metadata with the other nodes in the cluster. As in single-instance Oracle databases, you can use Enterprise Manager, DBCA, SQL*Plus, and the Server Control utility (SRVCTL) to administer disk groups for ASM in RAC.

What is a raw device?

A raw device is a disk drive that does not yet have a file system set up.
Raw devices are used for Real Application Clusters since they
enable the sharing of disks.The term raw devices applies to the
character oriented disk device files (as opposed to the block
oriented ones) normally found in /dev. These device files are a
part of the interface between the hardware disks and the UNIX
system software.Raw devices are character devices. A utility called
raw can be used to bind a raw device to an existing block device.
These "existing block devices" may be disks or cdroms/dvds.Raw
Partition:A raw partition is a portion of a physical disk that is
accessed at the lowest possible level. A raw partition is created
when an extended partition is created and logical partitions are
assigned to it without any formatting. Once formatting is complete,
it is called cooked partitionSCSI, SAN and NAS, iSCSIAlthough not
directly related to CFS and raw devices, questions arise around the storage technologies being used.

SCSI: Disk drives are connected individually to the host machine by small computer system interfaces (SCSI) through one of a number of disk controllers.

SAN: A Storage Area Network is a shared, dedicated high-speed network connecting storage elements and the back end of the servers.

NAS: Network Attached Storage is a special-purpose server with its own embedded software that offers cross-platform file sharing across the network.

iSCSI: Another form of network-attached storage that communicates in block mode over Ethernet (Gigabit Ethernet) to special storage subsystems. Like NFS-attached storage, iSCSI uses standard hardware and software to communicate, although a private network is recommended. Because it operates in block mode, use of iSCSI with RAC requires either a cluster file system or the use of raw volumes.

Raw devices are suitable for complex
applications like database management systems that typically do their own caching, because a raw device offers a more "direct" route to the physical device and allows an application more control over the timing of I/O to that physical device. A raw device can be bound to an existing block device (for example, a disk) and be used to perform "raw" I/O with that existing block device. Such "raw" I/O bypasses the caching that is normally associated with block devices.

On most UNIX systems it is a performance advantage to use raw device files for data storage. By using raw devices, the UNIX file system is bypassed and the operating system is able to perform more effective I/O. One drawback is that file size is constrained by the size of the partition. If the partition becomes full, the raw device file must be moved to a larger partition. In the worst case, the disk must be reformatted in order to create a larger partition.

What is a Cluster File System (CFS)?
A cluster file system (CFS) is a file
system that may be accessed (read and write) by all members in a cluster at the same time. This implies that all members of a cluster have the same view. If your platform supports an Oracle-certified cluster file system, you can store the files that Real Application Clusters requires directly on the cluster file system.

A clustered file system is a file system which is simultaneously mounted on multiple servers. There are several approaches to clustering, most of which do not employ a clustered file system. While many computer clusters don't use clustered file systems, unless the servers are underpinned by a clustered file system the complexity of the underlying storage environment increases as servers are added.

Distributed file system: the generic term for a client/server or "network" file system where the data isn't locally attached to a host.

Global file system: this refers to the namespace, so that all files have the same name and path name when viewed from all hosts. This obviously makes it easy to share data across machines and users in different parts of the organization.

OCFS2 (Oracle Cluster File System 2) is a free, open
source, general-purpose, extent-based clustered file system which Oracle developed and contributed to the Linux community, and which was accepted into Linux kernel 2.6.16. OCFS2 provides an open source, enterprise-class alternative to proprietary cluster file systems, offering both high performance and high availability. OCFS2 provides local file system semantics and can be used with any application. Cluster-aware applications can leverage parallel I/O for higher performance, and other applications can make use of the file system to provide a fail-over setup to increase availability.

Cluster file system: a distributed file system that is not a single server with a set of clients, but instead a cluster of servers that all work together to provide high-performance service to their clients. To the clients the cluster is transparent - it is just "the file system", but the file system software deals with distributing requests to elements of the storage cluster.

Shared-disk cluster file system
The most common type of
clustered file system is the shared disk file system, in which two
or more servers are connected to a single shared storage subsystem,
such as a stand-alone RAID array or SAN.

Symmetric file system: a symmetric file system is one in which the clients also run the metadata manager code; that is, all nodes understand the disk structures.

Asymmetric file system: an asymmetric file system is one in which there are one or more dedicated metadata managers that maintain the file system and its associated disk structures.

Shared-nothing clustered file system
Another clustered file system approach is to have each node use its own local storage, and communicate data changes to other nodes via some network or bus. In this case disks are not shared among nodes, but are instead dedicated to a single node and made readable and writable to other servers.

Parallel file system: file systems with support for parallel applications, where all nodes may be accessing the same files at the same time, concurrently reading and writing. Data for a single file is striped across multiple storage nodes to provide scalable performance to individual files.

SAN file system: these provide a way for hosts to share Fibre Channel storage, which is traditionally carved into private chunks bound to different hosts. To provide sharing, a block-level metadata manager controls access to different SAN devices. A SAN file system mounts storage natively in only one node, but connects all nodes to that storage and distributes block addresses to the other nodes. Scalability is often an issue because blocks are a low-level way to share data, placing a big burden on the metadata managers and requiring large network transactions in order to access data.

Oracle Clusterware
Oracle Clusterware is a portable cluster management
solution that is integrated with Oracle Database. Oracle Real
Application Clusters (Oracle RAC) uses Oracle Clusterware as the
infrastructure that binds together multiple nodes which operate as
a single server. Oracle Clusterware includes a high availability
framework for managing any application that runs on your cluster.
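Before relying on the voting disk and OCR, it is common to confirm that the clusterware stack itself is healthy. A quick read-only check with crsctl might look like the following sketch (commands as in the 10g/11g releases discussed here; output varies by version):

```shell
# Verify the health of the Oracle Clusterware stack on the local node
crsctl check crs

# Report the cluster's active version and this node's software version
crsctl query crs activeversion
crsctl query crs softwareversion
```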
The voting disk and the OCR are created on shared storage during the Oracle Clusterware installation process. Oracle Clusterware includes two important components: the voting disk and the Oracle Cluster Registry (OCR). The voting disk is a file that manages information about node membership, and the OCR is a file that manages cluster and Oracle Real Application Clusters (Oracle RAC) database configuration information.

1. Voting Disk: Manages cluster
membership by way of a health check and arbitrates cluster
ownership among the instances in case of network failures. RAC uses
the voting disk to determine which instances are members of a
cluster. The voting disk must reside on shared disk. For high
availability, Oracle recommends that you have multiple voting
disks; Oracle Clusterware supports the use of multiple voting disks.

Note: If you define a single voting disk, then you should use external mirroring to provide redundancy.

2. OCR File: Cluster
configuration information is maintained in the Oracle Cluster Registry file. OCR relies on a distributed shared-cache architecture for optimizing queries against the cluster repository. Each node in the cluster maintains an in-memory copy of the OCR, along with an OCR process that accesses its OCR cache. When OCR client applications need to update the OCR, they communicate through their local OCR process to the OCR process that is performing input/output (I/O) for writing to the repository on disk. The OCR client applications are Oracle Universal Installer (OUI), SRVCTL, Enterprise Manager (EM), Database Configuration Assistant (DBCA), Database Upgrade Assistant (DBUA), NetCA and the Virtual Internet Protocol Configuration Assistant (VIPCA). OCR also maintains dependency and status information for application resources defined within CRS, specifically databases, instances, services and node applications.

Note: The name of the configuration file is ocr.loc and the configuration file variable is ocrconfig_loc.

Oracle Cluster Registry (OCR): Maintains cluster configuration information as
well as configuration information about any cluster database within
the cluster. The OCR also manages information about processes that
Oracle Clusterware controls. The OCR stores configuration
information in a series of key-value pairs within a directory tree
structure. The OCR must reside on shared disk that is accessible by
all of the nodes in your cluster. The Oracle Clusterware can
multiplex the OCR and Oracle recommends that you use this feature
to ensure cluster high availability.

Note: You can replace a failed OCR online, and you can update the OCR through supported APIs such as Enterprise Manager, the Server Control Utility (SRVCTL), or the Database Configuration Assistant (DBCA).

srvctl
srvctl is the Oracle-recommended tool for DBAs to use to interact with CRS and the cluster registry. There are a number of tools which can be used to interface with the cluster registry and CRS; however, they are undocumented and intended only for use by Oracle Support. srvctl is a well-documented tool and it is also easy to use. srvctl must be run from the $ORACLE_HOME of the RAC database you are administering.

The basic format of a srvctl command is
srvctl <command> <target> [options]
where command is one of
enable|disable|start|stop|relocate|status|add|remove|modify|getenv|setenv|unsetenv|config
and the target, or object, can be a database, instance, service, ASM instance, or the nodeapps. options extends the use of the preceding command/target combination.

To see the online command syntax and options for each SRVCTL command, enter:
srvctl verb noun -h

SRVCTL for Administering Oracle Real Application Clusters
The Server
Control (SRVCTL) utility is installed on each node by default. You
can use SRVCTL to start and stop the database and instances, manage configuration information, move or remove instances and services, and add services.

Some SRVCTL operations store
configuration information in the Oracle Cluster Registry (OCR).
SRVCTL performs other operations, such as starting and stopping
instances, by sending requests to the Oracle Clusterware process
(CRSD), which then starts or stops the Oracle Clusterware
resources.

Some of the srvctl commands are summarized in this table:

Command: Description
srvctl add: Adds a database, instance, service or nodeapps
srvctl remove: Removes a database, instance, service or nodeapps
srvctl modify: Modifies a database, instance, service or nodeapps
srvctl disable: Disables a database, database instance, ASM instance or service
srvctl enable: Enables a database, database instance, ASM instance or service
srvctl start: Starts a database, database instance, ASM instance, service or nodeapps
srvctl stop: Stops a database, database instance, ASM instance, service or nodeapps
srvctl status: Displays the status of a database, database instance, ASM instance, service or nodeapps

As
you can see, srvctl is a powerful utility. srvctl -help displays a basic usage message, and srvctl -h displays full usage information for every possible srvctl command.

To see help for all SRVCTL commands, enter the following from the command line:
srvctl -h

To see the command syntax and list of options for each SRVCTL command:
srvctl command object -h

To see the SRVCTL version number:
srvctl -V

For example, to add named instances to a database:
> srvctl add instance -d racdb -i racinst1 -n mynode1
> srvctl add instance -d racdb -i racinst2 -n mynode2
> srvctl add instance -d racdb -i racinst3 -n mynode3

For example, to display a configured database:
> srvctl config database -d RACDB
where RACDB is the name of the database.

To stop a database and all or named instances.
The syntax is:
> srvctl stop database -d database_name [-o stop_options] [-c connect_string]
> srvctl stop instance -d database_name -i instance_name[,instance_name_list] [-o stop_options] [-c connect_string]

To stop the database and all its instances:
> srvctl stop database -d RACDB

To stop named instances:
> srvctl stop instance -d RACDB -i racinst1

Summary: A few guidelines for using SRVCTL in Real
Application Clusters: Always use SRVCTL from the ORACLE_HOME of the database that you are administering. Only run one SRVCTL command at a time for each database, service, or other object, because SRVCTL does not support concurrent execution of commands on the same object. To change your Oracle RAC database configuration, log in to the database as the oracle user.

The most common and useful srvctl commands:
srvctl start database -d database-name
srvctl stop database -d database-name
srvctl start asm -n node-name
srvctl stop asm -n node-name
srvctl start nodeapps -n node-name
srvctl stop nodeapps -n node-name
srvctl status service -d database-name -s service-name
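Putting the commands above together, a quick status sweep of a cluster might look like the following sketch (RACDB, racinst1 and racnod01 are placeholder names consistent with the earlier examples):

```shell
# Status of the database and of one named instance
srvctl status database -d RACDB
srvctl status instance -d RACDB -i racinst1

# Status of the node applications (VIP, listener, GSD, ONS) on one node
srvctl status nodeapps -n racnod01
```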