Oracle Real Application Clusters (RAC) and Oracle Clusterware 11g Release 2
Markus Michalewicz, Product Manager, Oracle Clusterware
The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle.
Agenda
Overview
Easier Installation
- SSH Setup, prerequisite checks, and FixUp scripts
- Automatic cluster time synchronization configuration
- OCR & Voting Files can be stored in Oracle ASM
Easier Management
- Policy-based and Role-separated Cluster Management
- Oracle EM-based Resource and Cluster Management
- Grid Plug and Play (GPnP) and Grid Naming Service
- Single Client Access Name (SCAN)
Summary
The Traditional Data Center: Expensive and Inefficient
Dedicated stacks, dedicated silos are inefficient:
- Sized for peak load
- Constrained performance
- Difficult to scale
- Expensive to manage
Grid Computing: Virtualize Pools and Resources
Oracle RAC One Node: Better Virtualization for Databases
A virtualized single-instance database delivers the value of server virtualization to databases on physical servers:
- Server consolidation
- Online upgrade to RAC
- Standardized deployment across all Oracle databases
- Built-in cluster failover for high availability
- Live migration of instances across servers
- Rolling patches for single-instance databases
Oracle Grid Infrastructure: The Universal Grid Foundation
- Standardizes infrastructure software and eliminates the need for 3rd-party solutions
- Combines Oracle Automatic Storage Management (ASM) & Oracle Clusterware
- Typically used by system administrators
Includes:
- Oracle ASM
- ASM Cluster File System (ACFS)
- ACFS Snapshots
- Oracle Clusterware
- Cluster Health Monitor
Oracle Database 11g Release 2: Lowering CapEx and OpEx Using Oracle RAC
Stack: Oracle RAC on Oracle ASM on Oracle Grid Infrastructure
Easier Installation
Oracle Database 11g Release 2: Easier Grid Installation and Provisioning
- New intelligent installer: 40% fewer steps to install Oracle Real Application Clusters and Oracle Grid Infrastructure
- Integrated validation and automation
- Nodes can be easily repurposed
- Nodes can be dynamically added to or removed from the cluster
- Network and storage information is read from a profile and configured automatically
- No need to manually prepare a node
Easier Grid Installation:
- Typical and Advanced Installation
- Software-only Installation for Oracle Grid Infrastructure
- Grid Naming Service (GNS) and auto-assignment of VIPs
- SSH Setup, prerequisite checks, and FixUp scripts
- Automatic cluster time synchronization configuration
- OCR & Voting Files can be stored in Oracle ASM
Secure Shell (SSH) Setup
CVU-based Prerequisite Checks and FixUp Scripts
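These checks are driven by the Cluster Verification Utility (CVU), which in 11g Release 2 can also generate fixup scripts. A sketch of a typical pre-installation run; the node names are placeholders, and the exact fixable findings depend on your platform:

```shell
# Run from the staged Grid Infrastructure media before installation.
# -fixup generates root-executable scripts that correct fixable findings
# (kernel parameters, missing OS groups, resource limits, ...)
./runcluvfy.sh stage -pre crsinst -n node1,node2 -fixup -verbose
```

The installer (OUI) runs the same checks internally and offers the generated fixup scripts for execution as root, so most findings no longer require a manual round-trip.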
Automatic Cluster Time Synchronization: Oracle Cluster Time Synchronization Service (CTSS)
- Time synchronization between cluster nodes is crucial
- Typically, a central time server, accessed via NTP, is used to synchronize time in the data center
- Oracle provides CTSS as an alternative for cluster time synchronization
- CTSS runs in one of two modes:
  - Observer mode: whenever NTP is installed on the system, CTSS only observes
  - Active mode: time in the cluster is synchronized against the CTSS master (node)
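Which of the two modes CTSS is currently running in can be checked from the command line; a minimal sketch (the exact output wording varies by version, so verify on your system):

```shell
# Reports whether CTSS is running in observer mode (NTP detected)
# or in active mode (cluster time synchronized against the CTSS master)
crsctl check ctss
```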
OCR / Voting Files stored in Oracle ASM
Create ASM Disk Group
The OCR Managed in Oracle ASM
- The OCR is managed like a datafile in ASM (a new file type)
- It adheres completely to the redundancy settings of the disk group
Voting Files Managed in Oracle ASM
- Unlike the OCR, Voting Files are stored on distinguished ASM disks
- ASM auto-creates 1/3/5 Voting Files, based on External/Normal/High redundancy and on the failure groups in the disk group
- By default there is one failure group per disk; ASM will enforce the required number of disks
- New failure group type: Quorum Failgroup

[GRID]> crsctl query css votedisk
 1.  2  1212f9d6e85c4ff7bf80cc9e3f533cc1 (/dev/sdd5) [DATA]
 2.  2  aafab95f9ef84f03bf6e26adc2a3b0e8 (/dev/sde5) [DATA]
 3.  2  28dd4128f4a74f73bf8653dabd88c737 (/dev/sdd6) [DATA]
Located 3 voting disk(s).
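Moving the OCR and the voting files into ASM is done with ocrconfig and crsctl; a sketch assuming an existing disk group +DATA (run as root on one node):

```shell
# Add an OCR location inside the ASM disk group +DATA
ocrconfig -add +DATA

# Verify OCR integrity and the configured locations
ocrcheck

# Move the voting files into +DATA; ASM then places 1/3/5 copies
# according to the disk group redundancy, as described above
crsctl replace votedisk +DATA

# Confirm the result
crsctl query css votedisk
```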
Easier Management
Oracle Database 11g Release 2: Easier Grid Management
- Oracle Enterprise Manager (EM) is able to manage the full stack, including Oracle Clusterware:
  - Manage and monitor clusterware components
  - Manage and monitor application resources
- New grid concepts:
  - Server Pools
  - Grid Plug and Play (GPnP)
  - Grid Naming Service (GNS)
  - Auto-Virtual IP assignment
  - Single Client Access Name (SCAN)
Easier Grid Management:
- OCR & Voting Files can be stored in Oracle ASM
- Clusterized Commands
- Policy-based and Role-separated Cluster Management
- Oracle EM-based Resource and Cluster Management
- Grid Plug and Play (GPnP) and Grid Naming Service
- Single Client Access Name (SCAN)
New Grid Concept: Server Pools, the Foundation for Dynamic Cluster Partitioning
- Logical division of a cluster into pools of servers
- Hosts applications, which could be databases or other applications
Why use Server Pools?
- Easy allocation of resources to workloads
- Easy management of Oracle RAC: just define instance requirements (number of nodes, no fixed assignment)
- Facilitates consolidation of applications and databases on clusters
Policy-based Cluster Management: Ensure Isolation Based on Server Pools
Policy-based management uses server pools to:
- Enable dynamic capacity assignment when needed
- Ensure isolation where necessary (dedicated servers in a cluster)
in order to guarantee that:
- Applications get the required minimum resources (whenever possible)
- Applications do not take resources from more important applications
A Server Pool is defined by 4 attributes:
- Name: the server pool name
- Min: the minimum number of servers that should run in the server pool
- Max: the maximum number of servers that can run in the server pool
- Imp: the relative importance between server pools; relevant when servers are assigned to server pools, or when servers need to be re-shuffled in the cluster due to failures
Enable Policy-based Cluster Management: Define Server Pools Using These Attributes
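The four attributes map directly onto the srvctl command line; a hedged sketch in which the pool name "backoffice", the database name, and the Oracle home path are placeholders:

```shell
# Create a server pool that should always hold at least 2 servers,
# never more than 4, with importance 10 relative to other pools
srvctl add srvpool -g backoffice -l 2 -u 4 -i 10

# Create a policy-managed database that runs in this pool
srvctl add database -d bodb -o /u01/app/oracle/product/11.2.0/dbhome_1 -g backoffice

# Inspect the resulting pool configuration
srvctl config srvpool -g backoffice
```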
Role-separated Cluster Management
- Addresses organizations with a strict separation of duties
- Role-separated management is implemented in 2 ways:
  - Vertically: use different users (and groups) for each layer in the stack
  - Horizontally: ACLs on server pools for policy-managed databases / applications
- The default installation assumes no separation of duties
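Vertical separation typically means that Grid Infrastructure and the database homes are owned by different OS users. A sketch of one common convention; the user and group names below are conventions used in Oracle's documentation, not requirements:

```shell
# OS groups for role separation (run as root)
groupadd oinstall && groupadd asmadmin && groupadd asmdba && groupadd dba

# "grid" owns Grid Infrastructure (Clusterware + ASM) ...
useradd -g oinstall -G asmadmin,asmdba grid
# ... while "oracle" owns the database homes
useradd -g oinstall -G dba,asmdba oracle
```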
Oracle EM: The New Cluster Management Tool
Oracle EM: Integrated Server Pool Management
Grid Plug and Play (GPnP): Foundation for Dynamic Cluster Management
- GPnP eliminates the need for a per-node configuration
- It is an underlying grid concept that enables the automation of operations in the cluster
- Allows nodes to be dynamically added to or removed from the cluster
- Makes it easier to build and manage large clusters
- It is the basis for the Grid Naming Service (GNS)

Technically, GPnP is based on an XML profile:
- Defines the node personality (e.g. cluster name, network classification)
- Created during installation
- Updated with every relevant change (using oifcfg, crsctl)
- Stored in local files per home and in the OCR
- Wallet-protected

GPnP is apparent in things that you do not see and that you are no longer asked for.
Grid Naming Service (GNS): Dynamic Virtual IPs and Naming
- The Grid Naming Service (GNS) allows dynamic name resolution in the cluster
- The cluster manages its own virtual IPs:
  - Removes hard-coded node information
  - No VIPs need to be requested if the cluster changes
- Enables nodes to be dynamically added to or removed from the cluster
- Defined in the DNS as a delegated domain, e.g. mycluster.myco.com
- DHCP provides IPs inside the delegated domain
Grid Naming Service (GNS): Steps to Set Up GNS
- Benefit: reduced configuration for VIPs in the cluster
- GNS is defined in the DNS as a delegated domain; the DNS delegates requests for mycluster.myco.com to the GNS
- GNS needs its own IP address (the GNS VIP); this is the only name-to-IP assignment required in the DNS
- All other VIPs and SCAN VIPs are defined in the GNS for a cluster; DHCP is used for dynamic IP assignment
- GNS is an optional way of resolving addresses and requires a new kind of configuration by the DNS administrator
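In a BIND-style corporate zone file, the delegation described above could look like this sketch; the host name gns and the GNS VIP 192.0.2.10 are placeholders:

```
; Delegate the cluster sub-domain to the GNS. The GNS VIP glue record is
; the only name-to-IP assignment the DNS administrator has to maintain.
mycluster.myco.com.      NS  gns.mycluster.myco.com.
gns.mycluster.myco.com.  A   192.0.2.10
```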
Grid Naming Service Client Connect (diagram: corporate domain, delegated cluster domain, dynamic VIP assignment)
Single Client Access Name (SCAN): The New Database Cluster Alias
- Used by clients to connect to any database in the cluster
- Removes the requirement to change the client connection if the cluster changes
- Load balances across the instances providing a service
- Provides failover between moved instances
Single Client Access Name: Network Configuration for SCAN
- Requires a DNS entry, or GNS to be used
- In DNS, SCAN is a single name defined to resolve to 3 IP addresses:

clusterSCANname.example.com IN A 133.22.67.192
                            IN A 133.22.67.193
                            IN A 133.22.67.194

- Each cluster will have 3 SCAN listeners, each combined with a SCAN VIP and defined as cluster resources
- A SCAN VIP/listener combination will fail over to another node in the cluster if the current node fails

Cluster Resources
--------------------------------------------
ora.LISTENER_SCAN1.lsnr  1  ONLINE  ONLINE  node1
ora.LISTENER_SCAN2.lsnr  1  ONLINE  ONLINE  node2
ora.LISTENER_SCAN3.lsnr  1  ONLINE  ONLINE  node3
Single Client Access Name: Easier Client Configuration
Without SCAN (pre-11g Release 2), TNSNAMES has one entry per node, and with every cluster change all client TNSNAMES need to be changed:

PMRAC =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node1)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = nodeN)(PORT = 1521))
    (CONNECT_DATA = …))

With SCAN, only one entry per cluster is used, regardless of the number of nodes:

PMRAC =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = clusterSCANname)(PORT = 1521))
    (CONNECT_DATA = …))
Connection Load Balancing Using SCAN (diagram: application server connecting through the SCAN listeners)
Summary
Shared Infrastructure vs. Dedicated Infrastructure
- Oracle RAC for HA: lower the cost of high availability
- Oracle RAC for scale-out: lower the cost of scalability
- Shared cluster, shared database, shared storage:
  - Lower infrastructure costs
  - Improved utilization
  - Storage consolidation
  - Management efficiency (shared DB)
- Lower the cost of deployments: lower CapEx, lower OpEx
The Evolution of the Grid: Lowering the Cost of Database Deployments (diagram: from a standardized infrastructure with one RAC/ASM/EM stack per database to a single EM-managed datacenter grid, "GRID for DB")
Questions and Answers
Speaker notes: client connect sequence using SCAN and GNS
1. The client sends a connection request using a connect string with the SCAN address
2. The corporate DNS server recognizes the sub-domain and forwards the request to the GNS (if GNS is in use)
3. The GNS translates the SCAN and sends the VIP addresses back to the DNS
4. The DNS sends the VIP addresses back to the client
5. The client sends a connection request to a VIP address with port (SCAN listener)
6. The SCAN listener passes the connection request to the least-loaded local listener