Cloud Infrastructure – Deploying an Elastic and Heterogeneous Application with IBM System z Running Linux
Paul Bramy – Oracle Corporation – Oracle Integrated Solutions
Didier Wojciechowski – Oracle Corporation – Oracle Integrated Solutions
Agenda
• Brief introduction to Cloud – main lessons to leverage Oracle solutions on IBM System z running Linux
• Main Oracle Database 11gR2 foundations for Cloud deployments
  – Oracle Clusterware – SCAN – multiple networks
  – Server pool management
  – Oracle RAC One Node
• Deploying an elastic and heterogeneous application – from application to infrastructure
  – WebLogic/Coherence/Java application – Oracle GRID foundations – Oracle Database features
NIST Definition of Cloud Computing
Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.
This cloud model promotes availability and is composed of:
Source: NIST Definition of Cloud Computing v15
3 Service Models: SaaS, PaaS, IaaS
4 Deployment Models: Public Cloud, Private Cloud, Community Cloud, Hybrid Cloud
5 Essential Characteristics: Resource pooling, Rapid elasticity, On-demand self-service, Measured service, Broad network access
Cloud Computing and Virtualization: Top Priorities for CIOs
Source: Gartner, "Leading in Times of Transition: The 2010 CIO Agenda"
Customer Cloud Journey
[Figure: cloud maturity grows with IT capabilities over time – from consolidation and automation of servers, application servers and databases, through virtualisation and standardisation, to shared resource pools, metering & chargeback, and hybrid operation]
Heterogeneous Cloud Platform Approach – Global Data Center Vision
[Figure: the data center stack – infrastructure (IaaS), technologies (PaaS) and applications (SaaS)]
Additional information (production: physical servers – number, models and characteristics; storage capacity; etc.)
Current and future environment
• ERP: Oracle Apps, other solutions
• Number of users?
• Specific workloads?
• SLA requirements?
• Database products?
• Middleware products?
• Number of instances, types, size of databases?
• Number of middleware instances?
• Mission critical?
• End of life?
• Utilization?
• Legacy?
• Management products?
Data center infrastructure
• Number of locations?
• Distances?
• Specialized centers?
• Acquisitions?
• Campus / remote?
Database Cloud Architectures
Server
• Enabled by server virtualization
• Each database service deployed in dedicated VMs
• VMs share a physical server
• VM-level elasticity
• Use with simple databases
Database
• Enabled by RAC
• Multiple DBs share a server pool and/or OS
• Flexible database services
• Fine-grained service-level elasticity
• Use with any database
Schema
• Enabled by RAC
• Multiple schemas share the same database
• Flexible database services
• Fine-grained service-level elasticity
• Use with most databases
[Figure: infrastructure cloud (hypervisor, one VM per database over a storage pool) versus database cloud (cluster with shared storage pool) hosting ERP, DW, Sales and CRM databases]
Database Cloud Architectures
• Reasons for adoption:
  – There is not one Cloud model that fits all of a customer's requirements
  – Customers want to combine the benefits of multiple models
[Figure: virtualization + clusterization at the database and schema level – hypervisor-based VMs and clusters sharing storage pools for ERP, DW, HR, Sales and CRM databases]
Oracle Cloud Platform Approach with IBM Infrastructure
• Applications: Oracle Applications, Siebel, Java EE applications, Fusion Apps
• Application Grid: WebLogic Server, Coherence, Tuxedo, GoldenGate
• Database Grid: Oracle Clusterware, Database, RAC, ASM, Partitioning, DB Cache, Active Data Guard, Database Security
• Operating systems: Oracle Enterprise Linux*, UEK**, Oracle Solaris, AIX, Windows
• Virtualization layers: Oracle VM for x86, PowerVM, VMware, z/VM
• Servers: IBM Power, IBM System x, IBM System z
• Storage: IBM DS5xxx, IBM DS8xxx, XIV, SVC
• Spanning IaaS, PaaS and SaaS, with provisioning, application management (monitoring, diagnostics, performance) and application testing (ATS, RAT)
* Oracle Enterprise Linux  ** Unbreakable Enterprise Kernel for Oracle Linux
[Figure: example Oracle Clusterware infrastructure on IBM Power (p7-7xx) servers – each server runs VIOS (virtualized network and disks via NPIV) over shared CPU pools; RAC databases (RAC 1–4) with ASM instances 1–5 use an ASM disk group with RAC failure groups, an ACFS volume group and OEM agents; storage is provided by EMC DMX and IBM DS5300 behind an IBM SVC infrastructure; GoldenGate feeds a replica database in an ASM disk group with single failure groups]
[Figure: Oracle Grid Infrastructure on IBM System z – Linux guests (n vCPUs) under z/VM, connected by a HiperSockets network and an external vswitch network to a z/OS task; Oracle Clusterware and ASM manage ASM disk groups for a 10gR2 database, an 11gR2 database and a shared FRA on the SAN disk subsystem]
Oracle 11gR2 Grid Infrastructure – Unified Clusterware and Storage Management
Clusterware
• A major part of Oracle's private cloud
• Integrated with Oracle Automatic Storage Management (ASM)
• Foundation for the Oracle ASM Cluster File System (ACFS)
• Foundation for Oracle Real Application Clusters (RAC)
• An infrastructure for the management of all kinds of applications
• Single Client Access Name (SCAN)
• Multiple network support
• Server pool strategy
Storage (CloudFS)
ASM
• Simplifies and automates storage management
• Integrated cluster and single-node framework
• Dynamic rebalancing
• Flexible striping and mirroring
• Optimal performance by default
• Best availability and scalability
• Manages ALL DB files and OCR/voting disks
ACFS*
• General-purpose scalable file system
• Journaling, extent-based, single node and cluster
• POSIX, X/OPEN file system solution
• Windows file system
• Accessible through NAS protocols (NFS, CIFS)
• Leverages ASM technology for volume management
• Integrated with Oracle Clusterware for cluster support
• Integrated with Oracle system management tools
Oracle Cloud Foundations with IBM Infrastructure, Including IBM System z Running Linux
Oracle Database Enterprise Edition
Grid, HA and OLTP
• In-Memory Database Cache
• Improved OLTP performance
• Online application upgrade
• Rolling cluster upgrades
• Real Application Clusters
• Advanced Compression
Data warehousing, VLDB
• Partitioning
• Advanced Compression
• OLAP
• Data Mining
• ETL
• Connectors
Information management
• Spatial
• Gateways
• Secure Enterprise Search
• ETL
• Connectors
Management
• Diagnostics
• Tuning
• Configuration Management
• Change Management
• Real Application Testing
Security
• Advanced Security Option
• Label Security
• Database Vault
• Audit Vault*
• Data Masking
• Total Recall
HA, DR and active-active replication
Real Application Clusters
• 24/7 availability – continuous uptime for database applications
• Highest availability
• On-demand flexible scalability
• Lower computing costs
• World-record performance
RAC One Node – unlocks the benefits of the database cloud for single-instance databases
• Automated failure discovery
• Immediate failover
• Online application of critical/security patches
• Online OS upgrades
• Online storage upgrades
• Online server replacement
Active Data Guard – ship from memory
• SYNC or ASYNC
• Simple one-way replication
• Standby open read-only
• Zero I/O overhead, near-zero primary performance impact
• Standby database is an exact physical replica
• No data type or other restrictions
• Integrated with the Oracle kernel
GoldenGate* – read and ship from redo logs
• ASYNC only
• Advanced, multi-master replication*
• Target open read-write
• I/O overhead and capture processing on the primary
• Replica is a logical copy maintained using SQL
• Data type and other restrictions
• External to the Oracle Database
Oracle Application Grid
WebLogic Server
• JMS performance
• Clustering, RAC integration
• Rolling cluster upgrades
• Overload protection
• Server and service migration
• WAN/MAN clusters for DR
Coherence
• Dynamic scale-out of web apps
• Unparalleled performance and availability
• Protection against backend failure
Tuxedo
• Scale-out
• Instrumentation
• Deploys to commodity H/W with predictable behavior
JRockit*
• Code optimization
• Diagnostics and tuning
• Predictable performance
Agenda
• Brief introduction to Cloud – main lessons to leverage Oracle solutions on IBM System z running Linux
• Main Oracle Database 11gR2 foundations for Cloud deployments
  – Oracle Clusterware – SCAN – multiple networks
  – Server pool management
  – Oracle RAC One Node
• Deploying an elastic and heterogeneous application – from application to infrastructure
  – WebLogic/Coherence/Java application – Oracle GRID foundations – Oracle Database features
Single Client Access Name (SCAN) – The New Database Cluster Alias
• Used by clients to connect to any database in the cluster
• Removes the requirement to change the client connection if the cluster changes
• Load-balances across the instances providing a service
• Provides failover between "moved instances"
[Figure: clients connect via the cluster SCAN name to Linux guests (n vCPUs) under z/VM running Oracle Clusterware and ASM, with HiperSockets and external vswitch networks and a z/OS task]
Single Client Access Name (SCAN) – Network Requirements
Two options:
1. Define the SCAN in your corporate DNS (Domain Name Service)
2. Use the Grid Naming Service (GNS); the SCAN will be created during cluster configuration
Note: for a test environment without DNS, you can install with the SCAN resolving to one IP in /etc/hosts.

sales1-scan.example.com IN A 133.22.67.194
                        IN A 133.22.67.193
                        IN A 133.22.67.192
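With the DNS entry above in place, the round-robin resolution can be checked from any client. A minimal sketch, reusing the example SCAN name from this slide:

```shell
# The DNS server should return all three A records,
# rotating their order on successive queries
nslookup sales1-scan.example.com

# Or list just the addresses with dig
dig +short sales1-scan.example.com
```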
Single Client Access Name (SCAN) – Easier Client Configuration
• Without SCAN (pre-11g Release 2), TNSNAMES has one entry per node; with every cluster change, all client TNSNAMES need to be changed:

PMRAC =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node1)(PORT = 1521))
    …
    (ADDRESS = (PROTOCOL = TCP)(HOST = nodeN)(PORT = 1521))
    (CONNECT_DATA =
    … ))

• With SCAN, only one entry per cluster is used, regardless of the number of nodes:

PMRAC =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = clusterSCANname)(PORT = 1521))
    (CONNECT_DATA =
    … ))
Single Client Access Name (SCAN) – Network Configuration
• Requires a DNS entry or GNS for full functionality
• In DNS, the SCAN is a single name defined to resolve to three IP addresses:

clusterSCANname.example.com IN A 133.22.67.194
                            IN A 133.22.67.193
                            IN A 133.22.67.192

• Each cluster will have three SCAN listeners, each combined with a SCAN VIP, defined as cluster resources:

Cluster Resources
--------------------------------------------
ora.LISTENER_SCAN1.lsnr  1  ONLINE  ONLINE  node1
ora.LISTENER_SCAN2.lsnr  1  ONLINE  ONLINE  node2
ora.LISTENER_SCAN3.lsnr  1  ONLINE  ONLINE  node3

• A SCAN VIP/listener pair fails over to another node in the cluster if its current node fails
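On a running cluster, the SCAN VIPs and SCAN listeners shown above can be inspected with srvctl. A sketch; the exact output depends on your configuration:

```shell
# Show the SCAN name and its three VIP addresses
srvctl config scan

# Show the SCAN listeners and where they currently run
srvctl config scan_listener
srvctl status scan_listener
```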
Single Client Access Name (SCAN) – How to Parameterize Load Balancing
• Load balancing using SCAN is still based on these parameters:
  – local_listener
  – remote_listener
• With Oracle Database 11g Release 2, the following configuration is the default for a newly DBCA-created DB:

NAME             TYPE    VALUE
---------------- ------- ------------------------------------------------
local_listener   string  (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=133.22.67.111)(PORT=1521))))
remote_listener  string  sales1-scan.example.com:1521

• Note the host:port notation of the remote_listener for SCAN
• More information: "Oracle Real Application Clusters 11g Release 2 – Overview of SCAN" on http://www.oracle.com/goto/rac
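If remote_listener does not point at the SCAN (for example after an upgrade), it can be set manually. A sketch, reusing the SCAN name and port from this slide:

```shell
# host:port notation, applied to all instances
sqlplus -s / as sysdba <<'EOF'
ALTER SYSTEM SET remote_listener='sales1-scan.example.com:1521' SCOPE=BOTH SID='*';
SHOW PARAMETER remote_listener
EOF
```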
Single Client Access Name (SCAN) – Oracle Versions Impact

Oracle client version        Oracle DB version            Comments
---------------------------  ---------------------------  ------------------------------------------
Oracle Database 11g Rel. 2   Oracle Database 11g Rel. 2   No change required
Oracle Database 11g Rel. 2   Pre-11g Rel. 2               Add the SCAN VIPs as hosts to the REMOTE_LISTENER parameter
Pre-11g Rel. 2               Oracle Database 11g Rel. 2   Change the client TNSNAMES.ora to include the SCAN VIPs
Pre-11g Rel. 2               Pre-11g Rel. 2               No change required (node VIPs can be used), but use of SCAN is recommended
Multiple Subnet Support in the Cluster
• "Multiple subnet support" for the cluster was introduced with Oracle Clusterware 11g Release 2
• Creation of the respective VIP resource is required first
• Further application VIPs need to be created to support the applications (parallel access from z/OS and external application tiers)
[Figure: four Linux guests under z/VM carry node VIPs 142.122.33.1–4 on the green network (142.122.33.x, external vswitch) and app VIPs 192.168.2.10–11 on the blue network (192.168.2.x, HiperSockets to a z/OS task); Oracle Clusterware and ASM span the guests]
Multiple Subnet Support in the Cluster
• In Oracle Clusterware 11g Release 2 the procedure was:

root> srvctl add vip -n rac1 -k 2 -A rac1-prisu/255.255.255.0
root> srvctl config vip -n rac1
VIP exists.: rac1
VIP exists.: /rac1-prisu/192.168.2.10/255.255.255.0
VIP exists.: rac1
VIP exists.: /rac1-vip/10.1.1.11/255.255.255.0/eth0

[GRID]> crsctl stat res -t | grep network
ora.net1.network
ora.net2.network    <- this network resource was created with the new VIP
Multiple Subnet Support in the Cluster
• After the new network resource is created, the app VIPs get created:

$GRID_HOME/bin/appvipcfg create -network=2 -ip 192.168.2.11 -vipname=appVIP2 -user=root

• The "network resource" (ora.net#.network) was introduced with 11.2.0.1:
  – It monitors the interface it is assigned to (e.g. eth0)
  – Each network resource represents one subnet in the cluster
  – SCAN, listeners and VIPs (node and app VIPs) depend on the network resource
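Once created, the application VIP is an ordinary cluster resource and can be started and checked like any other. A sketch, using the appVIP2 name from the command above:

```shell
# Start the application VIP and see which node hosts it
$GRID_HOME/bin/crsctl start resource appVIP2
$GRID_HOME/bin/crsctl status resource appVIP2
```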
Multiple Subnet Support in the Cluster
• In 11.2.0.1 the network resource was implicitly managed
• In 11.2.0.2 the network resource is explicitly managed using SRVCTL:

[GRID]> srvctl add network -h
  Adds a network configuration to the Oracle Clusterware.
[GRID]> srvctl modify network -h
  Modifies a network configuration in the Oracle Clusterware.
[GRID]> srvctl config network -h
  Displays the configuration information for the networks registered in the Oracle Clusterware.
Two Management Styles for Oracle RAC
• Administrator managed
  – Specifically define where the database should run with a list of servers
  – Define where services should run within the database
• Policy managed
  – Define the resource requirements of the workload
  – Enough instances are started to support the workload requirements
  – Goal: remove hard-coding of a service to a specific instance or node
Server Pool
• Logical division of the cluster into pools of servers
• Applications (including databases) run in one or more server pools
• Managed by crsctl (applications) and srvctl (Oracle databases)
• Defined by three attributes (min, max, importance) or a defined list of nodes
  – Min: minimum number of servers (default 0)
  – Max: maximum number of servers (default 0 or -1)
  – Importance: 0 (least important) to 1000

srvctl modify srvpool -g <name> -u <max>
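The three attributes map directly onto srvctl options. A sketch; the pool and database names (erp_pool, erpdb) are made up for illustration:

```shell
# Create a pool holding between 1 and 3 servers, importance 10
srvctl add srvpool -g erp_pool -l 1 -u 3 -i 10

# Place an existing database under policy management in that pool
srvctl modify database -d erpdb -g erp_pool

# Inspect the pool and the servers it currently owns
srvctl config srvpool -g erp_pool
srvctl status srvpool -g erp_pool -a
```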
Server Pool Example – Instance View
[Figure: an LPAR under z/VM with four Linux guests sharing storage (OCR and voting disks, ASM disk groups, database/control files, redo/archive logs for all instances); Oracle Clusterware, ASM, GNS, node VIPs 1–4 and three SCAN listeners span the guests; Database 1 (min x, max x, imp x) runs instances 1–2 with their listeners, Database 2 (min x, max x, imp 4) runs instances 1–2 on the remaining guests; networking via external vswitch and a HiperSockets cluster interconnect]
Cluster Management via Server Pools
[Figure: three LPARs (each with # IFLs, memory, # OSA cards, # FC cards) run z/VM with Linux guests, plus a z/OS system; Oracle Clusterware and ASM span the guests over shared storage (OCR and voting disks, ASM disk groups); server pools: ERP Financial (min 3, max 3, imp 10), Siebel (min 2, max 3, imp 3), OBI (min 1, max 2, imp 1), plus a free pool]
Cluster Management via Server Pools
[Figure: the same cluster after adding a Back Office pool – ERP Financial (min 3, max 3, imp 10), Siebel (min 2, max 3, imp 3), OBI (min 1, max 2, imp 1), Back Office; the free pool shrinks accordingly]
Cluster Management via Server Pools
[Figure: the same cluster after re-tuning pool attributes – ERP Financial (min 3, max 3, imp 10), Siebel (min 1, max 3, imp 4), OBI (min 1, max 2, imp 2), Back Office; servers are redistributed according to the new min/max/importance values]
RAC One Node – Infrastructure Overview
• Single cluster foundation (Oracle Clusterware)
• Shared disk layer (ASM)
• Multiple LPARs and Linux guests
• Multiple single instances per Linux guest
[Figure: three LPARs under z/VM with Linux guests and a z/OS system; Oracle Clusterware and ASM over shared storage (OCR and voting disks, ASM disk groups) host single instances 1–5 across the guests]
RAC One Node Deployment – Instance Caging
[Figure: the same cluster; a Linux guest with 5 virtual CPUs hosts single instance 1 (limited to up to 4 CPUs) and single instance 2 (limited to up to 2 CPUs) side by side]
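Instance caging itself is configured inside each instance by capping cpu_count and activating a Resource Manager plan. A sketch matching the limits in the figure, run once per instance (DEFAULT_PLAN ships with the database):

```shell
# Cage this instance to at most 4 of the guest's 5 virtual CPUs
sqlplus -s / as sysdba <<'EOF'
ALTER SYSTEM SET cpu_count = 4 SCOPE=BOTH;
-- a Resource Manager plan must be active for the cap to be enforced
ALTER SYSTEM SET resource_manager_plan = 'DEFAULT_PLAN' SCOPE=BOTH;
EOF
```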
RAC One Node Deployment – Omotion
• Patch Oracle binaries, modify Linux parameters, etc.
[Figure: the same cluster; single instance 2 is relocated online from one Linux guest to another while instances 1, 3, 4 and 5 keep running]
RAC One Node Deployment – Omotion
• Restart the instance service on the target guest
[Figure: the same cluster; single instance 2 now runs on the target Linux guest]
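From 11.2.0.2 onwards, the migration shown above is driven through srvctl (11.2.0.1 shipped a separate Omotion utility). A sketch; the database and node names are illustrative:

```shell
# Relocate the RAC One Node database online to another server;
# -w is the timeout in minutes for existing sessions to drain
srvctl relocate database -d rac1db -n targetnode -w 30

# Verify where the database now runs
srvctl status database -d rac1db
```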
When To Use What?
• Oracle Enterprise Edition / Oracle Clusterware protection
  – Standard HA requirements: tolerate unplanned and planned outages
  – Fits with a single-instance strategy
  – Available with the Oracle 10gR2 database
  – Failover protection with minimal downtime
• Oracle RAC One Node
  – Faster failover + Omotion
  – Fits within a single server
  – Online scale-out to multi-node RAC
• Oracle RAC
  – Business-critical applications: almost zero downtime
  – Performance-intensive applications requiring horizontal scalability
Agenda
• Brief introduction to Cloud – main lessons to leverage Oracle solutions on IBM System z running Linux
• Main Oracle Database 11gR2 foundations for Cloud deployments
  – Oracle Clusterware – SCAN – multiple networks
  – Server pool management
  – Oracle RAC One Node
• Deploying an elastic and heterogeneous application – from application to infrastructure
  – WebLogic/Coherence/Java application – Oracle GRID foundations – Oracle Database features
Oracle Fusion Middleware and WebLogic
Proven to outperform – best foundation for the Oracle portfolio – lowest operational cost
• WebLogic Server: Java EE reliability, availability, scalability and performance
• Coherence EE: high-performance, reliable scale-out for Java, C++ and .NET
• JRockit Real Time*: high-performance JVM with extremely low latency
WebLogic Suite sits alongside Java EE/ISV apps, SOA Suite, WebCenter Suite, Content Management Suite, Identity Management Suite and Business Intelligence Suite, with Enterprise Manager for admin and operations and JDeveloper/Eclipse as development tools
Oracle Fusion Middleware and WebLogic – Coherence In-Memory Data Grid
• Memory spans multiple machines (nodes)
• Online addition/removal of nodes
• Automatically partitions and exploits all memory
• Reliability through redundancy
• Performance through parallelization
• Scales linearly to thousands of nodes
Oracle GridLink Model for Java Applications
• Deploy the Java application in a heterogeneous environment including IBM System z running Linux
• Implement a highly available end-to-end architecture: database and application layers
• Start application services where and when resources are available
[Figure: the Obay web application and bidding engine run on a WebLogic Server application server; a multi data source (APPLI) with members DS1 and DS2 connects the Java objects through SCAN to the BILB database, where the APPLI service runs on nodes BILB1 and BILB2]
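The multi data source above ultimately resolves to JDBC connect descriptors; with SCAN, a single connect string per cluster is enough. A sketch of the thin-driver URL a data source member might use (the host is the slide's example SCAN name, and APPLI is the service from the figure):

```
url=jdbc:oracle:thin:@//clusterSCANname.example.com:1521/APPLI
driver=oracle.jdbc.OracleDriver
```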
Deploy a Java Application in a Heterogeneous Environment
• Deploy the final application in a heterogeneous model
  – Database hosting the OBAY schemas
  – Application tier on a specific server
• High availability and scalability at the application layer
  – Create a Coherence cluster
  – Create a WebLogic cluster
  – Deploy new members on separate servers
• Database layer
  – Provision a new Linux guest integrated with the application layer
  – Leverage resources where available
  – Extend the Coherence cluster onto a Linux guest
  – Extend the WebLogic domain onto Linux on z
• Elaborate a D/R strategy
  – Deploy Active Data Guard
[Figure: the full deployment – the OBAY web application, bidding engines and Java objects run on WebLogic Servers across application servers and Linux guests, each with a multi data source (APPLI, members DS1/DS2); two Oracle Clusterware/ASM clusters of Linux guests (n vCPUs) under z/VM, connected by HiperSockets and external vswitch networks to a z/OS task, hold ASM disk groups for the 11gR2 databases and a shared FRA on the SAN disk subsystem; Active Data Guard replicates between the clusters]