IBM Light Benchmark offering for Oracle solutions on System z
Purpose: Proof of Concept
• Validation of new System z solutions
• Validation of new Oracle releases
• Validation of new System z features usage
• Validation of customer applications before a benchmark

Terms
• 1 to 4 weeks
• Platform accessed remotely through OpenVPN
• Oracle AS on one Linux guest + Oracle RAC on 2 Linux guests (cluster), 4 GB and 4 CPs per guest

Support
• First-level support provided by IBM tech center staff
• Second-level support from the System z Benchmark Center and the Oracle/IBM Joint Solution Center
[Diagram: Oracle AS (HTTP Server, J2EE engine, Oracle SOA) on one Linux guest (SLES 9, 4 shared CPs, 4 GB memory), connected via HiperSockets to a two-node Oracle RAC cluster using ASM on Linux guests (SLES 9/10 or RHEL 4/5, 4 shared CPs, 4 GB memory each)]
05/08/09
Customer expectations
Oracle / IBM Integrated Solutions: what we learned from our customer engagements.
1. Reliability
2. Open Standards
3. High Availability
4. Virtualization
5. Agility / Provisioning
6. Security
7. Performance
8. Manageability / Monitoring
9. Integration
10. Application Portfolio
IBM hardware stack
• z10 processor and architecture
• z10 + z/VM + LPAR
• z/VM
• Linux
• Crypto Card

Oracle stack
• Linux
• Oracle RAC, CRS
• Oracle's Grid Architecture (ASM, CRS, RAC)
• Data Vault / crypto integration
• Oracle DB and options
• Audit Vault / Grid Control
• Oracle Transparent Gateway
• Oracle Applications
Agenda
• Oracle/IBM Joint Solutions Center
• IBM System z running Linux: Standardize & Virtualize Oracle Solutions
• IBM System z running Linux & Oracle Maximum Availability
Virtualized Oracle Solutions with IBM System z running Linux (1/2)
• Application support layer
Open, reliable operating system
Virtual server awareness infrastructure
Enterprise applications

• Hypervisor layer (z/VM)
Shared-memory based virtualization model
Highly granular resource sharing and simulation
Flexible virtual networking
Resource control and accounting
Server operation continuity (failover)
Server maintenance tools and utilities

• Hardware layer (z/Architecture)
Legendary reliability, scalability, availability, security
Logical partitioning (LPAR)
Processor and peripheral sharing
Inter-partition communication
Virtualization support at the hardware instruction level (PR/SM)
Application Layer
Hypervisor Layer
Hardware Layer
Virtualized Oracle Solutions with IBM System z running Linux (2/2)
Provisioning proceeds in four phases (Phase I to Phase IV), each layer captured as a customized building block image:

• Project layer → project image: database design, database schema, user responsibilities, data protection, …
• z/VM layer → z/VM image: memory, processors, network, disks, security, …
• Linux layer → OS image: Linux distribution, Linux configuration, RPMs, OS user privileges, …
• Oracle layer → Oracle image: Oracle products, Oracle setups, Oracle patches, database security, …
Marketplace unique capability: Linux provisioning using z/VM virtualization
What does System z bring to Linux?
• The most reliable hardware platform available
Redundant processors and memory
Error detection and correction
Remote Support Facility (RSF)
• Designed to support mixed workloads
Allows consolidation while maintaining one server per application
Complete workload isolation
High-speed inter-server connectivity
• Centralized Linux systems are easier to manage
• Scalability
System z10 EC scales to 64 application processors
System z9 EC scales to 54 application processors
System z9 BC scales to 7 application processors
Up to 8 (z9) or 11 (z10) dedicated I/O processors
Hundreds of Linux virtual servers
Integrate your System z in your SAN infrastructure
• z/VM support for SCSI industry-standard devices
System z attachment to SCSI devices is provided by FCP devices
z/VM provides native support for SCSI disks for paging, spooling and other system devices
Support is provided by emulating SCSI disk LUNs to VM as 9336 FBA 512-byte block DASD
IPL, Dump, and Service from/to SCSI disk LUNs is provided to achieve a SCSI-only VM environment
SCSI-only as well as mixed SCSI and ECKD environments are supported
z/VM provides multipath support in order to take advantage of hardware redundancy
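The FBA emulation above is configured on the z/VM side with CP EDEVICE definitions. A minimal sketch of the idea; the device numbers, WWPN and LUN below are placeholders, not values from this deck:

```text
/* CP commands (privileged console) defining an emulated FBA device   */
/* (EDEV 0200) backed by a SCSI LUN reached through FCP device B100:  */
SET EDEVICE 0200 TYPE FBA ATTRIBUTES SCSI FCP_DEV B100 WWPN 5005076300C300AA LUN 4010400000000000

/* Bring the emulated device online so CP can use it, e.g. for paging */
VARY ONLINE 0200

/* To make the definition persistent, an equivalent EDEVICE statement */
/* can be placed in the SYSTEM CONFIG file.                           */
```

The guest then sees an ordinary 9336-style FBA minidisk, which is what lets a SCSI-only z/VM environment IPL, dump, and page without any ECKD DASD.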
Integrate your System z in your SAN infrastructure
z/VM multipath implementation
Integrate your System z in your SAN infrastructure
• Linux on IBM System z supports FCP devices in the following configurations:
In native LPAR mode
As a z/VM guest
• When running as z/VM guests, Linux systems can use SCSI devices in the following ways:
Direct LUN attachment
As emulated 9336 FBA 512-byte block DASD
• Linux uses the zfcp driver to exploit the SCSI architecture
• SCSI-only as well as mixed SCSI and ECKD environments are supported
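For direct LUN attachment, the zfcp driver is driven through sysfs. A sketch of the usual sequence, assuming a hypothetical FCP device bus-ID, WWPN and LUN (placeholders, not values from this deck):

```shell
# Bring the FCP subchannel online
chccwdev -e 0.0.b100

# Attach a LUN behind the remote storage port
# (on older kernels the port must first be added via .../port_add;
#  newer kernels discover ports automatically)
echo 0x4010400000000000 > \
    /sys/bus/ccw/drivers/zfcp/0.0.b100/0x5005076300c300aa/unit_add

# Verify the attached SCSI devices
lszfcp -D
lsscsi
```

Distribution tools (YaST on SLES, `/etc/zfcp.conf` on RHEL) persist the same attributes across reboots.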
Integrate your System z in your SAN infrastructure
• Linux on IBM System z supports:
Multipath to SCSI devices
N_Port ID Virtualization (NPIV)
Allows FCP WWPN virtualization
Enables the standard methods for SAN infrastructure access restriction (LUN masking, zoning)
Available on z9 EC, z9 BC and z10 processor types
Requires NPIV-enabled SAN switches
SAN Volume Controller
Virtualizes your storage
Use multiple storage subsystems (different models from different vendors)
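Multipathing over two FCP channels is typically handled by device-mapper multipath in the Linux guest. A minimal sketch, assuming a generic configuration (options and init-script names vary by distribution and era):

```shell
# Minimal device-mapper multipath setup for redundant FCP paths
cat > /etc/multipath.conf <<'EOF'
defaults {
    user_friendly_names yes     # map WWIDs to mpathN aliases
    failback            immediate
}
EOF

/etc/init.d/multipathd start    # SLES/RHEL of that era; systemctl today
multipath -ll                   # list multipath maps and their path states
```

Each LUN then appears once as `/dev/mapper/mpathN`, surviving the loss of either FCP channel or fabric.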
[Diagram: System z, System p and System x servers in a switched fabric topology, attached to a DS8000 and to storage subsystems from EMC, HP, Hitachi, Sun and IBM]
Customers are already "Building Up"…
Business Logic Protection / Infrastructure Modernization
• Integrate existing z/OS applications through Oracle integration tools (OAM for CICS)
• Middleware integration
• Oracle solutions integrated with z/OS

Infrastructure Simplification
• Simplify the global infrastructure on IBM System z with Oracle solutions
• Infrastructure simplification to reduce complexity and increase resource utilization
• Standardize the database pool (Oracle & MS SQL) to simplify management

New Workloads on System z - Linux
• Applications based on Oracle 10g Database & 10g AS + SOA can be fully implemented on System z running Linux
• More and more ISVs evaluate their applications on System z running Linux
• Increase footprint with System z

Applications on System z Linux
• Leverage technology expertise for their core applications
• Oracle iFlex: full System z architecture
• Oracle EBS, PeopleSoft, Siebel: split tier
Available Oracle solutions on IBM System z running Linux
Linux on System z
Oracle Database
FusionMiddleware
Applications
Enterprise Manager
• Data Solutions
Oracle Database EE
Oracle Database SE
Oracle Database client
Oracle Warehouse Builder
Oracle Business Intelligence EE (split tier)
• Middleware Solutions
Oracle Application Server
Oracle Containers for J2EE (OC4J)
Oracle TopLink
Oracle AS Metadata Repository Creation Assistant
• Management Solutions
Oracle Clusterware
Oracle Configuration Manager
Oracle Grid Control Agent
• Integration Solutions
Oracle Transparent Gateway for DRDA
• Applications
PeopleSoft Enterprise (split tier)
Siebel (split tier)
eBusiness Suite (split tier)
Oracle MAA: Clusterware Architecture (3/3)
Increase Linux serviceability and availability; improve the availability of your applications through CRS services

• Oracle Clusterware is a complete cluster software solution
• Including advanced functionality:
Fast Application Notification (FAN)
Support for 3rd-party cluster software
HA API for all kinds of applications
Fully integrated with Oracle RAC
• Low cost and flexibility:
No need to purchase additional software
Easy to install and manage
Supports 100 nodes on all OSes certified for Oracle RAC
Single-vendor support
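Protecting a generic application with 10g Clusterware follows a profile/register/start pattern. A sketch under stated assumptions: the resource name, VIP resource and script path are hypothetical, and exact `crs_profile` options should be checked against the 10g documentation:

```shell
# Action script: Clusterware invokes it with start/stop/check
cat > /opt/crs/scripts/myapp.sh <<'EOF'
#!/bin/sh
case "$1" in
  start) /opt/myapp/bin/myapp --daemon ;;          # hypothetical application
  stop)  pkill -f myapp ;;
  check) pgrep -f myapp >/dev/null || exit 1 ;;    # non-zero = resource down
esac
exit 0
EOF
chmod +x /opt/crs/scripts/myapp.sh

# Create a profile, then register and start the protected resource
crs_profile -create myapp -t application \
            -a /opt/crs/scripts/myapp.sh -r myapp.vip
crs_register myapp
crs_start myapp
crs_stat -t       # show resource state across the cluster
```

On failure of the `check` callback, CRS restarts the application or fails it over to the surviving node together with its application VIP.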
[Diagram: two Linux nodes (Linux 1, Linux 2), each running Oracle Clusterware with customized action scripts protecting an application behind an application VIP (Virtual IP 1, Virtual IP 2) with rules/dependencies; shared R/W disks for cluster management; a private cluster interconnect carries the heartbeat; after a node failure, both VIPs run on the surviving node]
Agenda
• Oracle/IBM Joint Solutions Center
• IBM System z running Linux: Standardize & Virtualize Oracle Solutions
• IBM System z running Linux & Oracle Maximum Availability
• Eliminates the need for a conventional file system and volume manager
• ASM extends SAME (Stripe and Mirror Everything)
• Improved performance, scalability, and reliability
[Diagram: "conventional wisdom" (without ASM) shows disks 1-3 each provisioned separately; with ASM the same disks are pooled into a disk group managed by an ASM instance serving the Oracle DB instance. Provision storage when you need it… and save money]
ASM is Oracle’s integrated Clusterware
• Capacity on demand
Add/drop disks online
Automatic I/O load balancing
Stripes data across disks to balance load for the best I/O throughput
Automatic mirroring and striping
• Easy to manage
• Can only host datafiles, not binaries
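The online add/drop capability boils down to a couple of SQL statements against the ASM instance. A sketch using 10g syntax; the disk group name and device paths are placeholders:

```shell
# Run against the ASM instance (e.g. ORACLE_SID=+ASM)
sqlplus -S / as sysdba <<'SQL'
CREATE DISKGROUP data01 NORMAL REDUNDANCY
  FAILGROUP fg1 DISK '/dev/mapper/mpatha'
  FAILGROUP fg2 DISK '/dev/mapper/mpathb';

-- Add capacity online; ASM rebalances extents automatically
ALTER DISKGROUP data01 ADD DISK '/dev/mapper/mpathc';
SQL
```

Combined with z/VM, a new LUN can be attached to the running Linux guest and folded into the disk group without stopping the database.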
Oracle MAA: Automatic Storage Management (2/2)
[Diagram: a z/VM LPAR (IFLs, memory, OSA) hosting a single-instance database DB01 protected by CRS + ASM in a failover configuration; a standby CRS + ASM guest is reachable over a HiperSocket, with VIP01/VIP02, voting disk and OCR; ASM disk groups DB01 and DB02 share a common Flash Recovery Area]
• Integrated with CRS to be used as a cluster logical volume manager
• Takes advantage of running Linux under z/VM to add and remove storage dynamically
• Manage your disk groups from a central point of administration: the Grid Control server (Oracle Enterprise Manager)
Agenda
• Oracle/IBM Joint Solutions Center
• IBM System z running Linux: Standardize & Virtualize Oracle Solutions
• IBM System z running Linux & Oracle Maximum Availability
• IBM / Oracle MAA Design Review
• Recommendations
• Q&A
Customer overview
• US-based financial company
• Customer in a Unix environment
132 processors
164 GB memory
SAN storage
500 GB database size
• Application
Custom developed
Over a million transactions during peak hours
• Problem
Vertical scalability
PoC environment
• System z (30+ IFLs)
• Linux (SLES 10)
• Oracle 10.2.0.3
• Stand-alone database
• Grid agent
• Custom-developed application
• A lot of chatter between the application and the database
• The application was divided into a sub-set application layer to co-exist alongside the database layer
PoC testing
• Simulated workload
• Single LPAR with just Linux
Co-existing sub-set application
Database server
• Two Linux guests under z/VM
Sub-set application on one Linux guest
Database server on the other Linux guest
• Production test to simulate the expected peak load for a period of 20 minutes
25%, 50%, 75%, 100% load
Throughput time, pending transactions at any time
Problems / solutions during the PoC
• Every transaction needs a commit (logs filled too quickly; log I/O activity in the top 5 timed events)
Redistributed the redo log files
Increased the redo log file size
Relocated the undo tablespace
• I/O issues
Increased PCTFREE for some indexes
disk_asynch_io=true
filesystemio_options=setall
Unset DB_FILE_MULTIBLOCK_READ_COUNT
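The parameter changes above might be applied as follows; a sketch assuming 10g with an spfile in use (log file size and group number are illustrative):

```shell
sqlplus -S / as sysdba <<'SQL'
-- asynchronous + direct I/O for datafiles
ALTER SYSTEM SET filesystemio_options = SETALL SCOPE=SPFILE;

-- revert the multiblock read count to its self-tuned default
ALTER SYSTEM RESET db_file_multiblock_read_count SCOPE=SPFILE SID='*';

-- larger redo log group so log switches happen less often
ALTER DATABASE ADD LOGFILE GROUP 4 SIZE 512M;
SQL
```

Both parameter changes take effect at the next instance restart, since they are written to the spfile only.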
Problems / solutions during the PoC
• Flash recovery
Disabled flashback recovery
• SGA adjustment:
shared_pool_size
db_cache_size (fixed)
• Identified poorly performing SQL queries
Recommended that the customer change the select statements
A lot of buffer gets
Full table scans
• Upgraded from Oracle 9i
Removed the index hints
Oracle MAA design review
• Design review
Oracle and IBM conducted an MAA design review with the customer
Two-day on-site review
• What we found
No virtualization
No ASM
No high availability
No DR
Basically, no MAA
Oracle MAA design review
• IBM / Oracle joint recommendations
Upgrade to 10.2.0.4
Fix the identified queries
Explore the possibility of using stored procedures
Optimize some more Oracle initialization parameters (9i to 10g)
Work with the customer to optimize SQL*Net messages
Segregate the DB server from the AS
Implement ASM for storage management
Implement CRS for single-instance protection
Implement RAC for the long run
Implement disaster recovery using Data Guard
Client #1
• 24-IFL z10 – Oracle RAC and ASM running under Red Hat 4, with disaster recovery to another data center.
• Mission-critical system.
• DASD storage – RAC under z/VM using ASM.
• Biggest pitfall – minidisk cache left on when using 2 LPARs: different SCNs (System Change Numbers) between the LPARs caused a database crash.
• Why the project was so successful: ASM, RAC, z/VM IFLs; hit the dates with flying colors by being able to provision quickly.
Client #2
• 8 z9 IFLs – DB2 transactional database connected via HiperSockets to an Oracle data warehouse under System z Linux, z/VM 5.3.
• Cut the database load time from DB2 to the distributed servers from days, and the key seasonal peak-load update period from a week to one day.
• Biggest pitfall – testing and inefficient code; they threw hardware at the problem, which did not scale well.
• Why the project was successful: ASM's ability to spread the workload across many disk storage devices; the ability to leverage HiperSockets; and the fact that they had no more data center floor space – they had the mainframe with capacity and simply added IFLs.
Almost Live Client #3
• 100 IFLs – z10 Oracle RAC environment across 2 z10s with Oracle ASM; 16 IFLs per machine for Oracle production.
• 35 TB database and 45 TB Flash Recovery Area (FRA).
• Pitfall – upgraded to 10.2.0.4 and hit oprocd.bin reboots (set diagwait to 13), and had issues with z/VM cleaning pages suspending the Linux guest for up to 12 seconds, causing node evictions.
• Success factor – the project is getting very high I/O throughput, inserting 5.79 billion records in a 7-hour window and updating another 320 million records, which allowed them to hit their SLAs for the 5-year projection levels – with room to grow.
Customer experiences: Customer's context

Large European company, with branches in many countries.
Large Oracle customer: running Oracle on Unix and z/OS.
Main site: production z9 713 + 6 zIIPs + development z9 709 + 5 zIIPs
Second site: disaster recovery + development z9 703 + 3 zIIPs + CBUs

Core business application completed and live in production; in October 2007, Oracle announced the end of developments for z/OS.
The customer consulted partners and market influencers on two options: move to Unix, or stay on System z (running Linux).
• Step 1: First Architecture Workshop – kick-off – on site – May 2008
Study different architecture options
Define the project plan (System z Oracle Light Benchmark, on-site PoC)
• Step 2: System z Oracle Light Benchmark – June 2008
Remotely accessible PoC environment offering created by the PSSC
Goals: test different technical options & components, validate the migration process
• Step 3: Design Workshop – on site – July 2008
Validate the outcomes of the Light Benchmark
Review the PoC architecture and define the migration plan
Build the project plan for the performance & sizing benchmark in MOP
• Step 4: Customer's on-site Proof of Concept – July/August 2008
In-depth additional functional tests + MOP benchmark kit development
With remote support from the PSSC
Assess the feasibility of migrating the customer's main application to a new platform based on Linux on System z and Oracle, and its performance there. The benchmark should allow platform configuration sizing, in order to deliver the application back-end service with no degradation, from the user's perception, compared to the current production on z/OS.
System z10
The benchmark was made up of multiple assessments:
• OLTP: based on a "customer in-house" workload generation tool reproducing the transactions of a typical working day
• Batch: end-of-month process, end-of-week process, critical daily process
• Migration: export and rebuild all database objects from z/OS to Linux on System z within a dedicated time window (< 40 h)
• Scalability: assess the platform behavior with the workload increased up to 150% of the actual OLTP workload
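The deck does not say which tool performed the export/rebuild; one common approach of that era was the classic export/import utilities plus a statistics refresh. A sketch under that assumption; usernames, passwords and file names are placeholders:

```shell
# On the z/OS source database: full export
exp system/manager FULL=y FILE=full.dmp LOG=exp.log DIRECT=y

# Transfer full.dmp to the Linux on System z target, then import
imp system/manager FULL=y FILE=full.dmp LOG=imp.log IGNORE=y

# Refresh optimizer statistics after the rebuild
sqlplus -S / as sysdba <<'SQL'
EXEC DBMS_STATS.GATHER_DATABASE_STATS;
SQL
```

The statistics step matters here: the reported 15-hour migration time explicitly includes the database statistics updates.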
Customer experiences: Benchmark objectives and phases
Customer experiences: Targets and achievements

OLTP
• Objective: assess the capability of the new platform to support the OLTP workload, based on injected transactions captured during a peak day on the production system
• Results: 95% successfully injected transactions (>1.18M); 90% with better response time
• Status: OK
• Infrastructure: z/OS LPAR: 4 CPs / 80 GB; Linux1 on System z LPAR: 12 CPs / 40 GB; Linux2 on System z LPAR: 12 CPs / 40 GB
• CPU at peak load: 40% on z/OS, 60% on Linux1, 100% on Linux2

Batch
• Objective: measure the elapsed time of 3 batch processes scheduled from z/OS Control-M against the Oracle Linux on System z databases
• Results: comparison with production elapsed times: end of month: 46% improvement; end of week: 60% improvement; daily process: 33% improvement
• Status: OK
• Infrastructure: Linux on System z LPAR: 18 CPs / 80 GB - 30% CPU utilization
Customer experiences: Targets and achievements

Migration
• Objective: migrate the full databases from z/OS to Linux on System z within 40 hours
• Results: migration process achieved in 15 hours, including database statistics updates
• Status: OK
• Infrastructure: z/OS LPAR: 4 CPs / 80 GB; Linux1 on System z LPAR: 12 CPs / 40 GB
• CPU utilization at peak load: 85% on z/OS, 100% on Linux1

Scalability
• Objective: validate the new infrastructure's capability in terms of scalability
• Results: linearity up to a 125% scaling factor; additional tuning has to be done to improve the 150% results
• Status: OK
• Infrastructure: Linux on System z LPAR: 18 CPs / 80 GB, except for the 150% test (28 CPs)
• CPU utilization: 70%
z/VM
• Objective: assess z/VM's capability to host multiple Linux guests running the application with a 125% workload
• Results: more than 95% successfully injected transactions; ~80% with better response time
• Status: OK
• Infrastructure: z/VM LPAR: 18 CPs / 80 GB; Linux on System z LPAR: 18 CPs / 80 GB; CPU utilization: 80%

RAC
• Objective: validate the application in an Oracle RAC environment with 2 nodes (the application had never been tested on RAC; first experience in cluster mode)
• Results: more than 95% successfully injected transactions; ~75% with better response time (only 4 runs were done on RAC; additional database tuning detected)
• Status: OK