
Lowering the cost of running Oracle by upgrading to HPE Gen9 Servers
Comparing performance improvements of running Oracle on HPE BL660c Gen9 over legacy products

Technical white paper


Contents

Executive summary
Introduction
Solution overview
Solution components
Capacity and sizing
Workload description
Workload data/results
Analysis and recommendations
Summary
Implementing a proof-of-concept
Appendix A: Bill of materials
Appendix B: Linux kernel boot options
Appendix C: Oracle Linux 7.1 tuning
Appendix D: Oracle database parameters
Appendix E: Multipath configuration
Appendix F: udev rules
Resources and additional links


Executive summary

The demands of database implementations continue to escalate. Faster transaction processing speeds, scalable capacity, and increased flexibility are required to meet the needs of today’s business. At the same time, enterprises are looking for cost-effective, open-architecture solutions that avoid vendor lock-in and the high price tag attached to single-vendor, proprietary solutions.

The reliability of HPE servers is unsurpassed. As a result, these machines can continue in service long after they have been fully depreciated. While the servers themselves can continue unimpeded, software costs, often licensed by the core, can sometimes dwarf the hardware costs. A G7 core is not equivalent to a Gen9 core: at a given clock speed, a Gen9 core can do more useful work than a G7 core at the same clock speed. This reference architecture showcases the performance advantages of running the same workload on multiple generations of hardware, allowing our customers to make more informed decisions when choosing when and what to upgrade in their datacenters.

Customers today require high-performance, highly available, flexible database solutions. The HPE ProLiant BL660c Gen9 and HPE 3PAR StoreServ 7450c All-flash array combination running Oracle 12c delivers just that, providing a fully tested, flexible, high-performance, high-availability reference architecture.

Customer workload characteristics and requirements vary. Hewlett Packard Enterprise solutions are tailored to provide maximum performance for various workloads without compromising availability commitments required by the business.

Target audience: This HPE white paper is designed for IT professionals who use, program, manage, or administer large databases that require high availability and high performance. Specifically, this information is intended for those who evaluate, recommend, or design new IT high performance architectures. It includes details for Oracle 12c deployments requiring extreme performance and uptime.

Oracle 12c database and Oracle Enterprise Linux® installations are standard configurations except where explicitly stated in the reference architecture. This white paper describes testing performed in June and July of 2015.

Document purpose: The purpose of this document is to describe a recommended configuration, highlighting recognizable benefits to technical audiences.

Introduction

Information Technology departments are under pressure to add value to the business, improve existing infrastructure, enable growth, and reduce overhead. Customers utilizing Oracle 12c Enterprise Edition license their software by the core, so a reduction in the number of cores used for Oracle databases results in a commensurate reduction in license and support costs. If the number of cores used to provide Oracle database services can be reduced sufficiently, the upgrade can pay for itself within a single year, and that return on investment continues to pay dividends in year two and beyond.

The purpose of this paper is to compare the expected performance of a BL660c Gen9 server with that of a like-configured BL680c G7 server, so that customers can understand the additional performance the BL660c Gen9 server provides. Customers with equipment older than the G7 can expect an even greater gain.

Solution overview

This white paper outlines the architecture and performance you can expect from the solution built on HPE ProLiant BL660c Gen9 Servers and the HPE 3PAR StoreServ 7450 All-flash array running Oracle 12c and Oracle Linux 7.1 with the Unbreakable Enterprise Kernel version 3.8.13-55.1.6.

This reference architecture focuses primarily on the design, configuration, and best practices for deploying a highly available extreme-performance Oracle database solution. The Oracle and Oracle Enterprise Linux installations are standard configurations except where explicitly stated in the reference architecture.


[Figure: the tested configuration — a c7000 enclosure housing the BL660c Gen9 and the BL680c G7 (one server, two full-height slots), connected over 4 X 8Gb Fibre Channel to the 3PAR StoreServ 7450c controller nodes and expansion drive shelves.]

• c7000
  o BL660c Gen9: 4 X Intel Xeon E5-4655 v3 2.9GHz 6-core processors, 512GB memory, 2 X FlexFabric 2-port adapters (2 X 8Gb FCoE, 2 X 2Gb FlexNIC)
  o BL680c G7: 4 X Intel Xeon X7560 2.27GHz 8-core processors, 512GB memory, 1 X FlexibleLOM (2 X 8Gb FCoE, 2 X 2Gb FlexNIC), 1 X FlexHBA (2 X 8Gb FCoE)
• 3PAR StoreServ 7450c All Flash Array: 4-node, 192GB of cache, 6 X expansion shelves, 80 X 480GB MLC SSD, 24 X 8Gb FC ports

Figure 1. Example reference architecture showing HPE BladeSystem components

Solution components

Both the BL660c Gen9 server blade and the BL680c G7 server blade are used as Oracle database servers.

We compared and contrasted performance using two criteria:

1. A single-server environment, where we reduced the number of cores from 32 G7 cores to 24 Gen9 cores while still allowing room for database usage to grow.

2. An Oracle RAC environment where we looked at a 24-core G7 server environment as compared with a 24-core Gen9 environment.

Both criteria have the same focus: to identify the performance improvements over legacy hardware that allow a customer to reduce the number of Oracle Enterprise Edition licenses needed to achieve the same business outcomes.

HPE’s enterprise portfolio offers many form factors to satisfy a diverse set of customer needs. Many times, our customers have standardized on either rack servers or blade servers. Each form factor has features and benefits that may help a customer make the decision regarding which type of server would best suit their Oracle needs. For this comparison, server blades were chosen because they provide superior datacenter density.

The HPE 3PAR StoreServ 7450c All-Flash array was utilized because it delivers best-in-class throughput, in addition to having an administrative interface that is consistent across the entire 3PAR product suite. When deploying Oracle, it is imperative to eliminate potential bottlenecks so that the server can be fully utilized, because it is the server cores that form the basis for Oracle licensing. The latencies typical of spinning magnetic media can leave the server less than fully utilized. However, HPE does offer the 3PAR StoreServ 7400 and 7440, on which SSDs may be intermixed with spinning media. We can then either place hot tablespaces and logfiles on SSD while locating less-utilized datafiles on spinning media, or let 3PAR Adaptive Optimization automatically choose where to put data files based upon utilization and migrate data as utilization trends change.

The following settings were used to obtain optimal performance.

HPE ProLiant BL660c Gen9 Server BIOS

• Hyper-Threading—Enabled

• Intel® Turbo Boost—Enabled

• HPE Power Profile—Maximum Performance

• Minimum Processor Idle Power states—No C-states in the BIOS

Storage configuration best practices

• UDEV settings for performance: Set UDEV parameters per the values in Appendix F.

• Set the sysfs “rotational” value for SSD disks to 0.

• Set the sysfs value rq_affinity to 2 for each device. Request completions all occurring on core 0 cause a bottleneck; setting rq_affinity to 2 resolves this problem (see the sketch after this list).

• Set I/O scheduler to noop (no operation).

• Set permissions and ownership for Oracle volumes.

• SSD loading—Load SSDs in groups of four per enclosure at a minimum.

• Volume size—Virtual volumes should all be the same size and SSD type for each Oracle ASM group.

• Use multiple paths to maintain high availability while also maximizing performance and minimizing latencies. Use the recommended multipath parameters (see Appendix E).
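These sysfs settings can be applied by hand for quick experimentation before committing them to the udev rules in Appendix F. A minimal shell sketch, assuming a hypothetical device-mapper node dm-2 (the node names on your system will differ):

DEV=dm-2                                       # hypothetical device node; check /sys/block for your dm-* devices
echo 0    > /sys/block/$DEV/queue/rotational   # flag the LUN as non-rotational (SSD)
echo noop > /sys/block/$DEV/queue/scheduler    # use the no-op I/O scheduler for flash
echo 2    > /sys/block/$DEV/queue/rq_affinity  # complete each I/O on the core that issued it

Settings applied this way do not survive a reboot; the udev rules in Appendix F make them persistent.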

Oracle configuration best practices

Oracle database parameters (see Appendix D):

• Set huge pages only (use_large_pages='ONLY').

• Disable automatic memory management if applicable.

• Set buffer cache memory size large enough per your implementation to avoid physical reads.

• Enable NUMA support.

• Create two large redo log file spaces of 450GB each to minimize log file switching and reduce log file waits (a creation sketch follows this list).

• Create an undo tablespace of 512GB.

• Configure huge pages (see Appendix C).
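For the redo and undo items above, a minimal SQL*Plus sketch run as SYSDBA; the disk group names (+REDO, +DATA) and the tablespace name are assumptions, so substitute your own:

sqlplus / as sysdba <<'EOF'
-- Two large redo log groups of 450GB each (+REDO is a hypothetical disk group)
ALTER DATABASE ADD LOGFILE GROUP 10 ('+REDO') SIZE 450G;
ALTER DATABASE ADD LOGFILE GROUP 11 ('+REDO') SIZE 450G;
-- A 512GB undo tablespace; BIGFILE avoids the 32GB smallfile datafile limit
CREATE BIGFILE UNDO TABLESPACE undotbs_big DATAFILE '+DATA' SIZE 512G;
ALTER SYSTEM SET undo_tablespace = undotbs_big SCOPE=BOTH;
EOF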


Capacity and sizing

Based upon the results of our testing, a customer is able to:

• Reduce the number of Oracle RAC servers by one for every three currently deployed, while holding the core count within each server constant. In this scenario, a customer running Oracle RAC on three older BL680c G7 servers could deploy two BL660c Gen9 RAC servers and expect the same level of throughput. Each server has 24 cores, so replacing three older servers with two newer ones eliminates the Oracle licenses for 24 cores. Likewise, a customer who currently deploys six older BL680c G7 servers could expect the same level of throughput from four new BL660c Gen9 servers.

• Replace a 32-core BL680c G7 Oracle server with a 24-core BL660c Gen9 server in a single-node configuration and, for the same workload, still have additional performance available. This assumes that no other bottlenecks, such as I/O contention, appear when consuming the additional capacity. Further, the BL660c Gen9 offers processor options with higher core counts, which would provide additional throughput.

Workload description

The Oracle workload is tested using HammerDB, an open-source tool. The tool implements an OLTP-type workload (60 percent read and 40 percent write) with small, random I/Os. The transaction results have been normalized and are used to compare test configurations. Other metrics measured during the workload come from the operating system and/or standard Oracle Automatic Workload Repository (AWR) stats reports.

The OLTP test, performed on a 1TB database, was highly CPU-intensive and moderately I/O-intensive. The environment was tuned for maximum user transactions and maximum database usage efficiency. After the database was tuned, transactions were recorded at different user count levels. Because workload characteristics vary so widely, the measurement focused on maximum transactions.

Oracle Enterprise Database version 12.1.0.1 was used in this test configuration.

The databases use several different Oracle Automatic Storage Management (ASM) disk groups with a combination of RAID-5 and RAID-10: RAID-5 for tables and indexes, and RAID-10 for redo logs and the undo tablespace. Each disk group consisted of eight virtual volumes exported from the 3PAR StoreServ 7450c. Separate virtual volumes, created with the same parameters, were exported to the BL660c Gen9 and the BL680c G7; the servers did not share storage during our tests.
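Because RAID protection is provided by the 3PAR array, the ASM disk groups use external redundancy. A minimal sketch of the disk group creation, assuming the multipath aliases from Appendix F (the disk group names are illustrative, and the tested configuration used eight virtual volumes per group rather than the four shown):

sqlplus / as sysasm <<'EOF'
-- Data disk group over RAID-5 virtual volumes (device names are assumptions)
CREATE DISKGROUP data EXTERNAL REDUNDANCY
  DISK '/dev/mapper/mpathq', '/dev/mapper/mpathr',
       '/dev/mapper/mpaths', '/dev/mapper/mpatht';
-- Redo/undo disk group over RAID-10 virtual volumes
CREATE DISKGROUP redo EXTERNAL REDUNDANCY
  DISK '/dev/mapper/mpathu', '/dev/mapper/mpathv',
       '/dev/mapper/mpathw', '/dev/mapper/mpathx';
EOF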

We tested several user counts and found the optimal number to be 150 users. The optimal number was determined by scaling the number of users until the maximum transactions per minute were being delivered. In reality, the BL660c Gen9 server can accommodate tens of thousands of individual users; for our stress test, however, each user performed tens of thousands of transactions per minute with no latency or think time.

Workload data/results

We ran two different sets of tests to ensure the readers of this paper understand the levels of performance available when using the BL660c Gen9 server. For comparison purposes, the same set of tests was run on a BL680c G7 server.

The first test utilized the same number of cores in each platform: 24 per server. We did this by disabling two cores per processor in the BL680c G7 server, so that it had 24 cores available for Oracle workloads. The first graph below shows the results of those tests.


In all cases, the BL660c Gen9 server delivered more than 1.5 times the throughput of the BL680c G7 server. This leads us to the conclusion that in Oracle RAC environments, every three BL680c G7-class servers currently deployed could be replaced with two BL660c Gen9 servers having a like number of cores.

The second set of tests focused on a stand-alone Oracle Database deployment. We ran the BL660c Gen9 server with 24 cores against a BL680c G7 server with 32 cores. The second graph below shows the performance of the BL660c Gen9 server relative to that of the BL680c G7 server.

In all cases, the BL660c Gen9 server outperformed the BL680c G7 server by at least 18%. This means that the number of Oracle Database licenses can be reduced from 32 to 24 in this case while still giving users headroom to grow. In fact, at higher user counts, the improvement expanded to 25%.

[Graph: Direct 24-core comparison — relative throughput (0–175%) for Runs 1, 2, and 3, BL680c G7 24-core vs. BL660c Gen9 24-core.]

[Graph: BL680c G7 32-core vs. BL660c Gen9 24-core — relative throughput (0–700%) across 0–160 connections.]


Analysis and recommendations

Based upon the results of our testing, this Reference Architecture addresses two different types of customer upgrade scenarios:

The first addresses a customer who has deployed Oracle RAC using older technology and would like to reduce the number of servers in their deployment, lowering the licensing burden with the same action. For those customers, a reduction of one server for every three currently deployed is a viable alternative.

The second scenario addresses a customer who has currently deployed Oracle in a single-node configuration and is unable to further scale up within their existing server environment. For this type of customer, upgrading to Gen9 servers offers additional headroom in which to grow their Oracle instance, while reducing the licenses required to support the workload.

Summary

The HPE ProLiant BL660c Gen9 server coupled with the HPE 3PAR StoreServ 7450c All Flash Array is a significant part of the overall HPE performance reference architecture portfolio. It was developed to provide high-performance I/O throughput for transactional databases in a package that delivers business continuity, extreme IOPS, faster user response times, and increased throughput versus comparable traditional server/storage configurations.

Our extensive testing demonstrated the following:

• The reference architecture supports stable, CPU-intensive, moderately I/O-stressed OLTP workloads.

• The HPE ProLiant BL660c Gen9 is a flexible, mission-critical, extreme-performance database solution with the deployment flexibility to meet customer needs.

Implementing a proof-of-concept

As a matter of best practice for all deployments, HPE recommends implementing a proof-of-concept using a test environment that matches the planned production environment as closely as possible. In this way, appropriate performance and scalability characterizations can be obtained. For help with a proof-of-concept, contact an HPE Services representative (hpe.com/us/en/services/consulting.html) or your HPE partner.

This Reference Architecture describes solution testing performed in June and July 2015.

Appendix A: Bill of materials

Below is the bill of materials (BOM) for the tested configuration. Variations of the configuration based on customer needs are possible but would require a separate BOM. Talk to your HPE Sales representative for detailed quotes.

Note: Part numbers are accurate at the time of testing and subject to change. The bill of materials does not include complete support options or other rack and power requirements. If you have questions regarding ordering, please consult with your HPE Reseller or HPE Sales Representative for more details.

Table 1a. Bill of materials for the server blade

QTY PART NUMBER DESCRIPTION

1 H6J66A HPE 11642 1075mm Shock Rack

1 681844-B21 HPE BLc7000 CTO 3 IN LCD Plat Enclosure

1 E5Y41AAE HPE OV 3yr 24x7 Encl FIO 16 Svr E-LTU

1 728352-B21 HPE BL660c Gen9 10/20GB FLB CTO Blade

1 792028-L21 HPE BL660c Gen9 E5-4655v3 2P FIO Kit

1 792028-B21 HPE BL660c Gen9 E5-4655v3 2P Kit


32 726719-B21 HPE 16GB 2Rx4 PC4-2133P-R Kit

2 766491-B21 HPE FlexFabric 10Gb 2P 536FLB FIO Adptr

1 749975-B21 HPE Smart Array P246br/1G FIO Controller

4 571956-B21 HPE BLc VC FlexFabric 10Gb/24-port Opt

32 455886-B21 HPE BLc 10G SFP+ LR Transceiver

2 733459-B21 HPE 2650W Plat Ht Plg Pwr Supply Kit

4 412140-B21 HPE BLc Encl Single Fan Option

1 433718-B21 HPE BLc7000 10K Rack Ship Brkt Opt Kit

1 677595-B21 HPE BLc 1PH Intelligent Power Mod FIO Opt

1 H1K92A3 HPE 3Y 4 hr 24x7 Proactive Care SVC

1 Opt. 7FX HPE c7000 Enclosure Support

1 Opt. SVQ HPE OneView for blades Supp

1 Opt. YSM HPE BL660c Gen9 Support

1 HA114A1 HPE Installation and Startup Service

1 Opt. 5FY HPE Startup BladeSystem c7000 Infrast SVC

1 H6J85A HPE Rack Hardware Kit

2 H5M60A HPE 8.3kVA 208V 36out NA bPDU

1 BW932A HPE 600mm Rack Stabilizer Kit

1 BW930A HPE Air Flow Optimization Kit

1 BW906A HPE 42U 1075mm Side Panel Kit

1 HA113A1 HPE Installation Service

1 Opt. 5BY Rack and Rack Options Installation

1 HA124A1 HPE Technical Installation Startup SVC

1 Opt. 56H HPE Startup BladSys c7000 Encd Ntwk SVC

Table 1b. Bill of materials for the 3PAR array

QTY PART NUMBER DESCRIPTION

Rack and 3PAR Storage Infrastructure

1 BW904A HPE 642 1075mm Shock Intelligent Rack

1 E7X93A HPE 3PAR StoreServ 7450c 4N St Cent Base

20 E7W54B HPE M6710 480GB 6G SAS 2.5in MLC 5yr SSD

1 BC914B HPE 3PAR 7450 Reporting Suite LTU

1 BC890B HPE 3PAR 7450 OS Suite Base LTU

80 BC891A HPE 3PAR 7450 OS Suite Drive LTU


2 QK753B HPE SN6000B 16Gb 48/24 FC Switch

48 QK724A HPE B-series 16Gb SFP+SW XCVR

6 QR490A HPE M6710 2.5in 2U SAS Drive Enclosure

60 E7W54B HPE M6710 480GB 6G SAS 2.5in MLC 5yr SSD

1 QR516B HPE 3PAR 7000 Service Processor

1 TK808A HPE Rack Front Door Cover Kit

48 QK735A HPE Premier Flex LC/LC OM4 2f 15m Cbl

8 QK734A HPE Premier Flex LC/LC OM4 2f 5m Cbl

4 H5M58A HPE 4.9kVA 208V 20out NA/JP bPDU

1 BW932A HPE 600mm Rack Stabilizer Kit

1 BW906A HPE 42U 1075mm Side Panel Kit

1 BD362A HPE 3PAR StoreServ Mgmt/Core SW Media

1 BD363A HPE 3PAR OS Suite Media

1 BD365A HPE 3PAR Service Processor SW Media

1 BD373A HPE 3PAR Reporting Suite Media

1 TC472A HPE Intelligent Inft Analyzer SW v2 LTU

Appendix B: Linux kernel boot options

The boot options transparent_hugepage=never and intel_idle.max_cstate=1 were added to the kernel boot command line, and the option numa=off was removed (the latter option was added by the Oracle preinstall script and is not appropriate for NUMA-based servers). The option transparent_hugepage=never is recommended by Oracle (http://docs.oracle.com/database/121/UNXAR/appi_vlm.htm). Note that later versions of the Oracle Unbreakable Enterprise Kernel have Transparent HugePages removed. If the file /sys/kernel/mm/transparent_hugepage does not exist, there is no need to add the kernel boot option to disable Transparent HugePages.

Internal HPE benchmarking efforts have demonstrated optimal performance with the option intel_idle.max_cstate=1, which allows C-state 1 transitions and encourages Turbo Boost functionality. Note that the Intel driver ignores the C-state settings in the BIOS.

The kernel boot options are set in /etc/default/grub in the GRUB_CMDLINE_LINUX entry:

GRUB_CMDLINE_LINUX=" crashkernel=auto rd.lvm.lv=ol/root rd.lvm.lv=ol/swap rhgb quiet transparent_hugepage=never intel_idle.max_cstate=1"

Then the grub.cfg file must be updated as follows:

For BL680c G7: grub2-mkconfig -o /boot/grub2/grub.cfg

For BL660c Gen9: grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg
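After regenerating grub.cfg and rebooting, the settings can be verified from the running system; a quick sketch:

cat /proc/cmdline                                  # confirm the new boot options are present
cat /sys/kernel/mm/transparent_hugepage/enabled    # should report [never], if the file exists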


Appendix C: Oracle Linux 7.1 tuning

The following modifications were made in the Oracle Linux operating system to achieve optimal performance:

1. Set SELINUX=disabled in /etc/selinux/config.

2. Utilize the tuned tool to easily configure Oracle Linux for optimal performance. Enable the tuned profile network-latency to set kernel tuning parameters and transparent_hugepage=never. This profile also invokes the latency-performance profile, which sets additional kernel tuning parameters, CPU C-states and P-states. The command “tuned-adm profile network-latency” enables the profile.

3. The usage of huge pages is recommended to improve the performance of virtual memory management and force the Oracle SGA to stay resident in memory (since huge pages are locked in memory). Configure huge pages by setting the kernel tuning parameters as follows; note that all of the settings below are specific to the amount of shared memory required by the Oracle instance (a sizing sketch follows this list):

vm.nr_hugepages = 77571

vm.hugetlb_shm_group = 54322

The Oracle installation creates the file /etc/security/limits.d/oracle-rdbms-server-12cR1-preinstall.conf which includes the following settings for the oracle user account:

oracle soft memlock 475050463

oracle hard memlock 475050463

4. The following kernel tuning parameters were set in the file /etc/sysctl.conf by the Oracle 12c R1 preinstall script:

fs.file-max = 6815744

kernel.sem = 250 32000 100 128

kernel.shmmni = 4096

kernel.shmall = 1073741824

kernel.shmmax = 4398046511104

kernel.panic_on_oops = 1

net.core.rmem_default = 262144

net.core.rmem_max = 4194304

net.core.wmem_default = 262144

net.core.wmem_max = 1048576

fs.aio-max-nr = 1048576

net.ipv4.ip_local_port_range = 9000 65500
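The sizing sketch referenced in step 3, deriving vm.nr_hugepages from the SGA size. The SGA_GB value is an assumption, so size it to your instance; the tested value of 77571 pages includes some headroom beyond the raw SGA requirement:

SGA_GB=150                                                # assumed SGA size in GB
PAGE_KB=$(awk '/Hugepagesize/ {print $2}' /proc/meminfo)  # typically 2048 (2MB pages)
echo "vm.nr_hugepages = $(( SGA_GB * 1024 * 1024 / PAGE_KB ))"
# After adding the computed value to /etc/sysctl.conf, apply and verify the allocation:
sysctl -p
grep Huge /proc/meminfo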

Appendix D: Oracle database parameters

The following Oracle parameters were set in pfile.ora to enable NUMA, lock the SGA in memory, require the usage of huge pages, increase the priority of the log writer (LGWR) and Time Keeper (VKTM) processes, increase the log buffer size, increase the number of parallel log file writers, and increase the maximum number of Oracle processes.

_enable_NUMA_support=TRUE

_enable_NUMA_interleave=TRUE

lock_sga=TRUE

use_large_pages='ONLY'

_high_priority_processes='VKTM*|LG*'


log_buffer=1048548K

max_outstanding_log_writes=4

processes=1500
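A quick SQL*Plus check, after instance startup, that the settings took effect:

sqlplus -s / as sysdba <<'EOF'
SHOW PARAMETER use_large_pages
SHOW PARAMETER lock_sga
SHOW PARAMETER processes
EOF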

Appendix E: Multipath configuration

The following multipath parameters, which are the recommended settings for Oracle Linux 7 and 3PAR Persona 2 (ALUA), were included in the /etc/multipath.conf file. Note that the fast_io_fail_tmo and dev_loss_tmo settings are required for the Broadcom CNA adapter, which was utilized in the BL660c Gen9. The BL680c G7 had an Emulex adapter, so those two parameters were commented out on that server.

defaults {
    polling_interval      10
    user_friendly_names   yes
    find_multipaths       yes
}

devices {
    device {
        vendor                 "3PARdata"
        product                "VV"
        path_grouping_policy   group_by_prio
        path_selector          "round-robin 0"
        path_checker           tur
        features               "0"
        hardware_handler       "1 alua"
        prio                   alua
        failback               immediate
        rr_weight              uniform
        no_path_retry          18
        rr_min_io_rq           1
        detect_prio            yes
        fast_io_fail_tmo       10
        dev_loss_tmo           14
    }
}
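After editing /etc/multipath.conf, the multipath maps can be reloaded and inspected without a reboot; a brief sketch:

multipath -r     # reload the multipath maps with the new settings
multipath -ll    # list each LUN with its paths, priority groups, and policies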


Appendix F: udev rules

A udev rules file /etc/udev/rules.d/10-3par.rules was created to set device parameters to appropriate settings for SSD drives:

ACTION=="add|change", KERNEL=="dm-*", PROGRAM="/bin/bash -c 'cat /sys/block/$name/slaves/*/device/vendor | grep 3PARdata'", ATTR{queue/rotational}="0", ATTR{queue/scheduler}="noop", ATTR{queue/rq_affinity}="2", ATTR{queue/nomerges}="1", ATTR{queue/nr_requests}="128"

Note that rotational=0 is the default for SSD drives on Oracle Linux Server 7 with the Unbreakable Enterprise Kernel, and nr_requests=128 is also the default setting; some tests were run with nr_requests=2048, so it was explicitly set back to 128. The default scheduler is deadline, but the recommendation for SSD drives is noop. The default for rq_affinity is 1, but improved performance has been seen with a setting of 2, which forces request completions to run on the requesting core. For nomerges, the default is 0; setting it to 1 disables the complex merge checks that combine contiguous I/O, which is recommended for random I/O.

The udev rules file 12-dm-permission.rules was created to set the required ownership of Oracle ASM LUNs:

ENV{DM_NAME}=="mpathq", OWNER:="oracle", GROUP:="oinstall", MODE:="660"

ENV{DM_NAME}=="mpathr", OWNER:="oracle", GROUP:="oinstall", MODE:="660"

ENV{DM_NAME}=="mpaths", OWNER:="oracle", GROUP:="oinstall", MODE:="660"

ENV{DM_NAME}=="mpatht", OWNER:="oracle", GROUP:="oinstall", MODE:="660"

ENV{DM_NAME}=="mpathu", OWNER:="oracle", GROUP:="oinstall", MODE:="660"

ENV{DM_NAME}=="mpathv", OWNER:="oracle", GROUP:="oinstall", MODE:="660"

ENV{DM_NAME}=="mpathw", OWNER:="oracle", GROUP:="oinstall", MODE:="660"

ENV{DM_NAME}=="mpathx", OWNER:="oracle", GROUP:="oinstall", MODE:="660"

ENV{DM_NAME}=="mpathy", OWNER:="oracle", GROUP:="oinstall", MODE:="660"


Resources and additional links

HPE BladeSystem
hpe.com/info/bladesystem

HPE ProLiant Servers
hpe.com/servers/proliant

HPE 3PAR StoreServ Storage
hpe.com/storage

For more information about HPE solutions for Oracle
http://h17007.www1.hpe.com/us/en/enterprise/converged-infrastructure/info-library/index.aspx?app=oracle

To help us improve our documents, please provide feedback at hpe.com/contact/feedback.

© Copyright 2015-2016 Hewlett Packard Enterprise Development LP. The information contained herein is subject to change without notice. The only warranties for HPE products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HPE shall not be liable for technical or editorial errors or omissions contained herein.

Intel and Xeon are trademarks of Intel Corporation in the U.S. and other countries. Oracle is a registered trademark of Oracle and/or its affiliates. Linux is the registered trademark of Linus Torvalds in the U.S. and other countries.

4AA6-1030ENW, May 2016, Rev. 2