
Oracle Database In-Memory Use Cases with Intel Optane Persistent Memory (PMem)

Using Dell EMC PowerEdge R740xd, Oracle Linux 7.7, and Oracle Database 19c

June 2020

H18295

Design Guide

Abstract

This design guide describes two Oracle Database In-Memory use cases with Intel Optane PMem running in both Memory Mode (PMem-MM) and in App Direct Mode (PMem-AD) inside a Dell EMC PowerEdge server. Our goal is to determine whether PMem technology can provide an alternative option to traditional DRAM and to other fast-tiered storage disks in a Database In-Memory environment. The guide also provides the steps to configure Oracle Database 19c with PMem as a persistent block-addressable storage device.

Dell Technologies Solutions


Copyright


The information in this publication is provided as is. Dell Inc. makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose.

Use, copying, and distribution of any software described in this publication requires an applicable software license.

Copyright © 2020 Dell Inc. or its subsidiaries. All Rights Reserved. Dell Technologies, Dell, EMC, Dell EMC and other trademarks are trademarks of Dell Inc. or its subsidiaries. Intel, the Intel logo, and Optane are trademarks of Intel Corporation in the U.S. and/or other countries. Other trademarks may be trademarks of their respective owners. Published in the USA 06/20 Design Guide H18295

Dell Inc. believes the information in this document is accurate as of its publication date. The information is subject to change without notice.

Page 3: New Oracle Database In-Memory Use Cases with Intel Optane … · 2020. 6. 18. · Executive Summary 4 Oracle Database In-Memory Use Cases with Intel Optane Persistent Memory (PMem)

Contents


Executive Summary

Intel Optane PMem operating modes

Dell EMC PowerEdge server support and best practices for Intel Optane PMem

Oracle Database In-Memory use cases and tests

References

Appendix

Page 4: New Oracle Database In-Memory Use Cases with Intel Optane … · 2020. 6. 18. · Executive Summary 4 Oracle Database In-Memory Use Cases with Intel Optane Persistent Memory (PMem)

Executive Summary

Introduction

The Oracle Database In-Memory (Database In-Memory) feature was first introduced with Oracle Database 12c Release 1 (12.1.0.2). This feature performs analytical and transactional processing and can significantly improve mixed-workload performance. It does so by implementing a "dual architecture" in the main memory of the database server. In this memory architecture, data is maintained in the existing Oracle row format (buffer cache) for Online Transaction Processing (OLTP) workloads, and in a new purely in-memory (IM) columnar format that is optimized for Online Analytical Processing (OLAP) or data warehousing workloads. The database stores the new in-memory columnar data in an area of memory called the In-Memory Column Store (IMCS), also called the In-Memory area, which is a new component of the database's System Global Area (SGA).

The Oracle Database In-Memory feature is implemented in the database server's main memory. Performance improvements that can be gained from this feature depend on the database server's capabilities and resources, especially on the server's computing power and its memory capacity. The larger the database, the more physical memory is required to accommodate and process the database in memory. This is especially true for OLAP databases, which can occupy multiple terabytes of memory.

The Oracle Database In-Memory feature includes several optimizations for accelerating query processing in the server's main memory. One way that In-Memory query performance can benefit is by bringing as much of the OLAP workload data as possible into main memory. However, the traditional and dominant form of main memory, DRAM, is capacity-limited and expensive, especially at the higher end of the available capacity range. The limited capacities of individual DRAM DIMMs can prevent customers from reaching the true potential of Database In-Memory performance.

Intel Optane persistent memory (PMem)

Intel Optane persistent memory (PMem) is an innovative memory technology that delivers a unique combination of larger main memory capacity modules and support for data persistence. Dell EMC PowerEdge servers, based on the second generation of Intel Xeon Scalable processors, can be deployed with a combination of DRAM DIMMs and PMem modules. PMem can be configured either as volatile memory (Memory Mode) or as persistent memory or storage (App Direct Mode).

In this design guide, we evaluated whether PMem, which can provide large main memory capacities in Memory Mode (PMem-MM), can be an effective alternative to traditional DRAM in Oracle Database In-Memory environments. We conducted several tests to capture the IMCS query performance of PMem-MM and compared it with the performance of traditional DRAM-only configurations. This testing is identified in this guide as Use Case 1.

Database In-Memory also provides several IM compression algorithm options to balance IM query processing efficiency and IM space savings. Depending on the IM compression level chosen, populating the source schema in the IMCS can be a CPU-intensive and time-consuming process.

Database In-Memory offers another feature called the In-Memory FastStart (IMFS) area. IMFS allows the database to write the compressed IMCS data to a persistent disk area to minimize CPU usage and IMCS load times.

In our testing, we evaluated the performance benefits of the IMFS area created on several fast-tiered storage disks, including PMem configured in App Direct Mode (PMem-AD). This testing is identified in this guide as Use Case 2.

Use cases overview and key observations

Use Cases 1 and 2 for Oracle Database In-Memory with PMem are described in detail in Use Case 1: IMCS query performance and Use Case 2: IMCS load performance with and without IMFS area, respectively.

Use Case 1: In-Memory Column Store (IMCS) query performance

In Use Case 1, we studied Oracle Database In-Memory (IM) query performance on two different OLAP schema sizes (1 TB and 3 TB TPC-H-like schemas) in combination with different IM compression levels. These tests were conducted across different stand-alone R740xd server memory configurations, described below:

Test 1.1: 1 TB TPC-H-like or OLAP schema IMCS query performance

• R740xd server1 DRAM-only configuration: This server included a total main memory capacity of 768 GB using 12 x 64 GB DRAM DIMMs.

• R740xd server2 PMem-MM configuration: This server included a total main memory capacity of 1.5 TB using 12 x 128 GB PMem-MM modules. This server also included 384 GB of PMem front-end DRAM cache using 12 x 32 GB DDR4 DIMMs. This server configuration had a DRAM:PMem-MM ratio of 1:4.

Test 1.2: 3 TB TPC-H-like or OLAP schema In-Memory query performance

• R740xd server3 DRAM-only configuration: This server included a total main memory capacity of 1.5 TB using 24 x 64 GB DRAM DIMMs.

• R740xd server4 PMem-MM configuration: This server included a total main memory capacity of 3 TB using 12 x 256 GB PMem-MM modules. This server also included 768 GB of PMem front-end DRAM cache using 12 x 64 GB DDR4 DIMMs. This server configuration had a DRAM:PMem-MM ratio of 1:4.

The key observations from the results of the two tested OLAP schema sizes in Use Case 1 are as follows:

• In smaller OLAP environments (1 TB in our study), DRAM DIMM speed has the edge over PMem module speed for Database IM query performance. When the OLAP dataset is small, all IM scans and query executions can be accomplished inside the database's IMCS and PGA memory components. Therefore, the memory media speed is the primary driver of query performance. As a result, for small OLAP schemas, the DRAM-only configuration delivers better Database In-Memory query performance than the PMem-MM configuration. For results and details, see Results comparison 1: 768 GB DRAM-only Cap High vs 1.5 TB PMem-MM Cap High.


• In larger OLAP environments (3 TB in our study), PMem-MM with its front-end DRAM-cache architecture can deliver IM query performance equal to that of the DRAM-only configuration. With larger datasets, larger dedicated In-Memory areas are required, which usually does not leave enough memory headroom for PGA sizes that can perform all query sorting of a large dataset in memory. An undersized PGA causes I/O to the TEMP space on disk to increase significantly, leading to significant database wait times. These external factors reduce DRAM's individual speed advantage over PMem-MM. For results and details, see Results comparison 1: 1.5 TB DRAM-only Cap High vs 3 TB PMem-MM Cap High.

• A query-optimized IM compression level is not always the better option. It is better to use a space-savings-optimized IM compression level that allows the entire source schema to fit inside the In-Memory area than to use a more query-optimized compression level that causes data to spill over to disk. When source data spills over to disk, full table scans and query processing take significantly longer to finish because of disk latencies. For results and details, see Results comparison 2: 768 GB DRAM-only Cap High vs 768 GB DRAM-only Query Low vs 1.5 TB PMem-MM Query Low and Results comparison 2: 1.5 TB DRAM-only vs 3 TB PMem-MM: Cap Low and Query High tests.

• In larger OLAP environments (3 TB in our study), a query-optimized IM compression level is a better choice than a space-savings-optimized compression level if the entire dataset can be populated inside the IMCS with both options. With larger datasets, significant CPU cycles and time are saved by not having to decompress the large IMCS data. This leads to much faster full-table IM scans and IM execution times, which in turn lead to better overall Database In-Memory query performance. Therefore, it is imperative to do memory capacity planning and sizing to determine the IM compression level that will yield optimum query performance from the in-memory system. For results and details, see Results comparison 3: 3 TB PMem-MM: Capacity High vs Capacity Low vs Query High. For memory sizing guidelines, see Memory sizing.

• In larger OLAP environments (3 TB OLAP schema size in our study), PMem-MM with its front-end DRAM-cache architecture can even outperform the DRAM-only configuration's IM query performance. In general, PMem-MM, with its larger memory module capacities, can provide larger total main memory capacities than DRAM-only configurations. Thus, in multi-terabyte OLAP environments, PMem-MM can not only help to populate larger datasets in memory but can also help to bring less-compressed source schemas into the In-Memory area. With larger datasets, less-compressed IMCS data (with a more query-optimized IM compression level) delivers better query performance than more compressed data. For results and details, see Results comparison 4: Best-case scenarios: 1.5 TB DRAM-only Cap High vs 3 TB PMem-MM Query High.


Use Case 2: IMCS load performance with and without In-Memory FastStart (IMFS) area

In Use Case 2, we compared IMCS load performance from the IMFS disk area on three different fast-tiered storage disks: SAS SSDs, NVMe PCIe SSDs, and PMem in App Direct Mode. We compared their performance with a baseline configuration that did not implement the IMFS area but instead loaded the IMCS from the source row-format schema on disk (SAS SSDs in this case). The key observations from the results of the test cases in Use Case 2 are shown below:

• The IMFS feature can load the data into the IMCS at least four times faster than the IMCS can be loaded from the source schema on disk, any time the database must restart. This is because the data in the IMFS disk area is already compressed and in columnar format. For results and details, see Test 2.1: IMCS load duration: Baseline vs IMFS on fast-tiered storage.

• Overall IMFS load performance depends on the disk type (SAS SSDs in this case) on which the source database resides. This is because the database cross-checks the IMFS data against the source schema for transactional consistency before loading it into the IMCS. Therefore, for optimum IMCS load performance from the IMFS disk area, we recommend that you choose the same disk type as the disk type on which the source database is placed. For results and details, see Test 2.2: IMFS-to-IMCS population: SAS-SSDs vs NVMe-PCIe-SSDs vs PMem-AD.

• The IMFS feature can reduce CPU usage by up to 10 times during the IMCS load operation, as compared to CPU usage when loading from the source schema. This is because the data in the IMFS disk area is already compressed and in columnar format. For results and details, see Test 2.3: IMCS population: CPU Utilization - Baseline vs IMFS on fast-tiered storage.

Audience and purpose

This guide is intended for anyone who is interested in learning about the benefits of this study, including solution architects, Oracle DBAs, storage administrators, system administrators, and Linux administrators. It provides:

• Intel Optane persistent memory operating modes and best practices

• Results of Oracle Database In-Memory use cases with Intel Optane PMem in both Memory Mode and App Direct Mode

• Intel Optane persistent memory configuration and best practices when operating as persistent block-addressable storage over App Direct Mode

• Oracle Database In-Memory sizing guidelines

Additionally, this guide has value for anyone who wants to evaluate, acquire, configure, and operate Oracle Database In-Memory with Intel Optane PMem.

We value your feedback

Dell Technologies and the authors of this document welcome your feedback on the solution and the solution documentation. Contact the Dell Technologies Solutions team by email or provide your comments by completing our documentation survey.

Author: Naveen Iyengar

Contributor: Reed Tucker


Note: For links to additional documentation for this solution, see the Dell Technologies Solutions Info Hub for Oracle.

Intel Optane PMem operating modes

Intel Optane persistent memory (PMem) can be configured in two operating modes:

• Memory Mode (MM): PMem acts like traditional volatile memory, with any DRAM in the server acting as a PMem front-end cache.

• App Direct Mode (AD): PMem acts like persistent memory or storage.

Details of both modes are explained in the following sections.

PMem Memory Mode

In PMem Memory Mode (PMem-MM), the operating system and any application (such as the database) see the PMem as the volatile system main memory. Any DRAM installed in the server acts as the PMem front-end cache and is application-agnostic. The operating system and the application require no changes to their code to use PMem as main memory.

Figure 1. Memory subsystem with Intel Optane PMem in Memory Mode

In Memory Mode, the DRAM in the server acts as a cache for the most frequently accessed data, while the PMem provides large memory capacity. The Intel Xeon Scalable processor's memory controller handles all cache management operations. When data is requested from memory, the memory controller first checks the DRAM cache; if the data is present, the response latency is identical to DRAM. If the data is not in the DRAM cache, it is read from the PMem modules with slightly longer latency. If the data is not present in the PMem modules either, it is finally fetched from disk, resulting in significantly higher latencies. For applications and workloads that have consistent data retrieval patterns, the memory controller can predict the I/O, resulting in higher cache hit rates and performance close to DRAM-only configurations. Conversely, workloads with highly random data access over a wide address range may see some performance difference compared to DRAM alone. Therefore, performance in Memory Mode depends on both the nature of the workload and the DRAM:PMem capacity ratio. We recommend a minimum DRAM:PMem ratio of 1:4 on Dell EMC PowerEdge servers.


In Memory Mode, PMem as system memory is application-agnostic and can potentially deliver DRAM-like performance. PMem modules are available in larger capacities than traditional DRAM RDIMMs/LRDIMMs. Therefore, in this design guide, we conducted testing to determine whether PMem-MM is a viable option as the main memory for Oracle Database In-Memory query performance. Details of this testing are described in Use Case 1: IMCS query performance.
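For orientation, the following is a minimal sketch (not the exact procedure used in this guide) of how PMem capacity might be placed into Memory Mode from the operating system using Intel's ipmctl utility, assuming ipmctl is installed; on PowerEdge servers, the same provisioning can also be performed through the BIOS or iDRAC.

# Review how the installed PMem capacity is currently provisioned
ipmctl show -memoryresources

# Create a provisioning goal that places 100 percent of the PMem capacity in Memory Mode
# (the goal is applied at the next reboot)
ipmctl create -goal MemoryMode=100

# After the reboot, the PMem capacity should appear as regular system RAM
free -g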

PMem App Direct Mode

In PMem App Direct Mode (PMem-AD), the data stored on PMem is persistent across power cycles, like any other persistent storage media. Any DRAM installed in the server is seen as the main memory by the operating system and by any application (such as the database) installed in the system. For the operating system to access PMem that is configured in App Direct Mode in the server's Unified Extensible Firmware Interface (UEFI), the operating system must contain the specific PMem kernel driver library or modules. An application (such as a database) installed in the system can, in turn, access the PMem devices in multiple ways, either as byte-addressable persistent memory or as block-addressable persistent storage, depending on how the PMem devices are configured at the operating system level. These two access methods of PMem-AD are described in the following sections.

Persistent Memory over App Direct (byte-addressable)

As shown in the following figure, configuring and accessing PMem directly as a byte-addressable device (for example, using it as a direct load/store device like main memory) requires that:

• The operating system contains the PMem driver module and supports a direct-access (DAX) file system (fsdax or ext4), and

• The application code is modified to access PMem directly over a DAX file system.

Figure 2. PMem – Persistent Memory (byte-addressable) over App Direct Mode

This is the fastest way for an application to access the PMem devices and to get the most out of PMem's performance capability, because it completely bypasses the operating system's I/O stack (page cache and block layer). Directly accessing the data on PMem in this way also eliminates the need for an application to copy the data into buffers in DRAM. For example, for Oracle Databases, the buffer cache component of the SGA would be considered the database buffer in DRAM. The ability of the database to fetch the data directly from PMem would eliminate the need to copy the data first into the buffer cache in DRAM.

At the time of this writing, Oracle Database 19c and older versions do not support the direct-access (DAX) feature. Hence, PMem as persistent memory over App Direct Mode is not yet a candidate for storing datafiles with Oracle Databases.

Persistent Storage over App Direct (block-addressable)

Applications that have not yet been modified to access the PMem devices directly as byte-addressable memory can still access PMem as a traditional block-addressable persistent storage device. Accessing it as block storage, however, still requires that the underlying operating system contains the necessary PMem driver module. PMem can be accessed as persistent block storage by configuring it in either 'raw' namespace mode or 'sector' namespace mode at the operating system level, as shown in the following figure. Sector mode is also called legacy mode and uses the standard file API.

Figure 3. PMem – Persistent Storage (block-addressable) over App Direct Mode

PMem is fundamentally a byte-addressable device. In 'raw' mode, although the application accesses the PMem as a block device, the base PMem driver underneath still accesses and operates on the data at a byte level. PMem in 'raw' mode can guarantee a power-fail atomicity1 of only eight bytes. For applications like Oracle Databases that operate in chunks of 8,192 bytes (8 K blocks), 'raw' mode exposes them to the risk of 'torn writes', which could result in potential data loss or inconsistency in the event of a power failure.

To mitigate this 'torn writes' problem, 'sector' mode uses the Block Translation Table (BTT) driver, which implements a copy-on-write optimization on top of the PMem driver, as shown in the figure above. This helps to provide power-fail atomicity and lowers the risk of data loss or inconsistency. 'Sector' mode is also called legacy mode.

1 https://software.intel.com/content/www/us/en/develop/articles/persistent-memory-faq.html
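For illustration, the following is a minimal sketch of how a sector-mode (BTT-backed) namespace might be created and prepared as a block device on Linux using the ndctl utility. The region name, device name, and mount point shown here are assumptions; device names vary by platform, and the steps used for this study are covered in the configuration details later in this guide.

# Create a block-addressable namespace with BTT (sector mode) on an App Direct region
ndctl create-namespace --mode=sector --region=region0

# Confirm the resulting namespace and its block device (for example, /dev/pmem0s)
ndctl list --namespaces

# Create a file system on the device and mount it like any other disk
mkfs.xfs /dev/pmem0s
mkdir -p /mnt/pmem0
mount /dev/pmem0s /mnt/pmem0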


For further details on PMem access modes, atomicity, and the 'torn writes' problem, see the Oracle blog https://blogs.oracle.com/database/persistent-memory-primer.

Because Oracle Database 19c and older versions do not yet support the direct-access (DAX) feature, we tested PMem as a persistent block-addressable storage device (legacy mode) over App Direct Mode. In legacy mode, PMem as persistent block storage was used and tested as the In-Memory FastStart (IMFS) disk area. The details of this testing are described in Use Case 2: IMCS load performance with and without IMFS area.

At the time of this writing, storing Oracle Database 19c datafiles on Intel Optane PMem as persistent storage (running in App Direct Mode) is NOT supported, including using it as the IMFS disk area. Therefore, this design guide focuses mainly on how to configure PMem modules as persistent block-addressable storage over App Direct Mode with Oracle Databases and places less emphasis on its performance. This is described in the Use Case 2: IMCS load performance with and without IMFS area section.

For further details on Intel Optane PMem, see the Quick Start Guide: Provision Intel Optane Persistent Memory.

Dell EMC PowerEdge server support and best practices for Intel Optane PMem

You can deploy Dell EMC PowerEdge servers, which are based on the second generation of Intel Xeon Scalable processors (Platinum and Gold grades), with a combination of DRAM and Intel Optane PMem modules. With PowerEdge servers, PMem modules are offered in three capacities: 128 GB, 256 GB, and 512 GB. PMem modules are populated in the memory slots of the server and are compatible with RDIMMs and LRDIMMs. You can purchase a PowerEdge server with the PMem modules in place, or you can install them yourself, as explained in PMem population best practices.

A maximum of six PMem modules are supported within each CPU socket. The total supported memory size per CPU socket (including DRAM and PMem capacities) depends on the type of CPU used, as shown in the following table.

Table 1. CPU type and maximum memory limits

CPU type        Maximum memory supported (includes DRAM plus PMem capacity)
All CPU SKUs    1 TB per CPU socket
M SKUs          2 TB per CPU socket
L SKUs          4.5 TB per CPU socket


Figure 4. DRAM and Intel Optane PMem population rules

PMem population best practices

We recommend the following best practices for populating PMem modules:

• As shown in the figure above, if a PMem module and a DDR4 DIMM are populated on the same memory channel, always populate the PMem module in Slot 1 (black tab).

• Always populate PMem modules across the integrated Memory Controllers (iMCs) and across CPU sockets first, before populating them within a single iMC or within a single CPU socket. Using the above figure as an example, we recommend that you populate CH 4 Slot 1 before populating CH 2 Slot 1 within the same CPU socket. This maximizes the efficient use of the memory channels.

• If the PMem is configured in Memory Mode, the recommended DRAM-to-PMem capacity ratio is 1:4 to 1:16 per iMC.

• We recommend a balanced configuration in which each memory channel is populated with a DDR4 DIMM and a PMem module. We also recommend that you populate all channels with lower-capacity PMem modules instead of populating fewer channels with higher-capacity PMem modules. For example, we recommend that you populate 6 x 128 GB PMem modules (768 GB) per CPU socket as opposed to 4 x 256 GB PMem modules (1 TB), as shown in the figure above.

For further details on PMem support, best practices, and the latest updates with PowerEdge servers, see the Dell EMC Intel Optane PMem User's Guide.
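As a quick sanity check after populating the modules, the module inventory and slot placement can be verified from the operating system; the following is a minimal sketch, assuming Intel's ipmctl utility is installed.

# List each installed PMem module with its capacity, health state, and firmware version
ipmctl show -dimm

# Show how DRAM DIMMs and PMem modules are distributed across sockets, iMCs, and channels
ipmctl show -topology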


Oracle Database In-Memory use cases and tests

Overview

The Introduction section of this design guide provides a high-level overview of Oracle Database In-Memory's dual architecture. As part of this dual architecture, the purely in-memory columnar format includes several optimizations for accelerating OLAP query processing in the server's main memory. The size of the In-Memory Column Store (IMCS), or In-Memory area, is controlled by the database initialization parameter inmemory_size.
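For context, the following is a minimal sketch of how an In-Memory area is typically reserved and how a table is marked for population. The 330 GB value mirrors the Test 1.1.1 sizing described later in this guide, and the tpch.lineitem table name is only an illustrative placeholder; these are not the exact commands from our test environment.

sqlplus / as sysdba <<'EOF'
-- Reserve the In-Memory Column Store inside the SGA; set in the SPFILE here,
-- so the new value takes effect after the instance is restarted
ALTER SYSTEM SET inmemory_size = 330G SCOPE=SPFILE;
SHUTDOWN IMMEDIATE
STARTUP

-- Mark a candidate table for population into the IMCS
ALTER TABLE tpch.lineitem INMEMORY;
EOF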

Theoretically, the more of the OLAP schema dataset we can bring into the In-Memory area of the SGA, the more performance benefit we can potentially achieve with the Oracle Database In-Memory feature. Therefore, as documented in this guide, we tested the IMCS OLAP query performance with different capacities of Intel Optane PMem in Memory Mode, and we compared it to the query performance on different capacities of traditional DRAM.

Oracle Database In-Memory supports six IMCS compression levels, or methods, for space savings and query optimization, described briefly in the following table.

Table 2. Oracle Database In-Memory Column Store (IMCS) compression levels

Compression level                    Brief description
1 NO MEMCOMPRESS                     The data is not compressed
2 MEMCOMPRESS FOR DML                Provides best DML performance
3 MEMCOMPRESS FOR QUERY LOW          Most optimized for query performance
4 MEMCOMPRESS FOR QUERY HIGH         Less query optimized and more space-savings optimized than Query Low
5 MEMCOMPRESS FOR CAPACITY LOW       Less query optimized and more space-savings optimized than Query High
6 MEMCOMPRESS FOR CAPACITY HIGH      Least query optimized and most space-savings optimized compression method

For details on the IM compression levels, refer to the Oracle Database In-Memory Guide: IM Column Store Compression Methods.

In the rest of this guide, the four compression levels we tested are referred to as follows:

• MEMCOMPRESS FOR QUERY LOW = Query Low

• MEMCOMPRESS FOR QUERY HIGH = Query High

• MEMCOMPRESS FOR CAPACITY LOW = Capacity Low or Cap Low

• MEMCOMPRESS FOR CAPACITY HIGH = Capacity High or Cap High
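For illustration, a compression level is selected with the MEMCOMPRESS clause of a table's INMEMORY attribute. The sketch below uses a hypothetical tpch.lineitem table to show the two levels that are compared most often in this guide.

sqlplus / as sysdba <<'EOF'
-- Space-savings-optimized level: smallest compressed IMCS footprint, more CPU to decompress
ALTER TABLE tpch.lineitem INMEMORY MEMCOMPRESS FOR CAPACITY HIGH;

-- Query-optimized level: larger IMCS footprint, fastest in-memory scans
ALTER TABLE tpch.lineitem INMEMORY MEMCOMPRESS FOR QUERY LOW;
EOF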

The query-optimized algorithms provide options to bring less-compressed data into the In-Memory area, but at the cost of larger memory capacity requirements. Conversely, the space-saving algorithms provide options to bring more data into the In-Memory area, but at the cost of a potential overhead on query performance. Depending on the type of compression level used, In-Memory population is a CPU-bound and time-consuming operation, involving reformatting the data into a columnar format and compressing it before placing it into memory.

As described briefly in the Introduction section, IMFS allows the database to write the compressed columnar in-memory data, called In-Memory Compression Units (IMCUs), to a persistent disk area, specifically to a dedicated tablespace created just for this purpose.
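As a sketch of how this looks in practice, the IMFS area is enabled by pointing the database at a dedicated tablespace. The tablespace name, datafile path, and size below are illustrative assumptions, not the values used in our tests.

sqlplus / as sysdba <<'EOF'
-- Dedicated tablespace that will hold the FastStart (IMFS) data
CREATE TABLESPACE imfs_ts DATAFILE '/u02/oradata/imfs01.dbf' SIZE 500G;

-- Designate that tablespace as the In-Memory FastStart area
EXEC DBMS_INMEMORY_ADMIN.FASTSTART_ENABLE('IMFS_TS');

-- Confirm the FastStart area configuration
SELECT * FROM v$inmemory_faststart_area;
EOF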

Therefore, the second Oracle Database In-Memory use case we tested evaluated the reload time of the IMCS from the IMFS area when the IMFS area is created on various fast-tiered persistent disks, including PMem as persistent block-addressable storage (App Direct Mode). We compared that performance to the IMCS load time from the source row-based tables to study the performance benefits of the IMFS feature.

IMPORTANT: At the time of this writing, storing Oracle Database 19c datafiles on Intel Optane PMem as persistent storage (running in App Direct Mode) is NOT supported, including using it as the IMFS disk area. We recommend that you check the Oracle website for the latest support updates before using PMem as a persistent device in your production environment with Oracle Database 19c or older.

Note: At the time of this writing, Oracle Database 19c and older versions do not support the direct-access (DAX) feature needed to take full advantage of the performance of PMem modules as a byte-addressable persistent memory device. Hence, this design guide focuses mainly on how to configure PMem modules in App Direct Mode with Oracle Databases and places less emphasis on its performance.

For further details on Oracle Database In-Memory and its features, see the Oracle Database 19c Database In-Memory Guide.

Use Case 1: IMCS query performance

The goal of this use case was to study how different capacities of traditional DRAM and Intel Optane PMem in Memory Mode (PMem-MM), in combination with different Database In-Memory compression levels, impact the query performance of Oracle Database In-Memory.

In this use case, we tested Oracle Database In-Memory query performance using two different OLAP schema sizes (1 TB and 3 TB TPC-H-like schemas) that were generated using the HammerDB benchmarking tool. Each OLAP schema size was tested on different capacities of DRAM-only and PMem-MM server configurations. Also, for each OLAP schema size, the source dataset was populated in the IMCS using multiple Database In-Memory compression levels.

In general, the HammerDB TPC-H-like schema has eight tables. Our test methodology goal was to populate as many TPC-H-like tables in the In-Memory area as possible. Ideally, we would prefer to populate all eight schema tables. However, as described previously, different IM compression levels have different impacts on the space occupied by a single source schema table when it is populated inside the IMCS. As a result, we performed an initial memory sizing evaluation to determine, for each test case, how many schema tables we could populate in the In-Memory area. For details of the memory sizing methodology used in this design guide, refer to Memory sizing.
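To verify, after each sizing exercise, which tables actually made it into the In-Memory area and how completely they were populated, a query along the following lines can be used; this is a generic sketch, not the exact monitoring script from our tests.

sqlplus / as sysdba <<'EOF'
-- Show each in-memory segment, its compression level, and any portion left unpopulated
SELECT owner,
       segment_name,
       populate_status,
       inmemory_compression,
       bytes,
       inmemory_size,
       bytes_not_populated
FROM   v$im_segments
ORDER  BY owner, segment_name;
EOF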


Note: The TPC-H-like schema referenced in this guide is generated using the HammerDB tool, and its implementation is NOT a full specification of the TPC-H benchmark. Therefore, the query results cannot be compared in any way with the official TPC-H benchmarks published on http://www.tpc.org.

For all test cases in Use Case 1, Oracle Database 19c (19.3.0.0) Grid Infrastructure for a Standalone Server and Oracle Database 19c (19.3.0.0) Single Instance Database were installed. The Oracle Linux 7.7 operating system with Unbreakable Enterprise Kernel (UEK) Release 5 Update 2 was deployed on a single hardware RAID 1 virtual disk (VD) created using the Dell H740p PowerEdge RAID Controller (PERC).

The details of the 1 TB schema size test cases are described in the following section, and the details of the 3 TB schema test cases are described in Test 1.2: 3 TB TPC-H-like or OLAP schema In-Memory query performance.

Test 1.1: 1 TB TPC-H-like or OLAP schema IMCS query performance tests

The 1 TB OLAP schema size in this test case represents the study of Database In-Memory query performance in a small data warehouse environment.

Figure 5. Configurations overview: Test 1.1 - 1 TB OLAP schema query performance

As shown in the figure above, in Test 1.1 we designed two R740xd server configurations and conducted four tests on them, as described below:


R740xd server1 - DRAM-only configuration

This configuration has the following features:

• 768 GB of total main memory capacity using 12 x 64 GB DDR4 DIMMs

• We performed IMCS query performance tests using two IM compression levels, as described below:

▪ Test 1.1.1 (768 GB DRAM-only Cap High)

In this test case, we enabled the Capacity High IM compression level. A 330 GB inmemory_size is the minimum size needed to populate all eight source tables (1 TB schema) compressed with the Capacity High compression level. With 768 GB of total memory capacity in this test case, we were able to allocate 330 GB to inmemory_size and leave the balance of the memory space available for other components. As a result, in this test case, we were able to populate all eight tables inside the IMCS. In the rest of the guide, this test is referred to as '768 GB DRAM-only Cap High'.

▪ Test 1.1.2 (768 GB DRAM-only Query Low)

In this test case, we enabled the Query Low IM compression level. A 960 GB inmemory_size is the minimum size needed to populate all eight source tables (1 TB schema) compressed with the Query Low compression level. With 768 GB of total memory capacity in this test case, we were able to allocate a maximum of 430 GB to inmemory_size because the balance of the memory space was required for other memory components. As a result, in this test case, we were able to populate six of the eight source tables (29 percent of the total schema size) inside the IMCS. In the rest of the guide, this test is referred to as '768 GB DRAM-only Query Low'.

R740xd server2 - PMem-MM configuration

This configuration has the following features:

• 1.5 TB of total main memory capacity using 12 x 128 GB PMem-MM modules

• With 384 GB of PMem-MM front-end DRAM cache using 12 x 32 GB DDR4 DIMMs, this configuration has a DRAM:PMem-MM capacity ratio of 1:4.

• We performed IMCS query performance tests on the 1.5 TB total PMem-MM capacity (with 384 GB of front-end DRAM cache) using two IM compression levels:

▪ Test 1.1.3 (1.5 TB PMem-MM Cap High)

In this test case, we enabled the Capacity High IM compression level. A 330 GB inmemory_size is the minimum size needed to populate all eight source tables (1 TB schema) compressed with the Capacity High compression level. With 1.5 TB of total main memory capacity in this test case, we were able to allocate 330 GB to inmemory_size and leave the balance of the memory space available for other components. As a result, in this test case, we were able to populate all eight tables inside the IMCS. In the rest of the guide, this test case is referred to as '1.5 TB PMem-MM Cap High'.


▪ Test 1.1.4 (1.5 TB PMem-MM Query Low)

In this test case, we enabled the Query Low IM compression level. A minimum of 960 GB of inmemory_size is needed to populate all eight source tables (1 TB schema) compressed with the Query Low compression level. With 1.5 TB of total main memory capacity in this test case, we were able to allocate 960 GB to inmemory_size and leave the balance of the memory space available for other components. As a result, in this test case, we were able to populate all eight tables inside the IMCS. In the rest of the guide, this test case is referred to as '1.5 TB PMem-MM Query Low'.

Note: For memory sizing methodology details, refer to Memory sizing. For details on all key database memory parameter sizes, refer to Table 9.

All four tests in Test 1.1 were conducted on a 1 TB OLAP schema, or TPC-H-like schema, that was created on 4 x RAID1 SAS SSDs on the two server configurations, as shown in Figure 5. The schema was created using the scale factor (SF) option of 1000 in the HammerDB benchmarking tool. All configuration details are described in the Appendix.

For all four test cases, using the HammerDB tool, we generated the TPC-H-like load by simulating a single user who ran a set of 22 OLAP queries sequentially (the 22-query set). In the HammerDB tool, we also set the degree of parallelism (DoP) to 56, since the recommendation is to set it to half of the total number of processor cores, which was 112 in our case (including hyper-threading). The graph below shows the normalized performance results from all four tests in Test 1.1. All normalized results show the total query execution time required to finish the 22-query set and are relative to Test 1.1.1 (768 GB DRAM-only Cap High).

Figure 6. Test 1.1: All four test case results with 1 TB OLAP schema

The following sections provide the comparative analyses for the four test cases in Test 1.1.

Results comparison 1: 768 GB DRAM-only Cap High vs 1.5 TB PMem-MM Cap High

In this section, we compared the results of Test 1.1.1 and Test 1.1.3, as shown in Figure 6. These tests demonstrate how the Capacity High IM compression level affected the IMCS query performance of both the DRAM-only and the PMem-MM configurations. As shown in Figure 6, when we compare the results of Test 1.1.1 and Test 1.1.3, the PMem-MM test finished the query execution twenty-six percent slower than the DRAM-only test when the Capacity High compression level was applied.

The following figure shows snippets from the Oracle Automatic Workload Repository (AWR) reports and the disk utilization graphs generated by the OSWatcher tool for both Test 1.1.1 and Test 1.1.3. As shown in the following figure, during both the DRAM-only and PMem-MM tests, the database performed all of the sorting in memory (In-memory Sort % = 100.00) in the Program Global Area (PGA) private sort area. This resulted in very little I/O (1 and 1.4 percent 'direct path write temp' and 1 and 0.4 percent 'direct path read temp') to the TEMP area on disk (seen as intermittent spikes for disks sdb, sdc, sdd, and sde, where TEMP resides).

Figure 7. Test 1.1.1 and Test 1.1.3: AWR snippets and disk utilization graphs

This demonstrates that when the OLAP dataset is small, all IM scans and query executions can be accomplished inside the database's IMCS and PGA memory components. Therefore, all the query processing that happens in main memory is driven primarily by the main memory media speed. Because DRAM is a much faster memory medium, it delivers much better Database In-Memory query performance than PMem-MM for smaller data warehouses.

Results comparison 2: 768 GB DRAM-only Cap High vs 768 GB DRAM-only Query Low vs 1.5 TB PMem-MM Query Low

In this section, we compared the results of Test 1.1.2 and Test 1.1.4, as shown in Figure 6. These tests demonstrate how the Query Low IM compression level affected the IMCS query performance of both the DRAM-only and the PMem-MM configurations. As shown in Figure 6, when we compare the results of Test 1.1.2 and Test 1.1.4, the PMem-MM test finished the query execution more than four times faster than the DRAM-only test when the Query Low compression level was applied.


As described in Test 1.1.2 (768 GB DRAM-only Query Low), for the DRAM-only configuration that was tested with the Query Low compression level, we could populate only 29 percent of the total schema size inside the IMCS. As a result, the database was unable to perform a full in-memory table scan and had to reach out to disk, resulting in increased database wait times. Thus, overall query processing took substantially longer. Meanwhile, as described in Test 1.1.4 (1.5 TB PMem-MM Query Low), for the PMem-MM configuration, we were able to populate all eight tables in the IMCS. As a result, the PMem-MM test was able to accomplish all the query processing inside main memory (IMCS and PGA) and finish the query execution more than four times faster than the DRAM-only test.

Next, we compared the results of Test 1.1.1 and Test 1.1.2, as shown in Figure 6. These tests demonstrate how the Capacity High and Query Low compression levels affected the IMCS query performance of the DRAM-only configuration. As shown in Figure 6, for the DRAM-only configuration, the Query Low test finished the query execution nearly six times slower (578 normalized units) than the Capacity High test (100 normalized units).

This demonstrates that a query-optimized IM compression level is not always the better option. It is better to use a space-savings-optimized IM compression level (Capacity High in this case) and populate the entire source schema inside the In-Memory area. That way, all IM scans and query processing can be accomplished inside the IMCS and PGA memory. Otherwise, when we use a more query-optimized compression level (Query Low in this case), a larger In-Memory area is required to hold the less-compressed data. The data may not fit entirely in the In-Memory area and may spill over to disk instead. As explained previously, when source data spills over to disk, full-table scans and query processing take significantly longer to finish because they incur significantly longer disk latencies. Therefore, we recommend that you perform memory sizing to determine the best IM compression level to use to achieve optimum In-Memory query performance. For the memory sizing methodology used in this guide, see Memory sizing.
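One simple way to sanity-check such sizing decisions is to compare the allocated and used space in the In-Memory area after population completes; the query below is a generic sketch for that purpose.

sqlplus / as sysdba <<'EOF'
-- The In-Memory area is split into a 1 MB data pool and a 64 KB metadata pool;
-- compare allocated versus used bytes in each pool after population completes
SELECT pool, alloc_bytes, used_bytes, populate_status
FROM   v$inmemory_area;
EOF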

Results comparison 3: 1.5 TB PMem-MM Cap High vs 1.5 TB PMem-MM Query Low

In this section, we compared the results of Test 1.1.3 and Test 1.1.4, as shown in the following figure. These tests demonstrate how the two compression levels affected the IMCS query performance of the PMem-MM configuration.

Figure 8. Test 1.1: All four test case results with 1 TB OLAP schema


As shown in the figure above, when we compare the results of Test 1.1.3 and Test 1.1.4 for the PMem-MM configuration, the two compression levels did not have any significant impact (126 units vs 132 units) on Database In-Memory query performance. As described in Test 1.1.3 (1.5 TB PMem-MM Cap High) and Test 1.1.4 (1.5 TB PMem-MM Query Low), for both PMem-MM compression tests, all eight tables were populated inside the IMCS. For both tests, the AWR did not report the PGA as being undersized (100 percent in-memory sort), and the I/O reported to disk was minimal (1 percent or less). Therefore, with enough memory resources, the overall dataset was not large enough for the compression algorithms alone to make any significant impact on the overall query processing time.

This demonstrates that for smaller data warehouses (1 TB in this case), when the entire dataset can be populated inside the IMCS with both query-optimized and capacity-savings-optimized IM compression levels, the former does not have any significant impact on Database In-Memory query performance when compared to the latter.

Test 1.2: 3 TB TPC-H-like or OLAP schema In-Memory query performance tests

Typically, the cost of traditional DRAM increases significantly at multi-terabyte capacity ranges because more higher-capacity individual RDIMMs or LRDIMMs are needed. Therefore, in this set of test cases, our goal was to test the Database In-Memory query performance of a much larger data warehouse schema that would demand multi-terabyte memory capacities and thereby justify the use of the Oracle Database In-Memory feature. Otherwise, as we observed from the previous results, without enough physical memory, the penalty of reaching out to disk has a huge impact on Database In-Memory query performance, and one may not be able to leverage the true performance benefit of the Database In-Memory feature.


Figure 9. Configurations overview: Test 1.2 - 3 TB OLAP schema query performance

For Test 1.2, as shown in the figure above, we configured two additional R740xd server configurations and conducted six tests across them, as described below.

R740xd server3 - DRAM-only configuration

This configuration has the following features:

• 1.5 TB of total main memory capacity using 24 x 64 GB DDR4 DIMMs

• We performed IMCS query performance tests using three IM compression levels, as described below:

▪ Test 1.2.1 (1.5 TB DRAM-only Cap High)

In this test case, we enabled the Capacity High IM compression level. A 920 GB inmemory_size is the minimum size needed to populate all eight source tables (3 TB schema) compressed with the Capacity High compression level. With 1.5 TB of total memory capacity in this test case, we were able to allocate 940 GB to inmemory_size and leave the balance of the memory space available for other components. As a result, in this test case, we were able to populate all eight tables inside the IMCS. In the rest of the guide, this test is referred to as '1.5 TB DRAM-only Cap High'.


▪ Test 1.2.2 (1.5 TB DRAM-only Cap Low)

In this test case, we enabled the Capacity Low IM compression level. A 1,200 GB inmemory_size is the minimum size needed to populate all eight source tables (3 TB schema) compressed with the Capacity Low compression level. With 1.5 TB of total memory capacity in this test case, we were able to allocate a maximum of 940 GB to inmemory_size because the balance of the memory space was required for other memory components. As a result, in this test case, we were able to populate two (LINEITEM and PARTSUPP) of the eight tables (80 percent of the total schema size) inside the IMCS. In the rest of the guide, this test is referred to as '1.5 TB DRAM-only Cap Low'.

▪ Test 1.2.3 (1.5 TB DRAM-only Query High)

In this test case, we enabled the Query High IM compression level. A 1,780 GB inmemory_size is the minimum size needed to populate all eight source tables (3 TB schema) compressed with the Query High compression level. With 1.5 TB of total memory capacity in this test case, we were able to allocate a maximum of 940 GB to inmemory_size because the balance of the memory space was required for other memory components. As a result, in this test case, we were able to populate only one (the LINEITEM table) of the eight tables (69 percent of the total schema size) inside the IMCS. In the rest of the guide, this test is referred to as '1.5 TB DRAM-only Query High'.

R740xd server4 - PMem-MM configuration

This configuration has the following features:

• 3 TB of total main memory capacity using 12 x 256 GB PMem-MM modules

• With 768 GB of PMem-MM front-end DRAM cache using 12 x 64 GB DDR4 DIMMs, this configuration has a DRAM:PMem-MM capacity ratio of 1:4.

• We performed IMCS query performance tests on the 3 TB total PMem-MM capacity (with 768 GB of front-end DRAM cache) using three IM compression levels:

▪ Test 1.2.4 (3 TB PMem-MM Cap High)

In this test case, the Capacity High IM compression level was enabled. A 920 GB inmemory_size is the minimum size needed to populate all eight source tables (3 TB schema) compressed with the Capacity High compression level. With 3 TB of total memory capacity in this test case, we were able to allocate 940 GB to inmemory_size and leave the balance of the memory space available for other components. As a result, in this test case, we were able to populate all eight tables inside the IMCS. In the rest of the guide, this test is referred to as '3 TB PMem-MM Cap High'.

▪ Test 1.2.5 (3 TB PMem-MM Cap Low)

In this test case, the Capacity Low IM compression level was enabled. A 1,200 GB inmemory_size is the minimum size needed to populate all eight source tables (3 TB schema) compressed with the Capacity Low compression level. With 3 TB of total main memory capacity in this test case, we were able to allocate 1,300 GB to inmemory_size and leave the balance of the memory space available for other components. As a result, in this test case, we were able to populate all eight tables inside the IMCS. In the rest of the guide, this test is referred to as '3 TB PMem-MM Cap Low'.

▪ Test 1.2.6 (3 TB PMem-MM Query High)

In this test case, the Query High IM compression level was enabled. 1,780 GB inmemory_size is the minimum size needed to populate all eight source tables (3 TB schema) compressed with the Query High compression level. With 3 TB total main memory capacity in this test case, we were able to allocate 1,780 GB to inmemory_size and still leave enough memory space available for other components. As a result, we were able to populate all eight tables inside the IMCS. In the rest of the guide, this test is referred to as ‘3 TB PMem-MM Query High’.

Note: For memory sizing methodology details, refer to Memory sizing. For details on all key database memory parameter sizes, refer to Table 10. A brief sketch of how these IM compression levels are applied at the table level follows this note.
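For readers who want to set up a comparable test, the following minimal sketch shows how an IM compression level and the In-Memory area size are typically applied. It is not a copy of our test scripts: the schema and table names are placeholders, and the 1,780 GB value simply mirrors Test 1.2.6.

# Minimal sketch (placeholder names, Test 1.2.6 sizing): size the In-Memory area and
# enable a source table for IMCS population with a chosen IM compression level.
sqlplus -s / as sysdba <<'SQL'
-- Size the In-Memory column store (takes effect after an instance restart)
ALTER SYSTEM SET inmemory_size = 1780G SCOPE=SPFILE;

-- Enable a source table for in-memory population with the Query High compression level
ALTER TABLE tpch.lineitem INMEMORY MEMCOMPRESS FOR QUERY HIGH PRIORITY CRITICAL;

-- After population, confirm what was loaded and at which compression level
SELECT segment_name, populate_status, inmemory_compression FROM v$im_segments;
SQL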

All six tests that make up Test 1.2 were conducted on a 3 TB OLAP (TPC-H-like) schema that was created on 6 x RAID1 SAS SSDs on the two server configurations, as shown in Figure 9. The schema was created using HammerDB’s custom scripts option and represented a scale factor (SF) of 3000. All configuration details are described in the Appendix.

For all six tests, using the HammerDB tool, we generated the 22-query set of OLAP queries and measured the in-memory query performance. The graph below shows the normalized results of the total query execution time required to process the 22-query set. All results are compared to the 1.5 TB DRAM-only Capacity High testing results.

Figure 10. Test 1.2: All six test case results with 3 TB OLAP schema

The following sections provide the different comparative analyses for the six test cases in Test 1.2.


Results comparison 1: 1.5 TB DRAM-only Cap High vs 3 TB PMem-MM Cap High

In this section, we compared the results of Test 1.2.1 and Test 1.2.4, as shown in Figure 10. These tests demonstrate how the Capacity High IM compression level affected the IMCS query performance of both the DRAM-only and the PMem-MM configurations. As seen in Figure 10, when we compare Tests 1.2.1 and 1.2.4, PMem-MM finished the 22-query set execution three percent slower than the DRAM-only test (103 as compared to 100 normalized units).

As described in Test 1.2.1 (1.5 TB DRAM-only Cap High) and Test 1.2.4 (3 TB PMem-MM Cap High), during both of these tests, all eight source tables were populated inside the IMCS. The following figure shows snippets from the AWR reports and the disk utilization graphs for both Test 1.2.1 and Test 1.2.4. Despite the much larger PGA size for the PMem-MM test (900 GB) as compared to the DRAM-only test (105 GB), as shown in the following figure, the PGA was still undersized according to the ADDM top findings. Therefore, this 3 TB OLAP schema represents a scenario in which the entire schema fits inside the In-Memory area, but the database still requires a significantly larger PGA working space to accomplish all of the processing in memory.

Figure 11. Test 1.2.1 and Test 1.2.4: AWR snippets and disk utilization graphs

Unlike the 1 TB OLAP tests, the ‘In-Memory sort %’ was not 100 percent for either test, as shown in the figure above. This resulted in a much higher I/O percentage to the TEMP space on disk (approximately 14 percent for temp reads and approximately 12 percent for temp writes in both tests) to accomplish the query sorting and processing. This is also validated by the OSWatcher disk utilization graphs, which show 100 percent utilization of the TEMP disk (sdf) for a significant period. For these reasons, the PMem-MM configuration’s overall query performance was nearly equal to that of the DRAM-only configuration for the Capacity High compression level tests.

This test demonstrates that as OLAP schema size increases (three terabytes and larger in our case), I/O to TEMP space on disk also increases. This is mainly because larger datasets require a larger memory space dedicated to the In-Memory area, which in turn may not leave sufficient headroom to allocate adequate space to the PGA memory area to accomplish all of the query sorting in memory. Also, I/O to TEMP space on disk adds significant database wait times compared to completing 100 percent of the data processing in memory. Due to these external factors, DRAM’s individual speed advantage over PMem-MM becomes less of a factor in the overall Database In-Memory performance. Therefore, for larger data warehouses, PMem-MM with its front-end DRAM-cache architecture can deliver overall DRAM-like In-Memory query performance.

Results comparison 2: 1.5 TB DRAM-only vs 3 TB PMem-MM: Cap Low and Query High tests

In this section, we compared the results of Test 1.2.2, Test 1.2.3, Test 1.2.5, and Test 1.2.6, as shown in the following figure. These tests demonstrate how the Capacity Low and Query High IM compression levels affected the IMCS query performance of both the DRAM-only and the PMem-MM configurations.

Figure 12. Test 1.2: All six test case results with 3 TB OLAP schema

As shown in the figure above, when we compare Test 1.2.2 with Test 1.2.5 and Test 1.2.3 with Test 1.2.6, in both comparisons the PMem-MM configuration (102 and 88 normalized units) finished the query execution nearly twice as fast as the DRAM-only configuration (196 and 203 normalized units).

These testing results reiterate some of the key observations from the 1 TB schema test results:

• Data spilling over to disk has a significant negative impact on the query performance of the In-Memory Database.

• Query-optimized compression levels are not always the best choice. It is better to use a space-savings-optimized compression level (IM Capacity High in this case) and bring the entire source schema into the In-Memory area than to use more query-optimized compression levels (IM Capacity Low and IM Query High in this case) that cause data to spill over to disk.

These results also demonstrate that PMem modules, which are available in larger capacities than DRAM DIMMs, can provide an advantage over DRAM-only configurations by accommodating larger and less-compressed datasets inside the IMCS. This, in turn, helps to significantly improve the query performance.


Results comparison 3: 3 TB PMem-MM: Capacity High vs Capacity Low vs Query High

In this section, we compared the results of Test 1.2.4, Test 1.2.5, and Test 1.2.6, as shown in Figure 12. These three tests demonstrate how the three compression levels affected the IMCS query performance of the PMem-MM configuration.

As shown in Figure 12, the PMem-MM Capacity High (103 normalized units) and Capacity Low (102 normalized units) compression tests finished the query execution at nearly the same time. Meanwhile, the PMem-MM Query High test (88 normalized units) finished the query execution much faster than both the PMem-MM Capacity High and the PMem-MM Capacity Low compression tests.

As described in Test 1.2.4 (3 TB PMem-MM Cap High), Test 1.2.5 (3 TB PMem-MM Cap Low), and Test 1.2.6 (3 TB PMem-MM Query High), during all three of these PMem-MM tests, all eight source tables were populated inside the IMCS. In the AWR reports, the ADDM findings show that the PGA was undersized, with nearly equal values of 26.30, 24.88, and 28.33 during the Capacity High, Capacity Low, and Query High tests, respectively. These ADDM values represent the percentage of average active sessions affected by the undersized PGA. Hence, the PGA was not a factor in the performance delta. The following table provides the key IM statistics from the AWR reports for the three tests:

Table 3. PMem-MM Capacity High, Capacity Low, Query High: Key IM statistics

Statistic                              | Capacity High   | Capacity Low    | Delta¹ | Query High      | Delta¹
IM scan (dynamic) rows                 | 105,282,024,771 | 105,279,322,875 | 0.00%  | 105,287,966,749 | 0.00%
IM scan (dynamic) task exec time       | 7,139,745,733   | 6,865,895,381   | 3.84%  | 5,171,779,905   | 27.6%
IM scan (dynamic) task submission time | 1,693,980,376   | 1,734,487,270   | -2.39% | 1,013,395,155   | 40.2%

¹ Percentage delta compared to the Capacity High value
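These statistics were taken from the AWR reports. Outside AWR, the same cumulative counters can be read from the dynamic performance views; the following is a minimal sketch, not part of our recorded test steps:

# Sketch: list the cumulative In-Memory scan statistics from V$SYSSTAT.
# AWR reports the per-interval deltas of these same counters.
sqlplus -s / as sysdba <<'SQL'
SET LINESIZE 120 PAGESIZE 200
COLUMN name FORMAT A50
SELECT name, value FROM v$sysstat WHERE name LIKE 'IM scan%' ORDER BY name;
SQL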

The table above shows that the two space-savings compression levels (Capacity High and Capacity Low) had nearly identical IM scan task times, with a delta of approximately 3 percent. However, for the query-optimized compression level, Query High, the IM scan tasks outperformed the two space-savings compression levels. During the Query High test, the ‘IM scan (dynamic) task submission time’ took 40.2 percent less time and the ‘IM scan (dynamic) task exec time’ took 27.6 percent less time than the respective times incurred by the Capacity High test. This proves that the compression level, specifically the query-optimized compression level, made the difference in the overall query performance between the three PMem-MM tests.

This testing demonstrates that for larger data warehouses, more query-optimized compression algorithms can achieve faster IM scan times than more capacity-savings-optimized compression algorithms. Thus, we can achieve better overall In-Memory query performance by using more query-optimized compression levels. Therefore, it is imperative to do memory capacity planning and component sizing to determine the most query-optimized compression level that can still accommodate all of the source data inside the In-Memory area, while leaving sufficient headroom for other Oracle and non-Oracle memory components.

See the Memory sizing section in the Appendix for the sizing methodology used in this design guide.

Results comparison 4: Best-case scenarios: 1.5 TB DRAM-only Cap High vs 3 TB PMem-MM Query High

In this section, we compared the results of Test 1.2.1 and Test 1.2.6, as shown in the following figure. These two tests demonstrate the best-case performance scenarios for both the DRAM-only and the PMem-MM configurations (IM Capacity High for DRAM-only and IM Query High for PMem-MM) with the 3 TB OLAP schema size.

Figure 13. Test 1.2: All six test case results with 3 TB OLAP schema

As shown in the figure above, the 3 TB PMem-MM Query High test (88 normalized units) outperformed the 1.5 TB DRAM-only Capacity High compression test (100 normalized units) by 12 percent.


Figure 14. Test 1.2.1 and Test 1.2.6: AWR snippets and disk utilization graphs

The figure above shows the AWR snippets and the disk utilization graphs for both the 1.5 TB DRAM-only Capacity High and the 3 TB PMem-MM Query High test cases. As shown above, for both the DRAM-only and the PMem-MM best-case performance scenarios, the ADDM findings show that the PGA was undersized. This resulted in an ‘In-memory sort %’ of less than 100 percent and caused a sizable query sort I/O percentage (13.7 to 14.4 percent for temp reads and 12.7 to 13.9 percent for temp writes) to the TEMP disk (sdf). This proves that neither DRAM’s higher speed over PMem-MM nor the PGA sizes made the difference in performance.

Instead, in this case, the In-Memory compression level itself made the major difference in the overall performance. This can be seen in the following figure, which compares some of the key In-Memory statistics of the two test cases.

Figure 15. Test 1.2.1 and Test 1.2.6: Key In-Memory statistics from AWR reports.

In the figure above, the ‘IM scan CUs memcompress for capacity high’ and ‘IM scan CUs memcompress for query high’ statistics show the two compression levels that were applied. The ‘IM scan CUs columns accessed’ statistic shows the number of In-Memory columns that were accessed during the entire 22-query set execution after all of the In-Memory optimizations were applied. If we compare that statistic with ‘IM scan CUs columns decompressed’, we see that, in the DRAM-only Capacity High compression test, 98.48 percent of the columns had to be decompressed before the queries could run. In the PMem-MM test, however, no In-Memory columns were decompressed (the statistic is missing in the figure above), since the compression level applied (Query High) is more query-optimized and its data can be scanned without first being decompressed. Because of the time and CPU cycles saved, the PMem-MM statistics ‘IM scan (dynamic) task submission time’ and ‘IM scan (dynamic) task exec time’ were 40.2 percent (1,013,395,155) and 27.6 percent (5,171,779,905) better, respectively, than the equivalent statistics (1,550,479,091 and 6,823,020,897) of the DRAM-only test case. This proves that the more query-optimized compression level was the more relevant factor affecting overall query performance.

This testing proves that Intel Optane PMem-MM, with its front-end DRAM-cache architecture, can outperform traditional DRAM-only configurations in In-Memory Database performance as the size of the OLAP schema increases. It does so by providing larger memory modules, and therefore larger effective memory capacities, which enables bringing more, less-compressed (Query High in this case) data into the IMCS. Meanwhile, for DRAM-only configurations at this OLAP schema size, getting better overall results would require increasing the total DRAM capacity in the system in order to use the more query-optimized compression levels. For even bigger OLAP schemas or data warehouses, the only options are to add more capacity through larger (and potentially more expensive) DRAM DIMMs or to suffer poor In-Memory query performance due to data spilling over to disk.

Therefore, in larger data warehouse environments, larger memory capacities are required to see the benefit of the Database In-Memory feature, even when using the most space-savings-optimized IM compression levels. In such multi-terabyte OLAP environments, PMem-MM with its larger memory capacities helps not only to populate larger datasets but also to bring less-compressed source schemas inside the In-Memory area. Due to this capacity advantage over DRAM-only configurations, PMem-MM with its front-end DRAM-cache architecture can deliver DRAM-like or better In-Memory query performance for larger data warehouse environments.

Use Case 2: IMCS load performance with and without IMFS area

As described briefly in the Introduction section, when we enable the In-Memory FastStart (IMFS) area, on subsequent database restarts, IMCS data is populated from the IMFS area rather than from the source row-based tables. Since the In-Memory Compression Units (IMCUs) in the IMFS disk area are already compressed and in the columnar format, this can greatly reduce the reload time and the amount of CPU consumed during the IMCS data reload process.

Therefore, to study the advantages of the IMFS feature, the goal of this use case was to capture and compare the load times and the server’s CPU utilization:

• When the IMFS area is enabled: IMCS data is loaded from the IMFS disk area configured on different fast-tiered persistent storage, and

• When the IMFS area is not enabled: IMCS data is loaded from the source row-based tables (baseline configuration)


Configuration overview

The FastStart area is a designated tablespace where In-Memory objects, or In-Memory Compression Units (IMCUs), are stored. As shown in the configuration overview in the following figure, we created the IMFS tablespace on three different fast-tiered persistent storage types:

• SAS-SSDs

• NVMe-PCIe-SSDs

• Intel Optane DCPMM as persistent block-addressable storage over App Direct Mode

Note: In the remainder of this guide, Intel Optane PMem configured as persistent block-addressable storage over App Direct Mode is referred to as PMem-AD.

Figure 16. In-Memory FastStart load performance configuration overview

Important: At the time of this writing, storing Oracle Database 19c datafiles on Intel Optane PMem as a persistent device (running in App Direct Mode) is NOT supported, including using it as the IMFS disk area. We highly recommend checking the Oracle support website for the latest updates before using PMem as a persistent device in a production environment with Oracle Database 19c or older.


To test the IMFS-to-IMCS load performance from SAS-SSDs, we created a new IMFS tablespace on the existing +DATA disk group (DATA_DG) on SAS-SSDs. This DATA_DG also contains the tablespace for the 1 TB TPC-H-like source schema, which was originally created on 4 x RAID1 local SAS-SSDs.

To test the IMFS-to-IMCS load performance from NVMe-PCIe-SSDs, we created an IMFS tablespace on a new NVMe disk group called +NVMe_DG. Since we do not have a PERC hardware RAID option for NVMe SSDs, for high availability (HA) and performance, we created this disk group using ASM’s ‘normal’ redundancy (two-way mirroring) across four local NVMe-PCIe-SSDs.

To test the IMFS-to-IMCS load performance from PMem-AD, we created an IMFS tablespace on a new PMem-AD disk group called +DCPMM_DG. Since we do not have a PERC hardware RAID option for PMem modules configured as a persistent device running in App Direct Mode, for high availability (HA) and performance, we created this disk group using ASM’s ‘normal’ redundancy (two-way mirroring) across the twelve non-interleaved PMem devices, or regions, as shown in the following figure:

Figure 17. Oracle ASM Disk group design with PMem modules running in App Direct Mode

As shown in the figure above, when PMem modules are configured in App Direct, non-interleaved mode in the UEFI BIOS, each PMem module is seen by the operating system as a separate region or device.

Note: At the time of this writing, Oracle Database 19c and older do not support the direct-access (DAX) feature needed to take full advantage of the performance of PMem as a byte-addressable persistent memory device. Hence, this design guide focuses mainly on how to configure PMem as persistent block-addressable storage over App Direct Mode with Oracle databases and places less emphasis on its performance.

As mentioned in the note above, since we cannot yet test PMem as a byte-addressable persistent memory device with Oracle Database 19c, we created namespaces (block-addressable persistent storage) on the PMem regions using the legacy ‘sector’ namespace mode. The following command shows an example of how to create a persistent storage block device, or namespace, on PMem region 0 in ‘sector’ mode:


Note: The DCPMM keyword used in the examples in this use case refers to PMem.

[root@r740xd~]# ndctl create-namespace -r region0 -n DCPMM-s1a7 -m sector
{
  "dev":"namespace0.0",
  "mode":"sector",
  "size":270317875200,
  "uuid":"2008bf49-b244-45d9-8595-598a509904f0",
  "sector_size":4096,
  "blockdev":"pmem0s",
  "name":"DCPMM-s1a7"
}

DCPMM-s1a7 above is a user-friendly name indicating that this device is located at CPU Socket 1 (s1) and in memory slot A7 (a7). User-friendly naming helps to identify each PMem module’s physical location and to configure it appropriately in the ASM disk group design shown in Figure 17. The above command creates a PMem persistent storage block device with a default sector size of 4,096 bytes (4 K), which is enumerated as ‘/dev/pmem0s’ within the operating system.

Note: Do not create any partitions on the PMem-AD block devices. ASM disk group creation fails if the PMem-AD devices created in ‘sector’ mode have any partitions on them.

Next, we set the required Oracle ownership and permissions on the PMem-AD block devices in the udev rules file /etc/udev/rules.d/60-oracle-asmdevices.rules, as shown below (trimmed output):

# PMem located in CPU Socket 1, memory slot A7, assigned to PMem ASM Failure group 1
KERNEL=="pmem*", SUBSYSTEM=="block", ATTRS{uuid}=="2008bf49-b244-45d9-8595-598a509904f0", SYMLINK+="oracleasm/disks/ora-pmem00-disk1-1", OWNER="grid", GROUP="asmadmin", MODE="0660"

# PMem located in CPU Socket 2, memory slot B7, assigned to PMem ASM Failure group 2
KERNEL=="pmem*", SUBSYSTEM=="block", ATTRS{uuid}=="9a4e964d-9975-4500-a7da-ba75ce15c286", SYMLINK+="oracleasm/disks/ora-pmem06-disk1-2", OWNER="grid", GROUP="asmadmin", MODE="0660"

You can obtain the unique uuid for the PMem-AD devices seen above by using the following command (trimmed output):

[root@r740xd~]# ndctl list -N | grep -e "uuid\|blockdev\|name"
"uuid":"2008bf49-b244-45d9-8595-598a509904f0",
"blockdev":"pmem0s",
"name":"DCPMM-s1a7",          <-- CPU Socket 1, Memory Slot A7
"uuid":"9a4e964d-9975-4500-a7da-ba75ce15c286",
"blockdev":"pmem6s",
"name":"DCPMM-s2b7",          <-- CPU Socket 2, Memory Slot B7


Once we apply the udev rules using the ‘# udevadm trigger’ command, the required ownership, permissions, and SYMLINKs are created, as shown below (trimmed output):

[root@r740xd]# ll /dev/pmem*
brw-rw----. 1 grid asmadmin 259, 8 Apr 18 04:55 /dev/pmem0s
brw-rw----. 1 grid asmadmin 259, 18 Apr 18 04:55 /dev/pmem6s

[root@r740xd]# ll /dev/oracleasm/disks/*
lrwxrwxrwx. 1 root root 13 Apr 18 04:55 /dev/oracleasm/disks/ora-pmem00-disk1-1 -> ../../pmem0s
lrwxrwxrwx. 1 root root 13 Apr 18 04:55 /dev/oracleasm/disks/ora-pmem06-disk1-2 -> ../../pmem6s

The following figure shows the creation of the PMem-AD ‘normal’ redundancy ASM disk group with two failure groups using ASMCA.

Figure 18. PMem-AD ‘normal’ redundancy disk group creation using ASMCA
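As an alternative to the ASMCA GUI shown above, the same disk group can be created from SQL*Plus on the ASM instance. The following is a hedged sketch only: it lists just the two devices named in the udev rules above, whereas the full configuration spreads six PMem devices across each of the two failure groups.

# Sketch: create the PMem-AD 'normal' redundancy disk group from the ASM instance
# (run as the grid user with the ASM environment set; only two of the twelve devices shown).
sqlplus -s / as sysasm <<'SQL'
CREATE DISKGROUP DCPMM_DG NORMAL REDUNDANCY
  FAILGROUP pmem_fg1 DISK '/dev/oracleasm/disks/ora-pmem00-disk1-1'
  FAILGROUP pmem_fg2 DISK '/dev/oracleasm/disks/ora-pmem06-disk1-2'
  ATTRIBUTE 'compatible.asm' = '19.0', 'compatible.rdbms' = '19.0';
SQL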

Even though the PMem-AD ‘sector’ mode has a default sector size of 4 K, during the ASMCA disk group creation we do not have to explicitly set the sector and logical sector sizes to 4 K. Oracle automatically detects and sets them appropriately, as shown in the screenshot below:

Figure 19. PMem-AD ASM disk group sector sizes using ‘sector’ namespace devices
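The same information can also be confirmed from the ASM views rather than a screenshot; a minimal sketch, assuming the DCPMM_DG name used in this guide, follows:

# Sketch: confirm the sector sizes that ASM detected for the PMem-AD disk group.
sqlplus -s / as sysasm <<'SQL'
SELECT name, sector_size, logical_sector_size FROM v$asm_diskgroup WHERE name = 'DCPMM_DG';
SQL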


As described briefly in the Intel Optane PMem operating modes section, ‘sector’ mode helps to mitigate the ‘torn writes’ atomicity issue by using the Block Translation Table (BTT) driver module on top of the underlying PMem driver module. However, it adds an extra layer, and therefore overhead, to the application’s block storage I/O path. Hence, purely for the purpose of this performance study, we also configured and tested PMem-AD in ‘raw’ namespace mode. All the configuration steps provided for PMem-AD configured in ‘sector’ mode apply to ‘raw’ mode as well, except that when we create the namespaces, we use the ‘-m raw’ option (see the example after the following figure). The block devices are enumerated like the ‘sector’ mode devices except with no trailing ‘s’ (for example, ‘/dev/pmem0’). Also, the default physical and logical sector sizes of the ‘raw’ block devices are 512 bytes. The following ASMCMD screenshot, which shows the 512-byte sector sizes for the ‘DCPMM_DG’ configured with ‘raw’ PMem-AD devices, confirms that Oracle automatically detects and configures the ASM disk group sector sizes:

Figure 20. PMem-AD ASM disk group sector sizes using ‘raw’ namespace devices
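For completeness, the following sketch shows the equivalent namespace creation in ‘raw’ mode. Apart from the mode flag it mirrors the earlier ‘sector’ example; the existing namespace on the region must be removed first.

# Sketch: recreate the namespace on region0 in 'raw' mode.
ndctl disable-namespace namespace0.0
ndctl destroy-namespace namespace0.0
ndctl create-namespace -r region0 -n DCPMM-s1a7 -m raw   # enumerated as /dev/pmem0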

Test methodology

To test this use case, we created a 1 TB TPC-H-like source schema on 4 x PERC RAID1 SAS-SSDs. All eight TPC-H-like source tables were enabled with INMEMORY PRIORITY CRITICAL (to load immediately) and the CAPACITY HIGH compression level. We chose this compression level because it is the most CPU-intensive and takes the longest time to populate into the In-Memory Column Store (IMCS). The database server had 768 GB of DRAM, and inmemory_size was set to 420 GB. During the baseline test, we populated these row-based tables from SAS-SSDs into the IMCS and captured the population time and the server CPU utilization during the load operation.

Next, to test the IMFS feature, we created a 1 TB dedicated tablespace as the FastStart area on each of the following fast-tiered storage ASM disk groups (DG):

• +DATA (SAS-SSDs)

• +NVMe_DG

• +DCPMM_DG (sector)

• +DCPMM_DG (raw)

Oracle Database automatically manages the process of populating the In-Memory Compression Units (IMCUs) from the IMCS into the FastStart tablespace. Enabling a tablespace as the FastStart area is not, by itself, enough for the database to write the compressed, columnar IMCUs into the FastStart area; writing to the IMFS area requires a data population or repopulation into the IMCS. Therefore, during each IMFS test case, we first enabled the FastStart tablespace using the DBMS_INMEMORY_ADMIN.FASTSTART_ENABLE procedure and then (re)populated the IMCS data from the row-based tables so that the database would trigger the loading of the FastStart area (see the sketch below). For each fast-tiered storage type, once its dedicated FastStart tablespace was fully populated with all the IMCUs, we restarted the database to force the repopulation of the IMCS from the IMFS disk area. During this IMFS-to-IMCS population, we captured the load time and the server CPU utilization and compared them to the baseline test case to study the performance benefits of the IMFS feature.

All configuration details are described in the Appendix.
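The following is a minimal sketch of the IMFS enablement step described above. The tablespace name and the repopulation call are illustrative; our tests used a dedicated 1 TB tablespace per storage type and relied on the population triggered by the PRIORITY CRITICAL setting.

# Sketch: create a dedicated FastStart tablespace on the NVMe disk group and enable IMFS on it.
sqlplus -s / as sysdba <<'SQL'
CREATE BIGFILE TABLESPACE imfs_nvme_ts DATAFILE '+NVMe_DG' SIZE 1T;

-- Designate the tablespace as the In-Memory FastStart area
EXEC DBMS_INMEMORY_ADMIN.FASTSTART_ENABLE('IMFS_NVME_TS');

-- Force an IMCS repopulation of one table so that IMCUs are written to the FastStart area
EXEC DBMS_INMEMORY.REPOPULATE('TPCH', 'LINEITEM');
SQL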

Performance results

The following figure shows the time required for the IMCS to populate from the row-based source tables residing on SAS-SSDs (the baseline configuration) versus the time required to populate the same dataset from the FastStart tablespaces residing on SAS-SSDs, NVMe-PCIe-SSDs, PMem-AD in ‘sector’ namespace mode, and PMem-AD in ‘raw’ namespace mode.

Figure 21. IMCS population: Duration - Baseline vs IMFS on fast-tiered storage

Test 2.1: IMCS load duration: Baseline vs IMFS on fast-tiered storage

The results in the figure above clearly show the advantage of the IMFS feature. The IMCS populated nearly four times faster from the SAS-SSD FastStart area (26 normalized units), and nearly five times faster from the NVMe-PCIe-SSD and PMem-AD (raw and sector) FastStart areas (22 normalized units), than it populated from the original source row-based tables (100 units) residing on the SAS-SSDs.

Key observation: The In-Memory FastStart (IMFS) feature can provide a significant boost in the load time of the IMCS data whenever the database needs to restart.

Test 2.2: IMFS-to-IMCS population: SAS-SSDs vs NVMe-PCIe-SSDs vs PMem-AD

We did not see much difference in the load times (26 versus 22 normalized units) when we compared the performance of the different fast-tiered storage types. This is because, before loading from the IMFS area, the database validates the IMCUs against the source schema (which is on SAS-SSDs in this case) for any modifications, to ensure transactional consistency. If it detects no modifications, it reuses the IMCUs and loads them directly from the FastStart area into the IMCS; otherwise, the IMCS is loaded from the source row-based schema. Therefore, the speed or latency of the individual fast-tiered storage was not a big factor, since the performance was dependent on the reads from the source schema residing on SAS-SSDs.

The following figure, which uses the NVMe IMFS test case as an example, shows this validation against the source schema that resides on 4 x RAID1 SAS-SSD disks (sdb, sdc, sdd, and sde are 100 percent busy) and reveals why each fast-tiered storage disk type had nearly the same load performance:

Figure 22. Disk utilization: IMCS load from IMFS on NVMe-PCIe-SSDs

Key observation: The overall load performance of the IMCS from the IMFS area depends on the type of storage disk on which the source data resides, because the database validates the IMFS data against the source data before populating it into the IMCS. Therefore, for the IMFS disk area, choose the same disk type as the one on which the source database is placed.

Test 2.3: IMCS population: CPU Utilization - Baseline vs IMFS on fast-tiered storage

Next, we compared the database server’s CPU utilization during the IMCS load operations from the source row-based tables on SAS-SSDs (baseline) with that from the FastStart area on the different fast-tiered storage disks. The following figure shows the CPU utilization results:

Figure 23. IMCS population: CPU Utilization - Baseline vs IMFS on fast-tiered storage

As shown in the figure above, the IMFS feature significantly reduces the database server’s CPU utilization during the IMCS load operation, since the IMCUs are already compressed and in columnar format. Loading the IMCS from the IMFS area on fast-tiered storage disks reduced the CPU utilization by up to ten times compared to loading the IMCS from the source row-based schema.

Key observation: The IMFS feature can significantly reduce CPU usage during the IMCS load operation.

The load duration and CPU utilization results from Tests 2.1 and 2.3, respectively, validate the point that, without an IMFS disk area, converting the row-based tables into columnar format and compressing them each time the database (re)starts is a very CPU-intensive and time-consuming process, especially with larger data warehouses. Hence, in general, when using the Database In-Memory feature, we recommend that you reserve additional disk space for the IMFS area (twice the size of inmemory_size is recommended), ideally on the same storage disk type as the one on which the source database resides.


References

The following Dell Technologies documentation provides additional and relevant information. Access to these documents depends on your login credentials. If you do not have access to a document, contact your Dell Technologies representative.

• How to deploy Oracle 12c Release 2 standalone Database on RHEL 7.x

• Dell EMC Intel Optane PMem User's Guide

• NVMe on RHEL7

• PowerEdge R740xd Rack Server

• Dell Technologies Solutions for Oracle

The following Intel documentation provides additional and relevant information.

• Intel Optane persistent memory: Quick Start Guide

• Quick Start Guide: Provision Intel® Optane™ persistent memory

The following Oracle documentation provides additional and relevant information.

• Blog: Persistent Memory and Oracle Linux

• Blog: Persistent Memory Primer

• Oracle Database In-Memory with Oracle Database 19c: Technical Overview

• Oracle Database 19c Database In-Memory Guide

• Oracle Database 19c Installation Guide for Linux

• File Systems and Devices on Persistent Memory (PMEM) in Database Servers May Cause Database Corruption (Doc ID 2608116.1)

The following pmem.io documentation provides additional and relevant information.

• Persistent Memory Documentation

• Persistent Memory Programming: Documentation for ndctl and daxctl

• Intel Optane PMem Management Utility: IPMCTL User Guide

The following HammerDB documentation provides additional and relevant information.

• HammerDB Documentation

• HammerDB Benchmarking toolkit download link


Appendix

Table 4. PowerEdge server component details

Component | Description
Database server chassis | R740xd with up to 24 x 2.5” drive bays, including a maximum of 12 NVMe drives
Processor | 2 x Intel Xeon Platinum 8280M 2.7 GHz, 28C/56T, 10.4 GT/s, Turbo
DRAM | 384 GB [12 x 32 GB RDIMMs 2933 MT/s DR]; 768 GB [12 x 64 GB RDIMMs 2933 MT/s DR]; 1.5 TB [24 x 64 GB RDIMMs 2933 MT/s DR]
Intel Optane PMem [1:4 DRAM:PMem ratio] | 1.5 TB [12 x 128 GB] + 384 GB DRAM (cache); 3 TB [12 x 256 GB] + 768 GB DRAM (cache); FW: 01.02.00.5395
RAID controller | PERC H740P Adapter; FW: 50.9.2-2938
Local disks - Operating system | 2 x 1.2 TB TOSHIBA 10K RPM SAS 12 Gbps 512n 2.5” HDDs [1 x RAID1 Virtual Disk (VD)]; FW: EF05
Local disks - 1 TB TPC-H-like schema (DATA) | 8 x 960 GB TOSHIBA SAS SSDs Mixed Use 12 Gbps 512e 2.5” PM5-V drive, 3 DWPD, 5256 TBW [4 x RAID1 VDs]; FW: B01C
Local disks - 3 TB TPC-H-like schema (DATA) | 12 x 960 GB TOSHIBA SAS SSDs Mixed Use 12 Gbps 512e 2.5” PM5-V drive, 3 DWPD, 5256 TBW [6 x RAID1 VDs]; FW: B01C
Local disks - REDO 1 and REDO 2 | 4 x 1.92 TB TOSHIBA SAS SSDs Mixed Use 12 Gbps 512e 2.5” PM5-V drive, 3 DWPD, 10512 TBW [2 x RAID1 VDs]; FW: B01C
Local disks - NVMe PCIe SSDs [IMFS disk area] | 4 x 3.2 TB Intel NVMe Mixed Use 2.5” SFF drives, U.2, P4610; FW: VDV1DP21

Operating system settings

The settings provided in the following table were applied during all use cases and test cases.

Table 5. Operating system settings

Parameter / component | Value
Operating system | Oracle Linux 7.7 with the Unbreakable Enterprise Kernel (UEK) Release 5 Update 2 [4.14.35-1902.0.18.el7uek.x86_64]
I/O scheduler | deadline (default operating system setting)
NUMA and Transparent HugePages settings (/etc/default/grub); /etc/sysconfig/network; user limits (/etc/security/limits.d/oracle-database-preinstall-19c.conf); kernel parameters (/etc/sysctl.d/99-oracle-database-preinstall-19c-sysctl.conf) | All values as set by the Oracle Linux Oracle Database 19c pre-install RPM [oracle-database-preinstall-19c-1.0-1.el7.x86_64.rpm]
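The preinstall RPM referenced in Table 5 can be applied with a single command; the following is a minimal sketch of how such a host might be prepared and verified (the package and kernel names are those listed above):

# Sketch: apply the Oracle Database 19c preinstall RPM on Oracle Linux 7 and
# confirm the UEK kernel release used in this guide.
yum -y install oracle-database-preinstall-19c
uname -r          # expected: 4.14.35-1902.0.18.el7uek.x86_64
cat /etc/sysctl.d/99-oracle-database-preinstall-19c-sysctl.conf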

The following table shows the vm.nr_hugepages settings applied to each test case in the /etc/sysctl.d/99-oracle-database-preinstall-19c-sysctl.conf file. The values were determined and set using Oracle’s hugepages_settings.sh script, which is available in My Oracle Support note 401749.1.

Table 6. HugePages settings applied to the Use Case 1 and Use Case 2 test cases

Compression   | UC1, 1 TB OLAP: 768 GB DRAM-only | UC1, 1 TB OLAP: 1.5 TB PMem-MM | UC1, 3 TB OLAP: 1.5 TB DRAM-only | UC1, 3 TB OLAP: 3 TB PMem-MM | UC2: All tests
Capacity High | 250,887 (Test 1.1.1) | 250,887 (Test 1.1.3) | 614,408 (Test 1.2.1) | 456,975 (Test 1.2.4) | 250,887
Capacity Low  | N/A | N/A | 614,408 (Test 1.2.2) | 456,975 (Test 1.2.5) | N/A
Query High    | N/A | N/A | 614,408 (Test 1.2.3) | 456,975 (Test 1.2.6) | N/A
Query Low     | 250,887 (Test 1.1.2) | 614,407 (Test 1.1.4) | N/A | N/A | N/A
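As a brief illustration of how one of these values is applied (using the 456,975 pages set for the 3 TB PMem-MM tests; for any other system the value should still be derived with Oracle's hugepages_settings.sh script):

# Sketch: persist and activate the HugePages count used for the 3 TB PMem-MM test cases.
echo "vm.nr_hugepages = 456975" >> /etc/sysctl.d/99-oracle-database-preinstall-19c-sysctl.conf
sysctl --system                        # reload the sysctl configuration files
grep HugePages_Total /proc/meminfo     # confirm that the pages were reserved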

Memory sizing

For the different combinations of physical memory sizes, schema sizes, and IM compression levels we tested, our goal for each test case was to populate as many TPC-H-like schema tables inside the In-Memory area as possible. Ideally, we would prefer to populate all eight schema tables generated by the HammerDB TPC-H-like schema inside the In-Memory area. However, for a given schema size, the different In-Memory compression levels have different impacts on the space occupied by any particular schema table inside the In-Memory area. Hence, the following steps show the methodology used to perform the memory sizing for all the test cases described in this guide.

1. For the two schema sizes we tested with (the 1 TB and 3 TB TPC-H-like schemas), during trial runs we first determined the minimum inmemory_size parameter value needed to populate the entire schema (all eight tables) inside the In-Memory area. The values determined are listed in the following figure. From the outset, this step helps to determine which IM compression levels will and will not fit the entire schema inside the In-Memory area for a given physical memory size.


Figure 24. Minimum inmemory_size needed to populate entire schema In-Memory

2. In addition to the inmemory_size, however, we still need to provision the other Oracle memory components (the PGA and the other components of the SGA) and the non-Oracle components (the operating system) within the available physical memory space. Hence, for a given physical memory capacity, we next assigned percentage ranges for all the memory components that are practical and acceptable in an Oracle Database environment, as shown below.

• Physical memory capacity (100 percent)
  o Non-Oracle components (operating system): 5 to 15 percent of total physical memory size
  o Total SGA and PGA: 85 to 95 percent of total physical memory size
    ▪ PGA limit: 30 to 50 percent of total SGA and PGA size
      • PGA target: 50 percent of PGA limit size
    ▪ SGA target and max: 50 to 70 percent of total SGA and PGA size
      • In-Memory area: 75 to 85 percent of SGA target size
      • Other SGA (buffer cache): 15 to 25 percent of SGA target size

3. Therefore, starting with the minimum inmemory_size from step 1 and adding the sizes of all other memory components from step 2, we were able to size memory for each test case and determine the minimum and maximum values for each memory component for a given physical memory capacity. Once we determined the maximum size we could allocate to inmemory_size for a test case, we could then determine the maximum number of source schema tables we could populate inside the In-Memory area. As an example, the following figure shows the memory sizing done during Test 1.1.1, which had 768 GB of DRAM-only capacity and used the Capacity High IM compression level to populate the 1 TB TPC-H-like schema inside the In-Memory area.


Figure 25. Memory sizing for Test 1.1.1 with 768 GB DRAM-only, Capacity High IM compression, and 1 TB OLAP schema

As seen in the figure above, during Test 1.1.1 we were able to assign 330 GB to the In-Memory area, which still allowed enough headroom (768 GB - 330 GB = 438 GB) to allocate reasonable sizes to the other memory components using the percentage-range methodology provided in step 2 above. Within this 330 GB (the minimum size required, as shown in Figure 24), we were able to populate the entire schema inside the IMCS for Test 1.1.1.
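One way to read the Test 1.1.1 numbers in Table 9 against the percentage ranges from step 2 is sketched below; treating sga_target plus pga_aggregate_limit as the 'Total SGA and PGA' figure is our interpretation for this illustration, not a statement from Oracle documentation.

# Sketch: sanity-check the Test 1.1.1 sizing (768 GB DRAM) against the step-2 percentage ranges.
TOTAL=768; SGA_TARGET=420; PGA_TARGET=125; PGA_LIMIT=250; INMEM=330
pct() { awk -v a="$1" -v b="$2" 'BEGIN { printf "%.1f%%\n", a / b * 100 }'; }
echo "In-Memory area of SGA target : $(pct $INMEM $SGA_TARGET)"                         # ~78.6% (75-85%)
echo "Other SGA of SGA target      : $(pct $((SGA_TARGET - INMEM)) $SGA_TARGET)"        # ~21.4% (15-25%)
echo "PGA target of PGA limit      : $(pct $PGA_TARGET $PGA_LIMIT)"                     # 50.0%
echo "PGA limit of SGA+PGA         : $(pct $PGA_LIMIT $((SGA_TARGET + PGA_LIMIT)))"     # ~37.3% (30-50%)
echo "SGA target of SGA+PGA        : $(pct $SGA_TARGET $((SGA_TARGET + PGA_LIMIT)))"    # ~62.7% (50-70%)
echo "SGA+PGA of physical memory   : $(pct $((SGA_TARGET + PGA_LIMIT)) $TOTAL)"         # ~87.2% (85-95%)
echo "Headroom left for the OS     : $(pct $((TOTAL - SGA_TARGET - PGA_LIMIT)) $TOTAL)" # ~12.8% (5-15%)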

Using the above memory sizing methodology, Table 9, Table 10, and Table 11 in the Oracle Database parameter settings section provide the values set for the key memory parameters for both use cases tested in this design guide.

Oracle ASM disk group settings

Table 7. Oracle ASM disk group settings

Use case (UC) | Local disks | ASM disk group name | Redundancy | Datafiles, tablespace (TS) | Striping
UC1 - all 1 TB test cases; UC2 - Baseline | 4 x RAID1 960 GB SAS-SSDs | +DATA | External | Data, TEMP | Coarse
UC2 - IMFS - SAS-SSDs | 4 x RAID1 960 GB SAS-SSDs | +DATA | External | Data, TEMP, IMFS_SSD_TS | Coarse
UC2 - IMFS - NVMe | 4 x 3.2 TB NVMe PCIe SSDs | +NVMe_DG | Normal | IMFS_NVMe_TS | Coarse
UC2 - IMFS - PMem | 12 x 256 GB DCPMMs | +DCPMM_DG | Normal | IMFS_DCPMM_TS | Coarse
UC1 - all 3 TB test cases | 6 x RAID1 SAS-SSDs | +DATA | External | Data, TEMP (on +REDO1) | Coarse
All use cases | 1 x RAID1 SAS-SSDs | +REDO1 | External | Redo log groups 1 and 3 | Fine
All use cases | 1 x RAID1 SAS-SSDs | +REDO2 | External | Redo log groups 2 and 4 | Fine

Oracle Database parameter settings

We installed Oracle Database 19c (19.3.0.0) Grid Infrastructure for a Standalone Server and Oracle Database 19c (19.3.0.0) Single Instance Database.

The following table shows the database settings applied to both Use Case 1 and Use Case 2.

Table 8. Oracle Database parameter settings applied to both Use Case 1 and Use Case 2

Parameter / component | Value (* = default database setting)
db_32k_cache_size | 32 GB
db_file_multiblock_read_count | 4
use_large_pages | TRUE*
filesystemio_options | setall
audit_trail | none
processes | 8960*
transactions | 14907*
open_cursors | 8000
sessions | 13552*
session_cached_cursors | 50*
resource_manager_plan | <empty/null>*
parallel_degree_policy | AUTO
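A minimal sketch of how a few of the non-default parameters above might be set; SCOPE=SPFILE is used so the values persist across the instance restarts performed between tests:

# Sketch: apply a few of the non-default instance parameters from Table 8.
sqlplus -s / as sysdba <<'SQL'
ALTER SYSTEM SET db_32k_cache_size = 32G SCOPE=SPFILE;
ALTER SYSTEM SET db_file_multiblock_read_count = 4 SCOPE=SPFILE;
ALTER SYSTEM SET filesystemio_options = 'SETALL' SCOPE=SPFILE;
ALTER SYSTEM SET parallel_degree_policy = 'AUTO' SCOPE=SPFILE;
SQL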

The following table shows the key database memory settings applied to each of the Use Case 1 - Test 1.1 test cases.

Table 9. Oracle Database memory parameter settings applied to Use Case 1 - Test 1.1 test cases (1 TB TPC-H-like schema, scale factor 1000)

Database parameter | Test 1.1.1 Cap High (768 GB DRAM-only) | Test 1.1.2 Query Low (768 GB DRAM-only) | Test 1.1.3 Cap High (1.5 TB PMem-MM) | Test 1.1.4 Query Low (1.5 TB PMem-MM)
inmemory_size | 330 GB | 430 GB | 330 GB | 960 GB
sga_target | 420 GB | 490 GB | 420 GB | 1,200 GB
sga_max_size | 490 GB | 500 GB | 490 GB | 1,200 GB
pga_aggregate_target | 125 GB | 64 GB | 125 GB | 140 GB
pga_aggregate_limit | 250 GB | 128 GB | 250 GB | 280 GB

The following table shows the database memory settings applied to each of the Use Case 1 - Test 1.2 test cases.


Table 10. Oracle Database memory parameter settings applied to Use Case 1 - Test 1.2 test cases (3 TB TPC-H-like schema, scale factor 3000)

Database parameter | Test 1.2.1 Cap High (1.5 TB DRAM-only) | Test 1.2.2 Cap Low (1.5 TB DRAM-only) | Test 1.2.3 Query High (1.5 TB DRAM-only) | Test 1.2.4 Cap High (3 TB PMem-MM) | Test 1.2.5 Cap Low (3 TB PMem-MM) | Test 1.2.6 Query High (3 TB PMem-MM)
inmemory_size | 920 GB | 940 GB | 940 GB | 940 GB | 1,300 GB | 1,780 GB
sga_target | 1,150 GB | 1,200 GB | 1,200 GB | 1,200 GB | 1,500 GB | 1,990 GB
sga_max_size | 1,200 GB | 1,200 GB | 1,200 GB | 1,200 GB | 1,550 GB | 2,200 GB
pga_aggregate_target | 105 GB | 105 GB | 105 GB | 900 GB | 600 GB | 400 GB
pga_aggregate_limit | 210 GB | 210 GB | 210 GB | 1,800 GB | 1,200 GB | 800 GB

The following table shows the database settings applied to all of the test cases in Use Case 2.

Table 11. Oracle Database parameter settings applied to all test cases in Use Case 2

Parameter | Compression | Value/Size
inmemory_size | Capacity High | 420 GB
sga_target | Capacity High | 490 GB
sga_max_size | Capacity High | 490 GB
pga_aggregate_target | Capacity High | 64 GB
pga_aggregate_limit | Capacity High | 128 GB
IMFS tablespaces | N/A | 1 TB


HammerDB settings

We used benchmarking tool version HammerDB-3.3-Linux.tar.gz for testing. The following table shows the HammerDB schema build options selected for the different use cases.

Table 12. HammerDB settings: TPC-H-like schema creation

Use case | HammerDB build option
Use Case 1 - 1 TB schema | Scale Factor 1000
Use Case 1 - 3 TB schema | Scale Factor 3000*
Use Case 2 - 1 TB schema | Scale Factor 1000

* The HammerDB GUI option does not support this scale factor. The schema was built manually using the SQL*Loader utility (sqlldr) and manually generated TPC-H control files.
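As an illustration only (our control files are not reproduced here), a direct-path SQL*Loader invocation for one of the generated tables might look like the following; the connect string and file names are hypothetical:

# Sketch: direct-path load of one TPC-H-like table from a flat file
# (tpch/tpch@ORCL, lineitem.ctl, and lineitem.tbl are placeholder names).
sqlldr userid=tpch/tpch@ORCL control=lineitem.ctl data=lineitem.tbl direct=true log=lineitem_load.log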

The following table shows the HammerDB benchmarking tool settings that we used during the Use Case 1 query performance load test runs.

Table 13. HammerDB settings: Query performance load test runs

Driver script options:
Parameter | Setting
TimesTen DB Compatible | unchecked
Total Query Sets per User | 1
Exit on Oracle error | unchecked
Verbose output | unchecked
Degree of Parallelism | 56*
Refresh Function | unchecked
Number of Update Sets | 1
Trickle Refresh Delay (ms) | 1000
Refresh Verbose | unchecked
Cloud Analytic Queries | unchecked

Virtual user options:
Parameter | Setting
Virtual Users | 1
User Delay (ms) | 500
Repeat Delay (ms) | 500
Iterations | 1
Show Output | checked
Log Output to Temp | checked*
Use Unique Log Name | checked*
No Log Buffer | unchecked
Log Timestamps | unchecked

* These settings were manually changed. The other settings listed are the default values.
