
A comparison of storage performance

EMC VNX 5100 vs. IBM DS5300

2011


1. Configuration of the tested storage systems:

- EMC VNX 5100: 4GB RAM per controller, 15K RPM 3.5" 600GB HDDs
- IBM DS5300: 8GB RAM per controller, 15K RPM 3.5" 300GB HDDs

I hasten to note that the EMC VNX 5100 is the lowest model of EMC's midrange, while the IBM DS5300 is the top model of IBM's midrange (though it is aging and is being replaced in the product line by the IBM Storwize V7000). Among the advantages of the IBM DS5300 in the tested configuration is much more memory: 8GB vs. 4GB in the EMC VNX 5100. Moreover, on the IBM DS5300 almost all of this memory is used for the cache, while on the EMC VNX 5100 more than ¾ of the memory is consumed by the FLARE operating environment, leaving only the small remainder available for the cache.

Another important difference concerns a single logical drive as seen from a host running AIX: the IBM DS5300 supports a maximum queue_depth of 256, while the EMC VNX 5100 supports a maximum queue_depth of 64.
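To put these limits in perspective, here is a minimal back-of-the-envelope sketch in Python (my own illustration, with an assumed service time - not a measurement from this document): by Little's Law, the sustained IOPS of a single LUN is bounded by the number of outstanding I/Os divided by the average service time.

def max_iops(queue_depth, service_time_ms):
    # Little's Law: concurrency = throughput * latency
    return queue_depth / (service_time_ms / 1000.0)

# Assume a 5 ms average service time, typical for 15K RPM drives.
for queue_depth, system in [(64, "EMC VNX 5100"), (256, "IBM DS5300")]:
    print(f"{system}: queue_depth={queue_depth} -> at most "
          f"{max_iops(queue_depth, 5.0):,.0f} IOPS per LUN")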

On the other hand, since an identically sized test area (~1.2TB) is used on both systems, and the drives installed in the tested EMC VNX 5100 have twice the capacity of those in the IBM DS5300, the EMC VNX 5100 results benefit from the so-called short-stroking* effect.

* short stroking - reducing HDD access time by shortening the travel of the magnetic heads, achieved by using only part of the available space on the HDD for data.
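As a rough illustration (my own arithmetic, derived from the drive capacities above), the same ~1.2TB working set covers a much smaller fraction of the higher-capacity drives:

# Fraction of raw capacity occupied by the ~1.2TB working set;
# a smaller fraction means shorter head travel (short stroking).
working_set_tb = 1.2
for capacity_gb, system in [(600, "EMC VNX 5100"), (300, "IBM DS5300")]:
    raw_tb = 8 * capacity_gb / 1000.0   # 8 drives per raid group
    print(f"{system}: {working_set_tb / raw_tb:.0%} of raw capacity in use")

# Prints 25% for the VNX 5100 and 50% for the DS5300.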

Performance will be compared on the following types of raid groups: R10, R5 and R6. The load profile is 100% random with a ratio of R/W = 80/20 (a typical database load) plus sequential read/write (a typical backup/restore load).
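For clarity, here is a minimal Python sketch of that random load profile (purely illustrative - the actual load was generated with ndisk64, described in the appendix):

import random

TEST_AREA = int(1.2e12)      # ~1.2TB working set
BLOCK_SIZE = 8 * 1024        # one of the tested block sizes (8KB)
READ_RATIO = 0.8             # R/W = 80/20

def next_io():
    # 100% random: pick any aligned block in the test area
    offset = random.randrange(TEST_AREA // BLOCK_SIZE) * BLOCK_SIZE
    op = "read" if random.random() < READ_RATIO else "write"
    return op, offset

print([next_io() for _ in range(3)])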

2. For the comparison on raid group R10, the following configuration was used:

- EMC VNX 5100: one raid group R10 (4+4)
- IBM DS5300: one raid group R10 (4+4)

[diagr.2.1a: IOPS vs. block size (4K/8K/32K/64K/256K) - R10, 64 I/O threads, 100% random, R/W=80/20. Series: EMC VNX 5100 (8x15K 3.5" HDD) with 128MB read / 673MB write cache; EMC VNX 5100 (8x15K 3.5" HDD) with 476MB read / 325MB write cache (default); IBM DS5300 (8x15K 3.5" HDD).]

[diagr.2.1b: latency, ms vs. block size for the same configurations and load.]

2.1. The results of testing with a load of 64 I/O processes for block sizes of 4KB, 8KB, 32KB, 64KB and 256KB, at 100% random with R/W = 80/20, are shown in diagr.2.1a. The corresponding response times of the disk subsystem during testing are shown in diagr.2.1b.

Two cache configurations were tested for the EMC VNX 5100:

- default - 476MB read cache and 325MB write cache (per SP)

- optimized - 128MB read cache and 673MB write cache (per SP)

The diagrams above show that the optimized configuration delivers significantly higher performance than the default settings. Therefore, all subsequent EMC VNX 5100 tests use this cache configuration.
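A quick check of the figures quoted above (my own arithmetic) shows that the optimized setting does not add cache - it redistributes the same per-SP pool toward writes:

default_mb = {"read": 476, "write": 325}
optimized_mb = {"read": 128, "write": 673}

# Both configurations draw on the same 801MB pool per SP
assert sum(default_mb.values()) == sum(optimized_mb.values()) == 801
print("348MB moved from read cache to write cache -",
      "a sensible trade for an 80/20 R/W load that pays a RAID write penalty.")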

The IBM DS5300 does not allow such manipulation of its cache settings, so a similar optimization could not be carried out.

[diagr.2.2A: IOPS vs. block size (4K/8K/32K/64K/256K) - R10, 512 I/O threads, 100% random, R/W=80/20. Series: EMC VNX 5100 8x15K 3.5" HDD; IBM DS5300 8x15K 3.5" HDD.]

[diagr.2.2B: latency, ms vs. block size for the same configurations and load.]

2.2. The results of testing at a load of 512 I/O processes for block sizes of 4KB, 8KB, 32KB, 64KB and 256KB, at 100% random with R/W = 80/20, are shown in diagr.2.2A. The corresponding response times of the disk subsystem during testing are shown in diagr.2.2B.

As the charts show, increasing the number of I/O processes from 64 to 512 does not raise the throughput of either storage system - the effect of the very small number of HDDs used in the testing. Therefore, further tests are limited to a maximum of 64 I/O processes.
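A simple way to see this (with per-drive figures I am assuming, not measurements from the document) is to divide the outstanding I/Os across the spindles:

SPINDLES = 8
HDD_RANDOM_IOPS = 180   # rough figure for one 15K RPM drive (assumption)

for outstanding in (64, 512):
    print(f"{outstanding} I/O processes -> {outstanding // SPINDLES} "
          f"queued requests per spindle")

# Even 8 queued requests per drive keeps every actuator constantly busy,
# so raising the queue to 64 per drive only adds wait time, not IOPS:
# the ceiling stays near SPINDLES * HDD_RANDOM_IOPS = 1440 back-end IOPS.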

[diagr.2.3A: MBps vs. number of I/O processes (1/2/4/8) - R10 sequential read. Series: EMC VNX 5100 8x15K 3.5" HDD; IBM DS5300 8x15K 3.5" HDD.]

[diagr.2.3B: MBps vs. number of I/O processes (1/2/4/8) - R10 sequential write, same systems.]

2.3. The results of sequential-read testing under a load of 1/2/4/8 I/O processes are presented in diagr.2.3A. The results of sequential-write testing under the same loads are shown in diagr.2.3B.

In my opinion, chart 2.3B quite clearly shows the difference in the bandwidth of the back-end interfaces used in the two storage systems (both use two such back-end interfaces, and the comparison is offset by the write penalty of 2 for R10):
- 4Gbps FC on the IBM DS5300
- 6Gbps SAS on the EMC VNX 5100
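The implied ceilings can be sketched with simple arithmetic (assumptions mine: two back-end links per system, decimal units, encoding and protocol overhead ignored):

MBPS_PER_GBPS = 125.0   # 1 Gbit/s = 125 MB/s, ignoring 8b/10b overhead

for system, link_gbps in [("IBM DS5300, 4Gbps FC", 4),
                          ("EMC VNX 5100, 6Gbps SAS", 6)]:
    raw = 2 * link_gbps * MBPS_PER_GBPS   # two back-end interfaces
    # R10 writes everything twice, so hosts see half the back-end rate
    print(f"{system}: ~{raw / 2:.0f} MBps sequential-write ceiling")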


3. For the comparison on raid group R5, the following configuration was used:

- EMC VNX 5100: one raid group R5 (7+1)
- IBM DS5300: one raid group R5 (7+1)

3.1. The results of testing with a load of 64 I/O processes for block sizes of 4KB, 8KB, 32KB, 64KB and 256KB, at 100% random with R/W = 80/20, are shown in diagr.3.1A. The corresponding response times of the disk subsystem during testing are shown in diagr.3.1B.

[diagr.3.1A: IOPS vs. block size (4K/8K/32K/64K/256K) - R5, 64 I/O threads, 100% random, R/W=80/20. Series: EMC VNX 5100 8x15K 3.5" HDD; IBM DS5300 8x15K 3.5" HDD.]

[diagr.3.1B: latency, ms vs. block size for the same configurations and load.]


3.2. The results of sequential-read testing under a load of 1/2/4/8 I/O processes are presented in diagr.3.2A. The results of sequential-write testing under the same loads are shown in diagr.3.2B. As expected, both storage systems demonstrate better sequential read/write results with R5 than with R10.
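One way to see why (a sketch of standard RAID arithmetic, not a claim from the document): with the same 8 drives, R5 spreads data across more spindles per stripe than R10.

drives = 8
for group, data_disks in [("R10 (4+4)", 4), ("R5 (7+1)", 7)]:
    print(f"{group}: {data_disks} of {drives} drives hold unique data,"
          f" so large sequential transfers stream from {data_disks} spindles")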

[diagr.3.2A: MBps vs. number of I/O processes (1/2/4/8) - R5 sequential read. Series: EMC VNX 5100 8x15K 3.5" HDD; IBM DS5300 8x15K 3.5" HDD.]

[diagr.3.2B: MBps vs. number of I/O processes (1/2/4/8) - R5 sequential write, same systems.]


4. For the comparison on raid group R6, the following configuration was used:

- EMC VNX 5100: one raid group R6 (6+2)
- IBM DS5300: one raid group R6 (6+2)

4.1. The results of testing with a load of 64 I/O processes for block sizes of 4KB, 8KB, 32KB, 64KB and 256KB, at 100% random with R/W = 80/20, are shown in diagr.4.1A. The corresponding response times of the disk subsystem during testing are shown in diagr.4.1B.
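Before looking at the R6 charts, the expected ordering of the three RAID levels on this 80/20 random load can be sketched with the standard write-penalty arithmetic (per-drive IOPS assumed by me, caching effects ignored):

SPINDLES, HDD_IOPS = 8, 180          # assumed 15K RPM drive capability
READ, WRITE = 0.8, 0.2               # the tested R/W = 80/20 mix

for level, penalty in [("R10", 2), ("R5", 4), ("R6", 6)]:
    # host_iops * (READ + WRITE * penalty) <= SPINDLES * HDD_IOPS
    host_iops = SPINDLES * HDD_IOPS / (READ + WRITE * penalty)
    print(f"{level}: ~{host_iops:.0f} host IOPS before caching effects")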


[diagr.4.1A: IOPS vs. block size (4K/8K/32K/64K/256K) - R6, 64 I/O threads, 100% random, R/W=80/20. Series: EMC VNX 5100 8x15K 3.5" HDD; IBM DS5300 8x15K 3.5" HDD.]

[diagr.4.1B: latency, ms vs. block size for the same configurations and load.]


4.2. The results of sequential-read testing under a load of 1/2/4/8 I/O processes are presented in diagr.4.2A. The results of sequential-write testing under the same loads are presented in diagr.4.2B.


[diagr.4.2A: MBps vs. number of I/O processes (1/2/4/8) - R6 sequential read. Series: EMC VNX 5100 8x15K 3.5" HDD; IBM DS5300 8x15K 3.5" HDD.]

[diagr.4.2B: MBps vs. number of I/O processes (1/2/4/8) - R6 sequential write, same systems.]


5. At this point the performance testing of the two storage systems would have come to its logical conclusion. But... an EMC distributor gave us a couple of SSDs for the EMC VNX 5100, with a capacity of ~100GB each, and the array already held a license for EMC FAST Cache. Hence the next chapter of this review. I had already used EMC EFDs before, but those were STEC SSDs of the ZeusIOPS line, based on SLC memory and known not only for their high performance but also for their equally high cost; they were used both in the EMC Symmetrix line and in the CLARiiON line. The EMC VNX uses enterprise SSDs from another manufacturer (namely Samsung), also based on SLC memory but offered at a more affordable price than the STEC ZeusIOPS. These SSDs use a SATA interface (3Gbps) and come in a 3.5" form factor.

First we will examine the raw speed of the SSDs, and then look at how much FAST Cache improves the performance of the HDD raid groups.


[diagr.5.1A: IOPS vs. block size - R1 (2xSSD), 16 I/O threads, 100% random. Series: EMC VNX 5100 2x 3.5" SSD at R/W=80/20, R/W=0/100 and R/W=100/0.]

[diagr.5.1B: latency, ms vs. block size (4K/8K/32K/64K/256K) for the same configurations and load.]

5.1. To evaluate the performance of the SSDs, a raid group R1 was created from the 2xSSD. Test results with a load of 16 I/O processes for block sizes of 4KB, 8KB, 32KB, 64KB and 256KB, at 100% random with R/W = 80/20, R/W = 0/100 and R/W = 100/0, are shown in diagr.5.1A. The corresponding response times of the disk subsystem during testing are shown in diagr.5.1B.


5.2. The results of sequential-read testing under a load of 1/2/4/8 I/O processes are presented in diagr.5.2A. The results of sequential-write testing under the same loads are shown in diagr.5.2B.

[diagr.5.2A: MBps vs. number of I/O processes (1/2/4/8) - R1 (1+1) SSD sequential read, EMC VNX 5100 2x 3.5" SSD.]

[diagr.5.2B: MBps vs. number of I/O processes (1/2/4/8) - R1 (1+1) SSD sequential write, same configuration.]


6. We now turn to testing FAST Cache - comparing the results obtained earlier on the EMC VNX 5100 with 8xHDD and no FAST Cache against the results obtained with FAST Cache activated. The effect of FAST Cache does not appear immediately: it takes time for the "hot" data to be cached, and this interval is usually referred to as cache warm-up. This is a property of large caches in general, not just of EMC FAST Cache. Note that activating FAST Cache takes part of the (already small) RAM read/write cache. The particulars of activating and deactivating FAST Cache are shown in the screenshots below.
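The warm-up behaviour is easy to picture with a toy model (entirely illustrative - this is not EMC's actual promotion algorithm, and the promotion threshold is my assumption): blocks reach the SSD cache only after repeated hits, so the hit ratio climbs gradually.

import random

random.seed(1)
HOT = range(2000)            # the "hot" 20% of a 10000-block working set
PROMOTE_AFTER = 3            # promote a block after 3 touches (assumption)
touches, cache = {}, set()
hits = 0

for step in range(1, 40001):
    # 80% of accesses target the hot set, 20% go elsewhere
    block = random.choice(HOT) if random.random() < 0.8 \
            else random.randrange(2000, 10000)
    if block in cache:
        hits += 1
    else:
        touches[block] = touches.get(block, 0) + 1
        if touches[block] >= PROMOTE_AFTER:
            cache.add(block)          # "promoted" into the SSD tier
    if step % 10000 == 0:
        print(f"after {step} I/Os: cumulative hit ratio {hits / step:.0%}")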


6.1. R10 test results with a load of 64 I/O processes for block sizes of 4KB, 8KB, 32KB, 64KB and 256KB, at 100% random with R/W = 80/20, are shown in diagr.6.1A. The corresponding response times of the disk subsystem during testing are shown in diagr.6.1B.

[diagr.6.1A: IOPS vs. block size (4K/8K/32K/64K/256K) - R10, 64 I/O threads, 100% random, R/W=80/20. Series: EMC VNX 5100 8x15K 3.5" HDD; the same with FAST Cache (2xSSD R1) after 30min cache warm-up.]

[diagr.6.1B: latency, ms vs. block size for the same configurations and load.]


6.2. R10 sequential-read results under a load of 1/2/4/8 I/O processes are presented in diagr.6.2A. R10 sequential-write results under the same loads are shown in diagr.6.2B.

[diagr.6.2A: MBps vs. number of I/O processes (1/2/4/8) - R10 sequential read. Series: EMC VNX 5100 8x15K 3.5" HDD; the same with FAST Cache (2xSSD R1) after 30min cache warm-up.]

[diagr.6.2B: MBps vs. number of I/O processes (1/2/4/8) - R10 sequential write, same configurations.]


6.3. R5 test results with a load of 64 I/O processes for block sizes of 4KB, 8KB, 32KB, 64KB and 256KB, at 100% random with R/W = 80/20, are shown in diagr.6.3A. The corresponding response times of the disk subsystem during testing are shown in diagr.6.3B.

[diagr.6.3A: IOPS vs. block size (4K/8K/32K/64K/256K) - R5, 64 I/O threads, 100% random, R/W=80/20. Series: EMC VNX 5100 8x15K 3.5" HDD; the same with FAST Cache (2xSSD R1) after 30min cache warm-up.]

[diagr.6.3B: latency, ms vs. block size for the same configurations and load.]


6.4. R5 sequential-read results under a load of 1/2/4/8 I/O processes are presented in diagr.6.4A. R5 sequential-write results under the same loads are shown in diagr.6.4B.

[diagr.6.4A: MBps vs. number of I/O processes (1/2/4/8) - R5 sequential read. Series: EMC VNX 5100 8x15K 3.5" HDD; the same with FAST Cache (2xSSD R1) after 30min cache warm-up.]

[diagr.6.4B: MBps vs. number of I/O processes (1/2/4/8) - R5 sequential write, same configurations.]


6.5. R6 test results with a load of 64 I/O processes for block sizes of 4KB, 8KB, 32KB, 64KB and 256KB, at 100% random with R/W = 80/20, are shown in diagr.6.5A. The corresponding response times of the disk subsystem during testing are shown in diagr.6.5B.

[diagr.6.5A: IOPS vs. block size (4K/8K/32K/64K/256K) - R6, 64 I/O threads, 100% random, R/W=80/20. Series: EMC VNX 5100 8x15K 3.5" HDD; the same with FAST Cache (2xSSD R1) after 30min cache warm-up.]

[diagr.6.5B: latency, ms vs. block size for the same configurations and load.]


6.6. R6 sequential-read results under a load of 1/2/4/8 I/O processes are presented in diagr.6.6A. R6 sequential-write results under the same loads are shown in diagr.6.6B.


[diagr.6.6A: MBps vs. number of I/O processes (1/2/4/8) - R6 sequential read. Series: EMC VNX 5100 8x15K 3.5" HDD; the same with FAST Cache (2xSSD R1) after 30min cache warm-up.]

[diagr.6.6B: MBps vs. number of I/O processes (1/2/4/8) - R6 sequential write, same configurations.]


Appendix

- All tested storage systems were connected to the same LPAR running AIX 6.1 TL6 SP4

- The host connection used 8Gbps FC

- A JFS2 file system with an inline log was used

- A working set of files with a total size of ~1.2TB was created on the file system

- The test load was generated with the nstress package, specifically the ndisk64 utility

- Response-time data for the storage was collected and processed with the nmon and nmon Analyser utilities, respectively (a small post-processing sketch follows below)
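As an illustration of that post-processing step, here is a small Python sketch that sums per-disk DISKREAD samples from an .nmon capture, assuming nmon's usual CSV layout (tag, Txxxx snapshot id, then one KB/s value per hdisk) - the original work used NMON Analyser for this, and the file name below is hypothetical.

import csv
from collections import defaultdict

def disk_read_totals(path):
    """Total DISKREAD KB/s across all disks, per snapshot tag."""
    totals = defaultdict(float)
    with open(path, newline="") as fh:
        for row in csv.reader(fh):
            # data rows look like: DISKREAD,T0001,<KB/s for each hdisk...>
            if len(row) > 2 and row[0] == "DISKREAD" and row[1].startswith("T"):
                totals[row[1]] = sum(float(v) for v in row[2:] if v)
    return dict(totals)

# print(disk_read_totals("vnx5100_r10_64proc.nmon"))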

Oleg Korol

[email protected]