Ceph on Intel: Intel Storage Components, Benchmarks, and Contributions

Apr 15, 2017

Transcript
Page 1: Ceph on Intel: Intel Storage Components, Benchmarks, and Contributions

Ceph on Intel: Intel Storage Components, Benchmarks, and Contributions

Daniel Ferber
Open Source Software Defined Storage Technologist, Intel Storage Group

Page 2: Ceph on Intel: Intel Storage Components, Benchmarks, and Contributions

Jim Thomson, Intel Enterprise Technologist

[email protected]

Page 3: Ceph on Intel: Intel Storage Components, Benchmarks, and Contributions

The World is Changing

Source: IDC – The Digital Universe of Opportunities: Rich Data and the Increasing Value of the Internet of Things - April 2014

From now until 2020, the size of the digital universe will roughly double every two years.

Information Growth: 2X*. What we do with data is changing; traditional storage infrastructure does not solve tomorrow's problems.

Complexity

Shifting of IT services to cloud computing and next-generation platforms

Cloud

Emergence of flash storage and software-defined environments

New Technologies


Page 4: Ceph on Intel: Intel Storage Components, Benchmarks, and Contributions

Information Explosion

Every minute, every day*:

2013: 48 hours of video uploaded to YouTube; 47,000 apps downloaded; 200 million e-mails
2015: 300 hours of video uploaded to YouTube; 51,000 apps downloaded; 204 million e-mails

Source: TechSpartan.co.uk - 2013 vs 2015 in an Internet minute

Page 5: Ceph on Intel: Intel Storage Components, Benchmarks, and Contributions

The Impact of the Cloud

Empowerment of the end-user through cloud services

Emergence of new technologies and architectures

Shifting the role of information technology professionals

Page 6: Ceph on Intel: Intel Storage Components, Benchmarks, and Contributions

Flash: Price + Performance = Affordable
Storage: Reliability and Durability

Page 7: Ceph on Intel: Intel Storage Components, Benchmarks, and Contributions

Storage: Compression, Encryption, Erasure coding, Deduplication, Flash (non-volatile memory), Data tiering

Workload Optimization

Page 8: Ceph on Intel: Intel Storage Components, Benchmarks, and Contributions

Storage Architectures: SCALE-UP and SCALE-OUT

Page 9: Ceph on Intel: Intel Storage Components, Benchmarks, and Contributions

Market Dynamics

Page 10: Ceph on Intel: Intel Storage Components, Benchmarks, and Contributions

Key Drivers Transforming Storage:

1. Cloud / Cloud Service Providers
2. Next Generation Architectures
3. Storage Media Transitions (3D XPoint™ Technology)

Page 11: Ceph on Intel: Intel Storage Components, Benchmarks, and Contributions

Where Data is Created

[Chart: Data Creation by Type (ZB), 2010-2020 - Unstructured Data vs. Structured Data, growing from 4.4 ZB to 44 ZB]
[Chart: % of Total Digital Universe, 2012-2020 - Emerging Markets vs. Mature Markets]

Sources: IDC, 2011 Worldwide Enterprise Storage Systems 2011-2015 Forecast Update; IDC, The Digital Universe Study, 2014

Mature markets: USA, Canada, Western Europe, Australia, NZ, and Japan
Emerging markets: China, India, Mexico, Brazil, and Russia

By 2020, about 90% of all data will be unstructured, driven by consumer images, voice, and the web.

Emerging markets will surpass mature markets in data creation before 2017.
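The slide's two headline figures (4.4 ZB in 2013, 44 ZB by 2020) can be checked against the earlier "doubles every two years" claim with a few lines of arithmetic. The inputs are the slide's IDC numbers; the calculation is ours:

```python
import math

zb_2013, zb_2020 = 4.4, 44.0   # IDC digital-universe estimates (ZB)
years = 2020 - 2013

factor = zb_2020 / zb_2013                   # 10x growth
doubling_period = years / math.log2(factor)  # exponential-growth doubling time

print(f"{factor:.0f}x growth over {years} years "
      f"implies doubling every {doubling_period:.2f} years")
# -> 10x growth over 7 years implies doubling every 2.11 years
```

A ~2.1-year doubling period is consistent with IDC's "about every two years."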

Page 12: Ceph on Intel: Intel Storage Components, Benchmarks, and Contributions

Annual Growth Rate for Data Storage Capacity

[Bar chart: % of survey respondents by expected annual storage capacity growth, in buckets from "1% to 10% annually" through "More than 100% annually", plus "Don't know"]

Source: ESG Research Report: 2015 Data Storage Market Trends

Page 13: Ceph on Intel: Intel Storage Components, Benchmarks, and Contributions

SSD Use in Servers or External Storage Systems

Source: ESG Research Report: 2015 Data Storage Market Trends

Page 14: Ceph on Intel: Intel Storage Components, Benchmarks, and Contributions


Intel Storage

Page 15: Ceph on Intel: Intel Storage Components, Benchmarks, and Contributions

Intel® Xeon® Processor D

Delivering the performance and advanced intelligence of Intel® Xeon® processors to dense and low power storage designs

Page 16: Ceph on Intel: Intel Storage Components, Benchmarks, and Contributions

Latency: ~100x | Size of data: ~1,000x

1,000x faster than NAND
1,000x higher endurance than NAND
10x denser than DRAM

Technology claims are based on comparisons of latency, density, and write cycling metrics amongst memory technologies recorded on published specifications of in-market memory products against internal Intel specifications.

3D XPoint™ Technology

New Class of Non-Volatile Memory

Page 17: Ceph on Intel: Intel Storage Components, Benchmarks, and Contributions


Intel® Solid State Drives

“The only SSDs that never ever gave me any issues like timeouts, task aborts… are Intel DC S3700s”

From a post on ceph-devel*

Source: http://ceph.com/resources/mailing-list-irc

Page 18: Ceph on Intel: Intel Storage Components, Benchmarks, and Contributions


Intel's Role in Storage: Advance the Industry

Open Source & Standards - Build an Open Ecosystem: Intel® Storage Builders, 70+ partners

End User Solutions - Cloud, Enterprise: helping customers to enable next-gen storage
- >7 cloud storage solution architectures
- >10 enterprise storage solution architectures
- >26 next-generation solution architectures

Intel Technology Leadership:
- Storage-optimized CPUs: Intel® Xeon® E5-2600 v4 platform, Intel® Xeon® Processor D-1500 platform
- Storage-optimized software: Intel® Intelligent Storage Acceleration Library (Intel® ISA-L), Intel® Storage Performance Development Kit (SPDK)
- Non-volatile memory: 3D XPoint™, Intel® Solid State Drives for Datacenter

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products.

Page 19: Ceph on Intel: Intel Storage Components, Benchmarks, and Contributions
Page 20: Ceph on Intel: Intel Storage Components, Benchmarks, and Contributions


Intel Ceph Contribution Timeline

2014 - 2015 - 2016 (releases: Giant*, Hammer, Infernalis, Jewel)

* Right edge of box indicates approximate release date

- New key/value store backend (RocksDB)
- CRUSH placement algorithm improvements (straw2 bucket type)
- RADOS I/O hinting (35% better EC write performance)
- Erasure coding support with ISA-L
- Cache tiering with SSDs (read support; later, write support)
- Client-side block cache (librbd)
- Virtual Storage Manager (VSM) open sourced
- CeTune open sourced
- BlueStore backend optimizations for NVM
- BlueStore SPDK optimizations
- PMStore (NVM-optimized backend based on libpmem)
- RGW and BlueStore compression and encryption (with ISA-L, QAT backend)
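The timeline mentions erasure coding support with ISA-L. The core idea - split an object into k data chunks plus coded chunks so the cluster survives chunk loss - can be sketched in its simplest form, a single XOR parity shard tolerating one loss. This toy sketch is ours, not Ceph's or ISA-L's code (ISA-L implements the general Reed-Solomon case in optimized assembly):

```python
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Byte-wise XOR of two equal-length buffers."""
    return bytes(x ^ y for x, y in zip(a, b))

def encode(data: bytes, k: int) -> list:
    """Split data into k equal chunks and append one XOR parity chunk."""
    assert len(data) % k == 0, "pad data to a multiple of k first"
    size = len(data) // k
    chunks = [data[i * size:(i + 1) * size] for i in range(k)]
    return chunks + [reduce(xor_bytes, chunks)]  # k data + 1 parity shard

def recover(shards: list, lost: int) -> bytes:
    """Rebuild the shard at index `lost` by XOR-ing all surviving shards."""
    return reduce(xor_bytes, (s for i, s in enumerate(shards) if i != lost))

shards = encode(b"ceph-object-data", k=4)  # 4 data shards + 1 parity shard
original = shards[2]
shards[2] = b"\x00" * len(original)        # simulate a failed OSD's shard
assert recover(shards, lost=2) == original
```

With k data shards and one parity shard, storage overhead is 1/k instead of the (replica - 1)x cost of replication, which is why the slide highlights the 35% EC write-performance gain from I/O hinting.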

Page 21: Ceph on Intel: Intel Storage Components, Benchmarks, and Contributions

Ceph @ Intel – 2016 Ceph Focus Areas


Optimize for Intel® platforms, flash and networking
• Compression and encryption hardware offloads (QAT & SoCs)
• PMStore (for 3D XPoint DIMMs)
• RBD caching and cache tiering with NVM
• IA-optimized storage libraries to reduce latency (ISA-L, SPDK)

Performance profiling, analysis and community contributions
• All-flash workload profiling and latency analysis
• Streaming, database and analytics workload-driven optimizations

Ceph enterprise usages and hardening
• Manageability (Virtual Storage Manager)
• Multi-data-center clustering (e.g., async mirroring)

End customer POCs with focus on broad industry influence
• CDN, Cloud DVR, video surveillance, Ceph cloud services, analytics POCs

Go to Market
• Ready-to-use IA, Intel NVM-optimized systems & solutions from OEMs & ISVs
• Intel system configurations, white papers, case studies
• Industry events coverage

Intel® Storage Acceleration Library (Intel® ISA-L)

Intel® Storage Performance Development Kit (SPDK)

Intel® Cache Acceleration Software (Intel® CAS)

Virtual Storage Manager

CeTune Ceph Profiler

Page 22: Ceph on Intel: Intel Storage Components, Benchmarks, and Contributions


Ceph Storage Cluster

Hardware Environment Overview

Ceph network (192.168.142.0/24) - 10Gbps

CBT / Zabbix / Monitoring FIO RBD Client

• OSD system config: 2x Intel Xeon E5-2699 v3 @ 2.30 GHz, 72 cores w/ HT, 96 GB, cache 46080 KB, 128 GB DDR4
• Each system with 4x P3700 800 GB NVMe, each partitioned into 4 OSDs; 16 OSDs total per node
• FIO client systems: 2x Intel Xeon E5-2699 v3 @ 2.30 GHz, 72 cores w/ HT, 96 GB, cache 46080 KB, 128 GB DDR4
• Ceph v0.94.3 Hammer release, CentOS 7.1, 3.10-229 kernel, linked with JEMalloc 3.6
• CBT used for testing and data acquisition
• Single 10GbE network for client & replication data transfer; replication factor 2
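The cluster totals implied by the bullets above are worth making explicit; the inputs are the slide's figures, the derived totals (80 OSDs, 16 TB raw) are our arithmetic, and "usable" ignores Ceph metadata overhead and full-ratio reserves:

```python
nodes = 5             # SuperMicro 1028U OSD nodes
nvme_per_node = 4     # Intel P3700 drives per node
osds_per_nvme = 4     # each NVMe partitioned into 4 OSDs
nvme_gb = 800         # capacity per P3700
replication = 2       # replication factor from the test setup

osds_total = nodes * nvme_per_node * osds_per_nvme     # 80 OSDs cluster-wide
raw_tb = nodes * nvme_per_node * nvme_gb / 1000        # 16.0 TB raw flash
usable_tb = raw_tb / replication                       # 8.0 TB before overhead

print(f"{osds_total} OSDs, {raw_tb:.0f} TB raw, ~{usable_tb:.0f} TB usable")
# -> 80 OSDs, 16 TB raw, ~8 TB usable
```

Note the 4.8 TB dataset used in the benchmarks below therefore fills more than half of the replicated capacity, which is why it is the harder test case.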

[Diagram: 6 FIO RBD client nodes (two FatTwin chassis, each 4x dual-socket Xeon E5 v3) and a CBT/Zabbix monitoring node, driving 5 SuperMicro 1028U OSD nodes over the Ceph network; each OSD node runs Ceph OSD 1-16 across 4 NVMe drives (NVMe1-NVMe4)]

Intel Xeon E5 v3 18-core CPUs; Intel P3700 NVMe PCIe flash; easily serviceable NVMe drives

Page 23: Ceph on Intel: Intel Storage Components, Benchmarks, and Contributions


4K Random Read & Write Performance Summary: First Ceph cluster to break 1 million 4K random IOPS

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Any difference in system hardware or software design or configuration may affect actual performance. See configuration slides in backup for details on software configuration and test benchmark parameters.

Workload Pattern                                 | Max IOPS
-------------------------------------------------|-------------
4K 100% random reads (2 TB dataset)              | 1.35 million
4K 100% random reads (4.8 TB dataset)            | 1.15 million
4K 100% random writes (4.8 TB dataset)           | 200K
4K 70%/30% read/write OLTP mix (4.8 TB dataset)  | 452K

Source: OpenStack Summit 2015: Accelerating Cassandra workloads on Ceph with all-flash PCIe SSDs
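One way to read the table is per OSD: dividing the cluster-wide maxima by the 80 OSDs in the test bed gives a rough client-side figure per OSD (our arithmetic, not the slide's; back-end write load is further doubled by the 2x replication factor):

```python
max_iops = {                        # cluster-level results from the table
    "4K 100% random read, 2TB":    1_350_000,
    "4K 100% random read, 4.8TB":  1_150_000,
    "4K 100% random write, 4.8TB":   200_000,
    "4K 70/30 OLTP mix, 4.8TB":      452_000,
}
osds = 5 * 16                       # 5 OSD nodes x 16 OSDs per node

for workload, iops in max_iops.items():
    print(f"{workload}: {iops / osds:,.0f} client IOPS per OSD")
# peak case -> 4K 100% random read, 2TB: 16,875 client IOPS per OSD
```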

Page 24: Ceph on Intel: Intel Storage Components, Benchmarks, and Contributions

Red Hat Ceph Reference Architecture Documents


Page 25: Ceph on Intel: Intel Storage Components, Benchmarks, and Contributions

Meta Formula for Ceph Deployments
• Have a general understanding of the use cases you want to support with Ceph
• Understand the kind of performance or cost/performance you want to deliver
• Refer to a reference architecture resource to match your use case(s) with known and measured reference architectures:
  • http://www.redhat.com/en/resources/performance-and-sizing-guide-red-hat-ceph-storage-qct-servers
  • https://www.redhat.com/en/files/resources/en-rhst-cephstorage-supermicro-INC0270868_v2_0715.pdf
• These documents have Ceph config, tuning, and best-practices guidance
• Additional help is available from Red Hat, including support and quick start


Page 26: Ceph on Intel: Intel Storage Components, Benchmarks, and Contributions

Intel: Experience the Storage Revolution

Invest in accelerating next-generation storage solutions
Embrace flash technology for high performance
Align to accelerate software-defined storage

Page 27: Ceph on Intel: Intel Storage Components, Benchmarks, and Contributions

Legal Notices

Copyright © 2016 Intel Corporation.

All rights reserved. Intel, the Intel logo, Xeon, Intel Inside, and 3D XPoint are trademarks of Intel Corporation in the U.S. and/or other countries.

*Other names and brands may be claimed as the property of others.

FTC Optimization Notice

Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice. Notice revision #20110804

The cost reduction scenarios described in this document are intended to enable you to get a better understanding of how the purchase of a given Intel product, combined with a number of situation-specific variables, might affect your future cost and savings. Nothing in this document should be interpreted as either a promise of or contract for a given level of costs.

Page 28: Ceph on Intel: Intel Storage Components, Benchmarks, and Contributions
Page 29: Ceph on Intel: Intel Storage Components, Benchmarks, and Contributions

Backup

Page 30: Ceph on Intel: Intel Storage Components, Benchmarks, and Contributions
