
Evolving HDFS to Generalized Storage Subsystem

Jan 07, 2017


Hadoop Summit
Transcript
Page 1: Evolving HDFS to Generalized Storage Subsystem

Page 1 © Hortonworks Inc. 2015

Evolving HDFS to Generalized Storage Subsystem

Hortonworks. We do Hadoop.

Page 2: Evolving HDFS to Generalized Storage Subsystem


Hello, my name is Sanjay Radia
• Chief Architect, Founder, Hortonworks
• Part of the original Hadoop team at Yahoo! since 2007
  – Chief Architect of Hadoop Core at Yahoo!
  – Apache Hadoop PMC and Committer
• Prior
  – Data center automation, virtualization, Java, HA, OSs, file systems
  – Startup, Sun Microsystems, Inria …
  – Ph.D., University of Waterloo


Page 3: Evolving HDFS to Generalized Storage Subsystem


Overview

HDFS – past and future evolution, and the motivations behind it

Scaling HDFS
• Where we do well (number of clients, cluster size, raw storage)
• Where we have challenges (small files and blocks)
• Solution
  – Partial namespace (briefly)
  – Block containers – but we are generalizing the storage layer to support this

Storage Containers to Generalize the Storage Layer

Page 4: Evolving HDFS to Generalized Storage Subsystem


Background: HDFS Layering

[Diagram: HDFS layering. A Namespace layer of NameNodes (NN-1 … NN-k … NN-n), each serving its own namespace (NS1 … NS k … Foreign NS n), sits above a Block Storage layer: the Block Management Layer (Block Pool 1 … Block Pool k … Block Pool n) and Common Storage spread across DataNodes (DN 1, DN 2 … DN m).]

Page 5: Evolving HDFS to Generalized Storage Subsystem


HDFS Dimensions
• Large number of compute clients: 100K cores
• PBs of data (Big Data)
• Horizontal scaling
• Reliability: disk/DN fault tolerance, HA, DR, snapshots, …
• Security in virtualized compute environments; transparent encryption
• Multi-tenancy: resource management/isolation, audit; bad apps
• Large number of files and blocks
• Beyond files: optimized storage
• Heterogeneous storage
• Erasure codes (in beta)
• Performance: file co-location, fat DataNodes, block reports (BRs)

Page 6: Evolving HDFS to Generalized Storage Subsystem


HDFS Recently…

Rich storage media policies & tiered storage
• Memory, SSD, archival
• Placement policies (e.g., one replica on SSD, the rest on spinning disks) – see the sketch after these bullets
• Data migration between tiers (using the mover tool)

Storage efficiency
• Archival storage (6x cost reduction)
• Erasure codes (2x cost reduction)

Security – transparent encryption
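As an illustration of the placement-policy and mover bullets above, here is a minimal Java sketch, assuming a Hadoop release where FileSystem#setStoragePolicy and the built-in ONE_SSD policy are available; the path is made up for the example.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class StoragePolicyExample {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());

    // Keep one replica of files under /data/hot on SSD, the rest on disk.
    Path dir = new Path("/data/hot");
    fs.setStoragePolicy(dir, "ONE_SSD");
    System.out.println("Policy now: " + fs.getStoragePolicy(dir));

    // Existing replicas are not moved automatically; the mover tool
    // (hdfs mover -p /data/hot) migrates them to satisfy the policy.
  }
}
```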

Page 7: Evolving HDFS to Generalized Storage Subsystem


HDFS Recently – Operability…
• Rolling upgrades
• Balancer performance
• DataNode liveness protocol/channel
• Reduced number of DN messages to the NN
• Improved block report processing
• Protected directories to avoid accidental data deletion
• Dealing with bad apps
  – NN-Top
  – Log tracing (caller ID)
  – Fair call queue – currently per-user, soon per-job, … (a configuration sketch follows this list)
  – YARN resource management
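To make the fair-call-queue item concrete: the NameNode's RPC call queue can be swapped for Hadoop's FairCallQueue through per-port ipc.* properties. A minimal sketch, assuming the NameNode RPC port is 8020; the exact property names vary slightly across Hadoop releases, so treat this as illustrative.

```java
import org.apache.hadoop.conf.Configuration;

public class FairCallQueueConfig {
  public static void main(String[] args) {
    // These keys normally live in core-site.xml on the NameNode; they are
    // set programmatically here only to show which knobs are involved.
    Configuration conf = new Configuration();

    // Replace the default FIFO RPC queue so one heavy user (a "bad app")
    // cannot starve everyone else's NameNode calls.
    conf.set("ipc.8020.callqueue.impl",
        "org.apache.hadoop.ipc.FairCallQueue");
    // The decay scheduler ranks callers by their recent request volume.
    conf.set("ipc.8020.scheduler.impl",
        "org.apache.hadoop.ipc.DecayRpcScheduler");

    System.out.println(conf.get("ipc.8020.callqueue.impl"));
    System.out.println(conf.get("ipc.8020.scheduler.impl"));
  }
}
```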

Page 8: Evolving HDFS to Generalized Storage Subsystem


Scalability: The Problems and the Solutions

Page 9: Evolving HDFS to Generalized Storage Subsystem


Scalability – What HDFS Does Well

• The HDFS NN stores all namespace metadata in memory (as in GFS)
• Scales to large clusters (5K nodes) since all metadata is in memory
  – 60K-100K tasks can share the NameNode
  – Low latency
• Handles large data volumes if files are large
• Proof points of large data and large clusters
  – Single organizations have over 600 PB in HDFS
  – Single clusters hold over 200 PB using federation
  – Large clusters of over 4K multi-core nodes bombard a single NN

Metadata in memory is the strength of the original GFS and HDFS design, but also its weakness in scaling the number of files and blocks.

Page 10: Evolving HDFS to Generalized Storage Subsystem


Scalability – The Challenges

Challenges
• Large number of files (> 350 million)
• The NN's strength (all metadata in memory) has become a limitation
• Number of file operations
• Need to improve concurrency: move to multiple name servers

HDFS Federation is the current solution
• Add NameNodes to scale the number of files and operations
• Deployed at Twitter
  – A cluster with three NameNodes and > 5,000 nodes (plans to grow to 10,000 nodes)
• Backported and used at Facebook to scale HDFS

Page 11: Evolving HDFS to Generalized Storage Subsystem


Scaling Files and Blocks

1. Scale the namespace
• Keep only a partial namespace in memory – the working set
• Of the last 3-5 years of data, only a small portion is actively used, so the working-set metadata fits in memory
  – We do not want to page the working set => still a large NN memory to scale to 100K tasks

2. Scale block management
• Keeping only part of the BlockMap in memory does not work
• Solution: containers of blocks (2 GB-16 GB+)
  – Will shrink the BlockMap
  – Reduce the number of block/container reports

But extend the DN to support a generalized storage container.

Page 12: Evolving HDFS to Generalized Storage Subsystem


Big Picture: A Brief Interlude on Partial Namespace + Volumes

Partial namespace in memory is not the focus of this talk.

Page 13: Evolving HDFS to Generalized Storage Subsystem


Partial Namespace - Briefly

• Has been prototyped
• Benchmarks show that the model works well
• Most file systems keep only a partial namespace in memory, but not at this scale
  – Hence cache-replacement policies for the working set are important
• Work in progress to get it into HDFS

• Namespace volumes – a better way to federate the namespace service
• Partial namespace in memory will allow multiple namespace volumes
• Scale both the namespace and the number of operations using multiple servers
• BTW, name servers can run on DataNodes if you prefer …

Page 14: Evolving HDFS to Generalized Storage Subsystem


Big Picture on HDFS Namespace + Volumes
• Only the working set of the namespace in memory
  › Scale beyond the memory of the NN
• NameServer – containers for namespaces
  › More namespace volumes
    – Chosen per user/tenant/DB
    – Management policies (quota, …)
    – Mount tables for a unified namespace (see the sketch after this slide)
• Can be managed by a central volume server
• Number of NameServers =
  › Sum of (namespace working sets) +
  › Sum of (namespace throughput)
• Move namespaces for balancing
• N+K failover amongst NameServers

[Diagram: NameServers act as containers of namespaces, running above a shared storage layer of DataNodes.]
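Mount tables for a unified namespace exist in HDFS federation today as ViewFs; here is a minimal sketch of that mechanism, as an illustration of what per-volume mount tables could look like (the cluster name, mount points, and NameNode URIs below are made up):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ViewFsMountTableExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Client-side mount table stitching two namespaces into one view
    // (these keys normally live in core-site.xml).
    conf.set("fs.defaultFS", "viewfs://clusterX");
    conf.set("fs.viewfs.mounttable.clusterX.link./user",
        "hdfs://nn1.example.com:8020/user");
    conf.set("fs.viewfs.mounttable.clusterX.link./data",
        "hdfs://nn2.example.com:8020/data");

    FileSystem fs = FileSystem.get(conf);
    // Paths under /user and /data transparently resolve to nn1 and nn2.
    System.out.println(fs.resolvePath(new Path("/data")));
  }
}
```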

Page 15: Evolving HDFS to Generalized Storage Subsystem


Storage Containers: Better HDFS and Beyond

Page 16: Evolving HDFS to Generalized Storage Subsystem


Big Picture
Support multiple data layout structures (a hypothetical interface is sketched after this slide)
• Indexing
• Caching
• Use cases
  – HDFS block container (scale blocks) + co-location
  – Object store container
  – Local replica + S3 replica
  – HBase

Common shared infrastructure for
• Replication
• Consistency
• Cluster membership
• Object location

Other container benefits
• A place to put protocol enhancements
• Smaller, riskier features

[Diagram: Applications (HDFS, Ozone, HBase, metadata) run over container types hosted on DataNodes (Block Container, Object Store Container, HBase Container, Table Container), coordinated by Container Management Services (Cluster Membership, Replication Management, Container Location Service), which also run on DataNodes over shared physical storage.]
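The deck does not show an API, but the idea of many container types over one shared infrastructure can be sketched as a small Java interface. Everything below is hypothetical and illustrative; none of these names come from HDFS or Ozone source.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.util.List;

/** Hypothetical abstraction: any data layout a DataNode can host. */
interface StorageContainer {
  String containerId();
  void put(byte[] key, ByteBuffer value) throws IOException;  // write a block/object
  ByteBuffer get(byte[] key) throws IOException;              // read it back
  void delete(byte[] key) throws IOException;
  long usedBytes();                                           // input for placement/balancing
}

/** Shared services (membership, replication, location) see only the contract. */
interface ContainerLocationService {
  /** Which DataNodes hold replicas of the given container? */
  List<String> locate(String containerId) throws IOException;
}

// Different layouts plug in underneath the same contract, e.g.:
//   class BlockContainer implements StorageContainer { ... }       // HDFS blocks
//   class ObjectStoreContainer implements StorageContainer { ... } // object store (Ozone)
//   class TableContainer implements StorageContainer { ... }       // HBase-style data
```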

Page 17: Evolving HDFS to Generalized Storage Subsystem


Current vs New World (Storage Containers)

Current
• Namespace (in NameNode)
  – File = BlockIds[]
• BlockManager (in NameNode)
  – BlockMap: BlockId -> locations
  – Pipeline repair
  – Replication management
• Block data (in DataNode)
  – BlockId -> data
• Other
  – Generation id (note BlockId = Gen# + Number)
  – File/block completion coordination

New World
• Namespace (in NameNode)
  – File = BlockIds[], but BlockId = ContainerId + LocalBid (an illustrative encoding is sketched below)
• ContainerManager (logically central)
  – ContainerMap: ContainerId -> locations
  – Replication management
  – Cluster membership
• Containers (in DataNode)
  – Container's block metadata + data
  – BlockId -> data
  – Pipeline repair
  – Block completion
  – Generation-id equivalent? (the epoch of Raft?)
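A minimal sketch of what "BlockId = ContainerId + LocalBid" could look like if both are packed into one 64-bit id. The 40/24-bit split is an assumption made up for illustration; the deck does not specify an encoding.

```java
/** Hypothetical encoding of a container-scoped block id in 64 bits. */
final class ContainerBlockId {
  // Illustrative split: high 40 bits = container id, low 24 bits = local block id.
  private static final int LOCAL_BITS = 24;
  private static final long LOCAL_MASK = (1L << LOCAL_BITS) - 1;

  static long encode(long containerId, long localBlockId) {
    return (containerId << LOCAL_BITS) | (localBlockId & LOCAL_MASK);
  }

  static long containerId(long blockId) {
    return blockId >>> LOCAL_BITS;
  }

  static long localBlockId(long blockId) {
    return blockId & LOCAL_MASK;
  }

  public static void main(String[] args) {
    long id = encode(42L, 7L);
    // The NN stores only the combined id; a client asks the Container Manager
    // for the locations of containerId(id), and the DataNode then resolves
    // localBlockId(id) inside that container.
    System.out.println(containerId(id) + " / " + localBlockId(id));
  }
}
```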

Page 18: Evolving HDFS to Generalized Storage Subsystem


Storage Container
• Contains data for many blocks with different block ids
• Recall how the client will perform the mapping:
  – file -> blockId[] (NN)
  – blockId -> container location (Container Manager)
  – the container maps the blockId to data (DataNode)
• A container can be viewed as a local key-value store (see the sketch below)
  – The block id is the key and the block data is the value
• Storage container prototype using LevelDB – an embeddable key-value store
  – The block id is the key and the name of a local file is the value
  – Optimizations
    – Small blocks (< 1 MB) can be stored directly in RocksDB
    – Other compaction of block data to avoid lots of small files
    – But this can be evolved over time
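To make the key-value view concrete, here is a minimal sketch of a container that keeps blockId -> local-file-name mappings in LevelDB, using the org.iq80.leveldb Java port. This is an illustration under those assumptions, not the actual prototype code.

```java
import static org.iq80.leveldb.impl.Iq80DBFactory.factory;

import java.io.File;
import java.nio.charset.StandardCharsets;
import org.iq80.leveldb.DB;
import org.iq80.leveldb.Options;

/** Toy block container: LevelDB maps each block id to the local file holding its data. */
public class LevelDbBlockContainer implements AutoCloseable {
  private final DB db;

  public LevelDbBlockContainer(File dir) throws Exception {
    this.db = factory.open(dir, new Options().createIfMissing(true));
  }

  /** Record where a block's data lives on the local disk. */
  public void putBlock(long blockId, String localFileName) {
    db.put(key(blockId), localFileName.getBytes(StandardCharsets.UTF_8));
  }

  /** Resolve a block id back to its local file (null if absent). */
  public String getBlockFile(long blockId) {
    byte[] value = db.get(key(blockId));
    return value == null ? null : new String(value, StandardCharsets.UTF_8);
  }

  private static byte[] key(long blockId) {
    return Long.toString(blockId).getBytes(StandardCharsets.UTF_8);
  }

  @Override
  public void close() throws Exception {
    db.close();
  }
}
```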

Page 19: Evolving HDFS to Generalized Storage Subsystem


Replication: Possible Approaches
• Data pipeline
  – The data pipeline, a form of chain replication, has been used successfully for data
  – However, its correctness depended on a central coordinator
  – It needs to be extended for block metadata, but that is hard to get right with no central coordinator
• Use Raft replication instead of the data pipeline, for both data and metadata
  – Proven to be correct
  – Has been used primarily for small updates and transactions, so it fits well for metadata
  – There could be performance concerns for large streaming writes; needs prototyping
• Hybrid: Raft + pipeline
  – Can be viewed as the central coordinator being replaced by Raft
  – The data pipeline approach for the data plus the Raft protocol – under discussion

Page 20: Evolving HDFS to Generalized Storage Subsystem


Next steps
• Remove the block management layer's locking against the namespace
  – Reduce lock contention and remove the tight coupling (immediate benefit)
  – Allows us to implement a cleanly separated container management layer
• Block containers (to support tens of billions of blocks)
  – 2-4 GB block containers initially => a 40-80x reduction in BRs and the block map (see the rough arithmetic below)
  – Reduce BR pressure on the NN
• Partial namespace (to billions of files per volume)
  – Will take us to 2B files initially, and then more as we gain experience with file-working-set management
• Volumes + N+K failover
  – Scale both ops and namespace + an operational improvement for HA
• Other containers
  – Local replica & cloud storage (e.g., S3) replica
  – Object store, HBase …
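A rough sanity check of the 40-80x figure, assuming an average block size of about 50 MB (the deck does not state the block size it assumes):

  2 GB container / ~50 MB per block ≈ 40 blocks per container
  4 GB container / ~50 MB per block ≈ 80 blocks per container

Each container then replaces roughly 40-80 individual BlockMap entries and block-report lines with a single container entry and container report.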

Page 21: Evolving HDFS to Generalized Storage Subsystem


Summary

• HDFS scale is proven in real production systems
  – 4K+ node clusters
  – > 200 PB in a single federated NN cluster and > 30 PB in non-federated clusters
  – But a very large number of small files is a challenge
• Important area of current focus: scaling the number of files and blocks
  – Partial namespace: initially scale to 2B files, later 5-10B files per volume + multiple volumes
  – Block containers: initially scale to 6B-12B blocks, later to 100B+ blocks
  – However, we are implementing this in a way that extends the storage layer
• Restructuring the storage layer to support generalized storage containers
  – Supports storage needs beyond HDFS: object store, better HBase support, etc.