Page 1:

Hadoop

1

Page 2:

2

Why Hadoop

Drivers: 500M+ unique users per month, billions of interesting events per day; data analysis is key.

Need massive scalability: PBs of storage, millions of files, thousands of nodes.

Need cost effectiveness: use commodity hardware, share resources among multiple projects, and provide scale when needed.

Need reliable infrastructure: must be able to deal with failures in hardware, software, and networking. Failure is expected rather than exceptional, and it must be transparent to applications, because it is very expensive to build reliability into each application.

The Hadoop infrastructure provides these capabilities.

Page 3:

3

Introduction to Hadoop

Apache Hadoop is open source, an Apache Foundation project; Yahoo! is an Apache Platinum Sponsor.

History: started in 2005 by Doug Cutting; Yahoo! became the primary contributor in 2006. We've scaled it from 20-node clusters to 4000-node clusters today. We deployed large-scale science clusters in 2007 and began running major production jobs in Q1 2008.

Portable: written in Java, runs on commodity hardware under Linux, Mac OS X, Windows, and Solaris.

Page 4:

4

Growing Hadoop Ecosystem

Hadoop Core: distributed file system and MapReduce framework

Pig (initiated by Yahoo!): parallel programming language and runtime

HBase (initiated by Powerset): table storage for semi-structured data

ZooKeeper (initiated by Yahoo!): coordination for distributed systems

Hive (initiated by Facebook): SQL-like query language and metastore

Page 5:

5

M45 (Open Cirrus cluster)

Collaboration with major research universities (via Open Cirrus): Carnegie Mellon University, the University of California at Berkeley, Cornell University, and the University of Massachusetts at Amherst have joined.

Seed facility: Datacenter in a Box (DiB) with 500 nodes, 4000 cores, 3 TB RAM, and 1.5 PB of disk, a high-bandwidth connection to the Internet, located on the Yahoo! corporate campus. Runs Hadoop and has been in use for two years.

Page 6:

Hadoop Community

6

Page 7:

Apache Hadoop Community

Hadoop is owned by the Apache Foundation, which provides a legal and technical framework for collaboration. All code and IP are owned by the non-profit foundation, and anyone can join Apache's meritocracy:

Users
Contributors: write patches
Committers: can commit patches
Project Management Committee: votes on new committers and releases; representatives from many organizations

Use, contribution, and diversity are growing, but we need and want more!

Page 8:

Contributions to Hadoop

Each contribution is a patch, divided by subproject:
Core (includes HDFS and Map/Reduce)
Avro, Chukwa, HBase, Hive, Pig, and ZooKeeper

In 2009, non-Core contributions exceeded Core. Core contributors: 185 people (30% from Yahoo!), with 72% of patches coming from Yahoo!.

Page 9:

Growing Sub-Projects

User-list traffic is the best indicator of usage. Only Core, Pig, and HBase have existed for more than 12 months; all sub-projects are growing.


Page 10:

Hadoop Architecture

10

Page 11:

Typical Hadoop Cluster (Facebook)

40 nodes/rack, 1000-4000 nodes per cluster; 1 Gbps bandwidth within a rack, 8 Gbps out of the rack.

Node specs (Facebook): 8-16 cores, 32 GB RAM, 8×1.5 TB disks, no RAID.

[Diagram: an aggregation switch connects the per-rack switches.]

Page 12:

Typical Hadoop Cluster

Page 13:

Challenges of the Cloud Environment

Cheap nodes fail, especially when you have many of them: the mean time between failures for 1 node is 3 years; for 1000 nodes it is 1 day.
Solution: build fault tolerance into the system.

A commodity network implies low bandwidth.
Solution: push computation to the data.

Programming distributed systems is hard.
Solution: a restricted programming model. Users write data-parallel "map" and "reduce" functions; the system handles work distribution and failures.

Page 14:

Distributed File System

A single petabyte-scale file system for the entire cluster, managed by a single namenode. Files can be written, read, renamed, and deleted, but writes are append-only. Optimized for streaming reads of large files.

Files are broken into large blocks, transparently to the client. Data is checksummed with CRC32 and, for reliability, replicated to several datanodes.

The client library talks to both the namenode and the datanodes; data is not sent through the namenode, so file-system throughput scales nearly linearly.

Access from Java, C, or the command line.

Page 15:

Hadoop Components

Distributed file system (HDFS): single namespace for the entire cluster; replicates data 3x for fault tolerance.

MapReduce framework: runs jobs submitted by users; manages work distribution and fault tolerance; colocated with the file system.

Page 16:

Hadoop Distributed File System

Files split into 128MB blocks

Blocks replicated across several datanodes (often 3)

Namenode stores metadata (file names, locations, etc)

Optimized for large files, sequential reads

Files are append-only

[Diagram: File1 is split into blocks 1-4; the namenode stores the metadata, and each block is replicated on three of the datanodes.]

Page 17:

What is MapReduce?

MapReduce is a programming model for processing large data sets: data-intensive computing on commodity clusters, typically used for distributed computation across many machines.

Pioneered by Google, which processes 20 PB of data per day with it.

Popularized by the Apache Hadoop project; used by Yahoo!, Facebook, Amazon, …

Page 18:

What is MapReduce Used For?

Google: index building for Google Search, article clustering for Google News, statistical machine translation.

Yahoo!: index building for Yahoo! Search, spam detection for Yahoo! Mail.

Facebook: data mining, advertising optimization, spam detection.

Page 19:

Example: Facebook Lexicon

www.facebook.com/lexicon


Page 21:

What is MapReduce Used For?

For research: analyzing Wikipedia conflicts (PARC), natural language processing (CMU), climate simulation (Washington), bioinformatics (Maryland), particle physics (Nebraska), <your application here>.

Page 22:

MapReduce Goals

Scalability to large data volumes: scanning 100 TB on 1 node at 50 MB/s takes roughly 24 days; scanning it on a 1000-node cluster takes roughly 35 minutes (see the back-of-envelope check below).

Cost-efficiency: commodity nodes (cheap, but unreliable), a commodity network (low bandwidth), automatic fault tolerance (fewer admins), and ease of use (fewer programmers).
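A quick back-of-envelope check of those scan-time figures (plain Python; it assumes decimal units, 1 TB = 10**12 bytes, which is why the result comes out slightly under the slide's rounded 24 days / 35 minutes):

TB = 10**12
rate = 50 * 10**6                      # 50 MB/s per node
one_node = 100 * TB / rate             # seconds to scan 100 TB on one node
print(one_node / 86400)                # ~23 days on a single node
print(one_node / 1000 / 60)            # ~33 minutes spread across 1000 nodes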

Page 23:

MapReduce Programming Model

Data type: key-value records

Map function: (K_in, V_in) → list(K_inter, V_inter)

Reduce function: (K_inter, list(V_inter)) → list(K_out, V_out)

Page 24:

Example: Word Count (Python)

def mapper(line):
    for word in line.split():
        output(word, 1)

def reducer(key, values):
    output(key, sum(values))
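The output() calls above are slide shorthand for the framework's emit operation. To make the whole dataflow concrete, here is a small self-contained simulation (plain Python, not Hadoop code) that runs the same mapper and reducer around an explicit shuffle step:

from collections import defaultdict

def mapper(line):
    # Map: emit a (word, 1) pair for every word in the line.
    return [(word, 1) for word in line.split()]

def reducer(key, values):
    # Reduce: sum the counts collected for one word.
    return (key, sum(values))

def run_job(lines):
    pairs = [pair for line in lines for pair in mapper(line)]    # Map phase
    groups = defaultdict(list)                                   # Shuffle & sort
    for key, value in pairs:
        groups[key].append(value)
    return [reducer(k, vs) for k, vs in sorted(groups.items())]  # Reduce phase

lines = ["the quick brown fox", "the fox ate the mouse", "how now brown cow"]
print(run_job(lines))   # [('ate', 1), ('brown', 2), ..., ('the', 3)]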

Page 25:

Word Count Execution

[Figure: word-count dataflow, Input → Map → Shuffle & Sort → Reduce → Output. The three input splits "the quick brown fox", "the fox ate the mouse", and "how now brown cow" each go to a Map task that emits (word, 1) pairs. The shuffle groups the pairs by key across two Reduce tasks, which output (brown, 2), (fox, 2), (how, 1), (now, 1), (the, 3) and (ate, 1), (cow, 1), (mouse, 1), (quick, 1).]

Page 26:

An Optimization using the Combiner

A local reduce function applied to the repeated keys produced by the same map task.

Works for associative operations like sum, count, and max, and decreases the amount of intermediate data.

Example: local counting for Word Count:

def combiner(key, values):
    output(key, sum(values))
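To see what the combiner buys, here is a plain-Python sketch (not Hadoop API) of the local counting it performs on a single mapper's output before the shuffle:

from collections import defaultdict

def combine(mapper_output):
    # Sum the 1s emitted for each word by one map task.
    counts = defaultdict(int)
    for word, n in mapper_output:
        counts[word] += n
    return list(counts.items())

pairs = [("the", 1), ("fox", 1), ("ate", 1), ("the", 1), ("mouse", 1)]
print(combine(pairs))   # [('the', 2), ('fox', 1), ('ate', 1), ('mouse', 1)]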

Page 27:

Word Count with Combiner

[Figure: the same dataflow as before, except that each Map task now runs the combiner on its own output, so the second mapper emits (the, 2) instead of two separate (the, 1) pairs. The final Reduce outputs are unchanged: (brown, 2), (fox, 2), (how, 1), (now, 1), (the, 3) and (ate, 1), (cow, 1), (mouse, 1), (quick, 1).]

Page 28:

MapReduce Execution Details

Mappers are preferentially scheduled on the same node, or the same rack, as their input block, minimizing network use to improve performance.

Mappers save their outputs to local disk before serving them to reducers. This allows recovery if a reducer crashes and allows running more reducers than there are nodes.

Page 29:

Fault Tolerance in MapReduce

1. If a task crashes:
Retry it on another node. This is fine for a map because it had no dependencies, and fine for a reduce because the map outputs are on disk.
If the same task repeatedly fails, fail the job or ignore that input block.

Note: For the fault tolerance to work, user tasks must be deterministic and side-effect-free

Page 30:

Fault Tolerance in MapReduce

2. If a node crashes:
Relaunch its current tasks on other nodes, and relaunch any maps the node previously ran; this is necessary because their output files were lost along with the crashed node.

3. If a task is going slowly (a straggler):
Launch a second copy of the task on another node, take the output of whichever copy finishes first, and kill the other one. This is critical for performance in large clusters, where stragglers have many possible causes.

Page 31:

Takeaways

By providing a restricted data-parallel programming model, MapReduce can control job execution in useful ways: automatic division of a job into tasks, placement of computation near the data, load balancing, and recovery from failures and stragglers.

Page 32:

Outline

MapReduce architecture Sample applications Introduction to Hadoop Higher-level query languages: Pig & Hive Current research

Page 33:

1. Search

Input: (lineNumber, line) records Output: lines matching a given pattern

Map: if(line matches pattern): output(line)

Reduce: identity function

Alternative: no reducer (map-only job)
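A minimal local sketch of this map-only search (plain Python standing in for a Streaming-style mapper; PATTERN and the substring test are illustrative choices, not part of the slide):

import sys

PATTERN = "ERROR"   # hypothetical pattern; a real job would take this as a parameter

def mapper(line_number, line):
    # Map: emit the line only if it matches; with no reducer, map output is the job output.
    if PATTERN in line:
        print(line)

for i, line in enumerate(sys.stdin):
    mapper(i, line.rstrip("\n"))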

Page 34:

2. Sort

Input: (key, value) records
Output: the same records, sorted by key

Map: identity function
Reduce: identity function

Trick: pick a partitioning function p such that k1 < k2 => p(k1) <= p(k2), so concatenating the reducers' outputs gives a globally sorted result (see the partitioner sketch after the figure below).

[Figure: sort example. Map tasks pass records through unchanged; the partitioner routes keys in [A-M] to one reducer and keys in [N-Z] to another, so the reducers output "aardvark, ant, bee, cow, elephant" and "pig, sheep, yak, zebra" respectively, and the concatenated output is globally sorted.]
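Here is a small sketch of such a partitioning function (plain Python; in a real Hadoop job this would be a custom Partitioner, with split points typically chosen by sampling the keys):

import bisect

# Hypothetical split point dividing the key space between two reducers: [A-M] and [N-Z].
SPLIT_POINTS = ["n"]

def partition(key):
    # Monotonic: k1 < k2 implies partition(k1) <= partition(k2), so concatenating
    # the reducers' sorted outputs yields a globally sorted result.
    return bisect.bisect_right(SPLIT_POINTS, key[0].lower())

for animal in ["aardvark", "pig", "cow", "zebra"]:
    print(animal, "-> reducer", partition(animal))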

Page 35:

3. Inverted Index

Input: (filename, text) records
Output: a list of the files containing each word

Map:
    for word in text.split():
        output(word, filename)

Combine: uniquify the filenames for each word

Reduce:
    def reduce(word, filenames):
        output(word, sort(filenames))
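A self-contained simulation of this job (plain Python; output() and sort() above are slide shorthand), using the same two files as the example on the next page:

from collections import defaultdict

def build_inverted_index(files):
    index = defaultdict(set)               # combine: a set uniquifies filenames per word
    for filename, text in files.items():
        for word in text.split():          # map: emit (word, filename)
            index[word].add(filename)
    # reduce: sort the filenames for each word
    return {word: sorted(names) for word, names in index.items()}

files = {"hamlet.txt": "to be or not to be",
         "12th.txt": "be not afraid of greatness"}
for word, names in sorted(build_inverted_index(files).items()):
    print(word, names)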

Page 36:

Inverted Index Example

Input:
hamlet.txt: "to be or not to be"
12th.txt: "be not afraid of greatness"

Map output:
(to, hamlet.txt), (be, hamlet.txt), (or, hamlet.txt), (not, hamlet.txt)
(be, 12th.txt), (not, 12th.txt), (afraid, 12th.txt), (of, 12th.txt), (greatness, 12th.txt)

Reduce output:
afraid, (12th.txt)
be, (12th.txt, hamlet.txt)
greatness, (12th.txt)
not, (12th.txt, hamlet.txt)
of, (12th.txt)
or, (hamlet.txt)
to, (hamlet.txt)

Page 37:

4. Most Popular Words

Input: (filename, text) records
Output: the 100 words occurring in the most files

Two-stage solution (sketched below):
Job 1: create an inverted index, giving (word, list(file)) records.
Job 2: map each (word, list(file)) to (count, word), then sort these records by count as in the sort job.

Optimizations:
Map to (word, 1) instead of (word, file) in Job 1.
Estimate the count distribution in advance by sampling.
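A compact local sketch of the two jobs, already using the (word, 1) optimization (plain Python; top_n stands in for the slide's 100, and the second job's shuffle-by-count is reduced to a single in-memory sort):

from collections import Counter

def most_popular_words(files, top_n=3):
    # Job 1 (with the (word, 1) optimization): count how many files contain each word.
    file_counts = Counter()
    for filename, text in files.items():
        for word in set(text.split()):
            file_counts[word] += 1
    # Job 2: map to (count, word) and sort by count, as in the sort job.
    counted = sorted(((c, w) for w, c in file_counts.items()), reverse=True)
    return counted[:top_n]

files = {"hamlet.txt": "to be or not to be",
         "12th.txt": "be not afraid of greatness"}
print(most_popular_words(files))   # e.g. [(2, 'not'), (2, 'be'), (1, 'to')]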

Page 38:

5. Numerical Integration

Input: (start, end) records for the sub-ranges to integrate (can be implemented using a custom InputFormat)
Output: the integral of f(x) over the entire range

Map:
    def map(start, end):
        total = 0
        x = start
        while x < end:
            total += f(x) * step
            x += step
        output("", total)

Reduce:
    def reduce(key, values):
        output(key, sum(values))
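A runnable local version of the same idea, summing per-range partial results (f, step, and the sub-ranges are illustrative choices, not part of the slide):

def f(x):
    return x * x                      # sample integrand; its integral over [0, 1] is 1/3

def map_range(start, end, step=1e-4):
    # One (start, end) record -> one partial sum, as in the map function above.
    total, x = 0.0, start
    while x < end:
        total += f(x) * step
        x += step
    return total

def reduce_partials(values):
    return sum(values)                # add up the partial sums from all ranges

ranges = [(0.0, 0.25), (0.25, 0.5), (0.5, 0.75), (0.75, 1.0)]
print(reduce_partials(map_range(s, e) for s, e in ranges))   # ~0.3333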

Page 39:

Outline

MapReduce architecture Sample applications Introduction to Hadoop Higher-level query languages: Pig & Hive Current research

Page 40:

Introduction to Hadoop

Download from hadoop.apache.org. To install locally, unzip the release and set JAVA_HOME. Docs: hadoop.apache.org/common/docs/current

Three ways to write jobs:
Java API
Hadoop Streaming (for Python, Perl, etc.)
Pipes API (C++)

Page 41:

Word Count in Java

public static class MapClass extends MapReduceBase
    implements Mapper<LongWritable, Text, Text, IntWritable> {

  private final static IntWritable ONE = new IntWritable(1);

  public void map(LongWritable key, Text value,
                  OutputCollector<Text, IntWritable> output,
                  Reporter reporter) throws IOException {
    String line = value.toString();
    StringTokenizer itr = new StringTokenizer(line);
    while (itr.hasMoreTokens()) {
      output.collect(new Text(itr.nextToken()), ONE);
    }
  }
}

Page 42:

Word Count in Java

public static class Reduce extends MapReduceBase
    implements Reducer<Text, IntWritable, Text, IntWritable> {

  public void reduce(Text key, Iterator<IntWritable> values,
                     OutputCollector<Text, IntWritable> output,
                     Reporter reporter) throws IOException {
    int sum = 0;
    while (values.hasNext()) {
      sum += values.next().get();
    }
    output.collect(key, new IntWritable(sum));
  }
}

Page 43:

Word Count in Java

public static void main(String[] args) throws Exception {
  JobConf conf = new JobConf(WordCount.class);
  conf.setJobName("wordcount");

  conf.setMapperClass(MapClass.class);
  conf.setCombinerClass(Reduce.class);
  conf.setReducerClass(Reduce.class);

  FileInputFormat.setInputPaths(conf, args[0]);
  FileOutputFormat.setOutputPath(conf, new Path(args[1]));

  conf.setOutputKeyClass(Text.class);           // out keys are words (strings)
  conf.setOutputValueClass(IntWritable.class);  // values are counts

  JobClient.runJob(conf);
}

Page 44:

Word Count in Python with Hadoop Streaming

Mapper.py:

import sys

for line in sys.stdin:
    for word in line.split():
        print(word.lower() + "\t" + "1")

Reducer.py:

import sys

counts = {}
for line in sys.stdin:
    word, count = line.split("\t")
    counts[word] = counts.get(word, 0) + int(count)

for word, count in counts.items():
    print(word + "\t" + str(count))

Page 45:

Amazon Elastic MapReduce

Web interface and command-line tools for running Hadoop jobs on EC2

Data is stored in Amazon S3. The service monitors the job and shuts the machines down after use.

Page 46:

Elastic MapReduce UI

Page 47:

Elastic MapReduce UI

Page 48:

Outline

MapReduce architecture Sample applications Introduction to Hadoop Higher-level query languages: Pig & Hive Current research

Page 49:

Motivation

MapReduce is powerful: many algorithms can be expressed as a series of MR jobs

But it’s fairly low-level: must think about keys, values, partitioning, etc.

Can we capture common “job patterns”?

Page 50:

Pig

Started at Yahoo! Research; runs about 50% of Yahoo!'s jobs.

Features:
Expresses sequences of MapReduce jobs
Data model: nested "bags" of items
Provides relational (SQL) operators (JOIN, GROUP BY, etc.)
Easy to plug in Java functions

Page 51:

An Example Problem

Suppose you have user data in one file, website data in another, and you need to find the top 5 most visited pages by users aged 18-25.

Load Users Load Pages

Filter by age

Join on name

Group on url

Count clicks

Order by clicks

Take top 5

Example from http://wiki.apache.org/pig-data/attachments/PigTalksPapers/attachments/ApacheConEurope09.ppt

Page 52:

In MapReduce

Example from http://wiki.apache.org/pig-data/attachments/PigTalksPapers/attachments/ApacheConEurope09.ppt

Page 53:

In Pig Latin

Users = load 'users' as (name, age);
Filtered = filter Users by age >= 18 and age <= 25;
Pages = load 'pages' as (user, url);
Joined = join Filtered by name, Pages by user;
Grouped = group Joined by url;
Summed = foreach Grouped generate group, count(Joined) as clicks;
Sorted = order Summed by clicks desc;
Top5 = limit Sorted 5;
store Top5 into 'top5sites';

Example from http://wiki.apache.org/pig-data/attachments/PigTalksPapers/attachments/ApacheConEurope09.ppt

Page 54:

Translation to MapReduce

Notice how naturally the components of the job translate into Pig Latin:

Load Users → Users = load …
Load Pages → Pages = load …
Filter by age → Filtered = filter …
Join on name → Joined = join …
Group on url → Grouped = group …
Count clicks → Summed = … count() …
Order by clicks → Sorted = order …
Take top 5 → Top5 = limit …

Example from http://wiki.apache.org/pig-data/attachments/PigTalksPapers/attachments/ApacheConEurope09.ppt

Page 55:

Translation to MapReduce

[Figure: the same dataflow as above, now grouped into the three MapReduce jobs (Job 1, Job 2, Job 3) that Pig generates for this script.]

Example from http://wiki.apache.org/pig-data/attachments/PigTalksPapers/attachments/ApacheConEurope09.ppt

Page 56:

Hive

Developed at Facebook and used for most Facebook jobs. A relational database built on Hadoop:
Maintains table schemas
SQL-like query language (which can also call Hadoop Streaming scripts)
Supports table partitioning, complex data types, sampling, and some query optimization

Page 57:

Summary

MapReduce’s data-parallel programming model hides complexity of distribution and fault tolerance

Principal philosophies:
Make it scale, so you can throw hardware at problems.
Make it cheap, saving hardware, programmer, and administration costs (but necessitating fault tolerance).

Hive and Pig further simplify programming

MapReduce is not suitable for all problems, but when it works, it may save you a lot of time

Page 58:

Outline

MapReduce architecture Sample applications Introduction to Hadoop Higher-level query languages: Pig & Hive Current research

Page 59:

Cloud Programming Research

More general execution engines:
Dryad (Microsoft): general task DAGs
S4 (Yahoo!): streaming computation
Pregel (Google): in-memory iterative graph algorithms
Spark (Berkeley): general in-memory computing

Language-integrated interfaces:
Run computations directly from the host language
DryadLINQ (Microsoft), FlumeJava (Google), Spark

Page 60:

Spark Motivation

MapReduce simplified “big data” analysis on large, unreliable clusters

But as soon as organizations started using it widely, users wanted more: more complex, multi-stage applications; more interactive queries; more low-latency online processing.

Page 61:

Spark Motivation

Complex jobs, interactive queries and online processing all need one thing that MR lacks:

Efficient primitives for data sharing

[Figure: three workloads that need data sharing: an iterative job (Stage 1 → Stage 2 → Stage 3), interactive mining (Query 1, Query 2, Query 3), and stream processing (Job 1, Job 2).]

Page 62:

Spark Motivation

[Figure: the same three workloads as on the previous slide.]

Problem: in MR, the only way to share data across jobs is stable storage (e.g. a file system) -> slow!

Page 63:

Examples

[Figure: how MapReduce shares data today. In an iterative job, each iteration reads its input from HDFS and writes its results back to HDFS for the next iteration (HDFS read, HDFS write, HDFS read, HDFS write, ...). In interactive mining, every query re-reads the same input from HDFS to produce its result (result 1, result 2, result 3, ...).]

Page 64:

Goal: In-Memory Data Sharing

[Figure: the same two workloads, with distributed memory in place of HDFS. The iterative job keeps its data in memory between iterations; for interactive mining, the input is loaded once (one-time processing) and each query is answered from distributed memory.]

10-100× faster than network and disk.

Page 65:

Solution: Resilient Distributed Datasets (RDDs)

Partitioned collections of records that can be stored in memory across the cluster

Manipulated through a diverse set of transformations (map, filter, join, etc)

Fault recovery without costly replication: remember the series of transformations that built an RDD (its lineage) and use it to recompute lost data.

Page 66:

Example: Log Mining (Scala code)

Load error messages from a log into memory, then interactively search for various patterns:

lines = spark.textFile("hdfs://...")              // base RDD
errors = lines.filter(_.startsWith("ERROR"))      // transformed RDD
messages = errors.map(_.split('\t')(2))
messages.cache()

messages.filter(_.contains("foo")).count
messages.filter(_.contains("bar")).count
. . .

[Figure: the driver ships these operations as tasks to workers; each worker reads one block of the file, keeps its partition of messages in an in-memory cache (Cache 1-3), and returns results to the driver.]

Result: full-text search of Wikipedia in <1 sec (vs 20 sec for on-disk data); scaled to 1 TB of data in 5-7 sec (vs 170 sec for on-disk data).
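For readers who prefer Python, a rough PySpark equivalent of the same session might look like the sketch below (assuming a local SparkContext; the HDFS path and the field index are the same placeholders as in the Scala code):

from pyspark import SparkContext

sc = SparkContext("local[*]", "LogMining")

lines = sc.textFile("hdfs://...")                        # base RDD
errors = lines.filter(lambda l: l.startswith("ERROR"))   # transformed RDD
messages = errors.map(lambda l: l.split("\t")[2])
messages.cache()                                         # keep the messages in memory

print(messages.filter(lambda l: "foo" in l).count())
print(messages.filter(lambda l: "bar" in l).count())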

Page 67:

Fault Recovery

RDDs track lineage information that can be used to efficiently reconstruct lost partitions.

Ex:
messages = textFile(...).filter(_.startsWith("ERROR"))
                        .map(_.split('\t')(2))

[Figure: lineage graph: HDFS File → filter (func = _.contains(...)) → Filtered RDD → map (func = _.split(...)) → Mapped RDD.]

Page 68:

Fault Recovery Results

[Figure: iteration time in seconds across 10 iterations of a job. The first iteration takes 119 s; later iterations take 56-59 s, except the iteration where a failure happens, which rises to 81 s while lost partitions are reconstructed, after which times return to 57-59 s.]

Page 69:

Example: Logistic Regression

Find best line separating two sets of points

[Figure: two sets of points in a plane, with a random initial line being adjusted toward the target separating line.]

Page 70:

Logistic Regression Code

val data = spark.textFile(...).map(readPoint).cache()

var w = Vector.random(D)

for (i <- 1 to ITERATIONS) {
  val gradient = data.map(p =>
    (1 / (1 + exp(-p.y * (w dot p.x))) - 1) * p.y * p.x
  ).reduce(_ + _)
  w -= gradient
}

println("Final w: " + w)
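As a non-distributed reference, the same per-iteration update in plain Python/NumPy might look like this sketch (D and ITERATIONS mirror the names above; the random data merely stands in for readPoint() so the snippet runs):

import numpy as np

D, ITERATIONS, N = 10, 5, 1000
x = np.random.randn(N, D)                                          # features
y = np.where(x[:, 0] + 0.1 * np.random.randn(N) > 0, 1.0, -1.0)    # labels in {-1, +1}

w = np.random.randn(D)
for i in range(ITERATIONS):
    # Same formula as the Spark map: (1 / (1 + exp(-y * (w . x))) - 1) * y * x,
    # summed over all points (the reduce step).
    margins = y * x.dot(w)
    gradient = ((1.0 / (1.0 + np.exp(-margins)) - 1.0) * y) @ x
    w -= gradient
print("Final w:", w)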

Page 71:

Logistic Regression Performance

[Figure: running-time comparison. The on-disk MapReduce implementation takes 127 s per iteration; Spark takes 174 s for the first iteration (loading the data into memory) and about 6 s for each further iteration.]

Page 72:

Ongoing Projects

Pregel on Spark (Bagel): graph processing programming model as a 200-line library

Hive on Spark (Shark): SQL engine

Spark Streaming: incremental processing with in-memory state

Page 73:

If You Want to Try It Out

www.spark-project.org

To run locally, just need Java installed

Easy scripts for launching on Amazon EC2

Can call into any Java library from Scala

Page 74:

Other Resources

Hadoop: http://hadoop.apache.org/common
Pig: http://hadoop.apache.org/pig
Hive: http://hadoop.apache.org/hive
Spark: http://spark-project.org

Hadoop video tutorials: www.cloudera.com/hadoop-training

Amazon Elastic MapReduce: http://aws.amazon.com/elasticmapreduce/

Page 75:

Map/Reduce

Map/Reduce is a programming model for efficient distributed computing. It works like a Unix pipeline:

cat input | grep | sort      | uniq -c        | cat > output
Input     | Map  | Shuffle & Sort | Reduce    | Output

Efficiency comes from streaming through the data (reducing seeks) and from pipelining.

A good fit for a lot of applications: log processing, web index building, data mining and machine learning.

Page 76:

Map/Reduce features

Java, C++, and text-based APIs: Java uses objects while C++ uses bytes, and the text-based (streaming) API is great for scripting or legacy apps. Higher-level interfaces: Pig, Hive, Jaql.

Automatic re-execution on failure: in a large cluster, some nodes are always slow or flaky; the framework re-executes failed tasks.

Locality optimizations: with large data, bandwidth to the data is a problem. Map/Reduce queries HDFS for the locations of input data, and map tasks are scheduled close to their inputs when possible.

Page 77:

Hadoop is critical to Yahoo’s business

When you visit Yahoo!, you are interacting with data processed with Hadoop!

Page 78:

Hadoop is critical to Yahoo's business

[Diagram: Yahoo! applications built on Hadoop: ads optimization, content optimization, search index, content feed processing.]

When you visit Yahoo!, you are interacting with data processed with Hadoop!

Page 79:

Hadoop is critical to Yahoo's business

[Diagram: Yahoo! applications built on Hadoop: ads optimization, content optimization, search index, content feed processing, and machine learning (e.g. spam filters).]

When you visit Yahoo!, you are interacting with data processed with Hadoop!

Page 80:

Tremendous Impact on Productivity

Makes developers and scientists more productive: key computations are solved in days, not months; projects move from research to production in days; it is easy to learn, even our rocket scientists use it!

The major factors:
You don't need to find new hardware to experiment
You can work with all your data!
Production and research are based on the same framework
No need for R&D to do IT (it just works)

Page 81:

83

Search & Advertising Sciences. Hadoop Applications: Search Assist™

The database for Search Assist™ is built using Hadoop: 3 years of log data, 20 steps of map-reduce.

                    Before Hadoop    After Hadoop
Time                26 days          20 minutes
Language            C++              Python
Development Time    2-3 weeks        2-3 days

Page 82:

Largest Hadoop Clusters in the Universe

25,000+ nodes (~200,000 cores), in clusters of up to 4,000 nodes.

4 tiers of clusters:
Development, Testing and QA (~10%)
Proof of Concepts and Ad-Hoc work (~10%)
    Runs the latest version of Hadoop, currently 0.20
Science and Research (~60%)
    Runs more stable versions
Production (~20%)
    Currently Hadoop 0.18.3

Page 83:

Large Hadoop-Based Applications

Webmap
  2008: ~70 hours runtime, ~300 TB shuffling, ~200 TB output, 1480 nodes
  2009: ~73 hours runtime, ~490 TB shuffling, ~280 TB output, 2500 nodes

Sort benchmarks (Jim Gray contest)
  2008: 1 terabyte sorted in 209 seconds on 900 nodes
  2009: 1 terabyte sorted in 62 seconds on 1500 nodes; 1 petabyte sorted in 16.25 hours on 3700 nodes

Largest cluster
  2008: 2000 nodes, 6 PB raw disk, 16 TB of RAM, 16K CPUs
  2009: 4000 nodes, 16 PB raw disk, 64 TB of RAM, 32K CPUs (40% faster CPUs too)

Page 84:

Q&A

For more information:
http://hadoop.apache.org/
http://developer.yahoo.com/hadoop/

Who uses Hadoop?:

http://wiki.apache.org/hadoop/PoweredBy