
Big data analytics_beyond_hadoop_public_18_july_2013

Jan 26, 2015



This was the deck I used for the Hadoop Meetup talk in Bangalore on 18 July 2013. The talk was titled "Big-data Analytics: Need to Look Beyond Hadoop?"
Transcript
Page 1: Big data analytics_beyond_hadoop_public_18_july_2013


Big-data Analytics: Need to look beyond Hadoop?

Dr. Vijay Srinivas Agneeswaran, Director and Head, Big-data R&D,

Innovation Labs, Impetus

Page 2: Big data analytics_beyond_hadoop_public_18_july_2013

Contents

• Introduction to the Berkeley Data Analytics Stack – Spark
• Machine learning: 3 generations
• Iterative machine learning (ML) algorithms – logistic regression
• Code snippets
• Performance comparison with Hadoop
• Real-time analytics with Twitter's Storm
• Internet traffic use case – ML over Storm
• Performance comparison of Mahout with R/ML over Storm
• Hadoop suitability for certain types of analytic problems

Page 3: Big data analytics_beyond_hadoop_public_18_july_2013

ML Realizations: A 3-Generational View

Page 4: Big data analytics_beyond_hadoop_public_18_july_2013

Iterative ML Algorithms

What are iterative algorithms? Those that need communication among the computing entities.

Examples – neural networks, PageRank, network traffic analysis.

Conjugate gradient (CG) descent
Commonly used to solve systems of linear equations.
[CB09] tried implementing CG on dense matrices with one MapReduce job per primitive:
DAXPY – multiplies a vector x by a constant a and adds a vector y.
DDOT – dot product of two vectors.
MatVec – multiplies a matrix by a vector, producing a vector.
One MR job per primitive means 6 MR jobs per CG iteration and hundreds of MR jobs per CG computation, leading to tens of GBs of communication even for small matrices.

Other iterative algorithms – fast Fourier transform, block tridiagonal solvers.

[CB09] C. Bunch, B. Drawert, M. Norman, MapScale: a cloud environment for scientific computing, Technical Report, University of California, Computer Science Department, 2009.
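To make the primitives concrete, here is a minimal serial Scala sketch (not from the deck) of the three operations that [CB09] mapped to individual MapReduce jobs; vectors are plain arrays, and the helper names are illustrative only.

// DAXPY: a*x + y, element-wise
def daxpy(a: Double, x: Array[Double], y: Array[Double]): Array[Double] =
  x.zip(y).map { case (xi, yi) => a * xi + yi }

// DDOT: dot product of two vectors
def ddot(x: Array[Double], y: Array[Double]): Double =
  x.zip(y).map { case (xi, yi) => xi * yi }.sum

// MatVec: matrix-vector product, one dot product per matrix row
def matVec(m: Array[Array[Double]], x: Array[Double]): Array[Double] =
  m.map(row => ddot(row, x))

Each primitive is trivially parallel on its own; the cost in [CB09] comes from chaining many such MR jobs, with the intermediate vectors written out between jobs.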

Page 5: Big data analytics_beyond_hadoop_public_18_july_2013

Berkeley Big-data Analytics Stack

The stack, from the bottom up:
Mesos: Cluster Management
Hadoop Distributed File System / Tachyon: Distributed In-memory File System
Spark: Computing Paradigm
Spark Streaming | Shark: SQL Abstraction | Bagel/GraphX: Graph Processing

• Mesos – similar to Nimbus used by Storm, but more sophisticated.
• Tachyon: DFS – could be replaced by HDFS.
• Spark – built as a computing paradigm over resilient distributed data sets.
• Shark – comparable to Impala.
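As a rough sketch of how the layers compose (not from the deck; the master URL and paths are placeholders): Spark provides the computation, Mesos manages the cluster it runs on, and HDFS or Tachyon supplies the data.

// Submit a Spark computation to a Mesos-managed cluster, reading from HDFS
val sc = new SparkContext("mesos://master:5050", "StackDemo")
val logs = sc.textFile("hdfs://namenode/logs/part-*")      // storage layer
println(logs.filter(_.contains("ERROR")).count())          // compute layer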

Page 6: Big data analytics_beyond_hadoop_public_18_july_2013

Spark: Third-Generation ML Realization

Resilient distributed data sets (RDDs)
Read-only collection of objects partitioned across a cluster.
Can be rebuilt if a partition is lost.

Operations on RDDs
Transformations – map, flatMap, reduceByKey, sort, join, partitionBy.
Actions – foreach, reduce, collect, count, lookup.

The programmer can build RDDs from:
1. A file in HDFS.
2. Parallelizing a Scala collection – dividing it into slices.
3. Transforming an existing RDD – specifying operations such as map and filter.
4. Changing the persistence of an RDD – cache, or a save action that writes to HDFS.

Shared variables
Broadcast variables, accumulators.

[MZ10] Matei Zaharia, Mosharaf Chowdhury, Michael J. Franklin, Scott Shenker, and Ion Stoica. 2010. Spark: cluster computing with working sets. In Proceedings of the 2nd USENIX Conference on Hot Topics in Cloud Computing (HotCloud'10). USENIX Association, Berkeley, CA, USA.
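A minimal Spark (Scala) sketch of the four ways to obtain an RDD and of the two kinds of shared variables; it assumes a SparkContext named sc has already been created, and the HDFS paths are placeholders.

// 1. RDD from a file in HDFS
val lines = sc.textFile("hdfs://namenode/path/data.txt")
// 2. RDD from a parallelized Scala collection, divided into 8 slices
val numbers = sc.parallelize(1 to 1000, 8)
// 3. RDD from transformations on an existing RDD
val evenSquares = numbers.map(i => i * i).filter(_ % 2 == 0)
// 4. Changed persistence: cache in memory, or save back to HDFS
evenSquares.cache()
evenSquares.saveAsTextFile("hdfs://namenode/path/out")

// Shared variables: a broadcast variable (read-only) and an accumulator
val stopWords = sc.broadcast(Set("a", "an", "the"))
val emptyLines = sc.accumulator(0)
val words = lines.flatMap(_.split(" ")).filter(w => !stopWords.value.contains(w))
lines.foreach(line => if (line.trim.isEmpty) emptyLines += 1)
println("Distinct words: " + words.distinct().count() + ", empty lines: " + emptyLines.value)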

Page 7: Big data analytics_beyond_hadoop_public_18_july_2013


Data Flow in Spark and Hadoop

Page 8: Big data analytics_beyond_hadoop_public_18_july_2013

Some Spark(ling) Examples

Scala code (serial)

var count = 0
for (i <- 1 to 100000) {
  val x = Math.random * 2 - 1
  val y = Math.random * 2 - 1
  if (x*x + y*y < 1) count += 1
}
println("Pi is roughly " + 4 * count / 100000.0)

Sample random points in the square [-1, 1] x [-1, 1] and count how many fall inside the unit circle – roughly PI/4 of them do – which gives an approximate value for PI.

With PS = points sampled, PC = points inside the circle, AS = area of the square (4) and AC = area of the circle (PI): PS/PC ≈ AS/AC = 4/PI, so PI ≈ 4 * (PC/PS).

Page 9: Big data analytics_beyond_hadoop_public_18_july_2013

Some Spark(ling) Examples

Spark code (parallel)

val spark = new SparkContext(<Mesos master>)
var count = spark.accumulator(0)
for (i <- spark.parallelize(1 to 100000, 12)) {
  val x = Math.random * 2 - 1
  val y = Math.random * 2 - 1
  if (x*x + y*y < 1) count += 1
}
println("Pi is roughly " + 4 * count.value / 100000.0)

Notable points:

1. A Spark context is created – it talks to the Mesos[1] master.
2. count becomes a shared variable – an accumulator.
3. The for loop iterates over an RDD – parallelize breaks the Scala range object (1 to 100000) into 12 slices.
4. The for loop over the parallelized collection invokes the foreach method of the RDD.

[1] Mesos is an Apache-incubated clustering system – http://mesosproject.org

Page 10: Big data analytics_beyond_hadoop_public_18_july_2013

Logistic Regression in Spark: Serial Code

// Read data file and convert it into Point objects
val lines = scala.io.Source.fromFile("data.txt").getLines()
val points = lines.map(x => parsePoint(x)).toArray   // materialize so the data can be traversed every iteration

// Run logistic regression
var w = Vector.random(D)
for (i <- 1 to ITERATIONS) {
  var gradient = Vector.zeros(D)
  for (p <- points) {
    val scale = (1 / (1 + Math.exp(-p.y * (w dot p.x))) - 1) * p.y
    gradient += scale * p.x
  }
  w -= gradient
}
println("Result: " + w)

Page 11: Big data analytics_beyond_hadoop_public_18_july_2013

Logistic Regression in Spark

// Read data file and transform it into Point objects
val spark = new SparkContext(<Mesos master>)
val lines = spark.hdfsTextFile("hdfs://.../data.txt")
val points = lines.map(x => parsePoint(x)).cache()

// Run logistic regression
var w = Vector.random(D)
for (i <- 1 to ITERATIONS) {
  val gradient = spark.accumulator(Vector.zeros(D))
  for (p <- points) {
    val scale = (1 / (1 + Math.exp(-p.y * (w dot p.x))) - 1) * p.y
    gradient += scale * p.x
  }
  w -= gradient.value
}
println("Result: " + w)

Page 12: Big data analytics_beyond_hadoop_public_18_july_2013


Logistic Regression: Spark VS Hadoop

http://spark-project.org

Page 13: Big data analytics_beyond_hadoop_public_18_july_2013

Instance of Architecture for Internet Traffic Analysis Use Case

Page 14: Big data analytics_beyond_hadoop_public_18_july_2013

K-means Clustering Algorithm: Mahout VS ML Over Storm

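The chart on this slide compares Mahout's MapReduce-based k-means with k-means running over Storm for the internet traffic use case. As a rough illustration (not from the deck), the per-tuple work inside a Storm bolt reduces to a nearest-centroid assignment; a minimal Scala sketch, with hypothetical helper names and fixed centroids:

// Hypothetical nearest-centroid assignment, as it might run inside a
// Storm bolt's execute() for each incoming traffic-feature tuple.
def squaredDistance(a: Array[Double], b: Array[Double]): Double =
  a.zip(b).map { case (ai, bi) => (ai - bi) * (ai - bi) }.sum

def assignCluster(point: Array[Double], centroids: Array[Array[Double]]): Int =
  centroids.indices.minBy(i => squaredDistance(point, centroids(i)))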

Page 15: Big data analytics_beyond_hadoop_public_18_july_2013

Spark Use Cases


• Ooyala
  • Uses Cassandra for video data personalization.
  • Pre-computed aggregates vs. on-the-fly queries.
  • Moved to Spark for ML and for computing views.
  • Moved to Shark for on-the-fly queries – OLAP aggregate queries that take 130 seconds on Cassandra (C*) take 60 ms in Spark.
• Conviva
  • Uses Hive for repeatedly running ad-hoc queries on video data.
  • Optimized ad-hoc queries using Spark RDDs – found Spark to be 30 times faster than Hive.
  • ML for connection analysis and video streaming optimization.
• Quantifind
  • Helps movie and video game companies predict the success of new releases.
  • Moved from Hadoop to Spark and can now run ML in seconds instead of hours.

Page 16: Big data analytics_beyond_hadoop_public_18_july_2013

Hadoop (un)Suitability: Discussion

• Iterative ML algorithms – Spark, Giraph
  Logistic regression, kernel SVMs, conjugate gradient descent, collaborative filtering, Gibbs sampling, alternating least squares.
• Interactive/on-the-fly data processing – Storm.
• OLAP – data cube operations – Dremel/Drill.
• Data sets that are not embarrassingly parallel?
• Graph processing – GraphLab, Pregel.

Page 17: Big data analytics_beyond_hadoop_public_18_july_2013

Thank You!

[email protected]

• LinkedIn: http://in.linkedin.com/in/vijaysrinivasagneeswaran
• Blogs: blogs.impetus.com
• Twitter: @a_vijaysrinivas