Page 1: Mres presentation

Play2SDG: Bridging the Gap between Serving and Analytics in Scalable Web Applications

Panagiotis Garefalakis M.Res Thesis Presentation, 7 September 2015

Page 2: Mres presentation

Outline

• Motivation
• Challenges
  ➡ Scalable web app design
  ➡ Resource efficiency
  ➡ Resource isolation
• In-memory Web Objects model
  ➡ Play2SDG case study
  ➡ Experimental results
• Conclusions
• Future work

Page 3: Mres presentation

Motivation

• Most modern web and mobile applications offer highly personalised services, generating large amounts of data

• Tasks are separated into offline best-effort (BE) and online latency-critical (LC) ones, based on their latency, computation, and data-freshness requirements

• To train models and offer analytics, applications use asynchronous offline computation, which leads to stale data being served to clients

• To serve requests robustly and with low latency, applications cache data from the analytics layer

• Applications are deployed in large clusters, but tasks are not colocated, in order to avoid SLO violations

• The result: no data-freshness guarantees and poor resource efficiency


Page 4: Mres presentation

Typical Web App

[Figure: typical web application architecture. Cluster A: a load balancer in front of web servers; the web application is layered into presentation, business, and data model, backed by a database server, and handles HTTP requests and responses. Cluster B: data-intensive batch processing over the data produces trained models (dashed lines denote offline tasks).]

• What does a typical scalable web application look like?

• There is a strict decoupling of online and offline tasks

• With the emergence of cloud computing, these applications are deployed on clusters with thousands of machines

Page 5: Mres presentation

Challenges: Resource Efficiency

Delimitrou, Christina, and Christos Kozyrakis. "Quasar: Resource-efficient and QoS-aware cluster management." ASPLOS 2014.

• Most cloud facilities operate at very low utilisation, hurting both cost effectiveness and future scalability

• The figure depicts a utilisation analysis, over one month, of a production cluster at Twitter with thousands of servers, managed by Mesos. The cluster mostly hosts user-centric services

• The aggregate CPU utilisation is consistently below 20%, even though reservations reach up to 80% of total capacity

Page 6: Mres presentation

Challenges: Resource Efficiency

Delimitrou, Christina, and Christos Kozyrakis. "Quasar: Resource-efficient and QoS-aware cluster management." ASPLOS 2014.

• Even when looking at individual servers, most of them do not exceed 50% utilisation in any week

• Typical memory use is higher (40-50%) but still falls short of the reserved capacity

Page 7: Mres presentation

Challenges: Resource Isolation

Lo, David, Liqun Cheng, Rama Govindaraju, Parthasarathy Ranganathan, and Christos Kozyrakis. "Heracles: Improving Resource Efficiency at Scale." ISCA 2015.

When a number of workloads execute concurrently on a server, they compete for shared resources.

• Shared cluster environments suffer from resource interference. The main resources affected are CPU cores, the last-level cache (LLC), memory (DRAM), and the network. There are also non-obvious interactions between resources, known as cross-resource interactions

• What about the resource isolation mechanisms provided by the operating system through scheduling?

• Even at low load, colocating LC with BE tasks creates enough pressure on shared resources to cause SLO violations; the impact differs with each LC task's sensitivity to shared resources

• The reported values are latencies, normalised to the SLO latency
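The normalised-latency metric on this slide can be sketched as a tiny helper; this is an illustration only, and all names (SloCheck, normalise, violates) are assumptions rather than thesis code:

```java
// Illustrative helper: latencies are reported normalised to the SLO
// latency, so any normalised value above 1.0 is an SLO violation.
public class SloCheck {
    // Normalise an observed tail latency against the SLO target.
    public static double normalise(double observedMillis, double sloMillis) {
        return observedMillis / sloMillis;
    }

    // An SLO violation occurs when the normalised latency exceeds 1.0.
    public static boolean violates(double observedMillis, double sloMillis) {
        return normalise(observedMillis, sloMillis) > 1.0;
    }

    public static void main(String[] args) {
        // LC task alone: 8 ms against a 10 ms SLO -> 0.8, no violation.
        System.out.println(normalise(8.0, 10.0));
        // Colocated with a BE task: 23 ms against the same SLO -> violation.
        System.out.println(violates(23.0, 10.0));
    }
}
```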

Page 8: Mres presentation

In-memory Web Objects Model

[Figure: IWOs architecture. Cluster A hosts the load balancer, web server, and web application (presentation and data-model layers), which exposes an IWOs API to handle HTTP requests and responses. The same cluster runs stateful stream processing (SDGs) with a serving dataflow (src, o1, o2, snk) and an analytics dataflow (src, o1, o2, o3, o4, snk) that share state through an IWO. A scheduler with per-SLO queues dispatches worker threads over the dataflow state.]

• IWOs express both the online and offline logic of a web application as a single stateful distributed dataflow graph (SDG)

• State of the dataflow computation is expressed as IWOs, which are accessible as persistent objects by the application

• What about strict application SLOs, i.e. resource isolation and efficiency?

Tasks can be cooperatively scheduled, allowing resources to be moved efficiently between tasks of the dataflow according to the web application's needs. As a result, the application can exploit data-parallel processing for compute-intensive requests while maintaining high resource utilisation, e.g. when training complex models, leading to fresher data while serving results with low latency from IWOs.
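The core idea of state being "accessible as persistent objects by the application" can be sketched minimally; this is not the thesis implementation, and the class and method names (InMemoryWebObject, put, get) are illustrative assumptions:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal sketch of the IWO idea: dataflow state is exposed to the web
// application as an in-memory object that the serving path reads with low
// latency while the analytics path updates it in place.
public class InMemoryWebObject<K, V> {
    private final Map<K, V> state = new ConcurrentHashMap<>();

    // Called from the analytics dataflow when a state update is ready.
    public void put(K key, V value) { state.put(key, value); }

    // Called from the serving dataflow; no round trip to an external store.
    public V get(K key) { return state.get(key); }

    public static void main(String[] args) {
        InMemoryWebObject<String, double[]> recs = new InMemoryWebObject<>();
        // Analytics side: publish fresh recommendation scores for a user.
        recs.put("alice", new double[] {0.9, 0.4, 0.1});
        // Serving side: read the live state directly when rendering a page.
        System.out.println(recs.get("alice")[0]);
    }
}
```

Because both dataflows touch the same object, the serving path always sees the freshest trained state instead of a periodically refreshed cache.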

Page 9: Mres presentation

Play2SDG: Typical Web Music App

[Figure: the front end (web) runs the Play Framework; a data transport layer with ORM, queue, and key-value interfaces connects it to the back end, a Mesos/Spark cluster. The front end authenticates synchronously and fetches ratings, recommendations, and data asynchronously; new ratings are pushed onto a queue. Cassandra stores relational and non-relational data (userItem and Cooccurrence); a high-latency, high-throughput batch-processing task reads userItem data, writes CoOccurrence data, and updates the store. Numbered steps mark the request flow; solid arrows denote synchronous tasks, dashed arrows asynchronous tasks.]

Controller actions shown in the figure:

    index(user, password) {
        if (!User.authenticate(user, pass))
            return "Invalid credentials";
    }

    view(user) {
        // Constructing recommendation
        userRow = userItem.getRow(user);
        coOcc.multiply(userRow);
    }

    rate(user, item, rating) {
        // Pushing the new rating onto the queue
        Queue.publish(user, item, rating);
    }

• Implemented a typical scalable web music service using the Play Framework for Java

• Decoupled online and offline tasks to lower response latency

• Asynchronous collaborative filtering (CF) task using Apache Spark, with Mesos for deployment
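The recommendation step the offline task computes (the figure's coOcc.multiply(userRow)) can be sketched with plain arrays standing in for the Spark job; dimensions and names are illustrative assumptions:

```java
// Sketch of the collaborative-filtering scoring step: an item
// co-occurrence matrix multiplied by a user's rating vector yields
// per-item recommendation scores.
public class CoOccurrenceCF {
    // scores[i] = sum_j coOcc[i][j] * userRow[j]
    public static double[] recommend(double[][] coOcc, double[] userRow) {
        double[] scores = new double[coOcc.length];
        for (int i = 0; i < coOcc.length; i++)
            for (int j = 0; j < userRow.length; j++)
                scores[i] += coOcc[i][j] * userRow[j];
        return scores;
    }

    public static void main(String[] args) {
        // 3 items; the user rated item 0 with 5.0 and item 2 with 1.0.
        double[][] coOcc = {{0, 2, 1}, {2, 0, 3}, {1, 3, 0}};
        double[] userRow = {5.0, 0.0, 1.0};
        double[] scores = recommend(coOcc, userRow);
        // Item 1 scores highest: 2*5 + 0*0 + 3*1 = 13.
        System.out.println(scores[1]);
    }
}
```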

Page 10: Mres presentation

• Implemented the same scalable web music service using the IWOs API, with only minor changes to the application code

• Express both online and offline logic of a web application as a stateful distributed dataflow graph

• Online collaborative filtering implementation using SDGs: addRating must achieve high throughput, while getRec must serve requests with low latency, since recommendations are included in dynamically generated web pages
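The online path's cheap addRating can be sketched as an incremental update: instead of rebuilding the model in batch, each new rating bumps co-occurrence counts against the user's previously rated items. This is an illustrative reconstruction, not the thesis SDG code; all names are assumptions:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Sketch of online CF state: userItems mirrors the userItem state element,
// coOcc mirrors the co-occurrence state element, updated per rating.
public class OnlineCF {
    private final Map<String, Set<String>> userItems = new HashMap<>();
    private final Map<String, Integer> coOcc = new HashMap<>(); // "a|b" -> count

    public void addRating(String user, String item) {
        Set<String> items = userItems.computeIfAbsent(user, u -> new HashSet<>());
        for (String other : items)              // bump co-occurrence pairs
            coOcc.merge(key(item, other), 1, Integer::sum);
        items.add(item);                        // update the userItem state
    }

    public int coOccurrence(String a, String b) {
        return coOcc.getOrDefault(key(a, b), 0);
    }

    // Order-independent pair key so (a,b) and (b,a) share one counter.
    private static String key(String a, String b) {
        return a.compareTo(b) < 0 ? a + "|" + b : b + "|" + a;
    }

    public static void main(String[] args) {
        OnlineCF cf = new OnlineCF();
        cf.addRating("alice", "trackA");
        cf.addRating("alice", "trackB");        // trackA and trackB now co-occur
        System.out.println(cf.coOccurrence("trackA", "trackB"));
    }
}
```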

Play2SDG: IWOs Web Music App

[Figure: the front end (web) runs the Play Framework, authenticating users through a JPA interface and reading/writing the datasource of the SDG distributed processing system through an IWO interface. The SDG runs a serving dataflow (getUserVec and getRecVec task elements over the userItem and coOcc state elements) and an analytics dataflow (updateUserItem and updateCoOcc); its state is transparently backed by the Cassandra data store (userItem, CoOccurrence) through a low-latency interface.]

Controller actions shown in the figure:

    index(user, password) {
        if (!User.authenticate(user, pass))
            return "Invalid credentials";
    }

    view(user) {
        // Access dataflow live state
        DataSource ds = DB.getDatasource();
        userRow = ds.get(userItem).getRow(user);
        coOcc.multiply(userRow);
    }

    rate(user, item, rating) {
        // Write directly to dataflow state
        DataSource ds = DB.getDatasource();
        ds.updateUserItem(user, item, rating);
        ds.updateCoOc(userItem);
        return OK;
    }

Page 11: Mres presentation

Evaluation Platform

• Wombat's private cluster with 5 machines

• Each machine: 8 CPUs, 8 GB RAM, a 1 TB locally mounted disk, 1 Gbps network

• Data: Million Song Dataset, 943,347 unique tracks with 8,598,630 tag pairs

• Software:
  - Apache Spark 1.1.0
  - Apache Mesos 0.22.1 (1 master node, 3 slaves)
  - Nginx 1.1.19
  - Cassandra 2.0.1

• Load generator: Apache JMeter 2.13, producing the following functional behaviour pattern:
  1. user login
  2. navigate through the home page displaying the top 100 tracks
  3. visit the page with the latest recommendations
  4. user logout

Page 12: Mres presentation

Systems Compared

• Isolated Play2SDG
  - Play framework, Cassandra, and Spark are each configured to use up to 2 cores and 2 GB of memory through the Mesos API
  - Spark is set up in cluster mode and is not allowed to be colocated with the Play application

• Colocated Play2SDG
  - Play framework, Cassandra, and Spark are each configured to use up to 2 cores and 2 GB of memory through the Mesos API
  - Spark is set up in cluster mode and is allowed to be colocated with the Play application

• Play2SDG IWOs implementation
  - Both serving and analytics tasks implemented as an SDG
  - The application JVM is configured to use the same resources as above via JVM settings and cgroups, with scheduling disabled

Page 13: Mres presentation

Play2SDG Case Study Results

[Figure: four panels comparing the three systems (isolated Play component with Cassandra; colocated Spark-Play components with Cassandra; IWOs serving and analytics with Cassandra). Panels plot average throughput (TPS, 0-1400) and average response time (0-500 ms) against the number of clients (5-300), and response-time percentiles (75th, 90th, and 99th; 0.01-10 s, log scale) against the number of clients.]

Page 14: Mres presentation

Scheduling Results

[Figure: response latency (ms, 0-50) of scheduled serving IWOs over 120 seconds of execution.]

Page 15: Mres presentation

Thesis Contributions

• Introduced In-memory Web Objects (IWOs), offering developers a unified model for writing web applications that serve data while using big-data analytics

• An isolation mechanism for IWOs based on cooperative task scheduling, which reduces the number of scheduling decisions and allocates resources in a fine-grained way, leading to improved resource utilisation

• An evaluation of IWOs through Play2SDG, a real web application similar to Spotify with both online/LC and offline/BE tasks, implemented as an extension of the Play framework
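The cooperative-scheduling contribution above can be illustrated with a single-threaded sketch in which LC work always runs before BE work, so resources move between tasks without OS preemption. This is an assumption-laden illustration, not the thesis scheduler:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative cooperative scheduler: drains latency-critical (LC) work
// first and lets best-effort (BE) work run only when no LC work is queued.
public class CooperativeScheduler {
    private final Deque<Runnable> lcQueue = new ArrayDeque<>();
    private final Deque<Runnable> beQueue = new ArrayDeque<>();

    public void submitLC(Runnable task) { lcQueue.add(task); }
    public void submitBE(Runnable task) { beQueue.add(task); }

    // One scheduling decision: always prefer LC; fall back to BE.
    public boolean runOne() {
        Runnable next = !lcQueue.isEmpty() ? lcQueue.poll() : beQueue.poll();
        if (next == null) return false;
        next.run();
        return true;
    }

    public static void main(String[] args) {
        CooperativeScheduler s = new CooperativeScheduler();
        StringBuilder order = new StringBuilder();
        s.submitBE(() -> order.append("BE"));
        s.submitLC(() -> order.append("LC"));
        while (s.runOne()) { }
        System.out.println(order);              // LC runs before BE
    }
}
```

Because tasks yield between units of work rather than being preempted, the scheduler can make fine-grained allocation decisions, which is the property the contribution credits for improved utilisation.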


Page 16: Mres presentation

Future work

• Focus on efficient distributed scheduling of BE analytics and LC serving tasks

• Further investigate the automatic conversion of a web application into an SDG

• Implement the IWOs abstract programming model for other stateful stream-processing frameworks, such as Flink

• More Evaluation!


Page 17: Mres presentation

Thank you!

Questions???


email: [email protected]

Page 18: Mres presentation

Demo time!


Page 19: Mres presentation


Backup Slide Isolation

[Figure: left, Mesos deployment time and application ramp-up time (0-20 seconds) against the number of instances (0-9); right, LXC container snapshot and clone times (0-120 seconds) against the number of instances.]

Page 20: Mres presentation


Backup Slide 2

Page 21: Mres presentation


Backup Slide 3

[Figure: wide-column data-model examples. Top: a cell is addressed by its coordinates (row key, column family name, column qualifier, version), e.g. row-18 holds A18-v1 under cf1:col-A, B18-v3 under cf1:col-B, XYZ18-v2 under cf2:col2-XYZ, and foobar18-v1 under cf2:foobar. Bottom: Play2SDG Cassandra schemas with row key, cluster key, and column families; track metadata (id, name, Title, Artist, releaseDate, DateTime) is held in static column families, tags (tag1, tag2) in a dynamic column family, and stats (Stat Desc., StatsMap as Map<k,v>, timestamps timeX/timeY) under the playServ, playCF, and sparkCF column-family prefixes.]