Transcript
Page 1: Building Your Data Warehouse with Amazon Redshift

©2015, Amazon Web Services, Inc. or its affiliates. All rights reserved

Building Your Data Warehouse with Amazon Redshift

Ian Meyers, AWS ([email protected])

Guest Speaker: Toby Moore, Co-Founder & CTO, Space Ape Games ([email protected])

Page 2: Building Your Data Warehouse with Amazon Redshift

Data Warehouse - Challenges

Cost

Complexity

Performance

Rigidity

Chart: Enterprise Data vs. Data in Warehouse, 1990 to 2020.

Page 3: Building Your Data Warehouse with Amazon Redshift

Petabyte scale; massively parallel

Relational data warehouse

Fully managed; zero admin

SSD & HDD platforms

As low as $1,000/TB/Year

Amazon Redshift

Page 4: Building Your Data Warehouse with Amazon Redshift

Redshift powers Clickstream Analytics for Amazon.com

Web log analysis for Amazon.com

•  Over one petabyte workload
•  Largest table: 400TB
•  2TB of data per day

Understand customer behavior

•  Who is browsing but not buying
•  Which products / features are winners
•  What sequence led to higher customer conversion

Solution

•  Best scale-out solution: query across 1 week
•  Hadoop: query across 1 month

Page 5: Building Your Data Warehouse with Amazon Redshift

Redshift Performance Realized

Performance

•  Scan 15 months of data: 14 minutes (2.25 trillion rows)
•  Load one day's worth of data: 10 minutes (5 billion rows)
•  Backfill one month of data: 9.75 hours (150 billion rows)
•  Pig → Amazon Redshift: 2 days to 1 hour (10B-row join with 700M rows)
•  Oracle → Amazon Redshift: 90 hours to 8 hours (reduced number of SQLs by a factor of 3)

Cost

•  2PB cluster: 100-node dw1.8xl (3-year RI), $180/hr

Complexity

•  20% of one DBA's time: backup, restore, resizing

Page 6: Building Your Data Warehouse with Amazon Redshift

Chart: Time to Deploy and Manage a Cluster. Duration in minutes (time spent on clicks) for Deploy, Connect, Backup, Restore, and Resize (2 to 16 nodes), shown for 2-, 16-, and 128-node clusters.

Simplicity

Page 7: Building Your Data Warehouse with Amazon Redshift

Who uses Amazon Redshift?

Page 8: Building Your Data Warehouse with Amazon Redshift

Common Customer Use Cases

•  Reduce costs by extending DW rather than adding HW

•  Migrate completely from existing DW systems

•  Respond faster to business

•  Improve performance by an order of magnitude

•  Make more data available for analysis

•  Access business data via standard reporting tools

•  Add analytic functionality to applications

•  Scale DW capacity as demand grows

•  Reduce HW & SW costs by an order of magnitude

Traditional Enterprise DW Companies with Big Data SaaS Companies

Page 9: Building Your Data Warehouse with Amazon Redshift

Selected Amazon Redshift Customers

Page 10: Building Your Data Warehouse with Amazon Redshift

Amazon Redshift Partners

Page 11: Building Your Data Warehouse with Amazon Redshift

Amazon Redshift Architecture

•  Leader node
–  SQL endpoint
–  Stores metadata
–  Coordinates query execution

•  Compute nodes
–  Local, columnar storage
–  Execute queries in parallel
–  Load, backup, restore via Amazon S3; load from Amazon DynamoDB or SSH

•  Two hardware platforms
–  Optimized for data processing
–  DW1: HDD; scale from 2TB to 1.6PB
–  DW2: SSD; scale from 160GB to 256TB

Diagram: SQL clients/BI tools connect via JDBC/ODBC to the leader node; compute nodes (each 128GB RAM, 16TB disk, 16 cores) communicate over 10 GigE (HPC); ingestion, backup, and restore via Amazon S3 / DynamoDB / SSH.

Page 12: Building Your Data Warehouse with Amazon Redshift

Amazon Redshift dramatically reduces I/O

•  Data compression

•  Zone maps

•  Direct-attached storage

•  Large data block sizes

ID  | Age | State | Amount
123 | 20  | CA    | 500
345 | 25  | WA    | 250
678 | 40  | FL    | 125
957 | 37  | WA    | 375


Page 14: Building Your Data Warehouse with Amazon Redshift

Amazon Redshift dramatically reduces I/O

•  Column storage

•  Data compression

•  Zone maps

•  Direct-attached storage

•  Large data block sizes

analyze compression listing;

Table   | Column         | Encoding
--------+----------------+---------
listing | listid         | delta
listing | sellerid       | delta32k
listing | eventid        | delta32k
listing | dateid         | bytedict
listing | numtickets     | bytedict
listing | priceperticket | delta32k
listing | totalprice     | mostly32
listing | listtime       | raw
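The encodings above trade a little CPU for much less I/O. As a rough sketch of the idea behind the delta encoding (illustrative only, not Redshift's on-disk format): store the first value, then only the difference from the previous value, so slowly growing columns like listid compress to a stream of small integers.

```python
# Illustrative delta-encoding sketch (assumption: simplified model,
# not Redshift's actual storage format).
def delta_encode(values):
    out = [values[0]]                       # keep the first value as-is
    for prev, cur in zip(values, values[1:]):
        out.append(cur - prev)              # then store only the differences
    return out

def delta_decode(encoded):
    out = [encoded[0]]
    for d in encoded[1:]:
        out.append(out[-1] + d)             # reconstruct by running sum
    return out

listids = [100, 101, 102, 105, 106]
encoded = delta_encode(listids)             # [100, 1, 1, 3, 1] -- small deltas pack tightly
assert delta_decode(encoded) == listids
```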

Page 15: Building Your Data Warehouse with Amazon Redshift

Amazon Redshift dramatically reduces I/O

•  Column storage

•  Data compression

•  Direct-attached storage

•  Large data block sizes

•  Tracks the minimum and maximum value for each block

•  Skip over blocks that don’t contain the data needed for a given query

•  Minimize unnecessary I/O
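The zone-map idea can be sketched in a few lines (a simplified model, not Redshift internals): keep each block's min/max, and read a block only when its range could contain the value a query needs.

```python
# Illustrative zone-map sketch (assumption: simplified model, not Redshift internals).
blocks = [[3, 7, 9], [12, 15, 18], [21, 25, 30]]
zone_map = [(min(b), max(b)) for b in blocks]   # per-block min/max

def scan_eq(blocks, zone_map, target):
    hits, blocks_read = [], 0
    for block, (lo, hi) in zip(blocks, zone_map):
        if lo <= target <= hi:                  # only read blocks that can match
            blocks_read += 1
            hits += [v for v in block if v == target]
    return hits, blocks_read

hits, blocks_read = scan_eq(blocks, zone_map, 15)
assert hits == [15] and blocks_read == 1        # 2 of 3 blocks skipped
```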

Page 16: Building Your Data Warehouse with Amazon Redshift

Amazon Redshift dramatically reduces I/O

•  Column storage

•  Data compression

•  Zone maps

•  Direct-attached storage

•  Large data block sizes

•  Use direct-attached storage to maximize throughput

•  Hardware optimized for high performance data processing

•  Large block sizes to make the most of each read

•  Amazon Redshift manages durability for you

Page 17: Building Your Data Warehouse with Amazon Redshift

Amazon Redshift has security built-in

•  SSL to secure data in transit

•  Encryption to secure data at rest –  AES-256; hardware accelerated –  All blocks on disks and in Amazon S3 encrypted –  HSM Support

•  No direct access to compute nodes

•  Audit logging & AWS CloudTrail integration

•  Amazon VPC support

•  SOC 1/2/3, PCI-DSS Level 1, FedRAMP, others

Diagram: SQL clients/BI tools connect via JDBC/ODBC to the leader node in the customer VPC; compute nodes (each 128GB RAM, 16TB disk, 16 cores) sit in an internal VPC on 10 GigE (HPC); ingestion, backup, and restore via Amazon S3 / Amazon DynamoDB.

Page 18: Building Your Data Warehouse with Amazon Redshift

Amazon Redshift is 1/10th the Price of a Traditional Data Warehouse

DW1 (HDD)                | Price per hour (DW1.XL, single node) | Effective annual price per TB
On-Demand                | $0.850                               | $3,723
1-Year Reserved Instance | $0.215                               | $2,192
3-Year Reserved Instance | $0.114                               | $999

DW2 (SSD)                | Price per hour (DW2.L, single node)  | Effective annual price per TB
On-Demand                | $0.250                               | $13,688
1-Year Reserved Instance | $0.075                               | $8,794
3-Year Reserved Instance | $0.050                               | $5,498
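The on-demand figures follow directly from hourly price x hours per year / node storage; the Reserved Instance rows also fold in amortized upfront fees, so only the on-demand rows are reproduced here (node capacities assumed from the slide: DW1.XL 2TB, DW2.L 160GB).

```python
HOURS_PER_YEAR = 24 * 365  # 8,760

# On-demand effective annual $/TB = hourly price * hours per year / node storage (TB)
dw1_xl = 0.850 * HOURS_PER_YEAR / 2          # DW1.XL: 2TB HDD per node
dw2_l = 0.250 * HOURS_PER_YEAR * 1000 / 160  # DW2.L: 160GB SSD per node

print(round(dw1_xl))  # 3723
print(round(dw2_l))   # 13688
```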

Page 19: Building Your Data Warehouse with Amazon Redshift

Expanding Amazon Redshift’s Functionality

Page 20: Building Your Data Warehouse with Amazon Redshift

New Dense Storage Instance

DS2, based on EC2's D2, has twice the memory and CPU of DW1. Migrate from DS1 to DS2 by restoring from snapshot. We will help you migrate your RIs.

•  Twice the memory and compute power of DW1
•  Enhanced networking and a 1.5x gain in disk throughput
•  40% to 60% performance gain over DW1
•  Available in two node types: XL (2TB) and 8XL (16TB)

Page 21: Building Your Data Warehouse with Amazon Redshift

Custom ODBC and JDBC Drivers

•  Up to 35% higher performance than open source drivers

•  Supported by Informatica, Microstrategy, Pentaho, Qlik, SAS, Tableau

•  Will continue to support PostgreSQL open source drivers

•  Download drivers from console

Page 22: Building Your Data Warehouse with Amazon Redshift

Explain Plan Visualization

Page 23: Building Your Data Warehouse with Amazon Redshift

User Defined Functions

•  We're enabling User Defined Functions (UDFs) so you can add your own
–  Scalar and aggregate functions supported

•  You'll be able to write UDFs using Python 2.7
–  Syntax is largely identical to PostgreSQL UDF syntax
–  System and network calls within UDFs are prohibited

•  Comes with Pandas, NumPy, and SciPy pre-installed
–  You'll also be able to import your own libraries for even more flexibility

Page 24: Building Your Data Warehouse with Amazon Redshift

Scalar UDF example – URL parsing

CREATE FUNCTION f_hostname (VARCHAR url)
  RETURNS varchar
IMMUTABLE AS $$
  import urlparse
  return urlparse.urlparse(url).hostname
$$ LANGUAGE plpythonu;
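The UDF body above runs under Python 2.7, whose urlparse module became urllib.parse in Python 3. A standalone Python 3 sketch of the same logic:

```python
# Standalone Python 3 equivalent of the UDF body above:
# Python 2's urlparse module is urllib.parse in Python 3.
from urllib.parse import urlparse

def f_hostname(url):
    return urlparse(url).hostname

print(f_hostname("https://aws.amazon.com/redshift/"))  # aws.amazon.com
```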

Page 25: Building Your Data Warehouse with Amazon Redshift

Interleaved Multi Column Sort

•  Currently support compound sort keys
–  Optimized for applications that filter data by one leading column

•  Adding support for interleaved sort keys
–  Optimized for filtering data by up to eight columns
–  No storage overhead, unlike an index
–  Lower maintenance penalty compared to indexes

Page 26: Building Your Data Warehouse with Amazon Redshift

Compound Sort Keys Illustrated

Records in Redshift are stored in blocks. For this illustration, let's assume that four records fill a block. Records with a given cust_id are all in one block; however, records with a given prod_id are spread across four blocks.

Diagram: a 4x4 grid of [cust_id, prod_id] pairs ([1,1] through [4,4]) sorted by the compound key; each block of four records holds a single cust_id value, while each prod_id value spans all four blocks (columns: cust_id, prod_id, other columns, blocks).
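The layout on this slide can be sketched by sorting all (cust_id, prod_id) pairs by the compound key and packing four records per block:

```python
from itertools import product

# Compound sort key on (cust_id, prod_id), four records per block as in the slide.
records = sorted(product(range(1, 5), range(1, 5)))  # sort by cust_id, then prod_id
blocks = [records[i:i + 4] for i in range(0, 16, 4)]

def blocks_containing(key_fn, value):
    # How many blocks hold at least one record with this key value?
    return sum(1 for b in blocks if any(key_fn(r) == value for r in b))

# All cust_id=1 records land in one block; prod_id=1 is spread across all four.
assert blocks_containing(lambda r: r[0], 1) == 1
assert blocks_containing(lambda r: r[1], 1) == 4
```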

Page 27: Building Your Data Warehouse with Amazon Redshift

Interleaved Sort Keys Illustrated

Records with a given cust_id are spread across two blocks; records with a given prod_id are also spread across two blocks. Data is sorted in equal measures for both keys.

Diagram: the same 4x4 grid of [cust_id, prod_id] pairs ([1,1] through [4,4]), now sorted in interleaved order.

Diagram: the block layout under the interleaved sort (columns: cust_id, prod_id, other columns, blocks); each cust_id and prod_id value now touches only two of the four blocks.
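One common way to realize an interleaved sort order is Z-ordering (bit interleaving). Assuming a similar scheme purely for illustration (the slide does not specify Redshift's exact algorithm), the two-blocks-per-value property falls out:

```python
def interleave_bits(x, y, bits=2):
    # Morton / Z-order key: alternate the bits of x and y.
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (2 * i)
        key |= ((y >> i) & 1) << (2 * i + 1)
    return key

records = [(c, p) for c in range(1, 5) for p in range(1, 5)]
records.sort(key=lambda r: interleave_bits(r[0] - 1, r[1] - 1))
blocks = [records[i:i + 4] for i in range(0, 16, 4)]

def blocks_touched(col, value):
    # How many blocks hold at least one record with this value in the column?
    return sum(1 for b in blocks if any(r[col] == value for r in b))

# Every cust_id and every prod_id value now touches exactly two blocks.
assert all(blocks_touched(0, v) == 2 for v in range(1, 5))
assert all(blocks_touched(1, v) == 2 for v in range(1, 5))
```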

Page 28: Building Your Data Warehouse with Amazon Redshift

How to use the feature

•  New keyword 'INTERLEAVED' when defining sort keys
–  Existing syntax will still work; behavior is unchanged
–  You can choose up to 8 columns to include and can query with any or all of them

•  No change needed to queries

•  Benefits are significant

[ SORTKEY [ COMPOUND | INTERLEAVED ] ( column_name [, ...] ) ]

Page 29: Building Your Data Warehouse with Amazon Redshift

Amazon Redshift

Spend time with your data, not your database…

Page 30: Building Your Data Warehouse with Amazon Redshift

Gaming analytics with Redshift and AWS

Page 31: Building Your Data Warehouse with Amazon Redshift

Former CTO Mind Candy / Moshi Monsters (2006-2012)

Introductions

Co-founder / CTO Space Ape Games (2012-Present)

Toby Moore

Page 32: Building Your Data Warehouse with Amazon Redshift

Space Ape Games

Page 33: Building Your Data Warehouse with Amazon Redshift

12+ million downloads

300k DAU

Coming soon!

Our games

Page 34: Building Your Data Warehouse with Amazon Redshift

Early needs + Approach

•  Highly empowered, analytical team
•  We hit a wall with 3rd-party analytics tools
•  Big data is table stakes in the games industry
•  We needed absolute flexibility on future tooling
•  No large capex spend

Diagram: data-maturity spectrum spanning Pre-Data, Not Enough, Tactical, Predictive, and Too Much.

Page 35: Building Your Data Warehouse with Amazon Redshift

Basic data

Diagram: pipeline built from Data Capture, Amazon S3, Amazon EMR, and Amazon Redshift, feeding Analysis, Reporting, A/B Tests, and Insights & Learning.

Page 36: Building Your Data Warehouse with Amazon Redshift

Tactical data

Diagram: the same pipeline (Data Capture, Amazon S3, Amazon EMR, Amazon Redshift), now also feeding CRM alongside Analysis, Reporting, and A/B Tests.

Page 37: Building Your Data Warehouse with Amazon Redshift

Predictive data

Diagram: the pipeline (Data Capture, Amazon S3, Amazon EMR, Amazon Redshift) extended with Data Mining and Modelling, feeding CRM, Analysis, Reporting, A/B Tests, and Insights & Learning.

Page 38: Building Your Data Warehouse with Amazon Redshift

Today

•  146 billion rows
•  2 clusters: 1 x 8-node and 1 x 16-node dw1.xlarge
•  13TB of compressed data
•  Rows per day: 250m rows x 125 columns

Page 39: Building Your Data Warehouse with Amazon Redshift

Per User, Daily Summary (Over 200 Metrics)

Spend Tier

In Game Behaviour

Monetisation

Device

Tenure

Balances

Page 40: Building Your Data Warehouse with Amazon Redshift

Single Player View

Dimensions: Platform, Spend, Behaviour, Device, Retention, Country, Language, Acquisition Channel, Game Balances, Operating System, Castle Level.

Page 41: Building Your Data Warehouse with Amazon Redshift

Modeling and Prediction

Best offer bundle

Spend tier

Churn risk

Life time value

Spend propensity

Price optimisation

Page 42: Building Your Data Warehouse with Amazon Redshift

Never had to worry about:

•  Scalability
•  Backing up
•  Availability
•  Upgrades
•  Flexibility (ODBC etc.)
•  Performance

Summary

•  Move towards more real-time processing
•  Investigate machine learning
•  AWS Mobile Analytics auto-export to Redshift

Next Steps

Page 43: Building Your Data Warehouse with Amazon Redshift

Thanks!

Page 44: Building Your Data Warehouse with Amazon Redshift

LONDON