
Abstract

The world of Big Data involves an ever increasing field of players, from storage systems to processing engines and distributed programming models. Much as SQL stands as a lingua franca for declarative data analysis, Apache Beam aims to provide a standard for expressing both batch and streaming data processing pipelines in a variety of languages across a variety of platforms and engines.

In this talk, we will show how Beam gives users the flexibility to choose the best environment for their needs and read data from any storage system; allows any Big Data API to execute in multiple environments; allows any processing engine to support multiple domain-specific user communities; and allows any storage system to read and write data at massive scale. In a way, Apache Beam is the glue that connects the Big Data ecosystem together; it enables “anything to run anywhere”.

Apache Beam: Integrating the Big Data Ecosystem Up, Down, and Sideways

Davor Bonaci
PMC Chair, Apache Beam
Software Engineer, Google

Jean-Baptiste Onofré
PMC Member, Apache Beam

Software Architect, Talend

Apache Beam: Open Source data processing APIs

● Expresses data-parallel batch and streaming algorithms using one unified API

● Cleanly separates data processing logic from runtime requirements

● Supports execution on multiple distributed processing runtime environments
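To make these three points concrete, here is a minimal word-count pipeline written with the Beam Java SDK. This is an illustrative sketch rather than code from the talk: the input and output paths are placeholders, and the transform choices mirror Beam's standard MinimalWordCount example.

import java.util.Arrays;

import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.TextIO;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.Count;
import org.apache.beam.sdk.transforms.FlatMapElements;
import org.apache.beam.sdk.transforms.MapElements;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.TypeDescriptors;

public class MinimalWordCount {
  public static void main(String[] args) {
    // Execution options (including the runner) come from the command line,
    // so the same code runs on any supported runtime environment.
    Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

    p.apply(TextIO.read().from("input.txt"))                      // placeholder input path
        .apply(FlatMapElements.into(TypeDescriptors.strings())    // split each line into words
            .via((String line) -> Arrays.asList(line.split("[^\\p{L}]+"))))
        .apply(Count.perElement())                                // count occurrences per word
        .apply(MapElements.into(TypeDescriptors.strings())
            .via((KV<String, Long> kv) -> kv.getKey() + ": " + kv.getValue()))
        .apply(TextIO.write().to("wordcounts"));                  // placeholder output prefix

    p.run().waitUntilFinish();
  }
}

The data processing logic above says nothing about where it runs; the runner is chosen purely through the options passed to PipelineOptionsFactory.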

Apache Beam is a unified programming model designed to provide efficient and portable data processing pipelines.

Announcing the first stable release

Apache Beam at this conference

● Using Apache Beam for Batch, Streaming, and Everything in Between
○ Dan Halperin @ 10:15 am

● Apache Beam: Integrating the Big Data Ecosystem Up, Down, and Sideways
○ Davor Bonaci and Jean-Baptiste Onofré @ 11:15 am

● Concrete Big Data Use Cases Implemented with Apache Beam
○ Jean-Baptiste Onofré @ 12:15 pm

● Nexmark, a Unified Framework to Evaluate Big Data Processing Systems
○ Ismaël Mejía and Etienne Chauchot @ 2:30 pm

Apache Beam at this conference

● Apache Beam Birds of a Feather
○ Wednesday, 6:30 pm - 7:30 pm

● Apache Beam Hacking Time
○ Time: all-day Thursday
○ 2nd floor, collaboration area
○ (depending on interest)

Agenda

1. Expressing data-parallel pipelines with the Beam model

2. The Beam vision for portability

3. Parallel and portable pipelines in practice

4. Extensibility to integrate the entire Big Data ecosystem

Expressing data-parallel pipelines with the Beam model
A unified model for batch and stream data processing

Processing time vs. event time
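The deck contrasts processing time (when the pipeline happens to observe an element) with event time (when the event actually occurred). In the Beam Java SDK, event time travels with each element as a timestamp, and the windowing examples that follow group by that timestamp. As a hedged illustration only, with hypothetical names not taken from the talk (GameEvent, getEventTimeMillis, events), event-time timestamps could be attached like this:

import org.apache.beam.sdk.transforms.WithTimestamps;
import org.apache.beam.sdk.values.PCollection;
import org.joda.time.Instant;

// Attach each element's own event time so that downstream windowing groups
// elements by when they happened, not by when they were processed.
// `events` is assumed to be a PCollection<GameEvent>.
PCollection<GameEvent> timestamped = events.apply(
    WithTimestamps.of((GameEvent e) -> new Instant(e.getEventTimeMillis())));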

The Beam Model: asking the right questions

What results are calculated?

Where in event time are results calculated?

When in processing time are results materialized?

How do refinements of results relate?
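The snippets on the next few slides all start from an existing collection named input. The deck does not show how it is built; as a rough sketch under assumed names and an assumed "team,score" text format, it could be produced like this:

import org.apache.beam.sdk.io.TextIO;
import org.apache.beam.sdk.transforms.MapElements;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.PCollection;
import org.apache.beam.sdk.values.TypeDescriptors;

// Hypothetical setup: parse "team,score" text lines into key/value pairs.
// For the event-time examples, elements would also need event timestamps
// (for example via WithTimestamps, or an unbounded source that supplies them).
PCollection<KV<String, Integer>> input = pipeline
    .apply(TextIO.read().from("scores.csv"))   // placeholder path
    .apply(MapElements
        .into(TypeDescriptors.kvs(TypeDescriptors.strings(), TypeDescriptors.integers()))
        .via((String line) -> {
          String[] parts = line.split(",");
          return KV.of(parts[0], Integer.parseInt(parts[1]));
        }));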

The Beam Model: What is being computed?

PCollection<KV<String, Integer>> scores = input
    .apply(Sum.integersPerKey());

The Beam Model: Where in event time?

PCollection<KV<String, Integer>> scores = input
    .apply(Window.into(FixedWindows.of(Duration.standardMinutes(2))))
    .apply(Sum.integersPerKey());

The Beam Model: When in processing time?

PCollection<KV<String, Integer>> scores = input
    .apply(Window.into(FixedWindows.of(Duration.standardMinutes(2)))
        .triggering(AtWatermark()))
    .apply(Sum.integersPerKey());

The Beam Model: How do refinements relate?

PCollection<KV<String, Integer>> scores = input
    .apply(Window.into(FixedWindows.of(Duration.standardMinutes(2)))
        .triggering(AtWatermark()
            .withEarlyFirings(AtPeriod(Duration.standardMinutes(1)))
            .withLateFirings(AtCount(1)))
        .accumulatingFiredPanes())
    .apply(Sum.integersPerKey());
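The snippets above use the deck's shorthand trigger names (AtWatermark, AtPeriod, AtCount). For reference, a sketch of the same windowing and triggering written against the Beam Java SDK's actual trigger API would look roughly like this; the allowed-lateness value is an assumption added because the SDK requires one whenever a trigger is set:

import org.apache.beam.sdk.transforms.Sum;
import org.apache.beam.sdk.transforms.windowing.AfterPane;
import org.apache.beam.sdk.transforms.windowing.AfterProcessingTime;
import org.apache.beam.sdk.transforms.windowing.AfterWatermark;
import org.apache.beam.sdk.transforms.windowing.FixedWindows;
import org.apache.beam.sdk.transforms.windowing.Window;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.PCollection;
import org.joda.time.Duration;

PCollection<KV<String, Integer>> scores = input
    .apply(Window.<KV<String, Integer>>into(FixedWindows.of(Duration.standardMinutes(2)))
        .triggering(AfterWatermark.pastEndOfWindow()
            // speculative results every minute before the watermark passes
            .withEarlyFirings(AfterProcessingTime.pastFirstElementInPane()
                .plusDelayOf(Duration.standardMinutes(1)))
            // one refinement per late element after the watermark
            .withLateFirings(AfterPane.elementCountAtLeast(1)))
        .withAllowedLateness(Duration.standardMinutes(30))   // assumed value
        .accumulatingFiredPanes())
    .apply(Sum.integersPerKey());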

Customizing What Where When How

[Figure: four variants of the example pipeline: 1. Classic batch; 2. Windowed batch; 3. Streaming; 4. Streaming + accumulation]

The Beam vision for portability

“Write once, run anywhere”

Beam Vision: mix and match SDKs and runtimes

● The Beam Model: the abstractions at the core of Apache Beam

[Diagram: language SDKs on top of the Beam Model, executing on Runner 1, Runner 2, and Runner 3]

● Choice of SDK: Users write their pipelines in a language that’s familiar and integrated with their other tooling

● Choice of Runners: Users choose the right runtime for their current needs -- on-prem / cloud, open source / not, fully managed / not

● Scalability for Developers: Clean APIs allow developers to contribute modules independently
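In practice, the choice of runner is an execution-time option rather than part of the pipeline code. A minimal sketch of how this is typically wired up with the Java SDK (the class name is illustrative, not from the talk):

import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.options.PipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;

public class RunnerChoiceExample {
  public static void main(String[] args) {
    // Pass e.g. --runner=FlinkRunner, --runner=SparkRunner, or --runner=DataflowRunner
    // (plus any runner-specific options) on the command line; the pipeline code
    // itself stays the same across runtime environments.
    PipelineOptions options = PipelineOptionsFactory.fromArgs(args).withValidation().create();
    Pipeline pipeline = Pipeline.create(options);

    // ... build the pipeline here ...

    pipeline.run().waitUntilFinish();
  }
}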

[Diagram: Language A, B, and C SDKs built on top of the Beam Model]

Beam Vision: as of May 2017

● Beam’s Java SDK runs on multiple runtime environments, including:
○ Apache Apex
○ Apache Spark
○ Apache Flink
○ Google Cloud Dataflow
○ [in development] Apache Gearpump

● Cross-language infrastructure is in progress.
○ Beam’s Python SDK currently runs on the Direct runner and Google Cloud Dataflow

[Diagram: Java and Python SDKs → Beam Model: Pipeline Construction → Beam Model: Fn Runners → runners for Apache Apex, Apache Flink, Apache Spark, Apache Gearpump, and Cloud Dataflow]

Example Beam Runners

Apache Spark

● Open-source cluster-computing framework

● Large ecosystem of APIs and tools

● Runs on premises or in the cloud

Apache Flink

● Open-source distributed data processing engine

● High-throughput and low-latency stream processing

● Runs on premises or in the cloud

Google Cloud Dataflow

● Fully-managed service for batch and stream data processing

● Provides dynamic auto-scaling, monitoring tools, and tight integration with Google Cloud Platform

How do you build an abstraction layer?

[Diagram: Apache Spark, Cloud Dataflow, and Apache Flink, with question marks standing in for the abstraction layer]

Beam: the intersection of runner functionality?

Beam: the union of runner functionality?

Beam: the future!

Categorizing Runner Capabilities

https://beam.apache.org/documentation/runners/capability-matrix/

Parallel and portable pipelines in practice
A Use Case

Extensibility to integrate the entire Big Data ecosystem

Integrating Up, Down, and Sideways


Extensibility points

● Software Development Kits (SDKs)
● Runners
● Domain-specific extensions (DSLs)
● Libraries of transformations
● IOs
● File systems

Software Development Kits (SDKs)

[Diagram: Language A, B, and C SDKs on top of the Beam Model, executing on Runner 1, Runner 2, and Runner 3]

Runners

[Diagram: the same SDK / Beam Model stack, executing on Runner 1, Runner 2, and Runner 3]

Domain-specific extensions (DSLs)

[Diagram: DSL 1, DSL 2, and DSL 3 layered on top of the Language A, B, and C SDKs and the Beam Model]

Libraries of transformations

[Diagram: Library 1, Library 2, and Library 3 built on the Language A, B, and C SDKs and the Beam Model]

IO connectors

[Diagram: IO connector 1, IO connector 2, and IO connector 3 plugged into the Language A, B, and C SDKs and the Beam Model]

File systems

[Diagram: File system 1, File system 2, and File system 3 plugged into the Language A, B, and C SDKs and the Beam Model]

Ecosystem integration

● I have an engine → write a Beam runner

● I want to extend Beam to new languages → write an SDK

● I want to adapt an SDK to a target audience → write a DSL

● I want a component that can be part of a bigger data-processing pipeline → write a library of transformations (see the sketch after this list)

● I have a data storage or messaging system → write an IO connector or a file system connector
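As a small illustration of the "library of transformations" case: in the Java SDK, a reusable component is simply a composite PTransform that any pipeline can apply. The class below is a hypothetical example, not code from the talk:

import org.apache.beam.sdk.transforms.Count;
import org.apache.beam.sdk.transforms.PTransform;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.PCollection;

// A reusable "library" transform: counts how often each string occurs.
// Any pipeline can use it as words.apply(new CountOccurrences()).
public class CountOccurrences
    extends PTransform<PCollection<String>, PCollection<KV<String, Long>>> {

  @Override
  public PCollection<KV<String, Long>> expand(PCollection<String> words) {
    // Compose existing transforms; a real library might chain several steps here.
    return words.apply(Count.perElement());
  }
}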

Apache Beam is the glue that integrates the big data ecosystem.

Learn more and get involved!

Attend a birds-of-a-feather session later today!

Apache Beam
https://beam.apache.org

Join the Beam mailing lists!
user-subscribe@beam.apache.org
dev-subscribe@beam.apache.org

Follow @ApacheBeam on Twitter

