RE-Work Deep Learning Summit - September 2016

Nervana Systems
Transcript
Page 1: RE-Work Deep Learning Summit - September 2016

Proprietary and confidential. Do not distribute.

Catalyzing Deep Learning's Impact in the Enterprise

SEPTEMBER 2016

MAKING MACHINES SMARTER.™

Page 2: RE-Work Deep Learning Summit - September 2016

AI can transform the world for the better

SAFER HEALTHIER HAPPIER

Page 3: RE-Work Deep Learning Summit - September 2016

But how does your enterprise win in this space?

Page 4: RE-Work Deep Learning Summit - September 2016

1. Use an existing model

SegNet

All-CNN

bAbI Q&A

GoogLeNet

AlexNet

Deep Speech 2

VGG

Video Activity Detection

Sentiment Analysis

Speech Recognition

Deep Reinforcement Learning

Object Localization

Page 5: RE-Work Deep Learning Summit - September 2016

1. Use an existing model

Image classification (example labels: car, road, sidewalk, building, tree, bicyclist, street sign)

Video activity detection
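
The quickest route is often literally this slide: take a published, pretrained network and use it as-is. As a hedged illustration (Keras is shown here for brevity rather than Nervana's neon, and the image file name is a placeholder), classifying an image with pretrained VGG16 takes only a few lines:

```python
import numpy as np
from keras.applications.vgg16 import VGG16, decode_predictions, preprocess_input
from keras.preprocessing import image

# Published ImageNet weights are downloaded automatically; no training required.
model = VGG16(weights="imagenet")

img = image.load_img("street_scene.jpg", target_size=(224, 224))  # placeholder file
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

# Print the top-3 ImageNet labels with confidence scores.
for _, label, score in decode_predictions(model.predict(x), top=3)[0]:
    print(label, round(float(score), 3))
```

The same pattern applies to the other models on the previous slide: published weights plus a few lines of glue code often gets a first result in an afternoon.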

Page 6: RE-Work Deep Learning Summit - September 2016

2. Train on your data

[Figure: "Cat!" and "Dog!" versus "Cat???" and "Dog???": an off-the-shelf model is confident on generic images but uncertain on your own data]
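
When an existing model falls short on your own data, the usual move is transfer learning: keep the pretrained feature extractor and retrain only a small head on your labels. A minimal sketch, again in Keras purely for illustration (the two-class head and the training arrays are assumptions, not the deck's own pipeline):

```python
from keras.applications.vgg16 import VGG16
from keras.layers import Dense, GlobalAveragePooling2D
from keras.models import Model

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
for layer in base.layers:
    layer.trainable = False                      # keep the generic visual features

x = GlobalAveragePooling2D()(base.output)
out = Dense(2, activation="softmax")(x)          # e.g. your own "cat" vs "dog" labels
model = Model(inputs=base.input, outputs=out)

model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
# x_train / y_train are your in-domain images and labels (hypothetical arrays):
# model.fit(x_train, y_train, epochs=5, batch_size=32)
```

Retraining only the head needs far less data and compute than training from scratch, which is why this is usually the first thing to try once a stock model starts answering "Cat???".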

Page 7: RE-Work Deep Learning Summit - September 2016

TALENT POOL

Talent is tight

Page 8: RE-Work Deep Learning Summit - September 2016

3. Use Professional Services

GIVE A PERSON A FISH

Page 9: RE-Work Deep Learning Summit - September 2016

4. Train your staff

TEACH THEM HOW TO FISH

Page 10: RE-Work Deep Learning Summit - September 2016

5. Evaluate lock-in

Page 11: RE-Work Deep Learning Summit - September 2016

6. If your org requires it, have an on-premises plan

Buys and provisions enterprise brands piecemeal

Managed with VMware

Organized and managed with Vblocks

Commodity hardware + OpenStack

Uses Public Cloud

Page 12: RE-Work Deep Learning Summit - September 2016

6. …or go straight to a cloud service

Buys and provisions enterprise brands piecemeal

Managed with VMware

Organized and managed with Vblocks

Commodity hardware + OpenStack

Uses Public Cloud

SKIP UNNECESSARY SPENDING

Page 13: RE-Work Deep Learning Summit - September 2016

7. Use a scalable cloud service

Page 14: RE-Work Deep Learning Summit - September 2016


7. Use a scalable cloud service
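
What "scalable" buys you is that capacity becomes an API call rather than a procurement cycle. The sketch below is entirely hypothetical: the endpoint, token, and job fields are invented to show the shape of such a request, not Nervana Cloud's actual interface.

```python
import requests

job = {
    "script": "train_sentiment.py",    # hypothetical training script
    "dataset": "s3://my-bucket/reviews/",
    "framework": "neon",
    "gpus": 8,                         # scale out without buying hardware
}

resp = requests.post(
    "https://cloud.example.com/v1/jobs",          # hypothetical endpoint
    json=job,
    headers={"Authorization": "Bearer $API_TOKEN"},
)
print(resp.json())                     # typically returns a job id you can poll
```

The point of the sketch is the workflow, not the specific fields: describe the job, submit it, and let the service decide how to spread it across accelerators.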

Page 15: RE-Work Deep Learning Summit - September 2016

8. Embrace ludicrous speed

DEEP LEARNING PROCESSOR

55 TeraOps
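
For intuition only, here is a back-of-envelope calculation of what 55 TeraOps could mean for training throughput; the per-example cost and the sustained utilization below are assumptions, not measured figures.

```python
# Rough arithmetic under assumed numbers, not a benchmark.
peak_ops = 55e12          # 55 TeraOps peak (from the slide)
utilization = 0.3         # assumed sustained fraction of peak
ops_per_example = 10e9    # assumed forward + backward cost per training example

examples_per_second = peak_ops * utilization / ops_per_example
print(f"~{examples_per_second:,.0f} examples/s")   # ~1,650 examples/s under these assumptions
```

Real throughput depends heavily on the model, the data pipeline, and how well the hardware is kept busy, but the calculation shows why raw ops matter for training time.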

Page 16: RE-Work Deep Learning Summit - September 2016

8. Embrace ludicrous speed

GPUs : Graphics :: DL-PUs : AI

Source: Twitter, @Bill_Gross

Page 17: RE-Work Deep Learning Summit - September 2016

9. Plan a Data Strategy


Page 18: RE-Work Deep Learning Summit - September 2016

10. Have a deployment strategy
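
A deployment strategy ultimately means deciding how other systems call the trained model: batch scoring, an embedded library, or a network service. As one generic sketch (Flask plus a saved Keras model, both stand-ins for whatever your stack actually runs; the model file name is a placeholder):

```python
import numpy as np
from flask import Flask, jsonify, request
from keras.models import load_model

app = Flask(__name__)
model = load_model("my_model.h5")        # placeholder for your trained model file

@app.route("/predict", methods=["POST"])
def predict():
    # Expects JSON of the form {"inputs": [[...feature vector...], ...]}
    features = np.array(request.get_json()["inputs"])
    scores = model.predict(features).tolist()
    return jsonify({"scores": scores})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```

Whatever the serving mechanism, the strategy questions are the same: latency and throughput targets, how models get versioned and rolled back, and how predictions are monitored once live.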

Page 19: RE-Work Deep Learning Summit - September 2016

Nervana has you covered

neon cloud + platform

Nervana DLA: The world’s most advanced deep learning platform — now in a box!

The Nervana Deep Learning Platform is a full-stack, user-friendly, turnkey system that enables businesses to develop and deploy high-accuracy artificial intelligence solutions in record time. Nervana Cloud is a hosted version of the Nervana Deep Learning Platform that enables all the innovation that the cloud provides while incorporating privacy and data protection best practices to keep your data safe.

For companies that prefer to keep their data onsite for regulatory or business reasons, Nervana offers the Nervana DLA appliance as an alternative to Nervana Cloud. The Nervana DLA provides the same deep learning capabilities as Nervana Cloud via a secure, physical server that can be integrated directly into your IT infrastructure along with your data storage and identity management systems (Figure 1). Nervana’s intelligent deep learning management and scheduling software allows you to stack one or more Nervana DLAs in order to share their considerable compute resources across teams.

The Nervana DLA is available in two models — the 8G and the 8E. The Nervana DLA 8G includes eight NVIDIA GeForce GTX Titan X Maxwell GPUs, and the Nervana DLA 8E includes eight Nervana Engines — custom ASICs that are optimized for deep learning. This data sheet summarizes the key capabilities of the Nervana DLA, including:

● Software that streamlines the complete deep learning lifecycle
● Hardware and software designed for maximum speed
● Built-in stackability to grow with your business


Figure 4. Nervana Engines can be interconnected via bi-directional high-bandwidth links to achieve near-linear training speedup across multiple Nervana DLA 8E appliances.

Built-in stackability to grow with your business

The Nervana DLA was designed from the ground up for “stackability” so it is easy to extend your deep learning capabilities as your business needs require. We already discussed the speed advantage associated with interconnecting multiple Nervana DLA 8E appliances. The DLA boasts a number of other stackability features (Table 3) that make it easy to expand your deep learning infrastructure.

Table 3. Stackability Features and Benefits

Feature: Self-discovery
Benefit: Horizontal scaling of the Nervana DLA appliance family is as easy as adding new DLAs to your cluster. Nervana’s Appliance Management tools automatically discover the new box’s capabilities and add them to the resource pool.

Feature: Smart policy-driven job scheduling
Benefit: Administrators can specify priority and resource constraints on a per-user or per-group basis. Nervana’s acclaimed distributed deep learning support takes full advantage of the cluster, automatically scheduling distributed training runs on multiple DLA servers when needed, while taking into account resource availability and cluster topology.

Feature: High availability and redundancy
Benefit: Nervana DLA clusters can be configured redundantly, and neon-based tasks are checkpointed for easy migration in case of hardware failures, ensuring uninterrupted training and inference execution.

Feature: Hybrid cloud
Benefit: Nervana DLA operates in either air-gapped or hybrid cloud modes. When in air-gapped mode, no off-premise network connections are required. Hybrid cloud mode allows workloads to be seamlessly dispatched to either on-prem DLAs or to Nervana Cloud, based on privacy and resource policies.
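
The high-availability row hinges on checkpointing: if training state is persisted to shared storage at regular intervals, a job can be restarted on another appliance after a failure. A generic sketch of the pattern follows (this is not neon's actual serialization API; the path and the loop body are placeholders):

```python
import os
import pickle

CKPT = "/shared/checkpoints/job_1234.pkl"    # hypothetical path on shared storage

def load_state():
    # Resume where a failed run stopped, or start fresh.
    if os.path.exists(CKPT):
        with open(CKPT, "rb") as f:
            return pickle.load(f)
    return {"epoch": 0, "weights": None}

def save_state(state):
    tmp = CKPT + ".tmp"
    with open(tmp, "wb") as f:
        pickle.dump(state, f)
    os.replace(tmp, CKPT)                    # atomic swap avoids a corrupt checkpoint

state = load_state()
for epoch in range(state["epoch"], 50):
    # ... one epoch of training that updates state["weights"] ...
    state["epoch"] = epoch + 1
    save_state(state)
```

Because the checkpoint lives on storage every node can reach, a scheduler is free to migrate the job to any healthy box and continue from the last saved epoch.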


Page 20: RE-Work Deep Learning Summit - September 2016

Nervana in action

Healthcare: Tumor detection

Automotive: Speech interfaces

Finance: Time-series search engine

Agricultural Robotics

Oil & Gas

Proteomics: Sequence analysis

Page 21: RE-Work Deep Learning Summit - September 2016

NERVANA