Software Defined Hardware - DARPA
Transcript
Page 1:

Software Defined Hardware
For data intensive computation

DISTRIBUTION STATEMENT A. Approved for public release. Distribution is unlimited.

Wade Shen, DARPA I2O

September 19, 2017

Page 2:

Goal Statement

Build runtime-reconfigurable hardware and software that enable near-ASIC performance (within 10x) for data-intensive algorithms without sacrificing programmability.


Page 3:

Problem


The problem: Optimal hardware configuration differs across algorithms

[Chart: fraction (0-100%) of logic/bus, FPU, and memory resources used by DGEMM vs. SPMM]

• Processor design trade-offs:
- Math/logic resources
- Memory (cache vs. register vs. shared)
- Address computation
- Data access and flow

[Chart: breakdown of bus/logic, math, and memory (cache, registers, shared) resources provided by CPU, GPU, and specialized processors]

No single hardware design solves all problems efficiently. (SPMM = Sparse Matrix Multiply; DGEMM = Dense Matrix Multiply)
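To make the contrast concrete, here is a minimal Python sketch of the two kernels named above (NumPy/Python is the programmability baseline cited later in the deck; matrix sizes and density are arbitrary). Dense multiply streams contiguous data through the FPUs, while the sparse multiply is dominated by index arithmetic and irregular memory access:

    import numpy as np
    from scipy.sparse import random as sparse_random

    n = 2048
    A = np.random.rand(n, n)
    B = np.random.rand(n, n)

    # DGEMM: dense matrix multiply -- contiguous block reads, FPU-bound.
    C_dense = A @ B

    # SPMM: sparse matrix multiply -- per-nonzero index lookups, so address
    # computation and irregular memory access dominate instead of raw math.
    S = sparse_random(n, n, density=0.001, format="csr")
    C_sparse = S @ B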


Page 4:


SDH: Runtime optimization of software and hardware for data intensive computation


Today: HW design specialization
• One chip per algorithm
- Chip design expensive
• Not reprogrammable
• Can't take advantage of data-dependent optimizations

Tomorrow: Runtime optimization of hardware and software
• One chip, many applications
- One-time design cost
• Reprogrammable via high-level languages
• Data-dependent optimization (10-100x)

[Chart: energy efficiency (MOP/mW) vs. programmability for CPUs, CPUs+GPUs, GP DSPs, GPUs, FPGAs, Google's TPU, and HIVE/specialized hardware, spanning more programmable, less programmable, and not programmable categories]


Page 5:

Software-defined hardware


Reconfigurable processors (TA1)
Properties:
1. Reconfiguration times: 300-1,000 ns
2. Re-allocatable compute resources, e.g. ALUs for address computation or math
3. Re-allocatable memory resources, e.g. cache/register configuration to match data
4. Malleable external memory access, e.g. a reconfigurable memory controller
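The slides do not define a concrete configuration format, but as a rough illustration of properties 2-4, a TA1 configuration descriptor might look like the hypothetical Python sketch below; every field name and value here is invented for illustration only.

    from dataclasses import dataclass

    @dataclass
    class HypotheticalSDHConfig:
        # Property 2: split of ALUs between address computation and math.
        alus_for_address_calc: int
        alus_for_math: int
        # Property 3: split of on-chip memory between cache, registers, and scratchpad.
        cache_kb: int
        register_file_kb: int
        scratchpad_kb: int
        # Property 4: access pattern programmed into the external memory controller.
        memory_access_pattern: str  # e.g. "block_read" or "scatter_gather"

    # A sparse/graph-friendly allocation: most ALUs do address math, and on-chip
    # memory is given to a scratchpad rather than a conventional cache.
    sparse_cfg = HypotheticalSDHConfig(
        alus_for_address_calc=48, alus_for_math=16,
        cache_kb=0, register_file_kb=64, scratchpad_kb=2048,
        memory_access_pattern="scatter_gather",
    )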

[Diagram: a single high-level program mapped, over time, onto a sequence of processor configurations (Config1 ... ConfigN) and corresponding compiled code versions (Code1 ... CodeN)]

Dynamic HW/SW compilers for high-level languages (TA2)


1. Generate an optimal configuration based on static analysis of the code
2. Generate optimal code
3. Re-optimize machine code and processor configuration based on runtime data



Page 6:

TA1: Reconfigurable processors


[Diagrams: arrays of ALUs and memory blocks for each of the three architectures below]

Graphicionado: graph search engine
• Scratchpad for active search nodes
• Address calculators for sparse vector lookup
• Async memory controller
• Performance: 157M edges/s/W search (BFS)

Eyeriss: image neural net engine
• Image convolution operators
• Image cache
• Block read controller
• Performance: 250 images/s/W (AlexNet)

Plasticine: Stanford Seedling (SDH)
• Programmable memory controllers
• Reconfigurable interconnect
• Graph search: 102M edges/s/W
• Image recognition: 130 images/s/W


Page 7:

TA2: Compilers to build hardware and software

• Compilers generate optimal code via static analysis + tracing methods
- Assume a static processor configuration, compile code, run, trace, recompile

• SDH compilers don't assume a static processor configuration
- Generate optimal configuration/code given program + data
- Problem: the resource and architecture optimization space is large

• Solution:
1. Pick an initial processor configuration, compile code, run and trace, then
2. Predict the best configuration via reinforcement learning/stochastic optimization
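A minimal Python sketch of that compile-run-trace-predict loop, assuming a small discrete configuration space; the slide names reinforcement learning or stochastic optimization for the prediction step, and the random search below is only a stand-in for such a predictor (all names and the fake runtime measurement are invented for illustration).

    import random

    # Hypothetical discrete configuration space (axes invented for illustration).
    CONFIG_SPACE = [
        {"alus_for_math": m, "scratchpad_kb": s, "access": a}
        for m in (16, 32, 64)
        for s in (256, 1024, 4096)
        for a in ("block_read", "scatter_gather")
    ]

    def compile_run_trace(program, config):
        """Stand-in for: compile `program` for `config`, run it, return a trace."""
        # A real SDH toolchain would invoke the TA2 compiler and measure on
        # reconfigurable hardware; here the runtime is simply faked.
        return {"config": config, "runtime_s": random.uniform(0.1, 1.0)}

    def optimize(program, budget=10):
        # Step 1: initial configuration, compile, run, and trace.
        best = compile_run_trace(program, random.choice(CONFIG_SPACE))
        # Step 2: use traces to pick (here: randomly search for) a better configuration.
        for _ in range(budget):
            trial = compile_run_trace(program, random.choice(CONFIG_SPACE))
            if trial["runtime_s"] < best["runtime_s"]:
                best = trial
        return best["config"]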


[Diagram: conventional flow (high-level program → compiler → code → processor, with a runtime trace fed back to the compiler) vs. SDH flow (high-level program → SDH compiler → code and processor configuration → reconfigurable processor, with the runtime trace used to predict the next configuration)]


Page 8:

How will TA2 work?


High-level data programs

f(x) = warp(fft(window(x)))
f(A, x) = Ax + b

Low-level implementations not exposed

Pre-compilation to IR (data-independent)

Optimization (data-dependent)

Empirical kernel mining from D3M ML corpus

Compile time → run time

IR (intermediate representation) component optimization

[Diagram: MAC (Multiply-Accumulate) kernels mined empirically from the DARPA D3M corpus; IR expressions such as Ax and fft(y) can be lowered to different orderings of the MAC1 and MAC2 kernel implementations]
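As a toy illustration of the IR view in the diagram above, the sketch below lowers f(A, x) = Ax + b from this slide into explicit multiply-accumulate (MAC) operations in Python; the function names are invented, and a real TA2 compiler would select among empirically mined MAC kernel variants (the diagram's MAC1/MAC2) rather than use this single hand-written loop.

    import numpy as np

    def mac(acc, a, b):
        """Toy MAC kernel: acc + a * b, the unit labeled MAC1/MAC2 in the diagram."""
        return acc + a * b

    def lower_axpb(A, x, b):
        """Hypothetical lowering of f(A, x) = Ax + b into a sequence of MACs."""
        y = b.astype(float).copy()
        rows, cols = A.shape
        for i in range(rows):
            for j in range(cols):
                y[i] = mac(y[i], A[i, j], x[j])
        return y

    A = np.arange(6, dtype=float).reshape(2, 3)
    x = np.array([1.0, 2.0, 3.0])
    b = np.array([0.5, -0.5])
    assert np.allclose(lower_axpb(A, x, b), A @ x + b)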


Page 9:

Program evaluation and goals

• USG team will create a benchmark suite of machine learning, optimization, graph, and numeric applications
- 500+ programs from the D3M program
- Implementations for GPU and CPU
- Subset of 100 optimized for ASIC (FPGA proxy)

• Metrics:
- Speedup/power relative to ASIC and general-purpose processors
- Programmability: time to code a solution for SDH languages vs. NumPy/Python

• Target outcomes:


         | vs. CPU   | vs. ASIC   | vs. ASIC (sparse math, graphs) | Programmability
Phase 1  | 100-300x  | within 10x | 2x                             | within 3x
Phase 2  | 500-1000x | within 5x  | 8-10x                          | ~1x


Page 10:

www.darpa.mil


Page 11:

Backup


Page 12:


TA1: Example architecture (U. Michigan Seedling)

• Global cache vs. local scratchpad
• Merging crossbar vs. data broadcasting
• All cores for data throughput vs. fewer cores for memory throughput

Sparse Computation:
1. Outer product generation
2. Merge outer products

Dense Computation:
1. Inner product on individual tiles
2. Merge tiles

• Dramatic performance and energy opportunity by tailoring architectures to applications: e.g. sparse and dense matrix multiplication and graph algorithms

• Cache hierarchy, memory bandwidth, SIMD vs. MIMD, dedicated cores

• vs. CPU: 20-100x performance gain (data reuse, reduced memory bandwidth)
• vs. GPU: 10-50x performance gain (data movement & placement, async execution)
• vs. ASIC: within 2-3x of performance but full flexibility / programmability

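A minimal NumPy sketch of the two computation strategies listed above, written with dense arrays for readability (the tile size and matrix shapes are arbitrary): the sparse-style path generates and merges outer products, while the dense path computes inner products on tiles and merges the tiles.

    import numpy as np

    def matmul_outer_product(A, B):
        """Sparse-style: 1) generate outer products, 2) merge them by accumulation."""
        n, k = A.shape
        C = np.zeros((n, B.shape[1]))
        for p in range(k):
            C += np.outer(A[:, p], B[p, :])
        return C

    def matmul_tiled_inner(A, B, tile=64):
        """Dense-style: 1) inner products on individual tiles, 2) merge the tiles."""
        n, k = A.shape
        m = B.shape[1]
        C = np.zeros((n, m))
        for i in range(0, n, tile):
            for j in range(0, m, tile):
                for p in range(0, k, tile):
                    C[i:i+tile, j:j+tile] += A[i:i+tile, p:p+tile] @ B[p:p+tile, j:j+tile]
        return C

    A, B = np.random.rand(128, 96), np.random.rand(96, 80)
    assert np.allclose(matmul_outer_product(A, B), A @ B)
    assert np.allclose(matmul_tiled_inner(A, B), A @ B)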


Page 13:

TA1: Fast reconfigurable processors

ASICs:

Eyeriss: Specialized DNN accelerator
• 168 specialized convolution units
• Specialized implementation of neural non-linearity (ReLU)
• Large block memory access only
• 250 images/s/W on AlexNet

Graphicionado: ASIC for graph search
• Element-based memory access
• Specialized indirect address calculator for sparse vectors
• Specialized on-chip scratchpad
• 157K edges/s/mW search (BFS)

SDH Opportunities (vs. CPU)

Attribute                                                           | Dense advantage          | Sparse advantage
64 memory control units, 64 data pattern control units              | 30x Perf, 52x Perf/W     | 8.2x Perf, 55.2x Perf/W
Configurable off-chip access for dense and sparse (scatter/gather)  | 1x                       | 18x
Configurable on-chip memory for high BW & coarse-grain pipes        | 2.1x                     | 18.4x
Compute: pipelined SIMD                                             | 12.6x                    | NA
Compute: variable precision                                         | 25.2x                    |
Compute: shift network                                              | 7x                       |
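To illustrate the scatter/gather row of the table, here is a small NumPy sketch contrasting the contiguous block read a dense kernel issues with the index-driven gather/scatter a sparse kernel issues; sizes and index choices are arbitrary. The data-dependent access pattern in the sparse case is what a configurable off-chip memory controller is meant to serve efficiently.

    import numpy as np

    values = np.random.rand(1_000_000)

    # Dense access: one contiguous block read, friendly to a fixed memory controller.
    block = values[0:4096]

    # Sparse access: gather through an index vector (e.g. CSR column indices);
    # the addresses depend on the data itself.
    indices = np.random.randint(0, values.size, size=4096)
    gathered = values[indices]

    # Scatter: accumulate results back through the same kind of index vector.
    out = np.zeros_like(values)
    np.add.at(out, indices, gathered)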

SPARSE (BFS on Twitter): 102K edges/s/mW
DENSE (AlexNet, full app.): 130 images/s/W

Key Challenges:
• Data flow: configurable memory control units, data patterns, data storage
• Compute flexibility: compute granularity, modular functionality, high-BW malleable interconnect
