Transcript
Page 1:

Par4All

Open source parallelization for heterogeneous computing

OpenCL & more

Ronan KERYELL ([email protected])

HPC Project
9 Route du Colonel Marcel Moraine, 92360 Meudon La Forêt, France
Rond Point Benjamin Franklin, 34000 Montpellier, France

Wild Systems, Inc.
5201 Great America Parkway #3241, Santa Clara, CA 95054, USA

24/01/2012

Page 2:

SOLOMON

• Target application: “data reduction, communication, character recognition, optimization, guidance and control, orbit calculations, hydrodynamics, heat flow, diffusion, radar data processing, and numerical weather forecasting”

• Diode + transistor logic in 10-pin TO5 package

Daniel L. Slotnick. « The SOLOMON computer. » Proceedings of the December 4-6, 1962, fall joint computer conference, p. 97–107, 1962

Page 3:

POMP & PompC @ LI/ENS 1987–1992

[Architecture diagram of the POMP machine: a scalar host processor (Motorola 88100, DRAM, VME bus, SCSI, I/O, RGB/Alpha video output) drives up to 256 parallel processors, each built around an 88100 with 512 KB SRAM, over a HyperCom network; LIW code is split into scalar and vectorial code, with scalar broadcast, scalar data and scalar reduction FIFOs and a global exception line.]

Page 4:

HyperParallel Technologies (1992–1998)

• Parallel computer
• Proprietary 3D-torus network
• DEC Alpha 21064 + FPGA
• HyperC (follow-up of PompC @ LI/ENS Ulm)
  ▸ PGAS (Partitioned Global Address Space) language
  ▸ An ancestor of UPC...
• Already on the Saclay Plateau! ☺

Quite simple business model

• Customers just need to rewrite all their code in HyperC ☺
• Difficult entry cost... ☹
• Niche market... ☹
• American subsidiary with data-parallel data-mining application acquired by Yahoo! in 1998
• Closed technology ⇒ lost for customers and... founders ☹

Page 6:

Present motivations: reinterpreting Moore’s law (I)

The good news ☺

• Number of transistors still increasing
• Memory storage increasing (DRAM, FLASH...)
• Hard disk storage increasing
• Processors (with sensors) everywhere
• Network bandwidth increasing

• The bad news ☹
  ▸ Transistors are so small they leak... static consumption
  ▸ Superscalar and cache are less efficient compared to the transistor budget
  ▸ Storing and moving information is expensive, computing is cheap: change in algorithms...
  ▸ Light’s speed has not improved for a while... hard to reduce latency
    ⇒ Chips are too big to be globally synchronous at multi-GHz ☹

Page 7:

Present motivations: reinterpreting Moore’s law (II)

  ▸ pJ and physics become very fashionable
  ▸ Power efficiency in O(1/f)
    ⇒ Transistors cannot be used at full speed without melting ☹
  ▸ I/O and pin counts
    ⇒ Huge time and energy cost to move information outside the chip ☹

Parallelism is the only way to go...

Research is just crossing reality!

No one size fits all...
The future will be heterogeneous: GPGPU, Cell, vector/SIMD, FPGA, PIM...
But compilers are always behind... ☹

Page 8:

Outline

1 HPC Project
2 Par4All
3 Scilab to OpenMP, CUDA & OpenCL
4 Results
5 Conclusion
6 Table of contents

Page 9:

HPC Project emergence

⇒ 2006: Time to be back in parallelism!

Yet another start-up... ☺

• People that met ≈ 1990 at the French parallel-computing military lab SEH/ETCA
• Later became researchers in Computer Science, CINES director and ex-CEA/DAM, venture capital and more: ex-CEO of Thales Computer, HP marketing...
• HPC Project launched in December 2007
• ≈ 30 colleagues in France (Montpellier, Meudon), Canada (Montréal with Parallel Geometry) & USA (Santa Clara, CA)

Page 10:

HPC Project hardware: WildNode from Wild Systems

Through its Wild Systems subsidiary company:

• WildNode hardware desktop accelerator
  ▸ Low noise for in-office operation
  ▸ x86 manycore
  ▸ nVidia Tesla GPU Computing
  ▸ Linux & Windows
• WildHive
  ▸ Aggregate 2-4 nodes with 2 possible memory views
    ⇒ Distributed memory with Ethernet or InfiniBand

http://www.wild-systems.com

Page 11:

HPC Project software and services

• Parallelize and optimize customer applications, co-branded as a bundle product in a WildNode (e.g. Presagis Stage battle-field simulator, Wild Cruncher for Scilab//...)
• Acceleration software for the WildNode
  ▸ CPU+GPU-accelerated libraries for C/Fortran/Scilab/Matlab/Octave/R
  ▸ Automatic parallelization for Scilab, C, Fortran...
  ▸ Transparent execution on the WildNode
• Par4All automatic parallelization tool
• Remote display software for Windows on the WildNode

HPC consulting

• Optimization and parallelization of applications
• High Performance?... not only TOP500-class systems: power efficiency, embedded systems, green computing...
• ⇒ Embedded system and application design
• Training in parallel programming (OpenMP, MPI, TBB, CUDA, OpenCL...)

Page 12:

Outline

1 HPC Project
2 Par4All
3 Scilab to OpenMP, CUDA & OpenCL
4 Results
5 Conclusion
6 Table of contents

Page 13:

We need software tools

• HPC Project needs tools for its hardware accelerators (WildNodes from Wild Systems) and to parallelize, port & optimize customer applications
• Application development: long-term business ⇒ long-term commitment in a tool that needs to survive (too fast) technology changes

Page 14:

Expressing parallelism?

• Solution libraries
  ▸ Need to fit your application
• New parallel languages
  ▸ Rewrite your applications...
• Extend a sequential language with #pragma
  ▸ Nicer transition
• Hide parallelism in object-oriented classes
  ▸ Restructure your applications...
• Use a magical automatic parallelizer

Page 15:

Automatic parallelization

• Major research failure from the past...
• Intractable in the general case ☹
• Bad sequential programs? GIGO: Garbage In, Garbage Out...
• But the technology is widely used locally in mainstream compilers
• To use #pragma, // languages or classes: write a cleaner sequential program or algorithm first...
• ... and then automatic parallelization can often work ☺
• ⇒ Par4All = automatic parallelization + coding rules
• Often less optimal performance but better time-to-market

Page 16:

Basic Par4All coding rules for good parallelization (I)

• Develop a coding rule manual to help parallelization and... sequential quality!
• Par4All parallelizes loop nests made from Fortran DO or C99 for loops similar to DO loops
• Same constraints as the for loops accepted in the OpenMP standard:
  for ([int] init-expr; var relational-op b; incr-expr) statement
• Increment and bounds: integer expressions, loop-invariant
• relational-op only <, <=, >=, >
• Do not modify the loop index inside the loop body
• Do not use assert() inside a loop, or compile with -DNDEBUG: assert has a potential exit effect
• No goto outside the loop, no break, no continue
  (a minimal compliant loop nest is sketched below)
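
As an illustration only (not from the original slides), here is a minimal C99 loop nest that respects these rules: loop-invariant integer bounds, untouched indices, affine array references and no early exits. The function and array names are made up.

  /* Hypothetical example of a Par4All-friendly loop nest. */
  void init(int n, int m, double a[n][m]) {
    for (int i = 0; i < n; i++)
      for (int j = 0; j < m; j++)
        a[i][j] = (double)(i + j);   /* affine array reference a[i][j] */
  }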

Page 17:

Basic Par4All coding rules for good parallelization (II)

• No exit(), longjmp(), setcontext()...
• Data structures
  ▸ Pointers
    ⇒ Do not use pointer arithmetic
  ▸ Arrays
    ⇒ PIPS uses an integer polyhedron lattice in its analyses, so use affine references in parallelizable code:

      // Good:
      a[2*i-3+m][3*i-j+6*n]
      // Bad (polynomial):
      a[2*i*j][m*n-i+j]

    ⇒ Do not use linearized arrays
• Do not use recursion
• Prototype of the coding rules report on-line on par4all.org

Page 18:

p4a in a nutshell (I)

Parallelization

  p4a matmul.f

generates an OpenMP program in matmul.p4a.f:

  !$omp parallel do private(I, K, X)
  C     multiply the two square matrices of ones
        DO J = 1, N
  !$omp parallel do private(K, X)
           DO I = 1, N
              X = 0
  !$omp parallel do reduction(+:X)
              DO K = 1, N
                 X = X + A(I,K)*B(K,J)
              ENDDO
  !$omp end parallel do
              C(I,J) = X
           ENDDO
  !$omp end parallel do
        ENDDO
  !$omp end parallel do

Page 19:

p4a in a nutshell (II)

Parallelization with compilation

  p4a matmul.f -o matmul

generates an OpenMP program matmul.p4a.f that is compiled with gcc into matmul

CUDA generation with compilation

  p4a --cuda saxpy.c -o s

generates a CUDA program that is compiled with nvcc

OpenCL generation with compilation

  p4a --opencl saxpy.c -o s
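
For context, here is a minimal saxpy.c of the kind one might feed to p4a; this file is illustrative and not taken from the slides (the function and array names are assumptions).

  /* saxpy.c -- illustrative input for "p4a --cuda saxpy.c" (not from the slides). */
  #include <stdio.h>
  #define N 1000000

  float x[N], y[N];

  void saxpy(int n, float a, float x[n], float y[n]) {
    for (int i = 0; i < n; i++)      /* parallel loop candidate for p4a */
      y[i] = a*x[i] + y[i];
  }

  int main(void) {
    for (int i = 0; i < N; i++) { x[i] = 1.0f; y[i] = 2.0f; }
    saxpy(N, 3.0f, x, y);
    printf("y[0] = %f\n", y[0]);
    return 0;
  }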

Page 20:

Basic GPU execution model

A sequential program on a host launches computation-intensive kernels on a GPU:

• Allocate storage on the GPU
• Copy-in data from the host to the GPU
• Launch the kernel on the GPU
• The host waits...
• Copy-out the results from the GPU to the host
• Deallocate the storage on the GPU

Generic scheme for other heterogeneous accelerators too
(a CUDA sketch of this host-side sequence follows)
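
A minimal CUDA sketch of this host-side sequence, assuming a hypothetical kernel square and array size n (illustrative only, not Par4All-generated code):

  __global__ void square(float *d, int n) {
    int i = blockIdx.x*blockDim.x + threadIdx.x;
    if (i < n) d[i] *= d[i];
  }

  void run_on_gpu(float *host, int n) {
    float *dev;
    size_t size = n*sizeof(float);
    cudaMalloc((void **)&dev, size);                      /* allocate on the GPU  */
    cudaMemcpy(dev, host, size, cudaMemcpyHostToDevice);  /* copy-in              */
    square<<<(n + 255)/256, 256>>>(dev, n);               /* launch the kernel    */
    cudaDeviceSynchronize();                              /* the host waits       */
    cudaMemcpy(host, dev, size, cudaMemcpyDeviceToHost);  /* copy-out the results */
    cudaFree(dev);                                        /* deallocate           */
  }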

Page 21:

Rely on PIPS (I)

• PIPS (Interprocedural Parallelizer of Scientific Programs): Open Source project from Mines ParisTech... 23 years old! ☺
• Funded by many people (French DoD, Industry & Research Departments, University, CEA, IFP, Onera, ANR (French NSF), European projects, regional research clusters...)
• One of the projects that introduced polytope-model-based compilation
• ≈ 456 KLOC according to David A. Wheeler’s SLOCCount
• ... but a modular and sensible approach to pass through the years
  ▸ ≈ 300 phases (parsers, analyzers, transformations, optimizers, parallelizers, code generators, pretty-printers...) that can be combined for the right purpose
  ▸ Abstract interpretation

Page 22:

Rely on PIPS (II)

  ▸ Polytope lattice (sparse linear algebra) used for semantics analysis, transformations, code generation... with approximations to deal with big programs, not only...
  ▸ NewGen object description language for language-agnostic automatic generation of methods, persistence, object introspection, visitors, accessors, constructors, XML marshaling for interfacing with external tools...
  ▸ Interprocedural à la make engine to chain the phases as needed. Lazy construction of resources
  ▸ On-going efforts to extend the semantics analysis for C
• Around 15 programmers currently developing in PIPS (Mines ParisTech, HPC Project, IT SudParis, TÉLÉCOM Bretagne, RPI) with public svn, Trac, git, mailing lists, IRC, Plone, Skype... and using it for many projects
• But still...
  ▸ Huge need of documentation (even if PIPS uses literate programming...)

Page 23:

Rely on PIPS (III)

  ▸ Need of industrialization
  ▸ Need further communication to increase community size

Page 24:

Current PIPS usage

• Automatic parallelization (Par4All C & Fortran to OpenMP)
• Distributed memory computing with OpenMP-to-MPI translation [STEP project]
• Generic vectorization for SIMD instructions (SSE, VMX, Neon, CUDA, OpenCL...) (SAC project) [SCALOPES]
• Parallelization for embedded systems [SCALOPES, SMECY]
• Compilation for hardware accelerators (Ter@PIX, SPoC, SIMD, FPGA...) [FREIA, SCALOPES]
• High-level hardware accelerator synthesis for FPGA [PHRASE, CoMap]
• Reverse engineering & decompiler (reconstruction from binary to C)
• Genetic algorithm-based optimization [Luxembourg university + TB]
• Code instrumentation for performance measures
• GPU with CUDA & OpenCL [TransMedi@, FREIA, OpenGPU]

Page 25:

Automatic parallelization

Most fundamental for a parallel execution:

Finding parallelism!

Several parallelization algorithms are available in PIPS

• For example, the classical Allen & Kennedy algorithm uses loop distribution, which is more vector-oriented than kernel-oriented (or needs a later loop fusion)
• Coarse-grain parallelization based on the independence of array regions used by different loop iterations
  ▸ Currently used because it generates GPU-friendly coarse-grain parallelism
  ▸ Accepts complex control code without if-conversion

Page 26:

Outlining (I)

Parallel code ⇒ Kernel code on GPU

• Need to extract parallel source code into kernel source code: outlining of parallel loop nests
• Before:

  for (i = 1; i <= 499; i++)
    for (j = 1; j <= 499; j++) {
      save[i][j] = 0.25*(space[i - 1][j] + space[i + 1][j]
                         + space[i][j - 1] + space[i][j + 1]);
    }

Page 27:

Outlining (II)

• After:

  p4a_kernel_launcher_0(space, save);
  [...]
  void p4a_kernel_launcher_0(float_t space[SIZE][SIZE],
                             float_t save[SIZE][SIZE]) {
    for (i = 1; i <= 499; i += 1)
      for (j = 1; j <= 499; j += 1)
        p4a_kernel_0(i, j, save, space);
  }
  [...]
  void p4a_kernel_0(float_t space[SIZE][SIZE],
                    float_t save[SIZE][SIZE],
                    int i,
                    int j) {
    save[i][j] = 0.25*(space[i-1][j] + space[i+1][j]
                       + space[i][j-1] + space[i][j+1]);
  }

Page 28:

From array regions to GPU memory allocation (I)

• Memory accesses are summed up for each statement as regions for array accesses: integer polytope lattice
• There are regions for write accesses and regions for read accesses
• The regions can be exact, if PIPS can prove that only these points are accessed, or inexact, if PIPS can only find an over-approximation of what is really accessed

Page 29:

From array regions to GPU memory allocation (II)

Example

  for (i = 0; i <= n-1; i += 1)
    for (j = i; j <= n-1; j += 1)
      h_A[i][j] = 1;

can be decorated by PIPS with write array regions as:

  // <h_A[PHI1][PHI2]-W-EXACT-{0<=PHI1, PHI2+1<=n, PHI1<=PHI2}>
  for (i = 0; i <= n-1; i += 1)
    // <h_A[PHI1][PHI2]-W-EXACT-{PHI1==i, i<=PHI2, PHI2+1<=n, 0<=i}>
    for (j = i; j <= n-1; j += 1)
      // <h_A[PHI1][PHI2]-W-EXACT-{PHI1==i, PHI2==j, 0<=i, i<=j, 1+j<=n}>
      h_A[i][j] = 1;

• These read/write regions for a kernel are used to allocate, with a cudaMalloc() in the host code, the memory used inside the kernel, and to deallocate it later with a cudaFree()

Page 30:

Communication generation

More subtle approach

PIPS gives 2 very interesting region types for this purpose:

• In-regions abstract what is really needed by a statement
• Out-regions abstract what is really produced by a statement, to be used later elsewhere
• In/Out regions can directly be translated with CUDA into
  ▸ copy-in

      cudaMemcpy(accel_address, host_address,
                 size, cudaMemcpyHostToDevice)

  ▸ copy-out

      cudaMemcpy(host_address, accel_address,
                 size, cudaMemcpyDeviceToHost)

Page 31:

Loop normalization

• Hardware accelerators use a fixed iteration space (thread index starting from 0...)
• Parallel loops: more general iteration space
• Loop normalization

Before

  for (i = 1; i < SIZE - 1; i++)
    for (j = 1; j < SIZE - 1; j++) {
      save[i][j] = 0.25*(space[i - 1][j] + space[i + 1][j]
                         + space[i][j - 1] + space[i][j + 1]);
    }

After

  for (i = 0; i < SIZE - 2; i++)
    for (j = 0; j < SIZE - 2; j++) {
      save[i+1][j+1] = 0.25*(space[i][j + 1] + space[i + 2][j + 1]
                             + space[i + 1][j] + space[i + 1][j + 2]);
    }

Page 32:

From preconditions to iteration clamping (I)

• Parallel loop nests are compiled into a CUDA kernel wrapper launch
• The kernel wrapper itself gets its virtual processor index with some blockIdx.x*blockDim.x + threadIdx.x
• Since only full blocks of threads are executed, if the number of iterations in a given dimension is not a multiple of blockDim, there are incomplete blocks ☹
• An incomplete block means that some index overrun occurs if all the threads of the block are executed
  (the grid sizing that creates these incomplete blocks is sketched below)
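
For reference, the incomplete blocks come from the ceiling division used to size the grid. A hedged CUDA sketch (the kernel and names are illustrative, not Par4All output):

  /* Illustrative 1-D launch configuration sketch. */
  __global__ void dummy_kernel(int M) { /* guarded body would go here */ }

  void launch(int M) {
    dim3 block(256);
    dim3 grid((M + block.x - 1) / block.x);   /* ceiling division */
    /* With M = 1000: grid.x = 4, i.e. 1024 threads for 1000 iterations,
       so the last 24 threads must be guarded out inside the kernel. */
    dummy_kernel<<<grid, block>>>(M);
  }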

Page 33:

From preconditions to iteration clamping (II)

• So we need to generate code such as:

  void p4a_kernel_wrapper_0(int k, int l, ...)
  {
    k = blockIdx.x*blockDim.x + threadIdx.x;
    l = blockIdx.y*blockDim.y + threadIdx.y;
    if (k >= 0 && k <= M - 1 && l >= 0 && l <= M - 1)
      kernel(k, l, ...);
  }

But how to insert these guards?

• The good news is that PIPS owns preconditions, which are predicates on integer variables. The preconditions at entry of the kernel are:

  // P(i,j,k,l) {0<=k, k<=63, 0<=l, l<=63}

• Guard ≡ direct translation in C of the preconditions on the loop indices that are GPU thread indices

Page 34:

Optimized reduction generation

• Reductions are common patterns that need special care to be correctly parallelized, e.g.

  s = Σ_{i=0}^{N} x_i

• Reduction detection is already implemented in PIPS
• Generate the OpenMP reduction clause in Par4All
• Generate GPU atomic operations (see the sketch below)
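
A hedged CUDA sketch of the atomic-operation strategy for the sum above (illustrative only, not the code Par4All emits):

  /* Each thread adds its element into the accumulator with an atomic
     update; correct, but contended updates are serialized. */
  __global__ void sum_kernel(const float *x, float *s, int n) {
    int i = blockIdx.x*blockDim.x + threadIdx.x;
    if (i < n)
      atomicAdd(s, x[i]);   /* float atomicAdd needs compute capability >= 2.0 */
  }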

Page 35:

Communication optimization

• Naive approach: load/compute/store
• Useless communications if data on the GPU is not used on the host between 2 kernels... ☹
• ⇒ Use static interprocedural data-flow communications
  ▸ Fuse various GPU arrays: remove GPU (de)allocations
  ▸ Remove redundant communications
⇒ p4a --com-optimization option since version 1.2
  (a sketch of the idea follows)
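
A minimal sketch of the optimization, assuming two hypothetical kernels k1 and k2 whose intermediate array is only consumed on the GPU (illustrative, not Par4All output):

  __global__ void k1(const float *in, float *tmp, int n) {
    int i = blockIdx.x*blockDim.x + threadIdx.x;
    if (i < n) tmp[i] = in[i] + 1.0f;
  }
  __global__ void k2(const float *tmp, float *out, int n) {
    int i = blockIdx.x*blockDim.x + threadIdx.x;
    if (i < n) out[i] = 2.0f*tmp[i];
  }

  void pipeline(const float *h_in, float *h_out, int n) {
    float *d_in, *d_tmp, *d_out;
    size_t size = n*sizeof(float);
    cudaMalloc((void **)&d_in, size);
    cudaMalloc((void **)&d_tmp, size);
    cudaMalloc((void **)&d_out, size);
    cudaMemcpy(d_in, h_in, size, cudaMemcpyHostToDevice);    /* copy-in only what k1 needs */
    k1<<<(n + 255)/256, 256>>>(d_in, d_tmp, n);
    /* no transfer of d_tmp: it is not used on the host between the 2 kernels */
    k2<<<(n + 255)/256, 256>>>(d_tmp, d_out, n);
    cudaMemcpy(h_out, d_out, size, cudaMemcpyDeviceToHost);  /* copy-out only what is live after */
    cudaFree(d_in); cudaFree(d_tmp); cudaFree(d_out);
  }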

Page 36:

Loop fusion

• Programs ≡ often a succession of (parallel) loops
• Can be interesting to fuse loops together
  ▸ Important for array-oriented languages: Fortran 95, Scilab, C++ parallel classes...
  ▸ Factorize control: one loop with a bigger body
    ⇒ Even more important for heterogeneous accelerators: reduces kernel launch time
    ⇒ May avoid memory round trips
    ⇒ May improve cache recycling
• Use the dependence graph, regions... to figure out when to fuse (a small before/after example follows)
• Sensible parallel promotion of scalar code to reduce parallelism interruption still to be implemented
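
A small before/after illustration of loop fusion (toy C code, not from the slides; the names are made up):

  /* Before fusion: two loops, i.e. two kernel launches and two passes over a[]. */
  void unfused(int n, const float b[n], const float c[n], float a[n], float d[n]) {
    for (int i = 0; i < n; i++)
      a[i] = b[i] + c[i];
    for (int i = 0; i < n; i++)
      d[i] = 2.0f * a[i];
  }

  /* After fusion: one loop (one kernel); a[i] is reused while still in a register. */
  void fused(int n, const float b[n], const float c[n], float a[n], float d[n]) {
    for (int i = 0; i < n; i++) {
      a[i] = b[i] + c[i];
      d[i] = 2.0f * a[i];
    }
  }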

Page 37:

Par4All Accel runtime (I)

• CUDA or OpenCL constructs such as __device__ or <<< >>> cannot be directly represented in the internal representation (IR, abstract syntax tree)
• PIPS motto: keep the IR as simple as possible by design
• Use calls to intrinsic functions that can be represented directly
• The intrinsic functions are implemented with (macro-)functions
  ▸ p4a_accel.h currently has several implementations:
    ⇒ p4a_accel-CUDA.h that can be compiled with CUDA for nVidia GPU execution
    ⇒ p4a_accel-OpenCL.h for OpenCL, written in C/CPP/C++
    ⇒ p4a_accel-OpenMP.h that can be compiled with an OpenMP compiler for simulation on a (multicore) CPU
• Add CUDA support for complex numbers
  (a hypothetical sketch of the macro indirection idea follows)
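
To illustrate the idea only: the macro names and bodies below are invented for this sketch and are NOT the actual p4a_accel.h API.

  /* Hypothetical sketch of backend-selected macros (not the real p4a_accel.h). */
  #ifdef P4A_SKETCH_CUDA
    #define ACCEL_KERNEL __global__     /* kernel qualifier on the GPU backend */
  #else
    #define ACCEL_KERNEL                /* plain function on the CPU/OpenMP backend */
  #endif

  /* The generated code only ever sees the intrinsic-style spelling: */
  ACCEL_KERNEL void my_kernel(float *a, int i) { a[i] = 2.0f*a[i]; }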

Page 38:

Par4All Accel runtime (II)

• Can be used to simplify manual programming too (OpenCL...)
  ▸ Manual radar electromagnetic simulation code @ TB
  ▸ One code targets CUDA/OpenCL/OpenMP
• OpenMP emulation for almost free
  ▸ Use Valgrind to debug GPU-like and communication code! (Nice side effect of source-to-source...)
  ▸ May even improve performance compared to native OpenMP generation because of the memory layout change

Page 39:

Outline

1 HPC Project
2 Par4All
3 Scilab to OpenMP, CUDA & OpenCL
4 Results
5 Conclusion
6 Table of contents

Page 40:

Scilab language

• Interpreted scientific language widely used, like Matlab
• Free software
• Roots in a free version of Matlab from the 80’s
• Dynamic typing (scalars, vectors, (hyper)matrices, strings...)
• Many scientific functions, graphics...
• Double precision everywhere, even for loop indices (now)
• Slow because everything is decided at runtime, garbage collection
  ▸ Implicit loops around each vector expression (see the sketch below)
    ⇒ Huge memory bandwidth used
    ⇒ Cache thrashing
    ⇒ Redundant control flow
• Strong commitment to develop Scilab through Scilab Enterprises, backed by a big user community, INRIA...
• HPC Project WildNode appliance with Scilab parallelization
• Reuse the Par4All infrastructure to parallelize the code
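
As a small illustration (not from the slides) of these implicit loops, a naive C translation of the Scilab statements "t = a + b; y = 2*t" runs one full-array loop per vector expression and streams a temporary array through memory; the names are made up:

  #include <stdlib.h>

  /* Illustrative naive translation of per-expression vector semantics. */
  void scilab_style(int n, const double a[n], const double b[n], double y[n]) {
    double *t = malloc(n * sizeof(double));   /* temporary for the first expression */
    for (int i = 0; i < n; i++)               /* first implicit loop  */
      t[i] = a[i] + b[i];
    for (int i = 0; i < n; i++)               /* second implicit loop */
      y[i] = 2.0 * t[i];
    free(t);                                  /* in Scilab, left to the garbage collector */
  }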

Page 41:

Scilab & Matlab (I)

• Scilab/Matlab input: sequential or array syntax
• Compilation to C code
• Parallelization of the generated C code
• Type inference to guess the (crazy ☹) semantics
  ▸ Heuristic: the first encountered type is forever
• Speedup > 1000 ☺
• Wild Cruncher: x86+GPU appliance with a nice interface
  ▸ Scilab — mathematical model & simulation
  ▸ Par4All — automatic parallelization
  ▸ //Geometry — polynomial-based 3D rendering & modelling
• Versions to compile to other platforms (fixed-point DSP...)

Page 42:

Wild Cruncher — Scilab parallelization

Page 43:

Outline

1 HPC Project
2 Par4All
3 Scilab to OpenMP, CUDA & OpenCL
4 Results
5 Conclusion
6 Table of contents

Page 44:

Stars-PM

• Particle-Mesh N-body cosmological simulation
• C code from the Observatoire Astronomique de Strasbourg
• Uses 3D FFTs
• Example given in the par4all.org distribution

Page 45:

Stars-PM time step

  void iteration(coord pos[NP][NP][NP],
                 coord vel[NP][NP][NP],
                 float dens[NP][NP][NP],
                 int data[NP][NP][NP],
                 int histo[NP][NP][NP]) {
    /* Split space into regular 3D grid: */
    discretisation(pos, data);
    /* Compute density on the grid: */
    histogram(data, histo);
    /* Compute attraction potential in Fourier's space: */
    potential(histo, dens);
    /* Compute in each dimension the resulting forces and
       integrate the acceleration to update the speeds: */
    forcex(dens, force);
    updatevel(vel, force, data, 0, dt);
    forcey(dens, force);
    updatevel(vel, force, data, 1, dt);
    forcez(dens, force);
    updatevel(vel, force, data, 2, dt);
    /* Move the particles: */
    updatepos(pos, vel);
  }

Page 46:

Stars-PM & Jacobi results

• 2 Xeon Nehalem X5670 (12 cores @ 2.93 GHz)
• 1 nVidia Tesla C2050 GPU
• Automatic call to CuFFT instead of FFTW (stubs...)
• 150 iterations of Stars-PM

  Speed-up p4a                                         Cosmological simulation      Jacobi
                                                       32^3     64^3     128^3
  Sequential (time in s) (gcc -O3)                     0.68     6.30     98.4       24.5
  OpenMP 6 threads        --openmp                     4.25     4.92     5.9        1.78
  CUDA base               --cuda                       0.77     1.21     3.13       0.36
  Optim. comm. 1.1        --cuda --com-optimization    3.4      5.38     11         3.8
  Reduction optim. 1.1.2  --cuda --com-optimization    6.8      19.7     46.9       6.4
  Manual optim. (gcc -O3)                              13.6     24.2     54.7

p4a 1.1.2 introduces the generation of CUDA atomic updates for PIPS-detected reductions. Other solution to investigate: CUDPP call generation.

Page 47:

Benchmark results

With Par4All 1.2, CUDA 4.0, on a WildNode with 2 Xeon Nehalem X5670 (12 cores @ 2.93 GHz) and an nVidia Tesla C2050.

[Bar chart of speed-ups (log scale, 0.25x to 256x) for the Polybench 2.0 kernels (2mm, 3mm, adi, bicg, correlation, covariance, doitgen, fdtd-2d, gauss-filter, gemm, gemver, gesummv, gramschmidt, jacobi-1d, jacobi-2d, lu, mvt, symm-exp, syrk, syr2k), the Rodinia kernels (hotspot99, lud99, srad99) and Stars-PM, comparing OpenMP, HMPP-2.5.1, PGI-11.8, par4all-naive and par4all-opt, together with the geometric mean of the speed-ups.]

From the par4all.org distribution, in examples/Benchmarks

Page 48:

Outline

1 HPC Project
2 Par4All
3 Scilab to OpenMP, CUDA & OpenCL
4 Results
5 Conclusion
6 Table of contents

Page 49:

Saint Cloud gatekeeper & massive virtual I/O

Page 50:

Some events in the area

• Creation of the « HPC & GPU Supercomputing Group of Paris Meetup »
  ▸ 25/01/2012 @ Mines ParisTech, jardins du Luxembourg, Paris
  ▸ + Par4All presentation
  ▸ meetup.com/HPC-GPU-Supercomputing-Group-of-Paris-Meetup
• Wild Cruncher UV
  ▸ Scilab parallelization on SGI UV
  ▸ 14/02/2012, SGI breakfast @ Novell France, Tour Franklin, La Défense

Page 51:

Conclusion (I)

• GPUs (and other heterogeneous accelerators): impressive peak performance and memory bandwidth, power efficient
• The domain is maturing: many languages, libraries, applications, tools... Just choose the good one ☺
• Real codes are often not written to be parallelized... even by human beings ☹
• At least writing clean C99/Fortran/Scilab... code should be a prerequisite
• Take a positive attitude... Parallelization is a good opportunity for deep cleaning (refactoring, modernization...) ⇒ it also improves the original code
• Open standards to avoid sticking to some architectures
• Need software tools and environments that will last through business plans or companies

Page 52:

Conclusion (II)

• Open implementations are a warranty of long-term support for a technology (cf. the current tendency in military and national security projects)
• p4a motto: keep things simple
• Open Source for the community network effect
• Easy way to begin with parallel programming
• Source-to-source
  ▸ Gives some programming examples
  ▸ Good start that can be reworked upon
• Entry cost
• Exit cost! ☹
  ▸ Do not lose control of your code and your data!
• HPC Project is hiring ☺

Page 53:

Outline

1 HPC Project
2 Par4All
3 Scilab to OpenMP, CUDA & OpenCL
4 Results
5 Conclusion
6 Table of contents

Page 54:

Table of contents

SOLOMON 2
POMP & PompC @ LI/ENS 1987–1992 3
HyperParallel Technologies (1992–1998) 4
HyperParallel Technologies (1992–1998) 5
Present motivations: reinterpreting Moore’s law 6

1 HPC Project
  Outline 8
  HPC Project emergence 9
  HPC Project hardware: WildNode from Wild Systems 10
  HPC Project software and services 11

2 Par4All
  Outline 12
  We need software tools 13
  Expressing parallelism? 14
  Automatic parallelization 15
  Basic Par4All coding rules for good parallelization 16
  p4a in a nutshell 18
  Basic GPU execution model 20
  Rely on PIPS 21
  Current PIPS usage 24
  Automatic parallelization 25
  Outlining 26
  From array regions to GPU memory allocation 28
  Communication generation 30
  Loop normalization 31
  From preconditions to iteration clamping 32
  Optimized reduction generation 34
  Communication optimization 35
  Loop fusion 36
  Par4All Accel runtime 37

3 Scilab to OpenMP, CUDA & OpenCL
  Outline 39
  Scilab language 40
  Scilab & Matlab 41
  Wild Cruncher — Scilab parallelization 42

4 Results
  Outline 43
  Stars-PM 44
  Stars-PM time step 45
  Stars-PM & Jacobi results 46
  Benchmark results 47

5 Conclusion
  Outline 48
  Saint Cloud gatekeeper & massive virtual I/O 49
  Some events in the area 50
  Conclusion 51

6 Table of contents
  Outline 53
  You are here! 54