Page 1:

State of Charm++

Laxmikant V. Kale

Page 2:

April 18th, 2011, Charm++ Workshop 2011, © Laxmikant V. Kale

Page 3:

Page 4:

Charm++ is a robust system

• Release 6.3 last month.

• 19+ major releases over 20 years.

• Autobuild

• Build and test every night

• Local machines, major supercomputers, and NMI

• 300 functional tests, over 50 system configurations

• Comprehensive set of tools:

• Projections: Performance visualization

• Debuggers: freeze-and-inspect, record-replay, ..

Page 5:

Some statistics for Charm++

Lines of code: 350,000 lines in Charm++ itself, ~30,000 in Projections, and ~10,000 in the debugger (counts generated using David A. Wheeler's 'SLOCCount').

Popularity:

• 2,900 distinct direct source downloads
• 1,400 distinct direct binary downloads
• 40,000+ downloads for NAMD

Page 6:

Parallel Programming Laboratory

Page 7:

Parallel Programming Laboratory

Page 8:

PPL with Collaborators

Page 9:

A glance at history

• 1987: Chare kernel arose from parallel Prolog work

• Dynamic load balancing for state-space search, Prolog, ...

• 1992: Charm++

• 1994: position paper on application oriented yet CS centered research

• NAMD: 1994, 1996

• Charm++ in almost current form: 1996-1998

• Chare arrays

• Measurement-based dynamic load balancing

• 1997: Rocket center: a trigger for AMPI

• 2001: Era of ITRs:

• Quantum chemistry collaboration: OpenAtom

• Computational astronomy collaboration: ChaNGa

• 2008: Multicore meets Petaflop/s, Blue Waters

• 2010: Collaborations, BigSim, scalability

Page 10:

PPL Mission and Approach

• To enhance performance and productivity in programming complex parallel applications

• Performance: scalability to hundreds of thousands of processors

• Productivity: of human programmers

• Complex: irregular structure, dynamic variations

• Approach: application oriented yet CS centered research

• Developing enabling technology for a wide collection of apps

• Develop, use and test it in the context of real applications

Page 11:

Our guiding principles

• No magic: parallelizing compilers have achieved close to technical perfection but are not enough

• Sequential programs obscure too much information

• Seek an optimal division of labor between the system and the programmer

• Design abstractions based solidly on use-cases

L. V. Kale, "Application Oriented and Computer Science Centered HPCC Research", Developing a Computer Science Agenda for High-Performance Computing, New York, NY, USA, 1994, ACM Press, pp. 98-105.

Page 12:

Charm++ and CSE Applications

Page 13:

Migratable objects

[Figure: user view vs. system view of migratable objects]

• Programmer: overdecomposition into virtual processors
• Runtime: assigns VPs to processors
• Enables adaptive runtime strategies
• Implementations: Charm++, AMPI
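A minimal Charm++ sketch of the programmer's side of this picture (hypothetical module and class names hello, Main, Worker; the .ci interface declarations are abbreviated as a comment, and termination is simplified): the program creates a 1D chare array with many more elements (virtual processors) than physical PEs, and the runtime chooses where they run and may migrate them.

```cpp
// hello.ci (Charm++ interface file), abbreviated here as a comment:
//   mainmodule hello {
//     mainchare Main    { entry Main(CkArgMsg*); };
//     array [1D] Worker { entry Worker(); entry void work(); };
//   };

#include "hello.decl.h"

class Main : public CBase_Main {
 public:
  Main(CkArgMsg* msg) {
    // Overdecomposition: many more array elements than processors.
    int numWorkers = 8 * CkNumPes();
    CProxy_Worker workers = CProxy_Worker::ckNew(numWorkers);
    workers.work();                 // broadcast the entry method to all elements
    delete msg;
  }
};

class Worker : public CBase_Worker {
 public:
  Worker() {}
  Worker(CkMigrateMessage*) {}      // migration constructor (plus a pup() method
                                    // in a real program) makes the object migratable
  void work() {
    CkPrintf("Worker %d running on PE %d\n", thisIndex, CkMyPe());
    if (thisIndex == 0) CkExit();   // crude termination, sufficient for a sketch
  }
};

#include "hello.def.h"
```

The runtime, not the programmer, decides which PE each Worker element lives on, which is what enables the adaptive strategies (load balancing, fault tolerance, power management) highlighted on the following slides.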

Page 14:

• Over-decomposition and message-driven execution
• Migratability
• Introspective and adaptive runtime system
• Higher-level abstractions
• Control points
• Automatic overlap, prefetch, compositionality
• Scalable tools
• BigSim
• Fault tolerance
• Dynamic load balancing (topology-aware, scalable)
• Temperature/power considerations
• Charisma, MSA, CharJ

Page 15:

Highlights of recent results

Page 16:

Temperature-aware load balancing

• One objective: to save cooling energy

• Set CRAC thermostat at a higher temperature

• but control core temperature using DVFS

• This leads to load imbalances, but these can be handled via object migrations
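A schematic of the control loop this implies, not the actual Charm++ implementation: the sensor and DVFS hooks below (read_core_temp_celsius, set_core_freq_step, num_freq_steps) are hypothetical stand-ins for platform-specific interfaces.

```cpp
#include <vector>

// Hypothetical platform hooks: stand-ins for real temperature sensors and
// DVFS (frequency scaling) controls, stubbed so the sketch is self-contained.
static double read_core_temp_celsius(int /*core*/) { return 55.0; }
static void   set_core_freq_step(int /*core*/, int /*step*/) {}
static int    num_freq_steps() { return 4; }   // 0 = slowest, 3 = fastest

// Run periodically (e.g. at each load-balancing step): keep each core under
// the threshold by stepping its frequency down, and step it back up once it
// has headroom.  Cores then run at different speeds, so work becomes
// imbalanced; the measurement-based load balancer migrates objects away
// from the slowed cores.
void temperature_control_step(int ncores, double max_temp_c,
                              std::vector<int>& freq_step) {
  for (int c = 0; c < ncores; ++c) {
    double t = read_core_temp_celsius(c);
    if (t > max_temp_c && freq_step[c] > 0)
      --freq_step[c];                  // too hot: slow this core down
    else if (t < max_temp_c - 2.0 && freq_step[c] < num_freq_steps() - 1)
      ++freq_step[c];                  // comfortably cool: speed it back up
    set_core_freq_step(c, freq_step[c]);
  }
}
```

The 2 °C hysteresis band and the one-step-at-a-time policy are illustrative choices, not values from the talk.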

Page 17:

Energy Savings with minimal timing penalty

Page 18:

Evolution of Biomolecular System Size

Page 19:

100 million atoms on Jaguar

!"""#

$%"""#

&!"""#

!"""# $%"""# &!"""#

!"##$%

"&&

'()*#+&

'()*+# ,-.# /0123#45#6*778)7# /0123#452#6*778)7#

1. Enabled by a parallel I/O implementation

2. Runs in the SMP mode of Charm++ perform better

3. Scales up to the full Jaguar PF

Page 20:

Weak scaling on Intrepid

!"

#!"

$!"

%!"

&!"

'!!"

'#!"

($'" '!#$" #!$&" $!)%" &')#" '%(&$"

!"#$%&'()*

+,'$

)+,%-).+/"#+0

%12+34)5

+

6."*)7++89+1"#)7:."*);+

*+,"-./0" 1.12*+,"-./0"

~1466 atoms/core

Page 21:

!"#$"%&•  '()*"+(,&-()./01&%/,(+&2/0&!+3(&4*)(05&•  '(6(+/7%(-)&)/&03-&*)&+*0#(&58*+(&(98"(-)+:&•  ;*-:&,"<(0(-)&/3)73)5&–  ="-1&5)*>5>85?&70/@(8>/-5?&35(0&70"-)5&A&

•  $)3,"(5B&C/7/+/#:D;*77"-#?&$:5)(%&E/"5(?&F/++(8>6(&G7>%"H*>/-&

[Figure: Improved All-to-All Algorithm's Link Utilization; utilization (%) vs. time (µs) for Link 7 (LR), Link 12 (LR), and Link 24 (LR)]

Page 22:

Mapping for Blue Waters

[Figure 1: The PERCS network (link types LL, LR and D). Left: all-to-all connections within one supernode (connections originating from only two nodes in different drawers are shown to keep the diagram simple). Right: the second-level all-to-all connections across supernodes (again, D links originating from only two supernodes are shown).]

On the right side of Figure 1, the second-tier connections between supernodes are shown. Every supernode is connected to every other supernode by a D link (10 GB/s). These inter-supernode connections originate and terminate at hub/switches connected to nodes; a given hub/switch is directly connected to only a fraction (≤ 16) of the other supernodes. For simplicity, D links originating from only two supernodes (in red) have been shown. The 32 cores of a node can inject onto the network at a bandwidth of 192 GB/s through the hub/switch directly connected to them.

Section 4 will present a case study of a 2D Stencil showing that a default mapping of this application with direct routing can lead to significant congestion on the network. Hence, interesting research questions arise with respect to reducing hot-spots on the Blue Waters network. Random versus contiguous job scheduling, direct versus indirect routing, and intelligent mapping techniques present opportunities to minimize congestion.

[Figure 2: The number of D links reduces significantly compared to that of LL and LR links as one uses fewer and fewer supernodes in the PERCS topology.]

An important thing to note about the PERCS topology is the ratio of first-level connections to the …

DEF BNM BDM BSM RNM RDM DFI RDI

481.70 481.74 480.07 480.90 480.71 481.03 480.07 479.74

Table 4: Execution time per iteration (in ms) for 2D Stencil for different mappings on 64 supernodes

Mapping   Node             Drawer            Supernode
DEF       16 × 2 × 1 × 1   16 × 16 × 1 × 1   16 × 16 × 4 × 1
BNM       4 × 2 × 2 × 2    16 × 4 × 2 × 2    16 × 16 × 2 × 2
BDM       4 × 2 × 2 × 2    4 × 4 × 4 × 4     16 × 4 × 4 × 4
BSM       4 × 2 × 2 × 2    4 × 4 × 4 × 4     8 × 8 × 4 × 4

Table 5: Dimensions of blocks at different levels (node, drawer and supernode) for different mappings of 4D Stencil

… 479 ms). The improvements in execution time for 2D Stencil are not significant because the message size is small (64 KB) and hence there is negligible load on the high-bandwidth links. We shall see in the next few sections that mapping can result in significant improvements when communication is higher.

6.2 Mapping a 9-point 4D Stencil

A 9-point four-dimensional stencil is representative of the communication pattern in MILC, a Lattice QCD code. For the same amount of data assigned to each task in a 2D Stencil and a 4D Stencil, say x^4 elements, the computation is 5x^4 and 9x^4 respectively, but the size of each message is x^2 in 2D and x^3 in 4D. Hence, we expect more congestion, and better improvement from mapping, for 4D.

For the 4D Stencil simulations, we consider an array of dimensions 1024 × 1024 × 1024 × 1024 with each element being a double. The 4D array is distributed among MPI tasks by recursively dividing along all four dimensions, with each task being assigned 64 × 64 × 64 × 64 elements. This leads to a logical 4D grid of MPI tasks of dimensions 16 × 16 × 16 × 16. In each iteration, every MPI task sends eight messages of size 64 × 64 × 64 elements to its eight neighbors. Table 5 lists the dimensions of the blocks of tasks placed on a node, drawer and supernode for the different mappings. For the random nodes mapping we place 4 × 2 × 2 × 2 tasks on a node, and for the random drawers mapping we place 4 × 4 × 4 × 4 tasks on a drawer.

Figure 6 shows histograms based on the amount of data (in bytes) sent over the LL, LR and D links (note that the bin sizes and y-axis ranges for the LL, LR and D links are different). The counts only include links with a non-zero number of bytes passing through them. The amount of data being sent over D links is much higher (bin size of 108.3 MB), and hence we expect that lowering the amount of data sent on D links will have a positive impact on performance. Let us focus first on the right column, which shows the D link usage for different mappings. For the default mapping, a large number of links are in the last bin, i.e. they are heavily utilized. As we progressively block tasks using different mappings (BNM, BDM and BSM), links start falling into lower-numbered bins, signifying fewer bytes passing through them and less contention. The random nodes mapping (RNM) is successful at spreading the load evenly over more D links while also keeping the number of bytes in each bin bounded. Even though the random nodes and random drawers mappings increase the usage of LL and LR links, the data sent over them is small, so this does not have an adverse effect on performance.

Figure 7 presents similar histograms for indirect routing coupled with the default mapping and the random drawers mapping. These present the best scenarios for link usage: for the D link histograms, more …

[Figure 8: Average number of bytes sent over LL, LR and D links for 4D Stencil on 64 supernodes]

[Figure 9: Time spent in communication and overall execution per iteration for different mappings on 64 supernodes]

7.3 Multicast pattern

NAMD is a molecular dynamics application with a multicast communication pattern where some processors build spanning trees and send messages along the trees to several processors. We wrote a simple MPI benchmark to simulate this multicast pattern, where, in each iteration, every MPI task communicates with 14 neighbors whose ranks differ from its own by ..., −2x, −x, x, 2x, 3x, ..., where x can be varied. For example, for x = 5, the MPI task with rank 50 communicates with ranks 20, 25, 30, 35, 40, 45, 55, 60, 65, 70, 75, 80, 85. This benchmark performs no computation. We compare the default mapping of MPI tasks with four mapping and routing configurations: BNM, BDM, DFI and RDI.
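As a concrete reading of the x = 5 example, a small sketch that builds one task's neighbor list; the exact multiplier set is an assumption (see the comment), not the benchmark's published definition.

```cpp
#include <cstdio>
#include <vector>

// Neighbor list for the multicast benchmark described above.  The multiplier
// set {-6..-1, +1..+7} is inferred from the x = 5 / rank 50 example in the
// text (which lists 13 ranks even though the prose says 14 neighbors), so
// treat it as an assumption rather than the benchmark's exact definition.
std::vector<int> multicast_neighbors(int rank, int x, int nranks) {
  std::vector<int> nbrs;
  for (int m = -6; m <= 7; ++m) {
    if (m == 0) continue;
    int peer = rank + m * x;
    if (peer >= 0 && peer < nranks)   // drop neighbors that fall off the ends
      nbrs.push_back(peer);
  }
  return nbrs;
}

int main() {
  for (int r : multicast_neighbors(50, 5, 1024)) std::printf("%d ", r);
  std::printf("\n");   // prints: 20 25 30 35 40 45 55 60 65 70 75 80 85
  return 0;
}
```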

In Figure 11, we present link usage statistics for the three types of links. This is a different communication pattern from the 2D and 4D near-neighbor patterns we have seen so far. A random nodes or random drawers mapping with direct routing does not get better link utilization than the default mapping, because it is difficult to find a blocking that is optimized for this multicast pattern. However, the indirect routing cases (DFI and RDI) succeed in lowering the average and maximum usage on the D links significantly compared to the other mappings. This is also reflected in the reduction in per-iteration execution time, as shown in Table 6.

DEF     BNM     BDM     DFI     RDI
54.64   87.73   44.24   17.81   17.64

Table 6: Execution time per iteration (in ms) for the multicast pattern for different mappings on 64 supernodes

8 Full scale simulations for Blue Waters

The Blue Waters machine, when it is installed at Illinois, will consist of more than 300 supernodes (the actual number is not public yet). With more than 307,200 cores, the machine will deliver sustained Petaflop/s performance. In this section, we present results of running a 4D Stencil on the 307,200 cores of Blue Waters using a detailed network simulation. For the 300-supernode simulations, we consider a data array of dimensions 512 × 512 × 1024 × 4800 …

Page 23:

CharmLU Exclusive Scheduling Classes

[Diagram: per-processor timelines (Proc 1 … Proc n) over time, showing Active Panel work and Trailing Updates]

Balancing synchrony, adaptivity, and asynchrony

Page 24:

CharmLU Exclusive Scheduling Classes

[Figure: GFlop/s per core vs. number of processors (132, 528, 2112) for four configurations: active panel and reduction callback isolated; active panel isolated; active panel and U triangular solves isolated; no isolation]

Page 25:

Fault Tolerance: Causal Message Logging

• Consistently better than pessimistic message logging.

• Scaling results up to 1024 cores.

[Figure: relative execution time on the NAS Parallel Benchmarks CG(D), MG(E), BT(D) and DT(C) for Checkpoint/Restart, Causal Message Logging and Pessimistic Message Logging]

Page 26:

Team-based Load Balancer

• Attains two goals: balance the load and reduce message logging memory overhead.

• Low performance penalty, dramatic memory savings.

[Figure: execution time (seconds) and message-logging memory overhead (MB) for NoLB(8), GreedyLB(8), TeamLB(1) and TeamLB(8)]
