MPI – An introduction by Jeroen van Hunen
• What is MPI and why should we use it?
• Simple example + some basic MPI functions
• Other frequently used MPI functions
• Compiling and running code with MPI
• Domain decomposition
• Stokes solver
• Tracers/markers
• Performance
• Documentation
What is MPI?
• Mainly a data communication tool: “Message-Passing Interface”
• Allows parallel calculation on distributed-memory machines
• Usually the Single-Program-Multiple-Data (SPMD) principle is used: all processors have similar tasks (e.g. in domain decomposition)
• Alternative: OpenMP for shared-memory machines
Why should we use MPI?
• If sequential calculations take too long
• If sequential calculations use too much memory
Simple MPI example
The code:
• include the MPI header, which contains definitions, macros, and function prototypes
• initialize MPI
• ask the processor ‘rank’
• ask the number of processors p
• stop MPI
Output for 4 processors: four lines, one per processor, in no guaranteed order.
MPI calls for sending/receiving data
MPI_SEND and MPI_RECV syntax
in C:
  int MPI_Send(void *buf, int count, MPI_Datatype datatype,
               int dest, int tag, MPI_Comm comm);
  int MPI_Recv(void *buf, int count, MPI_Datatype datatype,
               int source, int tag, MPI_Comm comm, MPI_Status *status);
in Fortran:
  MPI_SEND(BUF, COUNT, DATATYPE, DEST, TAG, COMM, IERROR)
  MPI_RECV(BUF, COUNT, DATATYPE, SOURCE, TAG, COMM, STATUS, IERROR)
MPI data types
in C:                       in Fortran:
MPI_CHAR    (char)          MPI_CHARACTER         (CHARACTER)
MPI_INT     (int)           MPI_INTEGER           (INTEGER)
MPI_FLOAT   (float)         MPI_REAL              (REAL)
MPI_DOUBLE  (double)        MPI_DOUBLE_PRECISION  (DOUBLE PRECISION)
Other frequently used MPI calls
Sending and receiving at the same time, with no risk of deadlocks: MPI_SENDRECV
… or overwriting the send buffer with the received data: MPI_SENDRECV_REPLACE
Other frequently used MPI calls
Synchronizing the processors: wait for each other at the barrier: MPI_BARRIER
Broadcasting a message from one processor to all the others: both the sending and the receiving processors use the same call to MPI_BCAST
Other frequently used MPI calls
“Reducing” (combining) data from all processors: add, find the maximum/minimum, etc., with MPI_REDUCE.
OP can be one of the following: MPI_SUM, MPI_PROD, MPI_MAX, MPI_MIN, MPI_MAXLOC, MPI_MINLOC, MPI_LAND, MPI_LOR, …
For the results to be available on all processors, use MPI_Allreduce.
Additional comments:
• ‘wildcards’ are allowed in MPI calls for:
  • source: MPI_ANY_SOURCE
  • tag: MPI_ANY_TAG
• MPI_SEND and MPI_RECV are ‘blocking’: they wait until the communication is completed
Deadlocks:
• Deadlock: both processors post a blocking receive first, then a send; each waits forever for a message the other never sends
• Depending on buffer: both processors send first, then receive; this only works as long as the messages fit in the system buffer
• Safe: one processor sends while its partner receives, then the roles are swapped
• Don’t let a processor send a blocking message to itself; in this case use MPI_SENDRECV