Lecture 4: Collective Communications
Dr. Muhammad Hanif Durad
Department of Computer and Information Sciences
Pakistan Institute of Engineering and Applied Sciences
[email protected]
Some slides have been adapted, with thanks, from other lectures available on the Internet.
Lecture Outline
Collective Communication
First Program Using Collective Communication
The Master-Slave Paradigm
Multiplying a Matrix with a Vector
Another Approach to Parallelism
Collective routines provide a higher-level way to organize a parallel program
Each process executes the same communication operations
MPI provides a rich set of collective operations…
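To make this concrete, here is a minimal sketch of a complete program built around a single collective call, MPI_Bcast. It is added for illustration (not from the original deck); the file name and run command are assumptions. Note that every rank executes the identical call.

/* Minimal sketch: every process makes the same collective call.
 * Compile: mpicc bcast_demo.c -o bcast_demo   (illustrative names)
 * Run:     mpirun -np 4 ./bcast_demo
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, value = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0)
        value = 42;          /* only the root initializes the data */

    /* All ranks call MPI_Bcast; the root (rank 0) sends and
       every other rank receives.                                  */
    MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);

    printf("Rank %d now has value %d\n", rank, value);

    MPI_Finalize();
    return 0;
}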
Collective Communication
Involves all processes in the scope of a communicator
Three categories:
synchronization (barrier)
data movement (broadcast, scatter, gather, all-to-all)
collective computation (reduce, scan)
Limitations/differences from point-to-point:
blocking (no longer strictly true: MPI-3 added nonblocking collectives)
do not take tag arguments
predefined reduction operations (e.g., MPI_SUM) work only with predefined MPI datatypes, not with derived types
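A small sketch of the collective-computation category (mine, not from the slides): each rank contributes its rank number and MPI_Reduce combines the values with the predefined MPI_SUM operation at the root. The absence of a tag argument is visible in the call signature.

/* Collective computation sketch: sum every rank's contribution at root. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size, sum;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Every rank contributes its rank number; MPI_SUM combines the
       contributions and the result lands at rank 0.  Note: no tag. */
    MPI_Reduce(&rank, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("Sum of ranks 0..%d = %d\n", size - 1, sum);

    MPI_Finalize();
    return 0;
}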
Collective Communication
Involves a set of processes, defined by an intra-communicator; message tags are not present.
Principal collective operations (a worked example follows this list):
MPI_Bcast() - broadcast from root to all other processes
MPI_Gather() - gather values from a group of processes
MPI_Scatter() - scatter a buffer in parts to a group of processes
MPI_Alltoall() - send data from all processes to all processes
MPI_Reduce() - combine values from all processes into a single value
MPI_Reduce_scatter() - combine values and scatter the results
MPI_Scan() - compute prefix reductions of data across processes
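As a sketch of the data-movement operations above (an illustration added here, not part of the original deck): the root scatters one array element to each rank, every rank doubles its element locally, and the root gathers the results back in rank order.

/* Scatter/gather sketch: distribute, transform locally, collect. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    int rank, size, x;
    int *data = NULL;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {                       /* root builds the full array */
        data = malloc(size * sizeof(int));
        for (int i = 0; i < size; i++)
            data[i] = i + 1;
    }

    /* Each rank receives exactly one element of the root's buffer. */
    MPI_Scatter(data, 1, MPI_INT, &x, 1, MPI_INT, 0, MPI_COMM_WORLD);

    x *= 2;                                /* local work on the piece */

    /* The root gathers the transformed elements back in rank order. */
    MPI_Gather(&x, 1, MPI_INT, data, 1, MPI_INT, 0, MPI_COMM_WORLD);

    if (rank == 0) {
        for (int i = 0; i < size; i++)
            printf("data[%d] = %d\n", i, data[i]);
        free(data);
    }

    MPI_Finalize();
    return 0;
}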