Lecture 1: Introduction to MPI
Dr. Muhammad Hanif Durad
Department of Computer and Information Sciences
Pakistan Institute of Engineering and Applied Sciences
[email protected]
Some slides have been adapted, with thanks, from other lectures available on the Internet.
Lecture Outline
• Models of communication for parallel programming (PP)
• MPI libraries
• Features of MPI
• Programming with MPI
• Using MPI
• Manual compilation and running a program
• Basic concepts
• Homework
Models of Communication for Parallel Programming (PP)
A parallel program is composed of tasks (processes) that communicate to accomplish an overall computational goal.
Two prevalent models of communication:
• Shared memory (SM)
• Message passing (MP)
Shared Memory Communication
Processes in a shared-memory program communicate by accessing shared variables and data structures.
Basic shared-memory primitives:
• Read from a shared variable
• Write to a shared variable
[Figure: Basic shared memory multiprocessor architecture: processors and memories connected by an interconnection medium]
Accessing Shared Variables
Conflicts may arise if multiple processes want to write to a shared variable at the same time.
The programmer, the language, and/or the architecture must provide a means of resolving such conflicts.
[Figure: shared variable x incremented by processes A and B; each process executes read x, compute x+1, write x]
Message Passing Communication
• Processes in a message-passing program communicate by sending and receiving messages
Notes on Hello World
• All MPI programs begin with MPI_Init and end with MPI_Finalize
• MPI_COMM_WORLD is defined by mpi.h (in C) or mpif.h (in Fortran) and designates all processes in the MPI “job”
• Each statement executes independently in each process, including the printf/print statements
• I/O is not part of MPI-1 but is in MPI-2; print and write to standard output or error are not part of either MPI-1 or MPI-2
• Output order is undefined (it may be interleaved by character, line, or blocks of characters)
• The MPI-1 standard does not specify how to run an MPI program, but many implementations provide mpirun, e.g. mpirun -np 4 a.out
What Have You Learnt?
Some Basic Concepts
• Processes can be collected into groups
• Each message is sent in a context and must be received in the same context; this provides the necessary support for libraries
• A group and a context together form a communicator
• A process is identified by its rank in the group associated with a communicator
• There is a default communicator, called MPI_COMM_WORLD, whose group contains all initial processes
General MPI Program Structure
https://computing.llnl.gov/tutorials/mpi/
Homework
Modify the previous program so that the name of the processor executing each process is also printed.