Performance Comparison of Pure MPI vs Hybrid MPI-OpenMP Parallelization Models on SMP Clusters
Nikolaos Drosinos and Nectarios Koziris
National Technical University of Athens, Computing Systems Laboratory
{ndros,nkoziris}@cslab.ece.ntua.gr
www.cslab.ece.ntua.gr
Pure Message-passing Model
April 27, 2004 IPDPS 2004 10
Overview
Introduction
Pure Message-passing Model
Hybrid Models
• Hyperplane Scheduling
• Fine-grain Model
• Coarse-grain Model
Experimental Results
Conclusions – Future Work
Hyperplane Scheduling
Implements coarse-grain parallelism assuming inter-tile data dependencies
Tiles are organized into data-independent subsets (groups)
Tiles of the same group can be executed concurrently by multiple threads
Barrier synchronization between threads
Introduction
Pure Message-passing Model
Hybrid Models
• Hyperplane Scheduling
• Fine-grain Model
• Coarse-grain Model
Experimental Results
Conclusions – Future Work
Fine-grain Model
Incremental parallelization of computationally intensive parts: pure MPI + hyperplane scheduling
Inter-node communication outside of the multi-threaded part (MPI_THREAD_MASTERONLY)
Thread synchronization through the implicit barrier of the omp parallel directive
Introduction
Pure Message-passing Model
Hybrid Models
• Hyperplane Scheduling
• Fine-grain Model
• Coarse-grain Model
Experimental Results
Conclusions – Future Work
Coarse-grain Model
Threads are initialized only once: SPMD paradigm (requires more programming effort)
Inter-node communication inside the multi-threaded part (requires MPI_THREAD_FUNNELED)
Thread synchronization through an explicit barrier (omp barrier directive)
(Figure: tile execution timeline; legend: node 0 / CPU 0, node 0 / CPU 1, node 1 / CPU 0, node 1 / CPU 1)
(Results grouped by problem shape: X < Y vs X > Y)
ADI X=128 Y=512 Z=8192 – 2 nodes
ADI X=256 Y=512 Z=8192 – 2 nodes
ADI X=512 Y=512 Z=8192 – 2 nodes
ADI X=512 Y=256 Z=8192 – 2 nodes
ADI X=512 Y=128 Z=8192 – 2 nodes
ADI X=128 Y=512 Z=8192 – 2 nodes
(Figure: computation vs communication time breakdown)
ADI X=512 Y=128 Z=8192 – 2 nodes
(Figure: computation vs communication time breakdown)
Overview
Introduction
Pure Message-passing Model
Hybrid Models
• Hyperplane Scheduling
• Fine-grain Model
• Coarse-grain Model
Experimental Results
Conclusions – Future Work
Conclusions
Tiled loop algorithms with arbitrary data dependencies can be adapted to the hybrid parallel programming paradigm
Hybrid models can be competitive with the pure message-passing paradigm
The coarse-grain hybrid model can be more efficient than the fine-grain one, but is also more complicated
Programming efficiently in OpenMP is not easier than programming efficiently in MPI
Future Work
Application of the methodology to real applications and standard benchmarks
Work balancing for the coarse-grain model
Investigation of alternative topologies and irregular communication patterns
Performance evaluation on advanced interconnection networks (SCI, Myrinet)