Auto‐Tuning of Hierarchical Computations with ppOpen‐AT

Takahiro Katagiri i),ii),iii), Masaharu Matsumoto i),ii), Satoshi Ohshima i),ii)
17th SIAM Conference on Parallel Processing for Scientific Computing (SIAM PP16)
Université Pierre et Marie Curie, Cordeliers Campus, Paris, France
Thursday, April 14, 2016, MS55: Auto‐Tuning for the Post Moore's Era – Part I of II, 2:40–3:00
i) Information Technology Center, The University of Tokyo ii) JST, CREST
iii) Currently, Information Technology Center, Nagoya University
Outline
1. Background
2. ppOpen‐HPC and ppOpen‐AT
3. Code Selection in Seism3D and Its Implementation by ppOpen‐AT
4. Performance Evaluation
5. Conclusion
1. Background
• Moore's Law is expected to end around 2020.
– End of "one‐time speedup": many cores, and wiring miniaturization to reduce power.
→ FLOPS inside a node can no longer be increased.
• However, memory bandwidth can still be increased by using 3D stacked memory technologies.
• 3D stacked memory:
– It increases bandwidth in the Z direction (across the stacking distance) while keeping access latency, giving high performance.
– Access latency in the X‐Y directions can also be low.
• Between nodes, access latency will not improve much, but bandwidth can be increased by optical interconnect technologies.
• We therefore need new algorithms designed around the ability to move data.
[Figure: examples of 3D stacked memory technologies: HMC (Hybrid Memory Cube) and Intel 3D XPoint]
AT Technologies in Post Moore’s Era
Outline
1. Background
2. ppOpen‐HPC and ppOpen‐AT
3. Code Selection in Seism3D and Its Implementation by ppOpen‐AT
4. Performance Evaluation
5. Conclusion
ppOpen‐HPC Project
• Middleware for HPC and its AT technology
– Supported by JST, CREST, from FY2011 to FY2015.
– PI: Professor Kengo Nakajima (U. Tokyo)
• ppOpen‐HPC
– An open‐source infrastructure for reliable simulation codes on post‐peta (pp) scale parallel computers.
– Consists of various libraries covering 5 kinds of discretization methods for scientific computations.
• ppOpen‐AT
– An auto‐tuning language for ppOpen‐HPC codes.
– Uses knowledge from the previous ABCLibScript project 4).
– A directive‐based language for AT.
Software Architecture of ppOpen‐HPC
[Figure: software architecture of ppOpen‐HPC. A User Program calls ppOpen‐APPL (FEM, FDM, FVM, BEM, DEM), ppOpen‐MATH (MG, GRAPH, VIS, MP), ppOpen‐AT (STATIC, DYNAMIC), and ppOpen‐SYS (COMM, FT), running on multi‐core CPUs, many‐core CPUs, and GPUs. The Auto‐Tuning Facility generates code for optimization candidates, searches for the best candidate, executes the optimization automatically, and optimizes memory accesses.]
ppOpen‐AT System (Based on FIBER 2),3),4),5))
[Figure: AT workflow. ① Before release time, the library developer annotates ppOpen‐APPL/* with ppOpen‐AT directives encoding user knowledge. ② The ppOpen‐AT auto‐tuner automatically generates candidate codes 1..n. ③ The library user calls the library on the target computers. ④ Execution times of the candidates are measured. ⑤ The best candidate is selected. ⑥ The auto‐tuned kernel is executed at run time; this user thus benefits from AT.]
Scenario of AT for ppOpen‐APPL/FDM
■ Execute the auto‐tuner: the library user sets the AT parameters and executes the library with OAT_AT_EXEC=1.
– Runs with fixed loop lengths (by specifying the problem size and the number of MPI processes).
– Measures the time of the target kernels.
– Stores the information of the fastest kernel variant.
■ Execute with the optimized kernels, without the AT process: the library user sets the AT parameters and executes the library with OAT_AT_EXEC=0.
– The fastest kernel is used without AT (unless the problem size or the numbers of MPI processes and OpenMP threads change). A code sketch of this two‐phase flow follows.
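As a minimal sketch of this two‐phase scenario (hypothetical driver code; only the OAT_AT_EXEC variable comes from the slides), a library entry point could branch on the environment variable like this:

program at_scenario_sketch
  implicit none
  character(len=8) :: mode
  integer :: stat

  ! Read the AT control variable set by the library user.
  call get_environment_variable("OAT_AT_EXEC", mode, status=stat)

  if (stat == 0 .and. trim(mode) == "1") then
     ! AT phase: time every candidate kernel with the fixed
     ! loop lengths and store the fastest-variant information.
     print *, "run auto-tuner, measure candidates, store best variant"
  else
     ! Production phase: load the stored fastest-kernel
     ! information and execute without any AT overhead.
     print *, "load stored best variant, execute optimized kernel"
  end if
end program at_scenario_sketch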
Target kernel: explicit time integration by the leap‐frog scheme (update_stress). Arrays are read and written in each call, which raises the B/F (bytes per flop) ratio to about 1.7.
Code selection by ppOpen‐AT and hierarchical AT
Program main
....
!OAT$ install select region start
!OAT$ name ppohFDMupdate_vel_select
!OAT$ select sub region start
call ppohFDM_pdiffx3_p4( SXX, DXSXX, NXP, NYP, NZP, .... )
call ppohFDM_pdiffy3_p4( SYY, DYSYY, NXP, NYP, NZP, ..... )
...
if ( is_fs .or. is_nearfs ) then
  call ppohFDM_bc_stress_deriv( KFSZ, NIFS, NJFS, IFSX, .... )
end if
call ppohFDM_update_vel( 1, NXP, 1, NYP, 1, NZP )
!OAT$ select sub region end
!OAT$ select sub region start
call ppohFDM_update_vel_Intel( 1, NXP, 1, NYP, 1, NZP )
!OAT$ select sub region end
!OAT$ install select region end
Upper code: with the select clause, code selection among the candidate sub‐regions can be specified, as illustrated below.
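For illustration only (this is not the actual generated code, and best_variant is a hypothetical name), the auto‐tuner can be thought of as turning such a select region into a branch over the candidates, with the variant index fixed by the tuning run:

program select_region_sketch
  implicit none
  integer :: best_variant  ! hypothetical: chosen and stored by the auto-tuner

  best_variant = 2         ! in practice this value comes from the stored AT result

  select case (best_variant)
  case (1)
     print *, "candidate 1: original sequence of separate kernels"
  case (2)
     print *, "candidate 2: merged, architecture-specific kernel"
  end select
end program select_region_sketch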
Lower code: the callee subroutines contain LoopFusion regions.

subroutine ppohFDM_update_vel(....)
....
!OAT$ install LoopFusion region start
!OAT$ name ppohFDMupdate_vel
!OAT$ debug (pp)
!$omp parallel do private(i,j,k,ROX,ROY,ROZ)
do k = NZ00, NZ01
  do j = NY00, NY01
    do i = NX00, NX01
      .....
....

subroutine ppohFDM_pdiffx3_p4(....)
....
!OAT$ install LoopFusion region start
....
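To illustrate what a LoopFusion region lets the tuner choose (a toy example, not the actual ppOpen‐APPL/FDM kernels), compare two separate stencil‐style sweeps with one fused sweep that touches each element once:

program loop_fusion_sketch
  implicit none
  integer, parameter :: n = 64
  real :: a(n,n,n), b(n,n,n)
  integer :: i, j, k

  a = 1.0
  ! Fused variant: one pass over memory instead of two
  ! (unfused: first sweep sets b = 2*a, second sweep sets a = a + b).
  do k = 1, n
     do j = 1, n
        do i = 1, n
           b(i,j,k) = 2.0 * a(i,j,k)
           a(i,j,k) = a(i,j,k) + b(i,j,k)
        end do
     end do
  end do
  print *, a(1,1,1)  ! expected: 3.0
end program loop_fusion_sketch

Fusing reduces the loads and stores per element, which matters for kernels whose B/F requirement exceeds what the memory system can deliver.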
Execution environment:
• MPI: Intel MPI (based on MPICH2, MVAPICH2), Version 5.0 Update 3, Build 20150128
• Compiler: Intel Fortran version 15.0.3, 20150407
• Compiler options: -ipo20 -O3 -warn all -openmp -mcmodel=medium -shared-intel -mmic -align array64byte
• KMP_AFFINITY=granularity=fine,balanced (uniform distribution of threads between sockets)
Execution details:
• ppOpen‐APPL/FDM ver. 0.2
• ppOpen‐AT ver. 0.2
• Number of time steps: 2000
• Number of nodes: 8
• Native mode execution
• Target problem size (almost the maximum size with 8 GB/node):
– NX × NY × NZ = 512 × 512 × 512 over 8 nodes
– NX × NY × NZ = 256 × 256 × 256 per node (a 2 × 2 × 2 domain decomposition; per node, not per MPI process)
• Number of kernel iterations for auto‐tuning: 100
Execution details of hybrid MPI/OpenMP:
• Target MPI processes and OpenMP threads on the Xeon Phi:
– The Xeon Phi runs with 4 HT (Hyper‐Threading) per core.
– PX TY denotes X MPI processes with Y threads per process.
– P8T240 is the minimum hybrid MPI/OpenMP execution for ppOpen‐APPL/FDM, since it needs at least 8 MPI processes.
– Configurations with fewer threads per process than P960T2 cause an MPI error in this environment.
[Figure: assignment of cores #0-#15 to MPI processes. P2T8: 2 MPI processes with 8 threads each; P4T4: 4 MPI processes with 4 threads each. The figure highlights the cores targeted by one MPI process.]
BREAK DOWN OF TIMINGS
THE BEST IMPLEMENTATION
Outline
1. Background
2. ppOpen‐HPC and ppOpen‐AT
3. Code Selection in Seism3D and Its Implementation by ppOpen‐AT
4. Performance Evaluation
5. Conclusion
RELATED WORK
Originality (AT Languages)

AT language | Description method | Supported items (#1-#7) | #8: Software requirement
ppOpen‐AT | OAT directives | ✔ ✔ ✔ ✔ ✔ | None
Vendor compilers | (out of target) | limited | -
Transformation Recipes | recipe descriptions | ✔ ✔ | ChiLL
POET | Xform description | ✔ ✔ | POET translator, ROSE
X language | Xlang pragmas | ✔ ✔ | X Translation, 'C and tcc
SPL | SPL expressions | ✔ ✔ ✔ | a script language
ADAPT | ADAPT language | ✔ ✔ | Polaris compiler infrastructure, Remote Procedure Call (RPC)
Atune‐IL | Atune pragmas | ✔ | a monitoring daemon
PEPPHER | PEPPHER pragmas (interface) | ✔ ✔ ✔ | PEPPHER task graph and run‐time
Xevolver | directive extension (recipe descriptions) | (✔) (✔) (✔) | ROSE, XSLT translator (users need to define the rules)

Items: #1: Method for supporting multi‐computer environments. #2: Obtaining loop lengths at run time. #3: Loop split with an increase of computations 6), and loop collapse applied to the split loops 6),7),8). #4: Re‐ordering of inner‐loop statements 8). #5: Code selection with loop transformations (hierarchical AT descriptions*). *This is the originality of this work among current AT research as of 2015. #6: Algorithm selection. #7: Code generation with execution feedback. #8: Software requirement.
Concluding Remarks
• We propose an IF‐free kernel: an effective kernel implementation for an FDM application, obtained by merging the computations of the central‐difference and explicit time‐integration schemes.
• We use an AT language to adapt code selection to the new kernels: the effectiveness of the new implementation depends on the CPU architecture and the execution situation, such as the problem size and the numbers of MPI processes and OpenMP threads.
• The code is freely available (MIT license): http://ppopenhpc.cc.u-tokyo.ac.jp/
Future Work
• Improving the search algorithm
– The current implementation uses a brute‐force search; this is feasible only by applying application knowledge.
– We have implemented new search algorithms based on black‐box performance models:
• d‐Spline model (interpolation and incremental‐addition based), in collaboration with Prof. Tanaka (Kogakuin U.)
• Surrogate model (interpolation and probability based), in collaboration with Prof. Wang (National Taiwan U.)
• Off‐loading implementation selection (for the Xeon Phi)
– If the problem size is too small for off‐loading, the target execution is performed on the CPU automatically.
• Adaptation of OpenACC for GPU computing
– Selection of OpenACC directives with ppOpen‐AT, by Dr. Ohshima.