Parallel Implementation of Impact Simulation

Seung Hoon Paik*, Ji-Joong Moon*, and Seung Jo Kim**

* School of Mechanical and Aerospace Engineering, Seoul National University, Seoul, Korea
** School of Mechanical and Aerospace Engineering, Flight Vehicle Research Center, Seoul National University, Seoul, Korea

AEROSPACE STRUCTURES Laboratory, Seoul National University
Supercomputing Korea 2006, November 21, 2006
FE Calculation Parallelization

1. Compute at each processor (domain) independently.
2. Interface values are swapped and added.

[Figure: a 3 x 3 grid of subdomains, numbered 1-9]

Each processor (domain) knows:
(1) the list of processors that share a common interface, and
(2) the list of nodes on each shared interface.
These lists are built at the initialization stage and are not changed during the computation (static).

[Figure: array structure of the send buffer and the receive buffer]
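A minimal sketch of the swap-and-add step in C with MPI. The arrays nbr, n_shared, and shared stand in for the static lists described above and are illustrative names, not the actual IPSAP/Explicit data structures; both sides of an interface must pack their shared nodes in the same order so the buffers line up.

#include <mpi.h>

/* Static interface lists, built once at initialization:
 *   nbr[p]       - rank of the p-th neighboring processor
 *   n_shared[p]  - number of nodes shared with neighbor p
 *   shared[p][i] - local index of the i-th shared node
 * (illustrative names, not the actual IPSAP data structures) */
void swap_and_add(double *f, int ndof_per_node,
                  int n_nbr, const int *nbr,
                  const int *n_shared, int **shared,
                  double **sendbuf, double **recvbuf,
                  MPI_Request *req)           /* length 2*n_nbr */
{
    /* 1. Pack this domain's interface values and start the exchange. */
    for (int p = 0; p < n_nbr; p++) {
        for (int i = 0; i < n_shared[p]; i++)
            for (int d = 0; d < ndof_per_node; d++)
                sendbuf[p][i*ndof_per_node + d] =
                    f[shared[p][i]*ndof_per_node + d];
        MPI_Irecv(recvbuf[p], n_shared[p]*ndof_per_node, MPI_DOUBLE,
                  nbr[p], 0, MPI_COMM_WORLD, &req[p]);
        MPI_Isend(sendbuf[p], n_shared[p]*ndof_per_node, MPI_DOUBLE,
                  nbr[p], 0, MPI_COMM_WORLD, &req[n_nbr + p]);
    }
    MPI_Waitall(2*n_nbr, req, MPI_STATUSES_IGNORE);

    /* 2. Add the received neighbor contributions to the local values. */
    for (int p = 0; p < n_nbr; p++)
        for (int i = 0; i < n_shared[p]; i++)
            for (int d = 0; d < ndof_per_node; d++)
                f[shared[p][i]*ndof_per_node + d] +=
                    recvbuf[p][i*ndof_per_node + d];
}

Because the lists are static, the buffers and requests can be allocated once at initialization and reused at every time step.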
Contact Parallelization (Computers and Structures, 2006)

• Contact segments are partitioned in the same way as the FE decomposition, and the data of contact nodes that fall into the expanded spatial region occupied by each segment are sent and received using unstructured communication (a sketch of the region test follows this list).
• Generalized so that both two-body contact and single-surface contact can be handled.
• The FE internal force vector and the contact force vectors of the master nodes are communicated simultaneously.
• The send/receive data structures are kept consistent, and the amount of transmitted data is minimized even under unstructured communication.
• Large-scale parallel performance test results are presented on an ordinary Linux cluster, not on a vendor machine or a specially optimized OS.
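As an illustration of the expanded-region test, the sketch below flags a contact node that falls inside a segment's expanded axis-aligned bounding box and therefore must be sent to the processor owning that segment. The box-with-margin criterion and all names are assumptions for illustration; the slide does not specify the exact test.

#include <stdbool.h>

/* Decide whether a contact node lies inside the expanded bounding box
 * of a contact segment; such nodes are sent to the segment's owner.
 * The pattern is rebuilt every step, hence "unstructured" communication. */
static bool node_in_expanded_box(const double x[3],
                                 const double box_min[3],
                                 const double box_max[3],
                                 double margin)
{
    for (int d = 0; d < 3; d++)
        if (x[d] < box_min[d] - margin || x[d] > box_max[d] + margin)
            return false;
    return true;
}

In practice the margin would be chosen to cover the maximum relative motion per contact-search interval, so that no candidate pair is missed between searches.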
Contact Load Balancing
Verification : Taylor Bar Impact Test
Analysis Conditions
• Material Model: elastic-plastic with linear hardening (a stress-update sketch follows)
• Termination Time: 80 microseconds
• Constraints: sliding condition on the bottom surface
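A textbook sketch of the stress update implied by this material model: a one-dimensional elastic predictor followed by a radial-return corrector with linear isotropic hardening. It illustrates the model only and is not taken from IPSAP/Explicit.

#include <math.h>

/* 1D radial return for elastic-plastic with linear isotropic hardening:
 * E = Young's modulus, H = hardening modulus, sig_y0 = initial yield
 * stress, eps_inc = strain increment, eps_p = accumulated plastic strain. */
double stress_update_1d(double eps_inc, double *sigma, double *eps_p,
                        double E, double H, double sig_y0)
{
    double sig_trial = *sigma + E * eps_inc;              /* elastic predictor */
    double f = fabs(sig_trial) - (sig_y0 + H * (*eps_p)); /* yield check */
    if (f > 0.0) {                                        /* plastic corrector */
        double deps_p = f / (E + H);
        *eps_p += deps_p;
        sig_trial -= (sig_trial > 0.0 ? 1.0 : -1.0) * E * deps_p;
    }
    *sigma = sig_trial;
    return *sigma;
}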
Results

Material Constants & Geometric Configuration
  Density (g/cm^3)             8.93
  Young's Modulus (GPa)        117
  Poisson's Ratio              0.35
  Initial Yield Stress (GPa)   0.4
  Hardening Modulus (GPa)      0.1
  Initial Velocity (km/s)      0.227
  Initial Length (mm)          32.4
  Initial Radius (mm)          3.2

Number of Nodes: 1369; Number of Elements: 972

[Figure: initial and deformed configurations]
Code              Length (mm)   Radius (mm)
ABAQUS/Explicit   21.48         7.08
LS-DYNA           21.23         6.18
IPSAP/Explicit    21.52         7.00
Verification : Oblique Impact of Metal Sphere
Comparison with experiment (Finnegan SA, Dimaranan LG, Heimdahl OER, 1993)

Model Configuration
  Material Model            Johnson-Cook
  Diameter (mm)             6.35
  Mass (g)                  1.04
  Plate Length/Width (mm)   50/40
  Erosion EPS               2.0

IPSAP/Explicit, impact angle = 60°
[Figure: comparison of experiment vs. IPSAP/Explicit at (a) 610 m/s and (b) 910 m/s]
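The Johnson-Cook model listed above computes the flow stress from plastic strain, strain rate, and temperature. Below is a sketch of the standard form; all constants are left as parameters because the slide does not give the values used, and the erosion entry (EPS 2.0) presumably means elements are deleted once the effective plastic strain reaches 2.0.

#include <math.h>

/* Standard Johnson-Cook flow stress:
 *   sigma = (A + B*eps_p^n) * (1 + C*ln(rate/rate0)) * (1 - T*^m),
 * where T* = (T - T_room)/(T_melt - T_room).  All constants are model
 * parameters; the slide does not list the values used in IPSAP. */
double jc_flow_stress(double eps_p, double rate, double T,
                      double A, double B, double n, double C, double m,
                      double rate0, double T_room, double T_melt)
{
    double Tstar = (T - T_room) / (T_melt - T_room);
    if (Tstar < 0.0) Tstar = 0.0;   /* below room temperature: no softening */
    return (A + B * pow(eps_p, n))
         * (1.0 + C * log(rate / rate0))
         * (1.0 - pow(Tstar, m));
}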
Step 4. Advection of vertex-centered variables
• Calculate momentum and mass at each vertex
• Communicate vertex values (momentum in the I, J, K directions, and mass)
• Calculate velocity at each vertex (a sketch follows)
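A serial sketch of this step, assuming hexahedral cells whose mass and momentum are lumped equally (1/8 each) to their eight vertices; the weighting and array layout are illustrative assumptions, not the IPSAP scheme. In the parallel code, the interface vertex sums would be exchanged between the gather loop and the velocity division, just like the FE swap-and-add.

/* Gather cell mass/momentum to vertices, then recover vertex velocity. */
void vertex_velocity(int n_vtx, int n_cell,
                     const int cell_vtx[][8],    /* 8 vertices per hex cell */
                     const double *cell_mass,
                     const double cell_mom[][3], /* momentum in I, J, K */
                     double *v_mass, double v_mom[][3], double v_vel[][3])
{
    for (int v = 0; v < n_vtx; v++) {
        v_mass[v] = 0.0;
        for (int d = 0; d < 3; d++) v_mom[v][d] = 0.0;
    }
    /* distribute each cell's mass and momentum equally to its vertices */
    for (int c = 0; c < n_cell; c++)
        for (int k = 0; k < 8; k++) {
            int v = cell_vtx[c][k];
            v_mass[v] += 0.125 * cell_mass[c];
            for (int d = 0; d < 3; d++)
                v_mom[v][d] += 0.125 * cell_mom[c][d];
        }
    /* ... in parallel, communicate interface vertex sums here ... */
    for (int v = 0; v < n_vtx; v++)
        if (v_mass[v] > 0.0)
            for (int d = 0; d < 3; d++)
                v_vel[v][d] = v_mom[v][d] / v_mass[v];
}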
IPSAP/Explicit shows an elapsed time two to three times smaller than that of LS-DYNA (LS-DYNA: HIS algorithm, MM).
Parallel Performance Evaluation
NCPU   Lagrangian Part           Remap Part                Time          Total
       Internal F    Comm.       Remap       Comm. Remap   Integration   (elapsed)
   1   3.230e+02     0.0         4.062e+03   0.0           3.257e+02     4.711e+03
   2   1.957e+02     6.773e-01   3.034e+03   4.408e+01     3.482e+02     3.587e+03
   4   1.056e+02     1.191e+00   1.502e+03   5.873e+01     1.842e+02     1.803e+03
   8   6.301e+01     1.381e+00   6.829e+02   6.038e+01     1.046e+02     8.591e+02
  16   3.899e+01     1.505e+00   3.555e+02   5.639e+01     6.226e+01     4.630e+02
  32   2.625e+01     1.642e+00   2.172e+02   5.762e+01     4.103e+01     2.902e+02
  64   1.910e+01     1.820e+00   1.282e+02   5.736e+01     3.010e+01     1.840e+02

Elapsed Time of Each Sub-Function
NCPU   Internal Force   Remap   Total
   1   1.00             1.00    1.00
   2   1.65             1.34    1.30
   4   3.06             2.70    2.58
   8   5.13             5.95    5.42
  16   8.28             11.4    10.1
  32   12.3             18.7    16.1
  64   16.9             31.7    25.3

Speed-Up of the Internal Force Calculation, the Remap, and the Total
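Speed-up here is the one-CPU elapsed time divided by the N-CPU elapsed time; for example, the 64-CPU remap speed-up is 4.062e+03 / 1.282e+02 ≈ 31.7, matching the table.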
• The elapsed time of the remap part, including its communication time, takes about 90% of the total elapsed time.
• The communication time of the remap part is 30 to 40 times larger than that of the Lagrangian part.
• The parallel efficiency of the remap part is better than that of the internal force calculation. This is because the program skips the internal force calculation for void cells, which leaves the per-processor workload unevenly balanced.
Summary & Future Work

Summary
• A newly developed Lagrangian/Eulerian code has been described, and its parallel procedure has been presented.
• Parallel performance has been compared with a commercial code and shown to be very efficient as the number of CPUs increases.
• The remap part is identified as the most influential part for both serial and parallel performance, since it takes over 90% of the total elapsed time.
• This is the first parallel two-step Eulerian code developed in Korea.

Future Work
• Multi-material capability
• 2nd-order accuracy
• Lagrangian-Eulerian interface