Adaptive Multiscale Simulation Infrastructure - AMSI

Overview:
o Industry Standards
o AMSI Goals and Overview
o AMSI Implementation
o Supported Soft Tissue Simulation
o Results

W.R. Tobin, D. Fovargue, D. Ibanez, M.S. Shephard
Scientific Computation Research Center
Rensselaer Polytechnic Institute
Current Industry Standards – Physical Simulations
The overwhelming majority of numerical simulations conducted in HPC (and elsewhere) are single-scale:
o Continuum (e.g. Finite Element, Finite Difference)
o Discrete (e.g. Molecular Dynamics)
Phenomena at multiple scales can have profound effects on the eventual solution to a problem (e.g. fine-scale anisotropies)
Current Industry Standards – Physical Simulations
Typically a physical model or scale is simulated using a Single Program Multiple Data (SPMD) style of parallelism
o Quantities of interest (mesh, tensor fields, etc.) are distributed across the parallel execution space
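The SPMD distribution of quantities of interest can be illustrated with a minimal sketch (names are illustrative, not AMSI's API; real codes would distribute mesh entities via MPI rather than a dictionary):

```python
def block_partition(num_elements, num_ranks):
    """Block-partition a mesh's elements across SPMD processes:
    each rank owns a contiguous slice of the distributed data,
    with per-rank sizes differing by at most one."""
    base, extra = divmod(num_elements, num_ranks)
    parts, start = {}, 0
    for rank in range(num_ranks):
        count = base + (1 if rank < extra else 0)
        parts[rank] = list(range(start, start + count))
        start += count
    return parts

# 10 elements over 4 ranks: ranks 0-1 own 3 elements, ranks 2-3 own 2
parts = block_partition(10, 4)
```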
[Figure: geometric model, partition model, and distributed mesh]
Current Industry Standards – Physical Simulations
Interacting physical models and scales introduce a much more complex set of requirements on our use of the parallel execution space
o Writing a new SPMD code for each new multiscale simulation would require intense reworking of legacy codes used for single-scale simulations (possibly many times over)

We need an approach that can leverage the work that has gone into creating and perfecting legacy simulations in the context of massively parallel simulations with interacting physical models
[Figure: a primary SPMD code coordinating several auxiliary SPMD codes]
AMSI Goals
Take advantage of proven legacy codes to address the needs of multimodel problems
o Minimize the need to rework legacy codes to execute in a more dynamic parallel environment
o The only desired edit/interaction points are those locations in the code where the values produced by multiscale interactions are needed

Allow dynamic scale load-balancing and process scale reassignment to reduce process idle time when a scale is blocked or underutilized
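The process-reassignment goal can be sketched as a greedy rebalancing loop (a hypothetical illustration, not AMSI's actual algorithm): move processes from the least-loaded scale to the most-loaded one while doing so lowers the peak per-process load.

```python
def rebalance_scales(procs, work):
    """procs: scale -> process count; work: scale -> workload units.
    Greedily shift processes from the least- to the most-loaded scale
    while that reduces the peak per-process load (idle-time proxy)."""
    procs = dict(procs)

    def load(s):
        return work[s] / procs[s]

    while True:
        hot = max(procs, key=load)   # most-loaded scale
        cold = min(procs, key=load)  # least-loaded scale
        if hot == cold or procs[cold] <= 1:
            break
        # peak load if one process moved from cold to hot
        new_peak = max(work[hot] / (procs[hot] + 1),
                       work[cold] / (procs[cold] - 1),
                       *(load(s) for s in procs if s not in (hot, cold)))
        if new_peak >= load(hot):
            break  # no further improvement possible
        procs[hot] += 1
        procs[cold] -= 1
    return procs

# micro carries 3x the work of macro, so it ends with 3x the processes
balanced = rebalance_scales({'macro': 4, 'micro': 4},
                            {'macro': 4, 'micro': 12})
```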
AMSI Goals
Hierarchy of focuses
o Abstract-Level: Support for implementing multi-model simulations on massively parallel HPC machines
o Simulation-Level: Allow dynamic runtime workflow management to implement versatile adaptive simulations
o Theory-Level: Provide generic control algorithms (and hooks to allow specialization) supported by real-time minimal simulation meta-modeling
o Developer-Level: Facilitate all of the above while minimizing AMSI system overheads and maintaining robust code
[Figure: adaptive simulation control loop linking simulation goals, physics analysis, scale/physics linking models, and physical attributes through simulation initialization and simulation state control, driven by discretization/model/linking error estimates, discretization/model/scale-linking improvement, model hierarchy control, and limits based on measured parameters]
AMSI Goals
Variety of end-users targeted
o Application Experts:
  • Simulation end-users who want answers to various problems
o Modeling Experts:
  • Introduce codes expressing new physical models
  • Combine proven physical models in new ways to describe multiscale behavior
o Computational Experts:
  • Introduce new discretization methods
  • Introduce new numerical solution methods
  • Develop new parallel algorithms
AMSI Overview
General meta-modeling services
o Support for modeling computational scale-linking operations and data
  • Model of scale-tasks and task-relations denoting multiscale data transfer
o Specializing this support will facilitate interaction with high-level control and decision-making algorithms
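The scale-task / task-relation meta-model might be captured by data structures like the following (a minimal sketch with assumed names, not AMSI's real types):

```python
from dataclasses import dataclass

@dataclass
class ScaleTask:
    """One scale of the multiscale simulation and the processes it owns."""
    name: str
    ranks: list  # process ranks assigned to this scale-task

@dataclass
class TaskRelation:
    """A directed producer -> consumer relation denoting multiscale
    data transfer between two scale-tasks."""
    producer: ScaleTask
    consumer: ScaleTask
    data: str  # label of the scale-linking data being transferred

# hypothetical setup: 4 macroscale ranks feed 12 microscale ranks
macro = ScaleTask("macro", list(range(0, 4)))
micro = ScaleTask("micro", list(range(4, 16)))
link = TaskRelation(macro, micro, "deformation-gradients")
```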
[Figure: scale linking — each scale (scaleX, scaleY) carries explicit and computational domains, math and computational models, and explicit and computational tensor fields, linked across scales by geometric interactions, model relationships, and field transformations]
AMSI Overview
Dynamic management of the parallel execution space
Process reassignment will use load-balancing support for underlying SPMD distributed data as well as the implementation of state-specific entry/exit vectors for scale-tasks.
o Load balancing of scale-coupling data is supported by the meta-model of that data in the parallel space
o Other data requires support for dynamic load balancing in any underlying libraries
o Can be thought of as a hierarchy of load-balancing operations
  • Scale-tasks and their scale-linking relations
o Runtime actions
  • Data distributions representing discrete units of generic scale-linking data
  • Communication patterns determining distribution of scale-linking communication down to individual data distribution units
o Shift to more dynamic scale management will require new control data to be reconciled across processes and scales
  • Change initialization actions to be (allowable) runtime actions
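One level of that hierarchy, the per-rank distribution of discrete scale-linking data units, can be sketched as below (an illustration under assumed names; migration here is just bookkeeping, whereas the real system would also move the data):

```python
class DataDistribution:
    """Meta-model of generic scale-linking data: counts of discrete
    data units owned by each rank of a scale-task, so the coupling
    data can be load balanced without inspecting the data itself."""

    def __init__(self, ranks):
        self.units = {r: 0 for r in ranks}

    def assign(self, rank, n):
        """Record n new data units owned by the given rank."""
        self.units[rank] += n

    def migrate(self, src, dst, n):
        """A load-balancing action: move n units between ranks."""
        assert self.units[src] >= n
        self.units[src] -= n
        self.units[dst] += n

    def total(self):
        return sum(self.units.values())

dd = DataDistribution(range(3))
dd.assign(0, 6)       # rank 0 initially owns all 6 units
dd.migrate(0, 2, 2)   # rebalance two units onto rank 2
```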
[Figure: initialization vs. runtime — scale-tasks (scaleX, scaleY, scaleZ), scale-linking data, and communication patterns]
AMSI Implementation
Two forms of control data parallel communication
o Assembly is a scale-task collective process.
o Reconciliation is collective on the union of two scale-tasks associated by a communication relation.
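The distinction between the two collectives can be sketched as follows (simulating ranks with plain Python values rather than MPI; the function names are illustrative, not AMSI's API):

```python
def assemble(local_values):
    """Assembly: collective over the ranks of ONE scale-task.
    Here each rank contributes a local control value and all
    ranks obtain the combined result (modeled as a sum)."""
    return sum(local_values)

def reconcile(scale_x_view, scale_y_view):
    """Reconciliation: collective on the UNION of two scale-tasks
    related by a communication relation. Both sides leave with the
    merged, mutually consistent control data."""
    merged = {**scale_x_view, **scale_y_view}
    return merged, merged  # each scale-task's post-reconciliation copy

total = assemble([1, 2, 3])
x_view, y_view = reconcile({'macro_dofs': 900}, {'micro_rves': 120})
```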
[Figure: Assembly within a single scale-task vs. Reconciliation across scaleX and scaleY]
AMSI Implementation
Scale-linking communication patterns
o Constructed via standard distribution algorithms, or
o Hooks provided for user-implemented pattern construction

Shift to phased communication and dynamic scale-task management will introduce new requirements
o Will reduce the number of explicit control data reconciliations
o Will require the introduction of implicit control data reconciliations during scale-linking operations
  • Primary simulation control points
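A "standard distribution algorithm" for pattern construction might look like this sketch (an assumed block-distribution strategy, for illustration): the global sequence of data units is block-distributed over consumer ranks, yielding a matrix of how many units each producer sends to each consumer.

```python
def build_comm_pattern(units_per_producer, num_consumers):
    """Block-distribute the global sequence of scale-linking data units
    across consumer ranks; return pattern[p][c] = number of units
    producer rank p sends to consumer rank c."""
    total = sum(units_per_producer)
    base, extra = divmod(total, num_consumers)
    # global index range owned by each consumer rank
    bounds, start = [], 0
    for c in range(num_consumers):
        n = base + (1 if c < extra else 0)
        bounds.append((start, start + n))
        start += n
    # intersect each producer's range with each consumer's range
    pattern = [[0] * num_consumers for _ in units_per_producer]
    gstart = 0
    for p, n in enumerate(units_per_producer):
        gend = gstart + n
        for c, (lo, hi) in enumerate(bounds):
            pattern[p][c] = max(0, min(gend, hi) - max(gstart, lo))
        gstart = gend
    return pattern

# two producers with 3 units each, spread over three consumers
pattern = build_comm_pattern([3, 3], 3)
```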
[Figure: scaleX and scaleY assemble, reconcile, and communicate]
AMSI Implementation
Intermediate scale between current scales
o Scale linking
  • Deformations to the RVE
  • Force/displacement to the engineering scale

[Figure: macroscale continuum coupled to a fiber-only microscale RVE]
Biotissue Implementation
Scalable implementation with parallelized scale-tasks
[Figure: macroscale processes macro0…macroN linked to microscale processes micro0…microM]
Biotissue Implementation
Scalable implementation with parallelized scale-tasks
o Ratio of macroscale mesh elements per macroscale process to number of microscale processes determines neighborhood of scale-linking communication
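One way such a neighborhood could be computed is sketched below (assuming a block assignment of microscale ranks to macroscale ranks, which is an illustrative choice rather than the implementation described here):

```python
def micro_neighborhood(macro_rank, num_macro, num_micro):
    """Return the microscale ranks a given macroscale rank exchanges
    scale-linking data with, under a block assignment of micro ranks
    to macro ranks (assumed mapping, for illustration)."""
    base, extra = divmod(num_micro, num_macro)
    start = macro_rank * base + min(macro_rank, extra)
    count = base + (1 if macro_rank < extra else 0)
    return list(range(start, start + count))

# 4 macro ranks over 10 micro ranks: neighborhoods of size 3, 3, 2, 2
hood = micro_neighborhood(0, 4, 10)
```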
[Figure: neighborhoods of scale-linking communication between macroscale and microscale processes]
Biotissue Implementation
o Macroscale - Parallel Finite Element Analysis
  • Distributed partitioned mesh, distributed tensor fields defined over the mesh, distributed linear algebraic system
  • Stress field values characterize macro-micro ratio
o Fiber-only microscale - Quasistatics code
  • ~1k nodes per RVE
  • Rapid assembly and solve times per RVE in serial implementation
  • Strong scaling with respect to macroscale mesh size
  • Initial results use fiber-only at every macroscale integration point to generate stress field values
o Fiber-matrix microscale - Parallel FEA
  • Order of magnitude more nodes per RVE (~10k-40k)
  • More complex linear system assembly and longer (nonlinear) solve times necessitate parallel implementation per RVE
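The macro-micro coupling described above can be sketched as a loop over integration points, with a toy stand-in for the RVE solve (the linear stress response and all names here are purely illustrative; the real microscale solves a nonlinear fiber network):

```python
def solve_rve(deformation, fiber_stiffness=1.0):
    """Toy stand-in for a fiber-only RVE quasistatics solve:
    returns a 'stress' proportional to the imposed deformation.
    (Illustrative only; not the actual microscale model.)"""
    return fiber_stiffness * deformation

def macro_step(integration_point_deformations):
    """One macroscale step: send each integration point's deformation
    to a microscale RVE and gather the resulting stress field values,
    which the macroscale FEA then assembles into its system."""
    return [solve_rve(d) for d in integration_point_deformations]

stresses = macro_step([0.0, 0.5, 1.0])
```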
Biotissue Implementation
Incorporating fiber-and-matrix microscale RVEs
o Hierarchy of parallelism