Towards Multiscale Computing Tools based on GridSpace
Katarzyna Rycerz, Eryk Ciepiela, Daniel Harężlak, Marian Bubak
ACC Cyfronet and Institute of Computer Science, AGH, Krakow, Poland
dice.cyfronet.pl
Work supported by MAPPER: Multiscale Applications on European e-Infrastructures, http://www.mapper-project.eu, "e-Infrastructures"
Project Director: Alfons Hoekstra, Amsterdam University
Cracow Grid Workshop 2010
Overview
• Multiscale simulations – overview
• MAPPER motivation and architecture
• GridSpace – short reminder from yesterday
• Preliminary experiment with multiscale application in GridSpace
• Demo of the experiment
Multiscale Simulations
Consist of modules of different scales.
Examples include modelling of:
• the virtual physiological human initiative
• reacting gas flows
• capillary growth
• colloidal dynamics
• stellar systems
• and many more ...
virtual physiological human, fusion, hydrology, nano material science, computational biology
the recurrence of stenosis, a narrowing of a blood vessel, leading to restricted blood flow
MAPPER architecture
• Develop computational strategies, software and services for distributed multiscale simulations across disciplines, exploiting existing and evolving European e-infrastructure
• Deploy a computational science infrastructure
• Deliver high quality components aiming at large-scale, heterogeneous, high performance multi-disciplinary multiscale computing
• Advance the state of the art in high performance computing on e-infrastructures to enable distributed execution of multiscale models across e-Infrastructures
GridSpace
• Easy access using a Web browser
• Experiment workbench
  – constructing experiment plans from code snippets
  – interactively running experiments
• Experiment Execution Environment
  – multiple interpreters
  – access to libraries, programs and services (gems)
  – access to computing infrastructure: cluster, grid, cloud
• Example applications using GS
  – binding sites in proteins
  – analysis of water solutions of amino acids
• Experience
  – ViroLab project
  – PL-Grid NGI
Preliminary experiment with multiscale application in GridSpace
• Multiscale dense stellar system simulations (from MUSE; http://www.muse.li)
• Two modules with different scales:
  – stellar evolution (macroscale)
  – stellar dynamics – N-body simulation (mesoscale)
• Data management:
  – masses of evolving stars are sent from evolution (macroscale) to dynamics (mesoscale)
  – no data is transmitted from dynamics to evolution
  – dynamics should not outpace evolution
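The one-way coupling described above can be sketched as a simple loop (a hypothetical illustration, not the actual MUSE code; the step sizes and the model functions are placeholders): evolution advances first and publishes star masses, and dynamics consumes them while never advancing past evolution's clock.

```ruby
# Sketch of the one-way evolution -> dynamics coupling (hypothetical,
# not the MUSE code). Evolution leads; dynamics never outpaces it.

# Toy "evolution" module: coarse macroscale steps, publishes masses.
def evolve(masses, dt)
  masses.map { |m| m * 0.999 } # stars slowly lose mass (placeholder)
end

# Toy "dynamics" module: fine mesoscale steps using the masses
# last received from evolution.
def dynamics_step(state, masses, dt)
  state + masses.sum * dt # placeholder for an N-body step
end

masses = [1.0, 2.0, 5.0]
state  = 0.0
t_evo, t_dyn   = 0.0, 0.0
dt_evo, dt_dyn = 10.0, 1.0

while t_evo < 30.0
  masses = evolve(masses, dt_evo) # macroscale step
  t_evo += dt_evo
  # mesoscale catches up, but may not pass the macroscale clock
  while t_dyn + dt_dyn <= t_evo
    state = dynamics_step(state, masses, dt_dyn)
    t_dyn += dt_dyn
  end
end
puts "dynamics reached t=#{t_dyn}, evolution t=#{t_evo}"
```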
[Figure: MUSE application – the evolution and dynamics modules exchange data along the simulation time axis; the two modules take different numbers of steps per unit of simulation time]
Interactions between components in our experiment
• We use a special communication bus (called HLA) to synchronize simulation modules with time management
• Time management:
  – simulation modules are called federates
  – the regulating federate (evolution) regulates the progress of the constrained federate (dynamics)
  – federates exchange data with time stamps
  – the furthest point in time which the constrained federate can reach at a given moment (LBTS) is calculated dynamically, according to the position of the regulating federate on the time axis
[Figure: HLA time management on a time axis starting at t=0 – each federate has a current logical time; its effective logical time lies a lookahead ahead of it, and the federate may not publish messages within that lookahead interval; LBTS marks the time before which other federates will not send messages, so a federate may only advance its time within the interval up to LBTS]
[Figure: the Dynamics and Evolution modules connected through the HLA communication bus]
Wrapping simulation models as software components
To enable users to steer the behavior of a simulation from the outside, we wrap simulation models into software components.
We use the H2O framework:
• simulation modules can expose remotely accessible external interfaces
• implementations of simulation models are wrapped and placed inside pluglets
• containers for pluglets are called kernels
• pluglets are deployed into kernels
[Figure: a remote node runs an H2O kernel containing an H2O pluglet that wraps the implementation of a simulation model; the exposed interface offers start/stop, change time policy, and switch data exchange on/off]
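The interface such a wrapped component exposes might look roughly as follows (a sketch only; real H2O pluglets are Java classes, and the class and method names here are assumptions mirroring the three exposed operations: start/stop, change time policy, switch data exchange):

```ruby
# Sketch of the interface a wrapped simulation model could expose
# (illustrative Ruby; real H2O pluglets are implemented in Java).
class SimulationPluglet
  attr_reader :running, :time_policy, :data_exchange

  def initialize(model_name)
    @model_name    = model_name
    @running       = false
    @time_policy   = :constrained # or :regulating
    @data_exchange = false
  end

  def start
    @running = true
  end

  def stop
    @running = false
  end

  # Switch the federate between regulating and constrained roles.
  def change_time_policy(policy)
    raise ArgumentError unless [:regulating, :constrained].include?(policy)
    @time_policy = policy
  end

  def switch_data_exchange(on)
    @data_exchange = on
  end
end

dynamics = SimulationPluglet.new("dynamics")
dynamics.change_time_policy(:constrained)
dynamics.switch_data_exchange(true)
dynamics.start
puts dynamics.running # => true
```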
Demo experiment
[Figure: in GridSpace, the user's Ruby script (snippet 1) runs a PBS job that allocates nodes and starts H2O kernels on node A and node B]
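Snippet 1 could look roughly like the following (hypothetical: the real snippet, the job options, and the kernel launch command are not shown in the slides); to stay self-contained, it only assembles the PBS submission command instead of executing it:

```ruby
# Sketch of a GridSpace "snippet 1": submit a PBS job that allocates
# nodes and starts an H2O kernel on each of them. Hypothetical code;
# the script contents and the `h2o-kernel` command are assumptions.
nodes = 2

# PBS job script: one H2O kernel per allocated node.
job_script = <<~PBS
  #PBS -l nodes=#{nodes}
  #PBS -N h2o-kernels
  pbsdsh h2o-kernel --daemon
PBS

# In GridSpace the snippet would pipe this to `qsub`; here we only
# build the command string rather than running it.
submit_cmd = "echo '#{job_script}' | qsub"
puts submit_cmd
```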
Demo experiment
[Figure: a JRuby script (snippet 2) asks selected components on node A and node B (each running in an H2O kernel) to join the simulation system and to publish or subscribe to data objects (stars)]
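Snippet 2 might then drive the remote components along these lines (a sketch; the component handles and method names are assumptions, and the remote H2O calls are stubbed with a local class so the example is self-contained):

```ruby
# Sketch of "snippet 2": ask components to join the simulation system
# and wire up publish/subscribe of star data. Hypothetical method
# names; remote H2O invocations are replaced by a local stub.
class ComponentStub
  attr_reader :joined, :published, :subscribed

  def initialize(name)
    @name = name
    @published  = []
    @subscribed = []
  end

  def join_simulation(federation)
    @joined = federation
  end

  def publish(object)
    @published << object
  end

  def subscribe(object)
    @subscribed << object
  end
end

evolution = ComponentStub.new("evolution") # on node A
dynamics  = ComponentStub.new("dynamics")  # on node B

[evolution, dynamics].each { |c| c.join_simulation("muse") }
evolution.publish("stars")  # evolution sends star masses...
dynamics.subscribe("stars") # ...and dynamics receives them

puts dynamics.subscribed.inspect # => ["stars"]
```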