-
Outline
- Target simulation: Atomic Force Microscope Tip-Induced Anodic Oxidation
- Multiscale hybrid QM/classical simulation: behavior and requirements
- Implementation: GridRPC + MPI; strategy for the long run
- Ongoing experiments: environments, live status and demonstration
- Summary and future work
National Institute of Advanced Industrial Science and
Technology
Target simulation
- Atomic Force Microscope Tip-Induced Anodic Oxidation -
-
Mechanical and Chemical Reactions with Scanning Probe Microscopy
- AFM nano-rubbing (smaller pressure), e.g., locally oriented liquid crystal
- Atomic-scale friction of MEMS, e.g., the stick-slip process
- AFM anodic oxidation (larger pressure)
-
Relations between external strain, microscopic structure, and oxidation
Oxidation at the contact region depends on:
1. Atomic-scale commensuration of tip and substrate
2. Direction of motion
3. Tip pressure
4. Inserted molecules (humidity)
5. Electron transfer
-
Hybrid QM(DFT)-CL(MD) Simulation Scheme
- Hybrid coarse-grained-particles/MD simulation scheme
- Hybrid QM(DFT)-CL(MD) simulation scheme
  - seamless coupling with the buffered-cluster method
  - adaptive choice of the QM region
Financial support: ACT-JST (2001-2004), JST-CREST (2005-present)
-
Hybrid QM-CL Simulation Run: slide direction along Si-Si dimers
- Formation of Si-Si bonds between tip and substrate
- Detachment of saturation-H atoms (detached QM-H atom)
- Expansion of the QM region
(zoom-out view; 15 fs per frame; v = 0.009 /fs; substrate bottom fixed)
-
Requirements of the simulation
- Flexibility
  - Adaptive expansion of the QM region
  - The number of atoms in a QM region may increase or decrease
  - The number of QM regions may increase or decrease
- Robustness
  - The run must continue for more than a few weeks, even a few months
  - The simulation should be capable of fault recovery
- Efficiency
  - The compute-intensive QM simulation runs on hundreds of CPUs
  - Each (independent) QM simulation runs on a different cluster
Implementation
- GridRPC + MPI - - Strategy for the long run -
-
Algorithm and Implementation
Algorithm (per time step):
1. Initial set-up
2. Calculate the MD forces of the QM+MD regions
3. Calculate the QM force of each QM region; calculate the MD forces of the QM regions
4. Update atomic positions and velocities
Implementation: the MD part sends the data of the QM atoms to the QM part, which returns the QM forces; each QM region is handled by an independent QM calculation.
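The per-step coupling above can be sketched as follows. This is a minimal illustration with placeholder force routines, not the real code: in the actual application each `qm_forces` call is a remote GridRPC invocation of an MPI-parallel DFT server, and all function names here are assumptions.

```python
def md_forces(positions):
    """Classical MD forces for a list of atoms (placeholder model)."""
    return [-0.1 * x for x in positions]

def qm_forces(qm_positions):
    """QM (DFT) forces for one QM region. In the real application this
    is a remote GridRPC call to an MPI-parallel QM server (placeholder)."""
    return [-0.2 * x for x in qm_positions]

def step(positions, velocities, qm_regions, dt=1.0):
    """One hybrid QM-CL time step: MD forces everywhere, then for each
    QM region replace the MD contribution by the QM force."""
    forces = md_forces(positions)                  # MD forces of QM+MD regions
    for region in qm_regions:                      # one (remote) call per QM region
        fq = qm_forces([positions[i] for i in region])
        fm = md_forces([positions[i] for i in region])
        for k, i in enumerate(region):             # swap MD force for QM force
            forces[i] += fq[k] - fm[k]
    velocities = [v + f * dt for v, f in zip(velocities, forces)]
    positions = [x + v * dt for x, v in zip(positions, velocities)]
    return positions, velocities
```

Because the QM regions are independent, the per-region calls can be issued asynchronously and overlapped, which is what makes one-region-per-cluster dispatch natural.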
-
Does the implementation satisfy the requirements?
- Flexibility
  - GridRPC enables dynamic join/leave of QM servers.
  - GridRPC enables dynamic expansion of a QM server.
- Robustness
  - GridRPC detects errors, and the application can implement its own recovery code.
- Efficiency
  - GridRPC easily handles multiple clusters.
  - Local MPI provides high performance within a cluster through fine-grained parallelism.
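The robustness point - the middleware reports the error, the application supplies the recovery - can be sketched as a retry wrapper. This is an illustrative pattern only; the exception class and function names are assumptions, standing in for GridRPC error codes returned by remote calls.

```python
class ServerFault(Exception):
    """Stands in for an error reported by a remote (GridRPC) call."""

def call_with_recovery(task, servers, max_attempts=3):
    """Application-level recovery: try task(server) on each server in
    turn, retrying on faults, until one call succeeds."""
    last_error = None
    for _ in range(max_attempts):
        for server in servers:
            try:
                return task(server)        # e.g., one QM force evaluation
            except ServerFault as err:     # error detected by the middleware
                last_error = err           # abandon this server, try the next
    raise RuntimeError(f"all attempts failed: {last_error}")
```

The simulation state (atomic positions and velocities) lives on the MD side, so a failed QM call can simply be reissued elsewhere without restarting the run.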
-
Strategy for the long run
- It is impossible to run the simulation for a few months on a fixed set of clusters.
- A QM simulation will migrate to another cluster, either intentionally or unintentionally.
  - Intentional migration: the maximum runtime of the cluster is exceeded, or the reservation period has expired.
  - Unintentional migration: an error or fault is detected.
- The next cluster is selected either by reservation or by a simple selection algorithm that considers:
  - the number of available CPUs
  - the number of requested CPUs
  - records of past utilization
- The simulation reads a host information file at every time step, so a cluster can join or leave the experiment on the fly.
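A simple selection algorithm of the kind described might look like the sketch below. The field names (`available_cpus`, `success_rate`) are assumptions standing in for the available-CPU count and the record of past utilization; the real criteria are only summarized on the slide.

```python
def select_cluster(clusters, requested_cpus):
    """Pick the next cluster for a migrating QM simulation.

    clusters: list of dicts with 'name', 'available_cpus', and
    'success_rate' (a 0..1 summary of past utilization records).
    Returns the chosen cluster name, or None if no cluster can
    satisfy the CPU request."""
    candidates = [c for c in clusters if c["available_cpus"] >= requested_cpus]
    if not candidates:
        return None
    # Prefer the best past record, then the most free CPUs.
    best = max(candidates,
               key=lambda c: (c["success_rate"], c["available_cpus"]))
    return best["name"]
```

Reservation simply bypasses this ranking: a reserved cluster is taken as-is for its reservation window.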
-
Examples of host information records:

NAME SDSC ID 2 ADDR rocks-52.sdsc.edu FROM 2005/4/18/12/30/30 TO 2006/9/18/12/30/30 MAX_AVAIL 86400 CPU_MAX 32 CPU_INIT 32
NAME F32-2 ID 9 ADDR fsvc001.asc.hpcc.jp FROM 2005/10/7/9/0/0 TO 2006/10/11/12/0/0 MAX_AVAIL 172800 CPU_MAX 128 CPU_INIT 64
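Since the records are flat "KEY value" sequences, the host information file that the simulation rereads each step can be parsed with a few lines. This is a sketch under the assumption that each record sits on one line; the real reader is not shown on the slides.

```python
def parse_host_line(line):
    """Parse one host record of alternating KEY value tokens into a dict,
    converting the integer-valued fields."""
    tokens = line.split()
    record = dict(zip(tokens[::2], tokens[1::2]))  # KEY -> value pairs
    for key in ("ID", "MAX_AVAIL", "CPU_MAX", "CPU_INIT"):
        if key in record:
            record[key] = int(record[key])
    return record
```

Rereading this file every time step is what lets a cluster join or leave the experiment on the fly without restarting the simulation.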
Ongoing experiment
- Experimental environments - - Live status and demonstration -
-
-
Experimental Environments (as of Oct. 19)
The number of CPUs used per cluster is decided based on memory size, busyness, and stability for launching MPI processes.

 #  Cluster   Site          Used #CPU  Physical #CPU
 1  F32-2     AIST          128        136 (2 x 68)
 2  F32-3     AIST          128        264 (2 x 132)
 3  P32       AIST          128        256 (2 x 128)
 4  M64       AIST          64         256 (4 x 64)
 5  ISTBS     U. Tokyo      128        340 (2 x 170)
 6  POOL      Tokushima U.  32         47 (1 x 47)
 7  ALAB      TITECH        32         60 (2 x 30)
 8  Rocks-52  SDSC          16         120 (4 x 30)
 9  AMATA     KU            8          8 (1 x 12)
10  ASE       NCHC          8          8 (2 x 8)
11  UME       AIST          8          8 (2 x 14)
12  TGC       NCSA          8          8 (4 x 12)
-
Summary and future work
- GridRPC + MPI enables flexible, robust, and high-performance Grid applications.
  - flexible: allows dynamic resource allocation / migration
  - robust: detects errors and recovers from faults
  - efficient: manages hundreds to thousands of CPUs
- We will hold a joint experiment with TeraGrid: a SIMOX (Separation by IMplantation of OXygen) simulation running for more than one week on 5 x 128-CPU clusters reserved in advance.
- Research issues:
  - Load balancing between QM simulations
  - A more clever scheduling algorithm