Page 1
Grid Computing in Multidisciplinary CFD Optimization Problems
Toan NGUYEN
May 13-15th, 2003
Project OPALE
Parallel CFD Conference, Moscow (RU)
The challenge of Multi-physics Industrial Applications
Page 2
• PARALLEL CFD OPTIMIZATION
• STATE OF THE ART
• FUTURE TRENDS & CONCLUSION
OUTLINE
• MULTIDISCIPLINARY APPLICATIONS
• CURRENT ISSUES
• INRIA
Page 3
http://www.inria.fr
PART 1
Page 4
Created 1967
French Scientific and Technological Public Institute
Ministry of Research and Ministry of Industry
INSTITUT NATIONAL DE RECHERCHE EN INFORMATIQUE ET AUTOMATIQUE
National Research Institute for Computer Science and Automatic Control
Page 5
INRIA MISSIONS
• Fundamental and applied research
• Design experimental systems
• Technology transfer to industry
• Knowledge transfer to academia
• International scientific collaborations
• Contribute to international programs
• Technological assessment
• Contribute to standards organizations
Page 6
Rocquencourt, Rennes, Lorraine
Sophia Antipolis
Rhône-Alpes
2,500 people in six Research Centers
• 900 permanent staff
- 400 researchers
- 500 engineers, technicians and administrative personnel
• 500 researchers from other organizations
• 600 trainees, PhD and post-doctoral students
• 100 external collaborators
• 400 visiting researchers from abroad
Budget 120 MEuros (tax not incl.), 25% self-funding through 600 contracts
Futurs
PERSONNEL
Page 7
CHALLENGES
• Expertise to program, compute and communicate using the Internet and heterogeneous networks
• Design new applications using the Web and multimedia databases
• Expertise to develop robust software
• Design and master automatic control for complex systems
• Combine simulation and virtual reality
Page 8
APPLICATIONS
• Telecommunications and multimedia
• Healthcare and biology
• Engineering
• Transportation
• Environment
Page 9
RESEARCH PROJECTS
• Teams of approx. 20 researchers
• Medium-term objectives and work program (4 years)
• Scientific and financial independence
• Links and partnerships with scientific and industrial partners on a national and international basis
• Regular assessment of results during the given time-scale
Page 10
PROJECTS
99 Projects in four themes:
1. Networks and Systems
2. Software Engineering and Symbolic Computing
3. Human-Computer Interaction, Image Processing, Data Management, Knowledge Systems
4. Simulation and Optimization of Complex Systems
Page 11
INTERNATIONAL COOPERATION
Develop collaborations with European research centres and industries & strengthen the European scientific community in Information & Communication Technologies
Increase international collaborations and enhance exchanges
• Cooperations with the United States, Japan, Russia
• Relations with China, India, Brazil, etc.
• Partnerships with developing countries
• World Wide Web Consortium (W3C)
Work with the best industrial partners worldwide
Page 12
• Areas
• Located in Sophia Antipolis & Grenoble
• Follow-up of the SINUS project
• INRIA project (January 2002)
OPALE
NUMERIC OPTIMISATION (genetic, hybrid, …)
MODEL REDUCTION (hierarchic, multi-grids, …)
INTEGRATION PLATFORMS
Coupling, distribution, parallelism, grids, clusters, ...
APPLICATIONS : aerospace, electromagnetics, …
Page 13
STATE OF THE ART
PART 2
Page 14
GRID COMPUTING
• THE GRIDBUS PROJECT (Univ. Melbourne, Australia)
Page 15
GRID COMPUTING
• INFORMATION SERVICES
• RESOURCE MANAGEMENT
• DATA MANAGEMENT
Page 16
APPLICATIONS
National Partnership for Advanced Computational Infrastructure
Page 17
GRID COMPUTING
• HIGH THROUGHPUT COMPUTING
• HIGH PERFORMANCE COMPUTING
• PETA-DATA MANAGEMENT
• LONG DURATION APPLICATIONS
Page 18
• HIGH-PERFORMANCE PROBLEM SOLVING ENVIRONMENTS
• AFFORDABLE HIGH-PERFORMANCE COMPUTING
GRID COMPUTING
• BUSINESS TO BUSINESS & E-COMMERCE
• LARGE SCALE SCIENTIFIC APPLICATIONS
• ENGINEERING, BIO-SCIENCES, EARTH & CLIMATE MODELLING
• IRREGULAR AND DYNAMIC BEHAVIOR APPLICATIONS
Page 19
GRID COMPUTING
• OPTIMALGRID PROJECT (IBM Almaden Research Center)
Page 20
• DISTRIBUTED HETEROGENEOUS DYNAMIC RESOURCES & SERVICES
• DISCOVERY, SHARING, COORDINATED USE, MONITORING
GRID COMPUTING
• PERFORMANCE, SECURITY, SCALABILITY, ROBUSTNESS
• DYNAMIC MONITORING
• ADAPTIVE RESOURCE CONTROL
• ERROR AMPLIFIER SYNDROME
PERFORMANCE DIRECTED MANAGEMENT
Page 21
• BROKERING, FAULT DETECTION & TROUBLESHOOTING
GRID COMPUTING
• PLANNING & ADAPTING DISTRIBUTED APPLICATIONS
• NEED ENQUIRY, REGISTRATION PROTOCOLS
• CACHING, MIGRATING, REPLICATING DATA
APPLICATIONS : HIGH ENERGY PHYSICS (DATAGRID, PPDG, GriPhyN)
GRID SERVICES (OGSA)
LOCATION TRANSPARENCY, MULTIPLE PROTOCOL BINDINGS
COMPATIBLE UNDERLYING PLATFORMS
CREATE & COMPOSE DISTRIBUTED SYSTEMS
Page 22
GRID COMPUTING
• NSF Middleware Initiative : Globus, Condor-G, NWS, KX509, GSI-SSH, GPT
• ISI, Univ. Chicago, NCSA, SDSC, Univ. Wisconsin Madison
• NSF, Dept Energy, DARPA, NASA
GOAL : « national middleware infrastructure to permit seamless resource sharing across virtual organizations »
GRID Research, Integration, Deployment & Support center
PHILOSOPHY : « the whole is greater than the sum of its parts »
APPLICATIONS : NEES, GriPhyN, Intl Virtual Data Grid Lab (ATLAS)
Page 23
• PARALLEL & DISTRIBUTED PROGRAMMING
• SOFTWARE DEV. : FREE OPEN SOURCE (Linux, FreeBSD)
• DEVELOPMENT OF LARGE DISTRIBUTED DATA FILE SYSTEMS
GRID COMPUTING
• BEOWULF CLUSTERS
• HIGH-SPEED GIGABITS/SEC NETWORKS
• COMPONENT PROGRAMMING
Incentives
Page 24
BEOWULF CLUSTER
PC-cluster at INRIA Rhône-Alpes (216 Pentium III procs.)
Page 25
« Grid is a type of parallel and distributed system that enables the sharing, selection, and aggregation of resources distributed across multiple administrative domains, based on their (the resources') availability, capability, performance, cost and users' quality-of-service requirements. If distributed resources happen to be managed by a single, global centralised scheduling system, then it is a cluster. In a cluster, all nodes work cooperatively with a common goal and objective, as the resource allocation is performed by a centralised, global resource manager. In Grids, each node has its own resource manager and allocation policy. »
Rajkumar Buyya (Grid Infoware)
GRIDS vs. CLUSTERS
Page 26
• PARALLELISM IS NOT DISTRIBUTION
• DISTRIBUTION SUPPORTS A LIMITED FORM OF PARALLELISM
DISTRIBUTION vs. PARALLELISM
• PARALLELISM ALLOWS DISTRIBUTION
• GLOBUS WILL NOT PARALLELIZE YOUR CODE
YOU CAN DISTRIBUTE SEQUENTIAL CODES
YOU CAN DISTRIBUTE PARALLEL CODES
YOU CAN RUN SEQUENTIAL CODES IN « PARALLEL »
YOU CAN RUN SEQUENTIALLY PARALLEL CODES
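The distinction can be made concrete with a small sketch (Python, illustrative only: `sequential_solver` is a hypothetical stand-in for an unmodified sequential CFD code). A grid middleware such as Globus will not parallelize the solver itself, but independent sequential runs can still be distributed and executed concurrently:

```python
from concurrent.futures import ThreadPoolExecutor

def sequential_solver(mach):
    """Hypothetical stand-in for an unmodified sequential CFD code."""
    # pretend the solver returns a drag coefficient for the given Mach number
    return round(0.02 + 0.01 * mach, 4)

# Four independent sequential runs executed concurrently (threads here for
# brevity; on a grid each run would land on a different machine or cluster).
cases = [0.7, 0.8, 0.84, 0.9]
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(sequential_solver, cases))
# results == [0.027, 0.028, 0.0284, 0.029]
```

The code inside `sequential_solver` never changes: only the scheduling of the runs does, which is exactly the "sequential codes in « parallel »" case above.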
Page 27
WHERE WE ARE TODAY
• 1980 : one year CPU time
• 1992 : one month « »
• 1997 : four days « »
• 2002 : one hour « »
• ASCI White (LLNL) : 8,192 IBM SP Power 3 procs
• MCR Linux (LLNL) : 2,304 Intel 2.4 GHz Xeon procs
• ASCI Q (LANL) : 11,968 HP Alpha procs
Bits and pieces…
• Earth Sim (Japan) : 5,120 NEC procs
Moore’s law results…
Page 28
DISTRIBUTED SIMULATION PLATFORM
• MULTI-DISCIPLINE PROBLEM SOLVING ENVIRONMENTS
• HIGH-PERFORMANCE & TRANSPARENT DISTRIBUTION
• USING CURRENT COMMUNICATION STANDARDS
• USING CURRENT PROGRAMMING STANDARDS
• WEB LEVEL USER INTERFACES
• OPTIMIZED LOAD BALANCING & COMMUNICATION FLOW
What is required...
Page 29
• DISTRIBUTED : LAN, WAN, HSN...
• CODE-COUPLING FOR HETEROGENEOUS SOFTWARE
• COLLABORATIVE APPLICATIONS
• COMMON DEFINITION, CONFIGURATION, DEPLOYMENT, EXECUTION & MONITORING ENVIRONMENT
• TARGET HARDWARE : NOW, COW, PC clusters, ...
• TARGET APPLICATIONS : multidiscipline engineering, ...
INTEGRATION PLATFORMS
Distributed tasks interacting dynamically in a controlled and formally provable way
What they are...
Page 30
DISTRIBUTED OBJECTS ARCHITECTURE
SOFTWARE COMPONENTS
• COMPONENTS ARE DISTRIBUTED OBJECTS
• WRAPPERS AUTOMATICALLY (?) GENERATED
• COMPONENTS ENCAPSULATE CODES
• DISTRIBUTED PLUG & PLAY
Page 31
« CAST » INTEGRATION PLATFORM
[Diagram: CAST optimizers coupled to solvers through CORBA; each code's modules are encapsulated behind a server wrapper]
Page 32
SOFTWARE COMPONENTS
• BUSINESS COMPONENTS : LEGACY SOFTWARE
• OBJECT-ORIENTED COMPONENTS : C++, PACKAGES, ...
• DISTRIBUTED OBJECTS COMPONENTS : Java RMI, EJB, CCM, ...
• CASUAL METACOMPUTING COMPONENTS ?
Page 33
DISTRIBUTED OBJECTS ARCHITECTURE
SOFTWARE CONNECTORS
• COMPONENTS COMMUNICATE THROUGH SOFTWARE CONNECTORS
• CONNECTORS ARE SYNCHRONISATION CHANNELS
• CONNECTORS = DATA COMMUNICATION CHANNELS
• SEVERAL PROTOCOLS :
- SYNCHRONOUS METHOD INVOCATION
- ASYNCHRONOUS EVENT BROADCAST
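The two connector protocols can be sketched as follows (Python, illustrative only: the class and method names are invented, not CAST's API):

```python
import threading

class MethodConnector:
    """Synchronous protocol: the caller blocks until the component returns."""
    def __init__(self, component):
        self._component = component

    def invoke(self, *args):
        return self._component(*args)  # synchronous method invocation

class EventConnector:
    """Asynchronous protocol: an event is broadcast to every subscriber."""
    def __init__(self):
        self._subscribers = []

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def publish(self, event):
        # notify each subscriber on its own thread (fire-and-forget)
        threads = [threading.Thread(target=cb, args=(event,))
                   for cb in self._subscribers]
        for t in threads:
            t.start()
        return threads  # callers needing a barrier can join these

# usage: two "solver" components subscribed to one event channel
received = []
channel = EventConnector()
channel.subscribe(lambda e: received.append(("solver-1", e)))
channel.subscribe(lambda e: received.append(("solver-2", e)))
for t in channel.publish("new-generation"):
    t.join()

rpc = MethodConnector(lambda x: x * x)
assert rpc.invoke(3) == 9
```

The synchronous connector couples caller and callee in time; the event connector decouples them, which is what lets one optimizer feed many solvers at once.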
Page 34
• NEW APPLICATION METHODOLOGIES
• // SOFTWARE LIBRARIES : MPI, PVM, SciLab //, ...
• PARALLEL and/or DISTRIBUTED HARDWARE
• NESTING SEVERAL DEGREES OF PARALLELISM
PARALLEL APPLICATIONS
DOMAIN DECOMPOSITION
GENETIC ALGORITHMS
GAME THEORY
HIERARCHIC MULTI-GRIDS
The good news….
Page 35
NESTING PARALLELISM
LEVERAGE OPTIMISATION STRATEGIES
• COMBINE SEVERAL APPROACHES : DOMAIN DECOMPOSITION
GENETIC ALGORITHMS
• // SOFTWARE LIBRARIES : MPI, ...
• GRIDS & PC-CLUSTERS
…
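A toy two-level sketch of the nesting idea (Python threads stand in for the real MPI/cluster levels; all function names are invented for illustration): the genetic algorithm parallelizes over individuals, and each fitness evaluation is itself parallelized by domain decomposition.

```python
from concurrent.futures import ThreadPoolExecutor

def solve_subdomain(subdomain):
    """Inner level: hypothetical per-subdomain solve (an MPI rank in reality)."""
    return sum(subdomain)

def evaluate_individual(individual, n_subdomains=4):
    """Inner parallelism: domain decomposition of one fitness evaluation."""
    chunks = [individual[i::n_subdomains] for i in range(n_subdomains)]
    with ThreadPoolExecutor(max_workers=n_subdomains) as inner:
        return sum(inner.map(solve_subdomain, chunks))

def ga_generation(population):
    """Outer parallelism: the GA evaluates all individuals concurrently."""
    with ThreadPoolExecutor(max_workers=len(population)) as outer:
        return list(outer.map(evaluate_individual, population))

fitness = ga_generation([[1, 2, 3, 4], [5, 6, 7, 8]])
# fitness == [10, 26]
```

The two pools never interfere: the outer one maps onto independent solver runs (a grid of clusters), the inner one onto the processors of a single cluster.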
Page 36
• Lays the groundwork for GRIDS and METACOMPUTING
• PC & Multiprocessor CLUSTERS : thousands of GHz procs...
• HIGH-SPEED NETWORKS : ATM, FIBER OPTICS...
ADVANCES IN HARDWARE
GLOBUS, LEGION
CONDOR, NETSOLVE
Gigabits/sec networks available (2.5, 10, …)
The best news….
Page 37
CLUSTER COMPUTING
PC-cluster at INRIA Rhône-Alpes (216 Pentium III + 200 Itanium procs., Linux)
Page 38
PARALLEL CFD OPTIMIZATION
PART 3
Page 39
« CAST » INTEGRATION PLATFORM
GOALS
• TESTBED
• “DECISION” CORBA INTEGRATION PLATFORM
• DESIGN FUTURE HPCN OPTIMISATION PLATFORMS
COLLABORATIVE MULTI-DISCIPLINE OPTIMISATION
GENETIC & PARALLEL OPTIMISATION ALGORITHMS
CODE COUPLING FOR CFD, CSM SOLVERS & OPTIMISERS
COLLABORATIVE APPLICATIONS SPECIFICATION TOOL
Page 40
The front stage….
Page 41
PROCESS ALGEBRA
InitBCGA:InitHybrid:BGGA:(TRUE:(END)+FALSE:(FUN:(TRUE:(HYBRID:
(TRUE: (=>InitHybrid)+FALSE:(=>FUN)))+FALSE:(=>BGGA))))
Page 42
TEST CASE
• SHOCK-WAVE INDUCED DRAG REDUCTION
• WING PROFILE OPTIMISATION (RAE2822)
• Euler eqns (Mach 0.84, aoa = 2°) + BCGA (100 gen.)
• 2D MESH : 14747 nodes, 29054 triangles
• 4.5 hours CPU time (SUN Micro SPARC 5, Solaris 2.5)
• 2.5 minutes CPU time (PC cluster 40 bi-procs, Linux)
Page 43
TEST CASE
WING PROFILE OPTIMISATION
Page 44
CAST DISTRIBUTED INTEGRATION PLATFORM
[Diagram: sites at NICE, RENNES and GRENOBLE linked by the VTHD Gbit/s network; PC clusters run the n CFD solvers, the GA optimiser and the CAST software]
Page 45
APPLICATION EXAMPLE
MULTI-ELEMENT WING PROFILE OPTIMISATION
[Figure: multi-element wing profile with slat and flap; Bezier spline between control points A(Xs,Ys) and B(Xf,Yf)]
Page 46
APPLICATION EXAMPLE
WING GEOMETRY
Page 47
APPLICATION EXAMPLE
OPTIMISATION STRATEGY
Page 48
APPLICATION EXAMPLE
PERFORMANCE DATA

Test case | Nproc | CPU (seconds) | Speedup (T1/Ti)
1         | 1     | 5722          | 1      (1 h 35 mn)
2         | 2     | 2583          | 2.01
3         | 5     | 1189          | 4.81
4         | 10    | 662           | 8.64
5         | 20    | 420           | 13.62
6         | 50    | 348           | 16.44
7         | 90    | 345           | 16.59
8         | 150   | 364           | 15.72  (6 mn)
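The speedup column can be recomputed directly from the CPU times (Python, arithmetic check only; note that the 2-processor row computes to 5722/2583 ≈ 2.22 rather than the listed 2.01):

```python
# CPU times in seconds, taken from the table above
cpu = {1: 5722, 2: 2583, 5: 1189, 10: 662,
       20: 420, 50: 348, 90: 345, 150: 364}

speedup = {n: round(cpu[1] / t, 2) for n, t in cpu.items()}
efficiency = {n: round(speedup[n] / n, 2) for n in cpu}

# Speedup peaks at 90 processors (16.59) and drops at 150 (15.72):
# beyond ~90 processors, extra communication outweighs the extra compute.
```

Parallel efficiency falls steadily with processor count, which motivates the communication/algorithm time decomposition shown a few slides later.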
Page 49
APPLICATION EXAMPLE
PERFORMANCE DATA
[Chart: CPU time (seconds) vs. number of processors (1 to 150), PHN-GA on PC-cluster]
Page 50
APPLICATION EXAMPLE
PERFORMANCE DATA
[Chart: CPU time T vs. number of processors NP, decomposed as Ttotal = Tcommunication + Talgorithme]
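The Ttotal = Tcommunication + Talgorithme decomposition explains the saturation seen in the measurements. A sketch with made-up coefficients (both numbers are purely illustrative, chosen only to resemble the measured scale) shows why a finite optimum processor count exists:

```python
def t_total(n, t_algo_seq=5700.0, t_comm_per_proc=2.0):
    """Toy cost model: perfectly parallel work plus a linearly growing
    communication term (both coefficients are invented for illustration)."""
    t_algorithme = t_algo_seq / n          # shrinks with more processors
    t_communication = t_comm_per_proc * n  # grows with more processors
    return t_algorithme + t_communication

# the minimum sits near sqrt(t_algo_seq / t_comm_per_proc) ≈ 53 processors
best_np = min(range(1, 201), key=t_total)
```

Past that optimum, adding processors makes the total time worse, which is qualitatively what the 150-processor measurement shows.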
Page 52
« CAST » INTEGRATION PLATFORM
Behind the stage, again...
[Diagram: CAST checks the request syntax and talks to the MICO ORB (NSD and IRD services); the genetic algorithm AlgoGen (Algogen.idl) sends individuals i1, i2, i3, …, in through an event channel to CfdSolver servers cfd1 and cfd2 on a grid of 3 PC-clusters]
Page 53
EMBEDDED PARALLELISM
[Diagram: the Genetic Algorithm, a CORBA client implemented in C++, broadcasts individuals i1, i2, i3, …, in through an event channel to CfdSolver servers Cfd1, Cfd2 and Cfd3, CORBA servers implemented in C++; each solver is parallelized with MPI on 4 processors P0–P3]
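The dispatch pattern on this slide can be mimicked with a thread-based toy (Python; the real platform used a CORBA event channel and MPI-parallel C++ solvers, and `i * i` stands in for an actual fitness evaluation):

```python
import queue
import threading

def cfd_solver(name, individuals, results):
    """Toy stand-in for a CfdSolver server draining the shared channel."""
    while True:
        try:
            i = individuals.get_nowait()
        except queue.Empty:
            return
        results.put((name, i, i * i))  # fake fitness evaluation

individuals, results = queue.Queue(), queue.Queue()
for i in range(1, 9):  # individuals i1 ... i8
    individuals.put(i)

# three concurrent solvers, like Cfd1, Cfd2, Cfd3 behind the event channel
solvers = [threading.Thread(target=cfd_solver,
                            args=(f"Cfd{k}", individuals, results))
           for k in (1, 2, 3)]
for s in solvers:
    s.start()
for s in solvers:
    s.join()

fitness = {}
while not results.empty():
    _, i, f = results.get()
    fitness[i] = f
# fitness == {1: 1, 2: 4, 3: 9, 4: 16, 5: 25, 6: 36, 7: 49, 8: 64}
```

Because the solvers pull from a shared channel, the faster ones automatically evaluate more individuals, a simple form of the load balancing discussed in the deployment slides.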
Page 54
APPLICATION EXAMPLE
PERFORMANCE DATA
[Chart: speedup (T1/Ti) vs. number of processors (1 to 150), PHN-GA on PC-cluster]
Page 55
APPLICATION DEPLOYMENT
The game : load balancing, ...
[Charts: Ag2DWithCorba execution time (s) vs. number of CfdSolvers, with the CfdSolvers at Sophia and CAST at Grenoble; and execution time with the GA placed at Grenoble, Sophia or Rennes, for 1 to 6 CfdSolvers]
• Curves are quasi-parallel => same speedup, wherever the GA is placed
• They approach a horizontal asymptote at time ≈ 200 s
Page 56
MULTIDISCIPLINARY APPLICATIONS
PART 4
Page 57
[Diagram: a multidisciplinary application ecosystem: modeling, deterministic/stochastic optimizers, validation methods, databases, graphic analysis tools and validation guidelines, linked through an integration platform and a communication system; disciplines include aerodynamics, aeroacoustics, aeroelasticity, thermal flows, aeronautics, propulsion, electronics facilities and the fluid atmospheric environment; goals include drag, noise and pollution reduction, safety and medical applications, driven by industrial multi-physics test cases & requirements and supported by multi-physics, numerical analysis, applied mathematics and grid computing]
MULTIDISCIPLINARY APPLICATIONS
Page 58
• HIGH PERFORMANCE COMPUTING
• HIGH THROUGHPUT COMPUTING
APPLICATIONS REQUIREMENTS
• MULTI-LAYERED ARCHITECTURE
HIGH ENERGY PHYSICS
CERN LHC FACILITY
BIOSCIENCES, ENGINEERING, ENVIRONMENTAL APPS, …
SATELLITE IMAGING
Page 59
• SHOULD OR COULD A GRID EMULATE A MAINFRAME ?
• HOW CAN COMPUTE MODELS BE ADAPTED TO MAKE BEST USE OF GRIDS ?
APPLICATIONS REQUIREMENTS
• WHERE DO GRIDS NOT MAKE SENSE ?
• WHAT IS THE REAL COST OF OWNING A GRID ?
• CAN UNUSED POWER OF DESKTOP BE HARNESSED ?
• HOW TO USE GRIDS FOR HIGH I/O APPLICATIONS ?
• HOW TO DESIGN GRIDS FOR HIGH AVAILABILITY ?
Page 60
• EXISTING PLATFORMS
Globus, Condor, NetSOLVE, Legion, ….
DESIGN ALTERNATIVES
• EXISTING TOOLS
NWS, Sun Grid Engine, …
Page 61
DESIGN ALTERNATIVES
• HARDWARE & SOFTWARE ORIENTED ENVIRONMENTS
System development & optimisation : PARIS (PADICO, PACO, DO…)
OASIS (ProActive, …)
APACHE (Athapascan, …)
• PROBLEM ORIENTED ENVIRONMENTS
Optimize specific problems & solutions : ReMAP (Madeleine, DIET, FAST…)
• APPLICATION ORIENTED
Ease of use : OPALE (CAST), …
Page 62
INTEGRATING MULTIDISCIPLINARY APPLICATIONS
• INTEGRATION OF PARTNERS’ EXPERTISE TO DEPLOY COLLABORATIVE APPLICATIONS
• NETWORKED PC-CLUSTERS, COMPUTERS & DATABASES TO SUPPORT MULTIDISCIPLINARY CHALLENGES
• HIGH-LEVEL PROCEDURES FOR CONCURRENT ENGINEERING (CSCW, VIRTUAL ORGANIZATIONS & ENTERPRISES …)
• INCLUDE CAD/CAM, MULTI-PHYSICS SOLVERS & OPTIMIZERS
Page 63
SCALABILITY
AIRFOIL OPTIMIZATION
ONERA M6 SUPERSONIC WING, AOA = 3°, MACH 1.8
[Figure: optimized vs. initial profile]
Page 64
PLATFORM REQUIREMENTS
• NEED FOR VIRTUAL REALITY ENVIRONMENT ?
• NEED FOR CSCW PROCEDURES & SUPPORT ?
• NEED FOR GRID COMPUTING ?
• NEED FOR DISTRIBUTED DATABASE TECHNOLOGY ?
Page 65
PERFORMANCE
AIRFOIL OPTIMIZATION
ONERA M6 SUPERSONIC WING, AOA = 3°, MACH 1.8
Page 66
MULTIPHYSICS APPLICATIONS
New methods and tools ( validation and optimization ) for solving Multidisciplinary Industrial Challenges
Multi-physics validation expertise spread in Research and Industry
• Cross-fertilize modeling, experimentation and scientific disciplines
• Single expertise revisited in a multi-disciplinary context :
Complexity at interfaces: validation of interfaces in multi physics, multi-scale and multi-modeling to provide a unified view of experiments and numerics
Page 67
ROBUSTNESS
Page 68
MULTIPHYSICS APPLICATIONS
Multidisciplinary/Multicriteria Optimization expertise spread in Research and Industry
Complexity of search spaces: robustness and efficiency of hybridized deterministic/adaptive optimization methods
- deterministic and global optimizers
- evolutionary optimizers
- hierarchy, game strategies and decision methods
Complexity at interfaces:CAD/CAM and Parameterization/Optimization
Page 69
NEW CHALLENGES
MULTIDISCIPLINARY DESIGN
• HIGH-LIFT DEVICES : 1 CRITERION / 1 DISCIPLINE (3D Navier-Stokes) : MAXIMIZE LIFT
• DRAG-BUFFETING : 2 CRITERIA / 1 DISCIPLINE (3D Navier-Stokes) : MINIMIZE CRUISE DRAG & MAXIMIZE Cz BUFFET
• AERO-ACOUSTICS & HIGH-LIFT DEVICES : 2 CRITERIA/ 2 DISCIPLINES (3D Navier-Stokes) : NOISE REDUCTION OF MULTI-ELEMENTS AIRFOILS DURING TAKE-OFF
• SUPERSONIC REGIME & BANG : 2 CRITERIA/ 2 DISCIPLINES (3D Navier-Stokes)
• SUPERSONIC REGIME & NOISE REDUCTION : 2 CRITERIA/ 2 DISCIPLINES (3D Navier-Stokes)
Page 70
[Diagram: RESEARCH CENTRES AND UNIVERSITIES (local solvers, deterministic/stochastic optimizers, validation codes, local databases, graphic analysis tools, validation guidelines, PC clusters), INDUSTRIES (industrial multi-physics test cases, high-performance computers, multi-physics optimisation, PC clusters) and GOVERNMENTAL INSTITUTIONS (generic multi-physics test cases), sharing distributed databases through a Web-based communication system, a grid-computing environment and a concurrent engineering platform]
THE PLATFORM
Page 71
THE PLATFORM
• COMMUNICATION SYSTEM
Supports interactions among partners and collaborative applications
• A DISTRIBUTED DATA MANAGEMENT SYSTEM
Supports remote partners' data and test-cases
• A COMPUTING SYSTEM
Supports partners grid-computing resources (PC-clusters, files, …)
Page 72
CURRENT ISSUES
PART 5
Page 73
ONGOING EFFORTS
• APPLICATIONS CHARACTERIZATION
• MULTIDISCIPLINE OPTIMIZATION
• MULTIDISCIPLINE MODELLING
AERO-STRUCTURE, AERO-ACOUSTICS : tight coupling
COMBUSTION, POLLUTION, NOISE REDUCTION : loose coupling
• DISTRIBUTED APPLICATIONS SCHEDULING
I/O PATTERNS, REAL-TIME ADAPTIVE RESOURCE CONTROL, DYNAMIC MONITORING
Page 74
COLLABORATIVE PROJECTS
• Performance monitoring : dynamic load balancing
• Integrating applications with grid computing technology
• Dynamic resource co-allocation, process & data migration
• Virtual organisations
ONGOING EFFORTS
Page 75
• MAY OVERLAP & OFFER SPECIFIC VIEWS OF FEDERATED RESOURCES
• DYNAMIC COLLECTIONS OF USERS & RESOURCES
• DISTRIBUTED ALLOCATION MANAGEMENT & SCHEDULING
VIRTUAL ORGANISATIONS
• MEMBERSHIP & ACCESS PROTOCOLS
• SCALABLE & ROBUST ARCHITECTURE & PROTOCOLS
• AGGREGATIONS OF DISTRIBUTED RESOURCES (VIRTUE)
Page 76
VIRTUAL ORGANISATIONS
Page 77
• HIERARCHICAL, GLOBALLY UNIQUE NAMES
• UNRELIABLE FAILURE DETECTORS
VIRTUAL ORGANISATIONS
• RESOURCE NAME + PROVIDER SCOPE & NAME
• INFORMATION PROVIDER + AGGREGATE DIRECTORIES + VO
• GRIS : GRID RESOURCE INFORMATION SERVICE (GLOBUS)
• DISK SPLITTING (PABLO, AUTOPILOT)
Page 78
• GENERIC INFO. SERVICES FOR RESOURCE DISCOVERY
• VIRTUAL ORGANISATIONS : VIRTUE (Dan Reed, UIUC)
• DISTRIBUTED APPLICATIONS STEERING (AUTOPILOT)
INTEGRATION WITH GRIDS
• MONITOR EXISTENCE & CHARACTERISTICS OF RESOURCES
• SERVICES & COMPUTATIONS MANAGEMENT
• INTERACTIVE REAL-TIME (I/O ?) PERFORMANCE TUNING
Page 79
PERFORMANCE MONITORING
Sensor design
Page 80
How to integrate them into new PSEs (Fortran, MPI vs. C, Java, C++) ?
LEGACY & NEW APPS
Interface with PSE (Sockets, CORBA, RMI, EJB, CCM, …) ?
Coupling with existing apps & maths libraries (user transparency) ?
Last but not least…
Page 81
FUTURE TRENDS
PART 6
Page 82
• DYNAMIC LOAD BALANCING & RESOURCE ALLOCATION
• « COTS » PROGRAMMING
• METACOMPUTING
TOMORROW’S PSE
« COMPONENTS OFF THE SHELF »
« POWER SUPPLY PARADIGM APPLIED TO COMPUTING RESOURCES WORLDWIDE »
Behind the stage, again...
MONITOR, START, SUSPEND, RESUME, STOP, MIGRATE
REMOTE PROCESSES DYNAMICALLY
Page 83
CONCLUSION
• VIRTUAL ORGANIZATIONS
• « COTS » PROGRAMMING
• METACOMPUTING
FLEXIBLE & INTEROPERABLE APPS DEVELOPMENT
LARGE SCALE MULTIDISCIPLINARY APPLICATIONS
COLLABORATIVE ENVIRONMENTS
REAL CSCW ON FULL SCALE PRODUCTION PROJECTS
FULL USER CONTROL
Page 84
CONCLUSION
« THE DIGITAL DYNAMIC AIRCRAFT »
LARGE DYNAMIC COLLABORATIVE ENVIRONMENTS
Page 85
REFERENCES
[email protected]
• http://www.inrialpes.fr/opale