High-Performance Computing Ecosystem in Europe
July 15th, 2009
Kimmo Koski
CSC – The Finnish IT Center for Science
Topics
1. Terminology and definitions
2. Emerging trends
3. Stakeholders
4. On-going Grid and HPC activities
5. Concluding remarks
Terminology and pointers
HPC
• High Performance Computing
HET, http://www.hpcineuropetaskforce.eu/
• High Performance Computing in Europe Taskforce, established in June 2006 with a mandate to draft a strategy for a European HPC ecosystem
Petaflop/s
• Performance figure: 10^15 floating-point operations (calculations) per second (an illustrative peak-performance calculation follows after this list)
e-IRG, http://www.e-irg.eu
• e-Infrastructure Reflection Group. e-IRG supports the creation of a framework (political, technological and administrative) for the easy and cost-effective shared use of distributed electronic resources across Europe, particularly for grid computing, storage and networking.
ESFRI, http://cordis.europa.eu/esfri/
• European Strategy Forum on Research Infrastructures. The role of ESFRI is to support a coherent approach to policy-making on research infrastructures in Europe, and to act as an incubator for international negotiations about concrete initiatives. In particular, ESFRI is preparing a European Roadmap for new research infrastructures of pan-European interest.
RI
• Research Infrastructure
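As a small illustration (not from the original slides) of what a "theoretical" petaflop/s means: peak performance is simply the product of core count, clock rate and floating-point operations per core per cycle. The numbers below are hypothetical.

\[
R_{\text{peak}} = N_{\text{cores}} \times f_{\text{clock}} \times \text{FLOPs/cycle},
\qquad
10^{5} \times 2.5\,\text{GHz} \times 4 = 10^{15}\ \text{flop/s} = 1\ \text{Pflop/s}.
\]

Sustained performance of real applications is typically only a fraction of this figure, which is why the later slide distinguishes several meanings of "petaflop/s".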
Terminology and pointers (cont.)
PRACE, http://www.prace-project.eu/
• Partnership for Advanced Computing in Europe. EU FP7 project for the preparatory phase of building the European petaflop computing centers, based on the HET work.
DEISA-2, https://www.deisa.org/
• Distributed European Infrastructure for Supercomputing Applications. DEISA is a consortium of leading national supercomputing centers that currently deploys and operates a persistent, production-quality, distributed supercomputing environment with continental scope.
EGEE-III, http://www.eu-egee.org/
• Enabling Grids for E-sciencE. The project provides researchers in academia and industry with access to a production-level Grid infrastructure, independent of their geographic location.
EGI_DS, http://www.eu-egi.org/
• An effort to establish a sustainable grid infrastructure in Europe.
GÉANT2, http://www.geant2.net/
• Seventh generation of the pan-European research and education network.
Computational science infrastructure: the performance pyramid
[Figure: performance pyramid. Tier-0: European HPC center(s), capability computing. Tier-1: national/regional centers and Grid collaboration. Tier-2: local centers, capacity computing. The pyramid is supported by PRACE, DEISA-2, EGEE-III and e-IRG.]
Need to remember about petaflop/s…
What do you mean by petaflop/s?
1. Theoretical petaflop/s?
2. LINPACK petaflop/s?
3. Sustained petaflop/s for a single, extremely parallel application?
4. Sustained petaflop/s for multiple parallel applications?
Note that several years may pass between reaching 1 and reaching 4.
Petaflop/s hardware needs petaflop/s applications, which are not easy to program, and in many cases not even possible.
• Do we even know how to scale over 100,000 processors? (An illustrative scaling calculation follows below.)
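A minimal illustration (not from the original slides; the serial fraction is an assumed example value) of why scaling to over 100,000 processors is hard: by Amdahl's law, even a serial fraction of 0.1% caps the achievable speedup far below the ideal.

\[
S(N) = \frac{1}{s + \frac{1-s}{N}},
\qquad
s = 0.001,\; N = 10^{5}:\quad
S \approx \frac{1}{0.001 + 10^{-5}} \approx 990 \ll 10^{5}.
\]

Real applications additionally face load imbalance, communication and I/O overheads on top of this bound.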
Emerging trends
[Figure: today each research community maintains its own stack of human interaction, workspace, labs, scientific data, computing/Grid and network. The emerging model moves towards global virtual research communities that share virtual labs, scientific data, Grid and network layers, yielding economies of scale and efficiency gains.]
Data and information explosion
• 1 gigabyte (1 GB) = 1,000 MB: a CD album
• 1 terabyte (1 TB) = 1,000 GB: the world's yearly book production
• 1 petabyte (1 PB) = 1,000 TB: the yearly data production of one LHC experiment
• 1 exabyte (1 EB) = 1,000 PB: the world's yearly information production
Petascale computing produces exascale data.
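A rough illustration (not from the original slides; the bandwidth figure is assumed) of what these volumes mean for data movement: even a single petabyte takes more than a day to write or transfer at a sustained 10 GB/s.

\[
\frac{1\ \text{PB}}{10\ \text{GB/s}} = \frac{10^{15}\ \text{B}}{10^{10}\ \text{B/s}} = 10^{5}\ \text{s} \approx 28\ \text{hours}.
\]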
HPC is a part of a larger ecosystem
• HPC and Grid infrastructures
• Data infrastructures and services
• Software development
• Disciplines and user communities
• Competence
HPC Ecosystem to support the top
• The upper layers of the pyramid: HPC centers and services; European projects (HPC/Grid, networking, …)
• Activities which enable efficient usage of the upper layers: inclusion of national HPC infrastructures; software development and scalability issues; competence development
• Interoperability between the layers
Stakeholders
Stakeholder categories in PRACE
• Providers of HPC services
• European HPC and grid projects
• Networking infrastructure providers
• Hardware vendors
• Software vendors and the software-developing academic community
• End users and their access through related Research Infrastructures
• Funding bodies at national and international level
• Policy-setting organisations directly involved in developing the research infrastructure, and political bodies such as parliaments responsible for national and international legislation
Policy and strategy work
• HET: HPC in Europe Taskforce, http://www.hpcineuropetaskforce.eu/
• e-IRG: e-Infrastructure Reflection Group, http://www.e-irg.org/
• ESFRI: European Strategy Forum on Research Infrastructures, http://www.cordis.lu/esfri/
• ERA Expert Group on Research Infrastructures
Some focus areas
• Collaboration between research and e-infrastructure providers
• Horizontal ICT services
• Balanced approach: more focus on data, software development and competence development
• Inclusion of different countries, with different contribution levels
• New emerging technologies and innovative computing initiatives
• Global collaboration, for example the Exascale computing initiative
• Policy work, resource exchange, sustainable services, etc.
On-going Grid and HPC activities
EU infrastructure projects
[Figure: the EU infrastructure projects, including a number of data infrastructure projects, built on top of the GÉANT network.]
Supercomputing Drives Science through Simulation
• Environment: weather/climatology, pollution/ozone hole
• Ageing society: medicine, biology
• Energy: plasma physics, fuel cells
• Materials / information technology: spintronics, nanoscience
PRACE Initiative: History and First Steps (HPCEUR, HET)
Timeline 2004-2008:
• Bringing scientists together; creation of the Scientific Case
• Production of the HPC part of the ESFRI Roadmap
• Creation of a vision involving 15 European countries; signature of the MoU
• Submission of an FP7 project proposal; approval of the project
• Project start
HET: The Scientific Case
• Weather, climatology, earth science: degree of warming, scenarios for our future climate; understanding and predicting ocean properties and variations; weather and flood events
• Astrophysics, elementary particle physics, plasma physics: systems and structures spanning a large range of length and time scales; quantum field theories such as QCD; ITER
• Materials science, chemistry, nanoscience: understanding complex materials, complex chemistry and nanoscience; determination of electronic and transport properties
• Life science: systems biology, chromatin dynamics, large-scale protein dynamics, protein association and aggregation, supramolecular systems, medicine
• Engineering: complex helicopter simulation, biomedical flows, gas turbines and internal combustion engines, forest fires, green aircraft, virtual power plant
First success: HPC in the ESFRI Roadmap
• The European Roadmap for Research Infrastructures is the first comprehensive definition at the European level
• Research Infrastructures are one of the crucial pillars of the European Research Area
• Foreseen impact of a European HPC service: strategic competitiveness, attractiveness for researchers, support for industrial development
Second success: The PRACE Initiative
• Memorandum of Understanding signed by 15 states in Berlin on April 16, 2007
• France, Germany, Spain, the Netherlands and the UK committed funding for a European HPC Research Infrastructure (LoS)
Third success: the PRACE project
Partnership for Advanced Computing in Europe
• EU project under the European Commission's 7th Framework Programme, call FP7-INFRASTRUCTURES-2007-1 (construction of new infrastructures, preparatory phase)
• Partners: 16 legal entities from 14 European countries
• Budget: 20 million EUR, of which EU funding: 10 million EUR
• Duration: January 2008 to December 2009
• Grant no: RI-211528
PRACE Partners
1 (Coord.) Forschungszentrum Juelich GmbH, FZJ, Germany
2 Universität Stuttgart – HLRS, USTUTT-HLRS, Germany
3 LRZ der Bay. Akademie der Wissenschaften, BADW-LRZ, Germany
4 Grand Equipement national pour le Calcul I., GENCI, France
5 Engineering and Phys. Sciences Research C., EPSRC, United Kingdom
6 Barcelona Supercomputing Center, BSC, Spain
7 CSC Scientific Computing Ltd., CSC, Finland
8 ETH Zürich – CSCS, ETHZ, Switzerland
9 Netherlands Computing Facilities Foundation, NCF, Netherlands
10 Joh. Kepler Universitaet Linz, GUP, Austria
11 Swedish National Infrastructure for Comp., SNIC, Sweden
12 CINECA Consorzio Interuniversitario, CINECA, Italy
13 Poznan Supercomputing and Networking C., PSNC, Poland
14 UNINETT Sigma AS, SIGMA, Norway
15 Greek Research and Technology Network, GRNET, Greece
16 Universidade de Coimbra, UC-LCA, Portugal
PRACE Work Packages
• WP1 Management
• WP2 Organizational concept
• WP3 Dissemination, outreach and training
• WP4 Distributed computing
• WP5 Deployment of prototype systems
• WP6 Software enabling for prototype systems
• WP7 Petaflop systems for 2009/2010
• WP8 Future petaflop technologies
PRACE Objectives in a Nutshell
• Provide world-class systems for world-class science
• Create a single European entity
• Deploy 3–5 systems of the highest performance level (tier-0)
• Ensure diversity of architectures
• Provide support and training
PRACE will be created to stay.
Representative Benchmark Suite
• Defined a set of application benchmarks to be used in the procurement process for petaflop/s systems
• 12 core applications, plus 8 additional applications
– Core: NAMD, VASP, QCD, CPMD, GADGET, Code_Saturne, TORB, ECHAM5, NEMO, CP2K, GROMACS, N3D
– Additional: AVBP, HELIUM, TRIPOLI_4, PEPC, GPAW, ALYA, SIESTA, BSIT
• Each application will be ported to the appropriate subset of prototypes
• Synthetic benchmarks for architecture evaluation: computation, mixed-mode, I/O, bandwidth, OS, communication
• Applications and synthetic benchmarks are integrated into JuBE, the Juelich Benchmark Environment
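As an aside (this is not part of the PRACE suite or of JuBE; the function name, matrix size and repeat count are arbitrary), a minimal Python sketch of what an application benchmark ultimately reports: the sustained floating-point rate of a kernel, which can then be compared with the theoretical peak discussed earlier.

```python
import time
import numpy as np

def sustained_gflops(n=2048, repeats=5):
    """Time a dense matrix multiply and report sustained GFLOP/s.

    An n x n matrix product costs roughly 2*n**3 floating-point
    operations; dividing by the best wall-clock time gives a sustained
    rate, which is typically well below the theoretical peak.
    """
    a = np.random.rand(n, n)
    b = np.random.rand(n, n)
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        c = a @ b                      # the measured kernel
        best = min(best, time.perf_counter() - t0)
    return 2.0 * n**3 / best / 1e9

if __name__ == "__main__":
    print(f"Sustained rate: {sustained_gflops():.1f} GFLOP/s")
```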
Mapping Applications to Architectures
• Identified affinities and priorities, based on the application analysis, expressed in a condensed, qualitative way
• Need for different "general purpose" systems
• There are promising emerging architectures
• Will become more quantitative after benchmark runs on the prototypes
Installed prototypes
• IBM BlueGene/P (FZJ): 01-2008
• IBM Power6 (SARA): 07-2008
• Cray XT5 (CSC): 11-2008
• IBM Cell/Power (BSC): 12-2008
• NEC SX9, vector part (HLRS): 02-2009
• Intel Nehalem/Xeon (CEA/FZJ): 06-2009
Summary of current prototype status (June 2009)
• IBM BlueGene/P at FZJ: system installed: yes; in production: yes; technical assessment: yes; evaluation of communication and I/O infrastructure: yes; evaluation and benchmarking of user applications: started
• IBM Power6 at SARA: installed: yes; in production: yes; technical assessment: nearly; communication and I/O evaluation: nearly; application benchmarking: started
• Cray XT at CSC: installed: yes; in production: yes; technical assessment: yes; communication and I/O evaluation: yes; application benchmarking: started
• IBM Cell/Power at BSC: installed: yes; in production: yes; technical assessment: nearly; communication and I/O evaluation: nearly; application benchmarking: started
• NEC SX9/x86 at HLRS: installed: nearly; in production: nearly; technical assessment: started; communication and I/O evaluation: started; application benchmarking: started (vector part)
• Intel Nehalem/Xeon at CEA/FZJ: installed: yes; in production: nearly; technical assessment: started; communication and I/O evaluation: started; application benchmarking: not started
Web site and the dissemination channels
• The PRACE web presence with news, events, RSS feeds etc.: http://www.prace-project.eu
• Alpha-Galileo service, reaching 6,500 journalists around the globe: http://www.alphagalileo.org
• Belief Digital Library
• HPC magazines
• PRACE partner sites, top-10 HPC users
[Screenshot: the PRACE website, www.prace-project.eu]
PRACE Dissemination Package
• PRACE WP3 has created a dissemination package including templates, brochures, flyers, posters, badges, t-shirts and USB keys
[Images: the "Heavy Computing 10^15" PRACE t-shirt, the PRACE USB key and the PRACE logo]
The following DEISA slides are from Andreas Schott's presentation at SC'08, Austin, 2008-11-19 (RI-222919).
DEISA Partners
• DEISA: May 1st, 2004 – April 30th, 2008
• DEISA2: May 1st, 2008 – April 30th, 2011
• BSC, Barcelona Supercomputing Centre, Spain
• CINECA, Consortio Interuniversitario per il Calcolo Automatico, Italy
• CSC, Finnish Information Technology Centre for Science, Finland
• EPCC, University of Edinburgh and CCLRC, UK
• ECMWF, European Centre for Medium-Range Weather Forecast, UK (international)
• FZJ, Research Centre Juelich, Germany
• HLRS, High Performance Computing Centre Stuttgart, Germany
• IDRIS, Institut du Développement et des Ressources en Informatique Scientifique – CNRS, France
• LRZ, Leibniz Rechenzentrum Munich, Germany
• RZG, Rechenzentrum Garching of the Max Planck Society, Germany
• SARA, Dutch National High Performance Computing, Netherlands
• CEA-CCRT, Centre de Calcul Recherche et Technologie, CEA, France
• KTH, Kungliga Tekniska Högskolan, Sweden
• CSCS, Swiss National Supercomputing Centre, Switzerland
• JSCC, Joint Supercomputer Center of the Russian Academy of Sciences, Russia
www.deisa.eu
DEISA 2008: Operating the European HPC Infrastructure
• Grand Challenge projects performed on a regular basis
• More than 1 petaflop/s aggregated peak performance
• The most powerful European supercomputers for the most challenging projects
• Top-level, Europe-wide application enabling
DEISA Core Infrastructure and Services
• Dedicated high-speed network
• Common AAA: single sign-on; accounting and budgeting
• Global data management: high-performance remote I/O and data sharing with global file systems; high-performance transfers of large data sets
• User operational infrastructure: Distributed Common Production Environment (DCPE); job management service; common user support and help desk
• System operational infrastructure: common monitoring and information systems; common system operation
• Global application support
DEISA dedicated high-speed network
[Figure: the DEISA network interconnects the partner sites through the national research networks RENATER, FUNET, SURFnet, DFN, GARR, UKERNA and RedIris, using a mix of 1 Gb/s GRE tunnels and 10 Gb/s wavelength, routed and switched links.]
DEISA Global File System (based on MC-GPFS)
[Figure: the multi-cluster GPFS spans heterogeneous partner systems (IBM P5, IBM P6 with BlueGene/P, IBM PowerPC, Cray XT4/XT5, SGI Altix, NEC SX8) running AIX, Linux, UNICOS/lc and Super-UX with batch systems such as LoadLeveler (LL/LL-MC), PBS Pro, Maui/Slurm and NQS II; GridFTP links sites outside the shared file system.]
DEISA Software Layers
[Figure: service layers on top of the DEISA sites. Network and AAA layer: unified AAA, network connectivity. Data management layer: WAN shared file system, data transfer tools, data staging tools. Job management and monitoring layer: job rerouting, single monitoring system, co-reservation and co-allocation, workflow management. Presentation layer: multiple ways to access, common production environment.]
[Figure: two pyramids spanning the EU, national and local levels: supercomputer hardware performance on one side, and supercomputer application-enabling requirements on the other.]
• Capability computing will always need expert support for application enabling and optimization
• The more resource-demanding a single problem is, the higher, in general, are the requirements for application enabling, including enhancing scalability
DEISA Organizational Structure
WP1 – Management
WP2 – Dissemination, External Relations, Training
WP3 – Operations
WP4 – Technologies
WP5 – Applications Enabling
WP6 – User Environment and Support
WP7 – Extreme Computing (DECI) and Benchmark Suite
WP8 – Integrated DEISA Development Environment
WP9 – Enhancing Scalability
Evolution of Supercomputing Resources
• DEISA partners' compute resources at DEISA project start (2004): ~30 TF aggregated peak performance
• DEISA partners' resources at DEISA2 project start (2008): over 1 PF aggregated peak performance on state-of-the-art supercomputers
– Cray XT4 and XT5, Linux
– IBM Power5, Power6, AIX / Linux
– IBM BlueGene/P, Linux (frontend)
– IBM PowerPC, Linux (MareNostrum)
– SGI Altix 4700 (Itanium2 Montecito), Linux
– NEC SX8 vector system, Super-UX
• Systems interconnected with dedicated 10 Gb/s network links provided by GÉANT2 and the NRENs
• A fixed fraction of resources is dedicated to DEISA usage
DEISA Extreme Computing Initiative (DECI)
• DECI was launched in early 2005 to enhance DEISA's impact on science and technology
• Identification, enabling, deployment and operation of "flagship" applications in selected areas of science and technology
• Complex, demanding, innovative simulations requiring the exceptional capabilities of DEISA
• Multi-national proposals are especially encouraged
• Proposals are reviewed by national evaluation committees
• Projects are chosen on the basis of innovation potential, scientific excellence, relevance criteria and national priorities
• The most powerful HPC architectures in Europe for the most challenging projects
• The most appropriate supercomputer architecture is selected for each project
• Mitigation of the rapid performance decay of a single national supercomputer within its short lifetime cycle of typically about 5 years, as implied by Moore's law (an illustrative calculation follows below)
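A back-of-the-envelope illustration (not from the original slides; the doubling period is an assumed rule of thumb) of the "performance decay" point: if peak performance at fixed cost doubles roughly every two years, a machine that was state of the art at installation has fallen far behind by the end of a five-year lifetime.

\[
\text{relative peak after } t \text{ years} \approx 2^{t/2},
\qquad t = 5:\quad 2^{2.5} \approx 5.7,
\]

so the newest systems are roughly five to six times faster than the aging machine, which is what access to the most recent architectures across DEISA mitigates.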
DEISA Extreme Computing Initiative
• Involvements in projects from the DECI calls 2005, 2006 and 2007: 157 research institutes and universities from 15 European countries (Austria, Finland, France, Germany, Hungary, Italy, Netherlands, Poland, Portugal, Romania, Russia, Spain, Sweden, Switzerland, UK), with collaborators from four other continents (North America, South America, Asia, Australia)
DEISA Extreme Computing Initiative: calls for proposals for challenging supercomputing projects from all areas of science
• DECI call 2005: 51 proposals, 12 European countries involved, co-investigators from the US; 30 million CPU-hours requested; 29 proposals accepted, 12 million CPU-hours awarded (normalized to IBM P4+)
• DECI call 2006: 41 proposals, 12 European countries involved, co-investigators from North and South America and Asia (US, CA, AR, Israel); 28 million CPU-hours requested; 23 proposals accepted, 12 million CPU-hours awarded (normalized to IBM P4+)
• DECI call 2007: 63 proposals, 14 European countries involved, co-investigators from North and South America, Asia and Australia (US, CA, BR, AR, Israel, AUS); 70 million CPU-hours requested; 45 proposals accepted, ~30 million CPU-hours awarded (normalized to IBM P4+)
• DECI call 2008: 66 proposals, 15 European countries involved, co-investigators from North and South America, Asia and Australia; 134 million CPU-hours requested (normalized to IBM P4+); evaluation in progress
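A minimal sketch (purely illustrative, not DEISA's actual procedure; the per-core factors and system names below are invented placeholders) of what "normalized to IBM P4+" means in practice: CPU-hours granted on different architectures are rescaled by a per-core performance factor relative to the reference system before totals are compared.

```python
# Illustrative only: these normalization factors are made-up placeholders,
# not DEISA's real values.
REFERENCE = "IBM P4+"

FACTORS = {           # hypothetical performance of one core relative to an IBM P4+ core
    "IBM P4+": 1.0,
    "Cray XT4": 1.5,
    "IBM BlueGene/P": 0.4,
}

def normalized_cpu_hours(raw_hours: float, system: str) -> float:
    """Convert CPU-hours accounted on `system` into reference-system CPU-hours."""
    return raw_hours * FACTORS[system]

if __name__ == "__main__":
    grants = [("Cray XT4", 2.0e6), ("IBM BlueGene/P", 5.0e6)]
    total = sum(normalized_cpu_hours(hours, system) for system, hours in grants)
    print(f"Total, normalized to {REFERENCE}: {total / 1e6:.1f} million CPU-hours")
```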
DECI Project POLYRES
B. J. Reynwar et al.: Aggregation and vesiculation of membrane proteins by curvature-mediated interactions, Nature 447, 461-464 (24 May 2007), doi:10.1038/nature05840
Cover story of Nature, May 24, 2007
a) Proteins (red) adhere to a membrane (blue/yellow) and locally bend it;
b) this triggers a growing invagination;
c) cross-section through an almost complete vesicle.
Curvy membranes make proteins attractive: for almost two decades, physicists have been on the track of membrane-mediated interactions. Simulations in DEISA have now revealed that curvy membranes make proteins attractive.
Achievements and Scientific Impact
Brochures can be downloaded from http://www.deisa.eu/publications/results
Evolution of User Categories in DEISA
[Figure: timeline 2002-2011. After the DEISA EoI and a preparatory phase, FP6 DEISA starts in 2004 and FP7 DEISA2 in 2008. User categories evolve from early adopters (Joint Research Activities), through single-project support in the DEISA Extreme Computing Initiative, to support of virtual communities and EU projects.]
Tier-0 / Tier-1 Centers: are there implications for the services?
• Tier-0 centers: leadership-class European systems competing with the leading systems worldwide, cyclically renewed; governance structure to be provided by a European organization (PRACE)
• Tier-1 centers: leading national centers, cyclically renewed, optionally surpassing the performance of older Tier-0 machines; national governance structure
• Services have to be the same in Tier-0 and Tier-1, because the status of a system changes over time and for user transparency across the different systems (only visible difference: some services could have different flavors for Tier-0 and Tier-1)
• The main difference between Tier-0 and Tier-1 centers is policy and usage models
• Tier-1 centers can evolve to Tier-0 for strategic or political reasons; Tier-0 machines automatically degrade to Tier-1 level by aging
Summary
• Evolution of this European infrastructure towards a robust and persistent European HPC ecosystem
• Enhancing the existing services, deploying new services including support for European virtual communities, and cooperating and collaborating with new European initiatives, especially PRACE
• DEISA2 as the vector for the integration of Tier-0 and Tier-1 systems in Europe
• Providing a lean and reliable turnkey operational solution for a persistent European HPC infrastructure
• Bridging worldwide HPC projects: facilitating the support of international science communities with computational needs traversing existing political boundaries
EGEE Status April 2009
Infrastructure
• Number of sites connected to the EGEE infrastructure: 268
• Number of countries connected to the EGEE infrastructure: 54
• Number of CPUs (cores) available to users 24/7: ~139,000
• Storage capacity available: ~25 PB disk + 38 PB tape MSS
Users
• Number of Virtual Organisations using the EGEE infrastructure: >170
• Number of registered Virtual Organisations: >112
• Number of registered users: >13,000
• Number of people benefiting from the existence of the EGEE infrastructure: ~20,000
• Number of jobs: >390k jobs/day
• Number of application domains making use of the EGEE infrastructure: more than 15
Are we ready for the demand?
[Figure: e-infrastructures positioned along two axes, from testbeds to productive, utility-like use, and from national to global scope: the European e-Infrastructure.]
38 National Grid Initiatives
EGI Objectives (1/3)
• Ensure the long-term sustainability of the European infrastructure
• Coordinate the integration and interaction between National Grid Infrastructures
• Operate the European level of the production Grid infrastructure for a wide range of scientific disciplines, to link National Grid Infrastructures
• Provide global services and support that complement and/or coordinate national services
• Collaborate closely with industry as technology and service providers, as well as Grid users, to promote the rapid and successful uptake of Grid technology by European industry
EGI Objectives (2/3)
• Coordinate middleware development and standardization to enhance the infrastructure by soliciting targeted developments from leading EU and National Grid middleware development projects
• Advise National and European Funding Agencies in establishing their programmes for future software developments based on agreed user needs and development standards
• Integrate, test, validate and package software from leading Grid middleware development projects and make it widely available
EGI Objectives (3/3)
• Provide documentation and training material for the middleware and operations
• Take into account developments made by national e-science projects aimed at supporting diverse communities
• Link the European infrastructure with similar infrastructures elsewhere
• Promote Grid interface standards based on practical experience gained from Grid operations and middleware integration activities, in consultation with relevant standards organizations
EGI Vision Paper: http://www.eu-egi.org/vision.pdf
Integration and interoperability
• PRACE and EGI target a sustainable infrastructure; DEISA-2 and EGEE-III are project-based
• National stakeholders are sometimes partners in multiple initiatives
• Users do not necessarily care where they get the service, as long as they get it
• Integration of PRACE and DEISA and a transition from EGEE to EGI are possible; going further requires creative thinking
New HPC Ecosystem is being built…
New market for European HPC
• 44 new research infrastructure projects on the ESFRI list, 34 of them running a preparatory-phase project
– 1-4 years, 1-7 MEUR * 2 (petaflop computing: 10 MEUR * 2)
• Successful new research infrastructures start construction in 2009-2011
– 10-1000 MEUR per infrastructure
– The first ones start to deploy: ESS in Lund, etc.
• Existing research infrastructures are also developing: CERN, EMBL, ESA, ESO, ECMWF, ITER, …
• Results:
– Growing RI market, considerably rising funding volume: several BEUR for ICT
– Need for horizontal activities (computing, data, networks, computational methods and scalability, application development, …)
– Real danger of building disciplinary silos instead of seeking IT synergy
Some Key Issues in building the ecosystem
• Sustainability: EGEE and DEISA are projects with an end; PRACE and EGI are targeted to be sustainable, with no definite end
• ESFRI and e-IRG: how do the research side and the infrastructure side work together? Two-directional input is requested
• Requirement for horizontal services: let's not create disciplinary IT silos; synergy is required for cost efficiency and excellence
• ICT infrastructure is essential for research: the role of computational science is growing
• Renewal and competence: will Europe run out of competent people? Will training and education programmes react fast enough?
Requirements of a sustainable HPC Ecosystem
1. How to guarantee access to the top for selected groups?
2. How to ensure there are competent users who can use the high-end resources?
3. How to involve all countries that can contribute?
4. How to develop competence at home?
5. How to boost collaboration between research and e-infrastructure providers?
6. What are the principles of resource exchange (in-kind)?
[Figure: the performance pyramid again: European centers; national/regional centers and Grid collaboration; universities and local centers.]
Conclusions
Some conclusions
• There are far too many acronyms in this field
• We need to collaborate in providing e-infrastructure
– From disciplinary silos to horizontal services
– Building trust between research and service providers
• Moving from project-based work to sustainable research infrastructures
• Balanced approach: focus not only on computing but also on data, software development and competence
• Driven by user community needs: technology is a tool, not a target
• The ESFRI list and other development plans will boost the market for ICT services in research
• Interoperability and integration of initiatives will be seriously discussed
Final words to remember
“The problems are not solved by computers nor by any other e-infrastructure, they are solved by people”
Kimmo Koski, today