Cracow Grid Workshop, October 2006
D-Grid in International Context
Wolfgang Gentzsch
with support from
Tony Hey et al., Satoshi Matsuoka, Kazushige Saga, Hai Jin, Bob Jones, Charlie Catlett, Dane Skow,
and the Renaissance Computing Institute at UNC Chapel Hill, North Carolina
Grid Initiatives

Initiative       Time       Funding     People *)    Users
UK e-Science-I   2001-2004  $180M       900          Res.
UK e-Science-II  2004-2006  $220M       1100         Res., Ind.
TeraGrid-I       2001-2004  $90M        500          Res.
TeraGrid-II      2005-2010  $150M       850          Res.
ChinaGrid-I      2003-2006  20M RMB     400          Res.
ChinaGrid-II     2007-2010  50M RMB *)  1000         Res.
NAREGI-I         2003-2005  $25M        150          Res.
NAREGI-II        2006-2010  $40M *)     250 *)       Res., Ind.
EGEE-I           2004-2006  $40M        800          Res.
EGEE-II          2006-2008  $45M        1000         Res., Ind.
D-Grid-I         2005-2008  $25M        220          Res.
D-Grid-II        2007-2009  $25M        220 (= 440)  Res., Ind.

*) estimate
Main Objectives of the Grid Projects

UK e-Science: To enable the next generation of multi-disciplinary collaborative science and engineering, enabling faster, better or different research.
EGEE: To provide a seamless grid infrastructure for e-Science that is available to scientists 24 hours a day.
ChinaGrid: To provide a research and education platform, built with grid technology, for faculty and students at the major universities in China.
NAREGI: To research, develop and deploy science grid middleware.
TeraGrid: To create a unified cyberinfrastructure supporting a broad array of US science activities using the suite of NSF HPC facilities.
D-Grid: To build and operate a sustainable grid service infrastructure for German research (D-Grid-1) and for research and industry (D-Grid-2).
Community Grids are all about:

• Sharing resources:
  - Small, medium and large enterprises share networks, computers, storage, software, data, ...
  - Researchers share the same, plus large experiments, instruments, sensor networks, etc.

• Collaboration:
  - Enterprise departments with their suppliers and peers (e.g. design)
  - Research teams distributed around the world (HEP, astronomy, climate)

• Doing things which have not been possible before:
  - Grand challenges needing huge amounts of computing and data
  - Combining distributed datasets into one virtual data pool (genome research); a minimal sketch follows below
  - "Mass grids" for the people (distributed digital libraries, digital school laboratories, etc.)
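Roughly what "one virtual data pool" means in practice, as a minimal sketch: a single logical namespace resolving to physical replicas held at different sites. All names and URLs here are hypothetical; the grids in this deck used catalog services such as RLS, SRB or dCache for this job.

```python
# Minimal sketch of a "virtual data pool": one logical namespace over
# replicas stored at different sites. Names are hypothetical; production
# grids used services such as RLS, SRB or dCache for this.
from dataclasses import dataclass, field

@dataclass
class VirtualDataPool:
    # logical file name -> physical replica URLs at participating sites
    catalog: dict = field(default_factory=dict)

    def register(self, logical_name: str, replica_url: str) -> None:
        """Add one physical replica under a logical name."""
        self.catalog.setdefault(logical_name, []).append(replica_url)

    def resolve(self, logical_name: str) -> str:
        """Return one replica; a real broker would rank by locality/load."""
        replicas = self.catalog.get(logical_name)
        if not replicas:
            raise KeyError(f"no replica registered for {logical_name}")
        return replicas[0]

pool = VirtualDataPool()
pool.register("genome/chr1.fa", "gsiftp://site-a.example.org/data/chr1.fa")
pool.register("genome/chr1.fa", "gsiftp://site-b.example.org/mirror/chr1.fa")
print(pool.resolve("genome/chr1.fa"))
```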
UK e-Science Grid (Neil Geddes, CCLRC e-Science)

[Map of the UK e-Science Grid centres: Belfast, Cambridge, Cardiff, DL, Edinburgh, Glasgow, Hinxton, London, Manchester, Newcastle, Oxford, RAL, Southampton]

Application independent
TeraGrid: A National Production CI Facility (May 2006, Charlie Catlett, [email protected])

[Map of TeraGrid resource-provider and partner sites: SDSC, TACC, UC/ANL, NCSA, ORNL, PU, IU, PSC, NCAR, USC/ISI, Caltech, UNC, UW]

20+ distinct computing resources: 150 TF today, 400 TF by 2007

Phase I (2001-2004): Design, Deploy, Expand ($90M over 4 years)
Phase II (2005-2010): Operation & Enhancement ($150M over 5 years, beginning August 2005)
ChinaGrid (to date)
EGEE Partner Landscape
http://gridportal.hep.ph.ic.ac.uk/rtm/applet.html
GOC, German Core Grid Sites

[Map of core grid sites: PC², RRZN, TUD, RZG, LRZ, RWTH, FZJ, FZK, FHG/ITWM, Uni-KA]

Site            Resource                                                     Amount
FZJ/ZAM         IBM supercomputer with 8.5 TFlops                            32 CPUs
                STK data robot system with 2.8 PByte                         300 TByte
FZK/IWR         8 nodes Opteron 2x2.2 GHz                                    100%
                8 processors of a NEC SX-5 system                            50%
                1 p630 with 4 processors                                     50%
                1 SX-6i for tests                                            50%
                2 nodes Opteron 2x2.2 GHz for tests                          50%
LRZ             SGI high-performance system with 20 TFlop/s,                 5% / 5% / 5%
                Intel IA32 and IA64 clusters, IBM p690, SunFire 80
MPI/RZG         IBM supercomputer with 4.5 TFlops, PC cluster with 2 TFlops  32 CPUs
                Data robot system with 8 PByte                               400 TByte
PC²             Cluster of 400 Xeon 64-bit processors, high-performance      10%
                visualization, and FPGAs
RWTH/RZ         2 SunFire 6900 with 24 UltraSPARC IV each                    100%
TU-Dresden/ZIH  SGI O2K (56 proc.) / O3K (192 proc.), T3E (64 proc.),        10% / 20% / 20% / 2%
                PC cluster with 30 processors; end of 2005: new system
                with 1000 proc.
Uni-H/RRZN      PC cluster with 64 CPUs                                      assoc.
Uni-KA          PC pool                                                      assoc.
FHG/ITWM                                                                     assoc.
The German D-Grid Initiative *)
D-Grid-1: Services for Scientists
*) funded by the German Ministry for Education and Research
German e-Science Initiative, Key Objectives

• Building a grid infrastructure in Germany
  - Combine the existing German grid activities for infrastructure, middleware, and applications
  - Integrate the middleware components developed in the Community Grids
• Development of e-science services for the research community (Science Service Grid)
• Important: continuing a sustainable production grid infrastructure after the end of the funding period
  - Integration of new grid communities (2nd generation)
  - Business models for grid services
D-Grid Projects

• Integration Project: generic grid middleware and grid services
• Community Grids: AstroGrid, C3-Grid, HEP-Grid, InGrid, MediGrid, TextGrid, ...
• Knowledge management ("Im Wissensnetz"): ONTOVERSE, WIKINGER, WISENT, eSciDoc, VIOLA, D-Grid Knowledge Management
D-Grid Structure (courtesy Dr. Krahl, PT/BMBF)

• Community Grids: each with its application, its Community Grid middleware, and grid-specific developments
• Generic grid middleware and grid services, provided by the Integration Project
• Information and knowledge management
DGI Infrastructure Project

WP 1: D-Grid basic software components, sharing resources, large storage, data interfaces, virtual organizations, management
WP 2: Develop, operate and support a robust core grid infrastructure; resource description, monitoring, accounting, and billing
WP 3: Network (transport protocols, VPN), security (AAI, CAs, firewalls)
WP 4: Business platform and sustainability, project management, communication and coordination

Goals: a scalable, extensible, generic grid platform for the future; long-term, sustainable, SLA-based grid operation
D-Grid Middleware

• User level: application development and user access (GridSphere portal framework, GAT API, plug-ins; a plug-in sketch follows below)
• High-level grid services: scheduling and workflow management, monitoring, accounting and billing, user/VO management, data management, security
• Basic grid services: LCG/gLite, Globus 4.0.1, UNICORE
• Resources in D-Grid: distributed compute resources, distributed data archives, data/software, network infrastructure
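To make the plug-in idea concrete, a minimal sketch; the class and method names are hypothetical stand-ins, not the actual GAT API. One neutral job interface, with one adapter per underlying middleware, means application code does not change when the grid system underneath does.

```python
# Sketch only: the middleware layer above lets one application reach
# gLite, Globus and UNICORE through a common API with per-system
# plug-ins. All names below are hypothetical.
from abc import ABC, abstractmethod

class JobPlugin(ABC):
    """One plug-in per underlying middleware, behind a common interface."""
    @abstractmethod
    def submit(self, executable: str, arguments: list) -> str: ...

class GlobusPlugin(JobPlugin):
    def submit(self, executable, arguments):
        # a real plug-in would call WS-GRAM here
        return f"globus-job-id-for-{executable}"

class UnicorePlugin(JobPlugin):
    def submit(self, executable, arguments):
        # a real plug-in would talk to a UNICORE gateway here
        return f"unicore-job-id-for-{executable}"

PLUGINS = {"globus": GlobusPlugin(), "unicore": UnicorePlugin()}

def submit_job(middleware: str, executable: str, arguments: list) -> str:
    """Application code stays the same regardless of the grid system."""
    return PLUGINS[middleware].submit(executable, arguments)

print(submit_job("globus", "/bin/hostname", []))
```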
DGI Services, Available Dec 2006

• Sustainable grid operation environment with a set of core D-Grid middleware services for all grid communities
• Central registration and information management for all resources
• Packaged middleware components for gLite, Globus and UNICORE, and for the data management systems SRB, dCache and OGSA-DAI
• D-Grid support infrastructure for new communities, with installation and integration of new grid resources into D-Grid: help desk, monitoring system and central information portal
DGI Services, Dec 2006, cont.

• Tools for managing VOs based on VOMS and Shibboleth
• Test implementation of monitoring & accounting for grid resources, and a first concept for a billing system (sketched below)
• Network and security support for communities (firewalls in grids, alternative network protocols, ...)
• DGI operates Registration Authorities, with internationally accepted grid certificates from DFN and GridKa Karlsruhe
• Partners support new D-Grid members in building their own Registration Authorities
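To illustrate the billing concept above, a minimal sketch under assumed record fields and an assumed flat price; a real DGI billing service would consume accounting records emitted by the monitoring system instead of this in-line list.

```python
# First-cut billing idea, as a sketch: aggregate per-VO usage records
# (of the kind a monitoring/accounting service emits) into a charge.
# Record fields and the price are illustrative assumptions.
from collections import defaultdict

PRICE_PER_CPU_HOUR = 0.10  # assumed flat rate, in EUR

usage_records = [
    {"vo": "hep", "site": "FZK", "cpu_hours": 1200.0},
    {"vo": "hep", "site": "FZJ", "cpu_hours": 300.0},
    {"vo": "medigrid", "site": "LRZ", "cpu_hours": 450.0},
]

def bill_per_vo(records):
    """Sum CPU hours per virtual organization and apply the rate."""
    totals = defaultdict(float)
    for rec in records:
        totals[rec["vo"]] += rec["cpu_hours"]
    return {vo: round(h * PRICE_PER_CPU_HOUR, 2) for vo, h in totals.items()}

print(bill_per_vo(usage_records))  # {'hep': 150.0, 'medigrid': 45.0}
```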
DGI Services, Dec 2006, cont.

• DGI will offer resources to other communities, with access via gLite, Globus Toolkit 4, and UNICORE
• The portal framework GridSphere can be used by future users as a graphical user interface
• For administration and management of large scientific datasets, DGI will offer dCache for testing
• New users can use the D-Grid resources of the core grid infrastructure upon request
AstroGrid
C3-Grid: Collaborative Climate Community Data and Processing Grid

Climate research moves towards new levels of complexity: stepping from climate (= atmosphere + ocean) to Earth system modelling.

Earth system model wish list:
• Higher spatial and temporal resolution
• Quality: improved subsystem models
  - Atmospheric chemistry (ozone, sulfates, ...)
  - Bio-geochemistry (carbon cycle, ecosystem dynamics, ...)
• Increased computational demand: factor O(1000-10000)
HEP-Grid: p-p Collisions at the LHC at CERN (from 2007 on)

• Crossing rate: 40 MHz; event rate: ~10^9 Hz
• Max LV1 trigger: 100 kHz; event size: ~1 MByte
• Readout network: 1 Terabit/s; filter farm: ~10^7 SI2K
• Trigger levels: 2; online rejection: 99.9997% (100 Hz from 50 MHz); system dead time: ~ %
• Event selection: ~1/10^13 ("discovery" rate vs. event rate)
• Luminosity: low 2x10^33 cm^-2 s^-1, high 10^34 cm^-2 s^-1
• Data analysis: ~1 PB/year (worked out below)

[Plot: event rate, "discovery" rate, Level-1 trigger rate and rate to tape as functions of luminosity]
Courtesy David Stickland
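The ~1 PB/year figure is consistent with the trigger numbers above, assuming the conventional ~10^7 seconds of accelerator operation per year:

```latex
\underbrace{100\,\mathrm{Hz}}_{\text{rate to tape}}
\times \underbrace{1\,\mathrm{MByte}}_{\text{event size}}
= 100\,\mathrm{MByte/s},
\qquad
100\,\mathrm{MByte/s} \times \underbrace{10^{7}\,\mathrm{s/year}}_{\text{beam time}}
= 10^{15}\,\mathrm{Byte/year} = 1\,\mathrm{PB/year}.
```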
InGrid: Virtual Prototyping & Modeling in Industry

Application areas:
• Molding and metal forming
• Fluid processes
• Groundwater transport
• Fluid-structure / magneto-hydrodynamic interaction

Work packages (AP 2, AP 3, AP 4):
• Methods and models for solving engineering problems in grids
• Knowledge-based support for engineering-specific decision support
• Support for engineering-specific workflows
• Distributed simulation-based product and process optimization
• Security and trust models
• Grid-specific developments
• Cooperation and business models (with the Integration Project)
MediGrid: Mapping of Characteristics, Features, Raw Data, etc.

Data sources, each with metadata: molecule, cell, organ/tissue, patient, illness, population, plus raw data.

Processing chain: homogenization into target data → search, find, select (with access control) → correlate, process, analyze → resulting data → presentation → final result. (A toy sketch of this chain follows below.)
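As a minimal sketch of that chain (all function names are hypothetical, not MediGrid APIs): search and select against metadata, enforce access control on sensitive patient-level data, then analyze and present.

```python
# Sketch of the slide's processing chain as composable steps; every
# function here is a hypothetical stand-in for a MediGrid service.
def search(catalog, query):
    """Search/find/select raw data plus metadata."""
    return [item for item in catalog if query in item["metadata"]]

def access_control(items, user_role):
    """Patient-level data is sensitive; filter by the caller's role."""
    return [i for i in items if user_role in i["allowed_roles"]]

def analyze(items):
    """Correlate/process/analyze; here just a count per tissue type."""
    result = {}
    for i in items:
        result[i["tissue"]] = result.get(i["tissue"], 0) + 1
    return result

catalog = [
    {"metadata": "liver biopsy", "tissue": "liver", "allowed_roles": {"physician"}},
    {"metadata": "liver MRT", "tissue": "liver", "allowed_roles": {"physician", "researcher"}},
]

selected = access_control(search(catalog, "liver"), user_role="researcher")
print(analyze(selected))  # presentation of the final result
```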
D-Grid-2 Call (review of proposals: Sept 19)

• 'Horizontal' Service Grids: professional service providers for heterogeneous user groups in research and industry
• 'Vertical' Community Service Grids using the existing D-Grid infrastructure and services, supported by service providers
• D-Grid extensions, based on a D-Grid-1 gap analysis:
  - Tools for operating a professional grid service
  - Adding a business layer on top of the D-Grid infrastructure
  - Pilot service phase with service providers and 'customers'

!! Reliable grid services require a sustainable grid infrastructure !!
Global Grid Community
Grid Middleware Stack, Major Modules

UK e-Science: Phase 1: Globus 2.4.3, Condor, SRB. Phase 2: Globus 3.9.5 and 4.0.1, OGSA-DAI, web services.
EGEE: gLite distribution: elements of Condor and Globus 2.4.3 (via the VDT distribution).
ChinaGrid: ChinaGrid Supporting Platform (CGSP) 1.0 is based on Globus 3.9.1; CGSP 2.0 is implemented on Globus 4.0.
NAREGI: NAREGI middleware plus Globus 4.0.1 GSI and WS-GRAM.
TeraGrid: GT 2.4 and 4.0.1: Globus GRAM, MDS for information, GridFTP & TGCP for file transfer, RLS for data replication support, MyProxy for credential management.
D-Grid: Globus 2.4.3 (in gLite) and 4.0.2, UNICORE 5, dCache, SRB, OGSA-DAI, GridSphere, GAT, VOMS and Shibboleth.
The Architecture of Science Gateway Services (courtesy Jay Boisseau)

From top to bottom:
• The user's desktop
• Grid portal server
• TeraGrid gateway services: security, data management service, accounting service, notification service, policy, administration & monitoring, grid orchestration, resource allocation, reservations and scheduling
• Core grid services: proxy certificate server/vault, application events, resource broker, user metadata catalog, replica management, application workflow, application resource catalogs, application deployment
• Web Services Resource Framework / Web Services Notification
• Physical resource layer
CGSP Architecture: ChinaGrid Supporting Platform, a Grid Middleware for ChinaGrid

ChinaGrid Middleware (CGSP), bottom-up:
• Grid resources
• Core services: grid resource management, grid information service, grid data management, grid security service
• Grid application middleware (e.g. the ImageGrid application middleware): service management, application-solving environment, grid monitor, remote visualization, user management, security
• ImageGrid applications: Digital Virtual Man, remote-sensing image processing, medical image diagnosis
NAREGI Software Stack (beta 1, 2006): WS(RF)-based (OGSA) SW Stack

• Grid-enabled nano-applications (WP6)
• Grid PSE; grid programming (WP2): Grid RPC, Grid MPI; grid visualization
• Grid workflow (WFML: Unicore + WF); super scheduler; Grid VM (WP1); distributed information service (CIM); packaging
• Data (WP4)
• Grid security and high-performance grid networking (WP5)
• Platform: WSRF (GT4 + Fujitsu WP1) + GT4 and other services
• Network: SuperSINET
• Computing resources and virtual organizations: NII, IMS, research organizations, major university computing centers
gLite Grid Middleware Services (Enabling Grids for E-sciencE, EGEE-II INFSO-RI-031688)

• Access: API, CLI
• Security: authentication, authorization, auditing
• Information & monitoring: information & monitoring, application monitoring
• Workload management: computing element, workload management, job provenance, package manager, accounting, site proxy
• Data management: storage element, data movement, metadata catalog, file & replica catalog

Overview paper: http://doc.cern.ch//archive/electronic/egee/tr/egee-tr-2006-001.pdf
D-Grid Middleware (architecture diagram repeated from the earlier D-Grid Middleware slide)
Major Challenges with Implementing Globus

UK e-Science, EGEE: GT 2.4 was not a robust product. In the early days it took months to install, and numerous workarounds by EDG, LCG and the Condor team.
UK e-Science: The move from GT 2.4 to OGSA-based GT 3 to WS-based GT 4 during many of the UK grid projects was a disruption.
TeraGrid: GT is a large suite of modules, most of which need to be specially built for HPC environments. The tooling on which it is based is largely unfamiliar to system administrators and requires a training/familiarization process.
D-Grid: The code is very complex and difficult to install on the many different systems in a heterogeneous grid environment.
Challenges (Charlie Catlett, August 2006)

• Scale: what works for 4 sites and identical machines is difficult to scale to 10+ sites and 20+ machines with many architectures
• Sociology: requires a high level of buy-in from autonomous sites (to run software or adopt conventions not invented there...)
• Interoperation (e.g. with other grids): requires adoption of a common software stack (see Sociology)
Main Applications

UK e-Science: Particle physics, astronomy, chemistry, bioinformatics, healthcare, engineering, environment; pharmaceutical, petro-chemical, media and financial sectors.
EGEE: 2 pilot applications (physics, life science) and applications from 7 other disciplines.
ChinaGrid: Bioinformatics, image processing, computational fluid dynamics, remote education, and massive data processing.
NAREGI: Nano-science applications.
TeraGrid: Physics (lattice QCD calculations, turbulence simulations, stellar models), molecular bioscience (molecular dynamics), chemistry, atmospheric sciences.
D-Grid-1: Astrophysics, high-energy physics, earth science, medicine, engineering, libraries.
Efforts for Sustainability

UK e-Science: National Grid Service (NGS), Grid Operations Support Center (GOSC), National e-Science Center (NeSC), regional e-Science centers, Open Middleware Infrastructure Institute (OMII), Digital Curation Center (DCC).
EGEE: Plans to establish a European Grid Initiative (EGI) to provide a persistent grid service federating national grid programmes, starting in 2008.
ChinaGrid: Increasing numbers of grid applications using CGSP grid middleware packages.
NAREGI: Software will be managed and maintained by the Cyber Science Infrastructure Center of the National Institute of Informatics.
TeraGrid: NSF Cyberinfrastructure Office: 5-year cooperative agreement; partnerships with peer grid efforts and commercial web-services activities in order to integrate broadly.
D-Grid: DGI WP 4: sustainability, services strategies, and business models.
The Open Middleware Infrastructure Institute (OMII)

OMII is based at the University of Southampton, School of Electronics & Computer Science.

Vision: to become the source for reliable, interoperable and open-source grid middleware, ensuring the continued success of grid-enabled e-Science in the UK.

OMII intends to:
• Create a one-stop portal and software repository for open-source grid middleware, including comprehensive information about its function, reliability and usability;
• Provide quality-assured software engineering, testing, packaging and maintenance of software in the OMII repository, ensuring it is reliable and easy to both install and use;
• Lead the evolution of grid middleware at the international level, through a managed program of research and wide-reaching collaboration with industry.
The Digital Curation Center (DCC)

The DCC is based at the University of Edinburgh. It supports UK institutions with the problems involved in storing, managing and preserving vast amounts of digital data, to ensure their enhancement and continuing long-term use.

The purpose of the DCC is to provide a national focus for research into curation issues, and to promote expertise and good practice, both nationally and internationally, for the management of all research outputs in digital format.
National Grid Service (Neil Geddes, CCLRC e-Science)

[Map of NGS sites and their interfaces; OGSI::Lite]
TeraGrid Next Steps: Services-Based

• Core services define a "TeraGrid Resource":
  - Authentication & authorization capability
  - Information service
  - Auditing/accounting/usage-reporting capability
  - Verification & validation mechanism
• This provides a foundation for value-added services: each resource runs one or more added services, or "kits"
  - Enables a smaller set of components than the previous "full" CTSS
  - Advanced capabilities, exploiting architectures or common software
  - Allows portals (science gateways) to customize service offerings
  - Core and individual kits can evolve incrementally, in parallel
TeraGrid Science Gateways Initiative: Community Interface to Grids

• Common web portal or application interfaces (database access, computation, workflow, etc.) and standards (primarily web services)
• "Back-end" use of grid services such as computation, information management, visualization, etc.
• Standard approaches so that science gateways may readily access resources in any cooperating grid (TeraGrid, Grid-X, Grid-Y) without technical modification; a minimal sketch follows below
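In miniature, a gateway is a web front door that hides the grid behind it. The sketch below is purely illustrative: submit_to_grid is an invented stand-in, and a real gateway would call back-end grid services such as WS-GRAM there instead.

```python
# Gateway idea in miniature: a web front door that hides the grid.
# Hypothetical sketch; a real gateway would call WS-GRAM or a gLite
# workload service instead of the fake submit_to_grid() below.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def submit_to_grid(payload: dict) -> dict:
    """Stand-in for the back-end grid service call."""
    return {"job_id": "tg-12345", "input": payload}

class GatewayHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # accept a JSON job request from the science community's portal
        body = self.rfile.read(int(self.headers["Content-Length"]))
        result = submit_to_grid(json.loads(body))
        data = json.dumps(result).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(data)

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), GatewayHandler).serve_forever()
```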
TeraGrid Science Gateway Partner Sites

[Map: 21 science gateway partners (and growing), over 100 partner institutions]

Contact: Nancy Wilkins-Diehr ([email protected])
Grid Interoperation Now

• Multi-grid effort (20+ projects world-wide): interoperation vs. interoperability
• Interoperability: "the ability of software and hardware on multiple machines from multiple vendors to communicate", based on commonly agreed, documented specifications and procedures
• Interoperation (for the sake of users!): "just make it work together"
  - Opportunistic; exploit common software, etc.
  - Low-hanging fruit now, interoperability later
• Principle: "The perfect is the enemy of the good enough" (Voltaire, based on an old Italian proverb)
• Focus on security at every step (initial work aimed at auth*)
Layered Infrastructure of ChinaGrid

• Application grids: remote education grid, image processing grid, fluid dynamics grid, massive information processing grid, bioinformatics grid
• ChinaGrid Supporting Platform (CGSP)
• High-performance computing environment (campus grids): NUDT, THU, HUST, ZSU, PKU, SJTU, XJTU, NEU, SCUT, BUAA, SEU, SDU
NAREGI R&D Assumptions and Goals

• Future research grid metrics for petascale: 10s of institutions/centers, various project VOs; > 100,000 users, > 100,000-1,000,000 CPUs
• Machines are very heterogeneous: CPUs (supercomputers, clusters, desktops), OSes, local schedulers
  - 24/7 usage, production deployment
  - Server grid, data grid, metacomputing ...
• High emphasis on standards: start with Globus, Unicore, Condor; extensive collaboration; GGF contributions, esp. an OGSA reference implementation
• Win the support of users: application and experimental deployment essential; R&D for production-quality (free) software; nano-science (and now bio) involvement, large testbed
List of NAREGI "Standards" (beta 1 and beyond)

• GGF standards and pseudo-standard activities set/employed by NAREGI: OGSA CIM profile, AuthZ, DAIS, GFS (Grid Filesystems), Grid CP (GGF CAOPs), GridFTP, GridRPC API (as Ninf-G2/G4), JSDL, OGSA-BES, OGSA-ByteIO, OGSA-DAI, OGSA-EMS, OGSA-RSS, RUS, SRM (planned for beta 2), UR, WS-I RUS, ACS, CDDLM (a minimal JSDL example follows below)

• Other industry standards employed by NAREGI: ANSI/ISO SQL, DMTF CIM, IETF OCSP/XKMS, MPI 2.0, OASIS SAML 2.0, OASIS WS-Agreement, OASIS WS-BPEL, OASIS WSRF 2.0, OASIS XACML

• De facto standards / commonly used software platforms employed by NAREGI: Ganglia, GFarm 1.1, Globus 4 GRAM, Globus 4 GSI, Globus 4 WSRF (also Fujitsu WSRF for C binding), IMPI (as GridMPI), Linux (RH8/9 etc.), Solaris (8/9/10), AIX, ..., MyProxy, OpenMPI, Tomcat (and associated WS/XML standards), Unicore WF (as NAREGI WFML), VOMS
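Of the GGF specifications listed, JSDL is easy to show concretely. Below is a minimal JSDL 1.0 job description (the job content itself is illustrative), parsed with Python's standard library to confirm it is well-formed:

```python
# A minimal JSDL 1.0 job description, one of the GGF specifications
# listed above. The executable/argument content is illustrative.
import xml.etree.ElementTree as ET

JSDL = """
<jsdl:JobDefinition
    xmlns:jsdl="http://schemas.ggf.org/jsdl/2005/11/jsdl"
    xmlns:posix="http://schemas.ggf.org/jsdl/2005/11/jsdl-posix">
  <jsdl:JobDescription>
    <jsdl:Application>
      <posix:POSIXApplication>
        <posix:Executable>/bin/echo</posix:Executable>
        <posix:Argument>hello-grid</posix:Argument>
      </posix:POSIXApplication>
    </jsdl:Application>
  </jsdl:JobDescription>
</jsdl:JobDefinition>
"""

root = ET.fromstring(JSDL)
ns = {"posix": "http://schemas.ggf.org/jsdl/2005/11/jsdl-posix"}
print(root.find(".//posix:Executable", ns).text)  # /bin/echo
```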
Sustainability: Beyond EGEE-II

• Need to prepare for a permanent grid infrastructure
  - Maintain Europe's leading position in global science grids
  - Ensure reliable and adaptive support for all sciences
  - Independent of short project funding cycles
  - Modelled on the success of GÉANT
• Infrastructure managed in collaboration with national grid initiatives
European National Grid Projects

Austria (AustrianGrid), Belgium (BEGrid), Bulgaria (BgGrid), Croatia (CRO-GRID), Cyprus (CyGrid), Czech Republic (METACentre), Denmark (?), Estonia (Estonian Grid), Finland, France (planned: ICAR), Germany (D-Grid), Greece (HellasGrid), Hungary, Ireland (Grid-Ireland), Israel (Israel Academic Grid), Italy (planned), Latvia (Latvian Grid), Lithuania (LitGrid), Netherlands (DutchGrid), Norway (NorGrid), Poland (Pioneer), Portugal (launched April '06), Romania (RoGrid), Serbia (AEGIS), Slovakia, Slovenia (SiGNET), Spain (planned), Sweden (SweGrid), Switzerland (SwissGrid), Turkey (TR-Grid), Ukraine (UGrid), United Kingdom (eScience)
D-Grid: Towards a Sustainable Infrastructure for Science and Industry

• The government is changing its policies for resource acquisition (HBFG!) to enable a service model
• 2nd call: focus on service provisioning for sciences and industry
• Strong collaboration with the Globus project, EGEE, DEISA, CrossGrid, CoreGrid, GridCoord, GRIP, UniGrids, NextGrid, ...
• Application- and user-driven, not infrastructure-driven
• Focus on implementation and production, not grid research, in a multi-technology environment (Globus, Unicore, gLite, etc.)
• D-Grid is the core of the German e-Science Initiative
Summary: Challenges for Research and Industry

• Sensitive data, sensitive applications (medical patient records)
• Different organizations get different benefits
• Accounting: who pays for what (sharing!)
• Security policies: consistent and enforced across the grid!
• Lack of standards prevents interoperability of components
• The current IT culture is not predisposed to sharing resources
• Not all applications are grid-ready or grid-enabled
• Open source is not equal to open source (read the small print)
• SLAs based on open source (liability?)
• "Static" licensing models don't embrace the grid
• Protection of intellectual property
• Legal issues (FDA, HIPAA, multi-country grids)
Summary: Lessons Learned and Recommendations

• Continuity: Grid infrastructure should be modified and improved in large cycles only; applications depend on the infrastructure!
• Sustainability: Funding should be available after the end of the project, to guarantee services, support and continuous improvement.
• Interoperability: Use open-source software and standards, especially in the infrastructure and application middleware layers.
• Collaboration: Between infrastructure developers and the applications, to best utilize grid services and to avoid application silos.
• User-friendliness: For easy adoption by new communities; the infrastructure group should offer installation, operation and support services.
• Grid services: Centers of excellence should specialize in specific services, e.g. integration of new communities, grid operation, utility services, training, support, etc.
• Participation of industry: Has to be industry-driven; a push from outside, even with government funding, is not promising. Success comes only from real needs, e.g. through existing collaborations between research and industry.
• ... and more
The Innovation Engine

[email protected]

Thank you! Slides are available.