
NAREGI Project Goals (2007. 4. 20.)

Transcript
Page 1: NAREGI Project Goals

Establishment of a research grid infrastructure

NAREGI is a collaborative research project among industry, academia, and government organizations. Our goal is to develop a new science grid capable of performing the large-scale simulations needed for next-generation R&D. The Center for Grid Research and Development and the Computational Nano-science Center serve as the bases of project activities. The Center for Grid Research and Development develops grid infrastructure software and network technology; the Computational Nano-science Center evaluates the new grid system using application software and advanced nano-science simulation technology.

Revitalization of the IT industry through commercialization of grid middleware and strengthened international competitiveness

Dissemination of grid environments throughout industry (through cooperation with the Computational Nano-science Center)

Leadership in the standardization of grid technology

Cultivation of human resources specializing in IT technology for grids

Page 2: Background of Science Grid R&D (1)

[Diagram: from the present industrial infrastructure to the next-generation industrial infrastructure.

Present industrial infrastructure:
- Understand phenomena through observation and experiment
- Construct theoretical systems for heat, structure, fluid, and electromagnetism
- Advance computer simulation
- Support industry as the theoretical backbone of product R&D
- Support product development power in the 21st century

Next-generation industrial infrastructure (nano-science / biotechnology):
- Simulation based on quantum theory, combined with large-scale computing resources, enables the design of nano-structures
- Computer simulation is essential to understand phenomena related to nano and bio technologies
- Support research and development in the 21st century]

R&D-capable computing resources help establish the next-generation industrial infrastructure.

Page 3: Background of Science Grid R&D (2)

Conventional R&D and product development:
1) Development based on experiment / measurement data
2) Development based on combining single-function simulations

Next-generation R&D and product development:
1) Shorter development periods
2) Understanding new phenomena
3) Optimizing designs

Coupled simulation requirements:
1) System-wide large-scale analysis
2) Multi-scale analysis
3) Multi-physics analysis

[Chart: computing resources required for next-generation R&D, plotted from 2000 to 2020 on a scale from 100 Gflops to 100 Pflops against the world's-highest and affordable performance levels. Application demands rise from automobile design / chip design (~100 Gflops, around 2004) through protein structure analysis / drug design and organic material function / reaction prediction, up to reaction condition optimization / semiconductor function prediction. Required computer capacity is based on data from the National Institute of Science and Technology Policy.]

Page 4: NAREGI R&D System

Project Leader: Dr. K. Miura (NII)

Page 5: Relationship among NAREGI Organizations and Outputs of the Project

Center for Grid Research and Development (National Institute of Informatics): grid middleware covering
- Resource management in the grid environment
- Grid programming environment
- Grid applications environment
- High-performance & secure grid networking
- Grid-enabled nano-applications

Computational Nano-science Center (Institute of Molecular Science): computational methods for nano-science using the latest grid technology. Subjects of research: large-scale computation, high-throughput computation, and new methodology for computational science. Evaluation of the grid system with nano applications.

Consortium for Promotion of Grid Applications in Industry (40 member companies): requirements from industry with regard to a science grid for industrial applications; solicited research proposals from industry to evaluate the grid system with nano-science applications.

Outputs of the project:
- Science grid environment (e-Infrastructure)
- Grid middleware for large computer centers; productization of general-purpose grid middleware for scientific computing
- Use in industry (new intellectual product development) and vitalization of industry
- Progress in the latest research and development (nano, biotechnology)
- Personnel training (IT and application engineers)
- Contribution to the international scientific community and standardization

Page 6: NAREGI Server Grid

Resource allocation and automatic scheduling of jobs on the Server Grid, where users can run their HPC application programs as-is or with slight modifications.

[Map: sites linked by SuperSINET: NII (Jinbocho), IMR Tohoku U, AIST and KEK (Tsukuba), ISSP (Kashiwa), IMS (Okazaki), Kyoto U, and Kyushu U.]

Seamless resource coordination across many different research organizations.

NAREGI Middleware

Cross-Organizational Virtual Computer
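
As a concrete illustration of "as-is" execution (a minimal sketch, not NAREGI code): a standard MPI program contains nothing grid-specific, so the same source can be compiled against GridMPI and dispatched by the Super Scheduler rather than a local queue.

/* Minimal standard MPI program: nothing grid-specific in the source.
 * Under NAREGI the same code would be scheduled onto grid resources
 * by the Super Scheduler and executed under GridVM/GridMPI. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, len;
    char host[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Get_processor_name(host, &len);

    /* Each process reports where the grid placed it. */
    printf("rank %d of %d on %s\n", rank, size, host);

    MPI_Finalize();
    return 0;
}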

Page 7: NAREGI Middleware Stack

[Stack diagram, top to bottom:
- Grid-Enabled Nano-Applications
- Grid PSE; Grid Programming (Grid RPC, Grid MPI); Grid Visualization; Grid Workflow
- Super Scheduler; Distributed Information Service; Grid VM (Globus, Condor, UNICORE → OGSA)
- High-Performance & Secure Grid Networking (SuperSINET)
- Computing Resources: NII, IMS, research organizations, etc.
Packaging spans the middleware layers.]
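
The Grid Programming layer includes Grid RPC, which follows the OGF GridRPC model. Below is a minimal client sketch in C, assuming a GridRPC implementation such as Ninf-G; the configuration file "client.conf", the remote function name "mmul/mmul", and the matrix size are hypothetical, and exact signatures may differ by implementation.

/* Hedged sketch of a GridRPC client (OGF GridRPC C API, as in Ninf-G).
 * "client.conf" and the remote function "mmul/mmul" are hypothetical. */
#include <stdio.h>
#include "grpc.h"

#define N 512

int main(int argc, char **argv)
{
    grpc_function_handle_t handle;
    static double a[N * N], b[N * N], c[N * N];

    if (grpc_initialize("client.conf") != GRPC_NO_ERROR) {
        fprintf(stderr, "grpc_initialize failed\n");
        return 1;
    }

    /* Bind the handle to a remote function registered on the server. */
    grpc_function_handle_default(&handle, "mmul/mmul");

    /* Synchronous remote call: the argument list must match the
     * server-side interface definition (inputs a, b; output c). */
    if (grpc_call(&handle, N, a, b, c) != GRPC_NO_ERROR)
        fprintf(stderr, "grpc_call failed\n");

    grpc_function_handle_destruct(&handle);
    grpc_finalize();
    return 0;
}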

Page 8: Scenario for Multi-site MPI Job Execution

[Figure: a work-flow couples a RISM job on the Site B SMP machine (64 CPUs) with an FMO job on the Site C PC cluster (128 CPUs) via GridMPI, with an IMPI server at Site A. Components: Workflow Tool (WFT), PSE, Super Scheduler, Distributed Information Service, a GridVM and local scheduler at each site, CAs at all three sites, and Grid Visualization of the input and output files. Steps shown: a: registration, b: deployment, and c: edit of the RISM and FMO sources; 1: submission of the work-flow; a resource query to the Distributed Information Service; 3: negotiation and agreement with the local schedulers; 4: reservation; 5: sub-job dispatch; 6: submission and co-allocation across sites; 10: monitoring and accounting.]
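
With GridMPI and the IMPI server, the co-allocated RISM and FMO sub-jobs share one MPI_COMM_WORLD spanning both sites. The following sketch shows how a coupled application might partition that world into the two solver groups; the 64/128 split follows the figure, and the solver entry points are hypothetical.

/* Sketch: partition a cross-site MPI_COMM_WORLD (64 RISM ranks on the
 * Site B SMP machine + 128 FMO ranks on the Site C PC cluster) into
 * per-solver communicators. Illustrative only, not NAREGI source. */
#include <mpi.h>

enum { RISM = 0, FMO = 1 };

int main(int argc, char **argv)
{
    int rank, size, color;
    MPI_Comm solver_comm;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* 192 in this scenario */

    /* The first 64 ranks run RISM; the remaining 128 run FMO. */
    color = (rank < 64) ? RISM : FMO;
    MPI_Comm_split(MPI_COMM_WORLD, color, rank, &solver_comm);

    if (color == RISM) {
        /* rism_solve(solver_comm);  -- hypothetical entry point */
    } else {
        /* fmo_solve(solver_comm);   -- hypothetical entry point */
    }

    MPI_Comm_free(&solver_comm);
    MPI_Finalize();
    return 0;
}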

Page 9: Adaptation of Nano-science Applications to Grid Environment

[Figure: RISM (Reference Interaction Site Model) runs at Site A and FMO (Fragment Molecular Orbital method) at Site B, each on grid middleware (MPICH-G2, Globus). RISM performs the solvent distribution analysis and FMO the electronic structure analysis; coupled through data transformation between different meshes, they yield the electronic structure in solution.]
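
The mesh transformation step can be pictured as re-sampling RISM's solvent distribution onto the mesh that FMO expects. Below is a deliberately simplified 1-D linear-interpolation sketch of that idea; the actual coupling is three-dimensional, and NAREGI's transformation code is not reproduced here.

/* Hypothetical 1-D illustration of re-sampling field values from a
 * source mesh (e.g., RISM's grid) onto a destination mesh (e.g., the
 * grid FMO expects) by linear interpolation. Both coordinate arrays
 * are assumed sorted in ascending order, with ns >= 2. */
#include <stddef.h>

void remesh_linear(const double *xs, const double *fs, size_t ns,
                   const double *xd, double *fd, size_t nd)
{
    size_t i, j = 0;

    for (i = 0; i < nd; i++) {
        /* Advance to the source interval [xs[j], xs[j+1]] that
         * contains xd[i]; destination points are assumed ascending. */
        while (j + 2 < ns && xs[j + 1] < xd[i])
            j++;

        double t = (xd[i] - xs[j]) / (xs[j + 1] - xs[j]);
        if (t < 0.0) t = 0.0;   /* clamp outside the source range */
        if (t > 1.0) t = 1.0;

        fd[i] = (1.0 - t) * fs[j] + t * fs[j + 1];
    }
}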

Page 10: NAREGI Five-year Plan

Page 11: NAREGI Phase 1 Testbed

Approximately 3,000 CPUs and ~17 Tflops in total, connected via SuperSINET:
- Center for Grid R&D (NII): ~5 Tflops
- Computational Nano-science Center (IMS): ~10 Tflops
- Osaka Univ. BioGrid
- TiTech Campus Grid
- AIST SuperCluster
- Small test application clusters at ISSP, Kyoto Univ., Tohoku Univ., KEK, Kyushu Univ., and AIST

Construction of a testbed of heterogeneous environments for the verification of grid system development.

Page 12: Computer System for Grid Software Infrastructure R&D

Center for Grid Research and Development (5 Tflops, 700 GB):
- File Server (PRIMEPOWER 900 + ETERNUS3000 + ETERNUS LT160): 1 node / 8 CPUs (SPARC64 V, 1.3 GHz); memory 16 GB, storage 10 TB, back-up max. 36.4 TB
- SMP-type Compute Server (PRIMEPOWER HPC2500): 1 node (UNIX, SPARC64 V 1.3 GHz / 64 CPUs); memory 128 GB, storage 441 GB
- SMP-type Compute Server (SGI Altix3700): 1 node (Itanium2 1.3 GHz / 32 CPUs); memory 32 GB, storage 180 GB
- SMP-type Compute Server (IBM pSeries690): 1 node (Power4 1.3 GHz / 32 CPUs); memory 64 GB, storage 480 GB
- High-performance distributed-memory Compute Server (PRIMERGY RX200): 128 CPUs (Xeon 3.06 GHz) + control node; InfiniBand 4X (8 Gbps); memory 130 GB, storage 9.4 TB
- High-performance distributed-memory Compute Server (PRIMERGY RX200): 128 CPUs (Xeon 3.06 GHz) + control node; InfiniBand 4X (8 Gbps); memory 65 GB, storage 9.4 TB
- Distributed-memory Compute Servers (Express 5800, 2 units): 128 CPUs (Xeon 2.8 GHz) + control node each; GbE (1 Gbps); memory 65 GB, storage 4.7 TB each
- Distributed-memory Compute Servers (HPC LinuxNetworx, 2 units): 128 CPUs (Xeon 2.8 GHz) + control node each; GbE (1 Gbps); memory 65 GB, storage 4.7 TB each
- Networking: intra networks A and B on L3 switches at 1 Gbps (upgradable to 10 Gbps); external network to SuperSINET

Page 13: Computer System for Nano Application R&D

Computational Nano-science Center (10 Tflops, 5 TB):
- SMP-type Compute Server: 5.4 Tflops; 16 ways × 50 nodes (POWER4+ 1.7 GHz), multi-stage crossbar network; memory 3072 GB, storage 2.2 TB
- Distributed-memory Compute Server (4 units): 5.0 Tflops; 818 CPUs (Xeon 3.06 GHz) + control nodes, Myrinet2000 (2 Gbps); memory 1.6 TB, storage 1.1 TB/unit
- File Server: 16 CPUs (SPARC64 GP, 675 MHz); memory 8 GB, storage 30 TB, back-up 25 TB
- Front-end Server and CA/RA Server
- Networking: L3 switch at 1 Gbps (upgradable to 10 Gbps); VPN and firewall to the Center for Grid R&D via SuperSINET

Page 14: Summary

1) The grid will be not only the information infrastructure of the 21st century but also its R&D base and industrial infrastructure.

2) Develop grid middleware that seamlessly connects heterogeneous computing resources spread across the nation.

3) Utilize the developed grid infrastructure by running leading-edge nano-science and nanotechnology simulation applications on top of it.

4) Hand the developed grid technology down to the industrial sector and thereby strengthen its international competitiveness.

5) Contribute to the international standardization of grid technology.