
Biosciences Working Group Update

Wilfred W. Li, Ph.D., UCSD, USA

Habibah Wahab, Ph.D., USM, Malaysia

Hosted by JLU, Changchun, Jilin, PRC, Sept 13-15, 2010

Scientific Driver and Use Cases

http://www.reactome.org/  http://www.wikipedia.org  http://library.thinkquest.org/05aug/01479/prevention1.html

Harris et al., PNAS, 2006

Transparent access to applications on the Avian Flu Grid through middleware

CNIC Duckling Portal

Konkuk Glyco-M*Grid

NBCR CADD

Relaxed Complex Scheme and Ensemble-based Virtual Screening Contributed to HIV Integrase Inhibitor Development

“Exploration of the structural basis for this unexpected result … suggests an approach to the development of integrase inhibitors with unique resistance profiles.”

D. Hazuda et al., Proc. Natl. Acad. Sci. USA (Aug. 2004), referring to Schames et al. (2004).

Discovery of an unexpected binding site in HIV-1 Integrase using MD and AutoDock: Schames, … & McCammon, J. Med. Chem. (released on web, early 2004)

February 2006 – Phase III Clinical Trials
February 2007 – Name announced: Isentress (raltegravir)
October 2007 – FDA “fast track” approval

New Class of HIV Drugs: Merck & Co.

MK-0518

Source: A. McCammon

Ensemble-based Virtual Screening with the Relaxed Complex Scheme
MD engines: NAMD2, Amber

NCI Diversity Set: 2,000 compounds, 3.3 MB; required at each site
ZINC subset: 200,000 compounds; a few hundred MB

Multiple targets: HA and NA subtypes
Each target: 30~50 MD snapshots, 1~2 MB each

AutoDock4

Simulation Data: hundreds of GB

Docking Data: hundreds of MB

Total data to date: ~5 TB in long-term storage. Each experiment costs about 1 petaflop of cumulative computation (the docking loop is sketched below).

Source: Amaro
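In the relaxed complex scheme, every compound in the library is docked against every MD snapshot of the target, so the core of the workflow is a snapshot x ligand double loop. The sketch below is a minimal illustration, not the production Avian Flu Grid pipeline: it assumes AutoDock4 is on the PATH and that a docking parameter (.dpf) file for each snapshot-ligand pair was prepared beforehand; all directory and file names are hypothetical.

    # Minimal sketch of relaxed-complex-scheme ensemble docking:
    # dock every ligand against every MD snapshot of the receptor.
    # Assumes autodock4 is installed and that a .dpf file for each
    # snapshot-ligand pair already exists (hypothetical layout).
    import subprocess
    from pathlib import Path

    SNAPSHOTS = sorted(Path("receptor_snapshots").glob("snap_*.pdbqt"))  # 30~50 MD frames per target
    LIGANDS = sorted(Path("nci_diversity").glob("*.pdbqt"))              # ~2,000 compounds

    def dock(snapshot: Path, ligand: Path) -> Path:
        """Run one AutoDock4 job for a snapshot-ligand pair."""
        dpf = Path("dpf") / f"{snapshot.stem}_{ligand.stem}.dpf"   # prepared in advance
        dlg = Path("results") / f"{snapshot.stem}_{ligand.stem}.dlg"
        dlg.parent.mkdir(exist_ok=True)
        subprocess.run(["autodock4", "-p", str(dpf), "-l", str(dlg)], check=True)
        return dlg

    if __name__ == "__main__":
        for snap in SNAPSHOTS:
            for lig in LIGANDS:
                dock(snap, lig)  # in production each pair becomes one grid job

In production the inner call is farmed out to the grid, which is how a single experiment accumulates the hundreds of GB of docking output quoted above.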

Advances in Computing Infrastructure Enable Complex Simulations of Biomolecular Systems

Amaro & Li, CTMC, 2010

[Architecture diagram] Client tools (Opal GUI, PMV/Vision, Kepler) call application services published through Opal 2 for SaaS; a transparent access layer for applications then routes jobs through Globus to grid/cloud resources such as a Condor pool, an SGE cluster, or a PBS cluster.

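Because each application behind the transparent access layer is published as an Opal 2 web service, a thin SOAP client is all a portal or workflow tool needs. The sketch below is a minimal, hedged example: the endpoint layout (/opal2/services/<AppName>?wsdl), the launchJob/queryStatus/getOutputs operation names, the wrapped-style keyword argument, and the GRAM-style status codes are assumptions based on a typical Opal 2 deployment and should be checked against the WSDL of the actual service; the URL is a placeholder.

    # Minimal sketch of calling an application exposed through Opal 2 "as a service".
    # Operation names, argument style, and status codes are assumptions; verify them
    # against the deployed service's WSDL before use.
    import time
    from suds.client import Client  # pip install suds-py3

    WSDL = "http://ws.nbcr.net/opal2/services/ExampleAppServicePort?wsdl"  # placeholder endpoint

    def run_opal_job(arg_list: str):
        client = Client(WSDL)
        resp = client.service.launchJob(argList=arg_list)   # assumed wrapped-style keyword argument
        job_id = resp.jobID                                  # assumed field of the submit response
        while True:
            status = client.service.queryStatus(job_id)
            if status.code in (4, 8):                        # assumed GRAM-style codes: 4 = failed, 8 = done
                return client.service.getOutputs(job_id)     # URLs of stdout/stderr and output files
            time.sleep(30)

    if __name__ == "__main__":
        print(run_opal_job("-p dock.dpf -l dock.dlg"))

The same pattern is what the Opal GUI, PMV/Vision, and Kepler front ends build on, which is the sense in which the access layer stays transparent to the end user.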

Vision Workflow Snippet Using Opal
• Two Major Steps
  1. Run the PDB2PQR web service. This step is skipped if an appropriate PQR file exists on the local machine.
  2. Run the PrepareReceptor web service. Output is a URL to the PDBQT file.
• PDB2PQR and PrepareReceptor are both skipped if an appropriate PDBQT file exists on the local machine.
  – In that case the output is the PDBQT path on the local machine (see the sketch after this slide).
[Screenshot labels: macro that runs the PDB2PQR web service; macro that runs the PrepareReceptor web service.]
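The decision logic in those two macros can be summarized outside of Vision as well. In the sketch below, call_pdb2pqr() and call_prepare_receptor() are hypothetical stand-ins for the Opal PDB2PQR and PrepareReceptor web-service calls; only the skip/caching behaviour mirrors the workflow described above.

    # Sketch of the two-step receptor preparation logic from the Vision workflow.
    # The two call_* helpers are hypothetical stand-ins for Opal service invocations.
    from pathlib import Path

    def call_pdb2pqr(pdb_id: str) -> str:
        raise NotImplementedError("stand-in for the Opal PDB2PQR service call")

    def call_prepare_receptor(pqr_ref: str) -> str:
        raise NotImplementedError("stand-in for the Opal PrepareReceptor service call")

    def prepare_receptor(pdb_id: str, workdir: Path) -> str:
        pdbqt = workdir / f"{pdb_id}.pdbqt"
        if pdbqt.exists():                     # both services skipped
            return str(pdbqt)                  # output: PDBQT path on the local machine
        pqr = workdir / f"{pdb_id}.pqr"
        if pqr.exists():                       # step 1 skipped
            pqr_ref = str(pqr)
        else:
            pqr_ref = call_pdb2pqr(pdb_id)     # step 1: PDB2PQR web service
        return call_prepare_receptor(pqr_ref)  # step 2: PrepareReceptor -> URL to the PDBQT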

Virtual Screening with CSF
• Virtual screening web services with remote clusters, including TeraGrid and PRAGMA Grid resources
• Virtual cluster at SDSC
• Amazon EC2
• OPAL as the resource manager for CSF4
• CSF4 allocates OPAL service instances for jobs


New OPAL-CSF4 Cloud model



Other Examples of Continued Software Development at Member Institutions

– Drugscreener-G – KISTI, Korea
– Grid-Enabled Virtual Screening Service – ASGC, Taiwan
– CADD Pipeline – NBCR, USA
– WISDOM project – CNRS, EU
– Glyco-M*Grid – Kookmin & Konkuk U, Korea

Integrating Visualization Workflows using Real-time bioMEdical data Streaming and visualization (RIMES)

Kevin Dong, CNIC

Lau, Haga and Date

ViewDock TDW

Biomedical CLOUD

[Architecture diagram: OPAL2-CSF4 integration] Users submit AutoDock and NAMD jobs through the user interface to OPAL2; its resource manager (edu.sdsc.nbcr.opal.manager.CSFJobManager) hands the jobs to the CSF4 metascheduler and its service manager, which schedule workflow and array jobs, generate RSL files for the selected grid sites, and stage input/output files in and out of the grid resources.
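The "generate RSL files" step is where a scheduled job is turned into something a Globus GRAM gatekeeper at the chosen grid site can execute. The sketch below is an illustrative RSL generator only, not CSF4 code; the job dictionary, paths, and output file name are hypothetical.

    # Illustrative generator for a Globus GRAM RSL job description, roughly the
    # kind of file the metascheduler emits for a selected grid site. The job spec
    # and file layout are hypothetical; this is not CSF4's implementation.
    from pathlib import Path

    def to_rsl(job: dict) -> str:
        """Render a job spec as classic GRAM RSL attribute clauses."""
        args = " ".join(f'"{a}"' for a in job["arguments"])
        return (
            "&"
            f'(executable="{job["executable"]}")'
            f"(arguments={args})"
            f'(directory="{job["directory"]}")'
            f'(stdout="{job["stdout"]}")'
            f'(stderr="{job["stderr"]}")'
            f'(count={job.get("count", 1)})'
        )

    if __name__ == "__main__":
        autodock_job = {
            "executable": "/usr/local/bin/autodock4",
            "arguments": ["-p", "dock.dpf", "-l", "dock.dlg"],
            "directory": "/scratch/afg/job001",   # inputs staged in here, outputs staged out afterwards
            "stdout": "autodock.out",
            "stderr": "autodock.err",
        }
        Path("job001.rsl").write_text(to_rsl(autodock_job))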

VM Replication Experiment
http://goc.pragma-grid.net/wiki/index.php/VC-replication-2

[Diagram: the original AFG VM on the SDSC VM hosting server is replicated to copies on the AIST and NBCR VM hosting servers.]
• VM hosting servers: Rocks 5.3 with the Xen roll
• Avian Flu Grid VM: a Rocks VM with Globus/SGE and AutoDock
• Replication updates (as sketched below): hostname and IP, compute nodes, network configurations, Globus configuration, SGE configuration
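The replication updates listed above are the error-prone part of cloning a VM across sites, so a quick post-replication check is useful. The sketch below is hypothetical: the site profiles, hostnames, and IP addresses are invented for illustration, and the checks are not the procedure documented on the wiki page above.

    # Hypothetical post-replication sanity check for a copied AFG VM: confirm the
    # clone picked up the destination site's hostname/IP and still has Globus and
    # SGE tooling. Site profiles and checks are illustrative only.
    import shutil
    import socket

    SITE_PROFILES = {
        "AIST": {"hostname": "afg-vm.aist.example.org", "ip": "203.0.113.10"},
        "NBCR": {"hostname": "afg-vm.nbcr.example.org", "ip": "203.0.113.20"},
    }

    def check_replica(site: str) -> list:
        expected = SITE_PROFILES[site]
        problems = []
        hostname = socket.gethostname()
        if hostname != expected["hostname"]:
            problems.append(f"hostname is {hostname}, expected {expected['hostname']}")
        try:
            ip = socket.gethostbyname(hostname)
        except socket.gaierror:
            ip = None
        if ip != expected["ip"]:
            problems.append(f"IP is {ip}, expected {expected['ip']}")
        for tool in ("globusrun", "qconf"):   # Globus client and SGE admin tool still present?
            if shutil.which(tool) is None:
                problems.append(f"{tool} not found on PATH")
        return problems

    if __name__ == "__main__":
        issues = check_replica("AIST")
        print("replica looks consistent" if not issues else "\n".join(issues))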

Milestones Update

• Production use of Gfarm for sharing simulation data
  – Production use by PRAGMA 20
• Gfarm roll to be deployed, 48 TB
• Gfarm 2.4 set up on a smaller scale at the moment
• Virtual machine scheduling using CSF4
  – Elastic Virtual Cluster under development

Meeting the New Challenges

• Virtualization – What does it mean to us?
  – Virtual machines, CSF server, Gfarm server and virtual clusters
• Production environment – Where is it? What form should it take?
  – EC2, VC replication
• Collaboration – How to stay in touch better: PRIME, MURPA, research in general?
  – PRAGMA Institute @ PRAGMA 19

PRAGMA 19 Breakout Sessions

• Day 1
  – Joint session with Resources WG

• Prof. Kang from Konkuk Univ reported on plant pathogen structural proteomics and drug discovery activity

– Expressed support for open source and free cloud services for drug discovery

– Worried about the huge demand it may create on any service provider

» What’s the economic model? What’s the accounting mechanism? Not something we are worrying about right now, but hopefully soon.

Look around session

• Day 2
  – Presentation by HKU (Bao et al.) on H1N1/H5N1 expression profiling using RNA-seq
  – Presentation by JLU (Xi et al.) on the CSF4-ResourceManager Opal implementation
  – Presentation by Kookmin Univ on M*Grid

Look ahead session

• Day 2
  – Duckling portal as a new-generation user portal
    • Current focus: better user management, online editing, status notification
    • Possible features:
      – Support for Opal services? Compute cloud access?
      – Support for larger data sizes? Or data cloud access?
      – Support for OpenID? Social network access?
      – Continued support for RIMES?

Looking ahead

– M*Grid portal
  • Current status: pending deployment in the PLSI e-science project, with a Gfarm filesystem browser
  • Possible features:
    – Duckling portal as the new portlet framework?
    – Possible metascheduler in resource selection?
    – Possible Opal service support? The M*Grid job execution environment is quite feature-rich and specific to simulation jobs; can Opal service support provide more benefits?

Looking ahead

• CSF4
  – Current focus: CSF4 support for Opal services (Globus may no longer be needed for job execution), a cloud service metascheduler, bug fixes, and the release of 4.0.6
  – Possible features: more efficient/advanced resource selection policies
• Gfarm
  – Current focus: Gfarm 2.4 deployment and integration with Opal 2.3

Looking ahead

• NBCR CADD
  – Current focus: release of 0.1 beta, documentation, and the RCS rescoring workflow
  – Possible features:
    • Data cloud service
    • Metadata and job history

Strategies

• Intra-WG: Student exchange, more regular joint meetings through green technology.

• Inter-WG: Give resources on demand a real name by making demands
  – AFG VMs on demand in the PRAGMA grid and on EC2
  – Stable PRAGMA data cloud service
  – Stable PRAGMA compute cloud service
  – PRAGMA Duckling portal
  – Engage more scientific researchers and establish more diverse use cases for cloud services
