Grid Resource Brokering and Cost-based Scheduling With Nimrod-G and Gridbus Case Studies Rajkumar Buyya Cloud Computing and Distributed Systems (CLOUDS) Lab. The University of Melbourne Melbourne, Australia www.cloudbus.org


Jan 12, 2016

Transcript
  • Grid Resource Brokering and Cost-based Scheduling With Nimrod-G and Gridbus Case Studies
    Rajkumar Buyya, Cloud Computing and Distributed Systems (CLOUDS) Lab, The University of Melbourne, Melbourne, Australia, www.cloudbus.org

    *

    Agenda
    Introduction to Grid Scheduling
    Application Models and Deployment Approaches
    Economy-based Computational Grid Scheduling
    Nimrod-G -- Grid Resource Broker
    Scheduling Algorithms and Experiments on the World Wide Grid testbed
    Economy-based Data-Intensive Grid Scheduling
    Gridbus -- Grid Service Broker
    Scheduling Algorithms and Experiments on the Australian Belle Data Grid testbed

  • Grid Scheduling: Introduction

    *

    Grid Resources and Scheduling (diagram): a Grid Resource Broker sits between the user application and the resources. It consults a Grid Information Service and submits work to Local Resource Managers, which control a single CPU (time-shared allocation), an SMP (time-shared allocation), or clusters (space-shared allocation).

    *

    Grid Scheduling
    Grid scheduling involves resources distributed over multiple administrative domains:
    Selecting one or more suitable resources (may involve co-scheduling)
    Assigning tasks to the selected resources and monitoring execution
    Grid schedulers are global schedulers:
    They have no ownership or control over resources
    Jobs are submitted to Local Resource Managers (LRMs) as a user
    LRMs take care of the actual execution of jobs

    *

    Example Grid Schedulers
    Nimrod-G (Monash University): computational Grid, economy-based
    Condor-G (University of Wisconsin): computational Grid, system-centric
    AppLeS (University of California, San Diego): computational Grid, system-centric
    Gridbus Broker (University of Melbourne): Data Grid, economy-based

    *

    Key Steps in Grid Scheduling
    Source: J. Schopf, Ten Actions When SuperScheduling, OGF Document, 2003.

    *

    Movement of Jobs Between the Scheduler and a Resource
    Push model: the manager pushes jobs from the queue to a resource. Used in clusters and Grids.
    Pull model: a P2P agent requests a job for processing from a job pool. Commonly used in P2P systems such as Alchemi and SETI@home.
    Hybrid model (both push and pull): the broker deploys an agent on resources, and the agent pulls jobs. May be used in Grids (e.g., the Nimrod-G system). The broker may also pull data from the user host or from separate data hosts holding distributed datasets (e.g., Gridbus Broker).
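    As a rough illustration (all names and the squaring "work" are hypothetical, not any real system's API), the pull model can be sketched as agents that repeatedly request work from a shared job pool until it is empty:

```python
import queue
import threading

def pull_agent(job_pool, results, name):
    """Hypothetical pull-model agent: it requests jobs from a shared
    pool and processes them; the manager never pushes work to it."""
    while True:
        try:
            job = job_pool.get_nowait()   # agent pulls the next job
        except queue.Empty:
            return                        # pool drained: agent retires
        results.append((name, job * job))  # stand-in for real processing
        job_pool.task_done()

pool = queue.Queue()
for j in range(10):
    pool.put(j)

results = []
agents = [threading.Thread(target=pull_agent, args=(pool, results, i))
          for i in range(3)]
for a in agents:
    a.start()
for a in agents:
    a.join()
print(len(results))  # all 10 jobs processed
```

    The push model is the mirror image: the manager would pick an agent and hand it a job, which requires the manager to track each agent's state.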

    *

    Example Systems

    Job Dispatch Architecture | Push | Pull | Hybrid
    Centralised   | PBS, SGE, Condor, Alchemi (when in dedicated mode) | Windmill from CERN (used in the ATLAS physics experiment) | Condor (as it supports non-dedicated, owner-specified policies)
    Decentralised | Nimrod-G, AppLeS, Condor-G, Gridbus Broker | Alchemi, SETI@home, United Devices, P2P systems, Aneka | Nimrod-G (pushes a Grid Agent, which pulls jobs)

  • Application Models and their Deployment on Global Grids

    *

    Grid Applications and Parametric Computing
    Bioinformatics: drug design / protein modelling
    Sensitivity experiments on smog formation
    Natural language engineering
    Ecological modelling: control strategies for cattle tick
    Electronic CAD: field-programmable gate arrays
    Computer graphics: ray tracing
    High-energy physics: searching for rare events
    Finance: investment risk analysis
    VLSI design: SPICE simulations
    Aerospace: wing design
    Network simulation
    Automobile: crash simulation
    Data mining
    Civil engineering: building design
    Astrophysics

    *

    How to Construct and Deploy Applications on Global Grids?

    Three options/solutions:
    1. Manual scheduling: use pure Globus commands
    2. Application-level scheduling: build your own distributed application and scheduler
    3. Application-independent scheduling: Grid brokers decouple application construction from scheduling

    Goal: perform a parameter sweep (bag of tasks), utilising distributed resources, within T hours (or earlier) at a cost not exceeding $M.

    *

    Option 1: Using pure Globus commands
    Do everything yourself, manually. Total cost: $???

    *

    Option 2: Build a distributed application and an application-level scheduler
    Build the application and scheduler on a case-by-case basis (e.g., the MPI approach). Total cost: $???

    *

    Option 3: Compose and deploy using brokers (the Nimrod-G and Gridbus approach)
    Compose applications and submit them to the broker, defining QoS requirements; the broker gives an aggregate view of the Grid. Compose, submit & play!

  • The Nimrod-G Grid Resource Broker and Economy-based Grid Scheduling [Buyya, Abramson, Giddy, 1999-2001]
    Deadline and Budget Constrained Algorithms for Scheduling Applications on Computational Grids

    *

    Nimrod-G: A Grid Resource Broker
    A resource broker (implemented in Python) for managing, steering, and executing task-farming (parameter sweep) applications on global Grids. It allows dynamic leasing of resources at runtime based on their quality, cost, and availability, and on users' QoS requirements (deadline, budget, etc.).
    Key features:
    A declarative parametric programming language
    A single window to manage and control experiments
    Persistent and programmable task-farming engine
    Resource discovery
    Resource trading
    (User-level) scheduling and predictions
    Generic dispatcher and Grid agents
    Transportation of data and results
    Steering and data management
    Accounting

    *

    A Glance at the Nimrod-G Broker (architecture diagram): Nimrod/G clients talk to the Nimrod/G engine, which uses a Schedule Advisor, a Trading Manager, and a Grid Explorer (querying Grid Information Servers). The Grid Dispatcher submits jobs through Grid middleware (Globus, Legion, Condor, etc.) to Globus-, Legion-, and Condor-enabled nodes, each running a local Resource Manager (RM) and a Trade Server (TS). See the HPC Asia 2000 paper!

    *

    Nimrod/G Grid Broker Architecture (layered diagram): Nimrod-G clients (P-Tools GUI/scripting for parameter modelling, legacy applications, customised apps such as ActiveSheet, monitoring and steering portals) sit on top of the Nimrod-G broker, comprising the farming engine, dispatcher and actuators, schedule advisor with pluggable algorithms, trading manager, grid explorer, and G-Bank. The broker drives middleware (Globus, Legion, Condor, P2P, GTS) over the fabric: computers, storage, networks, instruments, and local schedulers (Condor/LL/NQS on PCs, workstations, and clusters, even a radio telescope). The middleware layer forms the "hourglass" of the architecture.

    *

    A Nimrod/G Monitor (screenshot): shows the deadline, the Legion hosts, and the Globus hosts; the host bezek is in both the Globus and Legion domains.

    *

    User Requirements: Deadline/Budget

    *

    Nimrod/G Interactions (diagram): interactions among the user node, the Grid node, and the compute node.

    *

    Adaptive Scheduling Steps (cycle): discover resources; establish rates; compose & schedule; distribute jobs; evaluate & reschedule (are the requirements met, given the remaining jobs, deadline, and budget?); discover more resources if needed.
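    A toy sketch of this adaptive loop (all function names, prices, and numbers are illustrative assumptions, not the Nimrod-G implementation):

```python
def adaptive_schedule(jobs, deadline_steps, budget, resources, discover_more):
    """Illustrative adaptive loop: each step distributes jobs to the known
    resources, then evaluates whether the remaining work can still finish
    by the deadline; if not, it tries to discover more resources."""
    spent = 0
    for step in range(deadline_steps):
        if not jobs:
            break
        # distribute: one job per resource this step, if the budget allows
        for price in list(resources):
            if jobs and spent + price <= budget:
                jobs.pop()
                spent += price
        # evaluate: can the remaining jobs finish in the remaining steps?
        remaining_steps = deadline_steps - step - 1
        if jobs and len(jobs) > remaining_steps * len(resources):
            resources = resources + discover_more()  # seek extra capacity
    return jobs, spent

# 10 jobs, 4 steps, budget 100 G$, two resources priced 1 and 2 G$/job;
# discovery can add one more resource priced 3 G$/job
left, cost = adaptive_schedule(list(range(10)), 4, 100, [1, 2], lambda: [3])
print(left, cost)
```

    The key point the sketch preserves is that evaluation happens every cycle, so discovery and rescheduling are triggered by falling behind, not planned up front.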

    *

    Deadline and Budget Constrained Scheduling Algorithms

    Algorithm/Strategy    | Execution Time (Deadline, D) | Execution Cost (Budget, B)
    Cost Opt              | Limited by D                 | Minimize
    Cost-Time Opt         | Minimize when possible       | Minimize
    Time Opt              | Minimize                     | Limited by B
    Conservative-Time Opt | Minimize                     | Limited by B, but all unprocessed jobs have a guaranteed minimum budget

    *

    Deadline and Budget-based Cost Minimization Scheduling
    1. Sort resources by increasing cost.
    2. For each resource in that order, assign as many jobs as possible to the resource without exceeding the deadline.
    3. Repeat all steps until all jobs are processed.
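    A minimal sketch of this heuristic in Python. The resource names, CPU counts, and the name -> (cost per job, CPUs) representation are assumptions for illustration only:

```python
def dbc_cost_min(n_jobs, job_minutes, deadline_minutes, resources):
    """Sketch of the cost-minimisation steps above.
    resources: name -> (cost_per_job, n_cpus) -- an assumed shape."""
    plan, remaining = {}, n_jobs
    # 1. consider resources in order of increasing cost per job
    for name, (cost, cpus) in sorted(resources.items(),
                                     key=lambda kv: kv[1][0]):
        if remaining == 0:
            break
        # 2. assign as many jobs as this resource can finish by the deadline
        capacity = (deadline_minutes // job_minutes) * cpus
        plan[name] = min(capacity, remaining)
        remaining -= plan[name]
    total = sum(count * resources[name][0] for name, count in plan.items())
    return plan, remaining, total

# illustrative numbers only (not the WWG experiment's actual CPU counts)
plan, left, cost = dbc_cost_min(
    165, 5, 120, {"Condor-Monash": (600, 7), "SGI-ISI": (2400, 10)})
```

    With these assumed numbers the cheap resource can fit all 165 jobs inside the 2-hour deadline, so the expensive one is never used; in practice expensive resources are drawn in only when the cheap ones cannot meet the deadline.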

  • Scheduling Algorithms and Experiments

    *

    World Wide Grid (WWG) testbed
    Australia (Globus + Legion, GRACE_TS; Nimrod-G + Gridbus): Melbourne U. cluster; VPAC Alpha; Solaris WS
    Europe (Globus + GRACE_TS): ZIB T3E/Onyx; AEI Onyx; Paderborn HPCLine; Lecce Compaq SC; CNR cluster; Calabria cluster; CERN cluster; CUNI/CZ Onyx; Poznan SGI/SP2; Vrije U. cluster; Cardiff Sun E6500; Portsmouth Linux PC; Manchester O3K
    Asia (Globus + GRACE_TS): Tokyo I-Tech Ultra WS; AIST (Japan) Solaris cluster; Kasetsart (Thailand) cluster; NUS (Singapore) O2K
    North America (Globus/Legion, GRACE_TS): ANL SGI/Sun/SP2; USC-ISI SGI; UVa Linux cluster; UD Linux cluster; UTK Linux cluster; UCSD Linux PCs; BU SGI IRIX
    South America (Globus + GRACE_TS): Chile cluster
    All sites connected via the Internet.

    *

    Application Composition Using the Nimrod Parameter Specification Language

    #Parameters Declaration
    parameter X integer range from 1 to 165 step 1;
    parameter Y integer default 5;

    #Task Definition
    task main
        #Copy necessary executables depending on node type
        copy calc.$OS node:calc
        #Execute program with parameter values on remote node
        node:execute ./calc $X $Y
        #Copy results file to user home node with jobname as extension
        copy node:output ./output.$jobname
    endtask

    This plan expands into 165 jobs:
    calc 1 5 -> output.j1
    calc 2 5 -> output.j2
    calc 3 5 -> output.j3
    ...
    calc 165 5 -> output.j165
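    The expansion is just an enumeration of parameter values; a sketch of what the farming engine produces (the job-record fields here are illustrative, not Nimrod's internal format):

```python
# X sweeps 1..165 (step 1), Y keeps its default of 5; each job records
# the command to run and the output file named after the job id.
jobs = [
    {"id": f"j{x}", "command": f"./calc {x} 5", "output": f"output.j{x}"}
    for x in range(1, 166)
]
print(len(jobs))  # 165
```

    Sweeping a second parameter would simply take the cross-product of both value ranges.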

    *

    Experiment Setup
    Workload: 165 jobs, each needing 5 minutes of CPU time
    Deadline: 2 hrs; budget: 396000 G$
    Strategies: 1. minimise cost; 2. minimise time
    Execution:
    Optimise cost: 115200 G$ (finished in about 2 hrs)
    Optimise time: 237000 G$ (finished in 1.25 hrs)
    In this experiment, the time-optimised schedule cost roughly double the cost-optimised one. Users can now trade off time vs. cost.

    *

    Resources Selected & Price/CPU-sec.

    Resource & Location                         | Grid services & fabric | Cost/CPU-sec | Jobs (Time_Opt) | Jobs (Cost_Opt)
    Linux cluster, Monash, Melbourne, Australia | Globus, GTS, Condor    | 2            | 64              | 153
    Linux (Prosecco), CNR, Pisa, Italy          | Globus, GTS, Fork      | 3            | 7               | 1
    Linux (Barbera), CNR, Pisa, Italy           | Globus, GTS, Fork      | 4            | 6               | 1
    Solaris/Ultas2, TITech, Tokyo, Japan        | Globus, GTS, Fork      | 3            | 9               | 1
    SGI, ISI, Los Angeles, US                   | Globus, GTS, Fork      | 8            | 37              | 5
    Sun, ANL, Chicago, US                       | Globus, GTS, Fork      | 7            | 42              | 4

    Total experiment cost (G$): Time_Opt 237000; Cost_Opt 115200
    Time to complete experiment (min.): Time_Opt 70; Cost_Opt 119

    *

    Deadline and Budget Constrained (DBC) Time Minimization Scheduling
    1. For each resource, calculate the next completion time for an assigned job, taking into account previously assigned jobs.
    2. Sort resources by next completion time.
    3. Assign one job to the first resource for which the cost per job is less than the remaining budget per job.
    4. Repeat all steps until all jobs are processed. (This is performed periodically or at each scheduling event.)
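    A sketch of this heuristic (not the Nimrod-G code; the name -> (cost per job, CPUs) shape, the resource names, and the prices are illustrative assumptions, and the budget is assumed large enough that some resource is always affordable):

```python
import heapq

def dbc_time_min(n_jobs, job_minutes, budget, resources):
    """Sketch of the time-minimisation steps above.
    resources: name -> (cost_per_job, n_cpus) -- an assumed shape."""
    # one heap entry per CPU: (next completion time, resource name)
    heap = []
    for name, (_, cpus) in resources.items():
        for _ in range(cpus):
            heap.append((job_minutes, name))
    heapq.heapify(heap)
    counts = {name: 0 for name in resources}
    spent = makespan = 0
    for job in range(n_jobs):
        per_job_budget = (budget - spent) / (n_jobs - job)
        # pop CPUs in completion-time order until one fits the budget
        skipped = []
        t, name = heapq.heappop(heap)
        while resources[name][0] > per_job_budget:
            skipped.append((t, name))
            t, name = heapq.heappop(heap)
        counts[name] += 1
        spent += resources[name][0]
        makespan = max(makespan, t)
        heapq.heappush(heap, (t + job_minutes, name))  # CPU busy again
        for item in skipped:
            heapq.heappush(heap, item)
    return counts, spent, makespan

# illustrative numbers only: 20 five-minute jobs, a generous budget,
# a cheap 4-CPU resource and a fast (but pricier) 10-CPU resource
counts, spent, makespan = dbc_time_min(
    20, 5, 100000, {"cheap": (600, 4), "fast": (2400, 10)})
```

    Unlike cost minimisation, this strategy spreads jobs across every affordable CPU, so expensive resources are used as long as the per-job budget allows, which is why the WWG time-optimised run cost about twice as much.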

    *

    Resource Scheduling for DBC Time Optimization

    [Chart: number of tasks in execution over time (in minutes) on each resource -- Condor-Monash, Linux-Prosecco-CNR, Linux-Barbera-CNR, Solaris/Ultas2-TITech, SGI-ISI, Sun-ANL -- under the time-optimised schedule.]

    Experiment Statistics

    Resource              | Cost/CPU-sec | Cost/Job | Jobs Run (Cost Opt) | Jobs Run (Time Opt) | Total Cost (Cost Opt) | Total Cost (Time Opt)
    Condor-Monash         | 2            | 600      | 153                 | 64                  | 91800                 | 38400
    Linux-Prosecco-CNR    | 3            | 900      | 1                   | 7                   | 900                   | 6300
    Linux-Barbera-CNR     | 4            | 1200     | 1                   | 6                   | 1200                  | 7200
    Solaris/Ultas2-TITech | 3            | 900      | 1                   | 9                   | 900                   | 8100
    SGI-ISI               | 8            | 2400     | 5                   | 37                  | 12000                 | 88800
    Sun-ANL               | 7            | 2100     | 4                   | 42                  | 8400                  | 88200
    Total experiment cost: 115200 (Cost Opt); 237000 (Time Opt)
    Time taken to finish experiment: 119 min. (Cost Opt); 70 min. (Time Opt)

    Experimental data:
    1. Deadline: 2 hrs
    2. No. of tasks: 165
    3. Each task is modelled to run for 5 minutes
    4. Budget: 396000 (Grid $ units)

    Formulas:
    Cost/Job = Cost_per_CPU_sec * Task_Exec_Time_Minute * 60
    Total Resource Cost = No. of Jobs Run * Cost_per_Job
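    These formulas can be checked directly against the experiment statistics; the values below are copied from that table (cost per CPU-second and jobs run per resource), with each task running 5 minutes:

```python
# name: (cost_per_cpu_sec, jobs_cost_opt, jobs_time_opt), from the table
table = {
    "Condor-Monash":         (2, 153, 64),
    "Linux-Prosecco-CNR":    (3,   1,  7),
    "Linux-Barbera-CNR":     (4,   1,  6),
    "Solaris/Ultas2-TITech": (3,   1,  9),
    "SGI-ISI":               (8,   5, 37),
    "Sun-ANL":               (7,   4, 42),
}
task_minutes = 5
# Cost/Job = Cost_per_CPU_sec * Task_Exec_Time_Minute * 60
cost_per_job = {n: c * task_minutes * 60 for n, (c, _, _) in table.items()}
# Total Resource Cost = No. of Jobs Run * Cost_per_Job, summed over resources
cost_opt_total = sum(cost_per_job[n] * j for n, (_, j, _) in table.items())
time_opt_total = sum(cost_per_job[n] * j for n, (_, _, j) in table.items())
print(cost_opt_total, time_opt_total)  # 115200 237000
```

    The totals reproduce the experiment's 115200 G$ (cost-optimised) and 237000 G$ (time-optimised) figures.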

    [Spreadsheet residue: per-minute traces of tasks in execution per resource, total CPUs, and total cost of resources in use, for both the cost-optimised and time-optimised schedules; raw data omitted.]

    *

    Resource Scheduling for DBC Cost Optimization

    [Chart: number of tasks in execution over time (in minutes) on each resource -- Condor-Monash, Linux-Prosecco-CNR, Linux-Barbera-CNR, Solaris/Ultas2-TITech, SGI-ISI, Sun-ANL -- under the cost-optimised schedule; after an initial burst, nearly all jobs run on the cheapest resource, Condor-Monash.]


    *

    Nimrod-G Summary: One of the first and most successful Grid resource brokers world-wide! The project continues to be active and is used in many e-Science applications. For recent developments, please see: http://messagelab.monash.edu.au/Nimrod

  • Gridbus BrokerDistributed Data-Intensive Application Scheduling

    *

    Gridbus Grid Service Broker (GSB): a Java-based resource broker for Data Grids (Nimrod-G focused on Computational Grids). It uses the computational economy paradigm for optimal selection of computational and data services depending on their quality, cost, and availability, and on users' QoS requirements (deadline, budget, and time/cost optimisation).
    Key Features:
    - A single window to manage and control an experiment
    - Programmable Task Farming Engine
    - Resource discovery and resource trading
    - Optimal data source discovery
    - Scheduling and predictions
    - Generic dispatcher and Grid agents
    - Transportation of data and sharing of results
    - Accounting
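    The deadline-and-budget style selection described above can be sketched as a greedy cost-minimisation pass. This is an illustrative simplification, not the actual Gridbus API: the service names, rates, and per-job times below are hypothetical.

    ```python
    # Hypothetical sketch of cost-minimising service selection under a deadline,
    # in the spirit of the broker's economy model (not the real Gridbus API).

    def cost_min_plan(services, n_jobs, deadline):
        """Fill the cheapest services first, limited by how many jobs each
        can finish (sequentially) before the deadline.

        services: list of dicts with 'name', 'rate' (G$/CPU-sec), and
                  'job_time' (estimated seconds per job on that service).
        Returns (allocation, total_cost); raises if the deadline is infeasible.
        """
        by_cost = sorted(services, key=lambda s: s["rate"] * s["job_time"])
        plan, remaining, total_cost = {}, n_jobs, 0
        for s in by_cost:
            if remaining == 0:
                break
            capacity = int(deadline // s["job_time"])  # jobs it can fit in time
            take = min(capacity, remaining)
            if take > 0:
                plan[s["name"]] = take
                total_cost += take * s["rate"] * s["job_time"]
                remaining -= take
        if remaining:
            raise ValueError("deadline cannot be met with the available services")
        return plan, total_cost
    ```

    For example, with a G$2/sec and a G$4/sec service that each need 100 s per job, 5 jobs and a 300 s deadline put 3 jobs on the cheap service and the remaining 2 on the dearer one.
    
    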

    *

    [Architecture diagram: Gridbus Broker. A user console/portal/application interface submits the workload (application, tasks T, budget $, optimisation preference) to the Gridbus Farming Engine, which works with the Schedule Advisor, Trading Manager, Grid Explorer, Record Keeper, and Grid Dispatcher, drawing on core middleware services (Grid Info Server with GIS and NWS, trade and resource servers) to run jobs on Globus-enabled nodes, data nodes located via a Data Catalog, and the Amazon EC2/S3 Cloud.]

    *

    Gridbus Broker: separating applications from the different remote service-access enablers and schedulers. [Diagram: an Application Development Interface on top; Scheduling Interfaces (Algorithm 1 ... Algorithm N); plug-in actuators for data-store access technologies (GridFTP, SRB) with single sign-on security.]

    *

    Gridbus Services for eScience applications
    Application Development Environment:
    - XML-based language for composing task-farming (legacy) applications as parameter sweep applications.
    - Task Farming APIs for new applications.
    - Web APIs (e.g., Portlets) for Grid portal development.
    - Threads-based programming interface.
    - Workflow interface and Gridbus-enabled workflow engine.
    - Grid Superscalar, in cooperation with BSC/UPC.
    Resource Allocation and Scheduling:
    - Dynamic discovery of optimal computational and data nodes that meet user QoS requirements.
    Hiding low-level Grid middleware interfaces:
    - Globus (v2, v4), SRB, Aneka, Unicore, and ssh-based access to local/remote resources managed by XGrid, PBS, Condor, SGE.
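    The task-farming model above can be illustrated with a tiny parameter-sweep sketch. Here `run_task` and the parameter names are purely illustrative stand-ins (the actual Gridbus APIs differ), and local threads stand in for remote Grid resources.

    ```python
    from concurrent.futures import ThreadPoolExecutor
    from itertools import product

    # Illustrative stand-in for a real (legacy) simulation binary.
    def run_task(alpha, beta):
        return alpha * beta

    def sweep(alphas, betas, workers=4):
        """Farm out every (alpha, beta) combination, as a parameter-sweep
        engine would; threads here stand in for remote Grid resources."""
        combos = list(product(alphas, betas))
        with ThreadPoolExecutor(max_workers=workers) as pool:
            results = list(pool.map(lambda p: run_task(*p), combos))
        return dict(zip(combos, results))
    ```

    A real task farm would replace `run_task` with a dispatch of the legacy executable to a remote node, but the fan-out/collect shape is the same.
    
    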

    *

    Drug Design Made Easy! (demo)

    *


  • A Sample List of Gridbus Broker Users

    http://www.gridbus.org

    Application | Users
    High Energy Physics: Particle Discovery | Melbourne University
    NeuroScience: Brain Activity Analysis |
    EU Data Mining Grid | DaimlerChrysler, Technion, U. Ljubljana, U. Ulster
    Kidney/Human Physiome Modelling | Melbourne Medical Faculty; Université d'Evry, France
    Finance/Investment Risk Studies: Spanish Stock Market | Universidad Complutense de Madrid, Spain

    *

    Case Study: High Energy Physics and Data Grid. The Belle Experiment, KEK B-Factory, Japan. Investigating a fundamental violation of symmetry in nature (Charge-Parity violation), which may help explain the imbalance of matter and antimatter in the universe. Collaboration: ~1000 people, 50 institutes. 100s of TB of data currently.

    *

    Case Study: Event Simulation and Analysis. B0 -> D*+ D*- Ks simulation and analysis with the Belle Analysis Software Framework (BASF). The experiment has 2 parts: generation of simulated data, and analysis of the distributed data.

    Analyzed 100 data files (30 MB each) that were distributed among the five nodes of the Australian Belle Data Grid platform.

    *

    [Map: Australian Belle Data Grid Testbed, including VPAC Melbourne.]

    *

    [Diagram: Belle Data Grid, GSP CPU service prices (G$/sec) per site: N.A. (data node), G$4, G$4, G$6, and G$2 at VPAC Melbourne.]

    *

    [Diagram: Belle Data Grid, network bandwidth prices (G$/MB) on the links between the testbed sites (values in the range 30-38), alongside the CPU prices shown on the previous slide.]

    *

    Deploying Application Scenario: a Data Grid scenario with 100 jobs, each accessing ~30 MB of remote data. Deadline: 3 hrs. Budget: G$60K. Scheduling optimisation scenarios: Minimise Time and Minimise Cost. Results:
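    The "Minimise Time" strategy can be sketched as earliest-completion-time list scheduling: each successive job goes to the service that would finish it soonest. This is a simplification of the broker's algorithm; the per-job times are illustrative, and in the real Data Grid case they would also include data-transfer time.

    ```python
    import heapq

    def time_min_plan(services, n_jobs):
        """Greedy time-minimisation: give each successive job to the
        service that would complete it earliest.

        services: list of dicts with 'name' and 'job_time' (sec per job).
        Returns (jobs-per-service, makespan in seconds).
        """
        # Heap entries: (completion time if given the next job, name, job_time).
        heap = [(s["job_time"], s["name"], s["job_time"]) for s in services]
        heapq.heapify(heap)
        counts = {s["name"]: 0 for s in services}
        makespan = 0
        for _ in range(n_jobs):
            finish, name, job_time = heapq.heappop(heap)
            counts[name] += 1
            makespan = max(makespan, finish)
            heapq.heappush(heap, (finish + job_time, name, job_time))
        return counts, makespan
    ```

    Compared with the cost-minimising pass, this deliberately ignores prices: fast but expensive services absorb most jobs, which is exactly the time-vs-cost trade-off the two optimisation scenarios explore.
    
    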

    *

    Time Minimization in Data Grids

    *

    Results: Cost Minimization in Data Grids

    *

    Observation

    Organization | Node details | Cost (G$/CPU-sec) | Jobs executed (Time-min) | Jobs executed (Cost-min)
    CS, UniMelb | belle.cs.mu.oz.au: 4 CPU, 2 GB RAM, 40 GB HD, Linux | N.A. (not used as a compute resource) | -- | --
    Physics, UniMelb | fleagle.ph.unimelb.edu.au: 1 CPU, 512 MB RAM, 40 GB HD, Linux | 2 | 3 | 94
    CS, University of Adelaide | belle.cs.adelaide.edu.au: 4 CPU (only 1 available), 2 GB RAM, 40 GB HD, Linux | N.A. (not used as a compute resource) | -- | --
    ANU, Canberra | belle.anu.edu.au: 4 CPU, 2 GB RAM, 40 GB HD, Linux | 4 | 2 | 2
    Dept of Physics, USyd | belle.physics.usyd.edu.au: 4 CPU (only 1 available), 2 GB RAM, 40 GB HD, Linux | 4 | 72 | 2
    VPAC, Melbourne | brecca-2.vpac.org: 180-node cluster (only head node used), Linux | 6 | 23 | 2

    *

    Summary and Conclusion: Application scheduling on global Grids is a complex undertaking, as systems need to be adaptive, scalable, competitive, and driven by QoS. Nimrod-G is one of the most popular Grid resource brokers for scheduling parameter sweep applications on global Grids. Scheduling experiments on the World Wide Grid demonstrate the Nimrod-G broker's ability to dynamically lease services at runtime based on their quality, cost, and availability, depending on consumers' QoS requirements. Easy-to-use tools for creating Grid applications are essential for the success of Grid Computing.

    *

    References
    - Rajkumar Buyya, David Abramson, Jonathan Giddy, Nimrod/G: An Architecture for a Resource Management and Scheduling System in a Global Computational Grid, Proceedings of the 4th International Conference on High Performance Computing in Asia-Pacific Region (HPC Asia 2000), Beijing, China, IEEE Computer Society Press, USA, 2000.
    - David Abramson, Rajkumar Buyya, and Jonathan Giddy, A Computational Economy for Grid Computing and its Implementation in the Nimrod-G Resource Broker, Future Generation Computer Systems (FGCS) Journal, Volume 18, Issue 8, Pages 1061-1074, Elsevier Science, The Netherlands, October 2002.
    - Jennifer Schopf, Ten Actions When SuperScheduling, Global Grid Forum Document GFD.04, 2003.
    - Srikumar Venugopal, Rajkumar Buyya, and Lyle Winton, A Grid Service Broker for Scheduling e-Science Applications on Global Data Grids, Concurrency and Computation: Practice and Experience, Volume 18, Issue 6, Pages 685-699, Wiley Press, New York, USA, May 2006.

    High Energy Physics (HEP) is the study of the fundamental constituents of matter and the forces between these constituents. It is called High Energy Physics because using high energies enables us to probe smaller distances and structures within matter, and also allows us to study matter as it was in the early universe, the history of matter. It is also called Particle Physics, as we deal with quanta of matter and forces and the properties associated with these.

    The study of HEP is broken into two main disciplines, theoretical and experimental. Theoretical HEP proposes theories and models to describe matter, forces, their properties, actions, and interactions. Experimental HEP constructs experiments (detectors and accelerators) to investigate matter interactions and behaviour under high-energy conditions.

    Experimental HEP can be roughly broken into 3 separate activities, whose boundaries in time and responsibility are often indistinct: the construction of detectors, which typically takes many years; the measurement or collection of data; and the analysis of this data. We will focus on using Data Grids for the analysis of data within HEP.