Page 1

25 years of High Performance Computing:
An Application Perspective

Jackson State University Internet Seminar
September 28 2004

Geoffrey Fox
Computer Science, Informatics, Physics
Pervasive Technology Laboratories
Indiana University Bloomington IN 47401
[email protected]
http://www.infomall.org  http://www.grid2002.org

Page 2

Personal Perspective I
• So I understood high performance computing in detail from 1980-1995
  • Hypercube, HPF, CRPC, sundry Grand Challenges
  • I used HPT (Holes in Paper Tape) as an undergraduate
• I summarized applications for the "Source Book of Parallel Computing" in 2002 and had 3 earlier books
• I tried (and have failed so far) to develop Java as a good high performance computing language from 1996-2003
• My last grant in this area developed HPJava (http://www.hpjava.org), but it was small and ended in 2003
• I have watched DoD scientists develop parallel code in their HPC Modernization Program
• I have worked closely with NASA on earthquake simulations from 1997 to now

Page 3

Personal Perspective II
• I have studied broadly the requirements and best practice in biology and complex systems (e.g. critical infrastructure and network) simulations
• Nearly all my research is nowadays in Grid (distributed) computing
• I have struggled to develop computational science as an academic discipline
• I taught classes in parallel computing and computational science to a dwindling audience, with the last ones in 1998 and 2000 and a new one next semester to JSU!
• I read High End Computing Revitalization reports/discussions and remembered Petaflop meetings
• I will discuss views from the "wilderness" of new users and new applications

Page 4

Some Impressions I
• Computational Science is highly successful: simulations in 1980 were "toy 2D explorations" and today we have full 3D multidisciplinary simulations with magnificent visualization
  • The 128-node hypercube in 1983-85 had about 3 megaflops of performance, but it did run at 80% efficiency
  • LANL told us about multigrid in 1984
  • Today's highest-end machines, including the Earth Simulator, can realize teraflop performance at 1-10% of peak speed
  • The whole talk could be devoted to descriptions of these simulations and their visualizations
• Curiously, Japan seems to view the Earth Simulator as a failure (it didn't predict anything special) whereas the US is using it as motivation for several large new machines
• Some industry has adopted HPC (oil, drug) but runs at modest capability, and most of the action is in capacity computing and embarrassingly parallel computations (finance, biotech)
  • Aerospace is in between

Page 5

Some Impressions II
• There is a group of users with HPC knowledge at their fingertips who can use current hardware with great effectiveness and maximum realistic efficiency
• I suspect in most fields the knowledge of "average" users is at best an ability to use MPI crudely, and their use of machines will be good only if they are wise enough to use good libraries like PETSc
  • From 1980-95, users were "early adopters" willing to endure great pain; today we need to support a broader range of mainstream users
  • The "strategy for HPC" is different for new users and new applications
• Computer Science students (at universities I have been at) have little interest in algorithms or software for parallel computing
• Increasing gulf between the Internet generation raised on Python and Java and the best tools (Fortran, C, C++) of HPC
  • Matlab and Mathematica represent another disparate approach
  • Java Grande was meant to address this but failed

Page 6

Top 10 HPC Machines, June 2004

Page 7

Top500 from Dongarra (Flops)

Page 8

Current HPC Systems

Page 9

Memory Bandwidth/FLOP

Characteristics of Commodity Networks

Page 10

Some Impressions III
• At 100,000 ft, the situation today in HPC is not drastically different from that expected around 1985
  • Simulations getting larger in size and sophistication
  • Move from regular to adaptive irregular data structures
  • Growing importance of multidisciplinary simulations
  • Perhaps Moore's law has continued and will continue for longer than expected
  • Computation reasonably respected as a science methodology
• I expected more performance increase from explicit parallelism and less from more sophisticated chips
  • i.e. I expected all machines (PCs) to be (very) parallel and software like Word to be parallel
  • I expected 10^5 to 10^6-way, not 10^4-way, parallelism in high-end supercomputers, with nCUBE/Transputer/CM2 plus Weitek style architectures

Page 11

Some Impressions IV
• So parallel applications succeeded roughly as expected, but the manner was a little different
• As expected, essentially all scientific simulations could be parallelized, and the CS/Applied Math/Application community has developed remarkable algorithms
  • As noted, many scientists are unaware of them today, and some techniques like adaptive meshes and multipole methods are not easy to understand and use
  • The field of parallel algorithms has been so successful that it has almost put itself out of business
• The parallel software model MPI is roughly the same as the "mail box communication" system described, say, in a 1980 memo by myself and Eugene Brookes (LLNL)
  • Even in 1980 we thought it pretty bad
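The "mail box" picture survives essentially unchanged in MPI's send/receive pair. As a minimal sketch (mine, not from the talk), assuming Python with mpi4py installed, a two-process exchange looks like this:

```python
# Minimal sketch of the "mail box" message-passing model that MPI standardized.
# Assumes mpi4py is installed; run with: mpiexec -n 2 python mailbox.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank == 0:
    # Process 0 drops a message in process 1's "mail box"
    comm.send({"step": 0, "boundary": [1.0, 2.0, 3.0]}, dest=1, tag=11)
elif rank == 1:
    # Process 1 blocks until the message arrives, then uses it
    msg = comm.recv(source=0, tag=11)
    print("rank 1 received boundary values:", msg["boundary"])
```

Rank 0 posts a message and rank 1 blocks until it arrives; that is essentially the whole of the 1980 model.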

Page 12

Why 10^6 Processors Should Work I?
• In days gone by, we derived formulae for overhead
• Speedup S = efficiency ε × number of processors N
• ε = 1/(1 + f), where f is the overhead
• f comes from communication cost, load imbalance, and any difficulties from the parallel algorithm being ineffective
• f(communication) = constant × t_comm / (n^(1/d) × t_calc)
• This worked in 1982 and, in analyses I have seen recently, it still works
(Figure illustrating t_comm and t_calc)

Page 13

Why 10^6 Processors Should Work II?
• f(communication) = constant × t_comm / (n^(1/d) × t_calc)
• Here n is the grain size and d (often 3) is the (geometric) dimension
• So this supports scaled speed-up: the efficiency stays constant if one scales the total problem size nN proportionally to the number of processors N
• As t_comm/t_calc is naturally independent of N, there seems no obvious reason that one can't scale parallelism to large N for large problems at fixed n
• However, many applications complain about large N, say their applications won't scale properly, and demand the less cost-effective architecture with fewer but more powerful nodes
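A small numerical illustration of this scaled speed-up argument (my own sketch; the constant and the t_comm/t_calc ratio below are made-up illustrative values, not measurements). Because efficiency depends only on the grain size n, it stays flat as N grows with n held fixed:

```python
# Sketch of the overhead model: f = c * (t_comm / t_calc) / n**(1/d),
# efficiency eps = 1 / (1 + f), speedup S = eps * N.
# The constant c and the t_comm/t_calc ratio are illustrative only.

def efficiency(n, d=3, c=1.0, tcomm_over_tcalc=10.0):
    f = c * tcomm_over_tcalc / n ** (1.0 / d)
    return 1.0 / (1.0 + f)

grain = 10_000          # grid points per processor, held fixed (scaled speed-up)
for N in [1_000, 10_000, 100_000, 1_000_000]:
    eps = efficiency(grain)
    print(f"N = {N:>9,}  total points = {grain * N:>13,}  "
          f"efficiency = {eps:.2f}  speedup = {eps * N:,.0f}")
```

The printed efficiency is identical for every N, which is exactly the point: at fixed grain size, nothing in the overhead formula grows with the processor count.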

Page 14

Some Impressions V
• I always thought of parallel computing as a map from an application through a model to a computer
• I am surprised that modern HPC computer architectures do not clearly reflect the physical structure of most applications
  • After all, Parallel Computing Works (1994) because Mother Nature and Society (which we are simulating) are parallel
• GRAPE and earlier particle-dynamics machines successfully match the special characteristics (low memory, communication bandwidth) of O(N^2) algorithms
• Of course vectors were introduced to reflect natural scientific data structures
• Note irregular problems still have geometrical structure even if no constant-stride long vectors
• I think the mismatch between hardware and problem architecture reflects software (languages)

Page 15

Seismic Simulation of Los Angeles Basin
• This is a (sophisticated) wave equation similar to the Laplace example: you divide Los Angeles geometrically and assign a roughly equal number of grid points to each processor
(Figure: computer with 4 processors; problem represented by grid points and divided into 4 domains)
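As a toy version of that geometric decomposition (my sketch, not the actual seismic code; the grid size and the 2 x 2 processor layout are illustrative), a regular 2D grid of points can be split into four equal blocks, one per processor:

```python
import numpy as np

# Toy geometric domain decomposition: a 2D grid of points split into a
# 2 x 2 arrangement of domains, one per processor, with equal point counts.
nx, ny = 512, 512          # illustrative grid size, not the real LA basin mesh
px, py = 2, 2              # 4 processors arranged 2 x 2

domains = {}
for p in range(px * py):
    i, j = divmod(p, py)
    x0, x1 = i * nx // px, (i + 1) * nx // px
    y0, y1 = j * ny // py, (j + 1) * ny // py
    domains[p] = (slice(x0, x1), slice(y0, y1))

field = np.zeros((nx, ny))
for p, (sx, sy) in domains.items():
    print(f"processor {p}: owns {field[sx, sy].size} grid points")
# Each processor updates its own block and exchanges only the boundary
# rows/columns with its neighbours each time step (the t_comm term).
```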

Page 16

Irregular 2D Simulation -- Flow over an Airfoil
• The Laplace grid points become finite element mesh nodal points arranged as triangles filling space
• All the action (triangles) is near the wing boundary
• Use domain decomposition, but no longer equal area; rather, equal triangle count

Page 17

BlueGene/L has the Architecture I Expected to be Natural
• STOP PRESS: 16000-node BlueGene/L regains #1 TOP500 position, 29 Sept 2004

Page 18

BlueGene/L Fundamentals
• Low-complexity nodes give more flops per transistor and per watt
• The 3D interconnect supports many scientific simulations, as nature as we see it is 3D

Page 19

1987 MPP
• 1024-node full system with hypercube interconnect

Page 20

Prescott has 125 Million Transistors
• Compared to the nCUBE: 100X clock, 500X density
• 50000X potential peak performance improvement
• Probably more like 1000X realized performance improvement

Page 21

1993 Top500
• 1993-2004: number of nodes 4X, performance 1000X

Page 22

Performance per Transistor

Page 23

Some Impressions VI
• Two key features of today's applications:
  • Is the simulation built on fundamental equations or on phenomenological (coarse-grained) degrees of freedom?
  • Is the application deluged with interesting data?
• Most HPCC activity in 1990-2000 dealt with applications like QCD, CFD, structures, astrophysics, quantum chemistry, material science and neutron transport, where reasonably accurate information is available to describe the basic degrees of freedom
• The classic model is to set up the numerics of "well established equations" (e.g. Navier-Stokes) and solve them with known boundary values and initial conditions
• Many interesting applications today have unknown boundary conditions, initial conditions and equations
  • They have a lot of possibly streaming data instead
• For this purpose, the goal of Grid technology is to manage the experimental data

Page 24

Data Deluged Science
• In the past, we worried about data in the form of parallel I/O or MPI-IO, but we didn't consider it as an enabler of new algorithms and new ways of computing
• Data assimilation was not central to HPCC; ASC was set up because one didn't want test data!
• Now particle physics will get 100 petabytes from CERN
  • Nuclear physics (Jefferson Lab) is in the same situation
  • Use around 30,000 CPUs simultaneously 24x7
• Weather, climate, solid earth (EarthScope)
• Bioinformatics curated databases (biocomplexity has only 1000's of data points at present)
• Virtual Observatory and SkyServer in astronomy
• Environmental sensor nets

Page 25

Weather Requirements

Page 26

Data Deluged Science Computing Paradigm
(Diagram relating Data, Information and Ideas through Simulation, Model, Assimilation, Reasoning and Datamining, spanning Computational Science and Informatics)

Page 27

Virtual Observatory Astronomy Grid: Integrate Experiments
(Image panels: Radio, Far-Infrared, Visible, Visible + X-ray, Dust Map, Galaxy Density Map)

Page 28

DAME: Data Deluged Engineering
• Rolls Royce and the UK e-Science Program: Distributed Aircraft Maintenance Environment
• ~Gigabyte per aircraft per engine per transatlantic flight; ~5000 engines
(Diagram: in-flight data sent via a ground station and a global network such as SITA to the Engine Health (Data) Center, the airline and the maintenance centre; Internet, e-mail, pager)

Page 29

USArray Seismic Sensors

Page 30

Site-specific Irregular Scalar Measurements and Constellations for Plate Boundary-Scale Vector Measurements
(Figure panels: topography (1 km), stress change, earthquakes, PBO, ice sheets, volcanoes; examples include Long Valley CA, Northridge CA, Hector Mine CA and Greenland)

Page 31

Geoscience Research and Education Grids
(Diagram: SERVOGrid linking repositories/federated databases, research simulations, an analysis and visualization portal, data filter services, field trip and streaming data from sensors, discovery and customization services, and an education grid computer farm, from research to education)

Page 32

SERVOGrid Requirements
• Seamless access to data repositories and large-scale computers
• Integration of multiple data sources, including sensors, databases and file systems, with the analysis system
  • Including filtered OGSA-DAI (Grid database access)
• Rich metadata generation and access, with SERVOGrid-specific schema extending OpenGIS (Geography as a Web service) standards and using the Semantic Grid
• Portals with a component model for user interfaces and web control of all capabilities
• Collaboration to support world-wide work
• Basic Grid tools: workflow and notification
• NOT metacomputing

Page 33

Non Traditional Applications: Biology
• At a fine scale we have molecular dynamics (protein folding) and at the coarsest scale CFD (e.g. blood flow) and structures (body mechanics)
• A lot of the interest is in between these scales, with:
  • Genomics: largely pattern recognition or data mining
  • Subcellular structure: reaction kinetics, network structure
  • Cellular and above (organisms, biofilms), where cell structure matters: Cellular Potts Model
  • Continuum organ models, blood flow etc.
  • Neural networks
• Cells, neurons and genes are structures defining the biology simulation approach in the same way the boundary layer defines CFD
• Data mining can be considered as a special case of a simulation where the model is the "pattern to be looked for" and the data set determines the "dynamics" (where the pattern is)

Page 34

Non Traditional Applications: Earthquakes
• We know the dynamics at a coarse level (seismic wave propagation) and somewhat at a fine scale (granular physics for friction)
• Unknown details of the constituents, and the sensitivity of phase transitions (earthquakes) to detail, make it hard to use classical simulation methods to forecast earthquakes
• The data deluge (seismograms, dogs barking, SAR) again does not directly tell you the friction laws etc. needed for classic simulations
• Approaches like "pattern informatics" combine data mining with simulation
  • One is looking for "dynamics" of "earthquake signals" to see if the "big one" is preceded by a certain structure in small quakes or other phenomenology

Page 35

Computational Infrastructure for "Old and New" Applications
• The Earth Simulator's performance and continued leadership of the TOP500 has prompted several US projects to regain the number 1 position
• There are scientific fields (cosmology, quantum chromodynamics) which need very large machines
• There are "national security" applications (such as stockpile stewardship) which also need the biggest machines available
• However, much of the most interesting science does not
  • The "new" fields just need more work to be able to run high-resolution studies
  • Scientists typically prefer several "small" (100-500 node) clusters with dedicated "personal" management to single large machines

Page 36

High-End Revitalization and the Crusades
• "RADICAL CHANGE MAY NOT BE THE ANSWER: RESPONDING TO THE HEC", HPCwire 108424, September 24, 2004
  • "Dear High-End Crusader ….."
• There are various efforts aimed at revitalizing HPC
  • http://www.cra.org/Activities/workshops/nitrd/
• Some involve buying large machines
• Others are investigating new approaches to HPC
  • In particular addressing the embarrassing software problem
  • Looking at new architectures
• I have not seen many promising new software ideas
  • We cannot, it seems, express parallelism in a way that is convenient for users and easy to implement
• I think hardware which is simpler and has more flops per transistor will improve the performance of several key science areas, but not all

Page 37

Non Traditional Applications: Critical Infrastructure Simulations
• These include electrical/gas/water grids and the Internet, transportation, and cell/wired phone dynamics
• One has some "classic SPICE style" network simulations in areas like the power grid (although load and infrastructure data are incomplete)
  • 6000 to 17000 generators
  • 50000 to 140000 transmission lines
  • 40000 to 100000 substations
• Substantial DoE involvement through DHS

Page 38

Non Traditional Applications: Critical Infrastructure Simulations
• Activity data for people/institutions are essential for detailed dynamics, but again these are not "classic" data; they need to be "fitted" in data-assimilation style in terms of some assumed lower-level model
  • They tell you the goals of people but not their low-level movement
• Disease and Internet virus spread and social network simulations can be built on dynamics coming from infrastructure simulations
  • Many results, like the "small world" Internet connection structure, are qualitative, and it is unclear if they can be extended to detailed simulations
  • A lot of interest in (regulatory) networks in biology

Page 39

(Non) Traditional Structure
1) Traditional: known equations plus boundary values
2) Data assimilation: somewhat uncertain initial conditions and approximations corrected by data assimilation
3) Data deluged science: phenomenological degrees of freedom swimming in a sea of data
(Diagram labels: Known Data; Prediction; Known Equations on Agreed DoF; Phenomenological Degrees of Freedom Swimming in a Sea of Data)

Page 40

Data Deluged Science Computing Architecture
(Diagram: distributed data filters massage data for an HPC simulation; the Grid ties together OGSA-DAI Grid services, Grid data assimilation, other Grid and Web services, and analysis, control and visualization)

Page 41

Data Assimilation
• Data assimilation implies one is solving some optimization problem which might have a Kalman-filter-like structure
• Due to the data deluge, one will become more and more dominated by the data (N_obs much larger than the number of simulation points)
• The natural approach is to form, for each local (position, time) patch, the "important" data combinations, so that the optimization doesn't waste time on large-error or insensitive data
• Data reduction is done in a naturally distributed fashion, NOT on the HPC machine, as distributed computing is most cost effective if the calculations are essentially independent
  • Filter functions must be transmitted from the HPC machine

  min over Theoretical Unknowns of  Σ_{i=1}^{N_obs} ( Data_i(position, time) − Simulated Value_i )^2 / Error_i^2
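A minimal numerical sketch of that objective (mine, with synthetic data; a toy linear observation operator H stands in for the real forward simulation, and numpy's least-squares solver does the minimization):

```python
import numpy as np

# Sketch of the assimilation objective: minimize over the theoretical unknowns
#   sum_i ( data_i(position, time) - simulated_value_i )^2 / error_i^2
# Here the "simulation" is a toy linear model H @ unknowns; real codes would
# run the forward model (or a Kalman-filter update) instead.
rng = np.random.default_rng(0)

n_obs, n_unknowns = 1000, 5                   # data deluge: N_obs >> unknowns
H = rng.normal(size=(n_obs, n_unknowns))      # toy observation operator
truth = np.array([1.0, -2.0, 0.5, 3.0, 0.0])
error = 0.1 * np.ones(n_obs)                  # per-observation error estimate
data = H @ truth + error * rng.normal(size=n_obs)

# Weighted least squares: scale each row by 1/error and solve
A = H / error[:, None]
b = data / error
unknowns, *_ = np.linalg.lstsq(A, b, rcond=None)
print("recovered unknowns:", np.round(unknowns, 2))
```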

Page 42

Distributed Filtering
(Diagram: geographically distributed sensor patches feed data filters on a distributed machine; each filter reduces N_obs(local patch) observations to N_filtered(local patch); the HPC machine sends the needed filter and receives the filtered data)
• N_obs(local patch) >> N_filtered(local patch) ≈ Number_of_Unknowns(local patch)
• In the simplest approach, the filtered data are obtained by linear transformations on the original data, based on a Singular Value Decomposition of the least-squares matrix
• Factorize the matrix into a product over local patches
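A sketch of that per-patch SVD reduction (my illustration; the sizes are made up, and a random local observation matrix H_patch stands in for the real least-squares matrix):

```python
import numpy as np

# Per-patch data reduction sketch: keep only the data combinations that the
# local least-squares matrix is actually sensitive to, so that
# N_filtered(local patch) ~ number of local unknowns << N_obs(local patch).
rng = np.random.default_rng(1)

n_obs_patch, n_unknowns_patch = 5000, 8        # illustrative sizes
H_patch = rng.normal(size=(n_obs_patch, n_unknowns_patch))
obs_patch = rng.normal(size=n_obs_patch)       # raw sensor data for this patch

# SVD of the local observation matrix; U's leading columns define the filter.
U, s, Vt = np.linalg.svd(H_patch, full_matrices=False)
filter_matrix = U.T                            # (n_unknowns_patch, n_obs_patch)

# This is what the distributed machine would send back to the HPC machine:
filtered_obs = filter_matrix @ obs_patch       # length n_unknowns_patch
filtered_H = filter_matrix @ H_patch           # small dense matrix
print("reduced", n_obs_patch, "observations to", filtered_obs.size)
```

The filter itself is small, so shipping it to each patch and returning only the filtered combinations keeps the HPC machine out of the raw-data path.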

Page 43

Some Questions for Non Traditional Applications
• No systematic study of how best to represent data deluged sciences without known equations
• Obviously data assimilation is very relevant
• Role of Cellular Automata (CA) and refinements like "A New Kind of Science" by Wolfram
  • Can CA or Potts models parameterize any system?
• Relationship to back propagation and other neural network representations
• Relationship to "just" interpolating data and then extrapolating a little
• Role of uncertainty analysis: everything (equations, model, data) is uncertain!
• Relationship of data mining and simulation
• A new trade-off: how to split funds between sensors and simulation engines

Page 44

Some Impressions VII
• My impression is that the knowledge of how to use HPC machines effectively is not broadly distributed
  • Many current users are less sophisticated than you were in 1981
• Most simulations are still performed on sequential machines, with approaches that make it difficult to parallelize
  • Code has to be re-engineered to use MPI, a major "productivity" issue
• The parallel algorithms in new areas are not well understood, even though they are probably similar to those already developed
  • The equivalent of multigrid (multiscale) is not used, again mainly due to software engineering issues: it's too hard
• The trade-off between time-stepped and event-driven simulations is not well studied for the new generation of network (critical infrastructure) simulations

Page 45

Some Impressions VIII
• I worked on Java Grande partly for the obvious possible advantages of Java over Fortran/C++ as a language, but also so HPC could better leverage the technologies and intellectual capital of the Internet generation
• I still think HPC will benefit from:
  A) Building environments similar to those in the Internet world
    • Why would somebody grow up using Internet goodies and then switch to Fortran and MPI for their "advanced" work?
  B) Always asking when to use special HPC and when commodity software/architectures can be used
    • Python is often misused IMHO, and standards like HPF and MPI don't properly discuss hybrid HPC/commodity systems and their relation
    • The rule of the Millisecond
• I still think new languages (or dialects) that bridge simulation and data, and the HPC and commodity worlds, are useful

Page 46

Interaction of Commodity and HPC Software and Services
• Using commodity hardware or software obviously
  • Saves money, and
  • Broadens the community that can be involved, e.g. base a parallel language on Java or C# to involve the Internet generation
• Technologies roughly divide by communication latency (I can get high bandwidth in all cases? e.g. Web Services and SOAP can use GridFTP and parallel streams as well as slow HTTP protocols)
  • >1 millisecond latency: message-based services
  • 10-1000 microseconds: method-based scripting
  • 1-20 microseconds: MPI (often used when there are better solutions)
  • <1 microsecond: inlining, optimizing compilers etc.?
• To maximize re-use and eventual productivity, use the approach with the highest acceptable latency; see the sketch below
  • Only 10% of code is the HPC part?
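One way to read the rule of the Millisecond is as a lookup from the latency a coupling can tolerate to the least specialized technology that still meets it. A toy encoding of the tiers above (my sketch; the thresholds come from the list, while the function and names are hypothetical):

```python
# Toy encoding of the latency tiers above: pick the highest-latency (most
# commodity-friendly) technology whose latency is still acceptable.
TIERS = [                  # (max acceptable latency in seconds, technology)
    (1e-3, "message-based services (Web services, SOAP)"),
    (1e-5, "method-based scripting"),           # ~10-1000 microseconds
    (1e-6, "MPI"),                              # ~1-20 microseconds
    (0.0,  "inlining / optimizing compiler"),   # < 1 microsecond
]

def choose_technology(required_latency_s: float) -> str:
    """Return the least specialized technology meeting the latency requirement."""
    for threshold, tech in TIERS:
        if required_latency_s >= threshold:
            return tech
    return TIERS[-1][1]

print(choose_technology(0.5))    # loose coupling -> commodity services
print(choose_technology(5e-6))   # tight coupling -> MPI
```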