Page 1: Supercomputing in Fluid Mechanics (Turbulence)

Supercomputing in Fluid Mechanics (Turbulence)

Javier Jiménez

ETSI Aeronáuticos Madrid

Supercomputing ETSIA 2008

Page 2

1. Introduction to the problem

2. Present Status

3. Infrastructure

Page 3

Turbulence

Laminar Turbulent

Page 4

The effects of Turbulence

Pressure loss, Mixing, Drag, Etc.

x 100

Page 5

Why Turbulence?

Energy (pressure loss)

Viscous dissipation

CASCADE

Page 6

Why Turbulence?

Energy (pressure loss)

Viscous dissipation

CASCADE

Energy (impact)

BREAKING

Surface tension

Page 7

The Computation of Turbulence

Degrees of Freedom (grid points)

Physical MODELS of the CASCADE
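The link between the cascade and the number of grid points can be made quantitative with the standard Kolmogorov estimate (a textbook result, not stated on the slide): the ratio of the largest to the smallest turbulent scale grows as Re^(3/4), so resolving the full cascade in three dimensions needs on the order of Re^(9/4) grid points.

```python
# Standard DNS cost estimate (textbook scaling, not from the slide):
# scale separation ~ Re^(3/4) per direction, so ~ Re^(9/4) points in 3-D.

def dns_grid_points(Re: float) -> float:
    """Estimated grid points needed to resolve all scales at Reynolds number Re."""
    return Re ** (9.0 / 4.0)

for Re in (1e4, 1e6, 1e8):
    print(f"Re = {Re:.0e}: ~{dns_grid_points(Re):.1e} points")
```

This steep growth is why physical models of the cascade (rather than brute-force resolution) remain necessary.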

Page 8

Boundary Layers, Pipes, etc.

Computing (Wall) Turbulence

(Simens, 2008)

Page 9

The Atmospheric Boundary Layer

• Outer scale ~ 200 m

• Inner scale ~ 1 mm

• Outer/Inner ~ 200,000
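The slide's numbers translate directly into a grid-size estimate; cubing the scale ratio for a full three-dimensional resolution is my illustration, not a figure from the slide:

```python
# Arithmetic behind the slide: outer scale ~200 m, inner scale ~1 mm.
# Resolving that separation in all three directions (an illustrative
# worst case, not the slide's claim) needs (2e5)^3 = 8e15 grid points.

outer = 200.0   # m
inner = 1e-3    # m (1 mm)
ratio = outer / inner
print(f"Outer/Inner = {ratio:,.0f}")                  # 200,000, as on the slide
print(f"~{ratio**3:.0e} points for full 3-D resolution")
```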

Page 10

Wall Turbulence can be Computed

• 400 GB
• 1.2 TB/step
• 7M CPUh
• 2100 procs
• 6 months
• 25 TB raw data

Hoyas, Flores (2005)

Cascade range ≈ 10 !!

Page 11

Computers keep getting FASTER

Vector

Parallel

Cache

?

(x 2)/year

Page 12

What to do with Faster Computers

Do Bigger Things (higher Re)

Do the Same Things FASTER

Page 13

The State of the Art 2007

Channel Reτ=2000

(Hoyas, Flores)

Boundary Layer Reθ=2100

(Hoyas, Mizuno)

Reθ=1900 APG Boundary Layer

(Simens)

cascade

Page 14

The State of the Art 1987

• 240 MB
• 250 CPUh
• Cray X-MP
• 1 month
• 4 GB raw data

Kim, Moin, Moser (1987)
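Putting the 1987 and 2007 resource lists side by side shows how far the state of the art moved in twenty years; every figure below is taken directly from the two slides:

```python
# Resource figures quoted on the slides for the two landmark runs:
# Kim, Moin & Moser (1987) vs. Hoyas & Flores (2005).
runs = {
    "KMM 1987":   {"cpu_h": 250.0, "data_GB": 4.0},
    "Hoyas 2005": {"cpu_h": 7e6,   "data_GB": 25e3},  # 7M CPUh, 25 TB
}

cpu_growth  = runs["Hoyas 2005"]["cpu_h"]   / runs["KMM 1987"]["cpu_h"]
data_growth = runs["Hoyas 2005"]["data_GB"] / runs["KMM 1987"]["data_GB"]

print(f"CPU hours grew by ~{cpu_growth:,.0f}x")   # 28,000x
print(f"Raw data grew by  ~{data_growth:,.0f}x")  # 6,250x
```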

Page 15

Computing Turbulence is GOOD

Near-Wall Turbulence in 1987

Before KMM 1987: Streaks, Sweeps, Ejections ...

After KMM 1987: Vortices, Jets, Layers ...

Page 16

Doing Same Things Faster

10 years

Heroic (Research)

Trivial (Industrial) 1/1000
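Slide 11's doubling rate and this slide's "heroic run becomes a 1/1000-cost industrial one in 10 years" are consistent, as a one-line check shows:

```python
# Slide 11: computer speed roughly doubles every year.
# This slide: a research run becomes 1/1000 as costly after ~10 years.
# The two are consistent, since 2^10 = 1024 ~ 1000.

def speedup(years: float, doubling_period_years: float = 1.0) -> float:
    """Cumulative speedup under the slide's doubling-per-year assumption."""
    return 2.0 ** (years / doubling_period_years)

print(speedup(10))   # 1024.0, i.e. roughly the quoted factor of 1000
```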

Page 17

Computing the Viscous Layer (2001)

1) Streamwise-velocity streaks + Streamwise vortices

2) A regeneration cycle

3) A steady nonlinear wave

Page 18

Computing the Viscous Layer (2001)

1) Streamwise-velocity streaks + Streamwise vortices

2) A regeneration cycle

3) A steady nonlinear wave

Postprocessing gets things UNDERSTOOD

Page 19

“Postprocessing”

“Less” Respect

• Postprocessing = 2 × simulations
• “Extra” simulations, statistics, ... (and also graphics ...)
• 5-10 years and “everywhere”
• Storage (1 KB/point): TBs → PBs
• Access !!! (Sharing)
• Local or Distributed?
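The 1 KB/point rule of thumb quoted on this slide makes storage estimates easy; the grid dimensions below are purely illustrative, not the actual simulation grid:

```python
# Storage estimate at the slide's ~1 KB per grid point rule of thumb.
# The example grid sizes are illustrative, not from any specific run.

def storage_TB(nx: int, ny: int, nz: int, bytes_per_point: float = 1024) -> float:
    """Post-processing storage in TB for an nx*ny*nz grid at ~1 KB/point."""
    return nx * ny * nz * bytes_per_point / 1024**4

print(f"{storage_TB(2048, 512, 2048):.1f} TB")  # ~2e9 points already lands in TBs
```

At the grid sizes of the largest simulations on the earlier slides, the same rule pushes the totals toward the PB range, which is the slide's point.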

Page 20

Numerics and Turbulence 2000

[Diagram: data density vs. higher Reynolds number]

“Numerics” (Reτ=180 “SOLVED”, Reτ=590): eddies, buffer cycles, ...

“Experiments” (Reτ>2000): log layer, cascades, intermittency, LES

Page 21

Numerics and Turbulence 2010s

Overlap!

[Diagram: data density vs. higher Reynolds number]

“Numerics and Experiments” (Reτ=2000 “SOLVED”, reaching Reτ=5000): log layer, cascades, intermittency, LES, ...

Page 22

Summary

• Things that have been computed tend to be understood within 10-15 years

• Computer centres → Data centres

• In the next 10 years, numerics and requirements will converge for turbulence science

• Many questions of turbulence science (cascades, LES...) WILL then get “solved”

• Turbulence engineering can then begin seriously

Page 23

Computer Infrastructure

• Supercomputers: Marenostrum, Cesvima (the large simulations)

POSTPROCESSING

• Storage: 100 TB (easily accessible!!)

• Pre- and post-processing: 5-10% of the supercomputer (private!! 24/7)

Page 24

Supercomputing ETSIA 2004-07

Marenostrum & Cesvima

256-2100 CPUs

2-4 MCPUh/year

Page 25

Storage ETSIA

External (100 TB archive):
• PIC (Barcelona): 10 CPUs
• BSC (Barcelona): 256 CPUs

Internal (ETSIA): 40 TB (30 permanent + 10 scratch), 15 CPUs

Page 26

Post-processing ETSIA

“Computing Clusters”