Notes on Numerical Fluid Mechanics and Multidisciplinary Design (NNFM)

Volume 101

Editors
W. Schröder/Aachen
K. Fujii/Kanagawa
W. Haase/München
E.H. Hirschel/München
B. van Leer/Ann Arbor
M.A. Leschziner/London
M. Pandolfi/Torino
J. Periaux/Paris
A. Rizzi/Stockholm
B. Roux/Marseille
Y. Shokin/Novosibirsk


Computational Science and High Performance Computing III

The 3rd Russian-German Advanced Research Workshop, Novosibirsk, Russia, 23–27 July 2007

Egon Krause
Yurii I. Shokin
Michael Resch
Nina Shokina
(Editors)



Prof. Egon Krause
Aerodynamisches Institut
RWTH Aachen
Wuellnerstr. zw. 5 u. 7
52062 Aachen
Germany

Prof. Yurii I. Shokin
Academician of the Russian Academy of Sciences
Institute of Computational Technologies of SB RAS
Ac. Lavrentyev Ave. 6
630090 Novosibirsk
Russia

Prof. Michael Resch
High Performance Computing Center Stuttgart
University of Stuttgart
Nobelstrasse 19
70569 Stuttgart
Germany

Dr. Nina Shokina
High Performance Computing Center Stuttgart
University of Stuttgart
Nobelstrasse 19
70569 Stuttgart
Germany

ISBN 978-3-540-69008-5
e-ISBN 978-3-540-69010-8

DOI 10.1007/978-3-540-69010-8

Notes on Numerical Fluid Mechanics and Multidisciplinary Design ISSN 1612-2909

Library of Congress Control Number: 2008928447

© 2008 Springer-Verlag Berlin Heidelberg

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilm or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable for prosecution under the German Copyright Law.

The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

Typeset & Cover Design: Scientific Publishing Services Pvt. Ltd., Chennai, India.

Printed on acid-free paper


springer.com


NNFM Editor Addresses

Prof. Dr. Wolfgang Schröder (General Editor)
RWTH Aachen
Lehrstuhl für Strömungslehre und Aerodynamisches Institut
Wüllnerstr. zw. 5 u. 7
52062 Aachen
Germany
E-mail: [email protected]

Prof. Dr. Kozo Fujii
Space Transportation Research Division
The Institute of Space and Astronautical Science
3-1-1, Yoshinodai, Sagamihara
Kanagawa, 229-8510
Japan
E-mail: [email protected]

Dr. Werner Haase
Höhenkirchener Str. 19d
D-85662 Hohenbrunn
Germany
E-mail: [email protected]

Prof. Dr. Ernst Heinrich Hirschel
Herzog-Heinrich-Weg 6
D-85604 Zorneding
Germany
E-mail: [email protected]

Prof. Dr. Bram van Leer
Department of Aerospace Engineering
The University of Michigan
Ann Arbor, MI 48109-2140
USA
E-mail: [email protected]

Prof. Dr. Michael A. Leschziner
Imperial College of Science, Technology and Medicine
Aeronautics Department
Prince Consort Road
London SW7 2BY
U.K.
E-mail: [email protected]

Prof. Dr. Maurizio Pandolfi
Politecnico di Torino
Dipartimento di Ingegneria Aeronautica e Spaziale
Corso Duca degli Abruzzi, 24
I-10129 Torino
Italy
E-mail: [email protected]

Prof. Dr. Jacques Periaux
38, Boulevard de Reuilly
F-75012 Paris
France
E-mail: [email protected]

Prof. Dr. Arthur Rizzi
Department of Aeronautics
KTH Royal Institute of Technology
Teknikringen 8
S-10044 Stockholm
Sweden
E-mail: [email protected]

Dr. Bernard Roux
L3M – IMT La Jetée
Technopole de Chateau-Gombert
F-13451 Marseille Cedex 20
France
E-mail: [email protected]

Prof. Dr. Yurii I. Shokin
Siberian Branch of the Russian Academy of Sciences
Institute of Computational Technologies
Ac. Lavrentyeva Ave. 6
630090 Novosibirsk
Russia
E-mail: [email protected]


Preface

This volume is published as the proceedings of the third Russian-German Advanced Research Workshop on Computational Science and High Performance Computing in Novosibirsk, Russia, in July 2007.

The contributions to these proceedings were provided and edited by the authors and were chosen after careful selection and reviewing.

The workshop was organized by the High Performance Computing Center Stuttgart (Stuttgart, Germany) and the Institute of Computational Technologies SB RAS (Novosibirsk, Russia) in the framework of the activities of the German-Russian Center for Computational Technologies and High Performance Computing.

The event is held every two years and has already become a good tradition for German and Russian scientists. The first workshop took place in September 2003 in Novosibirsk, and the second workshop was hosted by Stuttgart in March 2005. Both workshops gave the participants the possibility of sharing and discussing the latest results and of developing further scientific contacts in the field of computational science and high performance computing.

The topics of the current workshop include software and hardware for high performance computation, numerical modelling in geophysics and computational fluid dynamics, mathematical modelling of tsunami waves, simulation of fuel cells and modern fibre optics devices, numerical modelling in cryptography problems and aeroacoustics, interval analysis, tools for Grid applications, research on service-oriented architecture (SOA), and telemedicine technologies.

The participation of representatives of major research organizations engaged in the solution of the most complex problems of mathematical modelling, the development of new algorithms, programs and key elements of information technologies, and the elaboration and implementation of software and hardware for high performance computing systems ensured the high level of competence of the workshop.

Among the German participants were the heads and leading specialists of the High Performance Computing Center Stuttgart (HLRS) (University of Stuttgart), NEC High Performance Computing Europe GmbH, the Section of Applied Mathematics (University of Freiburg i. Br.), the Institute of Aerodynamics (RWTH Aachen), the Regional Computing Center Erlangen (RRZE) (University of Erlangen-Nuremberg), and the Center for High Performance Computing (ZHR) (Dresden University of Technology).

Among the Russian participants were researchers of the institutes of the Siberian Branch of the Russian Academy of Sciences (SB RAS): the Institute of Computational Technologies SB RAS (Novosibirsk), the Institute of Computational Modelling SB RAS (Krasnoyarsk), the Lavrentyev Institute of Hydrodynamics SB RAS (Novosibirsk), the Institute of Computational Mathematics and Mathematical Geophysics SB RAS (Novosibirsk), and the Institute for System Dynamics and Control Theory SB RAS (Irkutsk). Scientists from the following universities also participated: the Siberian State University of Telecommunications and Computer Science (Novosibirsk), Altai State University (Barnaul), and Novosibirsk State Medical University (Novosibirsk).

This volume provides state-of-the-art scientific papers presenting the latest results of the leading German and Russian institutions.

We are very glad to see the living tradition of cooperation and the successful continuation of these highly professional international scientific meetings.

The editors would like to express their gratitude to all the participants of the workshop and wish them further successful and fruitful work.

Novosibirsk-Stuttgart,
December 2007

Egon Krause
Yurii Shokin
Michael Resch
Nina Shokina


Table of Contents

Computing Facility of the Institute of Computational Technologies SB RAS
Yu.I. Shokin, M.P. Fedoruk, D.L. Chubarov, A.V. Yurchenko

HPC in Industrial Environments
M.M. Resch, U. Küster

Parallel Realization of Mathematical Modelling of Electromagnetic Logging Processes Using VIKIZ Probe Complex
V.N. Eryomin, S. Haberhauer, O.V. Nechaev, N. Shokina, E.P. Shurina

Numerical Solution of Some Direct and Inverse Mathematical Problems for Tidal Flows
V.I. Agoshkov, L.P. Kamenschikov, E.D. Karepova, V.V. Shaidurov

Hardware Development and Impact on Numerical Algorithms
U. Küster, M.M. Resch

Mathematical Modeling in Application to Regional Tsunami Warning Systems Operations
Yu.I. Shokin, V.V. Babailov, S.A. Beisel, L.B. Chubarov, S.V. Eletsky, Z.I. Fedotova, V.K. Gusiakov

Parallel and Adaptive Simulation of Fuel Cells in 3d
R. Klöfkorn, D. Kröner, M. Ohlberger

Numerical Modeling of Some Free Turbulent Flows
G.G. Chernykh, A.G. Demenkov, A.V. Fomina, B.B. Ilyushin, V.A. Kostomakha, N.P. Moshkin, O.F. Voropayeva

Mathematical and Numerical Modelling of Fluid Flow in Elastic Tubes
E. Bänsch, O. Goncharova, A. Koop, D. Kröner

Parallel Numerical Modeling of Modern Fibre Optics Devices
L.Yu. Prokopyeva, Yu.I. Shokin, A.S. Lebedev, O.V. Shtyrina, M.P. Fedoruk

Zonal Large-Eddy Simulations and Aeroacoustics of High-Lift Airfoil Configurations
M. Meinke, D. König, Q. Zhang, W. Schröder


Experimental Statistical Attacks on Block and Stream Ciphers
S. Doroshenko, A. Fionov, A. Lubkin, V. Monarev, B. Ryabko, Yu.I. Shokin

On Performance and Accuracy of Lattice Boltzmann Approaches for Single Phase Flow in Porous Media: A Toy Became an Accepted Tool—How to Maintain Its Features Despite More and More Complex (Physical) Models and Changing Trends in High Performance Computing!?
T. Zeiser, J. Götz, M. Stürmer

Parameter Partition Methods for Optimal Numerical Solution of Interval Linear Systems
S.P. Shary

Comparative Analysis of the SPH and ISPH Methods
K.E. Afanasiev, R.S. Makarchuk, A.Yu. Popov

SEGL: A Problem Solving Environment for the Design and Execution of Complex Scientific Grid Applications
N. Currle-Linde, M.M. Resch, U. Küster

A Service-Oriented Architecture for Some Problems of Municipal Management (Example of the City of Irkutsk Municipal Administration)
I.V. Bychkov

Basic Tendencies of the Telemedicine Technologies Development in Siberian Region
A.V. Efremov, A.V. Karpov

Author Index


List of Contributors

K.E. Afanasiev
Kemerovo State University
Krasnaya str. 6
Kemerovo, 650043, Russia
[email protected]

V.I. Agoshkov
Institute of Numerical Mathematics RAS
Gubkina st. 8
Moscow, 119991, Russia
[email protected]

V.V. Babailov
Institute of Computational Technologies SB RAS
Lavrentiev Ave. 6
Novosibirsk, 630090, Russia
[email protected]

E. Bänsch
Institute of Applied Mathematics III, University of Erlangen-Nuremberg
Haberstr. 2
Erlangen, 91058, Germany
[email protected]

S.A. Beisel
Institute of Computational Technologies SB RAS
Lavrentiev Ave. 6
Novosibirsk, 630090, Russia
beisel [email protected]

I.V. Bychkov
Institute for System Dynamics and Control Theory SB RAS
Lermontov str. 134
Irkutsk, 664033, Russia
[email protected]

G.G. Chernykh
Institute of Computational Technologies SB RAS
Lavrentiev Ave. 6
Novosibirsk, 630090, Russia
[email protected]

D.L. Chubarov
Institute of Computational Technologies SB RAS
Lavrentiev Ave. 6
Novosibirsk, 630090, Russia
[email protected]

L.B. Chubarov
Institute of Computational Technologies SB RAS
Lavrentiev Ave. 6
Novosibirsk, 630090, Russia
[email protected]

N. Currle-Linde
High Performance Computing Center Stuttgart (HLRS), University of Stuttgart
Nobelstraße 19
Stuttgart, 70569, Germany
[email protected]

A.G. Demenkov
S.S. Kutateladze Institute of Thermophysics SB RAS
Lavrentiev Ave. 1
Novosibirsk, 630090, Russia
[email protected]

S.A. Doroshenko
Siberian State University of Telecommunications and Computer Science
Kirova str. 86
Novosibirsk, 630102, Russia
[email protected]

S.V. Eletsky
Institute of Computational Technologies SB RAS
Lavrentiev Ave. 6
Novosibirsk, 630090, Russia
[email protected]

A.V. Efremov
Novosibirsk State Medical University
Krasny Ave. 52
Novosibirsk, 630099, Russia
[email protected]

V.N. Eryomin
Scientific Production Enterprise of Geophysical Equipment "Looch"
Geologicheskaya Str. 49
Novosibirsk, 630010, Russia
[email protected]

M.P. Fedoruk
Institute of Computational Technologies SB RAS
Lavrentiev Ave. 6
Novosibirsk, 630090, Russia
[email protected]

Z.I. Fedotova
Institute of Computational Technologies SB RAS
Lavrentiev Ave. 6
Novosibirsk, 630090, Russia
[email protected]

A.N. Fionov
Siberian State University of Telecommunications and Computer Science
Kirova str. 86
Novosibirsk, 630102, Russia
[email protected]

A.V. Fomina
Kuzbass State Academy of Pedagogy
Pionerskii Ave. 13
Novokuznetsk, 654066, Russia
[email protected]

O.N. Goncharova
Altai State University
pr. Lenina 61
Barnaul, 656049, Russia
M.A. Lavrentiev Institute of Hydrodynamics SB RAS
Lavrentiev Ave. 15
Novosibirsk, 630090, Russia
[email protected]

J. Götz
Chair for System Simulation, University of Erlangen-Nuremberg
Cauerstraße 6
Erlangen, 91058, Germany
[email protected]

V.K. Gusyakov
Institute of Computational Mathematics and Mathematical Geophysics SB RAS
Lavrentiev Ave. 6
Novosibirsk, 630090, Russia
[email protected]

S. Haberhauer
NEC - High Performance Computing Europe GmbH
Nobelstraße 19
Stuttgart, 70569, Germany
[email protected]

B.B. Ilyushin
S.S. Kutateladze Institute of Thermophysics SB RAS
Lavrentiev Ave. 1
Novosibirsk, 630090, Russia
[email protected]


L.P. Kamenshchikov
Institute of Computational Modelling SB RAS
Academgorodok
Krasnoyarsk, 660036, Russia
[email protected]

E.D. Karepova
Institute of Computational Modelling SB RAS
Academgorodok
Krasnoyarsk, 660036, Russia
[email protected]

A.V. Karpov
Novosibirsk State Medical University
Krasny Ave. 52
Novosibirsk, 630099, Russia
Institute of Computational Technologies SB RAS
Lavrentiev Ave. 6
Novosibirsk, 630090, Russia
[email protected]

R. Klöfkorn
Section of Applied Mathematics, University of Freiburg i. Br.
Hermann-Herder-Straße 10
Freiburg i. Br., 79104, Germany
[email protected]

A. Koop
Sternenberg 19
Wuppertal, 42279, Germany
[email protected]

D. König
Institute of Aerodynamics, RWTH Aachen
Wuellnerstraße zw. 5 u. 7
Aachen, 52062, Germany
[email protected]

V.A. Kostomakha
M.A. Lavrentiev Institute of Hydrodynamics SB RAS
Lavrentiev Ave. 15
Novosibirsk, 630090, Russia
[email protected]

D. Kröner
Section of Applied Mathematics, University of Freiburg i. Br.
Hermann-Herder-Straße 10
Freiburg i. Br., 79104, Germany
[email protected]

U. Küster
High Performance Computing Center Stuttgart (HLRS), University of Stuttgart
Nobelstraße 19
Stuttgart, 70569, Germany
[email protected]

A.S. Lebedev
Institute of Computational Technologies SB RAS
Lavrentiev Ave. 6
Novosibirsk, 630090, Russia
[email protected]

A.M. Lubkin
Siberian State University of Telecommunications and Computer Science
Kirova str. 86
Novosibirsk, 630102, Russia
[email protected]

R.S. Makarchuk
Kemerovo State University
Krasnaya str. 6
Kemerovo, 650043, Russia
[email protected]

M. Meinke
Institute of Aerodynamics, RWTH Aachen
Wuellnerstraße zw. 5 u. 7
Aachen, 52062, Germany
[email protected]

V.A. Monarev
Institute of Computational Technologies SB RAS
Lavrentiev Ave. 6
Novosibirsk, 630090, Russia
[email protected]

N.P. Moshkin
Institute of Computational Technologies SB RAS
Lavrentiev Ave. 6
Novosibirsk, 630090, Russia
[email protected]

O.V. Nechaev
Trofimuk Institute of Petroleum Geology and Geophysics SB RAS
Koptyug Ave. 3
Novosibirsk, 630090, Russia
[email protected]

M. Ohlberger
Institute for Numerical and Applied Mathematics, University of Münster
Einsteinstraße 62
Münster, 48149, Germany
[email protected]

A.Yu. Popov
Kemerovo State University
Krasnaya str. 6
Kemerovo, 650043, Russia
a [email protected]

L.Yu. Prokopyeva
Institute of Computational Technologies SB RAS
Lavrentiev Ave. 6
Novosibirsk, 630090, Russia
[email protected]

M. Resch
High Performance Computing Center Stuttgart (HLRS), University of Stuttgart
Nobelstraße 19
Stuttgart, 70569, Germany
[email protected]

B.Ya. Ryabko
Institute of Computational Technologies SB RAS
Lavrentiev Ave. 6
Novosibirsk, 630090, Russia
[email protected]

W. Schröder
Institute of Aerodynamics, RWTH Aachen
Wuellnerstraße zw. 5 u. 7
Aachen, 52062, Germany
[email protected]

V.V. Shaidurov
Institute of Computational Modelling SB RAS
Academgorodok
Krasnoyarsk, 660036, Russia
[email protected]

S.P. Shary
Institute of Computational Technologies SB RAS
Lavrentiev Ave. 6
Novosibirsk, 630090, Russia
[email protected]

Yu.I. Shokin
Institute of Computational Technologies SB RAS
Lavrentiev Ave. 6
Novosibirsk, 630090, Russia
[email protected]

N.Yu. Shokina
High Performance Computing Center Stuttgart (HLRS), University of Stuttgart
Nobelstraße 19
Stuttgart, 70569, Germany
[email protected]

M. Stürmer
Chair for System Simulation, University of Erlangen-Nuremberg
Cauerstraße 6
Erlangen, 91058, Germany
[email protected]

O.V. Shtyrina
Institute of Computational Technologies SB RAS
Lavrentiev Ave. 6
Novosibirsk, 630090, Russia
[email protected]

E.P. Shurina
Institute of Computational Technologies SB RAS
Lavrentiev Ave. 6
Novosibirsk, 630090, Russia
Novosibirsk State Technical University
K. Marx Ave. 20
Novosibirsk, 630092, Russia
[email protected]

O.F. Voropayeva
Institute of Computational Technologies SB RAS
Lavrentiev Ave. 6
Novosibirsk, 630090, Russia
[email protected]

A.V. Yurchenko
Institute of Computational Technologies SB RAS
Lavrentiev Ave. 6
Novosibirsk, 630090, Russia
[email protected]

T. Zeiser
Regional Computing Center Erlangen, University of Erlangen-Nuremberg
Martensstraße 1
Erlangen, 91058, Germany
[email protected]

Q. Zhang
Institute of Aerodynamics, RWTH Aachen
Wuellnerstraße zw. 5 u. 7
Aachen, 52062, Germany
[email protected]

Computing Facility of the Institute of Computational Technologies SB RAS

Yu.I. Shokin*, M.P. Fedoruk**, D.L. Chubarov**, and A.V. Yurchenko**

Institute of Computational Technologies SB RAS, Lavrentiev Ave. 6, 630090 Novosibirsk, Russia

{shokin,mife,dchubarov,yurchenko}@ict.nsc.ru

1 Introduction

The performance gap between supercomputers and personal workstations is growing, yet supercomputing resources are scarce and carefully managed. Gaining access to a leading national supercomputing centre requires a considerable amount of work, comparable to that needed to get a paper published in a leading scientific journal. This raises the importance of centralised computing resources at the level of a single organization.

We can outline several principles guiding the development of high performance computing on the scale of an academic organization.

Ease of access. Every user within the organization should get a level of access that is as close as possible to their needs. This is always a compromise between the needs of different users.

Training. Since most of the users within the organization are not high performance computing professionals, they need information on the existing technologies and the latest trends in the development of high performance computing worldwide.

Development facilities. A specific feature of an academic organization is that a large proportion of the codes running on its high performance computing systems is developed within the organization.

Applications. The development of computing systems should follow the demands coming from applications.

In the rest of this paper we show how these principles were implemented within the integrated data processing environment of the Institute of Computational Technologies of the Siberian Branch of the Russian Academy of Sciences (ICT SB RAS).

* The work of Yu.I. Shokin is supported by grants from the Russian Foundation for Basic Research, RFBR 06-07-03023 and RFBR 06-07-01820.

** The work of M.P. Fedoruk, D.L. Chubarov and A.V. Yurchenko is supported by the Federal Agency for Education within the framework of Research Programme 1.15.07.


2 Computers

The Institute supports an ongoing effort to help the researchers within the institute with the use of high performance computing for the solution of demanding problems arising in their practice. In particular, the Institute is strengthening its collaboration with high performance computing centers within Russia and abroad. This not only provides the necessary computing resources but also facilitates the exchange of knowledge and experience on parallel programming and on the efficient use of high performance computers for solving the demanding problems of today.

Projects such as HPC-Europa (http://www.hpc-europa.org) make the access barriers to supercomputing significantly lower for newcomers. At the moment there is no analogue of HPC-Europa in Russia. In the future, Grid computing holds the promise of a unification of access policies across different supercomputing centers.

In the meanwhile, the Institute collaborates with the Joint Supercomputing Center of the Russian Academy of Sciences (JSC), the Siberian Supercomputing Center (SSCC), the High Performance Computing Center Stuttgart (HLRS), and with university supercomputing centers such as the computing center of South Ural State University. National and international supercomputing centers provide resources sufficient to satisfy the needs of most users, and a high degree of centralization of computing equipment helps to reduce maintenance and service costs. On the other hand, gaining access to supercomputing centers is not always easy; it sometimes involves a significant time delay.

There are several reasons for fostering collaboration with specialised computing centers while at the same time maintaining and developing computing resources within the Institute. First, there are cases when immediate interactive access to the computing resources is particularly important; such cases include the development of new codes, debugging, and performance analysis. Second, communication between the users and the maintainers of the computers provides for more flexible usage policies and a faster dissemination of knowledge and experience.

The development of the computing facility at ICT SB RAS is governed by the need to provide the researchers with an opportunity to develop new applications capable of solving large scale problems and to debug and improve the performance of their codes. At the same time it is assumed that, at a certain point of development, the major part of the computations will be performed using the resources of remote supercomputers; therefore the computing facility strives to provide computers of different architectures for the evaluation of different computing techniques.

The first compute cluster to support parallel processing of computationally intensive jobs was installed at the Institute in 1999-2000. Several users within the institute had access to the system's parallel processing environment. In 2004 a new cluster of dual processor nodes was installed; the first system was disassembled shortly before that. A new compute cluster and a preprocessing server were installed in 2007. Figure 1 shows the historical development of the peak performance of the computing systems at ICT SB RAS.

Fig. 1. The development of the peak performance of computing systems at ICT SB RAS. In 2004 a new 4 node cluster ARBYTE/8 was installed. In 2007 two new systems were procured: a preprocessing server on the Tyan VX 50 platform and a 7 node cluster TYMAN/28.

The compute systems have different architectures. In the rest of this section we present a brief overview of the available systems; a summary of their characteristics is presented in Table 1.

Table 1. Characteristics of the systems at ICT SB RAS

                               Xeon Linux Cluster   Preprocessing server   Opteron Linux cluster
Installation year              2004                 2007                   2007
Memory architecture            SMP cluster          ccNUMA                 ccNUMA cluster
Platform                       Intel SE7501         Tyan VX50              Tyan GT24
Number of compute nodes        4                    1                      7
Type of node interconnect      GigE                 N/A                    GigE
Total memory                   10 GB                32 GB                  28 GB
Number of CPU cores            8                    16                     28
Processor type                 Intel Xeon           Opteron 880            Opteron 280
Clock frequency                3.06 GHz             2.4 GHz                2.4 GHz
FP performance (GFlops, peak)  48.96                76.8                   134.4
OS                             RH Linux 9.0         SuSE Linux 10.0        SuSE Linux 10.0
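The peak floating-point figures in Table 1 follow directly from the core counts and clock rates if one assumes two double-precision floating-point operations per core per cycle; that factor is our inference from the table values, not a statement made in the paper. A minimal sketch:

```c
/* Minimal sketch (ours, not from the paper): reproducing the peak GFlops
 * column of Table 1 as cores x clock (GHz) x FLOPs per cycle. */
#include <stdio.h>

int main(void) {
    const char  *name[]  = {"Xeon Linux Cluster", "Preprocessing server",
                            "Opteron Linux cluster"};
    const int    cores[] = {8, 16, 28};
    const double ghz[]   = {3.06, 2.4, 2.4};
    const double flops_per_cycle = 2.0;  /* assumed: 2 DP FLOPs/core/cycle */

    for (int i = 0; i < 3; ++i)
        printf("%-22s peak = %6.2f GFlops\n",
               name[i], cores[i] * ghz[i] * flops_per_cycle);
    return 0;
}
```

Running it reproduces the 48.96, 76.8 and 134.4 GFlops entries of the table.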

2.1 Xeon Linux Cluster

The HPL LINPACK benchmark [1] measures the floating point performance of a system on the problem of solving a system of linear equations with a dense matrix.

We use the HPL implementation of the benchmark (HPL - A Portable Implementation of the High-Performance Linpack Benchmark for Distributed-Memory Computers, by A. Petitet, R.C. Whaley, J. Dongarra and A. Cleary, 2004). On the Linpack test the system clocked slightly less than 32 GFlops, which gives an efficiency of 65%.


[Figure: two 3D surface plots of Linpack performance (GFlops) as a function of problem size N and block size NB; panels "cl: 4 CPUs" and "cl: 8 CPUs".]

Fig. 2. Linpack benchmark on the Xeon Linux cluster. The graph on the left shows performance scaling using 4 CPUs. The graph on the right shows performance scaling on 8 CPUs. Software: HPL, Intel C compiler 9.1, Intel MKL.

Figure 2 presents the dependency of the Linpack performance on the size of the input matrix (N) and the size of the basic block of the algorithm (NB).

2.2 Preprocessing Server

The preprocessing server is an interesting system. It is based on the Tyan VX 50 platform, which is assembled from two quad-socket motherboards connected via HyperTransport links using a PCI-Express interface. The four processors in the middle are each connected with three other processors, while the four processors at the corners are connected with two processors only. The system is a ccNUMA machine with 8 nodes; each node has two CPU cores, making 16 CPU cores altogether.
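One plausible reading of this link pattern (our illustration, not a figure from the paper) is a 2x4 grid of sockets: the four inner sockets then have three HyperTransport neighbours each and the four corner sockets have two, which the sketch below verifies:

```c
/* Illustration (our reading of the topology, not code from the paper):
 * the 8 sockets of the Tyan VX 50 viewed as a 2x4 HyperTransport grid.
 * Corner sockets end up with 2 links, middle sockets with 3. */
#include <stdio.h>

#define ROWS 2
#define COLS 4

int main(void) {
    for (int r = 0; r < ROWS; ++r)
        for (int c = 0; c < COLS; ++c) {
            int links = 0;
            if (r > 0)        ++links;   /* neighbour above        */
            if (r < ROWS - 1) ++links;   /* neighbour below        */
            if (c > 0)        ++links;   /* neighbour to the left  */
            if (c < COLS - 1) ++links;   /* neighbour to the right */
            printf("socket %d: %d HyperTransport links\n",
                   r * COLS + c, links);
        }
    return 0;
}
```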

The preprocessing server can also be used for computations. We compare the performance of this system to the performance of the Xeon Linux cluster. The Linpack results are shown in Figure 3. The aggregate performance of the system is around 62 GFlops, which gives an efficiency at the level of 80%.

It is important to note that the benchmark results are preliminary and may not show the full potential of the systems. There is room for improvement at the operating system level as well as at the level of the Linpack implementation.
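As a quick cross-check (a sketch of ours, not part of the paper), the efficiency figures quoted in this section are simply the measured Linpack rates divided by the theoretical peaks from Table 1:

```c
/* Sketch (not from the paper): Linpack efficiency = measured / peak,
 * using the measured rates quoted in the text and the peaks of Table 1. */
#include <stdio.h>

int main(void) {
    printf("Xeon Linux cluster:   %.1f%%\n", 100.0 * 32.0 / 48.96); /* ~65% */
    printf("Preprocessing server: %.1f%%\n", 100.0 * 62.0 / 76.8);  /* ~81% */
    return 0;
}
```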

[Figure: two 3D surface plots of Linpack performance (GFlops) as a function of problem size N and block size NB; panels "mist: 4 cores" and "mist: 8 cores".]

Fig. 3. Linpack benchmark on the preprocessing server. The graph on the left shows performance scaling using 4 CPU cores. The graph on the right shows performance scaling on 8 CPU cores. Software: HPL, SunStudio 12 EA C compiler, Goto BLAS.


2.3 Opteron Linux Cluster

The Opteron Linux cluster is a cluster of nodes, each containing 4 CPU cores in a two-node ccNUMA configuration. The cluster is currently using Gigabit Ethernet as the main transport interface connecting the nodes. It would be interesting to explore the opportunities of bonding Ethernet controllers to provide higher bandwidth between the nodes.

3 Applications

In this section we briefly describe some of the applications of high performance computing developed using the computing facility of the Institute.

One advantage of a centralized computing facility compared to workstations is a higher degree of reliability. The computing systems are running without interruption in a climate controlled environment with a higher reliability of the power supply. This also extends the lifespan of individual hardware components.

These reliability considerations meant that the Xeon Linux cluster was initially used as a reliable computing resource for development and production runs of sequential codes.

Computing the propagation of laser pulses in high performance optical fibers with nonlinear properties requires a significant amount of compute time. An application developed by a group in the institute compares the level of transmission errors for different encodings of data [2,8]. This requires running the simulation for many different input bit patterns and can consume a significant amount of CPU time.

At the same time, researchers in the Institute using three-dimensional CFD models realised that the resources of workstations are not sufficient for studying complex phenomena such as the explosive expansion of the airbag in automobiles [3]. The code was developed and parallelized in collaboration with HLRS and was run on the supercomputers available in the center. At the moment the numerical model is maintained at ICT SB RAS.

There was an interesting experience with the code for the optimization of the shape of runner blades in the turbines of hydroelectric power stations [4]. The code uses the genetic algorithm paradigm for the solution of a multicriterial optimization problem. Genetic algorithms are well suited for parallelization, since the flow for each individual in a generation can be computed independently. With the use of 8 processors of the Xeon Linux cluster, the runtime of one optimization experiment was reduced from 3-5 days to 17-22 hours.
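The parallelization pattern described here, evaluating every individual of a generation independently, is the classic way to parallelize a genetic algorithm. The sketch below is our illustration of that pattern, not the code of [4]; POP_SIZE, N_GENES and the toy fitness function are hypothetical stand-ins for the real blade-shape CFD evaluation. It distributes one slice of the population to each MPI process and gathers the fitness values back on all ranks:

```c
/* Minimal sketch (not from the paper): parallel fitness evaluation of a
 * genetic algorithm, the pattern behind the turbine-blade optimization. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define POP_SIZE 64  /* hypothetical population size */
#define N_GENES   8  /* hypothetical number of shape parameters */

/* Toy stand-in for the expensive CFD flow computation around one blade. */
static double evaluate_individual(const double *genes) {
    double f = 0.0;
    for (int j = 0; j < N_GENES; ++j)
        f += genes[j] * genes[j];
    return f;
}

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* Population initialised identically on every rank (same seed). */
    static double pop[POP_SIZE][N_GENES];
    srand(42);
    for (int i = 0; i < POP_SIZE; ++i)
        for (int j = 0; j < N_GENES; ++j)
            pop[i][j] = (double)rand() / RAND_MAX;

    int chunk = POP_SIZE / nprocs;  /* assumes nprocs divides POP_SIZE */
    double local[POP_SIZE], fitness[POP_SIZE];

    /* Each rank scores its own slice of the generation independently. */
    for (int i = 0; i < chunk; ++i)
        local[i] = evaluate_individual(pop[rank * chunk + i]);

    /* Collect all fitness values on every rank for selection/crossover. */
    MPI_Allgather(local, chunk, MPI_DOUBLE,
                  fitness, chunk, MPI_DOUBLE, MPI_COMM_WORLD);

    if (rank == 0)
        printf("generation evaluated, f[0] = %g\n", fitness[0]);

    MPI_Finalize();
    return 0;
}
```

Since the evaluations are independent, the speedup stays close to the process count as long as the flow solver dominates the runtime, which is consistent with the reduction from 3-5 days to 17-22 hours on 8 processors reported above.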

Another application of high performance computing in the Institute is in the area of cryptography and cryptanalysis. The statistical tests developed in the Institute require significant amounts of CPU time and memory [5,6]. This type of application would not be possible without access to high performance computing resources. This is a well established area of research, and the researchers mostly use the resources of national computing centers and supercomputers at universities.

The research in photonics, and in particular in the modelling of nanostructures in ultra high speed optical fibers, requires significant computational resources and the parallelization of the algorithms. Good scalability was observed for the solution of the Maxwell equations in 2D for modelling nanostructures in optical fibers and other nanomaterials used in photonics research [7].

4 Training

One of the obstacles on the way to a wider use of high performance computing systems is the lack of experience in parallel programming and, in general, the lack of experience in the use of advanced computational technologies. Increasing the level of qualification in the area of parallel programming and high performance computing is one of the goals of a Russian-German Summer School that is organized at the Institute with the help of the High Performance Computing Center Stuttgart (HLRS). The summer school is oriented primarily towards the young researchers working in the institutes of the Siberian Branch of the Russian Academy of Sciences.

In collaboration with HLRS, the Institute has established a Russian-German Center for Computational Technologies and High Performance Computing, which plays an important role in fostering links between researchers in Germany and in Russia and in providing access to computational resources for applications of significant importance.

5 Conclusion

The purpose of this paper is to outline the reasons behind the development of a computational facility within the Institute that can serve as an intermediate level between the facilities of supercomputing centers and personal workstations.

A wide range of activities is involved in making this vision a reality, including procuring the equipment, training the users and providing the necessary consultations. We have presented the results of this work that were achieved within a two-year timeframe.

References

1. Dongarra, J.J.: Performance of various computers using standard linear equations software. Tech. report of the Computer Science Department, University of Tennessee, CS-89-85 (1993)

2. Shtyrina, O.V., Turitsyn, S.K., Fedoruk, M.P.: Kvant. Elektr. 35, 169–174 (2005) (in Russian)

3. Rychkov, A.D., Shokina, N., Bönisch, T., Resch, M.M., Küster, U.: Parallel numerical modelling of gas-dynamic processes in airbag combustion chamber. In: Krause, E., Shokin, Yu.I., Resch, M., Shokina, N. (eds.) Computational Science and High Performance Computing. 2nd Russian-German Advanced Research Workshop, Stuttgart, Germany, March 14-16, 2005. Notes on Numerical Fluid Mechanics and Multidisciplinary Design (NNFM), vol. 91, pp. 29–39. Springer, Heidelberg (2005)

4. Lobareva, I.F., Cherny, S.G., Chirkov, D.V., Skorospelov, V.A., Turuk, P.A.: Comput. Technologies 11, 63–76 (2006) (in Russian)

5. Ryabko, B.Y., Monarev, V.A., Shokin, Yu.I.: Probl. Inform. Transm. 41, 385–394 (2005)

6. Ryabko, B.Y., Monarev, V.A.: J. Statist. Plann. Inference 133, 95–110 (2005)

7. Prokopyeva, L.Y., Shokin, Yu.I., Lebedev, A.S., Fedoruk, M.P.: Parallel numerical modeling of modern fiber optics devices. In: Krause, E., Shokin, Yu.I., Resch, M., Shokina, N. (eds.) Computational Science & High Performance Computing III. NNFM, vol. 101. Springer, Heidelberg (2007)

8. Shapiro, E.G., Fedoruk, M.P., Turitsyn, S.K.: J. Opt. Comm. 27, 216–218 (2006)


HPC in Industrial Environments

M.M. Resch and U. Küster

High Performance Computing Center Stuttgart (HLRS), University of Stuttgart, Nobelstraße 19, 70569 Stuttgart, Germany

{resch,kuester}@hlrs.de

Abstract. HPC has undergone major changes in recent years [1]. A rapid change in hardware technology with an increase in performance has made the technology interesting for a bigger market than before. The standardization that was used in clusters for years has made HPC an option for industry. With the advent of new systems that start to deviate from standard processors, the landscape changes again. In this short paper we set out to describe the main changes over the last years. We further try to come up with the most important implications for the industrial use of HPC as we see it in a special collaboration between the High Performance Computing Center Stuttgart (HLRS) and the industrial users in the region of Stuttgart.

1 Introduction

HPC has undergone major changes over the last years. One can briefly describe the major architectural changes of the last two decades in HPC as:

• From vector to parallel: Vector processors were long the fastest available, with a huge gap separating them from standard microprocessors. This gap has been closed, and even though vector processors still increase their speed, single processor performance is no longer the driving force in HPC performance. Hence there is a desire to design parallel architectures, which most recently has resulted in parallelism being exploited at the processor level with multi-core and many-core architectures [2].

• From customized to commodity: As commodity processors caught up in speed, it became simply a question of cost to move from specialized components to standard parts.

• From single system to clusters: As systems were put together from standard parts, every system could look different, using other parts. This came at the expense of losing the single system look and feel of traditional supercomputers.

• From standards back to specialized systems: Most recently, as standard processors have run into problems of speed and heat dissipation, new and specialized systems are being developed again, bringing architectural development for HPC experts full circle.

At the same time we have experienced a dramatic increase both in the level of peak performance and in the size of main memory of large scale systems. As a consequence we can solve larger problems, and typically we can solve them faster. It should not be ignored that the increasing theoretical speed is not matched by a similarly increasing level of sustained performance. Still, today industrial users can lay their hands on systems that are on average ten times faster than systems five years ago. This is a chance on the one hand, but it brings new questions on the other hand. In this paper we set out to discuss some of these questions and come up with some answers. The problems that come with massively parallel systems, and specifically with petascale systems, are beyond the scope of industrial usage and are discussed elsewhere [3,4].

2 Dual Use: Academia and Industry

The concept of the integration of computational resources into seamless environments was introduced under the name "Grid" in 1999 by Ian Foster and others [5]. The basic idea with respect to high performance computing is to make all computational resources available to all potential users. Through the concept of middleware, the complexities that had inhibited the usage of these systems by non-experts were supposed to be hidden. Ease of use should lead to a better usage of systems and better access to systems.

An immediate idea that follows from these concepts is to bring high performance computers out of the academic niche in which they were mainly used. There were certainly a number of large companies running HPC systems, but the Grid was supposed to allow creating a pool of resources from all fields (academic and industrial) and making them available to everyone (academia and industry) in a simple way. The idea could be described as the dual use of HPC resources, and it became rather popular with funding agencies.

2.1 Advantages

The main advantages of such a dual use are promoted by funding agencies. The discussion currently goes two ways.

• Reduction of costs: The argument goes that when industry can make use of academic computing resources, funding costs for academia will go down. Industrial usage can be billed, and the incoming funds can be used to at least partially pay the costs of the HPC system. This reduces costs for funding agencies.

• Increased usage: The average usage of large scale systems is in the order of 80-85%. This is due to the fact that the scheduling of resources cannot achieve 100% usage if resources are kept free for large scale simulations. The argument goes that industry could use the spare cycles. This could be done specifically during vacation time, when scientists reduce their usage of the systems.

Discussion of Advantages

These assumed advantages have to be taken with a grain of salt. Costs for HPC can potentially be reduced for academia if industry pays for the usage of systems.


At the same time, however, industry takes away CPU cycles from academia, increasing the competition for scarce resources. So if research agencies want to supply industry with access to HPC systems, they either have to limit access to these same resources for researchers or invest additional money to make sure there is no reduction in usage for research. The only financial argument left is a synergistic effect that would allow lower prices to be achieved if academia and industry merge their market power to buy larger systems together.

The improved usage of resources during vacation time quickly turns out to be too optimistic a view, as companies - at least in Europe - tend to schedule their vacation time in accordance with public education. As a result, industrial users are on vacation when scientists are on vacation; hence the industrial usage of shared resources tends to shrink at the same time as research usage shrinks. A better resource usage through anti-cyclic industrial usage turns out not to be achievable. Some argue that by reducing prices for industry during vacation time one might encourage more industrial usage when resources are available. However, here one has to compare costs: the costs for CPU time are in the range of thousands of Euro, so a price reduction could help companies to save thousands of Euro. On the other side, companies would have to adapt their working schedules to the vacation time of researchers and would have to make sure that their staff - typically with small children - stay at home. Evidence shows that this is not happening.

Nevertheless there is a potential in the dual use of HPC resources that goes beyond the high hopes of funding agencies. In 1995 the High Performance Computing Center Stuttgart set up such a cooperation in order to explore the potential of such dual use, which is described in the following section.

2.2 A Public Private Partnership Approach

The University of Stuttgart had been running HPC systems for some 15 years when, in the late 1980s, it decided to collaborate with Porsche in HPC operations. This resulted in a shared investment in vector supercomputers for several years. The collaboration turned out to be fruitful, and in 1995 a public private partnership (called hww) was set up that also included Daimler Benz. The main expectations were to:

• Leverage market power: Combining the purchasing power of industry and academia helped to achieve better levels of price/performance for both sides.

• Share operational costs: Creating a group of operational experts helped to bring down the staff costs for running systems. Today a group of roughly 10 staff members is operating 7 HPC systems.

• Optimize system usage: Industrial usage typically comes in bursts, when certain stages in product development require a lot of simulation. Industry then has a need for immediate availability of resources. In academia most simulations are part of long term research. It turned out that a good model could be found to intertwine these two different modes.


These expectations were met through a close collaboration of all partners in the first years. However, there are a number of problems that have to be addressed. Some of them can be considered startup problems; some of them are permanent problems that show up continuously and require a continuous effort from all partners involved.

Problems

In this paper we will not discuss the legal and organizational problems of setting up a public-private partnership in Germany; these issues have to be resolved. From a scientific point of view, the key problem is an understanding of economic processes and economic thinking. These issues include the personal attitudes of the partners and have to be solved on an individual basis, considering the national legal framework.

We will also ignore economic problems like accounting and billing, which require a good understanding of the total cost of ownership of hardware and software resources and of their availability. Again, this is an issue that requires some understanding of economic processes and depends heavily on the internal handling of the financial affairs of public organizations.

But right from the start some technical problems presented themselves. The most pressing ones were:

• Security related issues: This includes the whole complex of trust and reliability from the point of view of industrial users. While for academic users data protection and availability of resources are of less concern, it is vital for industrial users that their most sensitive data be protected and that no information whatsoever leaks to other users. Furthermore, the permanent availability of resources is a must in order to meet internal and external deadlines.

• Data and communication: This includes both the question of connectivity and that of handling input and output data. Typically, network connectivity between academia and industry is low. Most research networks are not open to industry, and accounting mechanisms for research networks are often missing. So even connecting to a public institution may be difficult for industry. The amount of data to be transferred is another big issue, as with increasing problem size the size of the output data can get prohibitively high. Both issues have been helped by the increasing speed of networks and a tendency of research networks to open up to commercial users.

With all these issues the Grid [5] was quite a helpful tool to drive development. Specifically, the problems of security were extensively addressed in a number of national and European projects.

A number of permanent problems remains, and some new problems have shown up. These new problems are mainly related to operational procedures in industry. While 10 years ago industry in the region of Stuttgart was mainly using in-house codes, we have seen a dramatic shift towards the nearly exclusive usage of independent software vendor codes. This shift has put licensing issues and licensing costs at the center of the discussion. What is requested by industry is:


• Ease of use: Industrial users are used to preparing their simulation jobs in a visual environment. When accessing remote HPC platforms, they have to use scripts or work at the command line level to submit and manage jobs.

• Flexibility: Industrial users would like to choose resources in a flexible way, picking the best resources for a given simulation.

3 Technologies to Help Industry

As described before, the Grid claims to be able to provide seamless access to any kind of resource. So far, however, the solutions provided have a limited scope. Two new technologies, presented here, were hence developed at HLRS in order to support industry.

3.1 Access to Resources

Access to resources is a critical task. Industrial simulation has long moved from a batch type mode to a more interactive style. Although typical simulations still require several hours of compute time, users expect to be able to easily choose the right system and then manage the running job. HLRS has developed an environment (SEGL) that allows the definition not only of simple job executions but of sets of jobs [6]. These jobs can be run on any chosen architecture, and they can be monitored and controlled by a non-expert. First results in an engineering environment for combustion optimization are promising [7].

3.2 Visualization

One of the key problems in the industrial usage of HPC resources is the interpretation of the resulting data. Many of these problems are similar to academic ones. The amount of data is large, and three-dimensional time-dependent simulations are a specific challenge for the human eye. For industry we see an increasing need to fully integrate visualization into the simulation process [8]. At the same time, industrial simulation always goes hand in hand with experiments. In order to make full use of both methods, the field of augmented reality has become important [9]. HLRS has developed a tool called COVISE (COllaborative VISualization Environment) that supports both online visualization and the use of augmented reality in an industrial environment [10]. In the future HLRS will integrate SEGL and COVISE to make the usage of HPC resources even easier.

4 Summary

HLRS has set up a public-private partnership, mainly with the automotive industry, to share HPC resources. Over time a number of problems have been solved. It has turned out that both sides can benefit from such a collaboration. However, the use of public resources brings some new problems to academia which have to be dealt with. On the other hand, the simplification of the usage of public resources requires new and improved techniques to support the average, typically non-experienced, industrial user. There is still a lot of work ahead before the usage of HPC resources can become a standard procedure in industry, specifically in small and medium sized enterprises.

References

1. Resch, M., Küster, U.: Investigating the Impact of Architectural and Programming Issues on Sustained Petaflop Performance. In: Bader, D. (ed.) Petascale Computing: Algorithms and Applications, Computational Science Series. Chapman & Hall/CRC Press, Taylor and Francis Group (2007)

2. Asanović, K., Bodik, R., Catanzaro, B., Gebis, J., Husbands, P., Keutzer, K., Patterson, D., Plishker, W., Shalf, J., Williams, S., Yelick, K.: The Landscape of Parallel Computing Research: A View from Berkeley. Electrical Engineering and Computer Sciences, University of California at Berkeley, Technical Report No. UCB/EECS-2006-183, December 18, 2006, http://www.eecs.berkeley.edu/Pubs/TechRpts/2006/EECS-2006-183.html

3. Lindner, P., Gabriel, E., Resch, M.: Performance prediction based resource selection in Grid environments. In: Perrott, R., Chapman, B.M., Subhlok, J., de Mello, R.F., Yang, L.T. (eds.) HPCC 2007. LNCS, vol. 4782. Springer, Heidelberg (2007)

4. Resch, M., Küster, U.: PIK Praxis der Informationsverarbeitung und Kommunikation, vol. 29, pp. 214–220 (2006)

5. Foster, I., Kesselman, C.: The Grid: Blueprint for a New Computing Infrastructure. Morgan Kaufmann, San Francisco (1999)

6. Currle-Linde, N., Resch, M.: GriCoL: A Language to Bridge the Gap between Applications and Grid Services. In: Volkert, J., Fahringer, T., Kranzlmüller, D., Schreiner, W. (eds.) 2nd Austrian Grid Symposium, Österreichische Computer Gesellschaft (2007)

7. Resch, M., Currle-Linde, N., Küster, U., Risio, B.: WSEAS Trans. Inf. Science Applications 3/4, 445–452 (2007)

8. Resch, M., Haas, P., Küster, U.: Computer Environments - Clusters, Supercomputers, Storage, Visualization. In: Wiedemann, J., Hucho, W.-H. (eds.) Progress in Vehicle Aerodynamics IV: Numerical Methods. Expert-Verlag (2006)

9. Kopecki, A., Resch, M.: Virtuelle und Hybride Prototypen. In: Bertsche, B., Bullinger, H.-J. (eds.) Entwicklung und Erprobung innovativer Produkte - Rapid Prototyping. Springer, Heidelberg (2007)

10. Lang, U., Peltier, J.P., Christ, P., Rill, S., Rantzau, D., Nebel, H., Wierse, A., Lang, R., Causse, S., Juaneda, F., Grave, M., Haas, P.: Fut. Gen. Comput. Syst. 11, 419–430 (1995)


Parallel Realization of Mathematical Modelling of Electromagnetic Logging Processes Using VIKIZ Probe Complex

V.N. Eryomin (1), S. Haberhauer (2), O.V. Nechaev (3), N. Shokina (4,5), and E.P. Shurina (5,6)

(1) Scientific Production Enterprise of Geophysical Equipment "Looch", Geologicheskaya Str. 49, 630010 Novosibirsk, Russia
    [email protected]
(2) NEC - High Performance Computing Europe GmbH, Nobelstraße 19, 70569 Stuttgart, Germany
    [email protected]
(3) Trofimuk Institute of Petroleum Geology and Geophysics SB RAS, Koptyug Ave. 3, 630090 Novosibirsk, Russia
    [email protected]
(4) High Performance Computing Center Stuttgart (HLRS), University of Stuttgart, Nobelstraße 19, 70569 Stuttgart, Germany
    [email protected]
(5) Institute of Computational Technologies SB RAS, Lavrentiev Ave. 6, 630090 Novosibirsk, Russia
(6) Novosibirsk State Technical University, K. Marx Ave. 20, 630092 Novosibirsk, Russia
    [email protected]

1 Introduction

Electromagnetic methods are widely used for solving problems of surveying and defectoscopy in geophysics. The method of high-frequency induction logging isoparametric sounding (VIKIZ) [1] is directed toward reconstructing the spatial resistivity distribution of the rock in which oil- and gas-bearing wells are situated.

The VIKIZ method is based on measuring relative phase characteristics of electromagnetic quantities, namely, the phase difference of the electromotive forces induced in receiver coils. In order to realize the required transmission distance, resolving power and parameter sensitivity, the VIKIZ equipment and its modifications, for example the VIKPB [2] borehole tool for high-frequency electromagnetic logging sounding, designed at the Scientific Production Enterprise of Geophysical Equipment "Looch" (http://www.looch.ru/index_en.php), should consist of several probes of high-frequency electromagnetic sounding of different depths.

The VIKIZ equipment consists of borehole and surface tools. A borehole tool (Fig. 1) includes a logging tool complex and an electronic measuring unit. A logging tool complex consists of five electromagnetic logging probes of different depths and the SP electrode. Every probe contains one transmitting coil and two receiver coils. The phase difference of the electromotive forces induced in the exploring coils is measured. The recorded parameter is identically related to the resistivity of the rock surrounding a well.
