Proceeding of Conference on Industrial and Applied Mathematics (CIAM 2010)
6th–8th July 2010, Institut Teknologi Bandung
ISSN: 977-208-70510-00-8

Editors: L.H. Wiryanto, S.R. Pudjaprasetya
Faculty of Mathematics and Natural Sciences
Institut Teknologi Bandung
Jalan Ganesha 10 Bandung, Indonesia.
Phone: +62 22 2502545
Fax: +62 22 250 6450


Proceeding of Conf. on Industrial and Appl. Math., Bandung‐Indonesia 2010


Electronic Proceeding

Conference on Industrial and Applied Mathematics

6 - 8 July 2010

The Committees of the Conference

Scientific Committee

Larry Forbes (University of Tasmania, Australia)
Robert McKibbin (Massey University, New Zealand)
Susumu Hara (Nagoya University, Japan)
Edy Soewono (ITB, Indonesia)
Chan Basaruddin (University of Indonesia, Indonesia)
Roberd Saragih (ITB, Indonesia)

Organizing Committee

L.H. Wiryanto (Chair)
Sri Redjeki Pudjaprasetya
Novriana Sumarti
Andonowati
Kuntjoro Adji Sidarto

Technical Committee

Jalina Wijaya
Agus Yodi Gunawan
Nuning Nuraini
Janson Naiborhu
Adil Aulia
Lina Anugerah
Ismi Ridha
Ikha Magdalena
Maulana Wimar Banuardhi
Indriani Rustomo
Pritta Etriana
Adrianus Yosia
Rafki Hidayat
Intan Hartri Putri
Yunan Pramesi
Haris Freddy Susanto


Introduction

This proceeding contains papers which were presented at the Conference on Industrial and Applied Mathematics. The editors would like to express their deepest gratitude to all presenters, contributors/authors, and participants of this conference for the overwhelming support that turned this conference into a big success. While every effort has been made to ensure consistency of format and layout of the proceedings, the editors assume no responsibility for spelling, grammatical, and factual errors. Furthermore, all opinions expressed in these papers are those of the authors and not of the conference Organizing Committee or the editors.

The Conference on Industrial and Applied Mathematics is the first international conference held at Institut Teknologi Bandung, Indonesia, during July 6–8, 2010, hosted by the Industrial and Financial Mathematics Research Division, Faculty of Mathematics and Natural Sciences, ITB. The research division has continuing research interests in financial mathematics; optimization; applied probability; control theory and its applications; biological and physical modeling and the application of mathematics in the sciences; fluid dynamics; and numerical methods and scientific computing. The conference provided a venue to exchange ideas in those areas and in any aspect of applied mathematics, promoting both established and new relationships.

Permission to make digital or hard copies of this proceeding for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage.

Editors:
L.H. Wiryanto
S.R. Pudjaprasetya
Faculty of Mathematics and Natural Sciences
Institut Teknologi Bandung
Jalan Ganesha 10 Bandung, Indonesia.
Phone: +62 22 2502545
Fax: +62 22 250 6450


Table of Content

The Committees of the Conference (i)
Introduction (ii)
Table of Content (iii)

Research Articles:

1. An adaptive nonstationary control method and its application to positioning control problems, Susumu Hara (1-8)
2. Some aspects of modelling pollution transport in groundwater aquifers, Robert McKibbin (9-16)
3. Jets and Bubbles in Fluids – Fluid Flows with Unstable Interfaces, Larry K. Forbes (17-25)
4. Boundary Control of Hyperbolic Processes with Applications in Water Flow, M. Herty and S. Veelken (26-28)
5. Isogeometric methods for shape modeling and numerical simulation, Bernard Mourrain, Gang Xu (29-33)
6. FOURTH-ORDER QSMSOR ITERATIVE METHOD FOR THE SOLUTION OF ONE-DIMENSIONAL PARABOLIC PDEs, J. Sulaiman, M.K. Hasan, M. Othman, and S.A. Abdul Karim (34-39)
7. A Parallel Accelerated Over-Relaxation Quarter-Sweep Point Iterative Algorithm for Solving the Poisson Equation, Mohamed Othman, Shukhrat I. Rakhimov, Mohamed Suleiman and Jumat Sulaiman (40-43)
8. Value-at-Risk (VaR) using ARMA(1,1)-GARCH(1,1), Sufianti and Ukur A. Sembiring (44-49)
9. Decline Curve Analysis in a Multiwell Reservoir System using State-Space Model, S. Wahyuningsih, S. Darwis, A.Y. Gunawan, A.K. Permadi (50-53)
10. Study of Role of Interferon-Alpha in Immunotherapy through Mathematical Modelling, Mustafa Mamat, Edwin Setiawan Nugraha, Agus Kartono, W M Amir W Ahmad (54-62)
11. Improving the performance of the Helmbold universal portfolio with an unbounded learning parameter, Choon Peng Tan and Wei Xiang Lim (63-66)
12. Optimal Design of the Interval Type-2 Fuzzy PI+PD Controller and Superconducting Magnetic Energy Storage (SMES) for Load Frequency Control Optimization on a Two-Area Power System, Muh Budi R Widodo, M Agus Pangestu H.W (67-71)
13. Dependence of biodegradability of xenobiotic polymers on population of microorganism, Masaji Watanabe and Fusako Kawai (72-78)
14. PROTOTYPE OF VISITOR DISTRIBUTION DETECTOR FOR COMMERCIAL BUILDING, Sukarman, Suharyanto, Samiadji Herdjunanto (79-85)
15. APPLICATION ANFIS FOR NOISE CANCELLATION, Sukarman (86-93)
16. THE STABILITY OF THE MECD SCHEME FOR LARGE SYSTEM OF ORDINARY DIFFERENTIAL EQUATIONS, Supriyono (94-99)
17. Real Time Performance of Fuzzy PI+Fuzzy PD Self Tuning Regulator In Cascade Control, Mahardhika Pratama, Syamsul Rajab, Imam Arifin, Moch. Rameli (100-103)
18. APPROXIMATION OF RUIN PROBABILITY FOR INVESTMENT WITH INDEPENDENT AND IDENTICALLY DISTRIBUTED RANDOM NET RETURNS AND MULTIVARIATE NORMAL MEAN VARIANCE MIXTURE DISTRIBUTED FORCES OF INTEREST IN FIXED PERIOD, Ryan Kurniawan and Ukur Arianto Sembiring (104-108)
19. Stochastic History Matching for Composite Reservoir, Sutawanir Darwis, Agus Yodi Gunawan, Sri Wahyuningsih, Nurtiti Sunusi, Aceng Komarudin Mutaqin, Nina Fitriyani (109-115)
20. PROBABILITY ANALYSIS OF RAINY EVENT WITH THE WEIBULL DISTRIBUTION AS A BASIC MANAGEMENT IN OIL PALM PLANTATION, Divo D. Silalahi (116-120)
21. A Multi-Scale Approach to the Flow Optimization of Systems Governed by the Euler Equations, Jean Medard T. Ngnotchouye, Michael Herty, and Mapundi K. Banda (121-126)
22. Modelling and Simulating Multiphase Drift-flux Flows in a Networked Domain, Mapundi K. Banda, Michael Herty, and Jean Medard T. Ngnotchouye (127-133)
23. Calculating Area of Earth's Surface Based on Discrete GPS Data, Alexander A S Gunawan, Aripin Iskandar (134-137)
24. Study on Application of Machine Vision using Least-Mean-Square (LMS), Hendro Nurhadi and Irhamah (138-144)
25. Cooperative Linear Quadratic Game for Descriptor System, Salmah (145-150)
26. ARMA Model Identification using Genetic Algorithm (An Application to Arc Tube Low Power Demand Data), Irhamah, Dedy Dwi Prastyo and M. Nasrul Rohman (151-155)
27. A Particle Swarm Optimization for Employee Placement Problems in the Competency Based Human Resource Management System, Joko Siswanto and The Jin Ai (156-161)
28. Measuring Similarity between Wavelet Function and Transient in a Signal with Symmetric Distance Coefficient, Nemuel Daniel Pah (162-166)
29. An Implementation of Investment Analysis using Fuzzy Mathematics, Novriana Sumarti and Qino Danny (167-169)
30. Simulation of Susceptible Areas to the Impact of Storm Tide Flooding along Northern Coasts of Java, Nining Sari Ningsih, Safwan Hadi, Dwi F. Saputri, Farrah Hanifah, and Amanda P. Rudiawan (170-178)
31. Fuzzy Finite Difference on Calculation of an Individual's Bank Deposits, Novriana Sumarti and Siti Mardiah (179-183)
32. An Implementation of Fuzzy Linear System in Economics, Novriana Sumarti and Cucu Sukaenah (184-187)
33. Compact Finite Difference Method for Solving Discrete Boltzmann Equation, Pranowo, A. Gatot Bintoro (188-193)
34. Natural convection heat transfer with Al2O3 nanofluids at low Rayleigh number, Zailan Siri, Ishak Hashim and Rozaini Roslan (194-199)
35. Optimization model for estimating productivity growth in Malaysian food manufacturing industry, Nordin Hj. Mohamad and Fatimah Said (200-206)
36. Numerical study of natural convection in a porous cavity with transverse magnetic field and non-uniform internal heating, Habibis Saleh, Ishak Hashim and Rozaini Roslan (207-211)
37. THE DISTRIBUTION PATTERN AND ABUNDANCE OF ASTEROID AND ECHINOID AT RINGGUNG WATERS SOUTH LAMPUNG, Arwinsyah Arka, Agus Purwoko, Oktavia (212-215)
38. Low biomass of macrobenthic fauna at a tropical mudflat: an effect of latitude?, Agus Purwoko and Wim J. Wolff (216-224)
39. Density and biomass of the macrobenthic fauna of the intertidal area in Sembilang national park, South Sumatra, Indonesia, Agus Purwoko and Wim J. Wolff (225-234)
40. Intelligent traffic light system for AMJ highway, Nur Ilyana Anwar Apandi, Puteri Nurul Fareha M. Ahmad Mokhtar, Nur Hazahsha Shamsudin, Anis Niza Ramani and Mohd Safirin Karis (235-238)
41. Goodness of Fit Test for Gumbel Distribution Based on Kullback-Leibler Information using Several Different Estimators, S.A. Al-Subh, K. Ibrahim, M.T. Alodat, A.A. Jemain (239-245)
42. Impact of shrimp pond development on biomass of intertidal macrobenthic fauna: a case study at Sembilang, South Sumatra, Indonesia, Agus Purwoko, Arwinsyah Arka and Wim J. Wolff (246-256)


An adaptive nonstationary control method and its application to positioning control problems

Susumu Hara(1)

(1)Department of Mechanical Science and Engineering, Graduate School of Engineering, Nagoya University, Nagoya, Japan

(Email: [email protected])

ABSTRACT

This plenary talk discusses an adaptive nonstationary control (ANSC) method based on the progressive time calculations of time-varying Riccati equations and shows its application to positioning control problems. Some nonstationary control methods generate useful motion trajectories. Moreover, ANSC is a practical control method for actual positioning problems. Concretely, this talk selects a positioning problem of a flexible structure installed on an X-Y table and a mode switching control problem of a power assist cart. The effectiveness of the ANSC method is verified via the two applications.

KEYWORDS

Adaptive nonstationary control; motion control; positioning; power assist; time-varying Riccati equation; vibration control.

I. INTRODUCTION

Positioning is one of the most important control tasks for mechanical systems. It is very important for conveyance devices, robots, information precision machinery, and so on. Recently, many mechanical structures have become light and/or large-scale structures due to the expansion of their applications. Simultaneously, control performances such as traveling period (seeking time) and settling accuracy have to satisfy hard specifications. Therefore, we cannot ignore the influence of the vibration modes of controlled objects in many problems. From such a point of view, positioning control problems for vibration systems have increased.

As one of the effective control methods for positioning of vibration systems, the "Nonstationary Optimal Regulator (NOR)" is well known [1]–[4]. NOR is obtained from the finite-horizon LQ optimal regulator problem based on the vibration system model of a controlled object, and produces superior transient performance and vibrationless trajectories with a simple single-degree-of-freedom control system structure. Moreover, NOR does not need the reference design and generates useful trajectories by the feedback control algorithm. However, tuning the time-varying weights of the LQ criterion function depends on trial and error in many NOR designs, so some controller designers are compelled to solve inefficient controller design problems. Hara proposed a semiautomatic NOR design method based on GA meta-optimization [5], but generally it takes much time to obtain the optimized controller. These inefficiencies are caused by the difference in the calculation time directions of controller design and response simulation. In NOR designs, at first, a time-varying Riccati equation is solved by reversed time calculations (i). Secondly, the responses of a controlled object are solved by progressive time calculations (ii). If the responses in (ii) do not satisfy some specifications, we tune the time-varying weights again and repeat (i). Hence, (i) and (ii) cannot be carried out simultaneously. Recently, the author has proposed an efficient time-varying gain type controller design method using the solutions of time-varying Riccati equations. This method requires the progressive time calculations only [6], [7]. Both the time-varying Riccati equation and the responses of a controlled object are calculated simultaneously by the progressive time calculations. This method is different from the optimal control method based on the LQ criterion function. However, the responses of this method become similar to the NOR case responses. Moreover, the progressive time calculations of the Riccati equation enable us to


simplify the weight tuning [6]. In [6] and [7], the combination of the above method and a random search technique for the weight tuning was applied to a positioning control problem of a vibration system.

The progressive time calculations of the Riccati equations mean that the time-varying feedback gains are obtained by real-time computations in actual implementations. This feature is convenient for the realization of some kind of adaptive control. Therefore, this plenary talk discusses an adaptive nonstationary control (ANSC) method based on this feature of the author's previous study and shows its application to positioning control problems. At first, we select a flexible structure installed on an X-Y table as an example of a controlled object. The ANSC method in this talk offers a single nonstationary controller design realizing the positioning motion, the estimation of the natural frequency during the access motion, and the residual vibration reduction (settling) using the estimated information. Moreover, not only the vibration system application but also another ANSC application to the mode switching control of a power assist cart is shown. In this system, the control mode switches smoothly from ANSC-based access control to manual positioning with nonstationary impedance control (nonstationary power assist control). The effectiveness of the ANSC method is verified via the two applications.

The contents of this paper are arranged from the author's original papers [8] and [9] for the plenary talk at CIAM2010, Bandung, Indonesia.

II. ADAPTIVE NONSTATIONARY CONTROL METHOD

A. Nonstationary Control Method Using the Solutions of Time-Varying Riccati Equations

This subsection summarizes the "Nonstationary Optimal Regulator (NOR)" [1]–[4] and the author's previous method [6], [7]. Here, we consider a controlled object described by the following single-input time-varying state equation:

\dot{\mathbf{x}}(t) = \mathbf{A}(t)\,\mathbf{x}(t) + \mathbf{b}(t)\,u(t)    (1)

where x(t) and u(t) are the state variables and the control input, respectively. This model includes the vibration characteristics of the controlled object. NOR is the LQ optimal regulator based on the following finite-horizon (time t in [0, t_f]) criterion function:

J = \int_0^{t_f} \left[ \mathbf{x}^{\mathrm{T}}(t)\,\mathbf{Q}(t)\,\mathbf{x}(t) + r(t)\,u^2(t) \right] dt.    (2)

The optimal control input is obtained by the following state feedback formula:

u(t) = -\mathbf{f}_b(t)\,\mathbf{x}(t), \qquad \mathbf{f}_b(t) = r^{-1}(t)\,\mathbf{b}^{\mathrm{T}}(t)\,\mathbf{P}(t)    (3)

where f_b(t) are the time-varying feedback gains and P(t) is the solution of the time-varying Riccati equation:

-\dot{\mathbf{P}}(t) = \mathbf{P}(t)\mathbf{A}(t) + \mathbf{A}^{\mathrm{T}}(t)\mathbf{P}(t) - \mathbf{P}(t)\mathbf{b}(t)\,r^{-1}(t)\,\mathbf{b}^{\mathrm{T}}(t)\mathbf{P}(t) + \mathbf{Q}(t).    (4)

In NOR designs, at first, the time-varying Riccati equation (4) is solved by reversed time (t_f to 0) calculations under appropriate time-varying weights Q(t), r(t) and the boundary condition (e.g., P(t_f) = 0) (i). Secondly, the responses of the controlled object are solved by progressive time (0 to t_f) calculations (ii). If the responses in (ii) do not satisfy some specifications, we tune the time-varying weights again and repeat (i). NOR produces superior transient performance and vibrationless trajectories with a simple single-degree-of-freedom control system structure. NOR does not need the reference design that general servo systems require, and generates useful trajectories by the feedback control algorithm.

However, the calculations (i) and (ii) cannot be carried out simultaneously. This point makes NOR design complicated. In particular, tuning the time-varying weights takes much time in many NOR designs. From such a point of view, the author proposed an efficient time-varying gain type controller design method using the solutions of novel time-varying Riccati equations [6], [7]. In this method, the following time-varying Riccati equation is utilized instead of (4):

\frac{d\mathbf{P}'(\tau)}{d\tau} = \mathbf{P}'(\tau)\mathbf{A}(\tau) + \mathbf{A}^{\mathrm{T}}(\tau)\mathbf{P}'(\tau) - \mathbf{P}'(\tau)\mathbf{b}(\tau)\,r'^{-1}(\tau)\,\mathbf{b}^{\mathrm{T}}(\tau)\mathbf{P}'(\tau) + \mathbf{Q}'(\tau)    (5)


where Q'(τ = t) = Q(t) and r'(τ = t) = r(t). The time-varying weights are given to (5) following time progress. The control input is obtained by

u(t) = -\mathbf{f}'_b(t)\,\mathbf{x}(t), \qquad \mathbf{f}'_b(t) = r'^{-1}(\tau = t)\,\mathbf{b}^{\mathrm{T}}(\tau = t)\,\mathbf{P}'(\tau = t)    (6)

where f'_b(t) are the time-varying feedback gains. The combination of (1) and (5) requires the progressive time calculations only. Then both the time-varying Riccati equation and the responses of the controlled object are calculated simultaneously. The responses of this method become similar to those of the NOR case. This strategy is useful for the weight tuning and realizes an efficient controller design in comparison with NOR. The theory of this method is quite different from NOR, which is based on the criterion function (2). However, it is important for controller designers to obtain useful trajectories similar to the optimal control results easily in many actual applications. Generally, we cannot discuss the closed-loop stability and robustness of this method with conventional control theories. However, the controller designs are settled through many time-historical simulations; therefore, these points are checked by the response calculations [7]. In the author's previous papers, the above method with a random search technique for the weight tuning was applied to a positioning control problem of a vibration system [6], [7].
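The progressive-time scheme of (5) and (6) can be sketched in a few lines of NumPy. The plant, the weight schedules, and all numerical values below are illustrative stand-ins rather than the paper's model; the point is only that the Riccati solution, the gains, and the plant response advance together in a single forward loop.

```python
import numpy as np

def ansc_forward_simulation(A, b, Q_of_t, r_of_t, x0, t_f, dt):
    """Co-integrate the forward-time Riccati equation (5) and the plant (1)
    with explicit Euler steps; the gain of eq. (6) is available at each step."""
    n = A.shape[0]
    P = np.zeros((n, n))                  # boundary condition P'(0) = 0
    x = np.asarray(x0, dtype=float).copy()
    traj, gains = [x.copy()], []
    steps = int(round(t_f / dt))
    for k in range(steps):
        t = k * dt
        Q, r = Q_of_t(t), r_of_t(t)
        f = (b.T @ P) / r                 # f'_b = r'^-1 b^T P', eq. (6)
        u = -(f @ x).item()
        # progressive-time Riccati step, eq. (5)
        P = P + dt * (P @ A + A.T @ P - P @ b @ b.T @ P / r + Q)
        x = x + dt * (A @ x + b.flatten() * u)
        traj.append(x.copy())
        gains.append(f.flatten().copy())
    return np.array(traj), np.array(gains)

# Toy lightly damped oscillator (illustrative values, not the paper's model)
A = np.array([[0.0, 1.0], [-4.0, -0.2]])
b = np.array([[0.0], [1.0]])
traj, gains = ansc_forward_simulation(
    A, b,
    Q_of_t=lambda t: np.diag([10.0 * t, 1.0]),   # weights grow with time
    r_of_t=lambda t: 1.0,
    x0=[1.0, 0.0], t_f=5.0, dt=1.0e-3)
```

No reversed-time sweep appears anywhere: the gain used at step k is computed from the Riccati solution already accumulated up to step k, which is exactly what makes the weight schedules tunable on the fly.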

B. Use of Discrete-Time Riccati Equations

In order to reduce the computational load, this subsection introduces the use of the following discrete-time Riccati equation instead of (5):

\mathbf{P}'_d(n+1) = \mathbf{A}_d^{\mathrm{T}}(n)\mathbf{P}'_d(n)\mathbf{A}_d(n) - \mathbf{A}_d^{\mathrm{T}}(n)\mathbf{P}'_d(n)\mathbf{b}_d(n) \left[ r'_d(n) + \mathbf{b}_d^{\mathrm{T}}(n)\mathbf{P}'_d(n)\mathbf{b}_d(n) \right]^{-1} \mathbf{b}_d^{\mathrm{T}}(n)\mathbf{P}'_d(n)\mathbf{A}_d(n) + \mathbf{Q}'_d(n)    (7)

where A_d(n) and b_d(n) are the discrete-time matrices of A(t) and b(t) for the sampling period of the actual implementation, respectively, and n (n = 0, 1, 2, ...) is the discrete-time sample number. For convenience, some equations in this paper include both the continuous time t and the discrete-time sample n; these n are considered to correspond to the appropriate t. The control input is obtained as follows:

u(t) = -\mathbf{f}'_{bd}(n)\,\mathbf{x}(t), \qquad \mathbf{f}'_{bd}(n) = \left[ r'_d(n) + \mathbf{b}_d^{\mathrm{T}}(n)\mathbf{P}'_d(n)\mathbf{b}_d(n) \right]^{-1} \mathbf{b}_d^{\mathrm{T}}(n)\mathbf{P}'_d(n)\mathbf{A}_d(n).    (8)

C. Adaptive Nonstationary Control (ANSC) Method

The use of (7) and (8) implies that the time-varying feedback gains can be obtained by real-time computations in actual implementations. This feature is convenient for the realization of some kind of adaptive control. We assume that parameters carrying some information about the controlled object can be detected on-line/in real time. The real-time identified model of the controlled object is described as follows:

\dot{\mathbf{x}}_m(t) = \mathbf{A}_m(t)\,\mathbf{x}_m(t) + \mathbf{b}_m(t)\,u_m(t).    (9)

The Riccati equation corresponding to the model (9) is

\mathbf{P}'_{md}(n+1) = \mathbf{A}_{md}^{\mathrm{T}}(n)\mathbf{P}'_{md}(n)\mathbf{A}_{md}(n) - \mathbf{A}_{md}^{\mathrm{T}}(n)\mathbf{P}'_{md}(n)\mathbf{b}_{md}(n) \left[ r'_{md}(n) + \mathbf{b}_{md}^{\mathrm{T}}(n)\mathbf{P}'_{md}(n)\mathbf{b}_{md}(n) \right]^{-1} \mathbf{b}_{md}^{\mathrm{T}}(n)\mathbf{P}'_{md}(n)\mathbf{A}_{md}(n) + \mathbf{Q}'_{md}(n)    (10)

where A_md(n) and b_md(n) are the discrete-time matrices of A_m(t) and b_m(t) for the sampling period of the actual implementation, respectively. The control input is obtained by

u_m(t) = -\mathbf{f}'_{bmd}(n)\,\mathbf{x}_m(t), \qquad \mathbf{f}'_{bmd}(n) = \left[ r'_{md}(n) + \mathbf{b}_{md}^{\mathrm{T}}(n)\mathbf{P}'_{md}(n)\mathbf{b}_{md}(n) \right]^{-1} \mathbf{b}_{md}^{\mathrm{T}}(n)\mathbf{P}'_{md}(n)\mathbf{A}_{md}(n).    (11)

The method utilizing (9)–(11) is termed the "Adaptive Nonstationary Control (ANSC) method" in this paper.
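A minimal sketch of the recursion (10) and the gain (11), assuming a generic identified model. The discrete matrices, the weights, and the idea of refreshing the model inside the loop are placeholders for whatever identification scheme is used, not the paper's values.

```python
import numpy as np

def riccati_step(P, Ad, bd, Qd, rd):
    """One progressive step of the discrete-time Riccati recursion (10)."""
    den = rd + (bd.T @ P @ bd).item()      # scalar r' + b^T P' b
    APb = Ad.T @ P @ bd                    # A_d^T P' b_d  (n x 1)
    # middle term is the symmetric outer product APb (APb)^T / den
    return Ad.T @ P @ Ad - (APb @ APb.T) / den + Qd

def feedback_gain(P, Ad, bd, rd):
    """Time-varying gain f'_bmd(n) of eq. (11), as a 1 x n row vector."""
    den = rd + (bd.T @ P @ bd).item()
    return (bd.T @ P @ Ad) / den

# ANSC loop sketch: a re-identified model (A_md, b_md) could replace
# Ad, bd every sample before the Riccati step; here it is kept constant.
Ad = np.array([[1.0, 1.0e-3], [-4.0e-3, 1.0]])   # illustrative discrete model
bd = np.array([[0.0], [1.0e-3]])
Qd = np.diag([1.0, 1.0])
rd = 1.0
P = np.zeros((2, 2))                              # P'(0) = 0
x = np.array([[1.0], [0.0]])
for n in range(4000):
    f = feedback_gain(P, Ad, bd, rd)              # gain from current P'(n)
    u = -(f @ x).item()
    P = riccati_step(P, Ad, bd, Qd, rd)           # P'(n) -> P'(n+1)
    x = Ad @ x + bd * u                           # plant update
```

Because the recursion runs forward in n, the model update and the gain computation fit naturally into the same real-time loop, which is the feature the ANSC method exploits.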

III. CONTROLLED OBJECT

In this paper, we select a controlled object as in Fig. 1 as an example. This object consists of an X-Y table and a flexible structure installed on the table. The table possesses a driving system consisting of DC servomotors, reduction gears, and timing belts. Only the motion of a single axis (the X-axis in Fig. 1) is considered in this paper. When the table is settled at some point, the natural frequency of the flexible structure is 4.55 Hz. The


parameters of the controlled object are summarized in Table 1. In experiments, the angular displacement of the motor is detected by an encoder. The vibration status of the flexible structure is detected by both an acceleration pickup installed on the structure and a gap sensor installed on the table. Control computations in the experiments are carried out by a digital signal processor (DSP, TMS320C31). Its sampling period is 1.0 ms.

If the flexible structure can be modeled by a single-degree-of-freedom vibration system, we can obtain the controlled object model consisting of the

vibrating part and the base part, as shown in Fig. 2. The state equation for this model is as follows [2], [4]:

\dot{\mathbf{x}}_p(t) = \mathbf{A}_p\,\mathbf{x}_p(t) + \mathbf{b}_p\,u_p(t) + \mathbf{d}_p\,f_r    (12)

\mathbf{x}_p(t) = \left[ x_s(t)\ \ \dot{x}_s(t)\ \ \theta_m(t)\ \ \dot{\theta}_m(t)\ \ i(t) \right]^{\mathrm{T}}

where x_s(t), θ_m(t), and i(t) are the absolute displacement of the tip of the vibrating part [m], the angular displacement of the motor [rad], and the current of the motor [A], respectively. The control input u_p(t) is the motor voltage e(t) [V]. In (12), the influence of Coulomb friction in the driving system is taken into account by using sign functions in the same manner as in [2], [4].

If the influences of the motor's inductance and the Coulomb friction are sufficiently small, (12) can be approximated by the following equation [3]:

\dot{\mathbf{x}}_{p2}(t) = \mathbf{A}_{p2}\,\mathbf{x}_{p2}(t) + \mathbf{b}_{p2}\,u_{p2}(t)    (13)

\mathbf{x}_{p2}(t) = \left[ x_s(t)\ \ \dot{x}_s(t)\ \ \theta_m(t)\ \ \dot{\theta}_m(t) \right]^{\mathrm{T}}, \qquad u_{p2}(t) = e(t).

Generally, the influences of damping are small in controlled objects like the one in Fig. 1. Moreover, if we neglect the force from the vibrating part on the base part, a simpler model is obtained as follows:

\dot{\mathbf{x}}_{p3}(t) = \mathbf{A}_{p3}\,\mathbf{x}_{p3}(t) + \mathbf{b}_{p3}\,u_{p3}(t)    (14)

\mathbf{x}_{p3}(t) = \left[ x_s(t)\ \ \dot{x}_s(t)\ \ \theta_m(t)\ \ \dot{\theta}_m(t) \right]^{\mathrm{T}}, \qquad u_{p3}(t) = e(t)

\mathbf{A}_{p3} = \begin{bmatrix} 0 & 1 & 0 & 0 \\ -\omega_s^2 & 0 & \omega_s^2\,\dfrac{r}{\alpha} & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & -\dfrac{K_T K_E}{I_l R} \end{bmatrix}, \qquad \mathbf{b}_{p3} = \begin{bmatrix} 0 \\ 0 \\ 0 \\ \dfrac{K_T}{I_l R} \end{bmatrix}

where \omega_s = \sqrt{k_s/m_s} and I_l \equiv m_b\,r^2/\alpha^2 + J_b/\alpha^2 + J_m, respectively.

IV. NUMERICAL CALCULATIONS AND EXPERIMENTS

A. The Control Problem in This Section and the

Figure 1. Controlled object.

Figure 2. Controlled object model.

Table 1. Parameters of the controlled object model.

m_s  Equivalent mass of the vibrating part  4.50 kg
k_s  Equivalent spring constant of the vibrating part  3.68×10^3 N/m
c_s  Equivalent damping coefficient of the vibrating part  1.28 N·s/m
m_b  Equivalent mass of the base part  22.1 kg
J_b  Moment of inertia (base side gear and the pulley)  3.57×10^-5 kg·m^2
J_m  Moment of inertia (motor and the motor side gear)  8.33×10^-4 kg·m^2
r    Radius of the pulley  0.0159 m
α    Gear ratio (= r_b/r_m)  11.0
L    Inductance of the motor  2.0 mH
R    Resistance of the motor  1.0 Ω
K_T  Torque constant of the motor  0.215 N·m/A
K_E  Back electromotive force constant of the motor  0.215 V·s/rad
B    Braking constant of the motor  1.69×10^-4 N·m·s/rad
f_r  Coulomb friction force of the driving system  131.4 N
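The quoted 4.55 Hz natural frequency can be cross-checked against the table values: with m_s and k_s as read above, ω_s = √(k_s/m_s) gives almost exactly that value. A small sanity-check script (the variable names are mine):

```python
import math

# Parameter values as read from Table 1
m_s = 4.50      # equivalent mass of the vibrating part [kg]
k_s = 3.68e3    # equivalent spring constant of the vibrating part [N/m]

omega_s = math.sqrt(k_s / m_s)     # natural angular frequency [rad/s]
f_n = omega_s / (2.0 * math.pi)    # natural frequency [Hz]
print(round(f_n, 2))               # prints 4.55, matching the text
```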


Feature of the ANSC Method

In this section, we discuss the positioning and residual vibration reduction of the flexible structure in Fig. 1. Vibration during the access motion is permitted in this section; we can therefore estimate parameters of the vibration characteristic during the access motion and use the estimated data to reduce the residual vibration. In this section, a very simple method for the natural frequency estimation is applied. As mentioned in the previous section, the influence of damping of the flexible structure can be neglected here. The natural frequency is then estimated by the "peak-to-peak" period measurement of the tip acceleration, as in Fig. 3. If the ANSC method is applied to the model (14) without the weights on vibration control, the controller does not reduce vibration. When the weights increase, vibration reduction begins, as in Fig. 4. Therefore, we can estimate the natural frequency exactly while the weights are zero. Unlike the ANSC method, the conventional NOR produces some nonzero feedback gains and reduces vibration even if the weights become zero, as in Fig. 4 [2]. Therefore, NOR requires the exact controlled object model a priori. However, the ANSC method offers a single nonstationary controller design realizing the positioning motion, the natural frequency estimation during the access motion, and the residual vibration reduction (settling) using the estimated information. From such a point of view, the ANSC method is a nonstationary controller design method possessing some adaptability.
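The peak-to-peak estimation idea can be sketched as follows. The function name and the synthetic test signal are mine; a real implementation would also need the noise precautions used in the experiments.

```python
import math

def estimate_natural_frequency(acc, dt):
    """Estimate frequency from the mean peak-to-peak period of a sampled
    acceleration signal (a peak = a local maximum between its neighbours)."""
    peaks = [i for i in range(1, len(acc) - 1)
             if acc[i - 1] < acc[i] >= acc[i + 1]]
    if len(peaks) < 2:
        return None
    periods = [(b - a) * dt for a, b in zip(peaks, peaks[1:])]
    return 1.0 / (sum(periods) / len(periods))

# synthetic 4.55 Hz acceleration sampled every 15 ms (as in the experiments)
dt = 0.015
acc = [math.sin(2.0 * math.pi * 4.55 * n * dt) for n in range(200)]
f_hat = estimate_natural_frequency(acc, dt)
```

Averaging over several peak-to-peak intervals keeps the quantization error of the sampled peak locations small, which is why the coarse 15 ms sampling is sufficient for this estimate.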

B. Numerical Calculations

This subsection presents numerical calculation examples from [8]. Let the access distance be 0.1 m and the access period about 2 s. The models (14) and (12) are applied to the controller design and to the response calculations, respectively. We prepare two kinds of time-varying weights: one on the angular displacement of the motor (for positioning), [q_pos(t)]^2, and one on the relative velocity of the tip of the structure (for residual vibration reduction), [q_vib(t)]^2. These weights are tuned by taking into account the above problem setting and the constraints of the actual experimental system. As a result, Q'_md(t), q_pos(t), and q_vib(t) are set as follows:

\mathbf{Q}'_{md}(t) = \mathbf{C}_{obj}^{\mathrm{T}}(t)\,\mathbf{C}_{obj}(t)    (15)

\mathbf{C}_{obj}(t) = \begin{bmatrix} 0 & 0 & q_{pos}(t) & 0 \\ 0 & q_{vib}(t) & 0 & -(r/\alpha)\,q_{vib}(t) \end{bmatrix}

(r/\alpha)\,q_{pos}(t) = 1.0\times10^{5} + 2.0\times10^{7}\left[ 1.0 + \exp(-10.0\,(t - 2.0)) \right]^{-1}

q_{vib}(t) = 5.0\times10^{3}\left[ 1.0 + \exp(-10.0\,(t - 1.8)) \right]^{-1}.

The weight on the control input, r'_md(t), is set to 1.0 in this section. q_pos(t) and q_vib(t) are shown in Fig. 5(a) and (b), respectively.

The time-varying feedback gains obtained with these weights are shown in Fig. 5(c)–(f). Here, we calculate (10) under P'_md(0) = 0. The gains for the motor variables (Fig. 5(e) and (f)) increase after 0 s, following the weight q_pos(t) (Fig. 5(a)). The access motion is realized by these gains. On the other hand, the gains for vibration reduction (Fig. 5(c) and (d)) are almost zero until about 1.3–1.5 s, so vibration reduction is not carried out during the access motion. The estimation stops when the table passes 1.0 cm before the objective settling point. The calculated responses are omitted for want of space; the reader can find them in [8]. They are similar to the experimental results in the following subsection.
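The two weight schedules above are logistic ramps that switch on around t = 2.0 s and t = 1.8 s. A sketch evaluating them as read here, with the (r/α) factor kept folded into the positioning weight as printed:

```python
import math

def sigmoid(x):
    """Standard logistic function 1 / (1 + exp(-x))."""
    return 1.0 / (1.0 + math.exp(-x))

def q_pos_scaled(t):
    """(r/alpha) * q_pos(t): ramps from 1.0e5 up by 2.0e7 around t = 2.0 s."""
    return 1.0e5 + 2.0e7 * sigmoid(10.0 * (t - 2.0))

def q_vib(t):
    """Vibration-reduction weight: switches on to 5.0e3 around t = 1.8 s."""
    return 5.0e3 * sigmoid(10.0 * (t - 1.8))
```

Evaluating the schedules shows the intent directly: the vibration weight is essentially zero during the access motion (so the natural frequency can be estimated undisturbed) and saturates just before settling.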

Figure 3. Estimation of the natural frequency.

Figure 4. Comparison of feedback gains (proposed ANSC method and conventional nonstationary optimal regulator).

C. Experiments

The experimental result of the ANSC method corresponding to the numerical calculation is shown in Fig. 6 (bold lines). As a reference, Fig. 6

includes the result of the case in which the nonstationary controller based on a time-invariant model with a natural frequency of 2.0 Hz is applied (thin lines). In the experiments, taking the influence of noise on the acceleration pickup into account, the natural frequency is estimated from acceleration data sampled every 15.0 ms. The estimated result is shown in Fig. 7. The discrete-time matrices A_md(n) and b_md(n) are obtained by using a zero-order hold and a Taylor approximation of the matrix exponential function (terms whose order is higher than the square of the sampling period are truncated).

Figure 5. Time-varying weights and feedback gains: (a) weight on the angular displacement of the motor; (b) weight on the relative velocity of the tip; (c) feedback gain on the displacement of the tip; (d) feedback gain on the velocity of the tip; (e) feedback gain on the angular displacement of the motor; (f) feedback gain on the angular velocity of the motor.

Figure 6. Experimental results (bold: ANSC method; thin: the controller based on a time-invariant natural frequency): (a) displacement of the tip; (b) displacement of the base; (c) acceleration of the tip; (d) velocity of the base; (e) current of the motor; (f) input voltage.

Figure 7. Natural frequency estimation (experiment).
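The zero-order-hold discretization with a truncated Taylor series described above can be sketched as follows. The double-integrator example is mine, chosen because its truncated series happens to be exact.

```python
import numpy as np

def discretize_zoh_taylor(A, b, T):
    """Zero-order-hold discretization with the matrix exponential truncated
    after the T^2 term:
        A_d ~ I + A T + (A T)^2 / 2
        b_d ~ (I T + A T^2 / 2) b
    """
    n = A.shape[0]
    I = np.eye(n)
    Ad = I + A * T + (A @ A) * T**2 / 2.0
    bd = (I * T + A * T**2 / 2.0) @ b
    return Ad, bd

# example: double integrator at the experiments' 1.0 ms sampling period
A = np.array([[0.0, 1.0], [0.0, 0.0]])
b = np.array([[0.0], [1.0]])
Ad, bd = discretize_zoh_taylor(A, b, 1.0e-3)
```

Truncating after the T^2 term keeps the per-sample cost low enough for the real-time gain computation on the DSP, at the price of an O(T^3) local discretization error.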

From the comparison of the two results in Fig. 6, we can see that the ANSC method reduces the residual vibration effectively. The effectiveness of ANSC is therefore verified experimentally.

V. APPLICATION TO MODE SWITCHING CONTROL OF A POWER ASSIST CART (ANSC-SERVO CONTROL CASE)
In spite of the popularization of industrial robots, some processes require operators' actions instead of the use of autonomous robots. For example, many operators handle and position heavy objects in automotive assembly processes [10]. For that reason, various power assist systems based on impedance control have been developed.

Considering the many applications of power assist systems, the objects to be positioned are typically conveyed to the vicinity of the power assist system by a conveyance system, as shown as operation type I in Fig. 8. However, in order to improve the efficiency of operation and reduce the physical burden on operators, we should pay attention to the cooperation of the conveyance and power-assist functions. Ideally, both the automatic conveyance and the power assist processes are realized by a single actuator and a novel controller for the smooth switching of both


[Figure 9 labels: handle, force sensor, gears, servomotor, encoder, weight, driving wheel, auxiliary wheel, acceleration pickup; dimensions 300, 600 and 363.]

Figure 9. A power assist cart.

m_n    Mass of the model                                   20.0 kg (nominal) – 50.0 kg (max)
α = r_r/r_m    Gear ratio                                  15.0
r_w    Radius of the wheel                                 0.045 m
J_m    Moment of inertia of the motor                      2.13 × 10⁻⁵ kg·m²
J_g    Moment of inertia of the motor-side gear            8.47 × 10⁻⁵ kg·m²
J_w    Moment of inertia of the wheels                     4.56 × 10⁻⁴ kg·m²
K_T    Torque constant of the motor                        0.18 N·m/A
K_E    Back electromotive force constant of the motor      0.18 V·s/rad
R      Resistance of the motor                             12.1 Ω

[Figure 10 panels, plotted against time (0–10 s): (a) weight on the servo variable q_s(t) (× 1.0e5); (b) weight on the velocity q_b(t) of the cart; (c) feedback gain on the servo variable; (d) feedback gain on the velocity of the cart.]

Figure 10. Time-varying weights and feedback gains for SCM and BM.

[Figure 11 panels, plotted against time (0–10 s): (a) displacement of the cart [m]; (b) velocity of the cart [m/s] (dashed line: reference); (c) current of the motor [A]; (d) control input voltage [V]; (e) acceleration signal of the cart [m/s²]; (f) force of the operator [N].]

Figure 11. Experimental result.

the processes. This enables us to save control energy and improve operators' comfort. Operation type II in Fig. 8 shows this scheme. Hereafter, each process is called a "mode" in this section.

Figure 8. Conveyance and power assist processes.

[Figure 8 labels: operation type I — object(s) conveyed by a conveyance system (servo control) and positioned by a power-assist system (impedance control) with operator(s); operation type II — a single actuator on the floor switching between servo control mode, braking mode and impedance control mode, with object(s) and an operator.]

Taking that situation into account, the author studied the positioning of a cart (Fig. 9) by means of a smooth switching from the servo control mode (SCM) to the impedance control mode (ICM) in [9]. The smooth switching in [9] means the continuous switching of the control inputs of plural control modes. For the purpose of the smooth switching, a braking mode (BM) was inserted between SCM and ICM. Both SCM and BM were


designed simultaneously by means of a "servo type" ANSC. "Servo type" means that we obtain the ANSC time-varying feedback gains for the augmented system consisting of the original controlled-object system and an integrator for the error between a reference and the corresponding actual value. This technique is similar to so-called LQI control. In such a control problem, controlled-object parameters such as the mass on the cart often vary over a wide range, but the mass can be measured using a load cell. We therefore assume that all the parameters of the controlled object are time-invariant and can be determined at the starting time of control. The time-varying Riccati equation for the augmented system is calculated online during SCM and BM. For further details of this study, readers may refer to [9].
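The servo-type augmentation described above can be sketched as follows; the plant, output vector and numerical values are illustrative assumptions, not the cart model of [9].

```python
import numpy as np

# Sketch of the "servo type" augmented system: a plant x' = A x + b u with
# tracked output y = c x is extended with an integrator state
#   q_s' = r - y
# on the tracking error, as in LQI control. The reference r enters the
# augmented dynamics separately and is omitted from the system matrix.
def augment_for_servo(A, b, c):
    n = A.shape[0]
    Aa = np.zeros((n + 1, n + 1))
    Aa[:n, :n] = A
    Aa[n, :n] = -c                       # integrator row: q_s' = -c x (+ r)
    ba = np.concatenate([b, [0.0]])
    return Aa, ba

# Illustrative plant: cart position/velocity, tracked output = velocity
A = np.array([[0.0, 1.0], [0.0, -0.5]])
b = np.array([0.0, 2.0])
c = np.array([0.0, 1.0])
Aa, ba = augment_for_servo(A, b, c)
```

The time-varying ANSC gains would then be computed for (Aa, ba) via the online Riccati calculation mentioned in the text.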

Examples of the time-varying weights and feedback gains are shown in Fig. 10. The "servo variable" in Fig. 10 corresponds to the state variable of the integrator. Experimental results are shown in Fig. 11. In this result, the period between 0 and about 5 s is SCM. In the vicinity of 6 s, the operator catches the cart and the control mode is switched from BM to ICM smoothly. In ICM, the operator can settle the cart easily with power assistance.

VI. CONCLUSIONS
This plenary talk discussed an adaptive nonstationary control (ANSC) method based on the progressive time calculation of Riccati equations and showed its application to positioning control problems. Concretely, this talk selected a positioning problem of a flexible structure installed on an X-Y table and the mode switching control problem of a power assist cart. The effectiveness of the ANSC method was verified via the two applications.

ACKNOWLEDGMENTS
The author would like to thank Dr. Leo Wiryanto and Dr. Roberd Saragih from Institut Teknologi Bandung for inviting the author to give a plenary talk at CIAM2010, Bandung, Indonesia. The author's travel to CIAM2010 was supported by the Daiko Foundation, Nagoya, Japan.

REFERENCES
[1]. Yamada, I and Nakagawa, M (1985). "Reduction of Residual Vibrations in Positioning Control Mechanism," Trans ASME, J Vib, Acoust, Stress, and Reliab in Design, Vol 107, No 1, pp 47-52.
[2]. Hara, S and Yoshida, K (1994). "Simultaneous Optimization of Positioning and Vibration Controls Using Time-Varying Criterion Function," J Robot and Mechatro, Vol 6, No 4, pp 278-284.
[3]. Hara, S (2000). "Reference Trajectory Generation Method for Servo Positioning Control of Flexible Structures," Fifth Int Conf Motion and Vib Contr, Sydney, NSW, Australia, Vol 1, pp 91-96.
[4]. Hara, S (2005). "Unified Design of Time-Varying Gain Type Access Control and Integral Type Servo Control by Means of Nonstationary Optimal Control Method," Microsyst Technolo, Vol 11, No 8-10, pp 676-687.
[5]. Hara, S (2002). "Nonstationary Optimal Positioning Controller Design Using GA Meta-Optimization," 2002 American Contr Conf, Anchorage, AK, USA, pp 3165-3167.
[6]. Hara, S (2004). "Efficient Design of Time-Varying Gain Type Controller and Positioning Control of Vibration Systems," IEEJ Trans Indust Applications, Vol 124, No 8, pp 843-844.
[7]. Hara, S (2005). "Efficient Design of Time-Varying Gain Type Controller and Positioning Control of Vibration Systems – Taking Account of Parameter Variations –," IEEJ Trans Indust Applications, Vol 125, No 9, pp 871-878.
[8]. Hara, S (2006). "Adaptive Nonstationary Control Using Solutions of Time-Varying Riccati Equations – Its Application to Positioning Control of Vibration Systems," 9th Int Conf Control, Automat, Robotics and Vision, Singapore, pp 403-408.
[9]. Hara, S, Yamada, Y, and Lee, S (2009). "Power-Assist Control Switching from Adaptive Nonstationary Servo Control to Force Sensorless Nonstationary Impedance Control," 2009 IEEE Int Conf Indust Technolo, Churchill, VIC, Australia, pp 289-294.
[10]. Yamada, Y, Konosu, H, Morizono, T, and Umetani, Y (1999). "Proposal of Skill-Assist: A System of Assisting Human Workers by Reflecting Their Skills in Positioning Tasks," 1999 IEEE Int Conf Systems, Man, and Cybernetics, Tokyo, Japan, Vol IV, pp 11-16.


Proceeding of Conf. on Industrial and Appl. Math., Bandung-Indonesia 2010


Some aspects of modelling pollution transport in groundwater aquifers

Robert McKibbin Centre for Mathematics in Industry, Institute of Information and Mathematical Sciences

Massey University at Albany, Auckland, New Zealand Email: [email protected]

ABSTRACT
Dissolved chemical species disperse as they are advected by a fluid flowing within a permeable matrix. Here, some mathematical models that may be used for quantifying the spread of a pollutant species or tracer in groundwater systems are developed, with a view to estimating passage times to, and concentrations at, fluid withdrawal points. The parameters in the PDEs that describe the fluid flow and pollutant flux may be constant within each layer, but can be different in each of the various layers. This allows aquifer systems with bedded structures to be modelled without having to resort to full-scale numerical simulations.

KEYWORDS
Porous media; pollutants; tracers; groundwater aquifers; mathematical modelling; dispersion.

I. INTRODUCTION
As dissolved chemical species are advected by a fluid flowing within a permeable matrix, they are dispersed by mixing as the fluid makes its way through complicated pathways in the porous material. This means that pollutants that find their way into groundwater aquifers not only move "down-stream", but also spread in all directions. In general, the rate of dispersion of a chemical species dissolved in groundwater depends on the porous structure and the fluid speed (see [1], [2], [3] for various discussions). In this paper, some mathematical models that may be used for quantifying the spread of a pollutant species in groundwater systems are developed; it is assumed that the fluid flow remains steady. Usually, groundwater systems have layered structures determined by different events in the geological processes that formed them. The sub-layers in such a system have different matrix properties; in turn, a typical sub-layer has a thickness and matrix properties that may vary with map coordinates (c.f. [4], where layer thicknesses and properties were both assumed constant).

In the more general case, the advection-dispersion equations that model the fluid and species transport then have coefficients that depend mainly on depth, but with a layer composition that changes slowly with horizontal distance. Here, the coefficients in the PDEs that describe the fluid flow and pollutant flux are generally assumed constant with depth in each layer, but can be different in each of the various layers. This allows aquifer systems with bedded structures to be modelled without having to resort to full numerical simulations. The layer thicknesses are assumed small compared to the lateral extent of the aquifer, and their interface slopes are also small. Within each layer, changes in concentration may occur by advection and dispersion, and also by transfer of the pollutant across the layer interfaces when the concentrations within adjoining layers are not equal.

A layered system may also be used as an approximate realization of an aquifer with properties that vary smoothly with depth, allowing semi-analytic solution of realistic physical problems where dispersion takes place in all directions (see [4], for example). Some illustrative results will be presented. For clarity, the examples shown are ones where the species injection is via horizontal line sources, thereby enabling 2-D solutions to be depicted. The signature of the species at withdrawal points is shown to depend strongly on the layering, as well as on the injection and sampling depths. The models incorporate most of the important structure of groundwater aquifers and the transport of dissolved species therein, without having to resort to large-scale simulation, thereby allowing efficient examination of the effects of parameter variation.


II. FLUID FLOW EQUATIONS

Uniform aquifer

The equations of conservation of mass and momentum for the isothermal flow of an incompressible fluid (water) in a rigid, uniform, permeable (rock) matrix are [2]:

$$ \nabla\cdot\vec u = 0, \qquad \vec u = -\frac{K}{\mu}\left(\nabla p - \rho\vec g\right) = -\frac{K}{\mu}\nabla P \tag{1} $$

Here, $\vec u = (u,v,w)$ is the specific discharge or Darcy velocity in a Cartesian coordinate system $(x,y,z)$ where the gravitational acceleration is $\vec g = (0,0,-g)$. The fluid density $\rho$ [kg m⁻³], fluid dynamic viscosity $\mu$ [kg m⁻¹ s⁻¹] and the matrix permeability $K$ [m²] are all constants; $p$ [Pa] is the fluid pressure, $P(x,y,z) = p(x,y,z) - p_0 + \rho g z$ is the dynamic pressure [fluid pressure adjusted for hydrostatic pressure (fluid weight)] and $p_0$ is a datum pressure. The bottom and top boundaries at $z = z_b(x,y)$, $z = z_t(x,y)$ are impervious and smoothly-varying, as is the aquifer thickness, which is given by $h(x,y) = z_t(x,y) - z_b(x,y)$. The two-dimensional (horizontal) volume flux vector is $\vec q(x,y) = (q_x(x,y), q_y(x,y))$, whose components are defined by:

$$ q_x(x,y) = \int_{z_b(x,y)}^{z_t(x,y)} u(x,y,z)\,dz = h(x,y)\,\bar u(x,y), \qquad q_y(x,y) = \int_{z_b(x,y)}^{z_t(x,y)} v(x,y,z)\,dz = h(x,y)\,\bar v(x,y) \tag{2} $$

where $\bar u(x,y)$ and $\bar v(x,y)$ are the components of the mean horizontal Darcy velocity. Then,

$$ \frac{\partial q_x}{\partial x}+\frac{\partial q_y}{\partial y} = \int_{z_b(x,y)}^{z_t(x,y)}\!\left(\frac{\partial u}{\partial x}+\frac{\partial v}{\partial y}\right)dz + u_t\frac{\partial z_t}{\partial x} - u_b\frac{\partial z_b}{\partial x} + v_t\frac{\partial z_t}{\partial y} - v_b\frac{\partial z_b}{\partial y} = \vec u_b\cdot\nabla\Phi_b - \vec u_t\cdot\nabla\Phi_t \tag{3} $$

after use of (1). Here, $\vec u_b = \vec u(x,y,z_b(x,y))$, $\vec u_t = \vec u(x,y,z_t(x,y))$ are the fluid velocity vectors at the lower and upper boundaries, where the latter are defined by, respectively:

$$ \Phi_b(x,y,z) = z - z_b(x,y) = 0, \qquad \Phi_t(x,y,z) = z - z_t(x,y) = 0 $$

The fluid velocities at these impermeable boundaries are parallel to the surfaces, and (3) becomes, exactly:

$$ \frac{\partial q_x}{\partial x}+\frac{\partial q_y}{\partial y} = \frac{\partial}{\partial x}(h\bar u)+\frac{\partial}{\partial y}(h\bar v) = 0 \tag{4} $$

The volume flux components may be written in terms of a (stream-)function $\psi(x,y)$:

$$ h\bar u = \frac{\partial\psi}{\partial y}, \qquad h\bar v = -\frac{\partial\psi}{\partial x} \tag{5} $$

which satisfy (4) exactly. The mean dynamic pressure through the layer thickness is:

$$ \bar P(x,y) = \frac{1}{h(x,y)}\int_{z_b(x,y)}^{z_t(x,y)} P(x,y,z)\,dz \tag{6} $$

Then:

$$ \frac{\partial}{\partial x}(h\bar P) = \int_{z_b(x,y)}^{z_t(x,y)}\frac{\partial P}{\partial x}\,dz + P_t\frac{\partial z_t}{\partial x} - P_b\frac{\partial z_b}{\partial x} = -\frac{\mu}{K}\int_{z_b(x,y)}^{z_t(x,y)} u\,dz + P_t\frac{\partial z_t}{\partial x} - P_b\frac{\partial z_b}{\partial x} = -\frac{\mu}{K}h\bar u + P_t\frac{\partial z_t}{\partial x} - P_b\frac{\partial z_b}{\partial x} \tag{7} $$

after using (1), and where $P_b = P(x,y,z_b(x,y))$, $P_t = P(x,y,z_t(x,y))$ are the dynamic pressures on the bottom and top boundaries. A similar equation is found for the y-derivative. Then:


$$ h\frac{\partial\bar P}{\partial x} = -\frac{\mu}{K}h\bar u + (P_t-\bar P)\frac{\partial z_t}{\partial x} - (P_b-\bar P)\frac{\partial z_b}{\partial x} $$
$$ h\frac{\partial\bar P}{\partial y} = -\frac{\mu}{K}h\bar v + (P_t-\bar P)\frac{\partial z_t}{\partial y} - (P_b-\bar P)\frac{\partial z_b}{\partial y} \tag{8} $$

Integration of the z-component of Darcy's law in (1) gives:

$$ q_z = h\bar w = \int_{z_b(x,y)}^{z_t(x,y)} w(x,y,z)\,dz = -\frac{K}{\mu}\int_{z_b(x,y)}^{z_t(x,y)}\frac{\partial P}{\partial z}\,dz = \frac{K}{\mu}\,(P_b-P_t) \tag{9} $$

It is proposed that this model would apply to systems where the horizontal gradients of the bottom and top confining surfaces are small. Consider the mass conservation equation in (1), rearranged:

$$ \frac{\partial w}{\partial z} = -\left(\frac{\partial u}{\partial x}+\frac{\partial v}{\partial y}\right) \tag{10} $$

If $u$ and $v$ (and hence $\bar u$ and $\bar v$) are of order $U$, the aquifer thickness is of order $H$ and the horizontal length over which significant variations occur is $L$, then (10) shows that $w$ is $O(\varepsilon U)$, where $\varepsilon = H/L$ is small. Equation (9) indicates that vertical variations in $P$ are of order $(\mu/K)Hw \sim (\mu/K)\varepsilon HU$. Therefore the last two terms in each equation of (8) are of order $(\mu/K)Hw(H/L) \sim (\mu/K)\varepsilon^2 HU$, while the first term on the RHS of each is of order $(\mu/K)HU$. For small $\varepsilon$, the last two terms on the RHS of each equation in (8) are $O(\varepsilon^2)$ smaller than the first and may be neglected. Then, to good accuracy, we may write:

$$ \frac{\partial\bar P}{\partial x} = -\frac{\mu}{K}\bar u, \qquad \frac{\partial\bar P}{\partial y} = -\frac{\mu}{K}\bar v \tag{11} $$

So, the mean velocity components may be written in terms of derivatives of $\psi$ and $\bar P$ as:

$$ \bar u = \frac{1}{h}\frac{\partial\psi}{\partial y} = -\frac{K}{\mu}\frac{\partial\bar P}{\partial x}, \qquad \bar v = -\frac{1}{h}\frac{\partial\psi}{\partial x} = -\frac{K}{\mu}\frac{\partial\bar P}{\partial y} \tag{12} $$

with:

$$ \frac{\partial}{\partial x}\!\left(\frac{1}{h}\frac{\partial\psi}{\partial x}\right)+\frac{\partial}{\partial y}\!\left(\frac{1}{h}\frac{\partial\psi}{\partial y}\right)=0, \qquad \frac{\partial}{\partial x}\!\left(h\frac{\partial\bar P}{\partial x}\right)+\frac{\partial}{\partial y}\!\left(h\frac{\partial\bar P}{\partial y}\right)=0 \tag{13} $$
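The pressure equation in (13) is readily solved numerically; the following is a minimal sketch with an assumed thickness profile and boundary conditions (fixed pressures at two ends, no-flux sides), not the author's solver.

```python
import numpy as np

# Sketch: solve the second equation of (13), div(h grad Pbar) = 0, on a
# rectangle with fixed mean dynamic pressures at the two x-ends and no-flux
# side boundaries, using a plain Jacobi-style iteration on a five-point
# stencil (equal grid spacing assumed in x and y). h is illustrative.
nx, ny = 41, 21
x = np.linspace(0.0, 2.0, nx)
h = (1.0 + 0.5 * np.sin(np.pi * x))[:, None] * np.ones((1, ny))

# face-averaged thicknesses for the stencil (west, east, south, north)
hw = 0.5 * (h[:-2, 1:-1] + h[1:-1, 1:-1])
he = 0.5 * (h[2:, 1:-1] + h[1:-1, 1:-1])
hs = 0.5 * (h[1:-1, :-2] + h[1:-1, 1:-1])
hn = 0.5 * (h[1:-1, 2:] + h[1:-1, 1:-1])

P = np.zeros((nx, ny))
P[0, :], P[-1, :] = 1.0, 0.0                 # fixed end pressures

for _ in range(3000):
    P[1:-1, 1:-1] = (hw * P[:-2, 1:-1] + he * P[2:, 1:-1]
                     + hs * P[1:-1, :-2] + hn * P[1:-1, 2:]) / (hw + he + hs + hn)
    P[:, 0], P[:, -1] = P[:, 1], P[:, -2]    # no-flux side boundaries
```

Because each update is a weighted average of neighbouring values, the computed pressure obeys a discrete maximum principle and stays between the two imposed end values.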

The streamlines and isobars are orthogonal:

$$ \nabla\psi\cdot\nabla\bar P = 0 \tag{14} $$

Also note that for small interface slopes (i.e. small $\varepsilon$), the dynamic pressure has little variation with depth, and therefore, to good approximation,

$$ P(x,y,z) \approx \bar P(x,y) \tag{15} $$

(Note that, for a uniformly-thick aquifer ($h(x,y) = h$, a constant), both $\psi$ and $\bar P$ are harmonic functions, as used in the classical theory.) The mathematical problem here reduces to solving one or both of (13), subject to suitable boundary conditions.

Aquifer with uneven sub-layers

In the more general case, water is supposed to be flowing steadily through an aquifer system that varies only slightly from the horizontal, and which is composed of $N$ layers of permeable materials that may differ in thickness and matrix properties, all of which may also vary with horizontal position $(x,y)$. The system of total thickness $h(x,y)$ is confined between impervious surfaces at $z = z_0(x,y)$ and $z = z_N(x,y) = z_0(x,y) + h(x,y)$. The aquifer sub-layer interface levels are at $z = z_i(x,y)$, $i = 1,\dots,N-1$, with the thickness of Layer $i$ given by:

$$ h_i(x,y) = z_i(x,y) - z_{i-1}(x,y), \quad i = 1,\dots,N $$

The system thickness is $h(x,y) = \sum_{i=1}^{N} h_i(x,y)$.

The normal fluid volume flow at each interface is continuous. As discussed above for the uniform aquifer, the flow within Layer $i$ is supposed to be


approximately uniform through the layer's thickness; the mean specific volume flux (Darcy speed) in that layer is denoted $\vec u_i(x,y)$ [m s⁻¹]. The total volume flow per unit width is $\vec q = (q_x(x,y), q_y(x,y))$ [m² s⁻¹], given by the sum of the layer fluxes $\vec q_i(x,y)$:

$$ \vec q(x,y) = \sum_{i=1}^{N}\vec q_i(x,y) = \sum_{i=1}^{N} h_i(x,y)\,\vec u_i(x,y). \tag{16} $$

The same approximations as for the single-layer case are made. From (1), the specific volume flux in each layer depends on the horizontal gradient of the mean dynamic pressure $\bar P_i(x,y)$ in that layer; these pressures and their gradients are, with error $O(\varepsilon)$ and $O(\varepsilon^2)$ respectively, the same in all layers. With $\bar P_i(x,y) = \bar P(x,y)$ for $i = 1,\dots,N$:

$$ (\bar u_i,\bar v_i) = -\frac{K_i(x,y)}{\mu}\left(\frac{\partial\bar P}{\partial x},\frac{\partial\bar P}{\partial y}\right) \tag{17} $$

where $K_i(x,y)$, $i = 1,\dots,N$ [m²] are the layer permeabilities. Note that this means that the flow direction in every layer at any planform point $(x,y)$ is the same. Substitution of (17) into (16) gives:

$$ \vec q(x,y) = (q_x,q_y) = \sum_{i=1}^{N}(q_{xi},q_{yi}) = -\frac{1}{\mu}\left[\sum_{i=1}^{N} h_i(x,y)\,K_i(x,y)\right]\left(\frac{\partial\bar P}{\partial x},\frac{\partial\bar P}{\partial y}\right) \tag{18} $$

The layer specific discharges are then:

$$ (\bar u_i,\bar v_i) = \frac{K_i(x,y)}{\sum_{k=1}^{N} K_k(x,y)\,h_k(x,y)}\,(q_x,q_y) \tag{19} $$

The layer fluxes are separately given by:

$$ \vec q_i(x,y) = \vec u_i(x,y)\,h_i(x,y) = \frac{K_i(x,y)\,h_i(x,y)}{\sum_{k=1}^{N} K_k(x,y)\,h_k(x,y)}\,\vec q(x,y) \tag{20} $$

As in (4), $\partial q_x/\partial x + \partial q_y/\partial y = 0$.
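The flux split in (19)–(20), where each layer carries a share of the total discharge in proportion to its transmissivity factor $K_i h_i$, is straightforward to evaluate; the layer values in this sketch are illustrative, not taken from the paper.

```python
import numpy as np

# Sketch of the layer flux split (19)-(20): at a planform point each layer
# carries the fraction K_i h_i / sum_k(K_k h_k) of the total discharge.
# All numerical values below are illustrative assumptions.
K = np.array([1e-12, 1e-14, 5e-12])    # layer permeabilities K_i [m^2]
h = np.array([2.0, 1.0, 3.0])          # layer thicknesses h_i [m]
qx = 1e-6                              # total volume flow per unit width [m^2/s]

T = (K * h).sum()                      # sum_k K_k h_k
u_layers = K / T * qx                  # layer Darcy speeds, eq (19)
q_layers = K * h / T * qx              # layer fluxes, eq (20)
```

Note that the low-permeability middle layer carries almost none of the flow, which is the mechanism behind the clay-lens behaviour discussed later in the paper.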

Since the volumetric discharges within the separate layers change with horizontal distance, interlayer flows must occur to maintain the total fluid volume. Denoting the flow per unit interface area from Layer $i$ into Layer $i+1$ by $w_i(x,y)$, the required mass balance for Layer $i$ gives, to current accuracy:

$$ w_i(x,y) - w_{i-1}(x,y) = -\left(\frac{\partial q_{xi}}{\partial x}+\frac{\partial q_{yi}}{\partial y}\right) \tag{21} $$

where $w_0(x,y) = w_N(x,y) = 0$ (the top and bottom

boundaries are impermeable).

III. POLLUTANT FLOW EQUATIONS

Uniform aquifer

The model is of a fluid containing low concentrations of a soluble pollutant flowing steadily along a near-horizontal permeable layer that lies between impermeable upper and lower boundaries and has a slowly-varying thickness, as described in Section II above. In addition to the parameters previously defined, the permeable matrix has porosity $\phi(x,y)$ [–]. In a small x-y planform region of area $\Delta x\,\Delta y$, over a small time interval $\Delta t$, the change in the vertically-averaged mass of pollutant per unit volume of fluid, $\bar c(x,y,t)$ [kg m⁻³], is due to advection, dispersion with coefficient $D(x,y)$ [m² s⁻¹] (which may depend on the local fluid speed), and a distributed pollutant source with mass concentration per unit fluid volume $f(x,y,t)$ [kg s⁻¹ m⁻³], and is described by:

$$ \phi h\,\Delta x\,\Delta y\,[\bar c(x,y,t+\Delta t)-\bar c(x,y,t)] = -\Delta y\,\Delta_x[q_x\bar c]\,\Delta t - \Delta x\,\Delta_y[q_y\bar c]\,\Delta t + \Delta y\,\Delta_x\!\left[\phi hD\frac{\partial\bar c}{\partial x}\right]\Delta t + \Delta x\,\Delta_y\!\left[\phi hD\frac{\partial\bar c}{\partial y}\right]\Delta t + \phi h\,\Delta x\,\Delta y\,f(x,y,t)\,\Delta t + O(\Delta^2) \tag{22} $$

where $\Delta_x[\,\cdot\,]$ and $\Delta_y[\,\cdot\,]$ denote the net differences of the bracketed fluxes across the region in the x- and y-directions.


This is a vertically-averaged form of the balance of changes in pollutant mass in a small REV against advective and dispersive fluxes across the boundaries. Divide (22) by $\Delta x\,\Delta y\,\Delta t$ and take the limit as $\Delta x,\Delta y,\Delta t \to 0$:

$$ \phi h\frac{\partial\bar c}{\partial t}+\frac{\partial}{\partial x}(q_x\bar c)+\frac{\partial}{\partial y}(q_y\bar c) = \frac{\partial}{\partial x}\!\left(\phi hD\frac{\partial\bar c}{\partial x}\right)+\frac{\partial}{\partial y}\!\left(\phi hD\frac{\partial\bar c}{\partial y}\right)+\phi h\,f(x,y,t) $$

Expand, divide by $\phi(x,y)\,h(x,y)$, use (2) and (4) and rearrange, to give:

$$ \frac{\partial\bar c}{\partial t}+\frac{q_x-\partial(\phi hD)/\partial x}{\phi h}\,\frac{\partial\bar c}{\partial x}+\frac{q_y-\partial(\phi hD)/\partial y}{\phi h}\,\frac{\partial\bar c}{\partial y} = D(x,y)\!\left(\frac{\partial^2\bar c}{\partial x^2}+\frac{\partial^2\bar c}{\partial y^2}\right)+f(x,y,t) \tag{23} $$

The initial condition is:

$$ \bar c(x,y,0^-) = 0, \quad \forall x,y \tag{24} $$

Example

The general equation (23) for the vertically-averaged species concentration may be simplified in certain cases. Consider, for example, a homogeneous aquifer whose thickness varies in only one direction, which is also the direction of the steady fluid flow; this is the 1-D flow case. Equation (2) gives the total fluid flow per unit width:

$$ q_x = \int_{z_b(x)}^{z_t(x)} u(x,z)\,dz = h(x)\,\bar u(x) \tag{25} $$

while (4) asserts that $q_x$ is a constant. From (11), the gradient of the mean dynamic pressure $\bar P(x)$ is given by

$$ \frac{d\bar P}{dx} = -\frac{\mu}{K}\,\bar u = -\frac{\mu}{K}\,\frac{q_x}{h(x)}. $$

Suppose a mass of pollutant per unit area of the aquifer, $Q$ [kg m⁻²], is released as a plane source at a certain time ($t = t_0$) and place ($x = x_0$) into the aquifer. From (23), the equation for the species concentration $\bar c(x,t)$ resulting from such a source, represented by $f(x,t) = Q\,\delta(x-x_0)\,\delta(t-t_0)$, is:

$$ \frac{\partial\bar c}{\partial t}+\frac{q_x - d[\phi\,h(x)D(x)]/dx}{\phi\,h(x)}\,\frac{\partial\bar c}{\partial x} = D(x)\,\frac{\partial^2\bar c}{\partial x^2}+Q\,\delta(x-x_0)\,\delta(t-t_0) \tag{26} $$

with $\bar c(x,0^-) = 0$, $\forall x$. The solution of (26) clearly depends on the assumed form of the dispersion coefficient $D(x)$. For example, one possibility is that the dispersion is proportional to the mean interstitial flow speed $U = \bar u/\phi$:

$$ D(x) = \alpha\,U(x) \tag{27} $$

where $\alpha$ [m] is a dispersion length (dispersivity) that depends on the matrix structure (see [1], [2], [3] for discussions). This length is a constant for a uniform aquifer. Then, since $\phi\,h(x)D(x) = \alpha\,\phi\,h(x)U(x) = \alpha\,q_x$ is constant, (26) becomes:

$$ \frac{\partial\bar c}{\partial t}+U(x)\,\frac{\partial\bar c}{\partial x} = \alpha\,U(x)\,\frac{\partial^2\bar c}{\partial x^2}+Q\,\delta(x-x_0)\,\delta(t-t_0) \tag{28} $$

Attempts to find an analytic solution for this equation have been unsuccessful so far; efforts are continuing, so standard numerical methods were used to solve (28). Space allows only a couple of illustrative examples to be shown here. The thickness profile is selected to be periodic, of the form $h(x) = 1 + a\sin(\lambda x)$. In comparison with aquifers of varying thickness, the case of uniform thickness ($a = 0$) may be solved analytically. The mean interstitial flow speed $U$ is then constant and the equation to be solved is:


$$ \frac{\partial\bar c}{\partial t}+U\,\frac{\partial\bar c}{\partial x} = \alpha\,U\,\frac{\partial^2\bar c}{\partial x^2}+Q\,\delta(x-x_0)\,\delta(t-t_0) \tag{29} $$

with the well-known analytic solution:

$$ \bar c(x,t) = \frac{Q}{\sqrt{4\pi\alpha U t}}\;e^{-\frac{(x-x_0-Ut)^2}{4\alpha U t}} \tag{30} $$
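The Gaussian solution (30) is simple to evaluate; the following sketch uses the parameter values of Table 1 as read here (Q = 1, α = 0.2, q_x = 0.25, φ = 0.1, x₀ = t₀ = 0), which are taken as assumptions.

```python
import numpy as np

# Evaluation of the analytic solution (30) for the uniform layer (a = 0),
# with Table 1 values as read here: Q = 1, alpha = 0.2, q_x = 0.25,
# phi = 0.1, x0 = t0 = 0 (these pairings are assumptions).
Q, alpha, qx, phi, x0 = 1.0, 0.2, 0.25, 0.1, 0.0
U = qx / (phi * 1.0)                    # mean interstitial speed for h = 1

def c_bar(x, t):
    """Vertically-averaged concentration of (30) at time t > t0 = 0."""
    s = 4.0 * alpha * U * t
    return Q / np.sqrt(np.pi * s) * np.exp(-(x - x0 - U * t) ** 2 / s)

x = np.linspace(0.0, 15.0, 301)
peak_t2 = x[np.argmax(c_bar(x, 2.0))]   # maximum near x = U*t = 5
peak_t4 = x[np.argmax(c_bar(x, 4.0))]   # maximum near x = 10
```

The interstitial speed U = 2.5 and the peak positions at t = 2 and 4 agree with the values quoted in the discussion of Figures 1 and 2.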

The concentration profiles after two particular times (t = 2, 4) for a system with $a = 0$ and otherwise the parameters listed in Table 1 are shown in Figure 1. In Figure 2, the profiles after the same times are shown for a system with uneven thickness. (Note that the chosen times, as well as the parameter values listed in Table 1, are arbitrary.) Figure 2(a) shows the aquifer profile. The pollutant concentration in the aquifer at time t = 2 is shown in Fig. 2(b) and for t = 4 in Fig. 2(c).

Table 1. Parameter values used for the example of pollutant transport in a porous layer with uneven thickness and uniform matrix properties.

a = 0.8    λ = 2    φ = 0.1
q_x = 0.25    α = 0.2    Q = 1
t₀ = 0    x₀ = 0

Figure 1. Uniform layer. (a) Predicted pollutant concentration after t = 2. (b) Predicted pollutant concentration after t = 4. Other parameters are listed in Table 1.

Figure 2. (a) System shape, as described in the text; the source is marked at x = 0. (b) Predicted pollutant concentration after t = 2. (c) Predicted pollutant concentration after t = 4.

Other parameters are listed in Table 1. The average interstitial flow speed in both cases is about 2.5 m s⁻¹. After times of t = 2 and 4, the maximum concentrations are at about x = 5 and 10 respectively. The variations in Fig. 2(b), (c) are due to those of the fluid speed, and hence also of the dispersion rate. An instrument set at a particular position downstream would detect a wavy profile different from the smooth one of Fig. 1. Sets of field data are now being sought to find whether any have this characteristic, and whether the inverse problem could then be solved to infer aquifer thicknesses.

Aquifer with uneven sub-layers

While algebraically more complicated, the multi-layered case is treated in a similar way, but with allowance being made for advection and dispersion of the pollutant across the inter-layer boundaries. Within Layer $i$, where the vertically-averaged species concentration is $\bar c_i(x,y,t)$, the pollutant mass change may be due to: advection and longitudinal dispersion of the species along the layer; advective gains or losses from neighbouring layers according to the interlayer flow direction and strength; cross-interface dispersion; and sources. This results in Equation (31):



$$
\begin{aligned}
\phi_i h_i\frac{\partial\bar c_i}{\partial t}
&+\frac{\partial}{\partial x}\!\left(q_{xi}\,\bar c_i-\phi_i h_i D_{Li}\frac{\partial\bar c_i}{\partial x}\right)
+\frac{\partial}{\partial y}\!\left(q_{yi}\,\bar c_i-\phi_i h_i D_{Li}\frac{\partial\bar c_i}{\partial y}\right)\\
&=\frac{1+\operatorname{sgn}(w_{i-1})}{2}\,w_{i-1}\,\bar c_{i-1}
+\frac{1-\operatorname{sgn}(w_{i-1})}{2}\,w_{i-1}\,\bar c_{i}
-\frac{1+\operatorname{sgn}(w_{i})}{2}\,w_{i}\,\bar c_{i}
-\frac{1-\operatorname{sgn}(w_{i})}{2}\,w_{i}\,\bar c_{i+1}\\
&\quad+\lambda_{i-1}\,(\bar c_{i-1}-\bar c_i)+\lambda_i\,(\bar c_{i+1}-\bar c_i)
+\phi_i h_i\,f_i(x,y,t)
\end{aligned}\tag{31}
$$

where $\lambda_i(x,y)$ [m s⁻¹] is the interlayer transfer coefficient between Layers $i$ and $i+1$. These coefficients may be estimated by considering the dispersive species flux between the concentrations $\bar c_i(x,y,t)$, $\bar c_{i+1}(x,y,t)$ in the two neighbouring layers across the interface at $z = z_i(x,y)$, where a nominal concentration $\gamma_i(x,y,t)$ is assumed. The dispersive fluxes below and above the interface must match:

$$ \lambda_i\,(\bar c_i-\bar c_{i+1})\;;\quad -\phi_i D_{Ti}(x,y)\left.\frac{\partial c}{\partial z}\right|_{z=z_i^-}\;;\quad -\phi_{i+1} D_{T\,i+1}(x,y)\left.\frac{\partial c}{\partial z}\right|_{z=z_i^+} \tag{32} $$

where the dispersion coefficients transverse to the main fluid flow are denoted by $D_{Ti}(x,y)$, $i = 1,\dots,N$. Vertical derivatives of the concentration are approximated by difference formulae:

$$ -\phi_i D_{Ti}\left.\frac{\partial c}{\partial z}\right|_{z_i^-} \approx \phi_i D_{Ti}\,\frac{\bar c_i-\gamma_i}{h_i/2}\;;\qquad -\phi_{i+1} D_{T\,i+1}\left.\frac{\partial c}{\partial z}\right|_{z_i^+} \approx \phi_{i+1} D_{T\,i+1}\,\frac{\gamma_i-\bar c_{i+1}}{h_{i+1}/2} \tag{33} $$

After solution for $\gamma_i(x,y,t)$ and substitution into (32), the transfer coefficients are given by:

$$ \frac{1}{\lambda_i} = \frac{h_i/2}{\phi_i D_{Ti}}+\frac{h_{i+1}/2}{\phi_{i+1} D_{T\,i+1}}, \quad i = 1,\dots,N-1 \tag{34} $$
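The series-resistance combination in (34) can be evaluated layer by layer; the data in this sketch are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Evaluation of the transfer coefficients (34),
#   1/lambda_i = (h_i/2)/(phi_i D_Ti) + (h_{i+1}/2)/(phi_{i+1} D_T,i+1),
# with lambda_0 = lambda_N = 0 at the impermeable outer boundaries.
# All layer data below are illustrative assumptions.
h   = np.array([1.0, 0.5, 2.0])         # layer thicknesses [m]
phi = np.array([0.10, 0.05, 0.20])      # layer porosities [-]
DT  = np.array([1e-6, 1e-7, 2e-6])      # transverse dispersion coeffs [m^2/s]

res = 0.5 * h / (phi * DT)              # half-layer "resistances" [s]
lam = np.zeros(h.size + 1)              # lambda_0 .. lambda_N
lam[1:-1] = 1.0 / (res[:-1] + res[1:])  # series combination, eq (34)
```

As with electrical resistances in series, the interface coefficient is dominated by whichever half-layer transmits species least easily.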

with $\lambda_0(x,y) = \lambda_N(x,y) = 0$ (zero dispersive flux across the bottom and top boundaries). Note that the form of (34) is analogous to that for resistances in series.

In general, the set of simultaneous PDEs (31) cannot be solved analytically (c.f. [4] for the uniform sub-layer thickness case). The system may be solved numerically over an x-y domain ($[x_u, x_d]\times[y_u, y_d]$, for example), with layer thicknesses, permeabilities, porosities, longitudinal and transverse dispersion coefficients, and boundary conditions being specified. Solution of the fluid flow equations for the total fluid volume flux $\vec q(x,y)$ then allows calculation of the pressure gradient, the longitudinal layer fluid speeds and fluxes, the interlayer fluid flows, the interstitial speeds, and the interlayer dispersion coefficients. In the examples shown here, the equations (31) were solved using a standard forward difference in time with error $O(\Delta t)$, and central differences in space with error $O(\Delta x^2, \Delta x\Delta y, \Delta y^2)$ (suitable variations of the space formulae are applied at the ends of the x- and y-ranges to ensure the error order is the same). Spatial boundary conditions for the pollutant concentrations are applied only at the upstream end of the solution region, which is chosen to be far enough away to allow zero concentrations to be specified.

Example

Consider a multi-layered system where the layer thicknesses vary in the x-direction only, and where the fluid flows in that direction under the influence of a pressure $\bar P(x)$. For a source injected instantaneously as a plane strip parallel to the y-axis at $x = x_K$ in Layer $K$ at the initial time $t = 0$, into a flow where $q_x > 0$, $q_y = 0$, the following boundary (35) and initial (36) conditions are applied:

$$ \bar c_i(x_u,y,t) = \frac{\partial\bar c_i}{\partial x}(x_u,y,t) = 0, \qquad i = 1,\dots,N \tag{35} $$


$$ \bar c_K(x,y,0) = f_K\,\delta(x-x_K), \qquad x_u \le x \le x_d,\;\; x_u < x_K < x_d $$
$$ \bar c_i(x,y,0) = 0, \qquad x_u \le x \le x_d,\;\; i = 1,\dots,N,\; i \ne K \tag{36} $$
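The forward-time, central-space discretization described above can be illustrated on the simpler one-dimensional constant-coefficient equation (28); this is a sketch with assumed parameter values, not the author's original code.

```python
import numpy as np

# Forward-time / central-space (FTCS) sketch for the 1-D advection-dispersion
# equation (28) with constant U and D = alpha*U. Values are illustrative
# (U = 2.5, alpha = 0.2); the instantaneous plane source is modelled as a
# unit-mass grid pulse at x0 = 2 in the initial condition.
U, alpha = 2.5, 0.2
D = alpha * U
nx, x_max = 401, 20.0
dx = x_max / (nx - 1)
dt = 0.4 * min(dx / U, dx ** 2 / (2.0 * D))   # stay inside stability limits
x = np.linspace(0.0, x_max, nx)

c = np.zeros(nx)
c[int(round(2.0 / dx))] = 1.0 / dx            # unit mass released at x0 = 2

t = 0.0
while t < 2.0:
    c_new = c[1:-1] - U * dt * (c[2:] - c[:-2]) / (2.0 * dx) \
            + D * dt * (c[2:] - 2.0 * c[1:-1] + c[:-2]) / dx ** 2
    c[1:-1] = c_new
    c[0] = 0.0          # zero concentration specified upstream
    c[-1] = c[-2]       # simple outflow condition downstream
    t += dt

peak_x = x[c.argmax()]                        # advected to about x0 + U*t = 7
```

The full scheme of the paper applies the same update to each layer of (31), with the interlayer advection and transfer terms added to the right-hand side.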

Lack of space precludes a comprehensive set of examples; only one illustration is shown here. A confined aquifer with a non-uniform total thickness that varies (periodically, in this case) in one horizontal direction only, given by $h(x) = H[0.8 + 0.2\cos(5\pi x/7)]$, is supposed to be composed of 11 layers (of equal thickness in this case) – see Fig. 3(a). The layers are numbered from bottom to top.

Figure 3. (a) System shape, as described in the text, with a low-permeability lens shaded. (b – e): Predicted pollutant concentrations after a certain time, shown under assumptions of: (b, c) vertical dispersion; (d, e) negligible vertical dispersion. The release point is marked *.

In this example, the layer porosities, dispersion lengths and permeabilities are taken as uniformly equal, except for a region in Layers 5 and 6 that represents a clay lens with permeability 1/100 of that elsewhere; it is marked in Fig. 3(a). A pollutant is released near the top (in Layer 11) or near the bottom

(in Layer 2). One of two assumptions is made: either there is vertical dispersion [Figs 3(b), (c)] or not [Figs 3(d), (e)]. The effects of the lens and of the dispersion model choice are seen to be significant. As may be noted, after being borne by advection and longitudinal dispersion, the pollutant's highest concentration remains at the level in the aquifer where the release was made, but some has spread vertically as well as longitudinally. In Figs 3(b), (c) there is a residual amount that has dispersed into, and is slow to leave, the low-permeability region. Even neglecting vertical dispersion in the model, the effect of the low-permeability lens on the fluid movement, and hence on that of the vertically-advected pollutant, is evident in Figs 3(d), (e). Because there is little fluid flow within the lens, the water has to move around it, and advects the pollutant vertically in the layers near the ends of the lens. In both Figs 3(d), (e), this has moved the pollutant towards the middle of the aquifer, effectively providing dispersion by advective processes.

IV. SUMMARY & CONCLUSIONS
Some mathematical models that may be used for quantifying the spread of a pollutant species or tracer in geologically-layered groundwater systems have been developed, with a view to estimating passage times to, and concentrations at, fluid withdrawal points. Examples show that the concentration signature of the species depends strongly on the layering, as well as on the injection and sampling depths. Use of geometric stratum thickness data enables solutions to be found with modest computational resources.

REFERENCES
[1]. J. Bear (1972). "Dynamics of fluids in porous media," New York: Dover.
[2]. J. Bear and A. Verruijt (1987). "Modeling groundwater flow and pollution," Holland: Reidel.
[3]. J. Bear and Y. Bachmat (1991). "Introduction to modeling of transport phenomena," The Netherlands: Kluwer.
[4]. R. McKibbin (2009). "Groundwater pollutant transport: transforming layered models to dynamical systems," An. St. Univ. Ovidius Constanta, Serie Matematica, Vol 17(3), pp 183-196.


Jets and Bubbles in Fluids – Fluid Flows with Unstable Interfaces

Larry K. Forbes(1) (1)School of Mathematics and Physics, University of Tasmania, Hobart, Tasmania, Australia

(Email: [email protected])

ABSTRACT
This presentation discusses two types of problems in interfacial fluid mechanics. The first involves a jet or plume of light fluid moving upwards through a surrounding heavy fluid. When viscous effects are ignored, a symmetric jet rises vertically and eventually settles down to a shape that is independent of time. However, an asymmetric jet evidently becomes unstable and never reaches a steady state. When viscous effects are introduced, this qualitative difference between the symmetric and non-symmetric jets disappears, as now neither of them can achieve a steady state, since viscosity causes the outer fluid to be entrained in both cases. The second problem is related to the first, and it involves a light fluid injected either cylindrically or spherically into a surrounding heavy fluid. Without viscous effects, the expanding interface develops points of infinite curvature within finite time, after which the solution fails to exist. However, when viscosity is re-introduced, the points of infinite curvature at the interface are replaced with points of high concentration of vorticity. This results in over-turning mushroom-shaped plumes moving radially outwards.

KEYWORDS
Fluid mechanics; interfaces; instability; inviscid fluid; spectral methods; viscous fluid.

I. INTRODUCTION
Free-surface fluid mechanics is an important subject, and has practical uses in understanding jets, ocean waves, flow around ships, and so on. A classical reference is the article by Wehausen & Laitone [9], which summarizes many of the linearized (small-amplitude) results. Large-amplitude free-surface flows are more difficult to model, since the shape of the fluid region is unknown in advance. Mathematical models of large-amplitude free-surface flows are therefore inherently non-linear, and can usually only be

solved numerically. This can represent a significant challenge both for mathematicians and for computers, and until relatively recently only steady-state inviscid free-surface flows have yielded to accurate numerical solution. A canonical problem of this type was introduced by Tuck & Vanden-Broeck [8]. It asked the deceptively simple question: what is the steady-state shape adopted by the free surface of a stationary inviscid fluid, into which a line source is immersed? The authors [8] showed that there are two different steady solution types, due to the non-linearity of the problem, and a considerable literature on this problem has since developed. More recently, it has become possible to obtain numerical solutions to some time-dependent large-amplitude free-surface flows. This then raises the question of whether the non-linear steady-state solutions obtained earlier are stable, in the sense that they would be observed as the large-time limits of an unsteady solution. Furthermore, it is now known that unsteady solutions can become singular within finite time, when viscosity is ignored, due to the formation of points of infinite curvature at the free surface. This was first demonstrated by Moore [6]. Forbes et al. [4] developed a spectral solution method for solving certain unsteady inviscid problems, and confirmed the development of surface curvature singularities. Later, Forbes [2] used that technique to solve for Rayleigh-Taylor instability in inviscid fluids, and observed surface curvature singularities close to the asymptotically predicted time. He then used a spectral solution to solve the corresponding problem in viscous fluids, and demonstrated that the inviscid Moore curvature singularity is replaced with a small region of high vorticity. This then causes the interface to roll up into mushroom-shaped plumes.

Proceeding of Conf. on Industrial and Appl. Math., Indonesia 2010

Here, these techniques are extended and applied to the study of jets and bubbles in fluids. Each of these situations is characterized by a light fluid displacing a heavy fluid, and so can be subject to Rayleigh-Taylor type instability. Spectral methods are introduced to solve these problems for both inviscid and viscous fluids. These flows are realizable in the laboratory, and may also have relevance to "black smokers" in the ocean [5] and to the modelling of star growth [7].

II. INVISCID FLUID JETS
Consider a stationary outer fluid 2, with an inner fluid 1 forming a rising jet, as illustrated in Figure 1.

Figure 1. Illustration of dimensionless inviscid jet flow, with interfaces x = X_L(y,t) and x = X_R(y,t), an upper wall at y = h, and the bottom vent occupying -1 < x < 1. The jet makes an angle \alpha with the bottom vent.

The problem is non-dimensionalized so that the bottom vent has width 2. In addition to the angle \alpha at which the jet enters from the bottom vent, there are three other dimensionless parameters that characterize the flow. These are the height h of an upper wall through which the jet passes, a density ratio D of fluid 2 to fluid 1 (so that D > 1) and a Froude number F that measures the inlet speed of the jet at the bottom. Since fluid 1 is irrotational and incompressible, a velocity potential \phi_1 can be defined in it, so that the velocity vector is grad \phi_1. Then the horizontal and vertical components of velocity can be calculated as u_1 = \partial\phi_1/\partial x and v_1 = \partial\phi_1/\partial y. The velocity potential satisfies Laplace's equation

\partial^2\phi_1/\partial x^2 + \partial^2\phi_1/\partial y^2 = 0,   X_L < x < X_R.   (1)

On the bottom, there is the requirement

v_1(x,0,t) = { \sin\alpha, |x| < 1 ; 0, |x| > 1 }   on y = 0   (2)

and at the top of the plume, the condition

u_1(x,h,t) = \cos\alpha   (3)

is assumed. On the left-hand interface x = X_L(y,t), there is a kinematic condition

u_1 = \partial X_L/\partial t + v_1 \partial X_L/\partial y   (4)

which states that fluid 1 is not free to cross this interface, and a dynamic condition

\partial\phi_1/\partial t + (1/2)( u_1^2 + v_1^2 ) + (D - 1) y / F^2 = 1/2   (5)

that expresses the continuity of pressure at this interface. The identical conditions to (4) and (5) also apply on the right-hand interface x = X_R(y,t) in Figure 1.

A spectral solution method has been developed to solve the non-linear time-dependent problem given by equations (1)–(5), based on the technique introduced by Forbes et al. [4]. The velocity potential is expressed as

\phi_1(x,y,t) = x \cos\alpha + y \sin\alpha + A_0(t) + \sum_{n=1}^{N} [ A_n(t) \cosh( k_n x ) + B_n(t) \sinh( k_n x ) ] \cos( k_n y )   (6)

and the two interfaces are written

X_L(y,t) = -1 + y \cot\alpha + \sum_{n=1}^{N} C_n(t) \sin( k_n y )
X_R(y,t) = 1 + y \cot\alpha + \sum_{n=1}^{N} D_n(t) \sin( k_n y ).   (7)

The constants in (6) and (7) are defined as

k_n = (2n - 1) \pi / (2h).   (8)

The velocity potential (6) satisfies Laplace's equation (1) and conditions (2), (3) identically. The kinematic condition (4) and Bernoulli equation (5), and their equivalents on the right-hand interface x = X_R(y,t), are Fourier decomposed to yield a system of 4N + 1 ordinary differential equations for the 4N + 1 coefficients A_n(t), B_n(t), C_n(t) and D_n(t), and these are integrated forward in time using a fourth-order Runge-Kutta scheme [1, p. 371].

It is possible to carry out an asymptotic solution of the governing equations (1)–(5), on the assumption that the jet is thin. The details are not given here, but the analysis is similar to classical shallow-water theory [9]. According to this approximation, there is a steady-state solution in which the vertical speed in the jet is

v_1(y) = [ \sin^2\alpha + 2 (D - 1) y / F^2 ]^{1/2}   (9)

and the two interfaces have the shapes

X_L(y) = y \cot\alpha - \sin\alpha / v_1(y)
X_R(y) = y \cot\alpha + \sin\alpha / v_1(y).   (10)

When the plume is vertical (\alpha = \pi/2) the two interfaces in (10) are anti-symmetrical.
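The thin-jet asymptotic solution (9)–(10) is cheap to evaluate directly. The sketch below (a minimal Python illustration, with the parameter values of Figure 2 taken as assumptions) computes the jet speed and the two interface positions; by mass conservation the local half-width of the jet is \sin\alpha / v_1(y).

```python
import numpy as np

# Thin-jet asymptotic solution, equations (9)-(10).
# Parameters as in Figure 2: D = 1.05, F = 1, vertical jet alpha = pi/2.
D, F, alpha = 1.05, 1.0, np.pi / 2

def v1(y):
    """Vertical speed in the jet, equation (9)."""
    return np.sqrt(np.sin(alpha)**2 + 2.0 * (D - 1.0) * y / F**2)

def interfaces(y):
    """Left and right interface shapes, equation (10)."""
    centre = y / np.tan(alpha)           # the jet centre-line y*cot(alpha)
    half_width = np.sin(alpha) / v1(y)   # mass flux 2*sin(alpha) is conserved
    return centre - half_width, centre + half_width

y = np.linspace(0.0, 15.0, 301)
XL, XR = interfaces(y)
# For D > 1 the jet accelerates under buoyancy, so it contracts with height.
```

Because v_1(y) increases with height for D > 1, the computed width XR − XL decreases monotonically, which is the contraction visible in Figure 2.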

Figure 2. Inviscid vertical plume (\alpha = \pi/2) at the two different times t = 8 and t = 48. The steady asymptotic solution (10) is also shown (dashed line). Here, h = 15, D = 1.05, F = 1.

Figure 2 shows numerical solutions at the two different times t = 8, 48. The solution was started with initial conditions X_L(y,0) = -1, X_R(y,0) = 1. As time progresses, a contraction wave moves up the jet at a speed given approximately by (9), and eventually the jet adopts a steady shape close to that predicted by the asymptotic solution (10).

The evolution of an asymmetric inviscid jet is shown in Figure 3, for the four different times t = 8, 12, 16, 20. This example highlights the fact that an asymmetric jet does not approach the steady-state solution (10) as time increases, unlike the symmetric solution in Figure 2. Instead, the jet becomes unstable and develops small oscillations; eventually, Rayleigh-Taylor type instabilities (in which heavy fluid lies above light fluid) cause the jet to break up, at which time the numerical solution technique fails.

Figure 3. Inviscid plume at angle \alpha = 0.99\pi/2 at the four different times t = 8, 12, 16, 20. Here, h = 15, D = 1.05, F = 2.

III. VISCOUS FLUID JETS
Viscosity is now taken into account approximately, using a Boussinesq approach [2]. Rather than treat the system as two separate fluids with a sharp interface, the system is instead modelled as a single fluid with a finite interface thickness. The fluid density is written as \rho(x,y,t) = D + \bar\rho(x,y,t), in which the perturbation \bar\rho is regarded as "small". The mass conservation equation then splits into an incompressible component

\partial u/\partial x + \partial v/\partial y = 0   (11)

and a weakly compressible transport equation

\partial\bar\rho/\partial t + u \partial\bar\rho/\partial x + v \partial\bar\rho/\partial y = \sigma ( \partial^2\bar\rho/\partial x^2 + \partial^2\bar\rho/\partial y^2 )   (12)

for \bar\rho. Equation (11) is satisfied identically using a streamfunction \psi, in terms of which the two velocity components may be written

u = \partial\psi/\partial y   and   v = -\partial\psi/\partial x.   (13)

In addition, the vorticity vector (which is the curl of the velocity vector, and represents twice the angular velocity) has just the one component

\zeta = \partial v/\partial x - \partial u/\partial y = -\nabla^2\psi.   (14)

The viscous Navier-Stokes equations can be re-written in terms of vorticity component (14), and under the Boussinesq approximation this gives rise to the vorticity equation


\partial\zeta/\partial t + u \partial\zeta/\partial x + v \partial\zeta/\partial y = \frac{1}{D F^2} \partial\bar\rho/\partial x + \frac{1}{D Re} ( \partial^2\zeta/\partial x^2 + \partial^2\zeta/\partial y^2 ).   (15)

The constants \sigma in equation (12) and Re in (15) are a diffusion coefficient and a Reynolds number, respectively. In this approximate viscous model, boundary layers are ignored on the top and bottom walls; in addition, artificial walls are placed at x = \pm\lambda to bound the computational region, and likewise have no boundary layers. Therefore

u = 0   on x = \pm\lambda   (16)

and

\bar\rho(x,0,t) = { 1 - D, |x| < 1 ; 0, |x| > 1 }   on y = 0.   (17)

For the general plume at an angle \alpha, the solution for the density perturbation and streamfunction is expressed in the form

\bar\rho(x,y,t) = \rho_S(x,y) + \rho_U(x,y,t)
\psi(x,y,t) = \psi_S(x,y) + \psi_U(x,y,t),   (18)

in which the "steady" density is

\rho_S(x,y) = { 1 - D, |x - y\cot\alpha| < 1 ; 0, |x - y\cot\alpha| > 1 }   (19)

with a similar form for the streamfunction. It is then appropriate to express the "unsteady" components in equation (18) as

\rho_U(x,y,t) = \sum_{m=1}^{M} \sum_{n=1}^{N} B_{mn}(t) \cos( \mu_m (x + \lambda) ) \sin( k_n y )
\psi_U(x,y,t) = \sum_{m=1}^{M} \sum_{n=1}^{N} A_{mn}(t) \sin( \mu_m (x + \lambda) ) \sin( k_n y )   (20)

with k_n as defined in (8) and \mu_m = m\pi/(2\lambda). These expressions (20) are chosen to satisfy boundary conditions (16) and (17) identically. Equations (20) are substituted into the governing system (12) and (15), and after considerable algebra, a (large) system of 2MN ordinary differential equations is obtained for the Fourier coefficients A_{mn}(t) and B_{mn}(t). This system is

integrated forward in time, using the classical fourth-order Runge-Kutta scheme [1, p.371].
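The classical fourth-order Runge-Kutta step used throughout this paper for marching the coefficient systems has the generic form below. This is a sketch for an arbitrary coefficient vector; the function f standing for the right-hand side produced by the Fourier decomposition is a placeholder, not the paper's actual expressions.

```python
import numpy as np

def rk4_step(f, t, c, dt):
    """One classical fourth-order Runge-Kutta step [1, p. 371] for dc/dt = f(t, c),
    where c collects all spectral coefficients (e.g. A_mn and B_mn) in one vector."""
    k1 = f(t, c)
    k2 = f(t + 0.5 * dt, c + 0.5 * dt * k1)
    k3 = f(t + 0.5 * dt, c + 0.5 * dt * k2)
    k4 = f(t + dt, c + dt * k3)
    return c + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

def integrate(f, c0, t0, t1, steps):
    """March the coefficient vector from t0 to t1 in equal steps."""
    t, c = t0, np.asarray(c0, dtype=float)
    dt = (t1 - t0) / steps
    for _ in range(steps):
        c = rk4_step(f, t, c, dt)
        t += dt
    return c
```

On the linear test problem dc/dt = -c the scheme reproduces the exact decay e^{-t} with fourth-order accuracy, which is a quick sanity check before attaching the full nonlinear right-hand side.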

Figure 4. Streamlines for the viscous vertical plume (\alpha = \pi/2) at the three different times t = 6, 12, 18. Here, h = 15, D = 1.05, F = 1, and \lambda = 5.

A viscous plume is shown in Figure 4, for the same case as in Figure 2. Here, the streamfunction in (18) has been contoured to give streamlines for the flow, and is shown at three different times. Unlike the inviscid case, there is clearly no approach to a steady state flow, since now the outer fluid is entrained into the plume, as a result of viscosity.
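Streamline plots such as Figure 4 are obtained by summing the double Fourier series (20) on a grid and contouring the result. A minimal sketch is shown below; the coefficient values are random stand-ins for the integrated A_mn(t), and the parameter values h = 15, \lambda = 5 are taken from the figure.

```python
import numpy as np

# Evaluate the unsteady streamfunction psi_U of equation (20) on a grid.
# The coefficients A are illustrative stand-ins for the integrated A_mn(t).
h, lam, M, N = 15.0, 5.0, 8, 8
rng = np.random.default_rng(0)
A = 1e-3 * rng.standard_normal((M, N))

x = np.linspace(-lam, lam, 101)
y = np.linspace(0.0, h, 151)
X, Y = np.meshgrid(x, y)

psi_U = np.zeros_like(X)
for m in range(1, M + 1):
    mu_m = m * np.pi / (2.0 * lam)
    for n in range(1, N + 1):
        k_n = (2 * n - 1) * np.pi / (2.0 * h)    # equation (8)
        psi_U += A[m - 1, n - 1] * np.sin(mu_m * (X + lam)) * np.sin(k_n * Y)
# psi_U vanishes on the artificial walls x = +/- lam, consistent with (16),
# and on the bottom y = 0, so the boundary conditions hold term by term.
```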

Figure 5. Streamlines for the viscous plume at angle \alpha = 0.99\pi/2 at the three different times t = 8, 16, 24. Here, h = 15, D = 1.05, F = 2, and \lambda = 5.

Figure 5 shows a solution for a viscous plume, for the same case as illustrated in Figure 3. Again, there is no approach to a steady-state solution, since viscosity causes entrainment of the outer fluid; this is a feature in common with the symmetric solution of Figure 4. Small vortices can be seen on either side of the jet, and for the asymmetric case in Figure 5, they eventually


cause the jet to develop sideways oscillations as it rises.

IV. RADIAL OUTFLOW IN A CYLINDRICAL BUBBLE
These spectral solution techniques can be applied equally successfully to more complex situations than the cartesian planar geometry illustrated in Sections II and III. Following the work of Tuck & Vanden-Broeck [8], a problem is now discussed in this section which is intended to represent possibly the simplest case of a non-planar Rayleigh-Taylor flow. An inner light fluid is injected through a line source at the origin, into a surrounding heavier fluid. The interface between the two fluids is therefore subject to instabilities, and small disturbances at the initial time t = 0 can therefore be expected to grow as time progresses.

Consider a cylinder of light fluid and initial radius a lying along the z-axis of a coordinate system. A line source is present on the axis, and ejects the light fluid at some constant volume rate (per length along the z-axis). There is a heavier fluid surrounding this cylinder. Once again, the problem is now non-dimensionalized, using the initial cylinder radius as the unit of length and the source strength divided by that radius as the unit of speed.

Inviscid cylindrical outflow model. When viscosity is ignored in both the light inner fluid 1 and the heavy outer fluid 2, velocity potentials \phi_1 and \phi_2 can be defined in each

fluid, as before. It is appropriate to make use of cylindrical polar coordinates (r, \theta, z), and to consider the interface between the two fluids to have the form r = R(\theta, t). Then Laplace's equations in each fluid, equivalent to equation (1) above, become

\partial^2\phi_j/\partial r^2 + (1/r) \partial\phi_j/\partial r + (1/r^2) \partial^2\phi_j/\partial\theta^2 = 0,   j = 1, 2.   (21)

In each fluid, the radial and tangential components of fluid velocity are u_j = \partial\phi_j/\partial r and v_j = (1/r) \partial\phi_j/\partial\theta. Near the line source, the potential for the light inner fluid has the singular behaviour

\phi_1 \to \log r / (2\pi)   as r \to 0   (22)

and by conservation of mass

u_2 \to 0,   v_2 \to 0   as r \to \infty.   (23)

There are now two kinematic conditions at the interface, replacing equation (4), and these take the forms

u_j = \partial R/\partial t + ( v_j / R ) \partial R/\partial\theta,   j = 1, 2   on r = R(\theta, t).   (24)

Similarly, there is a dynamic condition that expresses the continuity of pressure at the interface, and it becomes

D \partial\phi_2/\partial t - \partial\phi_1/\partial t + (1/2) [ D ( u_2^2 + v_2^2 ) - ( u_1^2 + v_1^2 ) ] + (D - 1) R / F^2
   = (D - 1) / ( 8 \pi^2 ( 1 + t/\pi ) ) + (D - 1) ( 1 + t/\pi )^{1/2} / F^2   on r = R(\theta, t).   (25)

Here, D is the ratio of densities of the outer fluid to the inner fluid, and so D > 1. The other parameter F is a type of Froude number based on the strength of a constant acceleration of gravity directed radially inwards; in the laboratory situation this term would be absent, and in that case F \to \infty. The two velocity potentials can be written

\phi_1(r,\theta,t) = \log r / (2\pi) + \sum_{m=1}^{M} P_m(t) r^m \cos( m\theta )
\phi_2(r,\theta,t) = \log r / (2\pi) + \sum_{m=1}^{M} Q_m(t) r^{-m} \cos( m\theta )   (26)

and the interface is represented in the form

R(\theta,t) = H_0(t) + \sum_{m=1}^{M} H_m(t) \cos( m\theta ).   (27)

As in Section II, these expressions (26), (27) are substituted into the three interfacial conditions (24) and (25), and eventually yield a system of 3M + 1 ordinary differential equations for the coefficients P_m(t), Q_m(t) and H_m(t), and these are integrated forward in time using a fourth-order Runge-Kutta method, as before. The initial conditions are taken to be

H_K(0) = 0,   P_K(0) = -Q_K(0) = \epsilon   (28)

representing an initially circular interface with a small velocity disturbance at the K-th Fourier mode.
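The interface representation (27) and its Fourier decomposition can be checked numerically. The sketch below builds R(\theta) from prescribed coefficients and recovers them by quadrature on a uniform grid; the particular coefficient values are illustrative only.

```python
import numpy as np

# Interface representation, equation (27): R = H_0 + sum_m H_m cos(m*theta).
M = 8
theta = np.linspace(0.0, 2.0 * np.pi, 512, endpoint=False)
H0, H = 1.0, np.zeros(M)
H[1] = 0.03                      # small disturbance at mode m = 2 (K = 2)

R = H0 + sum(H[m - 1] * np.cos(m * theta) for m in range(1, M + 1))

# Recover the coefficients by the trapezoidal rule, which is spectrally
# accurate for periodic integrands on a uniform grid.
H0_rec = R.mean()
H_rec = np.array([2.0 * np.mean(R * np.cos(m * theta)) for m in range(1, M + 1)])
```

The same projection, applied to the interfacial conditions (24)–(25), is what produces the ordinary differential equations for the coefficients.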


Viscous cylindrical outflow model. Once again, viscous effects can be accounted for using a Boussinesq approximation, using equations similar to (12) and (15). As in Section III, a vorticity-streamfunction formulation is used, and now the streamfunction is represented in the form

\psi(r,\theta,t) = \theta / (2\pi) + \sum_{m=1}^{M} \sum_{n=1}^{N} A_{mn}(t) J_m( j_{mn} r / b ) \sin( m\theta )   (29)

replacing equation (20). The density perturbation is written

\bar\rho(r,\theta,t) = (D - 1) r^2 / b^2 + \sum_{n=1}^{N} C_n(t) \sin( n\pi r / b ) + \sum_{m=1}^{M} \sum_{n=1}^{N} B_{mn}(t) J_m( j_{mn} r / b ) \cos( m\theta ).   (30)

In these expressions (29), (30), the functions J_m are first-kind Bessel functions of order m, the constants j_{mn} are their zeros, and b is the radius of the outer computational boundary. These are substituted into the density transport equation and vorticity equation to derive ordinary differential equations for the Fourier coefficients, and these are integrated forward in time.
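The Bessel basis in (29)–(30) can be generated with standard routines. The sketch below uses scipy, with b = 5 for the outer computational radius (an assumption consistent with the figure axes), and verifies that each basis function vanishes at both r = 0 and r = b, so the expansions respect the computational disc.

```python
import numpy as np
from scipy.special import jv, jn_zeros

# Basis functions J_m(j_mn r / b) used in equations (29)-(30).
b, M, N = 5.0, 4, 6
r = np.linspace(0.0, b, 201)

basis = {}
for m in range(1, M + 1):
    zeros_m = jn_zeros(m, N)              # first N zeros j_mn of J_m
    for n, j_mn in enumerate(zeros_m, start=1):
        basis[(m, n)] = jv(m, j_mn * r / b)
# Each basis function vanishes at r = 0 (since m >= 1) and at r = b
# (since j_mn is a zero of J_m), by construction.
```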

Figure 6. The evolution of the inviscid interface as time increases. The solution is shown at the eleven different times t = 1, 2, ..., 11, for K = 2, with D = 1.05, F = 1, \epsilon = 0.03 and P_2(0) = -Q_2(0) = \epsilon. The scales on the axes are equal.

In Figure 6, the development of the cylindrical interface R(\theta, t) is shown for times t = 1, ..., 11, for a case F = 1 in which a constant inwardly-directed acceleration of gravity is present. The interface initially has a circular shape, but its velocity was initially perturbed at the second mode, K = 2, with the small amplitude \epsilon = 0.03. The solution fails to continue beyond about t = 12, and to understand why, it is necessary to consider the curvature.

Figure 7. Curvature at the interface, for the same case as shown in Figure 6, at the times t = 1, 2, ..., 11.

Figure 7 shows the interfacial curvature for each of the eleven solutions in Figure 6. Initially, the interface is a circle of radius 1, and so its curvature is 1, as expected. However, as the interface evolves, it develops regions of very high curvature at its poles. It appears that the curvature becomes infinite there, within the finite time t ≈ 12, and this then causes the failure of the solution beyond this time. This is consistent with ideas in [6], and a Moore curvature singularity is expected to form.
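The interfacial curvature plotted in Figure 7 can be computed from the polar representation r = R(\theta) by the standard formula \kappa = ( R^2 + 2 R'^2 - R R'' ) / ( R^2 + R'^2 )^{3/2}. A minimal sketch, using finite-difference derivatives and an illustrative mode-2 distortion, is:

```python
import numpy as np

def curvature(R, dtheta):
    """Curvature of a polar curve r = R(theta) sampled on a uniform grid,
    kappa = (R^2 + 2 R'^2 - R R'') / (R^2 + R'^2)^(3/2)."""
    Rp = np.gradient(R, dtheta, edge_order=2)
    Rpp = np.gradient(Rp, dtheta, edge_order=2)
    return (R**2 + 2.0 * Rp**2 - R * Rpp) / (R**2 + Rp**2) ** 1.5

theta = np.linspace(0.0, 2.0 * np.pi, 2001)
dtheta = theta[1] - theta[0]
kappa_circle = curvature(np.ones_like(theta), dtheta)
# A unit circle has curvature 1 everywhere; a mode-2 distortion (the shape of
# the K = 2 disturbance) concentrates curvature at its poles, as in Figure 7.
kappa_mode2 = curvature(1.0 + 0.2 * np.cos(2.0 * theta), dtheta)
```

The growing spread between the maximum and minimum of the curvature is the diagnostic used to detect the approach to the Moore singularity.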

Figure 8. Viscous solution for the same case as in Figure 6, at the time t = 12, shown as contours of density, with D = 1.05, F = 1, \epsilon = 0.03, b = 5, K = 2. The dashed line is the inviscid result at the same time. The scales on the axes are equal.

The corresponding viscous solution is shown in Figure 8, for the same case as in Figure 6, at the time t = 12. These are contours of the perturbed density in equation (30), and they show clearly the location of the effective interface. The inviscid solution at this same


time is overlaid on this diagram, and is shown with a dashed line. It agrees well with the viscous result, but it represents the last time at which it was possible to compute an inviscid solution, before the curvature singularity caused the failure of that solution.

Figure 9. Viscous solution for the same case as in Figure 6, at the time t = 30.

A careful study of the vorticity shows that the regions in which the inviscid solution developed curvature singularities are now replaced by regions of high vorticity in the viscous case, at the same locations and times. It seems clear that high curvature in the inviscid model is the precise trigger for vorticity to concentrate in the viscous case. This, in turn, causes the interface to roll up and develop over-hanging plumes. This is illustrated at the much later time t = 30, in Figure 9. The small velocity disturbance at the K = 2 mode in the initial conditions (28) has developed into a large bi-polar plume with two mushroom-shaped outflows. These bi-polar shapes are believed to be of importance in the formation of stars in astrophysics [7]. Further details of these types of flows are available in a forthcoming paper by Forbes [3]. It is, in fact, possible to obtain different non-linear outflow modes for different choices of the integer K in the initial conditions (28), and different examples of those flows are available from that paper [3].

V. AXI-SYMMETRIC OUTFLOW IN A SPHERICAL BUBBLE
The techniques of the previous Section IV can readily be extended to study outflow from a point source in three-dimensional geometry. In the interests of space, only a very brief overview can be given here, but the techniques are similar in principle to those outlined in the previous sections. Spherical polar coordinates are now used, and to simplify the problem as much as possible, only axi-symmetric modes are considered. The flows are therefore rotationally symmetric about the z-axis, so that only the radial coordinate r and polar angle \theta from the z-axis are involved. In the inviscid case, the two velocity potentials in the inner fluid 1 and outer fluid 2 are now represented in the forms

\phi_1(r,\theta,t) = -1/(4\pi r) + \sum_{n=1}^{N} B_n(t) r^n P_n( \cos\theta )
\phi_2(r,\theta,t) = -1/(4\pi r) + \sum_{n=1}^{N} C_n(t) r^{-(n+1)} P_n( \cos\theta )   (31)

and a similar expansion is used for the interface r = R(\theta, t). In these expressions, the functions P_n are Legendre polynomials. Once again, the

velocity potentials (31) and the representation of the interface are substituted into the two kinematic conditions and the dynamic condition at the moving interface, to derive a system of ordinary differential equations for the Fourier–Legendre coefficients B_n(t), and so on. These are then integrated forward in time using a Runge-Kutta method.

The viscous problem is again modelled using the Boussinesq approximation, as in Section III. In this approach, the density of the outer fluid is assumed to be only slightly larger than that of the inner fluid, and the unstable density ratio D > 1 is of interest here. Once again, a density transport equation and vorticity equation may be derived, and these are similar to equations (12), (15).

In the following Figures 10 – 12, an axi-symmetric viscous solution is shown, at three different times. In each solution, the same density contour is illustrated, and has been chosen to correspond most closely to the location of the centre of the interface (which is of finite thickness in the viscous case). There is found to be close agreement between this density contour and the interface predicted by the inviscid equations (31) for early times; however, as in Section IV, the inviscid solution


fails at finite time, due to the formation of Moore-type curvature singularities at the interface.
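The Legendre expansions (31) are straightforward to evaluate with standard routines. A minimal sketch, using numpy's Legendre module and illustrative coefficient values B_n in place of the integrated ones, is:

```python
import numpy as np
from numpy.polynomial import legendre

# Evaluate the inner potential phi_1 of equation (31) on an (r, theta) grid.
# The coefficients B are illustrative stand-ins for the integrated B_n(t).
N = 6
B = 1e-3 * np.ones(N)
r = np.linspace(0.2, 2.0, 80)            # avoid the source singularity at r = 0
theta = np.linspace(0.0, np.pi, 81)
Rg, Tg = np.meshgrid(r, theta)

mu = np.cos(Tg)
phi1 = -1.0 / (4.0 * np.pi * Rg)         # point-source behaviour
for n in range(1, N + 1):
    coeffs = np.zeros(n + 1)
    coeffs[n] = 1.0                      # selects the single polynomial P_n
    phi1 += B[n - 1] * Rg**n * legendre.legval(mu, coeffs)
```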

Figure 10. Viscous axi-symmetric expansion from a point source at time t = 8, for a solution with D = 1.05, F = 0.5, \epsilon = 0.03 and the bi-polar mode K = 2.

Figure 11. Same case as Figure 10, for time t = 16.

Figure 12. Same case as Figure 10, for time t = 24.

The bubble surrounding the point source in Figure 10 is almost spherical, but as time progresses, the small bi-polar disturbance to the initial velocity grows unstably. Regions of high vorticity are formed at precisely the times and

locations at which the inviscid solution forms a curvature singularity, and eventually the two plumes form over-hanging regions, as in Figure 12.

VI. CONCLUSIONS
Spectral methods have been used here to obtain highly accurate solutions for jets and for problems involving injected bubbles in fluids. The inviscid theory may predict the formation of Moore-type curvature singularities on the interface within finite time, and these are replaced with regions of high vorticity in the viscous theory. This causes the formation of over-hanging plumes as time progresses.

ACKNOWLEDGEMENTS
This work was supported in part by Australian Research Council grant DP1093658.

REFERENCES
[1]. Atkinson, KE (1978). An Introduction to Numerical Analysis, Wiley, New York.
[2]. Forbes, LK (2009). "The Rayleigh-Taylor instability for inviscid and viscous fluids", J. Engin. Math., Vol 65, pp 273-290.
[3]. Forbes, LK (2011). "A cylindrical Rayleigh-Taylor instability – radial outflow from pipes or stars", J. Engin. Math., special Tuck Memorial Issue, to appear.
[4]. Forbes, LK, Chen, MJ and Trenham, CE (2007). "Computing unstable periodic waves at the interface of two inviscid fluids in uniform vertical flow", J. Comput. Phys., Vol 221, pp 269-287.
[5]. Jupp, T and Schultz, A (2000). "A thermodynamic explanation for black smoker temperatures", Nature, Vol 403, pp 880-883.
[6]. Moore, DW (1979). "The spontaneous appearance of a singularity in the shape of an evolving vortex sheet", Proc. Roy. Soc. London, Vol A365, pp 105-119.
[7]. Stahler, SW and Palla, F (2004). The Formation of Stars, Wiley, Berlin.
[8]. Tuck, EO and Vanden-Broeck, J-M (1984). "A cusp-like free-surface flow due to a submerged source or sink", J. Austral. Math. Soc. Ser. B, Vol 25, pp 443-450.
[9]. Wehausen, JV and Laitone, EV (1960). "Surface Waves", in: Handbuch der Physik, Springer-Verlag, Berlin, Vol 9, pp 446-778.


Boundary Control of Hyperbolic Processes with Applications in Water Flow

M. Herty(1) and S. Veelken(2)

(1) RWTH Aachen University, Templergraben 55, D-52064 Aachen, GERMANY (Email: [email protected])

(2) RWTH Aachen University, Templergraben 55, D-52064 Aachen, GERMANY (Email: [email protected])

ABSTRACT
This paper discusses a novel approach to boundary control of hyperbolic conservation laws. As a working example, the St. Venant equations for water flow in open canals are considered. The control problem is formulated and its solution is discussed.

KEYWORDS
Control laws; conservation laws; relaxation methods.

I. INTRODUCTION
In this paper we are concerned with systems of conservation laws describing, for example, water flow in horizontal open canals. A similar discussion also holds in the case of gas networks described by the isothermal Euler equations. In the case of horizontal water canals with a constant rectangular cross-section and without friction, the flow dynamics are described by a nonlinear system of two conservation laws (St. Venant equations)

\partial_t h + \partial_x ( h v ) = 0,   (1)
\partial_t ( h v ) + \partial_x ( h v^2 + \tfrac{1}{2} g h^2 ) = 0.   (2)

Here, h is the water height and v its velocity; g denotes the gravitational constant. The equations are supposed to hold in a canal parameterized by x \in (0,1) and for t > 0, and are supplemented with suitable boundary conditions at x = 0 and x = 1, respectively. For more details on the modeling and arising control questions we refer to [15, 11, 5, 6, 9, 3, 10, 8, 7, 4, 12] and the references therein.

A typical question in the context of water flow [3, 15] is as follows: how can we control the flow at the boundaries of the canal in order to stabilize it? More mathematically, we reformulate this question: design boundary functions such that any given initial data (h, v) converges over time to a steady state. In the considered case the steady state is (h^*, v^*) with v^* = 0. Boundary conditions are in general imposed as

v(0, t) = G_0( h(0, t) ),   t > 0,   (3)
v(1, t) = G_1( h(1, t) ),   t > 0,   (4)

for some given, smooth functions G_0 and G_1. This question has received attention in past years, and we refer to [3] for an overview of existing results. The approach followed in [3, 15] is based on linearization of the equations (1) and (2) around the steady state. The arising quasilinear system

\partial_t y + A(y) \partial_x y = 0,   (5)
y = ( h - h^*, v - v^* )^T,   (6)

is then analyzed using suitable Lyapunov functions [6, 3]. Null-controllability and exponential convergence towards the steady state can be proven [6]. The results extend to the non-homogeneous case, to networks of canals, and also to the case of gas networks including friction terms [9]. However, the disadvantage of the previous approach is due to the fact that the linearization requires a sufficient regularity C^1([0,1]) of the solutions h, v. In general, this regularity constraint is not met. Therefore, in [13] a different approach based on relaxation has been proposed. This is discussed in the following section.

II. RELAXATION OF BOUNDARY CONTROL PROBLEMS
In [14] a relaxation approach to conservation laws was introduced and numerically analyzed. It can be shown [14, 1, 17] that, for a small parameter \epsilon > 0 and up to order O(\epsilon), the following system is an approximation to (1) and (2).

\partial_t h + \partial_x p = 0,   (7)
\partial_t ( h v ) + \partial_x q = 0,   (8)
\partial_t p + a^2 \partial_x h = \tfrac{1}{\epsilon} ( h v - p ),   (9)
\partial_t q + a^2 \partial_x ( h v ) = \tfrac{1}{\epsilon} ( h v^2 + \tfrac{1}{2} g h^2 - q ).   (10)

The approximation is dissipative provided that a satisfies the subcharacteristic condition

a \geq \max_{(h,v)} ( |v| + \sqrt{g h} ).   (11)

The advantage of the relaxation formulation is due to the fact that the hyperbolic structure of the equations is preserved in this approximation. No linearization of the nonlinear terms is necessary. The disadvantage is the doubling of variables: the variables p and q are not needed to describe the water flow. For small \epsilon it can be shown that these

variables converge to h v and h v^2 + \tfrac{1}{2} g h^2, respectively. For further results on relaxation systems and their numerical performance we refer to [17, 2, 16].

For the previous system we can now apply an extension of Theorem 13.12 in [3] in order to obtain a boundary control result. To this end we reformulate (7)–(10) in characteristic variables. Writing U = (h, hv)^T, V = (p, q)^T and F(U) = ( hv, h v^2 + \tfrac{1}{2} g h^2 )^T, the characteristic variables are r^\pm = V \pm a U, and the previous system is equivalent to the diagonal system

\partial_t r^+ + a \partial_x r^+ = \tfrac{1}{\epsilon} ( F(U) - V ),   (12)
\partial_t r^- - a \partial_x r^- = \tfrac{1}{\epsilon} ( F(U) - V ),   (13)

where

U = \tfrac{1}{2a} ( r^+ - r^- ),   V = \tfrac{1}{2} ( r^+ + r^- ).   (14)

For small values of \epsilon we have V \approx F(U). Obviously, we can recover h and v from solutions r^+ and r^- of the diagonal system (12) and (13). In order to simplify the discussion we prescribe linear boundary conditions

r^+(0, t) - r^{+,*} = K_0 ( r^-(0, t) - r^{-,*} ),   (15)
r^-(1, t) - r^{-,*} = K_1 ( r^+(1, t) - r^{+,*} ),   (16)

for some constant matrices K_0 and K_1, where r^{\pm,*} denote the characteristic values of the steady state. We estimate the derivatives of the right-hand side of (12)–(13) with respect to the independent variables at the given data, in order to apply Theorem 13.12 of [3] (estimate (17)).

Theorem 1. Assume that the subcharacteristic condition (11) holds at the steady state. Then there exists a positive number \eta such that, if K_0 and K_1 are of norm less than \eta, there exists \nu > 0 such that, for any initial conditions of sufficiently small C^1-norm satisfying the boundary conditions (15)–(16) and the associated compatibility conditions, the closed-loop system (12)–(13) with boundary conditions (15)–(16) has a unique solution, and this solution satisfies an exponential decay estimate with rate \nu for any t > 0.

The proof relies on a particular choice of a Lyapunov function, see Chapter 13.3 of [3]. The previous theorem guarantees exponential decay of solutions towards a steady state for arbitrary initial data of small norm. Obviously, for \epsilon \to 0 we obtain the corresponding statement for (1)–(2). Some remarks are in order.

Remark 1. The result strongly depends on the value of \epsilon. This controls, to some extent, the norm of the admissible initial data as well as the decay rate \nu. For a more detailed discussion we refer to [13]. For a completely different proof using a different Lyapunov function we also refer to [13]. Numerical experiments can likewise be found in [13].

In order to compare the boundary conditions implied by (15) and (16) with previously introduced conditions (Chap. 13.4 of [3]), we assume from now on subcritical flow, i.e., the solution satisfies

|v| < \sqrt{g h}.   (18)

Furthermore, we consider so-called spillway conditions, i.e., (3)–(4) are given by spillway relations of the form

h(0,t) v(0,t) = c_0 ( H_0 - h(0,t) )^{3/2},   (19)
h(1,t) v(1,t) = c_1 ( H_1 - h(1,t) )^{3/2},   (20)

where H_i is the given water level above the canal and c_i a characteristic constant of the spillway. From the application it is desired to express the control only in terms of the local water height h(0,t) or h(1,t), respectively. To this end we apply the following derivation: we replace, in the previous formulation, the terms p and q by the expressions implied by the boundary conditions (15) and the relaxation approximation (14). We obtain, up to order \epsilon, explicit feedback laws for the controls v(0,t) and v(1,t) in terms of the local water height (equations (21) and (22), respectively).

Under the assumption (18), and using the previous results, we expect these controls to steer the system to rest. Note that, using boundary conditions (16), a different controller could be derived.

III. CONCLUSIONS
We presented a novel idea for boundary control of hyperbolic systems. Contrary to existing results, no linearization is needed in order to obtain a control result. Future investigations will be carried out considering general conservation laws and numerical studies.


ACKNOWLEDGMENTS This work has been supported by DAAD Partnership Program and the German Research Foundation HE5386/8-1. REFERENCES [1]. D. Areba-Driollet and R. Natalini,

Convergence of relaxation schemes for conservation laws, Applicable Analysis, 61 (1996), pp. 163–193.

[2]. M. K. Banda and M. Seaid, Relaxation weno schemes for multidimensional hyper- bolic systems of conservation laws, Nu- mer. Methods Partial Differ. Equations, 23 (2007), pp. 1211–1234.

[3]. J.-M. Coron, Control and nonlinearity, American Mathematical Society, 2007.

[4]. J.-M. Coron, G. Bastin, and B. d’Andrea Novel, Dissipative boundary conditions for one-dimensional nonlinear hyperbolic systems, SIAM J. Control. Optim., 47 (2008), pp. 1460–1498.

[5]. J. M. Coron, B. d’Andrea Novel, and G. Bastin, A lyapunov approach to control irrigation canals modeled by saint venant equations, in ECC Karlsruhe, 1999.

[6]. J.-M. Coron, B. d'Andréa Novel, and G. Bastin, A strict Lyapunov function for boundary control of hyperbolic systems of conservation laws, IEEE Trans. Automat. Control, 52 (2007), pp. 2-11.

[7]. J. de Halleux, C. Prieur, J.-M. Coron, B. d'Andréa Novel, and G. Bastin, Boundary feedback control in networks of open channels, Automatica J. IFAC, 39 (2003), pp. 1365-1376.

[8]. M. Gugat, Boundary controllability between sub and supercritical flow, SIAM Journal on Control and Optimization, 42 (2003), pp. 1056–1070.

[9]. M. Gugat and M. Herty, Existence of classical solutions and feedback stabilization for the flow in gas networks, ESAIM, Control Optim. Calc. Var., (2010).

[10]. M. Gugat and G. Leugering, Global boundary controllability of the de St. Venant equations between steady states, Annales de l'Institut Henri Poincaré, Nonlinear Analysis, 20 (2003), pp. 1-11.

[11]. M. Gugat, G. Leugering, K. Schittkowski, and E. Schmidt, Modelling, stabilization, and control of flow in networks of open channels, in Online Optimization of Large Scale Systems, M. Groetschel et al., eds., Springer, Berlin, 2001, pp. 251-270.

[12]. M. Herty, M. Gugat, A. Klar, and G. Leugering, Conservation law constrained optimization based upon front tracking, M2AN Math. Model. Numer. Anal., 40 (2006), pp. 939–960.

[13]. M. Herty and S. Veelken, Boundary control of nonlinear hyperbolic equations, preprint, (2010).

[14]. S. Jin and Z. Xin, The relaxation schemes for systems of conservation laws in arbitrary space dimensions, Comm. Pure Appl. Math., 48 (1995), pp. 235–276.

[15]. G. Leugering and E. J. P. G. Schmidt, On the modelling and stabilization of flows in networks of open canals, SIAM J. Control Optim., 41 (2002), pp. 164–180.

[16]. H. L. Liu and G. Warnecke, Convergence rates for relaxation schemes approximating conservation laws, SIAM J. Numer. Anal., 37 (2000), pp. 1316–1337.

[17]. R. Natalini, Convergence to equilibrium for the relaxation approximations of conservation laws, Comm. Pure Appl. Math., 49 (1996), pp. 795–823.


Isogeometric methods for shape modeling and numerical simulation

Bernard Mourrain(1)

(1)GALAAD, INRIA Méditerranée, Sophia Antipolis, France Email: [email protected]

Gang Xu(2)

(2)GALAAD, INRIA Méditerranée, Sophia Antipolis, France Email: [email protected]

ABSTRACT
The topic of this paper is to give an overview of a recent approach, called isogeometric analysis, that aims at a seamless integration of geometric modeling and numerical computation. It is a uniform framework to describe both the geometry representation and approximate solutions of a simulation problem on this geometry. It raises interesting geometric problems, which we review. We describe the general framework of this approach and the interesting properties of B-spline bases that it exploits. After showing on an example of a 3D heat conduction problem how it works, we discuss geometric issues including the parametrization of computational domains and its impact on the quality of approximation, refinement techniques which allow us to extend the function basis and to develop adaptive methods that efficiently improve the accuracy of approximation, and geometric issues related to complex topologies.
KEYWORDS
Isogeometric analysis; geometric modeling; physical simulation; B-spline; T-spline; simplex spline.
I. INTRODUCTION
In engineering design and simulation of physical phenomena, geometry plays an important role. The shape of an object directly influences the functionalities that we expect from it. Consider, just as examples, the propeller of a ship, the wing of a plane or the structure of a mechanical piece in a car engine. Its performance (force, drag, resistance) is directly related to its shape. To analyze and optimize this performance, numerical simulations are usually performed. In the design process, these objects are usually described by CAGD tools, which involve parametric non-linear models using B-spline functions. But in the simulation process, usually surface or volume discrete meshes are used to approximate the solutions of partial

differential equations that describe the physical phenomena we want to analyse. This has two important consequences. Firstly, a conversion step is needed to go from one representation to the other, which might bias the corresponding performance analysis. Secondly, this transformation needs to be tightly connected with the design parameters when one wants to optimize the geometry with respect to the performance analysis. The topic of this paper is to give a brief overview of recent developments which tackle these problems. The approach uses the same type of mathematical (piecewise non-linear) representation both for the geometry and for the physical solutions, and thus avoids these costly back-and-forth transformations. Moreover it reduces the number of parameters needed to describe the geometry, which is of particular interest for shape optimisation. This approach was introduced by T. Hughes and his collaborators under the name of isogeometric analysis in the context of PDE problems [1]. This uniform framework provides more accurate and efficient ways to deal with complex shapes and to approximate the solutions of physical simulation problems. But it also raises interesting geometric problems for the representation of shapes and of functions on shapes, which we will describe. To present the general idea of this approach, we simplify the context and consider a surface

patch Ω of ℝ³ on which we want to solve a differential equation of the form D(f)(x) = 0 for x ∈ Ω, with boundary conditions N(f)(x) = f₀(x) for x ∈ ∂Ω.


Instead of directly approximating the function f(x) on the domain Ω,

we first parametrize the physical domain Ω by a computational domain D (here a rectangle) via a map σ : D → Ω,

and then we compute the solution φ = f ∘ σ induced by the partial differential equations with boundary conditions on D through the map σ.

Figure 1. Parametrization of the physical domain.

Therefore the solution f is defined implicitly on Ω by f(x) = φ ∘ σ⁻¹(x). This method naturally extends to cases where the physical domain is a volume parametrized by a cube D in ℝ³.

The isogeometric approach consists in choosing the same type of representation for the parametrization map σ and the actual solution function φ. Because we are interested in representing geometric objects coming from CAGD, a natural choice is to use B-spline basis functions. In the next section, we recall their definition and give their main properties. In section III, we show on an example how this is done in practice. In section IV, we discuss some of the geometric issues related to this approach before the concluding section.

II. SPLINE REPRESENTATION
Given a nondecreasing sequence of knots

τ = (t₀, t₁, …, t_h),  t_i ≤ t_{i+1},

the B-spline basis of degree n can be defined using the Cox-de Boor recursion formula:

N_{j,0}(t) = 1 if t_j ≤ t < t_{j+1}, and 0 otherwise;

N_{j,n}(t) = (t − t_j)/(t_{j+n} − t_j) · N_{j,n−1}(t) + (t_{j+n+1} − t)/(t_{j+n+1} − t_{j+1}) · N_{j+1,n−1}(t).
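The recursion above translates directly into code. The following Python sketch (ours, not part of the paper; the function name is our choice) evaluates N_{j,n}(t) with the usual convention that zero-denominator terms vanish, and checks the partition-of-unity property on a clamped cubic knot vector:

```python
def bspline_basis(j, n, knots, t):
    """Evaluate N_{j,n}(t) by the Cox-de Boor recursion.

    `knots` is the nondecreasing sequence (t_0, ..., t_h); terms with a
    zero denominator are taken to be zero, as is conventional.
    """
    if n == 0:
        return 1.0 if knots[j] <= t < knots[j + 1] else 0.0
    left = 0.0
    if knots[j + n] != knots[j]:
        left = (t - knots[j]) / (knots[j + n] - knots[j]) \
               * bspline_basis(j, n - 1, knots, t)
    right = 0.0
    if knots[j + n + 1] != knots[j + 1]:
        right = (knots[j + n + 1] - t) / (knots[j + n + 1] - knots[j + 1]) \
                * bspline_basis(j + 1, n - 1, knots, t)
    return left + right

# Partition of unity on a clamped cubic knot vector: the 6 basis
# functions sum to 1 everywhere inside the domain [t_3, t_6) = [0, 3).
knots = [0.0, 0.0, 0.0, 0.0, 1.0, 2.0, 3.0, 3.0, 3.0, 3.0]
for t in (0.25, 1.0, 1.5, 2.9):
    total = sum(bspline_basis(j, 3, knots, t) for j in range(6))
    assert abs(total - 1.0) < 1e-12
```

The half-open interval in the degree-0 case means the right endpoint of the knot vector must be handled separately in production code; we evaluate only at interior parameters here.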

B-spline curves can be defined as follows:

P(t) = Σ_{i=0}^{m} P_i N_{i,n}(t),

where the P_i are called control points. B-spline surfaces and volumes can be defined in a tensor-product way:

P(u,v) = Σ_{i=0}^{m} Σ_{j=0}^{l} P_{i,j} N_{i,n}(u) N_{j,n}(v),

P(u,v,w) = Σ_{i,j,k=0}^{m,l,q} P_{i,j,k} N_{i,n}(u) N_{j,n}(v) N_{k,n}(w).
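As an illustration of the tensor-product construction, here is a small self-contained Python sketch (ours, not the authors' code): it evaluates a surface point by summing control points weighted by products of basis functions, shown on a degree-1 (bilinear) patch where the result can be checked by hand:

```python
def N(j, n, T, t):
    # Cox-de Boor recursion; zero-denominator terms vanish.
    if n == 0:
        return 1.0 if T[j] <= t < T[j + 1] else 0.0
    a = (t - T[j]) / (T[j + n] - T[j]) * N(j, n - 1, T, t) \
        if T[j + n] != T[j] else 0.0
    b = (T[j + n + 1] - t) / (T[j + n + 1] - T[j + 1]) * N(j + 1, n - 1, T, t) \
        if T[j + n + 1] != T[j + 1] else 0.0
    return a + b

def surface_point(P, n, Tu, Tv, u, v):
    """P(u,v) = sum_i sum_j P[i][j] * N_{i,n}(u) * N_{j,n}(v)."""
    x = y = z = 0.0
    for i in range(len(P)):
        Bu = N(i, n, Tu, u)
        if Bu == 0.0:
            continue  # local support: skip vanishing rows
        for j in range(len(P[0])):
            w = Bu * N(j, n, Tv, v)
            x += w * P[i][j][0]
            y += w * P[i][j][1]
            z += w * P[i][j][2]
    return (x, y, z)

# Bilinear patch (degree 1): the control points are the four corners,
# and the centre parameter maps to their average.
Tu = Tv = [0.0, 0.0, 1.0, 1.0]
P = [[(0, 0, 0), (0, 1, 0)],
     [(1, 0, 0), (1, 1, 1)]]
assert surface_point(P, 1, Tu, Tv, 0.5, 0.5) == (0.5, 0.5, 0.25)
```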

The B-spline representation has many interesting properties, such as local support, partition of unity, C^{n−1} continuity and refinement properties [3], which are desirable for numerical analysis.

III. EXAMPLE
In this section, we show how to solve a 3D heat conduction problem with boundary conditions by using an isogeometric method.

Given a domain Ω enclosed by the boundary ∂Ω, this physical problem can be described by the following PDE:

ΔT(x) = f(x) in Ω,
N(T)(x) = T₀(x) on ∂Ω,

where x are the Cartesian coordinates and T represents the temperature field. The boundary conditions, which can be of Dirichlet or Neumann type, are applied on the boundary ∂Ω of Ω, T₀ being the imposed temperature. f is a user-defined function that allows one to generate problems with an analytical solution, by adding a source term to the classical heat conduction equation.


According to a classical variational approach, we seek a solution T ∈ H¹(Ω) such that T(x) = T₀(x) on ∂Ω and:

−∫_Ω ∇T(x) · ∇ψ(x) dΩ = ∫_Ω f(x) ψ(x) dΩ   (1)

According to the isogeometric paradigm, the temperature field is represented using B-spline basis functions. For a 3D problem, we have:

T(u,v,w) = Σ_{i,j,k=0}^{m,l,q} T_{i,j,k} N_{i,n}(u) N_{j,n}(v) N_{k,n}(w),

where the N_{i,n} functions are B-spline basis functions and p = (u, v, w) ∈ D are the domain parameters. Then, we define the test functions ψ(x) in the physical domain by:

M_{ijk}(x) = N_{ijk} ∘ σ⁻¹(x),

where

N_{ijk}(p) = N_{i,m}(u) N_{j,l}(v) N_{k,q}(w).

The weak formulation Eq. (1) reads:

Σ_{r=0}^{n_r} Σ_{s=0}^{n_s} Σ_{t=0}^{n_t} T_{rst} ∫_Ω ∇M_{rst}(x) · ∇M_{ijk}(x) dΩ = −∫_Ω f(x) M_{ijk}(x) dΩ

Finally, we obtain a linear system similar to that resulting from classical finite-element methods, with a matrix and a right-hand side defined as:

E_{ijk,rst} = ∫_Ω ∇M_{rst}(x) · ∇M_{ijk}(x) dΩ = ∫_P ∇_p N_{rst}(p)ᵀ Bᵀ(p) B(p) ∇_p N_{ijk}(p) J(p) dP,

S_{ijk} = −∫_Ω f(x) M_{rst}(x) dΩ = −∫_P f(σ(p)) N_{rst}(p) J(p) dP,

where J(p) is the Jacobian of the transformation and Bᵀ(p) is the transpose of the inverse of the Jacobian matrix. The above integrations are performed in parametric space using classical Gauss quadrature rules. An example is given in Figure 2 and Figure 3 with
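The pull-back of these integrals to parametric space, with the Jacobian factor, can be illustrated on a deliberately simple 1D analogue. The following Python sketch is ours (the affine map σ and the integrand are arbitrary choices, not from the paper); it integrates a function over a physical interval by Gauss quadrature on the parametric domain P = [0, 1]:

```python
import math

# Two-point Gauss-Legendre rule on the reference cell [0, 1].
GP = [(0.5 - 0.5 / math.sqrt(3.0), 0.5), (0.5 + 0.5 / math.sqrt(3.0), 0.5)]

def integrate_physical(g, sigma, dsigma, cells=4):
    """Integrate g over the physical domain by pulling back to P = [0, 1]:
    int_Omega g dx = int_P g(sigma(p)) J(p) dp, with Jacobian
    J = |dsigma/dp|, evaluated cell by cell at the Gauss points."""
    total, h = 0.0, 1.0 / cells
    for c in range(cells):
        for q, w in GP:
            p = (c + q) * h          # map reference node into cell c
            total += w * h * g(sigma(p)) * abs(dsigma(p))
    return total

# Affine map of P = [0, 1] onto Omega = [2, 5]; J = 3 everywhere.
sigma = lambda p: 2.0 + 3.0 * p
dsigma = lambda p: 3.0
# int_2^5 x^2 dx = (125 - 8)/3 = 39; 2-point Gauss is exact for cubics.
assert abs(integrate_physical(lambda x: x * x, sigma, dsigma) - 39.0) < 1e-9
```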

f(x, y, z) = −(2/3) sin(x/3) sin(y/3) sin(z/3).

Figure 2 shows the physical domain and its boundary surfaces. Figure 3 presents the color map of the solution field and corresponding

3D control points. The solution value is represented by the color information.

Figure 2. Physical domain and its boundary surfaces.

IV. GEOMETRIC ISSUES
The isogeometric approach provides a uniform framework to represent the geometry and the physical solutions. This significantly simplifies the computation process involved in a simulation or optimisation problem. But it also raises new questions, which are of a geometric nature.

Figure 3. Color map of the solution and control points.
Injectivity condition. The first geometric issue is to guarantee the injectivity of the parameterisation map σ : D → Ω. In a usual context, the physical domain is described by boundary curves or surfaces (see Figure 2). The computational domain needs to be parametrised such that the parametrisation coincides, on the boundary of D, with the parametrisation of the boundary surfaces. Using B-spline representations, the control coefficients are known on the boundary. In Figure 3, the outer control points of the volume parameterisation are


deduced from the control points of the boundary surfaces. The problem reduces to finding the interior control points such that the map σ is a bijection between the computational domain D and the physical domain Ω. The injectivity of σ is guaranteed if its Jacobian does not vanish on D. A sufficient condition for injectivity can be deduced from the relative position of the control points of the parametrisation. For simple shapes, such a parameterisation can be constructed from so-called Coons patches [2]. For more complex shapes, a solution can be found using standard linear programming techniques on the positions of the control points. Such an approach is described in [4]. As a matter of fact, the choice of the free inner control points has an influence on the quality of approximation of the physical solution. As shown in [4], the optimal position of the control points is not necessarily the natural (or regular) one. New strategies combining the optimisation of the position of the inner control points with the approximation of the solution and the estimation of the error can be considered.
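The Jacobian-based injectivity criterion can be illustrated on the simplest possible parametrisation, a bilinear map of the unit square onto a quadrilateral. This Python sketch is ours; note that sampling the Jacobian on a grid is only a necessary check, not the sufficient control-point condition mentioned above:

```python
def jacobian_det(P00, P10, P01, P11, u, v):
    """Determinant of the Jacobian of the bilinear map
    s(u,v) = (1-u)(1-v) P00 + u(1-v) P10 + (1-u)v P01 + uv P11."""
    # Partial derivatives of the map with respect to u and v.
    su = [(1 - v) * (P10[k] - P00[k]) + v * (P11[k] - P01[k]) for k in (0, 1)]
    sv = [(1 - u) * (P01[k] - P00[k]) + u * (P11[k] - P10[k]) for k in (0, 1)]
    return su[0] * sv[1] - su[1] * sv[0]

def looks_injective(P00, P10, P01, P11, samples=11):
    """Sample the Jacobian on a grid; a sign change (or a zero) flags a
    non-injective parametrisation."""
    dets = [jacobian_det(P00, P10, P01, P11,
                         i / (samples - 1), j / (samples - 1))
            for i in range(samples) for j in range(samples)]
    return min(dets) > 0 or max(dets) < 0

# Convex quadrilateral: the Jacobian keeps one sign, the map is injective.
assert looks_injective((0, 0), (2, 0), (0, 2), (3, 3))
# "Bow-tie" corner ordering: the Jacobian changes sign.
assert not looks_injective((0, 0), (2, 0), (3, 3), (0, 2))
```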

Figure 4. Error history during refinement for different parametrizations of the computational domain.

Function space refinement. A standard and traditional technique to improve the quality of approximation is to refine the computational domain. This process, also called h-refinement, consists of inserting knots in the parametric domain, that is, adding new control points. Note that it does not change the parametrisation map but enlarges the space of (B-spline) functions used to represent it. Thus it provides more freedom to better

approximate the solution of our problem. An interesting characteristic of these approximation schemes is that their order of approximation is directly related to the degree of regularity of the B-spline space. Figure 4 is an example of a planar heat conduction problem analysis, where the error is given in terms of the log of the square root of the number of control points. The two curves represent the L2 error of the approximated solution when we use a natural position of the control points (plain blue curve) and when we optimize their position (dashed red curve). We observe that in both cases the slopes of the curves tend to -4, which is the speed of approximation we expect using bicubic B-spline functions.

Using tensor-product B-spline functions has however some drawbacks in this context. When a knot is inserted in one direction of the parameter domain, we add not one new control point but as many as are involved in the other parameter directions. This can lead to too many knot insertions if h-refinement operations are required in all parametric directions. Instead, we would like to have local parameter space refinement possibilities. To handle this problem, new types of B-spline bases are considered. So-called T-splines, introduced by T.W. Sederberg et al. [5], are a generalisation of rational B-spline functions, associated to rectangular subdivisions of the parametric domain. These subdivisions allow T-junctions and make local refinements possible. They have interesting properties for the isogeometric approach [6]. Other types of T-splines which are piecewise polynomial [7] are also considered in isogeometric problems [8]. These spaces of T-splines are not completely understood; in particular, open questions remain on their dimension and the construction of explicit bases.

Another family of B-spline functions is related to triangular control meshes. These splines extend the concept of simplex splines to a triangular mesh, attaching a sequence of nodes to each vertex of the triangulation [9]. They allow one to deal with 2D or 3D domains with arbitrary topology and with an arbitrary degree of regularity. Having the possibility to perform local refinement with these types of B-spline functions is important in the isogeometric
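The h-refinement step for curves can be sketched with Boehm's knot insertion algorithm: inserting a knot adds a control point without changing the curve itself. The following Python code is our illustration, not taken from the paper; it verifies on a cubic example that the evaluated curve is unchanged after insertion:

```python
def N(j, n, T, t):
    # Cox-de Boor recursion; zero-denominator terms vanish.
    if n == 0:
        return 1.0 if T[j] <= t < T[j + 1] else 0.0
    a = (t - T[j]) / (T[j + n] - T[j]) * N(j, n - 1, T, t) \
        if T[j + n] != T[j] else 0.0
    b = (T[j + n + 1] - t) / (T[j + n + 1] - T[j + 1]) * N(j + 1, n - 1, T, t) \
        if T[j + n + 1] != T[j + 1] else 0.0
    return a + b

def curve(P, n, T, t):
    return sum(P[i] * N(i, n, T, t) for i in range(len(P)))

def insert_knot(P, n, T, tbar):
    """Boehm's algorithm: insert tbar once.  The control polygon changes
    but the curve P(t) itself is unchanged (h-refinement)."""
    k = max(i for i in range(len(T) - 1) if T[i] <= tbar)  # tbar in [T[k], T[k+1])
    Q = []
    for i in range(len(P) + 1):
        if i <= k - n:
            Q.append(P[i])
        elif i <= k:
            a = (tbar - T[i]) / (T[i + n] - T[i])
            Q.append((1 - a) * P[i - 1] + a * P[i])
        else:
            Q.append(P[i - 1])
    return Q, T[:k + 1] + [tbar] + T[k + 1:]

# Cubic curve with scalar control coefficients (one coordinate suffices).
T = [0.0, 0.0, 0.0, 0.0, 1.0, 2.0, 3.0, 3.0, 3.0, 3.0]
P = [0.0, 1.0, 4.0, 2.0, 5.0, 3.0]
Q, T2 = insert_knot(P, 3, T, 1.5)
for t in (0.3, 0.9, 1.7, 2.6):
    assert abs(curve(P, 3, T, t) - curve(Q, 3, T2, t)) < 1e-12
```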


approach. To fully exploit these capacities, efficient local error estimators are however needed, which remains a difficult issue in numerical analysis, whatever the approach chosen to approximate the physical solution.
Multipatches. The geometry on which we want to perform the simulation may not be composed of one part that can be parametrized by a simple domain D. It may have holes, or different pieces assembled in a non-manifold

way. This type of geometry requires a special treatment for the description of the parametrisations of the different parts of the object and of the constraints that should be satisfied along the boundaries of these different components. The topological structure and the geometric and functional basis descriptions should be tightly linked in order to provide an efficient solution to the simulation problem. In particular, assembling the (stiffness) matrix E should be optimized according to the supports of the basis functions.
V. CONCLUSION
The isogeometric approach is a promising technique which represents in the same framework the geometry and the physical functions on the geometry. By representing the geometry exactly, it avoids some numerical artefacts that can appear in finite element methods with mesh approximation. It also leads to high-order numerical approximation schemes, using basis functions such as splines. These piecewise polynomial functions, which are heavily used in CAGD, provide a uniform framework to describe the geometry and the solutions. Traditional finite element techniques extend naturally to this new framework. Shape optimisation methods can be applied more efficiently by moving control points instead of nodes on a finite element mesh.

This recent approach raises interesting geometric modeling and representation challenges that need to be addressed for further impact of isogeometry in scientific computing. It also implies some deep changes in the numerical tools and techniques involved in numerical computation, which is another challenge to address.
ACKNOWLEDGEMENTS
We thank Dr. Régis Duvigneau for interesting discussions on finite element techniques and on shape optimisation.
REFERENCES
[1]. J.A. Cottrell, T.J.R. Hughes, Y. Bazilevs. Isogeometric Analysis: Toward Integration of CAD and FEA. Wiley Press, 2009.

[2]. G. Farin, D. Hansford. Discrete Coons patches. Computer Aided Geometric Design, 1999, 16(7): 691-700.

[3]. G. Farin. Curves and Surfaces for Computer Aided Geometric Design: A Practical Guide. Comp. Science and Sci. Computing. Acad. Press, 1990.

[4]. G. Xu, B. Mourrain, R. Duvigneau, A. Galligo. Optimal analysis-aware parameterization of computational domain in isogeometric analysis. Geometric Modeling and Processing, LNCS 6130, 2010: 236-254.

[5]. T.W. Sederberg, J. Zheng, A. Bakenov, A.H. Nasri. T-splines and T-NURCCs. ACM Trans. Graph., 2003, 22(3): 477-484.

[6]. Y. Bazilevs, V.M. Calo, J.A. Cottrell, J.A. Evans, T.J.R. Hughes, S. Lipton, M.A. Scott, T.W. Sederberg. Isogeometric analysis using T-splines. Computer Methods in Applied Mechanics and Engineering, 2010, 199(5-8): 229-263.

[7]. X. Li, J.S. Deng, F.L. Chen. Surface modeling with polynomial splines over hierarchical T-meshes. The Visual Computer, 2007, 23(12): 1027-1033.

[8]. M. Dörfel, B. Jüttler, and B. Simeon. Adaptive isogeometric analysis by local h-refinement with T-splines. Computer Methods in Applied Mechanics and Engineering, 2010, 199(5-8): 264-275.

[9]. G. Xu, G.Z. Wang, X.D. Chen. Free-form deformation with rational DMS-spline volumes. Journal of Computer Science and Technology, 2008, 23(5): 862-873.


FOURTH-ORDER QSMSOR ITERATIVE METHOD FOR THE SOLUTION OF ONE-DIMENSIONAL PARABOLIC PDE’S

1J. Sulaiman, 2M.K. Hasan, 3M. Othman, and 4S.A. Abdul Karim

1 Mathematics with Economics Programme Universiti Malaysia Sabah, Locked Bag 2073,

88999 Kota Kinabalu, Sabah, Malaysia e-mail: [email protected]

2 Dept. of Industrial Computing, Universiti Kebangsaan Malaysia, 43600 UKM Bangi, Selangor, Malaysia

e-mail: [email protected] 3 Dept. of Communication Technology and Network, Universiti Putra Malaysia,

43400 UPM Serdang, Selangor, Malaysia e-mail: [email protected]

4Fundamental and Applied Sciences Department, Universiti Teknologi Petronas, Bandar Seri Iskandar,

31750 Tronoh, Perak Darul Ridzuan, Malaysia. e-mail: [email protected]

ABSTRACT
Presently, many science and engineering problems are described by mathematical models via partial differential equations. The solutions of these problems can be obtained analytically or numerically. In previous studies, the concept of the quarter-sweep scheme has been shown to accelerate the convergence rate in solving systems of linear equations generated by approximation equations of boundary value problems. Based on the same concept, the primary objective of this paper is to investigate the application of a fourth-order finite difference solver using the Quarter-Sweep Modified Successive Over-Relaxation (QSMSOR) iterative method to solve one-dimensional parabolic partial differential equations. Consequently, the proposed fourth-order finite-difference approximation equation is derived using the Crank-Nicolson scheme. The formulation and implementation of the Full-Sweep and Half-Sweep Successive Over-Relaxation iterative methods, namely FSSOR and HSSOR respectively, are also presented. Through the numerical experiments conducted, it is shown that the fourth-order solver using the QSMSOR method is superior to the other SOR methods.
KEYWORDS
Quarter-Sweep Iterative, SOR Scheme, Fourth-Order Finite Difference, 1D Parabolic PDE's
I. INTRODUCTION

Science and engineering problems can often be modelled by differential equations, mainly partial differential equations. Recently, many researchers have proposed low- and high-order approximation equations obtained by discretization of the differential equations in mathematical models. Both second- and fourth-order difference schemes lead to linear systems with different properties of their coefficient matrices. In the high-order case, formulations of fourth-order approximation equations have been proposed and discussed by Spotz [10] and Gupta et al. [4,5] in order to obtain more accurate approximate solutions. Each high-order approximation equation also leads to a system of linear equations, where the characteristics and complexity of the coefficient matrix differ. Besides the standard and compact fourth-order schemes, a new fourth-order difference method, the so-called arithmetic average discretization, has been presented recently for the solution of two-dimensional non-linear singularly perturbed elliptic partial differential equations; see Mohanty [7, 8] and Mohanty and Singh [9].

The main purpose of this paper is to examine the efficiency of the fourth-order quarter-sweep finite difference method for solving one-dimensional diffusion equations. Besides the quarter-sweep approach, the formulation of the full- and half-sweep second-order schemes also needs to be considered for implementing the quarter-sweep iterations. This is because implementations of the


quarter-sweep iterative algorithms will be performed together with the full- and half-sweep approaches. In this paper, we also examine the Quarter-Sweep Modified Successive Over-Relaxation (QSMSOR) method for solving one-dimensional diffusion equations by using the quarter-sweep fourth-order finite difference approximation equation. This QSMSOR method will be compared to the FSSOR, HSSOR and QSSOR methods.

To investigate the effectiveness of the QSMSOR method based on the fourth-order discretization scheme, let us consider the one-dimensional diffusion equation given by

∂U/∂t = α ∂²U/∂x²,  a ≤ x ≤ b, 0 ≤ t ≤ T,   (1)

subject to the initial condition

U(x, 0) = g₁(x),  a ≤ x ≤ b,

and the boundary conditions

U(a, t) = g₂(t),  U(b, t) = g₃(t),  0 ≤ t ≤ T,

where α is a diffusion parameter.

Before constructing the formulation of the finite difference approximation equation for the full-, half-, and quarter-sweep iterations over problem (1), we assume that the solution domain of (1) can be uniformly partitioned into m = n + 1 and M subintervals in the x and t directions respectively. The subinterval widths in the x and t directions are denoted Δx and Δt respectively, which are uniform and defined as

Δx = h = (b − a)/m,  m = n + 1,  Δt = T/M.   (2)

II. Formulation of Fourth-Order Quarter-Sweep Finite Difference Approximation

Referring to Figure 1, and to facilitate the implementation of the full-, half-, and quarter-sweep iterations, we first need to construct three finite grid networks in order to derive the full-, half-, and quarter-sweep finite difference approximation equations by discretizing problem (1). These networks illustrate the implementation of the full-, half-, and quarter-sweep iterative algorithms, which are applied to the solid node points only until iterative convergence is achieved. The approximate solutions at the remaining points (points of the other types) are then obtained by the direct method; see Abdullah [1], Ibrahim and Abdullah [6], Sulaiman et al. [11, 13].

Because implementations of the full-, half-, and quarter-sweep iterations involve the solid node points only, it can be seen that the half- and quarter-sweep iterative methods involve only approximately 50% and 25% of the whole set of inner points, as shown in Figures 1b and 1c, compared to the full-sweep iterative method in Figure 1a.
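A quick way to see the 50% and 25% point counts is to enumerate the indices each sweep actually iterates on. This small Python sketch (ours, for illustration only) counts the updated inner points for p = 1, 2, 4:

```python
# Inner grid points are i = 1, ..., m-1.  A p-sweep iteration updates only
# the points whose index is a multiple of p; the remaining points are
# recovered directly once the iteration has converged.
m = 64
inner = range(1, m)
for p, label in ((1, "full"), (2, "half"), (4, "quarter")):
    swept = [i for i in inner if i % p == 0]
    print(f"{label}-sweep updates {len(swept)} of {len(inner)} inner points "
          f"({len(swept) / len(inner):.0%})")
```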

Figure 1. a), b) and c) show the distribution of uniform node points for the full-, half-, and quarter-sweep cases respectively.

By using the θ-weighted scheme, the general second-order approximation equation for problem (1) can easily be given as

−θλ U_{i−1,j+1} + (1 + 2θλ) U_{i,j+1} − θλ U_{i+1,j+1} = f_{i,j+1}   (3)

where

f_{i,j+1} = (1 − θ)λ U_{i−1,j} + (1 − 2(1 − θ)λ) U_{i,j} + (1 − θ)λ U_{i+1,j},   λ = αΔt/h².

The values of θ in Eq. (3) corresponding to 0, 1, and 1/2 indicate the explicit (classical), fully implicit, and Crank-Nicolson (CN) schemes, respectively. In fact, the truncation errors of these schemes are O(Δt + (Δx)²), O(Δt + (Δx)²),


and O((Δt)² + (Δx)²), respectively. Due to its high accuracy, the second-order CN scheme is mainly considered to derive an approximation equation of problem (1). Using the same derivation as for Eq. (3) and substituting θ = 1/2, we can construct the Full-Sweep, Half-Sweep and Quarter-Sweep Crank-Nicolson finite difference approximation equations, indicated as FSCN, HSCN, and QSCN respectively. The formulation of all CN finite difference approximation equations at the (j+1) time level can generally be expressed as

−(λ/2) U_{i−p,j+1} + (1 + λ) U_{i,j+1} − (λ/2) U_{i+p,j+1} = f_{i,j+1}   (4)

where

λ = αΔt/(ph)²,
f_{i,j+1} = (λ/2) U_{i−p,j} + (1 − λ) U_{i,j} + (λ/2) U_{i+p,j}.
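As a concrete illustration of the CN scheme (4) with p = 1, the following Python sketch (ours, not from the paper) advances the model problem U_t = U_xx with a Thomas tridiagonal solver and compares against the exact decaying sine mode:

```python
import math

def thomas(a, b, c, d):
    """Solve a tridiagonal system with sub-, main- and super-diagonals
    a, b, c (a[0] and c[-1] unused) and right-hand side d."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Crank-Nicolson (theta = 1/2, p = 1) for U_t = U_xx on [0, 1] with
# U(0,t) = U(1,t) = 0 and U(x,0) = sin(pi x); the exact solution is
# U(x,t) = exp(-pi^2 t) sin(pi x).
m, M, T = 32, 32, 0.1
h, dt = 1.0 / m, T / M
lam = dt / h**2
n = m - 1                       # inner points i = 1, ..., m-1
U = [math.sin(math.pi * (i + 1) * h) for i in range(n)]
for _ in range(M):
    rhs = [(lam / 2) * (U[i - 1] if i > 0 else 0.0)
           + (1 - lam) * U[i]
           + (lam / 2) * (U[i + 1] if i < n - 1 else 0.0) for i in range(n)]
    U = thomas([-lam / 2] * n, [1 + lam] * n, [-lam / 2] * n, rhs)
err = max(abs(U[i] - math.exp(-math.pi**2 * T) * math.sin(math.pi * (i + 1) * h))
          for i in range(n))
assert err < 1e-3
```

Note that λ = 3.2 here, far above the explicit stability limit, yet CN remains stable, which is why it is the scheme of choice in the paper.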

By following the same steps used to construct the second-order CN approximation equation (4), we can derive the fourth-order full-sweep, half-sweep and quarter-sweep finite difference approximation equations, which can generally be stated as

c U_{i−2p,j+1} + b U_{i−p,j+1} + a U_{i,j+1} + b U_{i+p,j+1} + c U_{i+2p,j+1} = f_{i,j+1}   (5)

where

f_{i,j+1} = −c U_{i−2p,j} − b U_{i−p,j} + d U_{i,j} − b U_{i+p,j} − c U_{i+2p,j},

a = 1 + 30αΔt/(24(ph)²),  d = 1 − 30αΔt/(24(ph)²),
b = −16αΔt/(24(ph)²),  c = αΔt/(24(ph)²).

Figure 2. Distribution of node points for the computational molecule of the fourth-order full-sweep and half-sweep schemes at i ± p.
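The fourth-order behaviour of the underlying five-point molecule can be checked numerically. This Python sketch is ours; the stencil is the classical fourth-order second-derivative formula on which scheme (5) is based. Halving h should cut the error by roughly 2⁴ = 16:

```python
import math

def d2_fourth(f, x, h):
    """Five-point fourth-order approximation of f''(x):
    (-f(x-2h) + 16 f(x-h) - 30 f(x) + 16 f(x+h) - f(x+2h)) / (12 h^2)."""
    return (-f(x - 2 * h) + 16 * f(x - h) - 30 * f(x)
            + 16 * f(x + h) - f(x + 2 * h)) / (12 * h * h)

x = 0.7
exact = -math.sin(x)             # (sin)'' = -sin
e1 = abs(d2_fourth(math.sin, x, 0.1) - exact)
e2 = abs(d2_fourth(math.sin, x, 0.05) - exact)
ratio = e1 / e2                  # should be close to 2**4 = 16
assert 12.0 < ratio < 20.0
```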

The value of p, corresponding to 1, 2 and 4, represents the full-sweep, half-sweep and quarter-sweep cases respectively. Based on Figure 2, it can be seen that we cannot compute approximate values at the two node points i = p and i = m − p by using Eq. (5), because some of the required node points lie outside the solution domain of (1). To overcome this problem, we consider the corresponding second-order approximation equation in Eq. (4), so that both of these node points are computed by Eq. (4). Therefore, the combination of Eq. (4) and Eq. (5) leads to a linear system in matrix form,

A U = b   (6)

where A is the pentadiagonal matrix of order (m/p − 1),

A =
| 1+λ  −λ/2                          |
|  b     a     b     c               |
|  c     b     a     b     c         |
|        .     .     .     .     .   |
|              c     b     a     b   |
|                       −λ/2   1+λ   |

III. Derivation of the Family of SOR Methods

As can be seen from the linear system in Eq. (6), the coefficient matrix A is sparse and of large scale. Therefore, iterative methods are the natural option for the efficient solution of sparse linear systems. In this section, we discuss how to construct various types of the Successive Over-Relaxation (SOR) method introduced by Young [16]. This method can be viewed as the Gauss-Seidel method with a relaxation parameter w, and is widely used to accelerate the convergence rate of the standard Gauss-Seidel method for solving a linear system of equations. Now, we construct and formulate the FSSOR, HSSOR, QSSOR and QSMSOR iterative methods based on the linear system in Eq. (6), since this system can be represented in a general matrix form for the full-sweep, half-sweep and quarter-sweep cases. Let us consider the linear system in Eq. (6) as


A U = b.   (7)

Then we decompose the coefficient matrix A in Eq. (7) as

A = L + D + T,

where L, D and T are the strictly lower triangular, diagonal and strictly upper triangular parts of A respectively. Therefore, we can write the general scheme for all SOR methods as follows (Young [16, 17, 18, 19]):

U^(k+1) = (D + wL)⁻¹ [((1 − w)D − wT) U^(k) + w b]   (8)

where w and U^(k) represent a relaxation factor and the unknown vector at the k-th iteration respectively. The choice of the relaxation factor depends upon the properties of the coefficient matrix A; a good choice of this parameter can improve the convergence rate of the iteration process. In practice, the optimal value of w in the range 1 ≤ w < 2 is obtained by running several computer programs and then choosing the approximate value of w for which the number of iterations is smallest. However, a theoretical optimal value of w can be calculated by the formula

w_opt = 2 / (1 + √(1 − ρ²)),

where ρ is the spectral radius of the corresponding Jacobi iteration matrix. The formulation of the Modified Successive Over-Relaxation (MSOR) method is the same as that of the SOR method, but it imposes the concept of red-black ordering with two different relaxation factors: w for the "red"
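The SOR scheme (8) and the formula for w_opt can be tried on a small model system. The following Python sketch is ours; it uses a dense inner loop for clarity rather than exploiting the banded structure, and compares the iteration counts of Gauss-Seidel (w = 1) and SOR with the theoretical optimum on a tridiagonal Poisson-type matrix, for which ρ = cos(π/(n+1)) is known:

```python
import math

def sor(A, b, w, tol=1e-10, maxit=10000):
    """Point SOR sweep: x_i <- (1-w) x_i + w * (Gauss-Seidel update);
    w = 1 recovers Gauss-Seidel."""
    n = len(b)
    x = [0.0] * n
    for it in range(1, maxit + 1):
        delta = 0.0
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            new = (1 - w) * x[i] + w * (b[i] - s) / A[i][i]
            delta = max(delta, abs(new - x[i]))
            x[i] = new
        if delta < tol:
            return x, it
    return x, maxit

# Model tridiagonal system (second-order Poisson stencil), size n = 30.
n = 30
A = [[2.0 if i == j else -1.0 if abs(i - j) == 1 else 0.0
      for j in range(n)] for i in range(n)]
b = [1.0] * n

rho = math.cos(math.pi / (n + 1))           # Jacobi spectral radius
w_opt = 2.0 / (1.0 + math.sqrt(1.0 - rho * rho))
x_gs, it_gs = sor(A, b, 1.0)
x_opt, it_opt = sor(A, b, w_opt)
assert it_opt < it_gs                        # the tuned factor converges faster
```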

equation and w′ for the "black" equation. As a result, the iterative scheme for the QSMSOR method is given by

U^(k+1) = (D + wL)⁻¹ [((1 − w)D − wT) U^(k) + w b]   (red equations)
U^(k+1) = (D + w′L)⁻¹ [((1 − w′)D − w′T) U^(k) + w′ b]   (black equations)   (9)

Choosing w = 1 gives the simple Gauss-Seidel (GS) method.

IV. Computational Experiments

In this section, we conduct some numerical simulations in order to examine the effectiveness of the QSMSOR method using the fourth-order QSCN finite difference approximation equation in (5). Three items are considered in the comparison: the number of iterations, the execution time and the maximum absolute error. The numerical experiments solve the following one-dimensional diffusion equation:

∂U/∂t = ∂²U/∂x²,  0 ≤ x ≤ 1, 0 ≤ t ≤ 1.0.   (10)

The initial and boundary conditions are taken from the exact solution of problem (10), given by

U(x, t) = e^{−t} sin x + 3 e^{−4t} sin 2x.

All results of the numerical experiments, obtained from implementations of the FSSOR, HSSOR, QSSOR and QSMSOR methods, are tabulated in Table 1. In these implementations, the convergence test used the tolerance error ε = 10⁻¹⁰. Figures 3 and 4 show the number of iterations and the execution time against the mesh size, respectively. From the numerical results in Table 1, it can clearly be seen that the half- and quarter-sweep iterative methods reduce the number of iterations and the computational time as compared to the FSSOR iterative method (refer to Table 2 and Figures 3 and 4).

Table 1. Comparison of the number of iterations, the execution time (seconds) and the maximum errors for the iterative methods.

Number of iterations
Method    Mesh size: 1024    2048    4096    8192    16384
FSSOR      663    1224    2245    4185    7927
HSSOR      356     663    1224    2245    4185
QSSOR      192     356     663    1224    2245
QSMSOR     120     225     417     772    1414

Execution time (seconds)
Method    Mesh size: 1024    2048    4096    8192    16384
FSSOR      5.19    20.03    77.85    288.91    1064.61
HSSOR      1.19     4.42    17.32     66.82     247.70
QSSOR      0.56     1.89     7.32     28.10     108.08
QSMSOR     0.26     0.77     2.83     10.62      40.48

Maximum absolute errors
Method    Mesh size: 1024    2048    4096    8192    16384
FSSOR      4.12e-7    4.13e-7    4.23e-7    5.54e-7    9.58e-7
HSSOR      4.13e-7    4.13e-7    4.13e-7    4.23e-7    5.54e-7
QSSOR      4.13e-7    4.13e-7    4.13e-7    4.13e-7    4.23e-7
QSMSOR     4.01e-7    3.89e-7    3.65e-7    3.17e-7    2.26e-7


Figure 3. Number of iterations versus mesh size of the

FSSOR, HSSOR, QSSOR and QSMSOR methods.

Figure 4. The execution time (seconds) versus mesh size of the FSSOR, HSSOR, QSSOR and QSMSOR methods.

Table 2. Reduction percentage of the number of iterations and execution time for the HSSOR, QSSOR and QSMSOR methods compared with the FSSOR method.

Method    Number of iterations    Execution time
HSSOR     45.47 - 47.21%          76.73 - 77.93%
QSSOR     70.47 - 70.92%          89.21 - 90.60%
QSMSOR    81.43 - 82.16%          94.99 - 96.36%

V. CONCLUSIONS

In this paper, we presented the formulation of the FSSOR, HSSOR, QSSOR and QSMSOR methods by using the corresponding fourth-order Crank-Nicolson finite difference schemes in Eq. 5. Based on the fourth-order discretization schemes for all sweep cases, the corresponding second-order approximation equation needs to be applied together at the two associated node points. From the numerical results collected in Tables 1 and 2, we can conclude that the QSMSOR method is superior, in terms of the number of iterations and execution time, to the FSSOR, HSSOR and QSSOR methods. This is because the computational complexity of the quarter-sweep methods is approximately 75% less than that of the FSSOR method. In future work, the capability of the proposed technique will be examined further for solving multi-dimensional partial differential equations (Evans & Sahimi [2]; Abdullah [1]; Ibrahim & Abdullah [6]; Sulaiman et al. [12]) and for use as a smoother in multigrid solvers, see Evans and Yousif [3]; Othman and Abdullah [15]; Sulaiman et al. [14].

REFERENCES [1]. Abdullah, A.R. (1991). ”The Four Point

Explicit Decoupled Group (EDG) Method: A Fast Poisson Solver”. International Journal Computer Mathematics, Vol 38, pp 61-70.

[2]. Evans, D.J & Sahimi, M.S. (1988). ”The Alternating Group Explicit iterative method (AGE) to solve parabolic and hyperbolic partial differential equations”. Ann. Rev. Num. Fluid Mechanic and Heat Trans, Vol 2, pp 283-389

[3]. Evans, D. J & Yousif, W. S. (1990). ”The Explicit Block Relaxation method as a grid smoother in the Multigrid V-cycle scheme”. Int. J. Computer Maths. , Vol 34, pp 71-78.

[4]. Gupta, M. M., Kouatchou, J. & Zhang, J. (1997a). ”A Compact Multigrid solver for convection-diffusion equations”. Journal of Computational Physics, Vol 132, No 1, pp 123-129.

[5]. Gupta, M. M., Kouatchou, J. & Zhang, J. (1997b). ”Comparison of second and fourth order discretizations for multigrid Poisson solvers”. Journal of Computational Physics, Vol 132, No 2, pp 226-232.

[6]. Ibrahim, A. & Abdullah, A.R. (1995). ”Solving the two-dimensional diffusion equation by the four point explicit decoupled group (EDG) iterative method”. International Journal Computer Mathematics, Vol 58, pp 253-256.

[7]. Mohanty, R.K. (1997). "Order h4 difference methods for a class of singular two space elliptic boundary value problems". Journal of Computational and Applied Mathematics, Vol 81, pp 229-247.

[8]. Mohanty, R.K. (2007). ”The smart-BLAGE algorithm for singularly perturbed 2D elliptic partial differential equations”. Applied Mathematics and Computation, Vol 190, pp 321–331.

[9]. Mohanty, R.K. & Singh, S. (2006). ”A new fourth order discretization for singularly perturbed two dimensional non-linear elliptic boundary value problems”. Applied Mathematics and Computation, Vol 175, pp 1400–1414.

[10]. Spotz, W. F. (1995). ”High-Order Compact Finite Difference Schemes for Computational Mechanics”. Ph.D Dissertation. The University of Texas at Austin.

[11]. Sulaiman, J., Hasan, M.K. & Othman, M. (2007). "Red-Black EDGSOR Iterative Method Using Triangle Element Approximation for 2D Poisson Equations". In O. Gervasi & M. Gavrilova (Eds), Computational Science and Its Applications 2007 (LNCS 4707), pp 298-308. Berlin: Springer-Verlag.

[12]. Sulaiman, J., Hasan, M.K. & Othman, M.(2007). ”Red-Black Half-Sweep Iterative Method Using Triangle Finite Element Approximation for 2D Poisson Equations”. In. Y. Shi et al. (Eds). Computational Science 2007. (LNCS 4487), pp 326-333. Berlin: Springer-Verlag.

[13]. Sulaiman, J., Othman, M. & Hasan, M.K. (2004). ”Quarter-Sweep Iterative Alternating Decomposition Explicit algorithm applied to diffusion equations”. International Journal of Computer Mathematics, Vol 81, No 12, pp 1559-1565.

[14]. Sulaiman, J., Othman, M. & Hasan, M.K. (2008). "Half-Sweep Algebraic Multigrid (HSAMG) method applied to diffusion equations". In H.G. Bock et al. (Eds), Modeling, Simulation and Optimization of Complex Processes, pp 547-556. Berlin: Springer-Verlag.

[15]. Othman, M. & Abdullah, A.R. (1999). "An Efficient Multigrid Poisson Solver". Int. J. Computer Maths., Vol 71, pp 541-553.

[16]. Young, D. M. (1954). ”Iterative Methods for solving Partial Difference Equations

of Elliptic Type”, Trans. Amer. Math. Soc., Vol 76, 92-111.

[17]. Young, D.M. (1971). ”Iterative solution of large linear systems”. London: Academic Press.

[18]. Young, D.M. (1972). ”Second-degree iterative methods for the solution of large linear systems”. Journal of Approximation Theory, Vol 5, pp 37-148.

[19]. Young, D.M. (1976). ”Iterative solution of linear systems arising from finite element techniques”. In: Whiteman, J.R. (Eds.). The Mathematics of Finite Elements and Applications II, pp 439-464. London: Academic Press.


A Parallel Accelerated Over-Relaxation Quarter-Sweep Point Iterative Algorithm for Solving the Poisson Equation

Mohamed Othman,(1,2) Shukhrat I. Rakhimov,(2) Mohamed Suleiman(2) and Jumat Sulaiman(3)

(1)Dept of Communication Tech and Network,

Universiti Putra Malaysia, 43400 UPM Serdang, Selangor D.E., Malaysia (Email: [email protected]) (2)Institute for Mathematical Research,

Universiti Putra Malaysia, 43400 UPM Serdang, Selangor D.E., Malaysia (Email: [email protected])

(3) School of Science and Technology, University Malaysia Sabah, 88999 Kota Kinabalu, Sabah, Malaysia

(Email: [email protected])

ABSTRACT

Over-relaxation methods have proved efficient for solving the large systems of linear equations arising from discretizations of scientific and engineering problems. Recently, a new accelerated over-relaxation (AOR) quarter-sweep point-iterative method was shown to be effective for solving the Poisson equation, as compared to the full- and half-sweep approaches. In this paper, we present a parallel implementation of the AOR quarter-sweep method on a distributed memory architecture machine. To confirm its effectiveness, experimental results for a test problem were recorded and analyzed; they show that the parallel AOR quarter-sweep algorithm is superior to the previous parallel AOR full- and half-sweep algorithms.

KEYWORDS
Accelerated over-relaxation, point iterative methods, Poisson equation, distributed memory architecture, parallel algorithm.

I. INTRODUCTION

Many scientific and engineering problems can be modeled mathematically as differential equations. With the rapid growth of computer technology, numerical techniques can be used to solve large problems. Among stationary problems, the two-dimensional Poisson equation is the most common type and is represented as follows:

∂²u/∂x² + ∂²u/∂y² = f(x, y),   (1)

with Dirichlet boundary conditions u(x, y) = g(x, y) for (x, y) on the boundary ∂Ω. For simplicity, we assume the solution domain to be the unit square, 0 ≤ x, y ≤ 1.

A number of numerical methods have been formulated to solve the equation. Most are based on the concept of Successive Over-Relaxation (SOR), introduced by Young [27], and Accelerated Over-Relaxation (AOR), introduced by Hadjidimos [23]. Based on SOR, a parallel full-sweep point-iterative algorithm for solving large sparse linear systems was developed by Evans [22]. The half-sweep approach, used in the derivation of the Explicit De-coupled Group (EDG) method, was introduced by Abdullah, see [20]; subsequently, a parallel implementation of the EDG method was developed by Yousif et al. [28]. The SOR quarter-sweep point/group-iterative methods, shown to be superior to the full- and half-sweep point iterative methods, were introduced by Othman et al. [24]. At the same time, the concept of AOR was applied to the full- and half-sweep approaches by Ali et al. [21]. In a recent paper [26], the authors developed the AOR quarter-sweep point iterative method, which was shown to be more efficient than the AOR methods with the full- and half-sweep approaches. In this paper we introduce a parallel implementation of the AOR quarter-sweep point iterative method on a distributed memory architecture machine.

II. THE AOR QUARTER-SWEEP POINT ITERATIVE METHOD

In the AOR quarter-sweep point iterative method, the domain points are divided into three types, as shown in Figure 1.


Figure 1. The solution domain for the AOR quarter-sweep point-iterative method.

The method is based on the five-point quarter-sweep finite difference scheme,

v_{i-2,j} + v_{i+2,j} + v_{i,j-2} + v_{i,j+2} - 4 v_{i,j} = 4h² f_{i,j}.   (2)

The algorithm of the method is as follows.

1. Divide the grid points into the three types as in Figure 1. Compute the values of h², 2h², and 4h² beforehand and assign them to the variables H, I, and J.

2. For all the points of the first type, implement the accelerated over-relaxation update

   v_{i,j}^(k+1) = (1 - ω) v_{i,j}^(k) + ω v̂_{i,j}^(k) + (r/4) [ (v_{i-2,j}^(k+1) - v_{i-2,j}^(k)) + (v_{i,j-2}^(k+1) - v_{i,j-2}^(k)) ],

   where

   v̂_{i,j}^(k) = (1/4) ( v_{i-2,j}^(k) + v_{i+2,j}^(k) + v_{i,j-2}^(k) + v_{i,j+2}^(k) - J f_{i,j} ).

3. Check for convergence. If convergence is not achieved, go to Step 2. Otherwise, evaluate the solutions at the other points by the following formulas:

   v_{i,j} = (1/4) ( v_{i-1,j-1} + v_{i-1,j+1} + v_{i+1,j-1} + v_{i+1,j+1} - I f_{i,j} )

   for points of the second type, and

   v_{i,j} = (1/4) ( v_{i-1,j} + v_{i+1,j} + v_{i,j-1} + v_{i,j+1} - H f_{i,j} )

   for points of the third type.

4. Stop.
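The serial algorithm above can be sketched in Python as follows. This is a standard AOR formulation on the 2h-spaced sub-grid, not the authors' implementation; the relaxation parameters omega and r are illustrative.

```python
import numpy as np

def aor_quarter_sweep(f, g, n, omega=1.2, r=1.3, tol=1e-10, max_iter=100_000):
    """Serial sketch of the AOR quarter-sweep point method for
    u_xx + u_yy = f(x, y) on the unit square with Dirichlet data u = g.
    n (even) grid intervals, h = 1/n.  The AOR iteration runs only on the
    2h-spaced points (i, j both even); the rest are filled in afterwards."""
    assert n % 2 == 0
    h = 1.0 / n
    xs = np.linspace(0.0, 1.0, n + 1)
    v = np.zeros((n + 1, n + 1))
    for k in range(n + 1):                      # Dirichlet boundary values
        v[0, k], v[n, k] = g(0.0, xs[k]), g(1.0, xs[k])
        v[k, 0], v[k, n] = g(xs[k], 0.0), g(xs[k], 1.0)
    F = f(xs[:, None], xs[None, :])
    H, I, J = h * h, 2 * h * h, 4 * h * h       # precomputed constants
    core = [(i, j) for i in range(2, n - 1, 2) for j in range(2, n - 1, 2)]
    for _ in range(max_iter):                   # AOR sweep on the 2h sub-grid
        v_old = v.copy()
        for i, j in core:
            v_hat = (v_old[i-2, j] + v_old[i+2, j] + v_old[i, j-2]
                     + v_old[i, j+2] - J * F[i, j]) / 4.0
            # acceleration term uses already-updated left/lower neighbours
            accel = (v[i-2, j] - v_old[i-2, j]) + (v[i, j-2] - v_old[i, j-2])
            v[i, j] = (1 - omega) * v_old[i, j] + omega * v_hat + (r / 4.0) * accel
        if np.max(np.abs(v - v_old)) < tol:
            break
    for i in range(1, n, 2):                    # rotated 5-point fill (i, j odd)
        for j in range(1, n, 2):
            v[i, j] = (v[i-1, j-1] + v[i-1, j+1] + v[i+1, j-1]
                       + v[i+1, j+1] - I * F[i, j]) / 4.0
    for i in range(1, n):                       # standard 5-point fill (i+j odd)
        for j in range(1, n):
            if (i + j) % 2 == 1:
                v[i, j] = (v[i-1, j] + v[i+1, j] + v[i, j-1]
                           + v[i, j+1] - H * F[i, j]) / 4.0
    return v
```

Choosing r = omega reduces the update to SOR; r ≠ omega is what distinguishes AOR.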

III. THE PARALLEL AOR QUARTER-SWEEP POINT METHOD

In [26], the chessboard (red-black) ordering strategy was shown to be the optimal strategy for the new method; it is therefore used in the parallel implementation.

A. Domain Decomposition
Since the points in the solution domain are interdependent, domain decomposition is used to handle the data dependence at the interboundaries, and the number of points on each interboundary must be kept minimal. Denoting the number of processes by p, the solution domain is decomposed evenly into p vertical sub-domains, as shown in Figure 2.

B. Interprocess Communication
In the chessboard ordering strategy there are two stages, red and black. Since the stages depend on each other, the data must be updated after each stage, i.e. twice per iteration. Only the points at the interboundaries need to be exchanged, i.e. each sub-domain updates its interboundary data through communication among the processors. The interprocess communication procedure is therefore called twice per iteration: after the red stage and again after the black stage.

C. Parallel AOR Quarter-Sweep Algorithm

Figure 2. Domain decomposition for the parallel AOR quarter-sweep method with n = 18 and two processors (p = 2).
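The even vertical decomposition of Section III.A can be sketched with a small helper that assigns a contiguous range of interior grid columns to each process. This is an illustrative sketch only; the actual exchange of interboundary columns between adjacent processors (e.g. via MPI) is not shown.

```python
def decompose_columns(n, p):
    """Split the interior grid columns 1..n-1 evenly into p vertical strips,
    as in Figure 2; returns a list of (start, end) column ranges, inclusive,
    one per process.  Earlier processes absorb any remainder columns."""
    interior = n - 1
    base, extra = divmod(interior, p)
    strips, start = [], 1
    for rank in range(p):
        width = base + (1 if rank < extra else 0)
        strips.append((start, start + width - 1))
        start += width
    return strips
```

In the parallel algorithm, each process then iterates only over its own strip and sends the columns adjacent to a strip boundary to its neighbours after every red and black stage.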


The implementation of the parallel AOR quarter-sweep point algorithm is described as follows.

1. Divide the grid points into the four types as in Figure 2. Compute the values of h², 2h², and 4h² beforehand and assign them to the variables H, I, and J. Decompose the domain into sub-domains.

2. For all the points of the first type in the local sub-domain (the red stage), implement the accelerated over-relaxation update

   v_{i,j}^(k+1) = (1 - ω) v_{i,j}^(k) + ω v̂_{i,j}^(k) + (r/4) [ (v_{i-2,j}^(k+1) - v_{i-2,j}^(k)) + (v_{i,j-2}^(k+1) - v_{i,j-2}^(k)) ],

   where

   v̂_{i,j}^(k) = (1/4) ( v_{i-2,j}^(k) + v_{i+2,j}^(k) + v_{i,j-2}^(k) + v_{i,j+2}^(k) - J f_{i,j} ).

3. Call the interprocess communication procedure, sending the updated points at the interboundaries to the adjacent processors.

4. Repeat Steps 2-3 for the points of the black stage.

5. Check for convergence. If convergence is not achieved, go to Step 2. Otherwise, evaluate the solutions at the other points by the following formulas:

   v_{i,j} = (1/4) ( v_{i-1,j-1} + v_{i-1,j+1} + v_{i+1,j-1} + v_{i+1,j+1} - I f_{i,j} )

   for points of the second type, and

   v_{i,j} = (1/4) ( v_{i-1,j} + v_{i+1,j} + v_{i,j-1} + v_{i,j+1} - H f_{i,j} )

   for points of the third type.

6. Stop.

IV. NUMERICAL RESULTS AND DISCUSSIONS

To benchmark the parallel AOR quarter-sweep point iterative algorithm, we also implemented parallel versions of the AOR full- and half-sweep point iterative methods. The numerical experiments were conducted on the following problem,

∂²u/∂x² + ∂²u/∂y² = (x² + y²) e^(xy),   (x, y) ∈ [0, 1] × [0, 1],

with Dirichlet boundary conditions. The exact solution of the problem is u(x, y) = e^(xy) for (x, y) ∈ Ω. The experiments were carried out with several mesh sizes: n = 150, 250, 350 and 450. The convergence test was the maximum absolute error with error tolerance 10^-10. The parallel algorithm was implemented in the C language on a Sun Fire V1280 parallel machine with eight processors. The results for the parallel AOR quarter-sweep point iterative method are reported in Table 1.

Table 1. Experimental results for the parallel AOR quarter-sweep point iterative method.

n     p   Time (ms)    Speedup   Efficiency
150   1     511.864      1.00      1.00
150   2     308.179      1.66      0.83
150   3     235.346      2.17      0.72
150   4     194.690      2.63      0.66
150   5     179.336      2.85      0.57
150   6     165.988      3.08      0.51
150   7     157.103      3.26      0.47
150   8     152.881      3.35      0.42
250   1    2365.096      1.00      1.00
250   2    1358.951      1.74      0.87
250   3     972.199      2.43      0.81
250   4     770.047      3.07      0.77
250   5     682.750      3.46      0.69
250   6     603.724      3.92      0.65
250   7     552.556      4.28      0.61
250   8     506.630      4.67      0.58
350   1    6120.029      1.00      1.00
350   2    3448.256      1.77      0.89
350   3    2426.846      2.52      0.84
350   4    1904.208      3.21      0.80
350   5    1655.956      3.70      0.74
350   6    1429.563      4.28      0.71
350   7    1292.324      4.74      0.68
350   8    1147.156      5.33      0.67
450   1   13289.598      1.00      1.00
450   2    7456.084      1.89      0.94
450   3    5212.513      2.70      0.90
450   4    3990.711      3.59      0.90
450   5    3287.753      4.48      0.90
450   6    2832.822      5.28      0.88
450   7    2511.695      6.25      0.89
450   8    2427.652      7.15      0.89

The execution times of the parallel AOR full-, half- and quarter-sweep point iterative algorithms are shown in Figure 3.


Figure 3. Execution times for the parallel AOR full-, half-, and quarter-sweep point iterative methods (n = 450).

The figure shows that the parallel AOR quarter-sweep method is the fastest of the three algorithms for any number of processes used to solve the problem.

V. CONCLUSIONS

In this paper, we discussed the implementation of the parallel AOR quarter-sweep point iterative algorithm. The algorithm has shown good speedup and efficiency, especially for large grid sizes, and it has again shown its superiority over the previous parallel AOR full- and half-sweep algorithms.

ACKNOWLEDGEMENT

The research was supported by the Fundamental Research Grant Scheme (FRGS), Grant No. 02-01-07321FR, under the Ministry of Higher Education of Malaysia.

REFERENCES

[20]. Abdullah, A.R. (1991). "The Four Point Explicit Decoupled Group (EDG) Method: A Fast Poisson Solver." International Journal of Computer Mathematics, 38:61-70.

[21]. Ali, NHM, and Chong, LS (2007). Group Accelerated OverRelaxation Methods on Rotated Grid. Applied Mathematics and Computation, 191(2):533−542.

[22]. Evans, D.J., (1984). Parallel S.O.R. iterative methods. Parallel Computing 1: 3-18.

[23]. Hadjidimos, A (1978). Accelerated OverRelaxation Method. Mathematics of Computation, 32(141):149−157.

[24]. Othman, M, and Abdullah, A.R., (2000). An Efficient Parallel Quarter-sweep Point Iterative Algorithm for Solving Poisson Equation on SMP Parallel Computer, Pertanika Journal of Science and Technology, 8(2):161−174.

[25]. Othman, M, Abdullah, A.R., and Evans, D.J. (2004). A Parallel Four Points Modified Explicit Group Algorithm on Shared Memory Multiprocessors. International Journal of Parallel, Emergent and Distributed Systems, 19(1):1-9.

[26]. Rakhimov, ShI, and Othman, M. (2009). An Accelerated Over-Relaxation Quarter Sweep Point Iterative Method For Two Dimensional Poisson Equation, Sains Malaysiana, 38(5), pp. 729−733.

[27]. Young, D.M. (1954). Iterative Methods for Solving Partial Difference Equations of Elliptic Type. Transactions of American Mathematical Society, 76:92−111

[28]. Yousif, W.S., and Evans, D.J. (1995). Explicit De-coupled Group Iterative Methods and Their Parallel Implementations. International Journal of Parallel, Emergent and Distributed Systems, 7:53-71.


Value-at-Risk (VaR) using ARMA(1,1)-GARCH(1,1)

Sufianti(1) and Ukur A. Sembiring(2) (1)Department of Mathematics, Universitas Pelita Harapan, Tangerang, Banten, Indonesia

Email: [email protected] (2)Department of Mathematics, Universitas Pelita Harapan, Tangerang, Banten, Indonesia

Email: [email protected]

ABSTRACT

With the economic and social world so uncertain today, risk management has become an essential suit of armor for many corporate entities. One of the widely used tools for managing risk is Value-at-Risk (VaR). The main concern of this paper is the ARMA(1,1)-GARCH(1,1) model, developed for VaR calculation. From the derivation and simulation, the ARMA(1,1)-GARCH(1,1) model with its related assumptions is shown to describe several characteristics of financial data adequately and to be applicable to real data. Additionally, backtesting results show that VaR calculation using the ARMA(1,1)-GARCH(1,1) model performs better than the conventional one in terms of capital reservation for market risk coverage.

KEYWORDS
Value-at-Risk; ARMA(1,1)-GARCH(1,1); risk management; risk factor; backtesting

I. INTRODUCTION

In the financial market, the value of an instrument always changes over time due to risk factors such as inflation, natural disasters, or government policies. Portfolio holders are therefore exposed to risk, i.e. uncertainty about the portfolio's return. Several models have been developed to make risk measurable, so that it can be managed according to the portfolio manager's risk appetite. One such method is risk valuation using Value-at-Risk (VaR). To date, VaR has been widely used as a reliable risk valuation method; its particular advantage is its ability to consolidate all the risk factors to which a portfolio is exposed. VaR defines the potential loss of a portfolio over a certain time horizon at a specific confidence level. Knowing the VaR value of a portfolio will considerably

help risk managers in managing the portfolio's market risk. The aim of this paper is to introduce a daily VaR calculation method using the ARMA(1,1)-GARCH(1,1) model for a portfolio consisting of two risk factors, with the related assumptions, through model derivation and simulation using real data. Real data, in the form of closing share prices of arbitrarily chosen companies listed on the New York Stock Exchange (NYSE), represent the major risk factors of the portfolio in the simulation. In addition, a comparison between the VaR value calculated with this model and the conventional one, using the backtesting method, will be conducted to show the model's advantage.

II. VAR CALCULATION FOR

PORTFOLIO WITH TWO RISK FACTORS

1. Model Derivation

The one-period, discretely sampled return r_t can be written as the sum of the conditional expected return μ_t and an innovation (or arbitrary component) a_t with zero mean and conditional variance σ_t²:

r_t = μ_t + a_t,   a_t = σ_t ε_t,   (1)

where ε_t is an independent white noise process and r_t is the log return. Two assumptions will be used:
1. ε_t is normally distributed (i.e. ε_t is a Gaussian white noise);
2. σ_t is not constant.


Using the first assumption results in a conditionally normally distributed return:

r_t | F_{t-1} ~ N(μ_t, σ_t²).   (2)

The second assumption is the basis for selecting the ARMA(1,1) model for the return r_t:

r_t = φ₀ + φ₁ r_{t-1} + a_t - θ₁ a_{t-1},   (3)

and the volatility σ_t in the return equation will be modeled by GARCH(1,1):

σ_t² = α₀ + α₁ a_{t-1}² + β₁ σ_{t-1}².   (4)

GARCH(1,1) is chosen for modeling volatility because of its ability to describe several characteristics of financial data, such as the leptokurtic distribution of returns, volatility clustering, stationarity, and volatility mean reversion. After setting up the return and volatility equations, the parameters in Eqs. 3~4 are estimated simultaneously using the maximum likelihood estimation (MLE) method. Let θ be the vector containing the parameters of the ARMA(1,1)-GARCH(1,1) model and F_{t-1} be the set of prior information. Given F_{t-1}, the likelihood function can be written as a product of conditional densities:

L(θ) = ∏_{t=1}^{T} f(r_t | F_{t-1}, θ).   (5)

Because the returns are conditionally normally distributed, the conditional density is

f(r_t | F_{t-1}, θ) = (1 / √(2π σ_t²)) exp( -(r_t - μ_t)² / (2σ_t²) ).   (6)

By Eqs. 5~6, the log-likelihood function is

ln L(θ) = -(1/2) Σ_{t=1}^{T} [ ln(2π) + ln σ_t² + (r_t - μ_t)² / σ_t² ],   (7)

where μ_t and σ_t² are functions of θ.
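To make the model concrete, Eqs. 1-4 can be simulated directly. The following Python sketch uses arbitrary illustrative parameter values (the paper instead estimates them from data by MLE); the unconditional-variance start-up assumes α₁ + β₁ < 1.

```python
import numpy as np

def simulate_arma_garch(n, phi0, phi1, theta1, alpha0, alpha1, beta1, seed=0):
    """Simulate an ARMA(1,1)-GARCH(1,1) return series:
       r_t = phi0 + phi1*r_{t-1} + a_t - theta1*a_{t-1},
       a_t = sigma_t * z_t,  z_t ~ iid N(0, 1),
       sigma_t^2 = alpha0 + alpha1*a_{t-1}^2 + beta1*sigma_{t-1}^2.
    Returns the returns r and the conditional volatilities sigma."""
    rng = np.random.default_rng(seed)
    r = np.zeros(n)
    a = np.zeros(n)
    s2 = np.zeros(n)
    s2[0] = alpha0 / (1 - alpha1 - beta1)     # unconditional variance start-up
    a[0] = np.sqrt(s2[0]) * rng.standard_normal()
    r[0] = phi0 + a[0]
    for t in range(1, n):
        s2[t] = alpha0 + alpha1 * a[t - 1] ** 2 + beta1 * s2[t - 1]
        a[t] = np.sqrt(s2[t]) * rng.standard_normal()
        r[t] = phi0 + phi1 * r[t - 1] + a[t] - theta1 * a[t - 1]
    return r, np.sqrt(s2)

# illustrative parameter values, not estimates from the paper's data
returns, vol = simulate_arma_garch(1000, 0.0, 0.2, 0.1, 1e-5, 0.08, 0.90)
```

Simulated series of this kind exhibit the volatility clustering that motivates the GARCH term.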

To obtain θ̂ using the MLE method, the following equation must be solved:

∂ ln L(θ) / ∂θ = 0.   (8)

Because the log-likelihood function is nonlinear in its parameters, Eq. 8 can only be solved by numerical computation. After obtaining the parameters, the next step is to calculate the 1-step-ahead forecasts of the conditional mean and volatility using

μ̂_{t+1} = φ̂₀ + φ̂₁ r_t - θ̂₁ a_t,   σ̂²_{t+1} = α̂₀ + α̂₁ a_t² + β̂₁ σ̂_t²,   (9)

and, by Eq. 2, r_{t+1} | F_t ~ N(μ̂_{t+1}, σ̂²_{t+1}). The previous steps of setting up the equations and calculating the forecasts are carried out for each of the two risk factors. Afterwards, the conditional mean and volatility of the portfolio with weight vector w = (w₁, w₂)' are calculated using

μ̂_p = w' μ̂,   σ̂²_p = w' Σ̂ w,   (10)

where Σ̂ is the covariance matrix of the two risk factors. Since VaR is a quantile of the cumulative distribution function of the return, the VaR of the portfolio is defined as

VaR = z_{1-α} σ̂_p - μ̂_p,   (11)


where 1 - α is the confidence level and the value of z_{1-α} can be acquired from the standard normal distribution table. The VaR calculated from the quantile of the return distribution, given the information available at time t, is in percentage terms. The dollar amount of VaR is then the cash value of the financial position times the VaR of the log-return series. That is, using a time horizon of 1 day, the daily VaR of the portfolio for a position of size P is

VaR($) = P × VaR.   (12)

Here, VaR symbolizes the worst loss over a 1-day horizon for the position held in the portfolio.

2. Simulation

In this subsection, a simulation of the VaR calculation for a portfolio with two risk factors is carried out using real data: the closing prices of Arch Coal, Inc. (ACI) and Goldman Sachs Group, Inc. (GS) in 2002-2005. Before using the data, several tests and inspections must be conducted to ensure that they satisfy all assumptions and conditions for use of the ARMA(1,1)-GARCH(1,1) model. Inspections using the ACF, PACF, and normal QQ plot, and quantitative tests such as the Ljung-Box Q-test, Engle's ARCH test, and the Kolmogorov-Smirnov test, are applied. Once the data are cleared for use, the closing price of each risk factor is converted into log returns.
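The VaR computation of Eqs. 11-12 reduces to a one-liner once the forecasts are available. The sketch below assumes the loss-quantile convention VaR = z·σ - μ used here; `statistics.NormalDist` supplies the standard normal quantile.

```python
from statistics import NormalDist

def daily_var(mu, sigma, position, confidence=0.99):
    """Daily VaR of a position, in currency units, for a conditionally
    normal log return with 1-step-ahead mean mu and volatility sigma."""
    z = NormalDist().inv_cdf(confidence)   # standard normal quantile z_{1-alpha}
    var_pct = z * sigma - mu               # loss quantile of the return
    return position * var_pct
```

For example, with zero mean, sigma = 0.02, and a $1 position, the 99% daily VaR is roughly $0.0465.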

Figure 1. Log return of ACI and GS (2002-2005)

The next stage is to construct the return and volatility equations by means of parameter

estimation. Parameter estimation is conducted using garchfit, one of the tools in the GARCH toolbox provided by MATLAB. The equations acquired are: Arch Coal, Inc. (ACI)

(13) Goldman Sachs Group, Inc. (GS)

(14)

The following step is to check whether the acquired models are suitable, using the same inspections and tests as before. Once the models are approved, the 1-step-ahead forecasts of the conditional mean and volatility for ACI and GS are calculated using Arch Coal, Inc. (ACI)

(15) Goldman Sachs Group, Inc. (GS)

(16) Next, the covariance matrix of ACI and GS is computed and shown below

(17)

Using Eqs. 15~16, the conditional mean and volatility for both risk factors can be calculated. Investing $1 in the portfolio, with 30% in ACI and 70% in GS, the conditional mean and volatility of the portfolio are evaluated. The following table presents the results:

Table 1. Conditional mean and volatility

                          ACI       GS           Portfolio
Conditional mean        0.0015    0.00043953   0.00076012
Conditional volatility  0.0209    0.0145       0.0134

Thus, with a 99% confidence level, the daily VaR of the portfolio is

(18)

3. Backtesting VaR

The VaR calculation using the ARMA(1,1)-GARCH(1,1) model derived in the previous subsection is useful only if it predicts loss accurately. Therefore, its predictive ability needs to be compared with that of the conventional model, which can be defined as

VaR = z_{1-α} s,   (19)

where s is the standard deviation of the historical data. The two models' performances are compared using the backtesting method. Backtesting is a formal statistical technique for checking whether the VaR forecast describes the real loss over the given time horizon. Historical log returns of ACI and GS for 500 days are used, and daily VaR forecasts are made with both models for the following 250 days, with a position of $1 and a confidence level of 99%. Both VaR forecasts are compared to the hypothetical P&L from the historical data. A day on which VaR is underestimated, i.e. the loss is higher than the VaR, is called an exception or violation of VaR. If there are too many exceptions, the model is not suitable for the portfolio in question.

Figure 2. Backtesting for both VaR calculation models

Ideally, for a time horizon of 250 days and a confidence level of 99%, the expected number of exceptions is 2.5. From the backtesting result, it can be seen that both models produce only one exception. According to the Basel Committee on Banking Supervision (BCBS) (1996), an accurate model falls in the green zone; the yellow zone indicates a problem in the model; and a model that falls in the red zone is automatically assumed to be incorrect or inaccurate.

Table 2. Basel Traffic Light (confidence level = 99%)

Zone     Number of exceptions   Increase in k
Green    0-4                    0.00
Yellow   5                      0.40
         6                      0.50
         7                      0.65
         8                      0.75
         9                      0.85
Red      10+                    1.00

From Fig. 2, both models fall in the green zone; thus both models are accurate for the portfolio in question.

4. VaR using ARMA(1,1)-GARCH(1,1) is better than the conventional one

To show that the first model is better, the formula of Capital Adequacy Ratio (CAR) will be used.


CAR = Capital / (CR + OR + MR),   (20)

where
CR = credit risk,
OR = operational risk,
MR = market risk = k × VaR,
k = multiplicative factor (see Table 2).

CAR is used by the Basel Committee to regulate the minimum amount of capital reserve for a bank, as stated in the Basel II Accord. This rule ensures that a bank has enough capital reserve to cover the risks to which it is exposed: higher risk requires more capital as backup, and vice versa. From Eq. 20 it can be seen that, with the same amount of capital, a higher VaR value results in a higher MR, which ends up in a lower CAR value. In relation to the Basel Traffic Light, if the model falls in the yellow or red zone, the increase of the multiplicative factor k likewise lowers the CAR value. If the CAR falls below the regulatory minimum (8% under Basel II), the bank will be asked to increase its capital reserve; if it is unable to do so, it can be closed down. For the portfolio considered in this paper, Fig. 2 shows that the VaR calculated from the conventional model is higher. Therefore, with the same multiplicative factor k, to yield an equal CAR value, the capital reserve needed under the ARMA(1,1)-GARCH(1,1) model is less than under the conventional one. The difference in capital reserve can be used for expanding the business, for example by engaging in other investments. This shows that for VaR calculation, with the same accuracy in risk prediction, the ARMA(1,1)-GARCH(1,1) model is better than the conventional one.

III. CONCLUSION

The financial market is a dynamic environment, full of risks. Therefore,

risk management plays a very important role for many corporate entities. One of the popular methods of managing risk is Value-at-Risk (VaR) calculation. The advantage of using VaR is its ability to quantify a portfolio's loss potential by means of a single number, easily understood and used by many people. Its shortcoming is the difficulty of choosing the right model: using the wrong model will produce inaccurate results. In this paper, the use of the ARMA(1,1)-GARCH(1,1) model for valuating the VaR of a portfolio with two risk factors is based on its ability to describe the leptokurtic distribution of financial returns, volatility clustering and volatility mean-reversion effects, and stationarity, and to accommodate the assumption that the conditional mean is not constant over time. The only drawback of this model is the assumption that the white noise is normally distributed, which is hardly true for most financial data; in reality, the returns of financial data exhibit non-normality. Through the simulation, it is shown that the model is applicable to a portfolio constructed using real data. Moreover, it is quite simple to calculate VaR with this model, only slightly complicated by the numerical computation needed to estimate the parameters, which can be handled in MATLAB. Furthermore, from the backtesting result for the portfolio (see Figure 2), the VaR from the ARMA(1,1)-GARCH(1,1) model is more sensitive to the hypothetical P&L movement. This shows that, with the same power and accuracy in predicting risk, the ARMA(1,1)-GARCH(1,1) model needs less capital reserve than the conventional one, making it a better model for VaR calculation.


ACKNOWLEDGEMENTS I would like to acknowledge and extend my heartfelt gratitude to Mr. Yosef Oktavianus Senobua, M. Sc. who has given valuable comments and made the completion of this paper possible. REFERENCES [1]. Bagasheva, BS, Fabozzi, FJ, Hsu,

JSJ, and Rachev, ST (2008). Bayesian Methods in Finance, John Wiley & Sons, Inc., New Jersey.

[2]. Basel Committee on Banking Supervision (2005). Part 2: The First Pillar - Minimum Capital Requirements. Retrieved May 31, 2010, from http://www.bis.org/publ/bcbs118.htm

[3]. Berry, R (2009). ”Backtesting Value-At-Risk”, J.P. Morgan Investment Analytics & Consulting, pp 5-6.

[4]. Bodie, Z, Kane, A, and Marcus, AJ (2008). Investments, 7th ed., McGraw-Hill/Irwin, New York.

[5]. Brockwell, PJ, and Davis, RA (2002). Introduction to Time Series and Forecasting, 2nd ed., Springer-Science+Business Media, Inc., New York.

[6]. Craig, AT, Hogg, RV, and McKean, JW (2005). Introduction to Mathematical Statistics, 6th ed., Pearson Education, Inc., New Jersey.

[7]. Franke, J, Hafner, CM, and Hardle, WK (2008). Statistics of Financial Markets, An Introduction, 2nd ed., Springer-Verlag Berlin Heidelberg.

[8]. Jorion, P (2001). Value at Risk : The New Benchmark for Managing Financial Risk, 2nd ed., McGraw-Hill, New York.

[9]. Nieppola, O (2009). Backtesting Value-at-Risk Models, Department of Economics, Helsinki School of Economics.

[10]. Tsay, RS (2005). Analysis of Financial Time Series, 2nd ed., John Wiley & Sons, Inc., New Jersey.


Decline Curve Analysis in a Multiwell Reservoir System using State-Space Model

S. Wahyuningsih(1), S. Darwis(2), A.Y. Gunawan(3 ), A.K. Permadi(4) (1)Statistics Study Program, Faculty of Mathematics & Natural Sciences, Mulawarman University, Samarinda,

East Kalimantan, Indonesia (Email: [email protected])

(2)Statistics Research Division, Faculty of Mathematics & Natural Sciences, Institut Teknologi Bandung, Indonesia

(Email: [email protected]) (3)Industrial and Financial Research Division, Faculty of Mathematics & Natural Sciences, Institut Teknologi

Bandung, Indonesia (Email: [email protected])

(4)Reservoir Engineering Research Division, Faculty of Petroleum Engineering, Institut Teknologi Bandung, Indonesia

(Email: [email protected])

ABSTRACT

This paper presents multi-well decline curve analysis using a state-space model. A predator-prey-like model is proposed for such a system, and its parameters are estimated using the Kalman filter. The multi-well predator-prey-like system is represented as a state-space model in which the states are the model parameters and the observations are the well production rates. The state-space model for decline curve analysis, for both single- and multi-well systems, has been tested on real as well as simulated observations. When the well interactions are significant, the use of the proposed model is more beneficial.

KEYWORDS
Decline curve analysis, state-space model, Kalman filter, predator-prey-like model.

I. INTRODUCTION

Predicting oil and gas production has attracted many researchers. Fetkovich [2] developed the type curve, Marhaendrajana and Blasingame [4] extended the type-curve method to a multiwell system, and Li and Horn [3] verified the Decline Curve Analysis (DCA) method. One frequently used technique for predicting production in oil, gas and geothermal reservoirs is DCA, which is based on the empirical Arps equation [1]. The Arps DCA is described by the differential equation

1btt

dY DYdt

(1)

Here, tY is the production rate at time t , D is the decline rate, and b is the decline exponent 0 1b . Arps suggested a tentative classification of decline curve, based on their

loss ratios /t

t

YdY dt

. If the loss ratio is a

constant c where c < 0 or the first derivative of the loss ratio is near b = 0, the curve is the exponential type, 0.

DttY Y e

(2) Eq. (2) represents the relationship between production tY and t for constant loss ratio, where 0Y is the initial production rate. Non constant loss ratio corresponds to hyperbolic DCA for 0 1b , and harmonic DCA for b = 1. Wahyuningsih et.al [5] showed that the univariate Kalman filter is a good approach to predict next observation based on the Arps decline curve analysis in single well case. In many cases, production of one well is influenced by other wells in the same reservoir. So, there can be some interaction among wells. Sometimes one well can produce higher than the others. This paper aims propose a state space model as DCA model for multiwell system. We construct state-space model and derive the


recursive formula for the multiwell system in Section 2. Modeling the interaction among wells using a predator-prey approach is described in Section 3, together with an example illustrating the application of the model. The final section consists of some concluding remarks.

II. STATE SPACE MODEL
Consider the state-space form of the multivariate model:

    Measurement:  Z_{t+1} = Y_{t+1} + b_{t+1},
    State:        Y_{t+1} = Phi Y_t + a_{t+1},   (3)

where a_t ~ N(0, Sigma_a) and b_t ~ N(0, Sigma_b) are mutually independent white-noise vectors. Unlike other time-series models such as the autoregressive process, the state-space model does not require the random variables to be stationary. The parameter Phi can be estimated by ordinary least squares. Using Bayes' theorem, the recursive equations for system (3) are

    Yhat_{t+1} = Phi Yhat_t + K_{t+1} e_{t+1},
    V_{t+1} = R_{t+1} - K_{t+1} R_{t+1},   (4)

where e_{t+1} = Z_{t+1} - Zhat_{t+1} = (Y_{t+1} - Yhat_{t+1}) + b_{t+1} is the one-step forecast error, R_{t+1} = Phi V_t Phi' + Sigma_a is the prediction covariance, and

    K_{t+1} = R_{t+1} (R_{t+1} + Sigma_b)^{-1}

is called the Kalman gain. The Kalman gain determines the weight given to the forecast-error vector. Eqs. (4) are used to update the mean and covariance, and hence the distribution, of the state Y_{t+1} after the new observation Z_{t+1} has become available.

III. INTERACTION MODEL
Assume that there are interactions among the production wells. The interaction among production wells is analogized as a predator-prey-like model. The analogy is not a pure predator-prey model, because oil and gas production

can only decrease or stay constant, even when there is no interaction among the wells. When wells do interact, one production well takes production from another: that well acts as a predator and the other as prey. The mathematical model of the interaction among wells is proposed as

    dY_t/dt = D Y_t,   (5)

where

    D = [ -D_1             s_{1,2} alpha_{1,2}  ...  s_{1,n} alpha_{1,n} ]
        [ s_{2,1} alpha_{2,1}   -D_2            ...  s_{2,n} alpha_{2,n} ]
        [ ...                   ...             ...  ...                 ]
        [ s_{n,1} alpha_{n,1}   s_{n,2} alpha_{n,2}  ...  -D_n           ],

with alpha_{i,j} >= 0 for i, j = 1, 2, ..., n, and

    s_{i,j} = +1 if Y_i acts as a predator,
    s_{i,j} = -1 if Y_i acts as a prey,   i != j.

Here D_i is the decline rate of well i in the multiwell system, and alpha_{i,j} is the interaction coefficient between wells i and j. The parameters D_i and alpha_{i,j} can be estimated simultaneously using the Gauss-Newton method. As an example, Figure 1 shows a simulation of the model for two production wells. In the simulation, production well 1 is generated with D_1 = 0.06 and mean 62,099, and production well 2 with D_2 = 0.01 and mean 50,337. It is assumed that there is some interaction between the wells. Each well has 20 observations, divided into a training data set and a validation data set. Based on the data we have

    V_0 = [ 44.944  5.081 ]
          [  8.081  0.907 ].

Fitting a VAR process, we obtain

    Phi = [  0.854  0.139 ]
          [ -0.018  1.013 ].

The estimated interaction coefficients for these data are alpha_{1,2} = 0.151 and alpha_{2,1} = 0.028. Given different errors, we obtain different patterns of forecast production (Figures 2, 3, and 4).
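The two-well interaction model (5) can be sketched numerically as below. The decline rates and interaction coefficients follow the values quoted above, but the sign pattern (well 1 as predator, well 2 as prey), the initial rates, and the Euler step are assumptions for illustration:

```python
import numpy as np

# Interaction matrix D of Eq. (5) for two wells: well 1 as predator
# (s_{1,2} = +1), well 2 as prey (s_{2,1} = -1).
D1, D2 = 0.06, 0.01          # decline rates
a12, a21 = 0.151, 0.028      # interaction coefficients alpha_{i,j}
D_mat = np.array([[-D1,  a12],
                  [-a21, -D2]])

# Forward-Euler integration of dY/dt = D_mat @ Y over 20 "days".
dt, steps = 1.0, 20
Y = np.array([62.0, 50.0])   # assumed initial production rates
path = [Y.copy()]
for _ in range(steps):
    Y = Y + dt * (D_mat @ Y)
    path.append(Y.copy())
path = np.array(path)        # shape (21, 2): rates of wells 1 and 2
```

With these signs, the predator well initially gains production at the prey well's expense, while the prey well declines faster than its own decline rate alone would imply.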


Figure 1. Simulation 1

Figure 2. Simulation 2

Figure 3. Simulation 3

Figure 4. Simulation 4

Figure 5. Kalman gain of production well 1 and 2
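The Kalman gains of Figure 5 come from iterating the recursion (4). A minimal sketch follows; the transition matrix is the VAR estimate from Section III, while the noise covariances, the initial state, and the synthetic observations are illustrative assumptions, not the paper's data:

```python
import numpy as np

Phi = np.array([[0.854, 0.139],      # VAR transition from Section III
                [-0.018, 1.013]])
Sigma_a = 0.5 * np.eye(2)            # assumed state-noise covariance
Sigma_b = 1.0 * np.eye(2)            # assumed measurement-noise covariance

rng = np.random.default_rng(0)
Yhat = np.array([62.0, 50.0])        # assumed initial state estimate
V = np.eye(2)                        # initial state covariance
gains = []
for t in range(20):
    Z = Phi @ Yhat + rng.normal(0.0, 1.0, 2)   # synthetic observation
    Ypred = Phi @ Yhat                         # one-step prediction
    R = Phi @ V @ Phi.T + Sigma_a              # prediction covariance
    K = R @ np.linalg.inv(R + Sigma_b)         # Kalman gain, Eq. (4)
    Yhat = Ypred + K @ (Z - Ypred)             # mean update with error e
    V = R - K @ R                              # covariance update
    gains.append(K.copy())

# As in the paper, the gain settles to a steady-state matrix quickly.
steady = np.allclose(gains[-1], gains[-2], atol=1e-3)
```

The covariance recursion does not depend on the observations, which is why the gain converges to a constant matrix after a few steps.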

Figures 1, 2, 3, and 4 show the prediction and forecast of each well's production using the vector and scalar Kalman filters. The vector Kalman filter gives more realistic predictions than the scalar one: the scalar Kalman filter tends to underestimate well 1 and overestimate well 2, and the results are similar for the forecasts. The vector Kalman filter tends toward a steady state with

    K_{t+1} = [ 0.591  0.014 ]
              [ 0.014  0.620 ]   for t >= 6

(see Figure 5), while the scalar Kalman filter tends to a steady state with K_{t+1,1} = 0.606 and K_{t+1,2} = 0.616 for t >= 6. The implication of a steady-state Kalman gain is that the forecast model can be simplified to a multivariate regression model.

CONCLUSIONS
Decline curve analysis using the state-space model can be used as a prediction model for well production. A predator-prey-like model is a promising approach for


modeling inter-well interaction. The proposed model is more realistic when the interaction among wells cannot be neglected, i.e. when the wells are located relatively close to each other.

ACKNOWLEDGEMENT
This research was supported by Competitive Grant 2010, Directorate of Higher Education, Department of National Education, Republic of Indonesia.

REFERENCES
[1] Arps, JJ (1945). "Analysis of Decline Curves", Trans. AIME, 160, pp 228-247.

[2] Fetkovich, MJ (1980). "Decline Curve Analysis Using Type Curves", SPE 4629, pp 1065-1077.

[3] Li, K and Horne, RN (2005). “Verification of Decline Curve Analysis Model for Production Prediction”, SPE 93878.

[4] Marhaendrajana, T and Blasingame, TA (2001). "Decline Curve Analysis Using Type Curves - Evaluation of Well Performance Behavior in a Multiwell Reservoir System", SPE 71517.

[5] Wahyuningsih, S, Darwis, S, Gunawan, AY, Permadi, AK (2008). “Predicting Steam Production Using Kalman Filter”, FJTS, Vol.26 Issue 2, pp 219-225.


Study of Role of Interferon-Alpha in Immunotherapy

through Mathematical Modelling

Mustafa Mamat1, Edwin Setiawan Nugraha1, Agus Kartono 2, W M Amir W Ahmad1

1Department of Mathematics, Faculty of Sciences and Technology, Universiti Malaysia Terengganu, 21030 K. Terengganu, Malaysia.

Email: [email protected] 2Laboratory for Theoretical and Computational Physics, Department of Physics, Faculty of Mathematical and

Natural Sciences, Institut Pertanian Bogor, Kampus IPB Darmaga, Bogor 16680, Indonesia. Email: [email protected]

ABSTRACT
A mathematical model in the form of ordinary differential equations describing tumor-immune interaction in the presence of immunotherapy is presented. The model is used to investigate the effect on tumor growth of adding interferon-alpha (INF-α) to an immunotherapy treatment of interleukin-2 (IL-2) and tumor infiltrating lymphocytes (TIL). Numerical simulation shows that the combination of TIL, IL-2 and INF-α is more effective at enhancing the strength of the immune system than the combination of TIL and IL-2 alone.

KEYWORDS
Mathematical modeling; ordinary differential equations; immune response; interferon alpha (INF-α); tumor infiltrating lymphocytes (TIL); interleukin-2 (IL-2).

1. INTRODUCTION
One type of promising treatment for cancer is immunotherapy. The principle of this treatment is to increase the strength of the immune system so that it attacks cancer cells. The evidence for the potential of the immune system to control cancer has been verified in laboratory and clinical experiments [12, 13]. This has motivated new research into the development of immunotherapy [1, 3]. There are three main categories of this treatment: immune response modifiers, monoclonal antibodies, and vaccines [4]. The first category contains substances that affect the immune response, such as interleukin-2 (IL-2), interferons, tumor necrosis factor (TNF), colony-stimulating factors (CSF), and B-cell growth factor. In the second category, monoclonal antibodies are currently being developed to target specific cancer antigens. The third category comprises vaccines, which are generally used therapeutically and created from tumor cells.

Interaction between tumor cells and the immune system has been studied through mathematical models by numerous authors. In 1998, a mathematical model describing the dynamics between tumor cells, effector cells (the activated immune system cells) and IL-2 was proposed by Kirschner and Panetta [10]. The important parameter in their model is the antigenicity of the tumor (c), a measure of the ability of the immune system to recognize tumor cells. Their work is able to explain short-term oscillations in tumor size as well as long-term tumor relapse. They also investigated the effect of both active and adoptive immunotherapy on tumor growth, and indicated that treatment with adoptive immunotherapy may be the better option, either as a monotherapy or in conjunction with IL-2. Kuznetsov et al. [11] presented a mathematical model of the cytotoxic T-lymphocyte response to the growth of an immunogenic tumor. The model exhibits a number of phenomena that are seen in vivo, including immunostimulation of tumor growth, "sneaking through" of the tumor, and the formation of a tumor "dormant state". In 2003, Szymanska [14] proposed a basic mathematical model in the form of six ordinary differential equations. The model describes the immune response when cancer cells are recognized, and is extended to include two types of immunotherapy: active and adoptive. Her study shows that adoptive immunotherapy seems better for the patient, at least in some cases. In 2006, Pillis et al. [4] proposed a mathematical model in the form of ordinary differential equations describing the dynamics of tumor cells and immune cells in


the presence of immunotherapy and chemotherapy. There are two immunotherapy drugs in their model, interleukin-2 (IL-2) and tumor infiltrating lymphocytes (TIL). One of their results shows that mixed immunotherapy and chemotherapy is more effective at eliminating tumor cells than either immunotherapy or chemotherapy alone. A mathematical model of immunotherapy and chemotherapy has also been proposed by Isaeva and Osipov [9]. Their model includes two immunotherapy drugs, interleukin-2 (IL-2) and interferon-alpha (INF-α). In their study, they compared several treatment strategies: chemo/immuno, immuno/chemo and concurrent chemo/immuno therapy. Numerical simulations show that the chemo/immuno sequence is more effective, while concurrent chemoimmunotherapy is more sparing. In this paper, we propose a mathematical model for immunotherapy treatment by extending Pillis's model [4] to include the presence of interferon-alpha (INF-α), in the absence of chemotherapy. The outline of this paper is as follows. In Section 2, we present a system of ordinary differential equations which describes the tumor-immune interaction under the influence of immunotherapy. In Section 3, we collect parameter values for this model from previous works. Section 4 presents numerical simulations based on the parameters of Section 3. In Section 5, we summarize and discuss our conclusions.

2. MATHEMATICAL MODEL
Pillis's model [4] describes the effect of interleukin-2 (IL-2), tumor infiltrating lymphocytes (TIL), and chemotherapy on tumor growth. Isaeva and Osipov's model describes the effect of IL-2, INF-α and chemotherapy on tumor growth. In our study, we modified Pillis's model [4] by including INF-α. We define six populations in our model: tumor cells T(t), natural killer cells N(t), CD8+ T cells L(t), circulating lymphocytes C(t), the concentration of IL-2 I(t), and the concentration of INF-α I_α(t). The mathematical model reads as follows:

    dT/dt = aT(1 - bT) - cNT - DT - c'TL,   (1)

    dN/dt = eC - fN + g [T^2/(h + T^2)] N - pNT,   (2)

    dL/dt = -mL + j [D^2 T^2/(k + D^2 T^2)] L - qLT + (r_1 N + r_2 C) T
            - uNL^2 + p_i L I/(g_i + I) + v_L(t),   (3)

    dC/dt = alpha - beta C,   (4)

    dI/dt = -iI - jLI - kTI + v_I(t),   (5)

    dI_α/dt = v_α(t) - g_α I_α,   (6)

where

    D = d (L/T)^l / (s + (L/T)^l).


Equation (1) describes the growth of the tumor population. The first term expresses tumor growth in logistic form [5]. The interaction between NK cells and the tumor is represented by the term cNT, whereas tumor lysis by CD8+ T cells has the form DT. Both terms represent negative interactions between two populations, describing competition for space and nutrients as well as regulatory action and direct cell kill. We suppose that the parameter c' depends on the INF-α concentration as

    c' = c_CTL 2^(I_α / I_α0),

where c_CTL is the rate of tumor inactivation by CTL [7]. This agrees with the fact that INF-α enhances immune-mediated antitumor responses by increasing the expression of MHC molecules on tumor cells.

The second equation describes the rate of change of the NK cell population. In this model, the growth of the NK cell population is tied to the overall immune health level as measured by the population of circulating lymphocytes; hence the growth is represented by the first two terms, eC - fN. The NK cell recruitment term is as described by Pillis and Radunskaya [5], namely g T^2/(h + T^2) N. Inactivation of cytolytic potential occurs when an NK cell has interacted with tumor cells several times and ceases to be effective; this inactivation term has the form pNT [5].

The rate of change of the CD8+ T cell population is described by equation (3). The cell growth term for CD8+ T cells consists only of a natural death rate, since no CD8+ T cells are assumed to be present in the absence of tumor cells; thus the term is -mL [4]. The CD8+ T cell recruitment term is as described by Pillis and Radunskaya [5] and has the form j D^2 T^2/(k + D^2 T^2) L. For inactivation we use the term developed by Pillis and Radunskaya [5], written in the form qLT. CD8+ T cells can also be recruited by the debris from tumor cells lysed by NK cells; this recruitment is proportional to the number of cells killed, giving the term r_1 NT [4]. The immune system is also stimulated by the tumor to produce more CD8+ T cells; recognition of the presence of the tumor is proportional to the average number of encounters between circulating lymphocytes and the tumor, represented by r_2 CT [4]. The term uNL^2 describes the NK cell regulation of CD8+ T cells, which occurs when there are very high levels of activated CD8+ T cells without responsiveness to cytokines present in the system [4]. The term v_L(t) is a function of time and represents the drug intervention of tumor infiltrating lymphocytes to boost the immune system [4].

Equation (4) describes the rate of change of the circulating lymphocyte population. We assume that circulating lymphocytes are generated at a constant rate and that each cell has a natural lifespan; thus the terms are alpha - beta C [4].

The fifth equation describes the concentration of IL-2 in the bloodstream. The first term, -iI, expresses the natural decay of IL-2 [4]. The next term, -jLI, represents the consumption rate of IL-2 [7]. It was found that inhibition of IL-2 results from an accumulation of immune-suppressing substances, prostaglandins, whose number is proportional to the tumor population; prostaglandins suppress the production of IL-2 and can directly destroy its molecules [7]. The third term, -kTI, expresses the IL-2 destruction rate by prostaglandins [11]. The last term, v_I(t), is a function of time and represents the amount of IL-2 injected into the patient [4].

The sixth equation describes the concentration of INF-α in the bloodstream. The first term, v_α(t), is a function of time and represents the amount of INF-α injected into the patient [7]. The second term, -g_α I_α, expresses the natural decay of INF-α [7].


Table 1. Estimated human parameter values for numerical analysis

Parameter (Patient 9 / Patient 10) | Units | Description | Source
a = 4.31 x 10^-1 / 4.31 x 10^-1 | day^-1 | Tumor growth rate | [6]
b = 1.02 x 10^-9 / 1.02 x 10^-9 | cell^-1 | 1/b is tumor carrying capacity | [6]
c = 6.41 x 10^-11 / 6.41 x 10^-11 | day^-1 cell^-1 | Fractional (non-)ligand-transduced tumor cell kill by NK cells | [6], [7]
d = 2.34 / 1.88 | day^-1 | Saturation level of fractional tumor cell kill by CD8+ T cells; primed and challenged with ligand-transduced cells | [7]
e = 2.08 x 10^-7 / 2.08 x 10^-7 | day^-1 | Fraction of circulating lymphocytes that become NK cells | [11]
l = 2.09 / 1.81 | dimensionless | Exponent of fractional tumor cell kill by CD8+ T cells | [7]
f = 4.12 x 10^-2 / 4.12 x 10^-2 | day^-1 | Death rate of NK cells | [6]
g = 1.25 x 10^-2 / 1.25 x 10^-2 | day^-1 | Maximum NK cell recruitment by ligand-transduced tumor cells | [7]
h = 2.02 x 10^7 / 2.02 x 10^7 | cell^2 | Steepness coefficient of the NK cell recruitment curve | [11]
j = 2.49 x 10^7 / 2.49 x 10^7 | day^-1 | Maximum CD8+ T cell recruitment rate; primed with ligand-transduced cells | [6], [7]
k = 3.66 x 10^7 / 5.66 x 10^7 | cell^2 | Steepness coefficient of the CD8+ T cell recruitment curve | [6], [7]
m = 2.04 x 10^-1 / 9.12 | day^-1 | Death rate of CD8+ T cells | [15]
q = 1.24 x 10^-6 / 1.24 x 10^-6 | day^-1 cell^-1 | CD8+ T cell inactivation rate by tumor cells | [11]
p = 3.42 x 10^-6 / 3.59 x 10^-6 | day^-1 cell^-1 | NK cell inactivation rate by tumor cells | [7]
s = 8.39 x 10^-2 / 5.12 x 10^-1 | dimensionless | Steepness coefficient of the tumor-(CD8+ T cell) lysis term D; primed and challenged with ligand-transduced cells | [6]
r1 = 1.10 x 10^-7 / 1.10 x 10^-7 | day^-1 cell^-1 | Rate at which CD8+ T cells are stimulated to be produced as a result of tumor cells killed by NK cells | [15]
r2 = 6.50 x 10^-11 / 6.50 x 10^-11 | cell^-1 day^-1 | Rate at which CD8+ T cells are stimulated to be produced as a result of tumor cell interaction with circulating lymphocytes | -
u = 3.00 x 10^-10 / 3.00 x 10^-10 | cell^-2 day^-1 | Regulatory function by NK cells of CD8+ T cells | -
alpha = 7.50 x 10^8 / 5.00 x 10^8 | cell day^-1 | Constant source of circulating lymphocytes | [8]
beta = 1.20 x 10^-2 / 8.00 x 10^-3 | day^-1 | Natural death and differentiation of circulating lymphocytes | [8]
gamma = 9.00 x 10^-1 / 9.00 x 10^-1 | day^-1 | Rate of chemotherapy drug decay | [2]
p_i = 1.25 x 10^-1 / 1.25 x 10^-1 | day^-1 | Maximum CD8+ T cell recruitment by IL-2 | [10]
g_i = 2.00 x 10^2 / 2.00 x 10^2 | cell^2 | Steepness of CD8+ T cell recruitment curve by IL-2 | -
i = 1.00 x 10^1 / 1.00 x 10^1 | day^-1 | Rate of IL-2 drug decay | [10]
c_CTL = 4.4 x 10^-9 | cell^-1 day^-1 | Rate of tumor cell inactivation by CD8+ T cells | [9]
j = 3.3 x 10^-9 | cell^-1 day^-1 | Consumption rate of IL-2 by CD8+ T cells | [9]
k = 1.8 x 10^-8 | cell^-1 day^-1 | Inactivation of IL-2 molecules by prostaglandins | [9]


3. PARAMETER VALUES
To complete the development of the mathematical model and its simulation, it is necessary to obtain accurate parameters. Equations (1)-(6) are very sensitive to the choice of parameters, which can vary greatly from one individual to another, so multiple data sets can be used in order to obtain suitable parameter ranges. In our study, we used parameter values from previous works; they are provided in Table 1.

4. NUMERICAL SIMULATION
First, we examine the model with the set of parameters representing patient 9 in Table 1. We present a case in which the immune system responds to tumor growth in the presence of INF-α, in order to investigate the strength of the immune system in controlling tumor growth after the patient receives immunotherapy with the addition of INF-α. To study the behavior of the model across patients, we also simulate the model using the parameters of patient 10 in Table 1. This patient comparison provides insight into patient-specific parameter sensitivity.

4.1. ADDITION OF INF-α FOR PATIENT 9
In this subsection, we examine immunotherapy treatment with injection of IL-2, TIL and INF-α. We are interested in whether INF-α can make the immune system stronger. We begin by examining the immunotherapy treatment without INF-α against 1 x 10^6 tumor cells, with an initial immune system of 1 x 10^3 NK cells, 10 CD8+ T cells, and 6 x 10^8 circulating lymphocytes; 10^9 TIL are administered on day 7 to day 8, and IL-2 is administered in 6 pulses at strength 5 x 10^6 on day 8 to day 11. The results are shown in Figures 1A and 1B. These figures show that this treatment successfully increases the strength of the immune system, and consequently the immune system is able to kill the tumor cells completely by day 16. With the addition of INF-α to the immunotherapy treatment, I_α(t) = 5 MU administered for four days in a ten-day cycle, the results show that the treatment allows the immune system to become stronger against the tumor cells, as seen in Figures 2A and 2B; consequently the tumor cells are killed completely by day 11. All these results demonstrate that the tumor cells die more quickly in the presence of INF-α in the immunotherapy treatment. We note, however, that immunotherapy using TIL, IL-2 and INF-α may be effective only for tumor sizes of order 10^6. Figures 3A and 3B show that this treatment is not effective in treating a tumor of size 1 x 10^7: the addition of INF-α has no significant effect in controlling tumor growth. This result is consistent with previous work [4].

4.2. ADDITION OF INF-α FOR PATIENT 10
In order to examine whether these treatment simulations vary from patient to patient, we change the patient-specific parameters extracted from Rosenberg's study and run the simulation with the parameters for patient 10 [6], provided in Table 1. We begin with 1 x 10^6 initial tumor cells. The numerical experiments show that the immune system can reduce the tumor cell population before day 20 but cannot fully control it, as seen in Figure 5A, so the tumor grows to a dangerous level even though patient 10 receives 10^9 TIL from day 7 through day 8 and IL-2 in 6 pulses at strength 5 x 10^5 from day 8 to day 11. Comparing this result with the numerical experiment for patient 9 in Figure 1A, where the tumor cells can be killed, the immune system of patient 10 is weaker than that of patient 9. We then study the effect of the addition of INF-α on the dynamics of tumor cells and immune cells. Our results show that this treatment makes the immune system stronger than before, and the tumor cells are killed completely by day 17, as seen in Figure 5B; however, the tumor relapses again at day 23.
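The pulsed drug inputs described above for patient 9 can be sketched as time functions; the exact pulse shapes and widths are assumptions (the paper gives only totals and administration windows):

```python
def v_TIL(t):
    """TIL bolus: 1e9 cells over day 7 to day 8 (assumed uniform rate)."""
    return 1e9 if 7.0 <= t < 8.0 else 0.0

def v_IL2(t):
    """IL-2: six pulses at strength 5e6, evenly spaced over days 8-11
    (pulse spacing of half a day and width 0.25 day are assumptions)."""
    for kk in range(6):
        start = 8.0 + kk * 0.5
        if start <= t < start + 0.25:
            return 5e6
    return 0.0

def v_INF(t):
    """INF-alpha: 5 MU/day for the first four days of each ten-day cycle."""
    return 5.0 if (t % 10.0) < 4.0 else 0.0
```

These functions would be passed as the vL, vI and vA inputs of the right-hand side when integrating the treated system.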


Figure 1. Dynamics of tumor cells and immune cells under the influence of immunotherapy without the addition of INF-α, with initial tumor size 1 x 10^6 cells. Panel B shows more detail than panel A.

Figure 2. Dynamics of tumor cells and immune cells under the influence of immunotherapy with the addition of INF-α, with initial tumor size 1 x 10^6 cells. Panel A shows more detail than panel B.



Figure 3. Dynamics of tumor cells and immune cells under the influence of immunotherapy, with initial tumor size 1 x 10^7 cells. A: immunotherapy without the addition of INF-α. B: immunotherapy with the addition of INF-α.

Figure 4. Immunotherapy drug concentrations. A: IL-2 concentration. B: INF-α concentration.



Figure 5. Dynamics of tumor cells and immune cells under the influence of immunotherapy, with initial tumor size 1 x 10^6 cells. A: immunotherapy without the addition of INF-α. B: immunotherapy with the addition of INF-α.

5. DISCUSSION AND CONCLUSIONS
In this paper, we have extended Pillis's model [4] to include the presence of INF-α. The model is a system of ordinary differential equations describing the dynamics of tumor cells and immune cells under immunotherapy treatment. The system contains v_L(t), v_I(t) and v_α(t), which represent the injections of TIL, IL-2 and INF-α into the system, respectively; the effect of INF-α treatment on the tumor-immune interaction is the main focus. Our numerical experiments show that the addition of INF-α to immunotherapy plays a significant role in the dynamics of tumor cells and immune cells. Based on Figures 1 and 2, INF-α treatment can enhance the immune system, and consequently the tumor cells die more quickly. Therefore, immunotherapy combining TIL, IL-2 and INF-α is more effective than the combination of TIL and IL-2 without INF-α. For the parameters of patient 10 and an initial tumor size around 10^6, our study shows that injection of TIL, IL-2 and INF-α reduces the tumor cells only temporarily, with tumor recurrence later. If the initial tumor reaches 10^7 cells, the treatment has no significant effect, since the tumor cells cannot be eliminated.

In the future, it will still be necessary to investigate the influence of other cytokines, such as IL-10 and IL-12, on the strength of the immune system in order to find better treatments.

ACKNOWLEDGEMENTS
The authors acknowledge the financial support of the Department of Higher Education, Ministry of Higher Education, Malaysia, through Fundamental Research Grant Scheme (FRGS) Vot 59146.

REFERENCES
[1]. Blattman, J.N. and Greenberg, P.D.,

In the future, this study is still necessary to further investigation how the influence of other cytokines such as IL-10, IL-12 on the strength of the immune system to find better treatment. ACKNOWLEDGEMENTS The authors acknowledge the financial support from Department of Higher Education, Ministry of Higher Education, Malaysia through Fundamental Research Grant Scheme (FRGS) Vot 59146. REFERENCES [1]. Blattman, J.N. and Greenberg, P.D.,

(2004). Cancer immunotherapy: A treatment for the masses. Science, 305:200–205.

[2]. Calabresi, P. and Schein, P.S. (eds.), (1993). Medical Oncology: Basic Principles and Clinical Management of Cancer. McGraw-Hill, New York, second edition.

[3]. Couzin, J. (2002). Select T cells, given space, shrink tumors. Science, 297:1973.

[4]. de Pillis, L. G., Gu, W. and Radunskaya, A. E., (2006). Mixed immunotherapy and chemotherapy of tumors: modeling, applications and



biological interpretations, J. Theoret Biol 238(4): 841-862.

[5]. de Pillis, L.G. and Radunskaya, A.E. (2003). Immune response to tumor invasion. In K.J. Bathe, editor, Computational Fluid and Solid Mechanics, 2: 1661–1668, M.I.T.

[6]. Diefenbach, A., Jensen, E. R., Jamieson, A. M. and Raulet, D., (2001). Rael and H60 ligands of the NKG2D receptor stimulate tumor immunity, Nature 413: 165-171.

[7]. Dudley, M. E., Wunderlich, J. R., Robbins, P. F., Yang, J. C., Hwu, P., Schwartzentruber, D. J., Topalian, S. L., Sherry, R., Restifo, N. P., Hubicki, A. M., Robinson, M. R., Raffeld, M., Duray, P., Seipp, C. A., Rogers-Freezer, L., Morton, K. E., Mavroukakis, S. A., White, D. E., and Rosenberg, S. A., (2002). Cancer regression and autoimmunity in patients after clonal repopulation with antitumor lymphocytes, Science 298 (5594): 850-854.

[8]. Hauser, B. (2001). Blood tests. Technical report, International Waldenstrom’s Macroglobulinemia Foundation. Available at http://www.iwmf.com/Blood Tests.pdf Accessed May 2005.

[9]. Isaeva, O. G. and Osipov, V. A., (2009). Different strategies for cancer

treatment: Mathematical modelling, Computational and Mathematical Methods in Medicine 10(4): 253-272.

[10]. Kirschner, D. and Panetta, J. C., (1998). Modeling immunotherapy of the tumor-immune interaction, J. Math. Biol. 37(3): 235-252.

[11]. Kuznetsov, V., Makalkin, I., Taylor, M., and Perelson, A., (1994). Nonlinear dynamics of immunogenic tumors: Parameter estimation and global bifurcation analysis. Bulletin of Mathematical Biology, 56(2):295–321.

[12]. O’Byrne, K. J., Dalgleish, A. G., Browning, M. J., Steward, W. P., and Harris. A. L., (2000). The relationship between angiogenesis and the immune response in carcinogenesis and the progression of malignant disease. Eur J Cancer., 36:151–169.

[13]. Stewart, T.H., (1996). Immune mechanisms and tumor dormancy. Medicina – Buenos Aire, 56(1):74–82.

[14]. Szymańska, Z. (2003). Analysis of immunotherapy models in the context of cancer dynamics, International Journal of Applied Mathematics and Computer Science 13(3): 407-418.

[15]. Yates, A., and Callard. R., (2002). Cell death and the maintenance of immunological memory. Discret Contin Dyn S., 1(1):43–59.


Improving the performance of the Helmbold universal portfolio with an unbounded learning parameter

Choon Peng Tan(1) and Wei Xiang Lim(2)

Department of Mathematical and Actuarial Sciences, Universiti Tunku Abdul Rahman, Jalan Genting Kelang, 53300 Setapak, Kuala Lumpur, Malaysia

(1)Email: [email protected]

(2)Email: [email protected]

ABSTRACT
Universal portfolios were studied in [1] and [2]. We consider investing in an m-stock market using a Helmbold universal portfolio [3]. It was demonstrated in [3] that the Helmbold universal portfolio can perform better than the Cover uniform universal portfolio [1] on some stock data sets, with much smaller computer memory requirements. The objective of this paper is to show that the upper bound on the learning parameter η recommended in [3] is unnecessarily restrictive. By allowing η to take on larger positive or negative values, it may be possible to achieve higher investment returns.

KEYWORDS
Helmbold universal portfolio; investment wealth; learning parameter.

I. INTRODUCTION
An investment portfolio is universal if it does not depend on the underlying distribution of the stock prices. Universal portfolios were studied by Cover [1] and by Cover and Ordentlich [2]. We consider investing in an m-stock market using a Helmbold universal portfolio [3]. It was demonstrated in [3] that the Helmbold universal portfolio can perform better than the Cover uniform universal portfolio [1] on some stock data sets, with much smaller computer memory requirements. Subsequently, Tan and Tang [4] showed that the initial starting portfolio of the Helmbold universal portfolio is an important parameter influencing its performance. A portfolio vector is a vector b = (b_1, ..., b_m) satisfying b_i >= 0 for i = 1, ..., m and b_1 + ... + b_m = 1. The Helmbold universal portfolio is a sequence of portfolio vectors generated by the following update of b_{n,i}, the ith component of the portfolio vector on the nth trading day:

    b_{n+1,i} = b_{n,i} exp[η x_{n,i} / (b_n · x_n)] / Σ_{j=1}^m b_{n,j} exp[η x_{n,j} / (b_n · x_n)],   (1)

where m is the number of stocks in the market, x_{n,i} is the price relative of stock i on the nth trading day, b_n · x_n = Σ_{i=1}^m b_{n,i} x_{n,i}, and the learning parameter η is defined in [3] in terms of n, the total number of trading days. From the

definition, it is clear that η is positive and bounded. We assume all logarithms in this paper are to base e. It is our objective to show that the upper bound on η recommended in [3] is unnecessarily restrictive. By allowing η to take on larger values, it may be possible to achieve higher investment returns. By maximizing a certain objective function of the doubling rate of the wealth or capital function, we show that the resulting universal portfolio is given by (1) with η negative. We demonstrate further, by running the Helmbold universal portfolio on some selected stock data from the Kuala Lumpur Stock Exchange, that it is possible to obtain higher investment returns using unrestricted positive and negative learning rates.

II. ETA-PARAMETRIC FAMILY OF HELMBOLD UNIVERSAL PORTFOLIOS
First, we introduce the eta-parametric family of Helmbold universal portfolios, defined by (1) for any real number η.

Proposition. Consider the objective functions

    F_1(b_{n+1}) = η log(b_{n+1} · x_n) - d(b_{n+1}, b_n)   (2)

and


F_2( b_{n+1} ) = η log( b_{n+1} · x_n ) + D( b_{n+1} ‖ b_n )   (3)

where

D( b_{n+1} ‖ b_n ) = Σ_{i=1}^m b_{n+1,i} log( b_{n+1,i} / b_{n,i} )   (4)

is the Kullback-Leibler distance measure or relative entropy and η is positive. By approximating log( b_{n+1} · x_n ) by log( b_n · x_n ) + ( b_{n+1} · x_n ) / ( b_n · x_n ) − 1, the maximum of the objective function F_1 is achieved at the b_{n+1} given by (1), and the maximum of F_2 is also achieved at the b_{n+1} given by (1) with η replaced by −η. Proof. Since b_{n+1} is a portfolio vector satisfying Σ_{i=1}^m b_{n+1,i} = 1, we need to introduce the Lagrange multiplier λ in maximizing the objective functions

F_1( b_{n+1} ) + λ ( Σ_{i=1}^m b_{n+1,i} − 1 )   (5)

and

F_2( b_{n+1} ) + λ ( Σ_{i=1}^m b_{n+1,i} − 1 ).   (6)

Helmbold et al. [3] have shown that the maximum of (5) is achieved at the b_{n+1} given by (1). The maximum of (6) is achieved when the following partial derivatives are zero:

η x_{n,i} / ( b_n · x_n ) + log( b_{n+1,i} / b_{n,i} ) + 1 + λ = 0

for i = 1, ..., m. We obtain b_{n+1,i} = b_{n,i} e^{−(1+λ)} exp( −η x_{n,i} / ( b_n · x_n ) ) for i = 1, ..., m. Summing up the components over i, we have e^{−(1+λ)} = [ Σ_{j=1}^m b_{n,j} exp( −η x_{n,j} / ( b_n · x_n ) ) ]^{−1}, leading to (1) with η replaced by −η, for i = 1, ..., m.
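For concreteness, the update (1) and the wealth product (7) below can be sketched in a few lines of Python. The two-day price relatives at the end are invented purely for illustration; only the procedure, not the data, follows the paper.

```python
import math

def helmbold_update(b, x, eta):
    """One step of update (1): the new weight of stock i is proportional to
    b[i] * exp(eta * x[i] / (b . x)), renormalized to sum to 1."""
    dot = sum(bi * xi for bi, xi in zip(b, x))
    raw = [bi * math.exp(eta * xi / dot) for bi, xi in zip(b, x)]
    total = sum(raw)
    return [r / total for r in raw]

def universal_wealth(eta, relatives, b1):
    """Wealth S_n = product over days of (b_t . x_t), starting from 1 unit."""
    b, wealth = list(b1), 1.0
    for x in relatives:
        wealth *= sum(bi * xi for bi, xi in zip(b, x))
        b = helmbold_update(b, x, eta)
    return wealth, b

# Invented price relatives for a 3-stock market over two trading days.
data = [(1.02, 0.98, 1.00), (0.99, 1.03, 1.01)]
w_pos, b_pos = universal_wealth(0.5, data, (1/3, 1/3, 1/3))
w_neg, b_neg = universal_wealth(-0.5, data, (1/3, 1/3, 1/3))
```

With η > 0 the portfolio moves toward the better-performing stocks; replacing η by −η moves it away, which is exactly the family member obtained by maximizing (6).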

Remark. We have shown that the eta-parametric family of Helmbold universal portfolios is generated by maximizing two different objective functions, (5) and (6), for positive and negative η respectively. In the first case, we want the current portfolio to be close to the previous portfolio in terms of the Kullback-Leibler distance measure. In the second case, we want the opposite, i.e. the maximum separation between the current and the previous portfolios. In Tan and Tang [4], we have shown that the initial starting portfolio is a parameter that can affect the final wealth achievable by the Helmbold universal portfolio. If the initial starting portfolio is a good one, we require that the subsequent portfolios be close to each other. On the other hand, if the initial starting portfolio is not a good one, we hope to move away from the current portfolio toward the right one with the highest investment return. In the latter case, we require maximum separation between the current and previous portfolios.

III. EXPERIMENTAL RESULTS

We have run the eta-parametric family of Helmbold universal portfolios on 3 stock data sets chosen from the Kuala Lumpur Stock Exchange. The period of trading of the stocks selected is from January 1, 2003 until November 30, 2004, consisting of 500 trading days. Each data set consists of 3 company stocks. Set A consists of the stocks of Malayan Banking, Genting and Amway (M) Holdings. Set B consists of the stocks of Public Bank, Sunrise and YTL Corporation. Finally, set C consists of the stocks of Hong Leong Bank, RHB Capital, and YTL Corporation. The universal wealth S_n achieved by the universal portfolio after n trading days is given by:

S_n = Π_{t=1}^n ( b_t · x_t ),   (7)

where x_t is the price-relative vector on day t and the initial wealth S_0 is assumed to be 1 unit. We begin with the initial portfolio b_1 = (1/3, 1/3, 1/3) for all the 3 data sets. For each data set, the portfolios b_{501} after 500 trading days and the universal wealths S_{500} achieved are calculated for selected values of η and are listed in Tables 1, 2 and 3. Let ‖v‖_1 and ‖v‖_2 denote the l1-norm and l2-norm of the vector v respectively. It is clear from Tables 1, 2 and 3 that ‖b_{501} − b_1‖_1 and ‖b_{501} − b_1‖_2 as


functions of η grow as |η| gets larger. If η is restricted between 0 and sqrt( (8 log m) / n ) (or 0.1326 for n = 500, m = 3) as recommended in [3], ‖b_{501} − b_1‖_1 and ‖b_{501} − b_1‖_2 are close to 0. If a larger variation of b_{501} from b_1 is required, it can be achieved by using a larger |η| in the universal portfolio. In Figures 1, 2 and 3, the graphs of the universal wealth S_{500} against η are plotted for the data sets A, B and C respectively, where the local maxima are shown. We strongly believe that the local maxima are also the global maxima over all η. For data set A, the maximum universal wealth achievable is

at the η-value indicated in Figure 1. Here is an example of a Helmbold universal portfolio with a negative-valued parameter achieving the maximum wealth. For data sets B and C, the maximum universal wealths achievable are attained at the η-values indicated in Figures 2 and 3 respectively. Again, this demonstrates that if η is restricted between 0 and 0.1326 as recommended in [3], it is not possible to achieve the maximum wealth.

IV. CONCLUSIONS

This paper demonstrates that it is necessary to remove the restriction 0 < η ≤ sqrt( (8 log m) / n ) imposed on η in [3] in order to achieve a higher investment wealth. Empirical evidence is provided that the maximum investment wealth can be achieved at a negative learning parameter and at large positive learning parameters. The best η achieving the maximum wealth can be determined in hindsight given the past stock data. It remains unsolved how to choose the best η at the beginning of the investment period.

Table 1. The portfolios b_{501} after 500 trading days and

the universal wealths S_{500} achieved for selected values of η and data set A

η       b_{501}                   S_{500}

-10     (0.005, 0.979, 0.016)     1.4309
-5      (0.068, 0.814, 0.118)     1.5449
-3      (0.151, 0.638, 0.211)     1.5724
-1.00   (0.270, 0.428, 0.302)     1.5722
-0.75   (0.286, 0.403, 0.311)     1.5706
-0.50   (0.302, 0.379, 0.319)     1.5689
-0.30   (0.314, 0.361, 0.325)     1.5674
-0.20   (0.321, 0.351, 0.328)     1.5666
-0.10   (0.327, 0.342, 0.331)     1.5658
0       (0.333, 0.333, 0.333)     1.5650
0.10    (0.340, 0.325, 0.336)     1.5641
0.20    (0.346, 0.316, 0.338)     1.5633
0.30    (0.352, 0.307, 0.340)     1.5624
0.50    (0.364, 0.291, 0.345)     1.5607
0.75    (0.380, 0.271, 0.349)     1.5585
1.00    (0.395, 0.253, 0.353)     1.5563
3       (0.504, 0.137, 0.359)     1.5398
5       (0.594, 0.070, 0.336)     1.5266
10      (0.748, 0.011, 0.240)     1.4996

Table 2. The portfolios b_{501} after 500 trading days and the universal wealths S_{500} achieved for selected values of η and data set B

η       b_{501}                   S_{500}

-10     (0.851, 0.149, 0.000)     1.8141
-5      (0.687, 0.311, 0.002)     1.8109
-3      (0.601, 0.381, 0.018)     1.8399
-1.00   (0.460, 0.397, 0.144)     1.9865
-0.75   (0.432, 0.387, 0.181)     2.0221
-0.50   (0.402, 0.373, 0.225)     2.0629
-0.30   (0.375, 0.359, 0.265)     2.0993
-0.20   (0.362, 0.351, 0.287)     2.1187
-0.10   (0.348, 0.343, 0.310)     2.1389
0       (0.333, 0.333, 0.333)     2.1599
0.10    (0.319, 0.324, 0.358)     2.1816
0.20    (0.304, 0.313, 0.383)     2.2041
0.30    (0.289, 0.303, 0.408)     2.2272
0.50    (0.260, 0.280, 0.461)     2.2752
0.75    (0.223, 0.250, 0.527)     2.3381
1.00    (0.189, 0.220, 0.591)     2.4030
3       (0.032, 0.052, 0.917)     2.8850
5       (0.004, 0.009, 0.987)     3.1897
10      (0.000, 0.000, 1.000)     3.5139

Table 3. The portfolios b_{501} after 500 trading days and the universal wealths S_{500} achieved for selected values of η and data set C

η       b_{501}                   S_{500}

-10     (0.023, 0.977, 0.000)     1.3126
-5      (0.138, 0.862, 0.000)     1.3349
-3      (0.252, 0.741, 0.008)     1.3898
-1.00   (0.366, 0.519, 0.115)     1.5867
-0.75   (0.368, 0.478, 0.155)     1.6358
-0.50   (0.363, 0.432, 0.204)     1.6931
-0.30   (0.355, 0.394, 0.251)     1.7453
-0.20   (0.349, 0.374, 0.277)     1.7735
-0.10   (0.342, 0.354, 0.305)     1.8031
0       (0.333, 0.333, 0.333)     1.8341
0.10    (0.324, 0.313, 0.363)     1.8664
0.20    (0.313, 0.293, 0.394)     1.9000
0.30    (0.302, 0.272, 0.426)     1.9348


0.50    (0.277, 0.233, 0.490)     2.0075
0.75    (0.243, 0.188, 0.569)     2.1030
1.00    (0.208, 0.147, 0.645)     2.2015
3       (0.034, 0.012, 0.954)     2.8734
5       (0.004, 0.001, 0.995)     3.2020
10      (0.000, 0.000, 1.000)     3.4415
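The η-scans reported in Tables 1–3 amount to a grid search over the learning parameter. The sketch below reproduces that procedure in Python on synthetic stand-in data (the KLSE price series are not reproduced here), so the numbers it produces differ from the tables; only the method matches.

```python
import math
import random

def run_helmbold(eta, relatives):
    """Run the Helmbold universal portfolio (1) from the uniform start and
    return the final wealth S_n of (7)."""
    m = len(relatives[0])
    b = [1.0 / m] * m
    wealth = 1.0
    for x in relatives:
        dot = sum(bi * xi for bi, xi in zip(b, x))
        wealth *= dot
        raw = [bi * math.exp(eta * xi / dot) for bi, xi in zip(b, x)]
        s = sum(raw)
        b = [r / s for r in raw]
    return wealth

# Synthetic 500-day, 3-stock price relatives (illustration only).
random.seed(0)
data = [tuple(1.0 + random.uniform(-0.02, 0.03) for _ in range(3))
        for _ in range(500)]

# Scan eta over [-10, 10], far beyond the bound sqrt(8 log m / n) ~ 0.1326.
grid = [i / 10.0 for i in range(-100, 101)]
best_eta = max(grid, key=lambda e: run_helmbold(e, data))
```

On real data the best η found this way may be negative, as Table 1 illustrates; choosing it in advance rather than in hindsight remains the open problem noted in the conclusions.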

Figure 1. Graph of S_{500} against η displaying the local maximum for data set A

Figure 2. Graph of S_{500} against η displaying the local maximum for data set B

Figure 3. Graph of S_{500} against η displaying the local maximum for data set C

REFERENCES
[1] Cover, T.M. (1991). "Universal portfolios," Math. Finance, Vol. 1, pp. 1-29.
[2] Cover, T.M., and Ordentlich, E. (1996). "Universal portfolios with side information," IEEE Trans. Inform. Theory, Vol. 42, pp. 348-363.
[3] Helmbold, D.P., Schapire, R.E., Singer, Y., and Warmuth, M.K. (1998). "On-line portfolio selection using multiplicative updates," Math. Finance, Vol. 8, pp. 325-347.
[4] Tan, C.P., and Tang, S.F. (2003). "A comparison of two types of universal portfolios based on some stock-price data," Malaysian J. of Science, Vol. 22, pp. 127-133.


Optimal Design of the Interval Type-2 Fuzzy PI+PD Controller and Superconducting Magnetic Energy Storage (SMES) for Load Frequency Control Optimization on a Two-Area Power System

Muh Budi R Widodo(1), M Agus Pangestu H.W(2)

(1) Graduate student, Electrical Engineering, ITS, Surabaya, East Java, Indonesia. Email: [email protected]
(2) Graduate student, Marine Engineering, ITS, Surabaya, East Java, Indonesia. Email: [email protected]

ABSTRACT This paper presents the application of superconducting magnetic energy storage (SMES) and an interval type-2 fuzzy PI+PD (IT2FPI+PD) controller to improve the frequency performance of a two-area power system subject to load changes. The results of this study are obtained from simulation studies using MATLAB. The simulations observe the frequency response of the two-area system under a fuzzy PI+PD controller, an IT2FPI+PD controller, and the combination of IT2FPI+PD with SMES. The simulation results show that the system using the combination of IT2FPI+PD and SMES has the best performance. This can be seen from the overshoot and settling time of the system frequency: -0.000395 p.u. and 2.667 seconds in the first area, and -0.0001091 p.u. and 6.72 seconds in the second area. The tie-line power transfer between areas has an overshoot and settling time of -0.0001925 p.u. and 2.139 seconds.

KEYWORDS Two-Area Power System; Load Frequency Control; Interval Type-2 Fuzzy PI+PD Controller; Superconducting Magnetic Energy Storage (SMES).

I. INTRODUCTION

In a power system, frequency is a very important parameter and is representative of the active power balance of the system [1,5]. Good frequency regulation ensures constant rotation for synchronous-machine and induction-machine loads [1]. This constant rotation is very important for obtaining the desired system performance [1,2]. In a large-scale interconnected system, generators both large and small are connected together, and all synchronous machines must operate at the same frequency [1,3,5]. When several generators supply the load, the power supplied must be divided among them so that no energy is wasted [3,4,5]. The problem that arises is that the load grows all the time; the ability of plants to respond by increasing or decreasing output rapidly and accurately, from zero to full load and back to zero, is therefore very important to consider [1,3,7]. If the frequency change is left unchecked until it reaches 10%, the generators will fall out of synchronism, disrupting the stability of the electric power system, so reliable frequency regulation is needed [5,12]. The regulation of frequency and load in an electric power system is known as Load Frequency Control (LFC) [1,3,5,6].

The main objectives of LFC in power systems are to maintain the system frequency changes within allowable limits and to minimize the power flow on the tie-line [5,8]. To regulate load and frequency, the two variables, frequency and tie-line power exchange, are weighed together by a linear combination to form a single variable called the Area Control Error (ACE) [1,2,3]. The ACE is used as the control input signal in regulating load and frequency [1,3,5,6]. For ACE regulation, conventional PI and PD controllers are the main choice in industry [1,9]. Their easy and uncomplicated use makes these controllers attractive [1,10]. Combining these two controllers makes it possible to drive the ACE to zero, which means the system has stabilized [1,5,10]. However, PI and PD controllers have a weakness in determining the gains, so this control method is often combined with other methods such as fuzzy logic. Fuzzy logic is known as a reliable and proven method to improve system performance; its use is simple and requires no mathematical model of the problem, which


makes it a favorite among researchers. Fuzzy logic has experienced many improvements along with its development. Lotfi Zadeh subsequently introduced Type-2 Fuzzy Logic, an extension of fuzzy logic. Interval type-2 fuzzy logic improves on the definition of the antecedent and consequent, a weakness of type-1 fuzzy logic, by using dual membership functions, namely the lower membership function (LMF) and the upper membership function (UMF) [1,11]. The use of fuzzy controllers in electric power systems is very popular among researchers, but to improve controller performance further, a superconducting magnetic energy storage (SMES) unit is also installed in the power system. SMES is a direct-current device capable of storing energy in the form of a magnetic field. SMES has the advantage of being able to provide active power and reactive power simultaneously to the power system with 98% efficiency. This is very helpful in supplying energy when a disturbance occurs in the form of a load change, so the system frequency can be kept constant [12,13]. This paper applies the interval type-2 fuzzy PI+PD controller and superconducting magnetic energy storage (SMES) to a two-area power system to improve system performance that degrades due to load changes, through simulation using MATLAB. With the application of the controller and the SMES in the two-area power system, the system performance is expected to increase.

II. LOAD FREQUENCY CONTROL (LFC)

In load frequency control (LFC), two physical quantities are measured: the frequency and the tie-line load. The two magnitudes are compared with reference values, and the difference is used to determine the corrective action, carried out by adding or reducing generated power [2,3,4]. The linear model of the two-area power system is shown in Figure 1.

[Figure 1: linear block diagram of the two-area system — for each area i a governor 1/(1 + sT_Gi), a turbine 1/(1 + sT_CHi), a generator-load block 1/(M_i s + D_i) and a droop feedback 1/R_Gi, with load disturbances P_L1, P_L2, control inputs Pc1, Pc2, and a tie-line block T/s coupling the frequency deviations f_1 and f_2 into P_tie.]

Figure 1. Two-area power system [2]
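As an illustration of the linear model in Figure 1, the following Python sketch integrates a two-area system with primary (droop) control only. Every parameter value here is invented for the example and is not the paper's data.

```python
def simulate_two_area(dPL1=0.1, dt=0.001, steps=20000):
    """Euler sketch of a linear two-area LFC model in the spirit of Figure 1:
    droop-governed turbines, swing equations, and a tie-line integrator.
    All parameter values are illustrative only."""
    M1 = M2 = 10.0      # equivalent inertias
    D1 = D2 = 1.0       # load damping constants
    Tg, Tch = 0.2, 0.5  # governor and turbine time constants
    R = 0.05            # speed droop
    T = 2.0             # tie-line synchronizing coefficient
    f1 = f2 = 0.0       # frequency deviations
    y1 = y2 = 0.0       # governor outputs
    pm1 = pm2 = 0.0     # mechanical power deviations
    ptie = 0.0          # tie-line power deviation
    hist = []
    for _ in range(steps):
        y1 += dt / Tg * (-f1 / R - y1)          # governor (droop only)
        y2 += dt / Tg * (-f2 / R - y2)
        pm1 += dt / Tch * (y1 - pm1)            # turbine lag
        pm2 += dt / Tch * (y2 - pm2)
        ptie += dt * T * (f1 - f2)              # tie-line T/s block
        f1 += dt / M1 * (pm1 - dPL1 - ptie - D1 * f1)   # swing, area 1
        f2 += dt / M2 * (pm2 + ptie - D2 * f2)          # swing, area 2
        hist.append((f1, f2, ptie))
    return hist

# 0.1 p.u. load step in area 1, simulated for 20 seconds.
hist = simulate_two_area()
```

With droop control alone both frequencies settle below nominal; it is the supplementary ACE-driven controller discussed in Section III that is responsible for driving the deviation back to zero.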

III. INTERVAL TYPE-2 FUZZY PI+PD CONTROLLER AND SUPERCONDUCTING MAGNETIC ENERGY STORAGE (SMES)

In frequency control, the controller has the important role of maintaining the system frequency. To assist in maintaining stability, SMES units, which function like batteries, are installed in the system. Sections 3.1 and 3.2 discuss the modeling of both pieces of equipment.

3.1 Design of the PI+PD Controller [5,6]

The PI control law in discrete (z-transform) form is given by

Δu_PI(nT) = K_p [ e(nT) − e(nT − T) ] + K_i T e(nT)   (1)

u_PI(nT) = u_PI(nT − T) + K_uPI Δu_PI(nT)   (2)

and the PD control law in discrete form is given by

Δu_PD(nT) = K_p [ e(nT) − e(nT − T) ] + (K_d / T) [ e(nT) − 2 e(nT − T) + e(nT − 2T) ]   (3)

u_PD(nT) = −u_PD(nT − T) + K_uPD Δu_PD(nT)   (4)

where K_p, K_i and K_d are the proportional, integral and derivative gains, u_PI and u_PD are the outputs of the fuzzy PI+PD controller, and nT is the discrete time variable. Considering equations (1) to (4), the


block diagram of the fuzzy PI+PD controller shown in Figure 2 is obtained.
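The discrete PI and PD laws of equations (1)–(4) can be sketched directly in Python. The gains and sampling period below are invented for illustration, and the PD part is written in the simpler positional form (error plus backward difference) rather than the incremental form of (3)–(4).

```python
def make_pi(Kp, Ki, Ku, T):
    """Incremental discrete PI law in the spirit of (1)-(2):
    du = Kp*(e - e_prev) + Ki*T*e, accumulated as u += Ku*du."""
    state = {"e_prev": 0.0, "u": 0.0}
    def step(e):
        du = Kp * (e - state["e_prev"]) + Ki * T * e
        state["u"] += Ku * du
        state["e_prev"] = e
        return state["u"]
    return step

def make_pd(Kp, Kd, Ku, T):
    """Positional discrete PD law: proportional term plus a scaled
    backward difference of the error."""
    state = {"e_prev": 0.0}
    def step(e):
        u = Ku * (Kp * e + Kd * (e - state["e_prev"]) / T)
        state["e_prev"] = e
        return u
    return step

# Hypothetical gains; the ACE-like error sequence decays toward zero.
pi = make_pi(Kp=0.5, Ki=0.2, Ku=1.0, T=0.1)
pd = make_pd(Kp=0.5, Kd=0.05, Ku=1.0, T=0.1)
u_total = [pi(e) + pd(e) for e in (1.0, 0.5, 0.2, 0.0)]
```

In the fuzzy PI+PD arrangement of Figure 2, the fixed gains above are effectively replaced by the fuzzy inference acting on the error and delta-error inputs.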

Figure 2. IT2FPI+PD controller

3.2 Membership function [6]

The input membership functions of the fuzzy PI+PD controller, for the error (ep) and the delta error (ev), are shown in Figure 3.

Figure 3. Membership functions of the inputs

The membership functions of the output, formed as a single output, are shown in Figure 4.

Figure 4. Membership functions of the output

The rule base formed from the input and output membership functions above is shown in Table 1.

Table 1. Rule base of the type-2 fuzzy controller

Error (ep) \ Delta error (ev) | evn | evp
epn                           | on  | zo
epp                           | zo  | op
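A minimal sketch of evaluating the four rules of Table 1 with interval type-2 sets: each input set carries a lower/upper membership pair, rule firing strengths become intervals, and a simplified Nie–Tan style defuzzification takes the midpoint of each interval. The membership shapes, the 0.8 LMF scaling, and the output centroids (on = −1, zo = 0, op = +1) are all invented for illustration, not taken from the paper.

```python
def clip01(v):
    return max(0.0, min(1.0, v))

def neg(x):
    """IT2 'negative' set on [-1, 1]: returns (LMF, UMF); the LMF is a
    scaled copy of the UMF, giving a footprint of uncertainty."""
    u = clip01((1.0 - x) / 2.0)
    return 0.8 * u, u

def pos(x):
    """IT2 'positive' set on [-1, 1], mirrored from neg()."""
    u = clip01((1.0 + x) / 2.0)
    return 0.8 * u, u

def it2_infer(ep, ev):
    """Evaluate Table 1's rule base with interval firing strengths and a
    Nie-Tan style midpoint defuzzification."""
    # (ep set, ev set, consequent centroid): on = -1, zo = 0, op = +1
    rules = [(neg, neg, -1.0), (neg, pos, 0.0), (pos, neg, 0.0), (pos, pos, 1.0)]
    num = den = 0.0
    for mu_ep, mu_ev, c in rules:
        le, ue = mu_ep(ep)
        lv, uv = mu_ev(ev)
        lo, hi = min(le, lv), min(ue, uv)  # interval firing strength
        f = 0.5 * (lo + hi)                # midpoint (Nie-Tan reduction)
        num += f * c
        den += f
    return num / den if den else 0.0
```

A full IT2 controller would use Karnik–Mendel type reduction instead of the midpoint shortcut; the sketch only shows how the LMF/UMF pair and the four rules interact.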

3.3 Superconducting Magnetic Energy Storage (SMES) [12,13]

SMES is a direct-current device which stores energy in a magnetic field [12]. Among the several kinds of energy storage technologies, such as Compressed Air Energy Storage (CAES) and the Battery Energy Storage System (BESS), SMES has the advantage of being able to provide active power and reactive power to the generating system simultaneously, with an efficiency reaching 98% [12,13]. Its fast response to load changes makes SMES very attractive. Basically, an SMES unit consists of a superconducting coil, a cryogenic cooling system, and a Power Conversion System (PCS).

The SMES unit consists of the DC superconducting coil and a converter connected to a Y-Δ/Y-Y transformer. The coil is initially charged to a current I_d0. Ignoring losses in the converter, the inductor DC voltage equation can be expressed as

E_d = 2 V_d0 cos α − 2 I_d R_c ,   (5)

When there is a change in the system load, the inductor voltage and current change, which is stated as

ΔE_di(s) = [ K_oi / (1 + s T_dci) ] [ B_i Δf_i(s) + ΔP_tie,i(s) ] − [ K_Idi / (1 + s T_dci) ] ΔI_di(s) ,   (6)

ΔI_di(s) = ΔE_di(s) / ( s L_i ) .   (7)

The SMES active power change drawn from the inductor can be expressed by

ΔP_SMi(s) = ΔE_di(s) [ I_d0i + ΔI_di(s) ] .   (8)

From equations (6), (7) and (8), the block diagram of an SMES unit can be constructed as shown in Figure 5.

Figure 5. Block diagram of SMES
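A rough discrete-time reading of the SMES loop of equations (6)–(8) — first-order converter lag, inductor integration, power output — can be sketched as follows. Every numeric constant here is illustrative, not taken from the paper.

```python
def simulate_smes(control, dt=0.01, Ko=50.0, KId=0.2, Tdc=0.03, L=2.65, Id0=4.5):
    """Euler integration of a small-signal SMES model in the spirit of
    (6)-(8): converter voltage lag, inductor current, delivered power."""
    Ed = 0.0   # inductor voltage deviation
    Id = 0.0   # inductor current deviation
    power = []
    for u in control:
        # (6): converter lag driven by the control signal, with negative
        # feedback of the current deviation
        Ed += dt / Tdc * (Ko * u - KId * Id - Ed)
        # (7): current deviation integrates Ed / L
        Id += dt * Ed / L
        # (8): active power deviation delivered by the coil
        power.append(Ed * (Id0 + Id))
    return power

# Constant control signal held for 200 samples (a step-like disturbance).
power = simulate_smes([0.01] * 200)
```

The fast converter time constant is what lets the SMES inject power almost immediately after a load change, which is why its addition shrinks the overshoot and settling times reported in Tables 2 and 3.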

IV. SIMULATION RESULTS


The simulation uses MATLAB 7.3, observing the frequency response stability and the power transfer between areas (Ptie). The system was given a disturbance in one area in the form of a load change of 0.1 p.u. The simulation results are shown in Figures 6, 7 and 8.

[Figure 6: plot of the area-1 frequency response (p.u.) versus time (s), comparing 1. Fuzzy PI Controller, 2. IT1FPI+PD Controller, 3. IT2FPI+PD Controller, 4. IT2FPI+PD SMES.]

Figure 6. Frequency response of the first area

[Figure 7: plot of the area-2 frequency response (p.u.) versus time (s), comparing the same four controllers.]

Figure 7. Frequency response of the second area

[Figure 8: plot of the tie-line power response (p.u.) versus time (s), comparing the same four controllers.]

Figure 8. Response of the tie-line power between the first and second areas

Figure 6 represents the frequency response in the first area. The figure shows that the frequency response of the two-area power system using IT2FPI+PD combined with SMES has the best response; the overshoot and settling time of the system frequency are -0.000395 p.u. and 2.667 seconds. The responses of the second area and of the tie-line power transfer between areas show similar behavior, as can be seen in Figures 7 and 8. More details about the frequency responses of areas 1 and 2 and of Ptie are given in Table 2 and Table 3.

Table 2. Frequency response of the first and second areas

Controller     | Overshoot (p.u.)         | Settling time (s)
               | area 1     | area 2      | area 1 | area 2
Fuzzy PI       | -0.1167    | -0.00688    | 16.78  | 17.19
IT1FPI+PD      | -0.01064   | -0.00583    | 13.96  | 15.34
IT2FPI+PD      | -0.008102  | -0.00397    | 11.96  | 13.55
IT2FPI+PD SMES | -0.000395  | -0.0001091  | 2.669  | 6.72

Table 3. Ptie response of the two-area power system

Controller     | Ptie overshoot (p.u.) | ts (s)
Fuzzy PI       | -0.0138               | >20
IT1FPI+PD      | -0.01119              | 18.88
IT2FPI+PD      | -0.0071               | 16.12
IT2FPI+PD SMES | -0.0001925            | 2.139

V. CONCLUSIONS

The application of the combined IT2FPI+PD and SMES can increase the frequency performance of the system significantly. This can be seen from the overshoot and settling time of the system frequency response and the tie-line power transfer, as shown by Table 2 and Table 3.

ACKNOWLEDGEMENTS

We thank the PSOC laboratory members and ITS, which provided the facilities so that this research paper could be completed in time.

REFERENCES
[1] Imam Robandi, Desain Sistem Tenaga Modern, Penerbit ANDI, Yogyakarta, 2006.

[2] Hadi Saadat, Power System Analysis, 2nd Edition, McGraw-Hill, 2004.

[3] Kundur, P., Power System Stability and Control, McGraw-Hill, Inc., 1994.

[4] Anderson, P.M., Fouad, A.A., Power System Control and Stability, The Iowa State University Press, 1982.

[5] Muh Budi R Widodo, Imam Robandi, "Optimization of Fuzzy PIPD Controller for Excitation System Stability Analysis on Single Machine Infinite Bus (SMIB) using Genetic Algorithm (GA)", ICAST, 2009.

[6] Muh Budi R Widodo, Muhammad Abdillah, Imam Robandi, "Optimal Design


Load Frequency Control On Multi Area Power System Using Interval Type-2 Fuzzy PI Controller", APTECS-2009, Paper number: 034.

[7] Muhammad Abdillah, "Desain Optimal Fuzzy Logic Load Frequency Control pada Sistem Tenaga Listrik Menggunakan Artificial Immune System Via Clonal Selection", Final Project, Department of Electrical Engineering, ITS, 2009.

[8] Jawad Talaq and Fadel Al-Basri, "Adaptive gain scheduling for load frequency control", IEEE Trans. on Power Systems, Vol. 14, No. 1, February 1999, pp. 145-150.

[9] Charles E. Fosha, Jr., and Olle I. Elgerd, "The Megawatt-Frequency Control Problem: A New Approach via Optimal Control Theory", IEEE Trans., Vol. PAS-89, No. 4, April 1970, pp. 563-577.

[10] Ceyhun Yildiz, A. Serdar Yilmaz, Mehmet Bayrak, "Genetic Algorithm based PI Controller for Load Frequency Control in Power Systems", Proceedings of the 5th International Symposium on Intelligent Manufacturing Systems, May 29-31, 2006, pp. 1202-1210.

[11] Jerry M. Mendel, Robert I. Bob John, "Type-2 Fuzzy Sets Made Simple", IEEE Transactions on Fuzzy Systems, April 2002.

[12] Satrio Haninditho, Design of Automatic Generation Control (AGC) with a Superconducting Magnetic Energy Storage (SMES) unit in a power system using Fuzzy Proportional Integral (Fuzzy PI), unpublished.

[13] "Detailed modeling of superconducting magnetic energy storage (SMES) system", IEEE Transactions on Power Delivery, Vol. 21, No. 2.

NOMENCLATURE

Δ : the deviation
A12 : rated area capacity ratio between area 1 and area 2
B1 : frequency bias factor of area 1
B2 : frequency bias factor of area 2
D : load damping constant
D1 : load damping constant of area 1
D2 : load damping constant of area 2
f1 : frequency output of area 1
f2 : frequency output of area 2
E : error signal from the system
Ed : DC voltage applied to the inductor
ep : the error signal
ev : the rate of change of the error signal
Id : the current through the coil
Id0 : initial SMES current
Ki : integral gain
Ki' : integral gain in the discrete domain
KId : gain constant of the Id feedback
Kp : proportional gain
Kp' : proportional gain in the discrete domain
KSMES : gain constant of the SMES
Ku : gain constant of the PI controller
L : inductance of the SMES coil
M1 : equivalent inertia of area 1
M2 : equivalent inertia of area 2
nT : discrete time variable
P12 : load exchange between area 1 and area 2
PL1 : non-frequency-sensitive load change in area 1
PL2 : non-frequency-sensitive load change in area 2
Pm1 : mechanical power of area 1
Pm2 : mechanical power of area 2
Pref1 : load reference from the prime mover in area 1
Pref2 : load reference from the prime mover in area 2
PSMES : SMES power
R1 : speed droop of the governor
Rc : commutating resistance
T : sampling period
T12 : synchronizing torque coefficient
TCH1 : time constant of the thermal turbine in area 1
TCH2 : time constant of the thermal turbine in area 2
TDC : time constant of the converter
TG1 : time constant of the governor in area 1
TG2 : time constant of the governor in area 2
u : output of the PI controller
Y1 : governor output of area 1
Y2 : governor output of area 2
z : discrete-time frequency domain variable
α : firing angle
δ1 : rotor angle of area 1
δ2 : rotor angle of area 2
ω1 : rotor speed of area 1
ω2 : rotor speed of area 2


Dependence of biodegradability of xenobiotic polymers on population of microorganisms

Masaji Watanabe (1) and Fusako Kawai (2) (1) Graduate School of Environmental Science, Okayama University, Okayama, Okayama Prefecture, Japan

(Email: [email protected]) (2) R & D Center of Bio-based materials, Kyoto Institute of Technology, Kyoto, Kyoto Prefecture, Japan

(Email: [email protected])

ABSTRACT Biodegradation of polyethylene glycol is studied. The analysis is based on an exogenous-type depolymerization model. Techniques developed in previous studies were used to determine the molecular factor of the degradation rate. The time factor of the degradation rate is determined by assuming a logistic-type growth of the microbial population, which utilizes degraded monomer units as the sole carbon source.

KEYWORDS Biodegradation; polyethylene glycol; mathematical model; numerical simulation; differential equation.

I. INTRODUCTION

Microbial depolymerization processes are generally classified into two types: exogenous type and endogenous type. In an exogenous-type depolymerization process, monomer units are truncated stepwise from the terminals of molecules. In an endogenous-type depolymerization process, molecules are split at arbitrary positions. In this study, a mathematical model for exogenous-type depolymerization processes is analyzed to investigate the biodegradation of polyethylene glycol (PEG).

In previous studies, mathematical techniques were developed for the analysis of exogenous depolymerization processes [1-11]. In this study, the analysis of exogenous-type depolymerization processes of PEG is continued. Techniques developed in previous studies are used to determine the molecular factor of the degradation rate. The time factor of the degradation rate corresponds to the microbial population, which utilizes degraded monomer units as the sole carbon source.

II. MODELING OF EXOGENOUS DEPOLYMERIZATION PROCESSES

Let t and M be the time and the molecular weight respectively. Suppose that an M-molecule is a molecule with molecular weight M. Let w(t,M) represent the total weight of M-molecules present at time t, and let L be the amount of weight loss due to one cycle of exogenous depolymerization. In an exogenous depolymerization process of PEG, a molecule is first oxidized at its terminal, and then an ether bond is liberated. It follows that L = 44 (CH2CH2O). Figure 1 shows the weight distribution of PEG before and after cultivation of the microbial consortium E1.

Let σ(t,M) be the degradation rate, which is the rate of the weight conversion from the class of M-molecules to the class of (M − L)-molecules at time t. Then x = w(t,M) and y = w(t,M+L) satisfy

dx/dt = −σ(t,M) x + ( M / (M + L) ) σ(t,M+L) y .   (1)

This equation is associated with the initial condition

w(0,M) = f(M) ,   (2)


Figure 1. Weight distribution of PEG before and after cultivation of the microbial consortium E1 [3, 6, 8, 11]

where f(M) is the initial weight distribution. Given an additional weight distribution g(M) at time t = T,

w(T,M) = g(M) ,   (3)

equation (1) and the conditions (2) and (3) form an inverse problem to find the degradation rate σ(t,M) for which the solution of the initial value problem (1), (2) also satisfies the condition (3).

III. MOLECULAR FACTOR AND TIME FACTOR OF DEGRADATION RATE

A time factor of the degradation rate, such as the microbial population, affects the molecules regardless of their size. Suppose the degradation rate is the product of a time factor σ(t) and a molecular factor λ(M), so that σ(t,M) = σ(t) λ(M), and equation (1) becomes

dx/dt = σ(t) [ −λ(M) x + ( M / (M + L) ) λ(M+L) y ] .   (3)

Let τ = ∫₀ᵗ σ(s) ds, let W(τ,M) = w(t,M), and let X = W(τ,M) and Y = W(τ,M+L). Then the equation (3) becomes

dX/dτ = −λ(M) X + ( M / (M + L) ) λ(M+L) Y .   (4)

Given the initial weight distribution f(M), equation (4) is associated with the initial condition

W(0,M) = f(M) .   (5)



Figure 2. Molecular factor of the degradation rate based on the weight distribution before and after cultivation of the microbial consortium E1 for three and five days [3, 8, 11].

Given an additional weight distribution g(M) at τ = T′, the value of τ corresponding to t = T,

W(T′, M) = g(M) ,   (6)

equation (4) and the conditions (5) and (6) form an inverse problem to find the molecular factor λ(M) for which the solution of the initial value problem (4) and (5) also satisfies the condition (6). The inverse problem was solved numerically for the initial weight distribution and the weight distribution after three days of cultivation shown in Figure 1. A numerical result is shown in Figure 2.

In the biodegradation process of PEG shown in Figure 1, the microbial population is the only time factor, and the monomer units truncated from molecules in the exogenous depolymerization process are the sole carbon source. Then the time factor σ(t) is regarded as the microbial population, and the total amount of the monomer units consumed by the microorganism up to time t is A(t) = ∫₀^∞ [ f(M) − w(t,M) ] dM. The growth rate of the microbial population is proportional to σ, and σ satisfies the equation

dσ/dt = k σ [ 1 − σ / ( h ∫₀^∞ w(t,M) dM ) ] ,   (7)

where h and k are positive constants. Equation (7) is associated with the initial condition



Figure 3. Transition of the weight distribution of PEG after cultivation for three days.
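The coupled initial value problem (3), (7), (2), (8) can be integrated with a simple explicit scheme. The sketch below discretizes the weight distribution into a few classes w(t, jL); the initial distribution, the molecular factor λ, and the constants σ₀, k and h are all invented for the example and are not the paper's values.

```python
def simulate_depoly(f, lam, sigma0=0.05, k=2.0, h=5.0, dt=0.001, steps=3000, L=44.0):
    """Euler integration of the coupled model: exogenous truncation (3) for
    weight classes w(t, (j+1)*L) with molecular factors lam[j], and logistic
    growth (7) of the time factor sigma, with a carrying capacity that
    shrinks as the polymer (the carbon source) is consumed."""
    w = list(f)
    sigma = sigma0
    total0 = sum(w)
    for _ in range(steps):
        dw = []
        for j in range(len(w)):
            M = (j + 1) * L
            loss = lam[j] * w[j]
            # inflow from the next larger class, scaled by M/(M+L)
            gain = (M / (M + L)) * lam[j + 1] * w[j + 1] if j + 1 < len(w) else 0.0
            dw.append(sigma * (gain - loss))
        # carrying capacity tied to the remaining polymer weight
        capacity = h * sum(w) / total0
        sigma += dt * k * sigma * (1.0 - sigma / capacity)
        w = [wi + dt * di for wi, di in zip(w, dw)]
    return w, sigma

# Five weight classes with uniform initial weight; constant molecular factor.
w_end, sigma_end = simulate_depoly([1.0] * 5, [0.5] * 5)
```

The total weight decreases monotonically as monomer units are truncated, while the population first grows logistically and then follows the shrinking carrying capacity, which is the qualitative behavior the coupled model is built to capture.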

σ(0) = σ₀ .   (8)

IV. SIMULATION OF EXOGENOUS DEPOLYMERIZATION PROCESS OF PEG BASED ON COUPLED MODEL

Once the time factor and the molecular factor of the degradation rate are found, the transition of the weight distribution can be simulated by solving the initial value problem (3), (7), (2), (8). In previous studies, the initial value problem was solved numerically for a certain set of values of the parameters σ₀, h, and k. In this study, the exogenous depolymerization process of PEG is also simulated for the following parameter values [3, 8]:

σ₀ = 0.0298270 ,  k = 2.0 ,  h = 700 .

Numerical results together with experimental results are shown in Figures 3 and 4. Those figures show the weight distribution of PEG after cultivation for three and five days. The

corresponding time evolution of the microbial population is shown in Figure 5.

V. CONCLUSIONS

The change of the microbial population was taken into consideration in modeling the exogenous depolymerization process. Under a constant supply of carbon source, the growth of the microbial population would be logistic with a constant carrying capacity [12]. However, under a limited resource, the carrying capacity decreases as the microbial population consumes the carbon source. This scenario was incorporated into the modeling of equation (7). Comparison between the numerical results and the experimental results shown in Figures 3 and 4 indicates that the model is appropriate for the experimental setting. In practice, biodegradability should also depend on other factors such as temperature, dissolved oxygen, etc. Once these essentials are incorporated into the time factor of the degradation rate, the exogenous

.0 0

h k 0


Figure 4. Transition of the weight distribution of PEG after cultivation for five days.

depolymerization model should be applicable to assess the biodegradability of xenobiotic polymers in the environment.

ACKNOWLEDGEMENTS

The authors thank Ms Y. Shimizu for her technical support. This work was supported by JSPS KAKENHI 20540118.

REFERENCES
[1] F. Kawai, M. Watanabe, M. Shibata, S. Yokoyama and Y. Sudate, Experimental analysis and numerical simulation for biodegradability of polyethylene, Polymer Degradation and Stability, 76, 129-135, 2002. doi: 10.1016/S0141-3910(02)00006-X
[2] F. Kawai, M. Watanabe, M. Shibata, S. Yokoyama, Y. Sudate and S. Hayashi, Comparative study on biodegradability of polyethylene wax by bacteria and fungi, Polymer Degradation and Stability, 86, 105-114, 2004. doi: 10.1016/j.polymdegradstab.2004.03.015
[3] M. Watanabe and F. Kawai, Study on effects of microbial population in degradation process of xenobiotic polymers, submitted.
[4] M. Watanabe and F. Kawai, Numerical simulation of microbial depolymerization process of exogenous type, Proc. of the 12th Computational Techniques and Applications Conference, CTAC-2004, Melbourne, Australia, September 2004, Editors: Rob May and A. J. Roberts, ANZIAM J., 46(E), C1188-C1204, 2005. http://anziamj.austms.org.au/V46/CTAC2004/Wata
[5] M. Watanabe and F. Kawai, Mathematical study of the biodegradation of xenobiotic polymers with experimental data introduced into


Figure 5. Transition of the weight distribution of PEG after cultivation for seven days.

analysis, Proceedings of the 7th Biennial Engineering Mathematics

and Applications Conference, EMAC-2005,

Melbourne, Editors: Andrew Stacey and Bill Blyth and John Shepherd and A. J. Roberts, ANZIAM J., 47, C665 -C681, 2007. http://anziamj.austms.org.au/V47EMAC2005/Watanabe

[6] M. Watanabe and F. Kawai, Mathematical analysis of microbial depolymerization processes of xenobiotic polymers, Proceedings of the 14th Biennial Computational Techniques and Application Conference, CTAC2008, ANZIAM J., 50, C930 - C946, 2009. http://anziamj.austms.org.au/ojs/index.php/ANZIAMJ/article/view/1465

[7] M. Watanabe and F. Kawai, Modeling and simulation of biodegradation of

xenobiotic polymers based on experimental results, BIOSIGNALS 2009, Second International Conference on Bio-inspired Systems and Signal Processing, Proceedings, Porto - Portugal, 14-17 January, 2009, 25 - 34, INSTICC Press

[8] M. Watanabe and F. Kawai, STUDY ON EFFECTS OF MICROORGANISM IN DEPOLYMERIZATION PROCESS OF XENOBIOTIC POLYMERS BY MODELING AND SIMULATION, Proceedings of the First International Conference on Bioinformatics, Valencia, Spain, January 20 - 23, 2010, Editors: Ana Fred and Joaquim Filipe and Hugo

Page 85: Proc.CIAM2010[1]

Proceeding of Conf. on Industrial and Appl. Math., Indonesia 2010

78

Figure 5. Evolution of microbial population

Gamboa, 2010 INSTICC - Institute for Systems and Technologies of Information, Control and Communication, 181 – 186, 2010. [9] M. Watanabe and F. Kawai and M.

Shibata and S. Yokoyama and Y. Sudate, Computational method for analysis of polyethylene biodegradation, Journal of Computational and Applied Mathematics, 161, 1, 133 – 144, December, 2003. doi: 10.1016/S0377-0427(03)0051-X

[10] M. Watanabe and F. Kawai and M.

Shibata and S. Yokoyama and Y. Sudate and S. Hayashi, Analytical and computational techniques for

exogenous depolymerization of xenobiotic polymers, Mathematical Biosciences, 192, 19 – 37, 2004. doi: 10.1016/j.mbs.2004.06.006

[11] M. Watanabe, F. Kawai, STUDY ON

EFFECTS OF MICROORGANISM IN BIODEGRADATION OF XENOBIOTIC POLYMERS BASED ON MODELING AND SIMULATION, Proceedings, International Conference on Environmental Research and Technology (ICERT 2010), 2-4 JUNE 2010, PARKROYAL PENANG, MALAYSIA, 442-448.

[12] J.D. Murray, Mathematical Biology, Second Corrected Edition, Springer –Verlag, Berlin, 1989


PROTOTYPE OF VISITOR DISTRIBUTION DETECTOR FOR COMMERCIAL BUILDING

Sukarman, Suharyanto, Samiadji Herdjunanto

(1)Sekolah Tinggi Teknologi Nuklir-BATAN (Polytechnic Institute of Nuclear Technology-BATAN)

Jl. Babarsari P.O. Box 6101 YKBB, Post code 55281, Telp. 0274 484085, Fax 0274 489715

Email: [email protected], [email protected]
(2),(3)Department of Electrical Engineering, Gadjah Mada University, Yogyakarta

Jl. Grafika Bulak Sumur, Yogyakarta

ABSTRACT
A prototype visitor distribution detector for commercial buildings was built to determine the distribution pattern of people in a commercial building as a function of time, so that the building manager can produce visitor distribution reports and optimize electric energy savings to reduce cost. Energy savings are made by regulating the temperature of the air conditioner based on the density of visitors. The distribution of visitors is monitored with Passive Infrared (PIR) sensors: a 2x3 sensor array was placed in a room with an area of 50 m2, sized 5.77 m x 8.66 m. The sensors are mounted on the roof/ceiling pointing downwards, with a maximum detection range of five meters at an angle of 30°. The output of the sensor array is connected to an ATMEGA8535 microcontroller as a controller that communicates with a remote control and with an NM7010A-LF network module. The output of the NM7010A-LF network module is connected to a switch hub so that the data can be displayed over the network and acquired by a computer. Data acquisition uses LabVIEW software by National Instruments (NI). The results show that this prototype, with its 2x3 sensor array, can effectively detect the distribution of visitors in a room with an area of 50 m2 at a maximum distance of 5 meters. The control action that changes the temperature setting on the air-conditioner remote control works in accordance with the distribution pattern of people/visitors.

KEYWORDS
microcontroller, network module, sensor array.

I. INTRODUCTION
The purposes of this research are: to obtain a prototype for detecting the distribution of visitors in commercial buildings, to acquire visitor distribution data in commercial buildings, to obtain daily

distribution patterns presented in graphical user interface form, and to control the air-conditioning temperature based on the distribution pattern of visitors. According to Law Number 30 of 2007 on Energy, Article 25, "Energy conservation is the responsibility of the national government, local governments, employers, and society." Energy efficiency here means rational, sensible, and efficient use of energy without compromising comfort and productivity. Accordingly, one government policy to overcome the energy crisis was the issuance of Presidential Instruction No. 2/2008 on Saving Energy and Water, which is expected to further accelerate the implementation of energy and water saving, because the country currently experiences not only an energy crisis but also a water crisis. Energy and water savings are required not only of government agencies but also of the industrial sector, the business/commercial sector, and households. The business/commercial category includes shopping centers, malls, private offices, hotels, and entertainment venues. According to ASEAN criteria, a building is classified as low-cost in energy if its energy consumption is at most 150 kWh per square meter per year, while 150-200 kWh per square meter per year is classified as economical, 200-250 kWh per square meter per year as ordinary, and more than that as energy wasteful. Based on a survey conducted by the Indonesian Association of Building Physics (IAFBI), the average building in Jakarta spends 310 kWh per square meter per year, and no more than 10% of the buildings in Jakarta


use energy close to the standard figure [12]. Office buildings and industrial and commercial buildings are large consumers of electricity, whether for lighting, fans, water pumps, material processing, material handling, refrigerators, air-conditioning systems, elevators, and other drive motors. Energy consumption in buildings consists of lighting, air conditioning, elevators, and so forth [7]. Buildings absorbed almost a quarter of the annual world energy supply in the late 1980s, nearly two thirds of it supplied by fuel oil and gas, whose reserves are estimated to last no more than 100 years (30 years for Indonesia) (World Energy, 1991). Evaluated by equipment, the energy consumption for air conditioning (AC) reaches 72% of total electricity usage [16]; moreover, initial planning always exceeds the required capacity by 10-15% in consideration of future development and at the request of the building owner [24]. That is, there is wasteful investment, because the air conditioner is designed for needs exceeding those of the building. Electric energy waste occurs because outside the peak load, i.e., at low (partial) load, the air conditioner still has to work at full load, so the power consumed by the AC remains relatively high [1,14,20]. Efforts at energy saving in air-conditioning systems in previous research [21,13] include adjustable fan speed, refrigerant flow rate, indoor temperature control (thermostat), and water flow control, carried out experimentally and by simulation. In a conventional air-conditioning system, the motor knows only two conditions based on the temperature setting: if the room temperature is greater than the set temperature, the motor operates (On); conversely, it does not operate (Off) if the room temperature is smaller than the set temperature. The savings of the system are obtained in the intervals when the motor does not operate.
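The two-state On/Off behavior just described can be sketched in a few lines; the setpoint, cooling and heat-gain rates, and outside temperature below are illustrative assumptions, not values from the paper:

```python
# Sketch of conventional On/Off AC control: the compressor motor runs only
# while room temperature exceeds the setpoint, and savings accrue during
# the Off intervals. All constants are illustrative.

def on_off_control(temps_outside, setpoint=24.0, room0=28.0,
                   cool_rate=0.5, heat_rate=0.2):
    room = room0
    compressor_on = []
    for t_out in temps_outside:
        on = room > setpoint          # only two conditions: On or Off
        if on:
            room -= cool_rate         # compressor cools the room
        room += heat_rate * (t_out - room) / 10.0  # heat gain from outside
        compressor_on.append(on)
    return compressor_on

states = on_off_control([32.0] * 100)
duty_cycle = sum(states) / len(states)  # fraction of time the motor runs
```

The duty cycle falls below 1.0 once the room reaches the setpoint: every Off step is electricity not consumed, which is exactly the saving interval the text refers to.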
The more frequently such Off intervals occur, owing to a smaller cooling load, the greater the ability to save energy [20]. Air conditioning (AC) is necessary because it provides comfort, occupational health, productivity, and aesthetics related to the limitations of space and community

relations. Cool air conditions provide a comfortable atmosphere for visitors who come, so the air conditioning should be adjusted to the number of visitors: if there are few visitors, the temperature need not be too cold; logically, in a crowd, the temperature would increase. The aim is to achieve thermal comfort, a state in which the air temperature, air circulation, and cleanliness do not disturb human performance. The thermal comfort standard for tropical countries is around 26°C with 50-60% humidity [4]. A survey of existing commercial buildings in Yogyakarta, including AMPLAZ, Mirota Batik, and a bookstore, showed that the air conditioners are turned on without any schedule: energy is used for air conditioning continuously at low temperatures of 16-20°C from the time the building opens at 9:00 until 20:30. If the room load is large, i.e., the number of visitors is quite high, the air-conditioning load becomes heavy, especially when the distribution of visitors is unequal, and as a result the air conditioner works at full load. This means the electrical energy used to supply the fresh-air cooling machines becomes large. Efforts to save electricity for air conditioning, particularly in commercial buildings, have been made based on the distribution of visitors as a function of time, although other factors such as building materials, infiltration, humidity, and so forth also have influence [23]. If the distribution is known, many benefits can be obtained: the locations with the most visitors become known, activities can then be held there at the right (effective) time, and the data are automatically stored as a data logger, which provides efficiency for the building management.
The important thing is that management makes use of the visitor distribution data for electricity management, especially the settings of the air-conditioning cooling machines. All this will have an impact on increasing revenue/sales for the supermarket/mall. The AC does have sensors used to measure and regulate the room temperature; however, these sensors measure temperature sensitively and accurately only around the AC machine itself, and are not accurate for a room temperature that varies with the number of visitors.


The existing sensors on the AC machine are not sensitive enough and cannot be used to determine the distribution or the number of visitors. To determine the distribution of visitors in a room, PIR (passive infrared) sensors are used. PIR sensors are infrared sensors that can detect the presence of people (through the temperature changes people cause) in a room or building. Energy conservation should be implemented because continuously rising energy prices, the threat of rising tariffs, and soaring world oil prices will affect the basic needs of the institution/manager. Under these conditions, the manager should try to identify and explore the possibilities of energy management. Based on the above description, electric energy saving measures in commercial buildings, especially for the electrical energy consumed by refrigeration (AC), need to be based on the optimal distribution patterns of visitors in the space/building; this is described in further detail below.

II. THEORY BASIS
Pulse radiation sensors may be divided into two basic groups: photon detectors and thermal detectors, in which the absorbed radiation is first converted into heat that, in turn, causes a measurable effect. The main difference between photon and thermal detectors lies in the fact that the former measures the radiation power and the latter the radiation energy. Among thermal detectors, the following examples may be mentioned: pyroelectrics, thermocouples, thermistors, and bolometers. A particularly important group of elements are the pyroelectrics. A pyroelectric sensor as such is a capacitor formed by depositing metal electrodes on both surfaces of a thin slice of pyroelectric material.
The absorption of a radiation pulse of power P(t) by the pyroelectric material results in a change of its temperature by ΔT, which causes a polarisation change and, consequently, a displacement of electric charges in the pyroelectric material; hence the displacement current Ip(t) occurs [1]. It is therefore possible to adopt the interpretation that the capacitor of the pyroelectric sensor is charged from a current source Ip(t), induced by the heat flux absorbed by the sensor [2,3]. The equivalent circuit in Fig. 1 illustrates this interpretation.

Figure 1. Equivalent circuit of pyroelectric sensor and preamplifier.

Assuming a uniform structure of the pyroelectric material and uniform heating, the current Ip(t) may be determined from the relationship

I_P = P S (dT/dt)   (1)

where P is the pyroelectric coefficient, dT/dt is the speed of temperature changes of the pyroelectric material, and S is the surface area of the sensor electrode.

Let us assume that there are no additional heat losses and that the duration of the measured radiation pulse t_I is very short, meeting the inequality

t_I << τ_E << τ_C   (2)

where τ_E is the electric time constant of the whole electric circuit,

τ_E = [R_E R_d / (R_E + R_d)] (C_E + C_d)   (3)

and τ_C is the thermal time constant of the sensor. Then it is possible to assume that the speed of temperature changes of the pyroelectric material depends linearly on the power of the radiation P(t) falling on the sensor:

dT/dt = α P(t) / (c ρ S d)   (4)

where α is the absorption coefficient of the radiation falling on the sensor, c is the specific heat of the pyroelectric material, ρ is the density of the pyroelectric material, S is the surface of the sensor electrode, and d is the thickness of the pyroelectric material slice.

Substituting relationship (4) into relationship (1), we obtain formula (5) describing the intensity of the displacement current Ip(t):

I_P(t) = [P α / (c ρ d)] P(t) = (1/r) P(t)   (5)

where r = c ρ d / (P α). We obtain the relation

I_P = (1/r) P   (6)

The value of the output voltage signal is given by

U = (1/C) ∫ I_P(t) dt   (7)
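Relations (5)-(7) chain together numerically: the displacement current is proportional to the incident power, and the output voltage integrates that current onto the capacitance. The sketch below evaluates this chain for arbitrary placeholder parameter values (not measured sensor data):

```python
# Numeric sketch of relations (5)-(7): displacement current proportional
# to incident power, and output voltage as the integral of that current.
# All parameter values are arbitrary placeholders, not sensor data.

P_coeff = 3.0e-4   # pyroelectric coefficient (assumed units)
alpha   = 0.9      # absorption coefficient
c_rho   = 2.5e6    # specific heat times density, combined (assumed)
d       = 20e-6    # slice thickness (m)
C_cap   = 30e-12   # circuit capacitance (F)

r = c_rho * d / (P_coeff * alpha)       # responsivity factor from eq. (5)

def I_p(power):
    return power / r                    # eq. (6): I_P = (1/r) * P

# Rectangular radiation pulse: constant power over duration t_i
power, t_i, dt = 1e-3, 1e-4, 1e-6
U = 0.0
for _ in range(int(t_i / dt)):
    U += I_p(power) * dt / C_cap        # eq. (7): U = (1/C) * integral I_p dt
```

Doubling the incident power doubles I_p, and the accumulated voltage U equals I_p multiplied by the pulse duration over C, as the closed-form integral of a rectangular pulse requires.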

The Passive Infrared (PIR) sensor used here, the KC7783, the smallest type, is already equipped with a Fresnel lens and has a digital output; its use does not require a complicated signal-conditioning circuit [11]. The PIR sensor has two sensor elements connected in series [2,3,5,6,8,9,10]. This arrangement eliminates the signals caused by vibration, temperature changes, and sunlight. The PIR sensor is applied widely [15,17,18,22]. A person passing through the sensor field activates the first and the second sensor element in turn, as shown in Figure 2.

Figure 2. Digital signal output of the PIR sensor.
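The cancellation property of the series-opposed element pair can be shown with a toy signal model; this is purely illustrative and not the KC7783's actual internal signal chain:

```python
# Toy model of the two series-opposed pyroelectric elements: ambient
# temperature drift hits both elements equally and cancels in the
# difference, while a person crossing the field of view heats first one
# element, then the other, producing a bipolar pulse. Purely illustrative.

def element_response(ambient, person_heat):
    return ambient + person_heat

n = 100
ambient_drift = [0.01 * k for k in range(n)]                  # common-mode drift
person_a = [1.0 if 20 <= k < 40 else 0.0 for k in range(n)]   # element A sees person first
person_b = [1.0 if 40 <= k < 60 else 0.0 for k in range(n)]   # then element B

diff = [element_response(a, pa) - element_response(a, pb)
        for a, pa, pb in zip(ambient_drift, person_a, person_b)]
# diff is +1 while the person is over element A, -1 over element B,
# and 0 otherwise: the drift term cancels exactly.
```

This is why slow changes from sunlight or room temperature produce no output while a moving person does.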

Figure 3. Detection range of PIR sensors with and without protective caps.

Figure 3 shows the sensor range without a protective cap. Figure 4 shows the internal circuit of the PIR sensor, which is packaged as a module.

Figure 4. Block diagram of the workings of the PIR sensor.

The Fresnel lens serves to limit the energy of the incoming light and temperature, as well as acting as a filter. The IR filter passes only certain allowed wavelengths. The pyroelectric sensor works based on changes in environmental heat: when the heat changes, an output voltage is generated, and this voltage is


strengthened by the amplifier. Finally, an output voltage of 5 volts is obtained after passing through a comparator.

RESEARCH METHOD
The materials required in this research include:
1. UTP cable, used to connect the PC, hubs, and switches.
2. LabVIEW software, used to acquire data from the TCP/IP starter kit module.
3. PonyProg2000 software (Serial Device Programmer Version 2.07c Beta), used to download programs to the ATMEGA8535 microcontroller module.
4. BASIC Compiler software, used to program the ATMEGA8535 microcontroller.
5. Fiberglass (mica), 30x20 cm, on which to mount the KC7783R PIR sensors.
6. A PC (personal computer) with minimum specifications: Pentium II processor, 128 MB RAM, and a 10 GB hard disk, used to acquire data, store data, control the equipment, and display historical data.
7. A TCP/IP starter kit module with an Ethernet port.
8. An ATMEGA8535 microcontroller module, used to process data and build data communication with the TCP/IP starter kit module.
9. A remote control, used to transmit data to the air-conditioner (AC) system.

SYSTEM DESIGN
Layout design of the visitor distribution detection system for commercial buildings.

Figure 5. PIR sensor layout inside a commercial building.

Each PIR sensor with a protective cap in Figure 5 needs a spacing of 2.89 meters from the other sensors. For a space with an area of 50 m2, six sensors arranged in a 3x2 matrix are required.

Figure 6. PIR sensor detection area.

Figure 6 indicates the coverage of a sensor facing downwards at an angle of 30° with the guard, so that one sensor can cover a distance of 2.89 meters.
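The quoted 2.89 m spacing, the 8.66 m x 5.77 m room, and the 50 m2 area are mutually consistent under a simple cone-footprint reading of the geometry (sensor at height h, detection angle measured from the vertical); that reading is an assumption of this sketch, not stated explicitly in the paper:

```python
import math

# Geometry check: a ceiling sensor at height h with detection angle theta
# (assumed measured from the vertical) covers a footprint of radius
# h*tan(theta). The 2.89 m spacing and the 8.66 m x 5.77 m room for a
# 3x2 array follow from this reading.

h = 5.0                      # mounting height / max detection distance (m)
theta = math.radians(30.0)   # detection angle, assumed from vertical
spacing = h * math.tan(theta)

room_length = 3 * spacing    # 3 sensor columns
room_width = 2 * spacing     # 2 sensor rows
area = room_length * room_width
# spacing ~ 2.89 m, room ~ 8.66 m x 5.77 m, area ~ 50 m2
```

The three numbers reported in the paper fall out of a single parameter pair (h, theta), which supports this interpretation of the layout.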

Figure 7. Block diagram of the detection system.

Figure 7 shows a block diagram of data acquisition from the PIR sensor array, the microcontroller module, the network module, the actuator, and the personal computer. The microcontroller is used to collect data and to send control data to the actuator. The network module is used to transmit the visitor distribution information to a computer display. The remote control is used as an actuator that will send


data to the cooling machine based on the distribution of visitors in the room. The personal computer is used to record the visitor distribution data at every instant.

RESULTS AND DISCUSSION
Table 1 shows the sensor output as the distance between the sensor and an object (visitor) varies from 30 cm to 600 cm. At distances of 30 cm to 500 cm, the sensor can still detect the object.

Table 1. Individual PIR sensor output as a function of distance at 30° coverage angle

Proximity detection (cm)   Output sensor (V)   Comment
30     5.09   High
60     5.09   High
90     5.09   High
120    5.09   High
150    5.09   High
200    5.09   High
250    5.09   High
300    5.09   High
350    5.09   High
400    5.09   High
450    5.09   High
500    5.09   High
550    0.0    Low
600    0.0    Low

Detection is shown by an output voltage of 5 volts, while an output of 0 volts means that the sensor cannot detect the object.
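The control action implied by these outputs, counting the High zones and choosing an air-conditioner setpoint from the visitor density, can be sketched as follows; the threshold and setpoint values are illustrative assumptions, and the real prototype drives an AC remote control rather than a function call:

```python
# Sketch of the control action: count occupied zones from the 2x3 PIR
# array (High ~ 5.09 V means a visitor detected) and choose an AC
# setpoint from the visitor density. Thresholds and setpoints are
# illustrative assumptions.

HIGH_LEVEL = 5.09  # sensor output when an object is detected (volts)

def occupied_zones(sensor_volts, threshold=2.5):
    return sum(1 for v in sensor_volts if v > threshold)

def ac_setpoint(sensor_volts):
    n = occupied_zones(sensor_volts)
    if n <= 1:
        return 26   # few visitors: normal mode, warmer setpoint (deg C)
    elif n <= 3:
        return 24
    else:
        return 22   # crowded: low mode, coldest setpoint

quiet = [0.0, 0.0, HIGH_LEVEL, 0.0, 0.0, 0.0]
crowded = [HIGH_LEVEL] * 6
```

A quiet room keeps the setpoint warm and the compressor mostly off; a crowded room lowers it, matching the paper's normal/low remote-control modes.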

Figure 8. Room plan display.

Figure 8 shows the graphical display used to monitor the occupancy status of the space. Light green indicates that there are visitors, and the blue color of an air conditioner indicates that the cooling machine is set at a lower temperature than before.

Figure 9. Data recording of visitors.

Figure 9 shows the graphical user interface displaying the recorded visitor data for the entire room (area-0 to area-5). These data are stored at every instant into memory for further processing.

III. CONCLUSION
The visitor distribution detector with Passive Infrared (PIR) sensors has a maximum range of five meters, at an angle of 60° with no additional protection. At the larger detection angle without additional protection, the PIR sensor can detect objects over a larger area than at a smaller detection angle. The 2x3 PIR sensor array with no additional protection can cover an area of 112.5 m2, sized 12.09 x 8.06 meters, at a detection angle of 60°. Meanwhile, with additional protection it can cover a room with an area of 50 m2, 8.66 meters long and 5.77 meters wide, at a detection angle of 30°. This prototype distribution detector can be used to determine the distribution pattern of visitors in commercial buildings at any time, which benefits the manager of a commercial building. The air-conditioning temperature settings can be made automatically (not manually) based on the distribution pattern of visitors, so that savings in electric energy consumption by the air conditioner can be obtained.


REFERENCES
[1] Arismunandar, Wiranto, 1995, "Penyegar Udara", Penerbit Pradnya Paramita, Jakarta.
[2] Brox, E. Steven, 1991, Passive Infrared/Acoustic Pool Security System, United States Patent.
[3] Carr, J. J., 1993, Sensors and Circuits: Sensors, transducers, and supporting circuits for electronic instrumentation, measurement, and control, PTR Prentice Hall, New Jersey.
[4] Elyza, Rizki, et al., 2005, Buku Panduan Efisiensi Energi di Hotel, ISBN 979-98399-2-0, Jakarta.
[5] Fraden, S. J., 1997, Handbook of Modern Sensors, 2nd ed., NY: AIP Press.
[6] Gran et al., 1995, Infra-Red Sensor System For Intelligent Vehicle Highway System, United States Patent, USA.
[7] Karyono, Tri Harso, 1999, Arsitektur Kemapanan, Pendidikan, Kenyamanan dan Penghematan Energi, PT Catur Libra Optima, Jakarta, p. 126.
[8] Kumada, Akira, 1993, Infrared Detector With Pyroelectric Detector Element And Chopper Control Circuit, United States Patent.
[9] Lang, S. B., 1974, Sourcebook of Pyroelectricity, Gordon and Breach, London.
[10] Lee, Wade, 1994, Wide Angle Passive Infrared Radiation Detector, United States Patent.
[11] Mohd Syaryadi, Agus Adria, and Syukurullah, 2007, Sistem kendali keran wudhuk menggunakan sensor PIR berbasis mikrokontroler AT89C2051, Jurnal Rekayasa Elektrika, Vol. 6, pp. 14-20.
[12] Nasution, H. and Mat Nawi Wan Hassan, 2004, Hemat energi pada sistem "air conditioning" sebagai upaya mengatasi krisis energi di Indonesia, Proc. of the International Scientific Meeting Indonesian Student 2004, September, United Kingdom.
[13] Nasution, H., 2005, Aplikasi Kendali Logika Fuzzy pada Sistem Pendingin Bangunan sebagai Upaya Penghematan Energi, Padang.
[14] Nasution, H. and Mat Nawi Wan Hassan, 2005, Energy saving for air conditioning by proportional control, variable and constant speed motor compressor, Proc. of the 2nd International Conference on Mechatronics 2005, pp. 492-498, May, Kuala Lumpur.
[15] Odon, Andrzej, 2001, Processing of signal of pyroelectric sensor in laser energy meter, Measurement Science Review, Volume 1, Number 1, Poland.
[16] Qureshi, T. Q. and Tassou, S. A., 1996, Variable speed capacity control in refrigeration systems, J. Applied Thermal Engineering, Vol. 16, No. 2, pp. 103-113.
[17] Schwarz, Frank, 1996, Infrared Detector For Detecting Motion And Fire And An Alarm System Including The Same, United States Patent.
[18] Sheffer, Eliezer A., 1992, Pattern Recognizing Passive Infrared Radiation Detection System, United States Patent.
[19] Sumanto, 2000, "Dasar-dasar Mesin Pendingin", Penerbit Andi, Yogyakarta.
[20] Sumeru and Sutandi, T., 2007, Penghematan energi pada mesin pendingin dengan variasi putaran kompresor, Jurnal Teknik Mesin, Vol. 7, No. 2, pp. 80-86, Institut Teknologi Sepuluh November.
[21] Tojo, K., Ikegawa, M., Shiibayashi, M., Arai, N. and Uchikawa, N., 1984, A scroll compressor for air conditioners, Proc. of the 1984 International Compressor Engineering Conference at Purdue, pp. 496-503, July, Purdue.
[22] Wijaya, Handoyo, 2008, Application Note DT51, Innovative Electronics team, Surabaya.
[23] Wilbert F. Stoecker and Jerold W. Jones, 1989, "Refrigerasi dan Pengkondisian Udara", 2nd printing, Airlangga, Jakarta.
[24] Yu, P. C. H., 2001, A study of energy use for ventilation and air-conditioning systems in Hong Kong, PhD Thesis, The Hong Kong Polytechnic University.


APPLICATION OF ANFIS FOR NOISE CANCELLATION

Sukarman
(1)Sekolah Tinggi Teknologi Nuklir - Badan Tenaga Nuklir Nasional, Yogyakarta, DIY

(Email: [email protected])

ABSTRACT
APPLICATION OF ANFIS FOR NOISE CANCELLATION. The objective of adaptive noise cancellation is to filter out an interference component by identifying a linear model between a measurable noise and the corresponding unmeasurable interference. Adaptive noise cancellation using linear filters has been successful in real-world applications such as interference canceling in electrocardiograms (ECGs), echo elimination on long-distance telephone transmission lines, and antenna sidelobe interference canceling. The concept of linear adaptive noise cancellation can therefore be extended into the nonlinear realm using nonlinear adaptive systems. This experiment shows how ANFIS can be used to identify unknown nonlinear passage dynamics that transform a noise source into an interference component in a detected signal. Given an information signal x(k) and a measurable noise source signal n(k), the noise source goes through unknown nonlinear dynamics to generate a distorted noise d(k), which is then added to x(k) to form the measurable signal y(k). The task is to retrieve the information signal x(k) from the overall output signal y(k), which consists of the information signal x(k) plus d(k), a distorted and delayed version of n(k). The results show that ANFIS performs well in removing the unknown distorted noise signal d(k) from the measured signal y(k).

KEYWORDS
ANFIS; adaptive system; nonlinear system.

I. INTRODUCTION
The objective of adaptive noise cancellation is to filter out an interference component by identifying a linear model between a measurable noise and the corresponding unmeasurable interference. The concept of linear adaptive noise cancellation can be extended into the nonlinear realm using nonlinear adaptive systems [7,10]. This experiment shows how ANFIS can be used to identify unknown nonlinear passage dynamics that transform a noise source into an interference component in a detected signal.
Given an information signal x(k) and a measurable noise source signal n(k), the noise source goes through

unknown nonlinear dynamics to generate a distorted noise d(k), which is then added to x(k) to form the measurable y(k). The task is to retrieve the information signal x(k) from the overall output signal y(k), which consists of the information signal x(k) plus d(k), a distorted and delayed version of n(k). The results show that ANFIS performs well in removing the unknown distorted noise signal d(k) from the measured signal y(k).

II. ANFIS SYSTEM
ANFIS is the adaptive neuro-fuzzy training of Sugeno-type fuzzy inference systems. ANFIS uses a hybrid learning algorithm to identify the membership function parameters of single-output, Sugeno-type fuzzy inference systems (FIS). A combination of least-squares and backpropagation gradient-descent methods is used to train the FIS membership function parameters to model a given set of input/output data [14,15]. In 1993, Roger Jang suggested the Adaptive Neuro-Fuzzy Inference System (ANFIS) [7,11]. ANFIS can serve as a basis for constructing a set of fuzzy if-then rules with appropriate membership functions to generate the stipulated input-output pairs. Here, the membership functions are tuned to the input-output data, and excellent results are possible. Fundamentally, ANFIS is about taking an initial fuzzy inference system (FIS) and tuning it with a backpropagation algorithm based on the collection of input-output data. The basic structure of a fuzzy inference system consists of three conceptual components [4,6,16]: a rule base, which contains a selection of fuzzy rules; a database, which defines the membership functions used in the fuzzy rules; and a reasoning mechanism, which performs the inference procedure upon the rules and the given facts to derive a reasonable output or conclusion. Adaptive noise cancellation was first proposed by Widrow and Glover in 1975 [1].
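A first-order Sugeno system of the kind ANFIS tunes can be written out in a few lines. This one-input, two-rule toy, with membership and consequent parameters invented for illustration, shows the weighted-average defuzzification step:

```python
import math

# Toy first-order Sugeno fuzzy inference system with one input, two rules:
#   Rule 1: IF x is Small THEN y1 = a1*x + b1
#   Rule 2: IF x is Large THEN y2 = a2*x + b2
# The output is the firing-strength-weighted average of the consequents.
# ANFIS would tune these parameters from data; here they are invented.

def gauss(x, center, sigma):
    return math.exp(-((x - center) ** 2) / (2 * sigma ** 2))

def sugeno(x):
    w1 = gauss(x, center=0.0, sigma=1.0)   # membership "Small"
    w2 = gauss(x, center=4.0, sigma=1.0)   # membership "Large"
    y1 = 1.0 * x + 0.0                     # consequent of rule 1
    y2 = -0.5 * x + 5.0                    # consequent of rule 2
    return (w1 * y1 + w2 * y2) / (w1 + w2)

# Near x = 0 the output follows rule 1; near x = 4 it follows rule 2.
```

In ANFIS, backpropagation adjusts the Gaussian centers and widths (premise parameters), while least squares fits the linear coefficients of y1 and y2 (consequent parameters), which is exactly the hybrid split described above.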


The interference component was filtered out by identifying a linear model between a measurable noise and the corresponding unmeasurable interference. Adaptive noise cancellation using linear filters has been successful in real-world applications such as interference canceling in electrocardiograms (ECGs), echo elimination on long-distance telephone transmission lines, and antenna sidelobe interference canceling [2]. Noise cancellation is a variation of optimal filtering that is highly advantageous in many applications.

The usual method of estimating a signal corrupted by additive noise is to pass the composite signal through a filter that tends to suppress the noise while leaving the signal relatively unchanged. The design of such filters is the domain of optimal filtering, which originated in the pioneering work of Wiener and was extended and enhanced by the work of Kalman, Bucy, and others. Filters used for this purpose can be fixed or adaptive [8,9]. The design of fixed filters must be based on prior knowledge of both the signal and the noise, whereas adaptive filters have the ability to adjust their own parameters automatically, and their design requires little or no prior knowledge of signal or noise characteristics [3,12]. Obviously, the concept of linear adaptive noise cancellation can be extended into the nonlinear realm using a nonlinear adaptive system. This section shows how ANFIS can be used to identify unknown nonlinear passage dynamics that transform a noise source into an interference component in a detected signal. Under certain conditions, the proposed approach is more suitable than noise elimination techniques based on frequency-selective filtering. Figure 1 shows the schematic diagram of the ideal situation to which adaptive noise cancellation can be applied. Here there is an unmeasurable information signal x(k) and a measurable noise source signal n(k). The noise source goes through unknown nonlinear dynamics to generate a distorted noise d(k), which is then added to x(k) to form the measurable output signal y(k). The task is to retrieve the information signal x(k) from the overall output signal y(k), which consists of the information signal x(k) plus d(k), a distorted and delayed version of n(k).

Figure 1. Schematic diagram of noise cancellation without ANFIS Filter

In symbols, the detected output signal is expressed as

y(k) = x(k) + d(k) = x(k) + f(n(k), n(k-1), n(k-2), ...)   (1)

The function f(.) represents the passage dynamics that the noise signal n(k) goes through. If f(.) were known exactly, it would be easy to recover the original information signal by subtracting d(k) from y(k) directly. However, f(.) is usually unknown in advance and could be time-varying due to changes in the environment. Moreover, the spectrum of d(k) may overlap that of x(k) substantially, invalidating the use of common frequency-domain filtering techniques.

Figure 2. Schematic diagram of noise cancellation with ANFIS filter.

To estimate the distorted noise signal d(k), we need to pick up a clean version of the noise signal n(k) that is independent of the information signal. However, we cannot access the distorted noise signal d(k) directly, since it is an additive component of the overall measurable signal y(k). Fortunately, as long as the information signal x(k) is zero-mean and not correlated with the noise signal n(k), we can use the detected

f

X(k) Information signal

n(k) Noise Source (measureable)

y(k)=x(k)+d(k) detected signal (measurable)

d(k) Distorted Noise (not measureable)

f

X(k)

n(k)

y(k)=x(k)+d(k)

d(k)

ANFIS (k x(k)=x(k)+d(k)- (k))

+ +

+

-

Page 95: Proc.CIAM2010[1]

Proceeding of Conf. on Industrial and Appl. Math., Indonesia 2010

88

signal y(k) as desire output for ANFIS training, as shown in figure (2).

Let the output of ANFIS be denoted by d̂(k). The learning rule of ANFIS tries to minimize the squared error

e²(k) = (y(k) − d̂(k))²
      = (x(k) + d(k) − d̂(k))²
      = (x(k) + d(k) − f̂(n(k), n(k−1), ...))²   (2)

where f̂ is the function implemented by ANFIS. Since x(k) is not correlated with n(k) or its history, ANFIS has no clue how to minimize the error component attributable to x. In other words, the information signal x serves as an uncorrelated "noise" component in the data-fitting process, so ANFIS can do nothing about it except pick up its steady-state trend. Instead, the best that ANFIS can do is to minimize the error component attributable to d(k), that is, to drive d̂(k) toward d(k),

and this happens to be the desired error measure; it is as if d(k) could be measured directly. Equation (2) can be expanded to

e²(k) = x²(k) + (d(k) − d̂(k))² + 2x(k)d(k) − 2x(k)d̂(k)   (3)

Because x(k) is not correlated with d(k) or d̂(k), taking expectations gives

E[e²] = E[x²] + E[(d − d̂)²] + 2E[x(d − d̂)]   (4)

If x(k) is a random signal with zero mean, then ANFIS has no way to model it, and E[x d̂] approaches zero as the number of samples goes to infinity. This implies E[x(d − d̂)] = 0, and we have

E[e²] = E[x²] + E[(d − d̂)²]   (5)

where E[x²] is not affected when ANFIS is adjusted to minimize E[(d − d̂)²]. Therefore, training ANFIS to minimize the total error E[e²] is equivalent to minimizing E[(d − d̂)²], such that the ANFIS function f̂ approximates f in a least-squares sense. Note that x(k) is the information we want to recover, but it also serves as additive "noise" in the ANFIS training. To simplify the problem, it is assumed that:
1. x(k) is a zero signal for all k.
2. The premise parameters are fixed, and the consequent parameters of ANFIS are updated using the least-squares method.

Assumption 1 implies that we can obtain perfect training data that are subject only to measurement noise. Assumption 2 states that we use an ANFIS with linear parameters only. Even with perfect training data, an ANFIS with modifiable linear parameters only would produce a fitting error e(k) equal to the difference between the desired output and the ANFIS output; this error term is attributable to measurement noise and/or modeling errors.

III. SIMULATION RESULTS

In the experiment, ANFIS was applied to two nonlinear passage dynamics, of order 2 and order 3, respectively. The unknown nonlinear passage dynamics were assumed to be defined as

(7)

where n(k) is the noise source and d(k) is the result of the nonlinear passage dynamics f(·) applied to n(k) and n(k−1). Figure 11 displays the function f(·) as a 3-dimensional surface [13]. Since f(·) is unknown, we use ANFIS to approximate this function, under the assumption that f(·) is known to be of order 2. The information signal x(k) is assumed to be expressed as

(7)

where k is a step count and the sampling period is equal to 5 ms. Figure 3 shows x(k) when k runs from 0 to 1000 (or when


time runs from 0 to 5 s). The measurable noise n(k) is assumed to be Gaussian with zero mean and unit variance, as shown in Figure 4. The resulting distorted noise d(k) produced by the nonlinear dynamics in equation (7) is shown in Figure 5. The measurable signal at the receiving end, denoted y(k), is equal to the sum of x(k) and d(k) and is shown in Figure 6. Due to the nonlinear passage dynamics f(·) and the large amplitude of d(k), it is hard to correlate y(k) and x(k) in the time domain.

Figure 3. Information signal x(k).

Figure 4. Noise signal n(k).

Figure 5. Distorted noise signal d(k).

Figure 6. Measured output signal y(k).

Figures 7 to 10 show the spectral density distributions of x(k), n(k), d(k) and y(k), respectively, for the first 256 points. Obviously, the spectra of x(k) and d(k) overlap considerably, making it impossible to employ frequency-domain filtering techniques to remove d(k) from y(k). To use ANFIS in this situation, 500 training data pairs of the following form were collected:

(8)

with k running from 1 to 500. A four-rule ANFIS was used to fit the training data, in which each of the two inputs was assigned two generalized bell membership functions.
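Under the two assumptions above, the consequent training reduces to a linear least-squares fit against the detected signal y(k). The sketch below imitates that setup with a plain polynomial least-squares model standing in for the four-rule ANFIS; the signals, the passage dynamics f, and all coefficients are invented for illustration and are not the paper's data.

```python
import math
import random

random.seed(0)

def lstsq(F, y):
    """Solve the normal equations (F^T F) theta = F^T y by Gaussian elimination."""
    m = len(F[0])
    A = [[sum(r[i] * r[j] for r in F) for j in range(m)] for i in range(m)]
    b = [sum(r[i] * v for r, v in zip(F, y)) for i in range(m)]
    for c in range(m):                         # forward elimination with pivoting
        p = max(range(c, m), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        b[c], b[p] = b[p], b[c]
        for r in range(c + 1, m):
            f = A[r][c] / A[c][c]
            for j in range(c, m):
                A[r][j] -= f * A[c][j]
            b[r] -= f * b[c]
    theta = [0.0] * m
    for c in range(m - 1, -1, -1):             # back substitution
        theta[c] = (b[c] - sum(A[c][j] * theta[j] for j in range(c + 1, m))) / A[c][c]
    return theta

# Illustrative signals (not the paper's): zero-mean information x(k),
# measurable noise n(k), and a made-up nonlinear passage producing d(k).
N = 1000
n = [random.gauss(0.0, 1.0) for _ in range(N + 1)]
x = [math.sin(0.06 * k) for k in range(N)]
d = [0.9 * n[k + 1] - 0.6 * n[k] * n[k + 1] + 0.2 * n[k] ** 2 for k in range(N)]
y = [x[k] + d[k] for k in range(N)]

# Stand-in for the ANFIS consequent update (also least squares): fit d_hat from
# polynomial features of (n(k), n(k-1)), with the detected y as desired output.
F = [[1.0, n[k + 1], n[k], n[k] * n[k + 1], n[k] ** 2, n[k + 1] ** 2] for k in range(N)]
theta = lstsq(F, y)
d_hat = [sum(f * t for f, t in zip(row, theta)) for row in F]
x_hat = [y[k] - d_hat[k] for k in range(N)]    # estimated information signal

mse_before = sum((y[k] - x[k]) ** 2 for k in range(N)) / N   # power of d(k)
mse_after = sum((x_hat[k] - x[k]) ** 2 for k in range(N)) / N
```

Because x(k) is uncorrelated with the noise features, the fit approximates d(k) and the residual y(k) − d̂(k) approximates x(k), exactly as argued via equations (3)-(5).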

Figure 7. power spectral density of x(k).


Figure 8. power spectral density of y(k).

Figure 9. power spectral density of n(k).

Figure 10. power spectral density of d(k).

Figure 12 shows the ANFIS surface f̂(·) after 20 epochs of batch learning. Figure 13 is a scatter plot of the training data. Figure 14 is the RMSE curve through the 20 epochs. The starting point of the RMSE curve shows the error when only the linear parameters have been identified by LSE; by also updating the nonlinear parameters, the error was decreased further.

Figure 11. Actual nonlinear passage dynamics f(·).

Figure 12. ANFIS function f̂(·).

Figure 13. Training data distribution.


Figure 14. RMSE Curve

Figures 15 to 18 show the membership functions before and after training, reflecting the changes in the premise (nonlinear) parameters. Note that the error cannot be minimized to zero; the minimum error is governed by the information signal x(k), which appears as fitting noise.

Figure 15. MFs before training for n(k)

Figure 16. MFs before training for n(k-1).

Figure 17. MFs after training for n(k)

Figure 18. MFs after training for n(k-1).
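The generalized bell membership functions whose shapes change in Figures 15-18, and the four rule firing strengths they produce in the two-input ANFIS, can be sketched as follows; the (a, b, c) parameters are illustrative assumptions, not the trained values.

```python
def gbellmf(x, a, b, c):
    """Generalized bell membership function: 1 / (1 + |(x - c)/a|**(2b))."""
    return 1.0 / (1.0 + abs((x - c) / a) ** (2 * b))

# Two bell MFs per input; parameters (a, b, c) are made up for illustration.
mfs_nk  = [(1.0, 2.0, -1.5), (1.0, 2.0, 1.5)]   # for input n(k)
mfs_nk1 = [(1.0, 2.0, -1.5), (1.0, 2.0, 1.5)]   # for input n(k-1)

def firing_strengths(nk, nk1):
    """Rule firing strengths of the four-rule ANFIS (product T-norm),
    returned both raw and normalized (ANFIS layers 2 and 3)."""
    w = [gbellmf(nk, *p) * gbellmf(nk1, *q) for p in mfs_nk for q in mfs_nk1]
    s = sum(w)
    return w, [wi / s for wi in w]

w, w_norm = firing_strengths(0.2, -0.4)
```

In training, the premise parameters (a, b, c) are the nonlinear parameters adjusted by gradient descent, while the consequent (linear) parameters are identified by LSE, as described above.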

By using ANFIS, the estimated output of the nonlinear passage is expressed as d̂(k), shown in Figure 19. The estimated information signal x̂(k), derived as y(k) − d̂(k), is shown in Figure 20. The difference between x̂(k) and x(k) is shown in Figure 21. Note that x̂(k) is already fairly close to x(k); the estimation error in Figure 21 is expected to decrease if more training data are used over more training epochs.


Figure 19. ANFIS output d̂(k).

Figure 20. Estimated signal x̂(k).

Figure 21. Estimated error signal x̂(k) − x(k).

Figure 22. Original signal x(k).

IV. CONCLUSIONS

ANFIS has been applied to noise cancellation in which the information signal is corrupted by additive noise. The information signal x(k) and a measurable noise source signal n(k) are given; the noise source goes through unknown nonlinear dynamics to generate a distorted noise d(k), which is then added to x(k) to form the measurable signal y(k). The task is to recover x(k) from the overall output signal y(k), which consists of x(k) plus d(k), a distorted and delayed version of n(k). The error cannot be minimized to zero: the minimum error is governed by the information signal x(k), which appears as fitting noise. The results show that ANFIS performed well in removing the unknown distorted noise d(k) from the measured signal y(k). In future work, ANFIS will be applied to an actual external source signal.

REFERENCES

[1] Anonymous. 1995. Fuzzy Logic Toolbox User Guide. The MathWorks, Inc., Apple Hill Drive, Natick.

[2] B. Widrow and J. R. Glover, "Adaptive Noise Cancelling: Principles and Applications," Proc. IEEE, 63:1692-1716, 1975.

[3] B. Widrow and S. D. Stearns, Adaptive Signal Processing, Prentice Hall, Upper Saddle River, NJ, 1985.

[4] Fausett, L. 1994. Fundamentals of Neural Networks.

[5] Hanselman, D., Littlefield, B. 1996. Mastering Matlab, A Comprehensive Tutorial and Reference. Prentice Hall, Upper Saddle River, New Jersey.


[6] George J. Klir and Bo Yuan, 1995, Fuzzy Sets and Fuzzy Logic: Theory and Applications, Prentice Hall PTR, Upper Saddle River, New Jersey 07458, USA.

[7] J.-S. R. Jang, C.-T. Sun, E. Mizutani, 1997, Neuro-Fuzzy and Soft Computing: A Computational Approach to Learning and Machine Intelligence, pp. 523-533, Prentice Hall, USA.

[8] S. S. Haykin, Adaptive Filter Theory, Prentice Hall, Upper Saddle River, NJ, 2nd edition, 1991.

[9] Ronaldy et al., 2003, Noise Reduction Using an Adaptive Filter with the Recursive Least Squares (RLS) Algorithm. Undergraduate thesis, Department of Computer Systems, Universitas Bina Nusantara.

[10] C. P. Kurian, S. Kuriachan, J. Bhat and R. S. Aithal, "An adaptive neuro-fuzzy model for the prediction and control of light in integrated lighting schemes," Lighting Research & Technology, Volume 37, Number 4, 2005, pp. 343-352.

[11] Meng Joo Er and Zhengrong Li, "Adaptive noise canceling: Principles and applications," Proc. IEEE, vol. 63, no. 12, pp. 1692-1716, 1975.

[12] Widrow, B., Stearns, S. D. 1985. Adaptive Signal Processing. Prentice Hall Inc., Englewood Cliffs, New Jersey.

[13] Heikki Koivo, 2000, ANFIS (Adaptive Neuro-Fuzzy Inference System).

[14] L. R. D. Suresh, S. Sundaravadivelu, 2006, "Real Time Adaptive Nonlinear Noise Cancellation using Fuzzy Logic for Optical Wireless Communication System with Multi-scattering Channel," Engineering Letters, 13:3, EL_13_3_8 (advance online publication: 4 November 2006).

[15] Howard Demuth, Mark Beale, Martin Hagan, 1992, Neural Network Toolbox User's Guide, The MathWorks, Inc., USA.

[16] Robert Fuller, 1995, Neural Fuzzy Systems, ISBN 951-650-624-0, ISSN 0358-5654, Turku Centre for Computer Science, 1995.


THE STABILITY OF THE MECD SCHEME FOR LARGE SYSTEM OF ORDINARY DIFFERENTIAL EQUATIONS

Supriyono

Polytechnic Institute of Nuclear Technology National Nuclear Energy Agency

Jl. Babarsari Kotak Pos 6101/YKBB Yogyakarta - Indonesia Email : [email protected]

Abstract. The Stability of the MECD Scheme for Large Systems of Ordinary Differential Equations. This paper introduces a modification of extrapolation for solving large systems of second-order ordinary differential equations. Some basic problems of this method, including stability, are analyzed from the theoretical point of view. We consider the application of the MECD (modified extrapolated central difference) scheme mainly to linear systems derived from a finite element approximation. The MECD scheme is explicitly expressed as a second-order difference equation. The MECD scheme has almost the same accuracy as the ECD (extrapolated central difference) scheme, but the computational work of the ECD is about twice that of the MECD. The upper limit of the time mesh Δt ensuring stability of the MECD scheme in the sense of energy is about √2 times that of the CD (central difference) scheme; this appears to be the best possible estimate for both the ECD and the MECD.

Key Words: Ordinary Differential Equations, Extrapolation, Modification, Finite Element Method, Stability

Introduction. Solving initial value problems for large systems of second-order ordinary differential equations is one of the most important problems in scientific computation. In the stepwise integration of these problems the discretization of the time derivatives has to be treated carefully, since in practical computation several hundreds or thousands of steps are necessary to obtain useful information, and in such a process the accumulation of truncation error may seriously influence the results. The purpose of the present paper is to propose the use of an extrapolation technique and to analyze the stability of the ECD and MECD schemes from the theoretical point of view. In particular, an extrapolation is applied to the one-step central difference (CD) scheme [4]; the result is called the extrapolated central difference (ECD) scheme in this paper. As an efficient method for such cases we presented in [6] a method derived by applying the extrapolation technique to central difference solutions, which we call the modified extrapolated central difference (MECD) scheme. The application of extrapolation techniques to ODEs has a long history, and many research papers indicating their efficiency have been published; we refer, for example, to D. C. Joyce's survey paper [5] and its references for details. Nevertheless, as far as the authors know, extrapolation is not so widely utilized in the actual computation of very large systems in science and engineering. This is probably because such research does not necessarily take the size or special character of the systems into account; as a consequence, the interest is limited mainly to accuracy, and the discussion has been directed to the behavior of the extrapolation as the step size tends to zero.

The idea of the extrapolation is simple. Consider a one-step method for an initial value problem. Let Δt be the basic step size and y_Δt be the approximate value obtained by using the step sizes Δt, Δt/2, Δt/4, .... Assume that the error permits an expansion of the form

y_Δt = y + a_1 Δt² + a_2 Δt⁴ + ...

We seek an approximate value ŷ_1 for y_1 from the equations

y_Δt = y_1 + a_1 Δt² ;   y_{Δt/2} = y_1 + a_1 (Δt/2)².

Then, since

ŷ_1 = (4 y_{Δt/2} − y_Δt) / 3 = y_1 + O(Δt⁴),


the error reduces to O(Δt⁴). The value ŷ_1 is the approximate value obtained in one extrapolation. If we also use the approximate value y_{Δt/4} and eliminate the O(Δt⁴) term, then we can get a higher extrapolation with higher accuracy.

After this introduction, in section 2 we introduce a one-step integration scheme equivalent to the standard CD scheme. In section 3 the simplest extrapolation formula for the CD scheme is derived. Section 4 is devoted to a difference expression for the solution of the ECD scheme. In section 5 we give a sufficient condition ensuring the stability of the MECD scheme in the sense of energy.

Derivation of a one-step scheme equivalent to the central difference scheme. Consider an initial value problem for N ordinary differential equations

M y'' + K y = f(t)   (1)

derived from a finite element approximation of the equation of motion describing a linear elastic vibration. Here M and K are the mass and stiffness matrices, respectively. We assume that M is diagonal and K is symmetric and positive definite. The right-hand side of (1) is assumed to be zero in this paper, since this term has no essential influence on the following discussion. We also assume the inverse inequality [2]

(K y, y) ≤ (C₀ / h)² (M y, y)  for all y,   (2)

where h is a parameter denoting the size of the finite elements. In order to apply the extrapolation technique we rewrite the central difference scheme as a one-step scheme. Introducing A = M⁻¹K, write (1) as

y'' = −A y   (3)

The central difference approximation of this system is

(y_{i+1} − 2 y_i + y_{i−1}) / Δt² = −A y_i ,  i = 1, 2, 3, ...   (4)

Introduce z_i (i = 1, 2, 3, ...) by

z_0 = y'_0 ;   z_i = z_{i−1} − (Δt/2) A (y_i + y_{i−1}) ,  i = 1, 2, 3, ...

(5)

Theorem 1: The sequence y_i determined by (4) satisfies

y_{i+1} = y_i + Δt z_i − (Δt²/2) A y_i ,  i = 1, 2, 3, ...   (6)

provided y_1 is given by

y_1 = y_0 + Δt z_0 − (Δt²/2) A y_0   (7)

Substitution of (6) into (5) leads to the equation

z_i = −Δt A (I − (Δt²/4) A) y_{i−1} + (I − (Δt²/2) A) z_{i−1}

Therefore the central difference equation (4) is written in matrix form as

(y_{i+1}, z_{i+1})ᵀ = Q_Δt (y_i, z_i)ᵀ ;

Q_Δt = [ I − (Δt²/2)A              Δt I
         −Δt A (I − (Δt²/4)A)      I − (Δt²/2)A ]   (8)

provided y_1 is determined by (7). The computation proceeds as follows:

z_0 = y'(0) ;
y_{i+1} = (I − (Δt²/2) A) y_i + Δt z_i ,  i = 1, 2, 3, ... ;
z_{i+1} = z_i − (Δt/2) A (y_{i+1} + y_i).

Note that a computation of the type A y_i, which involves the stiffness matrix K, appears only once at each step if the vector A y_i is stored.
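For a scalar problem y'' = −Ay the recursion above can be sketched directly; the test problem and step size below are chosen only for illustration.

```python
import math

def cd_step(y, z, A, dt):
    """One step of the one-step (leap-frog equivalent) central difference
    scheme for y'' = -A y, here with scalar A."""
    y_new = (1.0 - 0.5 * dt * dt * A) * y + dt * z
    z_new = z - 0.5 * dt * A * (y_new + y)
    return y_new, z_new

# Integrate y'' = -y, y(0) = 1, y'(0) = 0; the exact solution is y(t) = cos(t).
A, dt, steps = 1.0, 0.01, 1000
y, z = 1.0, 0.0
for _ in range(steps):
    y, z = cd_step(y, z, A, dt)
err = abs(y - math.cos(steps * dt))   # global error of the second-order scheme
```

Note that only one multiplication by A per step is needed, mirroring the remark about storing A y_i.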

Extrapolation of the central difference scheme. In order to derive an extrapolation formula we have to know the asymptotic expansion of the error. In our case, however, the extrapolation formula is easily determined.
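The generic cancellation described in the introduction can be observed numerically. As an illustration only, the sketch below applies the same (4·half-step − full-step)/3 combination to the composite trapezoidal rule, whose error likewise expands in even powers of the step; the integrand is an arbitrary choice.

```python
import math

def trapezoid(f, a, b, m):
    """Composite trapezoidal rule with m panels; its error expands in
    even powers of the step h, so Richardson extrapolation applies."""
    h = (b - a) / m
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, m))
    return h * s

exact = math.e - 1.0                          # integral of exp on [0, 1]
T_h  = trapezoid(math.exp, 0.0, 1.0, 8)       # step h
T_h2 = trapezoid(math.exp, 0.0, 1.0, 16)      # step h/2
T_ex = (4.0 * T_h2 - T_h) / 3.0               # one Richardson extrapolation
```

The combined value T_ex cancels the a₁h² term, leaving an O(h⁴) error, exactly the mechanism exploited for the CD scheme below.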


Assume that U_0 = (y_0, z_0)ᵀ is given as an initial value of the CD scheme, and let U_1 = (y_1, z_1)ᵀ and Ū_1 = (ȳ_1, z̄_1)ᵀ be the values after Δt with step size Δt and Δt/2, respectively; that is,

U_1 = Q_Δt U_0 ;   Ū_1 = Q²_{Δt/2} U_0   (9)

where

Q²_{Δt/2} = [ I − (Δt²/2)A + (Δt⁴/32)A²                 Δt (I − (Δt²/8)A)
              −Δt A (I − (3Δt²/16)A + (Δt⁴/128)A²)      I − (Δt²/2)A + (Δt⁴/32)A² ]

Taking the relations z_0 = y'(0) and y'' = −A y into account, we have from equation (9)

y_1 = y_0 + Δt y'_0 + (Δt²/2) y''_0 ,
ȳ_1 = y_0 + Δt y'_0 + (Δt²/2) y''_0 + (Δt³/8) y'''_0 + (Δt⁴/32) y⁗_0 ,

where y^(k)_0 = d^k y/dt^k (0). Therefore

y_1 = y(Δt) − (Δt³/6) y'''_0 − (Δt⁴/24) y⁗_0 + O(Δt⁵) ,
ȳ_1 = y(Δt) − (Δt³/24) y'''_0 − (Δt⁴/96) y⁗_0 + O(Δt⁵) .   (10)

Similarly, we have

z_1 = z_0 + Δt y''_0 + (Δt²/2) y'''_0 + (Δt³/4) y⁗_0 ,
z̄_1 = z_0 + Δt y''_0 + (Δt²/2) y'''_0 + (3Δt³/16) y⁗_0 + (Δt⁴/32) y⁽⁵⁾_0 + (Δt⁵/128) y⁽⁶⁾_0 .

Therefore

z_1 = y'(Δt) + (Δt³/12) y⁗_0 − (Δt⁴/24) y⁽⁵⁾_0 + O(Δt⁵) ,
z̄_1 = y'(Δt) + (Δt³/48) y⁗_0 − (Δt⁴/96) y⁽⁵⁾_0 + O(Δt⁵) .   (11)

If we set Û_1 = (4/3) Ū_1 − (1/3) U_1, then by (10) and (11) the O(Δt³) and O(Δt⁴) terms vanish, and we have

Û_1 = ( y(Δt), y'(Δt) )ᵀ + O(Δt⁵).

The vector Û_1 is the next approximate value in our extrapolation. The integration scheme consisting of the process

U_1 = Q_Δt U_0 ;   Ū_1 = Q²_{Δt/2} U_0 ;   Û_1 = (4 Ū_1 − U_1) / 3

is called the extrapolated central difference (ECD) scheme in this paper.

Finite difference expression of the extrapolated central difference scheme. The vectors y_i determined by the above process do not satisfy the central difference equation (4). In this section we seek a difference equation which governs the extrapolated solution. Let U_{i+1} = (y_{i+1}, z_{i+1})ᵀ be the extrapolated value determined from the starting value U_i = (y_i, z_i)ᵀ:

U'_{i+1} = Q_Δt U_i ;   Ū_{i+1} = Q²_{Δt/2} U_i ;   U_{i+1} = (4 Ū_{i+1} − U'_{i+1}) / 3 .


Each component is determined as follows:

(y_{i+1}, z_{i+1})ᵀ = [ I − (Δt²/2)A + (Δt⁴/24)A²                Δt (I − (Δt²/6)A)
                        −Δt A (I − (Δt²/6)A + (Δt⁴/96)A²)        I − (Δt²/2)A + (Δt⁴/24)A² ] (y_i, z_i)ᵀ

(12)

The extrapolated central difference scheme is expressed in the following form.

Theorem 2: The vectors y_i determined by (12) satisfy the following difference equation:

(y_{i+1} − 2 y_i + y_{i−1}) / Δt² + A (I − (Δt²/12) A) y_i + (Δt⁴ / (12·24)) A³ y_{i−1} = 0   (13)

Since A = M⁻¹K, we have

M (y_{i+1} − 2 y_i + y_{i−1}) / Δt² + K (I − (Δt²/12) M⁻¹K) y_i + (Δt⁴ / (12·24)) K (M⁻¹K)² y_{i−1} = 0

(14)

This is the difference expression of the extrapolated central difference scheme. Note that a small decrease of stiffness and a very small damping are introduced by the extrapolation; see also (15). Equation (14) is called the extrapolated central difference equation.

Modified Extrapolation of the Central Difference Scheme. The ECD method develops as follows. Let U_0 = (y_0, z_0)ᵀ and U_1 = (y_1, z_1)ᵀ be the initial vector and the next vector to be obtained in one step of the extrapolation, respectively. In this extrapolation process three intermediate vectors V_0, V_1 and V_2 are introduced as follows:

V_0 = (p_0, q_0)ᵀ :  p_0 = y_0 + Δt z_0 − (Δt²/2) A y_0 ,   q_0 = z_0 − (Δt/2)(A y_0 + A p_0)

V_1 = (p_1, q_1)ᵀ :  p_1 = y_0 + (Δt/2) z_0 − (Δt²/8) A y_0 ,   q_1 = z_0 − (Δt/4)(A y_0 + A p_1)

V_2 = (p_2, q_2)ᵀ :  p_2 = p_1 + (Δt/2) q_1 − (Δt²/8) A p_1 ,   q_2 = q_1 − (Δt/4)(A p_1 + A p_2)

Then U_1 is determined by

U_1 = (y_1, z_1)ᵀ = ( (4 p_2 − p_0) / 3 ,  (4 q_2 − q_0) / 3 )ᵀ
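Assuming the reconstruction of (15) above, one MECD step amounts to one full CD step, two half CD steps, and the Richardson combination (4V₂ − V₀)/3. The scalar sketch below compares its one-step error with a plain CD step on y'' = −y.

```python
import math

def cd_step(y, z, A, dt):
    # one-step form of the central difference scheme for y'' = -A y (scalar A)
    p = y + dt * z - 0.5 * dt * dt * A * y
    q = z - 0.5 * dt * (A * y + A * p)
    return p, q

def mecd_step(y, z, A, dt):
    # Modified extrapolated CD step as reconstructed from (15):
    # one full step, two half steps, then the Richardson combination.
    p0, q0 = cd_step(y, z, A, dt)          # V_0: step dt
    p1, q1 = cd_step(y, z, A, dt / 2)      # V_1: first half step
    p2, q2 = cd_step(p1, q1, A, dt / 2)    # V_2: second half step
    return (4 * p2 - p0) / 3, (4 * q2 - q0) / 3

dt = 0.1
y_cd, _ = cd_step(1.0, 0.0, 1.0, dt)       # exact value is cos(0.1)
y_me, _ = mecd_step(1.0, 0.0, 1.0, dt)
err_cd = abs(y_cd - math.cos(dt))
err_me = abs(y_me - math.cos(dt))
```

Counting the products with A in `mecd_step` gives exactly the four vectors A y_0, A p_0, A p_1, A p_2 noted below, while the one-step error drops from O(Δt³) to O(Δt⁵).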

(15)

The vectors to be calculated in this step are A y_0, A p_0, A p_1 and A p_2; here we note that the vectors p_0, p_1 and p_2

Stability of the extrapolated central difference scheme. It is well known that the central difference scheme (4) is stable only under a condition Δt ≤ C₁ h for a certain constant C₁, which depends on the constant C₀ in the inverse inequality and on the elastic constants [3]. Is this condition relaxed for the extrapolated scheme? To discuss this problem we introduce a matrix D defined by

D = I − (Δt²/12) M⁻¹K

and write (14) as follows:

M (y_{i+1} − 2 y_i + y_{i−1}) / Δt² + K D y_i + (Δt²/24) K (I − D) M⁻¹K y_{i−1} = 0   (16)

We first derive an energy inequality for this equation. The inner product of both sides of (16) with y_{i+1} − y_{i−1} yields


(‖y_{i+1} − y_i‖²_M − ‖y_i − y_{i−1}‖²_M) / Δt² + (K D y_i, y_{i+1} − y_{i−1}) + (Δt²/24)(K (I − D) M⁻¹K y_{i−1}, y_{i+1} − y_{i−1}) = 0.

Summing this equation for i = 1, 2, 3, ..., n, we obtain an energy identity relating the discrete kinetic energy ‖(y_{n+1} − y_n)/Δt‖²_M and the potential terms generated by K D and (Δt²/24) K (I − D) M⁻¹K at step n to the same quantities formed from the initial data (y_1, y_0).   (17)

To estimate the third term on the left-hand side we set e_i = y_{i+1} − y_i and introduce the bilinear form ⟨e_i, e_j⟩ = (K D e_j, e_i) with ⟨e_i⟩² = ⟨e_i, e_i⟩. Rewriting the sum of the cross terms ⟨e_i, e_{i−1}⟩ as a sum of squares plus boundary terms and substituting the result into (17) gives a second form of the energy identity.   (18)

Using the identity ab = (1/2) a² + (1/2) b² − (1/2)(a − b)², the above equation can be written so that all interior terms are squares; the boundary contributions involve only the pairs (y_1, y_0) and (y_{n+1}, y_n).

(19)

Therefore, the extrapolated central difference scheme is stable if the following conditions are satisfied.

Condition (1): the effective stiffness operator formed from K D and K (I − D) in (19) is positive definite.


Condition (2): (M z, z) − (Δt²/2)(K D z, z) is positive for any N-dimensional vector z.

Theorem 3: Let C₀ be the constant appearing in the inverse inequality (2), and set λ = C₀ Δt / h. Then the following holds:

1. The central difference scheme is stable in the sense of energy if λ ≤ 2.   (20)
2. Condition (20) is sufficient for Condition (1) and Condition (2).

An explicit condition on λ   (21) is sufficient for Condition (1), and a second condition   (22) is sufficient for Condition (2). As is seen from (22), the stability limit is relaxed by the extrapolation. To explain this fact, we rewrite (21) as a quartic inequality in λ; solving this inequality we obtain an upper bound for λ exceeding the central difference bound λ = 2.   (23)
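The λ ≤ 2 limit of Theorem 3 can be observed numerically: for the scalar equation y'' = −a y, the central difference recursion is y_{i+1} = (2 − λ²) y_i − y_{i−1} with λ = Δt √a, which stays bounded for λ ≤ 2 and grows geometrically beyond it. The margin values 1.9 and 2.1 below are illustrative.

```python
def cd_max_amplitude(lam, steps=200):
    """Run the scalar central difference recursion
    y_{i+1} = (2 - lam**2) y_i - y_{i-1}  (lam = dt * sqrt(a))
    from y_0 = y_1 = 1 and return the largest |y_i| reached."""
    y_prev, y = 1.0, 1.0
    peak = 1.0
    for _ in range(steps):
        y_prev, y = y, (2.0 - lam * lam) * y - y_prev
        peak = max(peak, abs(y))
    return peak

stable_peak = cd_max_amplitude(1.9)    # just under the limit: bounded oscillation
unstable_peak = cd_max_amplitude(2.1)  # just over the limit: geometric blow-up
```

For λ ≤ 2 the characteristic roots lie on the unit circle, so the amplitude stays of order one; for λ > 2 one root exceeds 1 in modulus and the solution grows without bound.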

Stability of the Modified Extrapolated Central Difference Scheme. As with (22), the stability limit of the MECD scheme is obtained from (15) by rewriting the analogous quartic inequality in λ; solving this inequality we obtain the corresponding bound

(24)

Therefore, roughly speaking, λ = 2√2 is the maximum value of λ for the MECD scheme, although the above analysis gives only a sufficient condition for stability. Comparing this result with (20), we can expect that the time mesh Δt can be taken about √2 times larger than that of the CD scheme.

References
1. Adams, R. A., "Sobolev Spaces", Academic Press, 1975.
2. Ciarlet, P. G., "The Finite Element Method for Elliptic Problems", North-Holland Publishing Company, 1978.
3. Fujii, "Finite Element Galerkin Method for Mixed Initial-Boundary Value Problems in Elasticity Theory", Center for Numerical Analysis, The University of Texas at Austin, 1971.
4. Hsu, T. R., "The Finite Element Method in Thermomechanics", Allen and Unwin, Inc., 1986.
5. Joyce, D. C., "Survey of Extrapolation Processes in Numerical Analysis", SIAM Review, Vol. 13, No. 4 (1971), 435-490.
6. Supriyono and Miyoshi, T., "A Modified Extrapolation Method for Large Systems of Second Order Central Difference Equations", Japan Journal of Industrial and Applied Mathematics, Vol. 12, No. 3, October 1995, Kinokuniya, Tokyo, 439-455.


Real Time Performance of Fuzzy PI+Fuzzy PD Self Tuning Regulator In Cascade Control

Mahardhika Pratama(1), Syamsul Rajab(2), Imam Arifin(3), Moch. Rameli(4)
(1,2,3,4) Electrical Engineering Department, Faculty of Engineering, Sepuluh Nopember Institute of Technology, Surabaya, Indonesia
Email: [email protected]

Abstract. This paper discusses the real-time performance evaluation of a fuzzy PI + fuzzy PD self-tuning regulator for controlling the flow and pressure of pressure rig 38-714 in a cascade structure. The results were compared with a conventional PI controller. We used the fuzzy PI + fuzzy PD self-tuning regulator as the master controller and a P controller as the slave controller. Experimental evaluations were conducted with several load configurations, including minimal, nominal, and maximal load. From the overall results it can be concluded that the improved controller gave much faster and more robust responses than the conventional PI controller.

Keywords: cascade control, fuzzy PI+PD, self tuning regulator.

I. Introduction

Cascade control is one of the most popular complex control structures found in the process industries, implemented in order to improve the disturbance rejection properties of the controlled system [1]. The controllability of the outer loop is improved because the inner controller speeds up the response; nonlinearities of the process are handled by the inner loop and removed from the more important outer loop.

Using PID controllers is very difficult in complex systems; especially for nonlinear systems, a PID controller cannot be applied directly. Fuzzy set theory, introduced by Zadeh, eliminates these problems. With a very simple approach it can overcome the difficulties of complex processes, and it can be used even without knowing a dynamical model of the system.

The heuristic approach, well suited to human intuition, improves on the Boolean on-off principle. The problem with fuzzy logic is determining the fuzzy parameters, which are often based only on the designer's intuition. Therefore, a combination of PID and fuzzy control can be proposed to shape the resulting response.

Fuzzy PID controllers have appeared in recent research. The weakness of this controller is that it needs more inputs (three), which complicates determining the rule base and membership functions. To deal with that problem, fuzzy PI + fuzzy PD can be used: the complexity is decreased while the characteristics of a fuzzy PID controller are retained.

Reference [2] introduces a kind of fuzzy controller with an on-line self-regulation function. The controller is capable of automatically adjusting its parameters according to the controlled system error, and has strong adaptability. While the main controller solves the control task, the other controller provides on-line self-regulation.

This paper is organized as follows. Section 2 discusses cascade control and the fuzzy PI + fuzzy PD self-tuning regulator. Section 3 discusses the system design, including the controller design and the experimental setup. In Section 4 we discuss the experimental results. At the end of the paper we give some conclusions for future research.

II.A Cascade control

Cascade control is one of the complex control structures that are often used. A cascade structure brings many advantages; one of them is that a disturbance affecting the inner loop can be detected much faster than in a normal structure, because the inner controller compensates for it before it can influence the outer loop. Figure 1 shows the cascade control architecture.

Figure 1. Block diagram of cascade control.

C1 and C2 are the outer and inner controllers; P1 and P2 are the outer and inner processes. Disturbances can enter the inner or the outer loop. The most important thing to remember in a cascade structure is that the


inner-loop process must be much faster than the outer loop.

II.B Fuzzy PI + Fuzzy PD

Fuzzy PI + fuzzy PD is essentially the same as fuzzy PID; the main reason for avoiding fuzzy PID is that it needs more inputs than fuzzy PI + fuzzy PD, and therefore more complexity in determining the rule base and membership functions [3].

A classical PI controller is described as:

u_pi(t) = K_c ( e(t) + (1/T_i) ∫ e(t) dt )   (1)

Differentiating equation (1),

du_pi(t)/dt = K_c ( de(t)/dt + (1/T_i) e(t) )

and discretizing with sampling period T_s,

(u_pi(k) − u_pi(k−1)) / T_s = K_c (e(k) − e(k−1)) / T_s + (K_c / T_i) e(k)

so that, with Δu_pi(k) = (u_pi(k) − u_pi(k−1)) / T_s,

u_pi(k) = K_upi Δu_pi(k) / (1 − z⁻¹)   (2)

Equation (2) is the mathematical model of a fuzzy PI controller. A classical PD controller is given as follows:

u_pd(t) = K_c ( e(t) + T_d de(t)/dt )   (3)

In discrete form, equation (3) becomes

u_pd(k) = K_c ( e(k) + T_d (e(k) − e(k−1)) / T_s )

that is,

u_pd(k) = K_c e(k) + K_cd Δe(k) ,  with K_cd = K_c T_d and Δe(k) = (e(k) − e(k−1)) / T_s   (4)

Equation (4) describes a fuzzy PD; fuzzy PI + fuzzy PD is the combination of fuzzy PI and fuzzy PD:

u(k) = u_pi(k) + u_pd(k)   (5)
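In crisp (non-fuzzy) form, equations (2), (4) and (5) give the controller skeleton sketched below; the fuzzy version replaces the two linear maps from (e, Δe) to Δu_pi and u_pd with rule-base inferences. The gains are illustrative, not the tuned values of Section III.

```python
class PiPdController:
    """Discrete incremental PI + PD controller, the crisp skeleton of the
    fuzzy PI + fuzzy PD structure: u(k) = u_pi(k) + u_pd(k), eq. (5)."""

    def __init__(self, kc, ti, td, ts):
        self.kc, self.ti, self.td, self.ts = kc, ti, td, ts
        self.e_prev = 0.0
        self.u_pi = 0.0

    def update(self, e):
        de = (e - self.e_prev) / self.ts
        # PI in velocity form, eq. (2):
        # u_pi(k) = u_pi(k-1) + Kc*(e(k) - e(k-1)) + (Kc*Ts/Ti)*e(k)
        self.u_pi += self.kc * (e - self.e_prev) + self.kc * self.ts / self.ti * e
        u_pd = self.kc * e + self.kc * self.td * de   # PD term, eq. (4)
        self.e_prev = e
        return self.u_pi + u_pd

ctrl = PiPdController(kc=0.5, ti=2.0, td=0.1, ts=0.1)
u1 = ctrl.update(1.0)   # first sample: derivative kick plus integral step
u2 = ctrl.update(1.0)   # constant error: PI keeps integrating, PD term settles
```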

Table 1 shows the rule base of the fuzzy PI + fuzzy PD controller. We used triangular membership functions, seven for each input and output.

Table 1. Rule base of Fuzzy PI + Fuzzy PD
 e \ de | NB  NM  NS  Z   PS  PM  PB
 NB     | NB  NB  NB  NB  NM  NS  Z
 NM     | NB  NB  NB  NM  NS  Z   PS
 NS     | NB  NB  NM  NS  Z   PS  PM
 Z      | NB  NM  NS  Z   PS  PM  PB
 PS     | NM  NS  Z   PS  PM  PB  PB
 PM     | NS  Z   PS  PM  PB  PB  PB
 PB     | Z   PS  PM  PB  PB  PB  PB

II.C Self Tuning Regulator

The major weakness of a fuzzy controller lies in the steady-state error: it cannot eliminate the static error and the static error variation rate. Because of that, we overcome this problem based on the actual output of the plant.

We determine the rules of the self-tuning regulator from the plant output, as described by the graphic in Figure 2.

Figure 2. Graphic of error.

At point a, the error is large and negative and the response rises toward steady state with de/dt > 0; therefore we must increase Δu. At point b, the error is negative but smaller than at point a, with de/dt > 0, so we must decrease Δu to avoid excessive overshoot. At point c, the error is large and positive with de/dt < 0; in order to reach steady state, u must be increased. At point d, the error is small and positive with de/dt < 0; to approach the steady-state value, u should be decreased. The full rule base of the self-tuning regulator is given in Table 2.
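The four cases a-d amount to a sign rule on e and de/dt: while the response moves toward the setpoint, boost the correction when the error is still large and cut it when the error is small. A crisp sketch follows; the threshold and gain values are invented for illustration, and the paper implements this logic with the fuzzy rules of Table 2.

```python
def regulator_gain(e, de, big=0.5):
    """Crisp sketch of the self-tuning rule (points a-d): when the output is
    heading toward the setpoint (e*de < 0), amplify the control change while
    the error is still large (points a, c) and attenuate it once the error is
    small (points b, d) to avoid overshoot. The 'big' threshold is made up."""
    if e * de < 0:                   # heading toward the setpoint
        return 1.5 if abs(e) > big else 0.5
    return 1.0                       # otherwise leave the control change as is

g_a = regulator_gain(-1.0, 0.8)     # point a: large negative error, de/dt > 0
g_b = regulator_gain(-0.2, 0.8)     # point b: small negative error, de/dt > 0
```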

We use five membership functions and triangular shapes. Figure 3 is exhibiting fuzzy PI+ fuzzy PD self tuning regulator structure.

Table 2. Rule base of self tuning regulator.


 e \ de | NB  NS  Z   PS  PB
 NB     | PB  PB  PB  Z   Z
 NS     | PB  PB  PS  Z   NS
 Z      | PB  PS  Z   NS  NB
 PS     | PS  Z   NS  NB  NB
 PB     | Z   NS  NB  NB  NB


III. System Design

In the experiment, we built a cascade control configuration. In the inner loop we controlled flow; in the outer loop we controlled pressure. Figure 4 shows the pressure rig 38-714.

Figure 3. Fuzzy PI + fuzzy PD self-tuning regulator structure.

Table 3. Controller parameters.
             Kp    Ki    Kd
 Inner Loop  4.11  0.42  10.15
 Outer Loop  0.25  -     -

To design the fuzzy PI + fuzzy PD self-tuning regulator in the outer loop and the P controller in the inner loop, we needed to identify the plant parameters with an open-loop configuration. After that, we used rule 1 of the Ziegler-Nichols law. Table 3 shows the controller parameters for the inner and outer loops.
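Rule 1 of the Ziegler-Nichols law referred to above is the open-loop (process reaction curve) method: from the apparent dead time L and time constant T of the plant step response, the PID parameters are Kp = 1.2 T / L (assuming unit process gain), Ti = 2L, Td = 0.5L. The L and T values below are illustrative, not the identified rig parameters.

```python
def zn_open_loop_pid(L, T):
    """Ziegler-Nichols first (open-loop) tuning rule for a PID controller,
    from dead time L and time constant T of the process reaction curve,
    assuming unit process gain."""
    kp = 1.2 * T / L
    ti = 2.0 * L
    td = 0.5 * L
    return kp, ti, td

kp, ti, td = zn_open_loop_pid(L=2.0, T=10.0)   # hypothetical step-response data
```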

Figure 4. Pressure rig 38-714.

IV. Experimental Results

Experiments were conducted to evaluate the fuzzy PI + fuzzy PD self-tuning regulator. For comparison, we used a pure PI controller. A PCI-1711 board from Advantech was used for analog-to-digital and digital-to-analog conversion. The software used in these experiments was LabVIEW 7.1. Figure 6 shows the experimental setup.

In the inner loop, the signal from the P controller is converted from digital to analog to control the flow, and the flow output detected by the flow sensor is converted from analog to digital. In the outer loop, the control signal generated by the fuzzy PI+PD self-tuning regulator is converted from digital to analog to control the pressure, and the pressure output detected by the pressure sensor is converted from analog to digital.

The PI parameters were obtained with an analytical method (Kp = 0.56, Ti = 32.5). The input in these experiments was 1.1 volts. Figures 7 and 8 show the outer- and inner-loop responses of the plant with the fuzzy PI+PD self-tuning regulator as the outer-loop controller.

Figure 6. Experimental setup.

From Figures 7 and 8, we can conclude that the proposed controller improved the plant responses in both the outer and inner loops. The response with the proposed controller was much faster than with the pure PI controller in reaching the steady-state output; the response with the PI controller did not even attain the steady-state output within 1000

Proceeding of Conf. on Industrial and Appl. Math., Indonesia 2010

iteration. Table 4 gives the output parameters of the outer loop.

Figure 7. Outer loop response.

Figure 8. Inner loop response.

Table 4. Output parameters of the outer loop.

                      Fuzzy PI+PD self-tuning regulator   PI
Steady state output   1.1                                 -
Settling time (5%)    20 iterations                       -
Overshoot             10%                                 -

Table 5. Output parameters of the inner loop.

                      Fuzzy PI+PD self-tuning regulator   PI
Steady state output   1.548                               -
Settling time (5%)    57 iterations                       -
Overshoot             2%                                  -

Conclusions

From these experiments, the fuzzy PI + fuzzy PD self-tuning regulator improved the performance of pressure rig 38-714: it made the response much faster than the pure PI controller did. Using fuzzy PI + fuzzy PD also reduces the complexity of a full fuzzy PID in deriving the rules and membership functions. Regarding control structure, cascade control rejected disturbances faster than the single-loop structure.

References

[1] Arrieta, O., Vilanova, R., Balaguer, P. (2008). "Procedure for cascade control systems design: Choice of suitable PID tunings". Int. J. of Computers, Communications & Control, Vol. III, No. 3, pp. 235-248.
[2] Qian, Z., Xi-jin, G., Zhen, W., Xi-lan, T. (2006). "Application of improved fuzzy controller in networked control systems". J. China Univ. of Mining & Tech. (English Edition), Vol. 16, No. 4.
[3] Kumar, V., Rana, R. N. S., Gupta, V. (2008). "Real time performance evaluation of a fuzzy PI + fuzzy PD controller for liquid-level process". International Journal of Intelligent Control and Systems, Vol. 13, No. 2, pp. 89-96.


APPROXIMATION OF RUIN PROBABILITY FOR INVESTMENT WITH INDEPENDENT AND IDENTICALLY DISTRIBUTED RANDOM NET RETURNS AND MULTIVARIATE NORMAL MEAN-VARIANCE MIXTURE DISTRIBUTED FORCES OF INTEREST IN FIXED PERIOD

Ryan Kurniawan(1) and Ukur Arianto Sembiring(2)
(1) Department of Mathematics, Universitas Pelita Harapan, Karawaci, Banten, Indonesia (Email: [email protected])
(2) Department of Mathematics, Universitas Pelita Harapan, Karawaci, Banten, Indonesia (Email: [email protected])

Abstract

An investor can incur negative investment net returns over several periods. The negative returns force the investor to hold a reserve fund for liability payments. The reserve fund needed to anticipate this can be determined from the tail probability of the sum of discounted random investment net returns under random forces of interest. The problem is taken from [1]. In [1], the forces of interest are Normal Inverse Gaussian (NIG) random variables, NIG being a member of the multivariate normal mean-variance mixture (MNMVM) distribution class, while the investment yields are independent Pareto random variables, whose distribution tails are regularly varying. In this research, instead of the Pareto distribution, we use other distributions that also have regularly varying distribution tails (such as the Burr distribution) to model the investment net returns, and the MNMVM distribution class to model the forces of interest.

Key Words: Investment yield; force of interest; normal mean-variance mixture distribution class; slowly varying function; ruin probability.

I. INTRODUCTION

This paper is motivated by [1], which focuses on the approximation of the tail probability of a sum of weighted random variables of the form

$$\Pr\Big(\sum_{j=1}^{n} \theta_j X_j > x\Big), \quad (1)$$

where the weights $\theta_j$ are also random variables. The probability in Eq. 1 is the ruin probability of a company during $n$ periods from now, given the initial capital $x$, where the random variables $X_j$ represent the investment net returns and $\theta_j$ represent the discount factors. In [1], the approximation formula, hereafter called the discount formula, can be used under certain conditions, and its use was discussed for Pareto-distributed $X_j$ and lognormal-inverse Gaussian (LNIG) distributed $\theta_j$. In this paper, the authors discuss the use of the formula for Burr- and inverse-Burr-distributed $X_j$, and for $\theta_j$ belonging to the log multivariate normal mean-variance mixture (LMNMVM) distribution family. The authors chose these distributions because of their large roles in financial studies.

Certain violations of the conditions required for the discount formula will also be discussed. In this case, the authors chose the log-multivariate t distribution for $\theta_j$. It will be shown that under this distribution, the discount formula cannot be used.

II. THEORY AND DEFINITIONS

II.1. Discount Formula

As explained above, the main focus of this paper is the approximation of the probability in Eq. 1. The discount formula was derived in [1] under certain conditions on the random variables $X_j$ and $\theta_j$. The random variables $X_j$ must be i.i.d., and their distribution tail $\bar F(x) = \Pr(X > x)$ must be regularly varying.

Definition 2.1. A function $f$ is regularly varying iff there exists some $\alpha > 0$ such that

$$f(x) = x^{-\alpha} L(x), \quad (2)$$

where $L$ is a function such that for every $t > 0$,

$$\lim_{x \to \infty} \frac{L(tx)}{L(x)} = 1. \quad (3)$$
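Eq. 3 can be checked numerically for a concrete tail. As a sketch, take the Burr survival function $\bar F(x) = (1+x^c)^{-k}$ (used later in Section III); regular variation with index $-ck$ means the ratio $\bar F(tx)/\bar F(x)$ approaches $t^{-ck}$ as $x$ grows (parameter values below are illustrative):

```python
# Numerical check (sketch) that the Burr survival function
# Fbar(x) = (1 + x**c)**(-k) is regularly varying with index -c*k:
# the ratio Fbar(t*x)/Fbar(x) should approach t**(-c*k) as x grows.
def burr_sf(x, c, k):
    return (1.0 + x**c) ** (-k)

c, k, t = 2.0, 1.5, 3.0
for x in (1e2, 1e4, 1e6):
    ratio = burr_sf(t * x, c, k) / burr_sf(x, c, k)
    print(x, ratio, t ** (-c * k))
```

For these parameters the limiting value is $3^{-3} = 1/27$, and the printed ratios approach it as x increases.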


If $f$ is regularly varying as in Eq. 2 for some $\alpha > 0$, denote $f \in R_{-\alpha}$.

Below is the theorem from [1] which contains the discount formula.

Theorem 2.1. Let $X_1, \dots, X_n$ be i.i.d. random variables with distribution tail $\bar F \in R_{-\alpha}$ for some $\alpha > 0$, and let $\theta_1, \dots, \theta_n$ be random variables, not necessarily i.i.d. Then

$$\Pr\Big(\sum_{j=1}^{n} \theta_j X_j > x\Big) \sim \bar F(x) \sum_{j=1}^{n} E\big[\theta_j^{\alpha}\big] \quad (4)$$

if there exists a $\delta > 0$ such that $E[\theta_j^{\alpha+\delta}] < \infty$ for every $j$. Here "~" denotes an asymptotic relation for $x \to \infty$, meaning that the approximation becomes more accurate as $x$ tends to infinity.

II.2. MNMVM Distribution Class

Definition 2.2. A random vector $\mathbf X$ is MNMVM-distributed iff

$$\mathbf X = m(W) + \sqrt{W}\, A \mathbf Z, \quad (5)$$

where

1. $\mathbf Z \sim N_k(\mathbf 0, I_k)$ is a multivariate normal random variable and $I_k$ is an identity covariance matrix, both of which are elements of $\mathbb R^{k}$;
2. $W$ is a scalar nonnegative random variable and is independent of $\mathbf Z$;
3. $A$ is a $d \times k$ matrix, and $\Sigma = AA^{\top}$;
4. $m : [0, \infty) \to \mathbb R^{d}$ is a measurable function.

From the above definition, the MNMVM distribution class has the following properties.

Fact 2.1. $\mathbf X \mid W = w \sim N_d\big(m(w),\, w\,AA^{\top}\big)$.

Fact 2.2. If $\mathbf X$ is MNMVM-distributed, then $B\mathbf X + \mathbf b$ is also MNMVM-distributed, where $B$ and $\mathbf b$ are a parameter matrix and vector, respectively.

Below are the definitions of the two members of the MNMVM distribution class which will be discussed.

Definition 2.3. The random vector $\mathbf X$ in Definition 2.2 is multivariate t-distributed iff $m(W) = \mu$, where $\mu$ is a parameter vector in $\mathbb R^{d}$, and $W \sim \mathrm{IG}(\nu/2, \nu/2)$ is an inverse Gamma distributed random variable with density

$$f_W(w) = \frac{(\nu/2)^{\nu/2}}{\Gamma(\nu/2)}\, w^{-\nu/2-1}\, e^{-\nu/(2w)}, \qquad w > 0. \quad (6)$$

Definition 2.4. The random vector $\mathbf X$ in Definition 2.2 is a generalized hyperbolic (GH) random variable iff $m(W) = \mu + W\gamma$, where $\mu$ and $\gamma$ are parameter vectors in $\mathbb R^{d}$, and $W \sim \mathrm{GIG}(\lambda, \chi, \psi)$ is a generalized inverse Gaussian (GIG) distributed random variable with density

$$f_W(w) = \frac{(\psi/\chi)^{\lambda/2}}{2K_{\lambda}\big(\sqrt{\chi\psi}\big)}\, w^{\lambda-1} \exp\Big[-\frac{1}{2}\Big(\frac{\chi}{w} + \psi w\Big)\Big], \qquad w > 0, \quad (7)$$

where $K_{\lambda}$ is a modified Bessel function of the third kind with index $\lambda$.

III. MAIN RESULTS

III.1. Theoretical Results

For the investment problem, $\theta_j$ in Eq. 1 is of the form

$$\theta_j = \exp\Big(-\sum_{k=1}^{j} R_k\Big) \quad (8)$$

for every $j = 1, \dots, n$, where $R_k$ is the random force of interest in period $k$. Now, let the vector of forces of interest $\mathbf R = (R_1, \dots, R_n)^{\top}$ be MNMVM-distributed, $\mathbf R = m(W) + \sqrt{W} A \mathbf Z$, where $W$ is a nonnegative scalar random variable. Let $X_1, \dots, X_n$ be i.i.d. with $\bar F \in R_{-\alpha}$ for some $\alpha > 0$. Then, by Theorem 2.1 and Definition 2.2,

$$\Pr\Big(\sum_{j=1}^{n} \theta_j X_j > x\Big) \sim \bar F(x) \sum_{j=1}^{n} E\Big[\exp\Big(-\alpha \sum_{k=1}^{j} R_k\Big)\Big] \quad (9)$$

under the assumption that the conditions in Theorem 2.1 are satisfied. Moreover, by Fact 2.2, the partial sum $S_j = \sum_{k=1}^{j} R_k$ is also MNMVM-distributed,


with parameters obtained from Fact 2.2. Hence, by Fact 2.1, conditioning on $W$,

$$E\big[\theta_j^{\alpha}\big] = E\big[e^{-\alpha S_j}\big] = E\Big[E\big[e^{-\alpha S_j} \mid W\big]\Big] \quad (10)$$

$$= E\Big[\exp\Big(-\alpha\, m_{(j)}(W) + \tfrac{1}{2}\alpha^{2} W \sigma_{(j)}^{2}\Big)\Big], \quad (11)$$

where $m_{(j)}(W)$ and $W\sigma_{(j)}^{2}$ are the conditional mean and variance of $S_j$ given $W$ (Fact 2.1). Through Eqs. 9-11, the following proposition is obtained.

Proposition 3.1. Let $X_1, \dots, X_n$ be i.i.d. random variables with $\bar F \in R_{-\alpha}$ for some $\alpha > 0$, and let the forces of interest be MNMVM random variables, with $\theta_j$ of the form given by Eq. 8 for every $j$. If there exists a $\delta > 0$ such that $E[\theta_j^{\alpha+\delta}]$ exists for every $j$, then

$$\Pr\Big(\sum_{j=1}^{n} \theta_j X_j > x\Big) \sim \bar F(x) \sum_{j=1}^{n} E\big[\theta_j^{\alpha}\big]. \quad (12)$$

In the case where the forces of interest are GH-distributed, so that $m_{(j)}(W) = \mu_{(j)} + W\gamma_{(j)}$ and $W \sim \mathrm{GIG}(\lambda, \chi, \psi)$, the moment generating function of $W$ is

$$M_W(u) = \Big(\frac{\psi}{\psi - 2u}\Big)^{\lambda/2} \frac{K_{\lambda}\big(\sqrt{\chi(\psi - 2u)}\big)}{K_{\lambda}\big(\sqrt{\chi\psi}\big)}, \qquad u < \frac{\psi}{2}. \quad (13)$$

Because $M_W(u)$ exists for every $u < \psi/2$, the condition of Proposition 3.1 is satisfied for the parameter values considered, so that

$$\Pr\Big(\sum_{j=1}^{n} \theta_j X_j > x\Big) \sim \bar F(x) \sum_{j=1}^{n} E\big[\theta_j^{\alpha}\big], \quad (14)$$

where, by Eq. 11,

$$E\big[\theta_j^{\alpha}\big] = e^{-\alpha\,\mu_{(j)}}\, M_W\Big(\tfrac{1}{2}\alpha^{2}\sigma_{(j)}^{2} - \alpha\,\gamma_{(j)}\Big). \quad (15)$$

Now, a violation of the condition of Proposition 3.1 will be discussed. In this case, the forces of interest are multivariate t-distributed. Note that for every $u > 0$,

$$E\big[e^{uW}\big] = \infty, \quad (16)$$

where $W \sim \mathrm{IG}(\nu/2, \nu/2)$: the moment generating function of $W$ does not exist. Hence, the discount formula cannot be used in this case.

III.2. Numerical Results

To see how well the approximation using the discount formula works, the authors simulate the probability in Eq. 1 by generating 5,000,000 data points, each of which is the sum in Eq. 1, where $\theta_j$ is of the form given by Eq. 8 and the forces of interest are GH-distributed with the parameter values fixed for the experiment.

(17)

The numerical results are obtained in two cases. In the first case, the $X_j$ are Burr-distributed (writing $c$ and $k$ for the Burr parameters) with cdf

$$F(x) = 1 - \big(1 + x^{c}\big)^{-k}, \qquad x > 0, \quad (18)$$

whose tail is a member of $R_{-ck}$. In the second case, the $X_j$ are inverse Burr-distributed with cdf

$$F(x) = \big(1 + x^{-c}\big)^{-k}, \qquad x > 0, \quad (19)$$

whose tail is a member of $R_{-c}$. The numerical results are absolute relative errors of the discount formula approximations with respect to the simulation results,

$$\text{error} = \left|\frac{\text{approximation} - \text{simulation}}{\text{simulation}}\right|.$$
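The simulation in this section can be sketched on a much smaller scale. The sketch below substitutes i.i.d. normal forces of interest for the GH case (so that $E[\theta_j^{\alpha}]$ has a simple closed form) and uses Burr-distributed net returns; all parameter values are illustrative, not those of the paper's experiment:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 5, 200_000          # periods, Monte Carlo sample size
c, k = 2.0, 1.5            # Burr parameters, tail index alpha = c*k
alpha = c * k

# Burr(c, k) samples by inversion: F(x) = 1 - (1 + x**c)**(-k)
u = rng.random((m, n))
X = ((1.0 - u) ** (-1.0 / k) - 1.0) ** (1.0 / c)

# i.i.d. normal forces of interest (a stand-in for the GH case)
mu, sigma = 0.04, 0.02
R = rng.normal(mu, sigma, size=(m, n))
theta = np.exp(-np.cumsum(R, axis=1))

x = 50.0
simulated = np.mean((theta * X).sum(axis=1) > x)

# Discount formula: Fbar(x) * sum_j E[theta_j**alpha], with
# E[theta_j**alpha] = exp(-alpha*j*mu + alpha**2*j*sigma**2/2) for normal R
j = np.arange(1, n + 1)
Etheta_a = np.exp(-alpha * j * mu + 0.5 * alpha**2 * j * sigma**2)
approx = (1.0 + x**c) ** (-k) * Etheta_a.sum()

print(simulated, approx)
```

With only 200,000 samples the empirical tail probability is noisy at this threshold; the paper's 5,000,000-sample runs reduce that noise.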


Figure 1. Absolute relative error plot for Burr-distributed net returns.

Figure 2. Absolute relative error plot for inverse Burr-distributed net returns.

It can be seen from Figs. 1-2 that the error decreases as $x$ tends to infinity. This is consistent with the asymptotic relation used in the discount formula. The error is also relatively small: from Fig. 1 the maximum error is only 0.18, while from Fig. 2 it is 0.035. These results show how well the formula works.

III.3. Regularly Varying Property of Burr and Inverse Burr Distribution Tails

It will be shown that the Burr distribution with cdf (18) has a tail in $R_{-ck}$, or equivalently, that the function

$$L(x) = x^{ck}\,\big(1 + x^{c}\big)^{-k} = \Big(\frac{x^{c}}{1 + x^{c}}\Big)^{k} \quad (20)$$

satisfies Eq. 3. Let $t$ be a positive real number; then

$$\lim_{x \to \infty} \frac{L(tx)}{L(x)} = \lim_{x \to \infty} \Big(\frac{(tx)^{c}\,(1 + x^{c})}{x^{c}\,(1 + (tx)^{c})}\Big)^{k} = 1. \quad (21)$$

Hence, the Burr distribution tail is a member of $R_{-ck}$. Now, it will be shown that the inverse Burr distribution with cdf (19) has a tail in $R_{-c}$, or equivalently, that the function

$$L(x) = x^{c}\,\Big(1 - \big(1 + x^{-c}\big)^{-k}\Big) \quad (22)$$

satisfies Eq. 3. Again, let $t$ be a positive real number; then

$$\frac{L(tx)}{L(x)} = t^{c}\, \frac{1 - \big(1 + (tx)^{-c}\big)^{-k}}{1 - \big(1 + x^{-c}\big)^{-k}}. \quad (23)$$

Note that

$$1 - \big(1 + x^{-c}\big)^{-k} \sim k\, x^{-c}, \qquad x \to \infty, \quad (24)$$

and similarly

$$1 - \big(1 + (tx)^{-c}\big)^{-k} \sim k\,(tx)^{-c}, \qquad x \to \infty. \quad (25)$$

Hence, through Eqs. 23-25, $L$ satisfies Eq. 3. Thus, the inverse Burr distribution tail is a member of $R_{-c}$.

IV. CONCLUSION

The discount formula approximation has been tested for GH-distributed forces of interest and Burr and inverse Burr distributed net returns. The resulting approximation, as observed in the simulation results, is quite good: it is consistent with the asymptotic property of the approximation, and the errors are small. However, not all members of the MNMVM distribution class can


be used as the distribution of the forces of interest when applying the formula. As has been seen, the multivariate t distribution violates the condition used in Proposition 3.1. Other distributions besides Burr and inverse Burr can also be used as the distribution of the net returns, as long as their distribution tails are regularly varying. However, the i.i.d. property of the net returns must also be taken into account, as it is one of the conditions required to use the formula.

REFERENCES

[1] Goovaerts, Marc J., et al. (2005). "The tail probability of discounted sums of Pareto-like losses in insurance," Scandinavian Actuarial Journal, Taylor and Francis, pp. 446-461.

[2] Kalemanova, Anna and Werner, Ralf (2006). "A short note on the efficient implementation of the Normal Inverse Gaussian distribution," Scandinavian Actuarial Journal.

[3] Klugman, Stuart A., Panjer, Harry H., and Willmot, Gordon E. (2008). Loss Models, New Jersey, John Wiley & Sons, Inc.

[4] Luciano, Elisa and Semeraro, Patrizia (2009). A Generalized Normal Mean-Variance Mixture for Return Processes in Finance, Collegio Carlo Alberto.

[5] McNeil, Alexander J., Frey, Rüdiger, and Embrechts, Paul (2005). Quantitative Risk Management, New Jersey, Princeton University Press.


Stochastic History Matching for Composite Reservoir

Sutawanir Darwis1, Agus Yodi Gunawan2, Sri Wahyuningsih1,3, Nurtiti Sunusi1,4, Aceng Komarudin Mutaqin1,5, Nina Fitriyani1

1Statistics Research Division, Faculty of Mathematics and Natural Sciences, Institut Teknologi Bandung, Indonesia, 2Industrial and Financial Mathematics Research Division, Faculty of Mathematics and Natural Sciences, Institut Teknologi Bandung, Indonesia , 3Statistics Study Program, Faculty of Mathematics and

Natural Sciences, Universitas Mulawarman, Indonesia, 4Mathematics Department, Faculty of Mathematics and Natural Sciences, Universitas Haluoleo, Indonesia, 5Statistics Department, Faculty of Mathematics and Natural

Sciences, Universitas Islam Bandung, Indonesia

[email protected],

ABSTRACT The act of adjusting a reservoir model until it closely reproduces the past behavior of historical production is termed history matching. The accuracy of the matching depends on the quality of the reservoir model and the quality of the production data. Once a model has been history matched, it can be used to simulate future reservoir behavior with a high degree of confidence. Stochastic history matching is designed to estimate reservoir properties, such as permeability, which are needed for forecasting reservoir production. History matching can be formulated as a state estimation problem in which the parameters of interest are added to the state and estimated together with the other states. A composite reservoir consists of two regions with different properties (permeability, porosity, viscosity, compressibility) and is used as a model for fluid flow. A thin discontinuity is assumed to separate the two regions. Oil flows through a porous medium, and the diffusivity equation is a key expression in petroleum engineering. The fluid flow model for a composite reservoir leads to a nonlinear estimation problem. The ensemble Kalman filter (EnKF) was introduced as a way to extend the Kalman filter (KF) to nonlinear problems. This paper combines sequential EnKF history matching with a composite reservoir model to estimate reservoir properties. The diffusion flow model, analytical solution, nonlinear state space model, estimation, and prediction are discussed. Two case studies explain the methodology: EnKF history matching for an infinite reservoir and for a composite reservoir. The case studies show that the proposed method can be considered an alternative to previously known history matching methods.

Keywords: composite reservoir; EnKF history matching

I. INTRODUCTION

During history matching, reservoir models are calibrated to improve forecast reliability. In most history matching applications, the parameters to be estimated consist only of the static parameters. In the EnKF, the static and dynamic parameters are updated simultaneously, which can introduce inconsistencies between the updates of the static and dynamic parameters (Gu and Oliver, 2004). The outcomes of history matching motivate the development of a stochastic framework, and the EnKF has gained popularity as a stochastic methodology for history matching. Naevdal et al. (2002) and Naevdal et al. (2007) discussed the application of the EnKF to reservoir models. Modeling a reservoir is a complicated task because information on reservoir properties is limited; parameters such as permeability and porosity are very uncertain. Therefore, the model must be improved from the available measurements by updating the model parameters. A composite reservoir is a system of two concentric regions with a single well at the center (Demski, 1987). The two regions have different properties, and fluid flow through the regions is represented by diffusivity equations. The solution for pressure as a function of permeability yields a nonlinear model. Linearity of the state model is an important assumption of the KF, but models are frequently nonlinear, such as pressure as a function of permeability in a composite reservoir. The EnKF was developed by Evensen (2007) for nonlinear models. The method uses Monte Carlo sampling to estimate the posterior distribution. The EnKF can be used to integrate observations by sequentially updating a sample of reservoir models during the simulation; each reservoir model is kept up to date as observations are assimilated.


The mean and covariance of the prior are estimated using Monte Carlo samples, and the KF update formula is then used to obtain the posterior distribution. History matching can be formulated as a state estimation problem in which the parameters of interest are added to the state and estimated together with the other states. Although the static parameters (permeability, porosity) do not vary in time, their estimates change as observations become available (Chen, Oliver, and Zhang, 2009). This paper combines sequential EnKF history matching with a composite reservoir model to estimate reservoir properties. Two case studies explain the methodology: EnKF history matching for an infinite reservoir and for a composite reservoir.

II. METHODOLOGY

The methodology starts with sequential estimation based on Bayesian inference. The composite reservoir model is developed from the basic homogeneous reservoir model, followed by the EnKF formulation for history matching problems.

Let $X$ denote the unobservable (state) quantities and $Y$ the observations. The probability model can be factored into prior, data distribution, marginal, and posterior: $f(x \mid y) = f(x)\, f(y \mid x) / f(y)$, where

$f(x)$ is the prior, which quantifies the understanding of the unobservable based on historical information or a forecast model; $f(y \mid x)$ is the sampling distribution of the observations given the unobservable, which quantifies the measurement errors; $f(y)$ is the marginal distribution, known as the prior predictive distribution; and $f(x \mid y)$ is the posterior, the update of the prior given the observation $Y$.

Scalar case. Let the prior of the univariate $X$ be $X \sim N(\mu, \tau^2)$. Conditioned on $X = x$, assume $t$ noisy observations $\mathbf Y = (Y_1, \dots, Y_t)$, with $Y_i \mid X = x \sim N(x, \sigma^2)$. The posterior is

$$X \mid \mathbf y \sim N\Big(\frac{t\tau^2 \bar y + \sigma^2 \mu}{t\tau^2 + \sigma^2},\; \frac{\sigma^2 \tau^2}{t\tau^2 + \sigma^2}\Big), \qquad \bar y = \frac{1}{t}\sum_{i=1}^{t} y_i.$$

The posterior mean can be written as

$$E[X \mid \mathbf y] = \mu + K(\bar y - \mu),$$

so the prior is adjusted according to the gain

$$K = \frac{t\tau^2}{t\tau^2 + \sigma^2}.$$

The prior variance is updated according to $\mathrm{Var}(X \mid \mathbf Y = \mathbf y) = (1 - K)\tau^2$.

Vector case.

Assume that the state $X \in \mathbb R^{p}$ has normal prior $X \sim N(\mu, P)$, and the observations follow the model $Y \mid X = x \sim N(Hx, R)$, where $H \in \mathbb R^{t \times p}$ is the observation matrix. The posterior distribution is given by

$$X \mid Y = y \sim N\Big(\big(H^{\top}R^{-1}H + P^{-1}\big)^{-1}\big(H^{\top}R^{-1}y + P^{-1}\mu\big),\; \big(H^{\top}R^{-1}H + P^{-1}\big)^{-1}\Big)$$

$$= N\big(\mu + K(y - H\mu),\; (I - KH)P\big), \qquad K = PH^{\top}\big(HPH^{\top} + R\big)^{-1}.$$

The posterior distribution is known as the analysis step: $\mu$ is the forecast based on previously collected data, and $P$ is the forecast error covariance. The prior mean $\mu$ is updated by the forecast error weighted by the gain $K$.

Sequential estimation. Define the notations $Y_{1:t} = (Y_1, \dots, Y_t)$ and $X_{0:t} = (X_0, \dots, X_t)$. The analysis is then

$$f(x_{0:t} \mid y_{1:t}) \propto f(x_{0:t})\, f(y_{1:t} \mid x_{0:t}),$$

where

$$f(x_{0:t}) = f(x_0) \prod_{j=1}^{t} f(x_j \mid x_{j-1})$$

represents the prior of the state and $f(x_0)$ is the initial state. The observations are independent:

$$f(y_{1:t} \mid x_{0:t}) = \prod_{j=1}^{t} f(y_j \mid x_j).$$

The analysis becomes a sequential update,

$$f(x_{0:t} \mid y_{1:t}) \propto f(x_0) \prod_{j=1}^{t} f(x_j \mid x_{j-1})\, f(y_j \mid x_j).$$

The result shows that as data become available, the previous estimate can be updated


without having to start from the beginning.

Kalman filter (KF). Consider the state model $X_t = FX_{t-1} + \varepsilon_t^{m}$, $\varepsilon_t^{m} \sim N(0, Q)$, and the measurement model $Y_t = HX_t + \varepsilon_t^{o}$, $\varepsilon_t^{o} \sim N(0, R)$. The error covariances for analysis and forecast are

$$P_{t|t} = E\big[(X_t - E[X_t \mid y_{1:t}])(X_t - E[X_t \mid y_{1:t}])^{\top}\big],$$
$$P_{t|t-1} = E\big[(X_t - E[X_t \mid y_{1:t-1}])(X_t - E[X_t \mid y_{1:t-1}])^{\top}\big].$$

The forecast distribution $X_t \mid y_{1:t-1} \sim N(X_{t|t-1}, P_{t|t-1})$ has mean and covariance

$$X_{t|t-1} = E[X_t \mid y_{1:t-1}] = FX_{t-1|t-1}, \qquad P_{t|t-1} = \mathrm{Var}(X_t \mid y_{1:t-1}) = Q + FP_{t-1|t-1}F^{\top}.$$

The analysis $X_t \mid y_{1:t}$ is normal with mean and covariance

$$X_{t|t} = X_{t|t-1} + K\big(y_t - HX_{t|t-1}\big), \qquad P_{t|t} = (I - KH)P_{t|t-1},$$
$$K = P_{t|t-1}H^{\top}\big(HP_{t|t-1}H^{\top} + R\big)^{-1}.$$
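The forecast/analysis cycle can be demonstrated in one dimension (F = H = 1, a constant true state); all numbers below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# 1-D Kalman filter sketch for a constant state (F = H = 1):
# forecast  x_f = x_a,           P_f = P_a + Q
# analysis  K = P_f / (P_f + R), x_a = x_f + K (y - x_f), P_a = (1 - K) P_f
x_true, Q, R = 5.0, 0.0, 0.5**2
x_a, P_a = 0.0, 10.0            # diffuse initial guess

for _ in range(50):
    y = x_true + rng.normal(0.0, np.sqrt(R))
    x_f, P_f = x_a, P_a + Q     # forecast step
    K = P_f / (P_f + R)         # Kalman gain
    x_a = x_f + K * (y - x_f)   # analysis mean
    P_a = (1.0 - K) * P_f       # analysis covariance

print(x_a, P_a)
```

After 50 assimilated observations the analysis mean is close to the true state and the analysis variance has shrunk well below the observation variance.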

Ensemble Kalman filter (EnKF). The EnKF is a Monte Carlo implementation of the KF. Monte Carlo samples are used to propagate the forecast through the nonlinear model; the mean and covariance of the forecast are estimated from the ensemble and then used in the KF update for the analysis step. The forecast is given by

$$f(x_t \mid y_{1:t-1}) \approx \frac{1}{m}\sum_{i=1}^{m} f\big(x_t \mid x_{t-1|t-1}^{i}\big),$$

and the update at time $t$ by

$$f(x_t \mid y_{1:t}) \propto f(y_t \mid x_t)\, N\big(\bar X_{t|t-1}, \hat P_{t|t-1}\big).$$

Assuming independent samples $x_{t-1|t-1}^{i}$, $i = 1, \dots, m$, the basic steps are:

(1) Forecast each member through the (possibly nonlinear) model $f$,

$$x_{t|t-1}^{i} = f\big(x_{t-1|t-1}^{i}\big) + \varepsilon_t^{m,i}, \qquad \varepsilon_t^{m,i} \sim N(0, Q),$$

and form the ensemble mean and covariance

$$\bar X_{t|t-1} = \frac{1}{m}\sum_{i=1}^{m} x_{t|t-1}^{i}, \qquad \hat P_{t|t-1} = \frac{1}{m-1}\sum_{i=1}^{m}\big(x_{t|t-1}^{i} - \bar X_{t|t-1}\big)\big(x_{t|t-1}^{i} - \bar X_{t|t-1}\big)^{\top}.$$

(2) Update each sample with the KF formula, using perturbed observations:

$$x_{t|t}^{i} = x_{t|t-1}^{i} + K\big(y_t + \varepsilon_t^{i} - Hx_{t|t-1}^{i}\big), \qquad K = \hat P_{t|t-1}H^{\top}\big(H\hat P_{t|t-1}H^{\top} + R\big)^{-1}, \qquad \varepsilon_t^{i} \sim N(0, R), \quad i = 1, \dots, m.$$
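The two EnKF steps above can be exercised on a toy problem. This sketch estimates a single static parameter through a stand-in nonlinear forward model h, computing the gain from ensemble covariances; everything here is illustrative, not the reservoir simulator:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy EnKF: estimate a static parameter theta from noisy observations of a
# nonlinear forward model h(theta). All numbers are illustrative.
def h(theta):
    return np.sqrt(np.abs(theta))      # stand-in nonlinear "simulator"

theta_true, R = 4.0, 0.05**2
m = 500                                # ensemble size
ens = rng.normal(3.0, 1.0, m)          # prior ensemble

for _ in range(30):                    # assimilate 30 observations in turn
    y = h(theta_true) + rng.normal(0.0, np.sqrt(R))
    Hf = h(ens)                        # forecast observations per member
    C_ty = np.cov(ens, Hf)[0, 1]       # cross-covariance state/obs
    C_yy = np.var(Hf, ddof=1) + R
    K = C_ty / C_yy                    # ensemble Kalman gain
    # perturbed-observation update for each member
    ens = ens + K * (y + rng.normal(0.0, np.sqrt(R), m) - Hf)

print(ens.mean())
```

The ensemble mean moves from the prior guess toward the true parameter value as observations are assimilated, which is the mechanism used below for transmissivity, storativity, and permeability.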

For radial flow into a well (infinite-acting reservoir), the pressure $P$ depends on the radius $r$ and time $t$:

$$\frac{\partial^2 P}{\partial r^2} + \frac{1}{r}\frac{\partial P}{\partial r} = \frac{\phi\mu c}{0.0002637\, k}\frac{\partial P}{\partial t},$$

with conditions

$$P(r, 0) = P_i, \qquad P(\infty, t) = P_i, \qquad 0.001127\,\frac{2\pi r h k}{\mu}\frac{\partial P}{\partial r}\Big|_{r_w} = q.$$

The solution is given by the expression

$$P(r, t) = P_i + \frac{70.6\,\mu\, Q_0 B_0}{kh}\,\mathrm{Ei}\Big(-\frac{948\,\phi\mu c\, r^2}{kt}\Big),$$

where $B_0$ is the oil formation volume factor (reservoir barrels/stock tank barrels), $Q_0$ is the oil flow rate, $q = B_0 Q_0$ is the flow rate, and $\mathrm{Ei}$ is the exponential integral

$$\mathrm{Ei}(-x) = -\int_x^{\infty} \frac{e^{-t}}{t}\, dt.$$
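The line-source solution can be evaluated directly with the exponential integral, using the SciPy relation E1(x) = -Ei(-x); the input values below are illustrative, not a particular field case:

```python
from scipy.special import exp1  # E1(x) = -Ei(-x) for x > 0

def drawdown_psi(r_ft, t_hr, Pi, q_stb_d, B, mu_cp, k_md, h_ft, phi, ct):
    """Line-source solution in oilfield units (sketch):
    P(r,t) = Pi + (70.6 q B mu / (k h)) * Ei(-948 phi mu ct r^2 / (k t))."""
    x = 948.0 * phi * mu_cp * ct * r_ft**2 / (k_md * t_hr)
    return Pi + 70.6 * q_stb_d * B * mu_cp / (k_md * h_ft) * (-exp1(x))

# pressure falls below Pi near the well while production continues
p = drawdown_psi(r_ft=0.1, t_hr=10.0, Pi=4000.0, q_stb_d=300.0, B=1.25,
                 mu_cp=1.5, k_md=50.0, h_ft=15.0, phi=0.15, ct=12e-6)
print(p)
```

Because Ei of a negative argument is negative, the computed pressure lies below the initial pressure Pi, as expected for a producing well.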

Fluid flow through the two composite reservoir regions is represented by the diffusivity equations

$$\frac{1}{r}\frac{\partial}{\partial r}\Big(r\frac{\partial P_1}{\partial r}\Big) = \frac{\phi_1\mu_1 c_1}{k_1}\frac{\partial P_1}{\partial t}, \qquad r_w \le r \le R,$$

$$\frac{1}{r}\frac{\partial}{\partial r}\Big(r\frac{\partial P_2}{\partial r}\Big) = \frac{\phi_2\mu_2 c_2}{k_2}\frac{\partial P_2}{\partial t}, \qquad r \ge R,$$

with conditions

$$P_1(r, 0) = P_i, \qquad P_2(r, 0) = P_i, \qquad P_1(r_w, t) = P_w,$$

$$P_1(R, t) = P_2(R, t), \qquad \frac{k_1}{\mu_1}\frac{\partial P_1}{\partial r}\Big|_{r=R} = \frac{k_2}{\mu_2}\frac{\partial P_2}{\partial r}\Big|_{r=R}, \qquad P_2(\infty, t) = P_i.$$

Consider semi-infinite composite regions $(-\infty, 0) \cup (0, \infty)$; the diffusivity equations are

$$\frac{\partial^2 P_1}{\partial x^2} = \frac{1}{k_1}\frac{\partial P_1}{\partial t}, \qquad -\infty < x < 0,$$

$$\frac{\partial^2 P_2}{\partial x^2} = \frac{1}{k_2}\frac{\partial P_2}{\partial t}, \qquad x > 0,$$

with

$$k_1\frac{\partial P_1}{\partial x}\Big|_{x=0} = k_2\frac{\partial P_2}{\partial x}\Big|_{x=0}, \qquad P_1(0, t) = P_2(0, t),$$

$$P_1(x, 0) = 0, \qquad P_2(x, 0) = 0, \qquad P_1(-\infty, t) = P_i.$$


The solutions are given by (Carslaw and Jaeger, 1980: 319) as image series in complementary error functions,

$$P_1(x, t) = P_i \sum_{i=0}^{\infty} \alpha^{i}\left[\mathrm{erfc}\,\frac{(2i+1) - x}{2\sqrt{k_1 t}} - \alpha\,\mathrm{erfc}\,\frac{(2i+1) + x}{2\sqrt{k_1 t}}\right],$$

$$P_2(x, t) = \frac{2P_i}{1 + \lambda} \sum_{i=0}^{\infty} \alpha^{i}\,\mathrm{erfc}\,\frac{(2i+1) + \sqrt{k_1/k_2}\; x}{2\sqrt{k_1 t}},$$

with

$$\lambda = \sqrt{\frac{k_1}{k_2}}, \qquad \alpha = \frac{\lambda - 1}{\lambda + 1},$$

and the complementary error function defined by

$$\mathrm{erfc}(x) = 1 - \mathrm{erf}(x), \qquad \mathrm{erf}(x) = \frac{2}{\sqrt{\pi}}\int_0^{x} e^{-t^2}\, dt, \qquad \mathrm{erf}(\infty) = 1, \qquad \mathrm{erf}(-x) = -\mathrm{erf}(x).$$
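A short sketch evaluating the region-2 series as stated above (lengths normalized so the region-1 thickness is 1; all parameter values illustrative):

```python
import math

def P2(x, t, Pi, k1, k2, n_terms=200):
    """Evaluate the image-series solution for region 2 stated above
    (sketch; lengths normalized so the region-1 thickness is 1)."""
    lam = math.sqrt(k1 / k2)
    alpha = (lam - 1.0) / (lam + 1.0)
    s = 0.0
    for i in range(n_terms):
        arg = ((2 * i + 1) + math.sqrt(k1 / k2) * x) / (2.0 * math.sqrt(k1 * t))
        s += alpha**i * math.erfc(arg)
    return 2.0 * Pi / (1.0 + lam) * s

p = P2(x=0.5, t=1.0, Pi=4000.0, k1=43.0, k2=54.0)
print(p)
```

Since |alpha| < 1 and erfc is bounded by 2, the series converges quickly; a few hundred terms are far more than needed at moderate times.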

III. RESULTS AND DISCUSSION

Almendral-Vazquez and Syversveen (2006) gave a review of the application of the EnKF to history matching of oil reservoirs, where the interest is in estimating permeability from observations of the bottom hole pressure (BHP). In this paper, however, the interest is in estimating the transmissivity $T = kh/\mu$ and the storativity $S = \phi c h$ from observations of bottom hole pressure. The state consists of the static parameters $(S, T)$ and the dynamic bottom hole pressure $P(r, t)$:

$$X = \big(S,\; T,\; P(r, t)\big)^{\top}.$$

The simulated bottom hole pressures are obtained from $P(r, t)$ at the well radius $r = r_w$, and the observations are the observed pressures $Y = P_{obs}(r, t)$. The static parameters are modeled as $S_t = S_{t-1}$, $T_t = T_{t-1}$; the time grid is $t_k = k\,\Delta t$; and $S_0, T_0$ are drawn from a normal distribution. On an interval $[0, T]$, a set of observations was generated using the true values $S_{true}, T_{true}$: the observation at $t_{obs}$ is a perturbed value of $P(r_w, t_{obs}, S_{true}, T_{true})$. The simulation was set up with $T = 5000$, $r_w = 0.1$ ft, $\mu = 1.5$ cp, $Q_0 = 300$ STB/day, $B_0 = 1.25$ bbl/STB, $\phi = 0.15$, $P_i = 4000$ psi, $h = 15$ ft, $c = 12 \times 10^{-6}$ 1/psi, and $\Delta t = 0.5$ hour. Figure 1 shows the observations (represented by circles) and the evolution of transmissivity and storativity. The observations update the static parameters, and the results show that the EnKF converges to the true values after a number of iterations.

Figure 1. EnKF for transmissivity $T = kh/\mu$ and storativity $S = \phi c h$ in a homogeneous reservoir. The filter recovers the true storativity and transmissivity after a certain number of iterations.

Composite reservoirs play an important role in reserve evaluation. The interest is in estimating the permeabilities from bottom hole pressure. The proposed composite model can be viewed as a reinjection process. The principal transport mode is diffusion, which drives flow from regions of high pressure to regions of lower pressure. Treated as one-dimensional flow, the flow from the injector to the well can be represented by a one-dimensional flow channel. The basic solution for interpretation is the series solution $P_2(x, t)$ given above. The dynamics of the reservoir are represented by the analytic solution or the simulator. The concept of fit is formalized as the difference between the


simulated pressure $P_{sim}(x, t)$, generated by the analytical solution or reservoir simulator, and the pressure from observations $P_{obs}(x, t)$, generated from $P_2(x, t)$ using the true permeabilities $k_1, k_2$; the misfit is $P_{sim}(x, t) - P_{obs}(x, t)$. The interpretation can be characterized by three factors: (1) state space modeling and the initial reservoir parameter distribution, (2) characterization of the prior distribution, and (3) the updating (posterior) process. The experiment was set up with $T = 2000$, ensemble size $m = 100$, number of observations $n = 40$, initial pressure 4000, time step $\Delta t = 0.5$, and true permeabilities $k_{1,true} = 43$, $k_{2,true} = 54$. Figures 2-5 show the results for the composite reservoir, varying the well position $x = 100$ and $x = 75$ m. In all cases there is a good match between the simulated and the true pressure; moreover, the initial parameter distributions converge to the true values. For the cases studied, the EnKF estimate converges to the true parameter values, as expected. The experiment shows that the EnKF is a promising method for updating reservoir properties such as permeability and porosity.

(a) Permeability region 1

(b) Permeability region 2

Figure 2. Permeability progress and pressure (simulated and observed), 200, x = 100.

(a) Permeability region 1

(b) Permeability region 2

Figure 3. Permeability progress and pressure (simulated and observed), 200, x = 100.

(a) Permeability region 1

(b) Permeability region 2

Figure 4. Permeability progress and pressure (simulated and observed), 200, x = 100.

(a) Permeability region 1

(b) Permeability region 2

Figure 5. Permeability progress and pressure (simulated and observed), 200, x = 75.


IV. SUMMARY AND CONCLUSIONS

The ensemble Kalman filter is an iterative procedure suitable for nonlinear models such as composite reservoir models. The use of the EnKF for updating reservoir models, analytically or numerically, using information from observations remains a problem of intense interest. Currently, the equations for a three-phase system must be solved by a simulator; an analytical approach is needed as part of the validation process. In this paper, a special case was investigated: the analytical solution of a single-phase system was used in place of the simulator. The applicability of the method has been studied through synthetic examples. The experiments show that the EnKF is a promising method for updating reservoir properties such as permeability and porosity. A 2D radial composite reservoir model is a topic for further research.

REFERENCES

Almendral-Vazquez A, Syversveen A R, 2006, The Ensemble Kalman Filter - Theory and Application in Oil Industry, Norsk Regnesentral, Norwegian Computing Centre

Carslaw H S and Jaeger J C, 1980, Conduction of Heat in Solids, Oxford, Clarendon Press

Chen Y, Oliver D S, and Zhang D, 2009, Data Assimilation for Nonlinear Problems by Ensemble Kalman Filter with Reparameterization, Journal of Petroleum Science and Engineering, 66, 1-14

Demski J A, 1987, Decline Curve Derivative Analysis for Homogeneous and Composite Reservoir, Stanford University, California

Evensen G, 2007, Data Assimilation: The Ensemble Kalman Filter, Springer

Gu Y and Oliver D S, 2004, History Matching of the PUNQ-S3 Reservoir Model Using the Ensemble Kalman Filter, SPE 89942, SPE Annual Technical Conference and Exhibition, Houston, 26-29 Sept 2004

Naevdal G, Mannseth T, and Vefring E H, 2002, Near-Well Reservoir Monitoring Through Ensemble Kalman Filter, SPE 75235, SPE Improved Oil Recovery Symposium, Tulsa, 13-17 April 2002

Naevdal G, Bianco A, Cominelli A, Dovera L, Lorentzen R J, and Valles B, 2007, State Estimation of a Large Scale System in the Petroleum Industry: the Ensemble Kalman Filter for Updating Reservoir Models, 8th Int. Symposium on Dynamics and Control of Process Systems, June 6-8 2007, Mexico

Wikle C K and Berliner L M, 2006, A Bayesian Tutorial for Data Assimilation, Physica D, doi:10.1016/j.physd.2006.09.017


PROBABILITY ANALYSIS OF RAINY EVENT WITH THE WEIBULL DISTRIBUTION AS A BASIC MANAGEMENT

IN OIL PALM PLANTATION

Divo D. Silalahi SMART Research Institute, PT. SMART Tbk, Riau, Indonesia

E-mail: [email protected]

Abstract The three-parameter Weibull distribution can be used to determine the probability of the number of rainy events at the minimum, maximum, and average levels. The three-parameter Weibull distribution has a symmetric pattern on the density function of the data. High rainfall potential occurs during October to December, and low rainfall potential occurs in February and June. This rainfall information can therefore be used to estimate the early season for effective planting and fertilizing in the field. At the minimum level of rainy events, the highest probability level, larger than 99.06%, occurs in April; at the average level, the highest probability level, larger than 53.77%, occurs in September and the lowest, 34.14%, in July. This means that the lowest error of weather prediction will occur in September and the largest in July. At the maximum level of monthly rainy events, the highest probability level, larger than 4.97%, occurs in November. Comparing the results for P75 and Pmean, March, April, and September have similar characteristics in the potential for rainy events; a similar phenomenon also occurs in May and July. The correlation between P75 and Pmean is 0.914.

Keywords: Three-parameter Weibull distribution, correlation, goodness-of-fit test

A. INTRODUCTION

Probability values are very useful in the plantation sector, especially in oil palm plantations. A probability value can be used to estimate a changing event, such as a rainy event, which is used to predict the rain potential in an estate region. The probability of rainy events can also be used to determine the characteristics of rainfall at the minimum, maximum, and average levels, as one of the key factors in fertilizer management, the amount of evapotranspiration, planting-pattern adjustment, etc. Since rainy events always change, it is necessary to use statistical analysis with a specific probability distribution that represents the symmetric nature of the data.

The Weibull distribution with three parameters is proposed for determining the probability of rainy events. This distribution is a probability distribution with symmetric and representative parameters for a changing event. It has three parameters, namely A as the scale, B as the threshold (location), and C as the shape parameter. The distribution can be used to determine the probability of rainy events at various characteristic levels.

Therefore, in order to determine the rainfall potential in a region, it is necessary to apply a systematic study of the probability value by determining the number of rainy events in a mathematical model.

B. MATERIAL AND METHOD

To determine the probability of rainy events, data were obtained from the climate station in Division VII, Libo estate, Riau, collected from 1991 to 2009. Figure 1 shows the frequency of rainy events at the climate station.

[Histogram of rainy-event frequency; Std. Dev = 4.18, Mean = 8.9, N = 228.00]

Figure 1. Frequency of rainy events from 1991 to 2009

For systematic estimation, the theoretical method of the three parameter Weibull distribution is used. The theoretical method uses maximum likelihood estimation to determine the estimated parameters of the distribution function. Based on this distribution, a normality test was then carried out using the Kolmogorov-Smirnov test.

B.1. Three Parameter Weibull Distribution

This distribution was introduced by Waloddi Weibull, a Swedish physicist, in 1939. As the name implies, this distribution has three parameters: A as the scale parameter, B as the location parameter, and C as the shape parameter.

The probability density function of a random variable X following the three parameter Weibull distribution (Bain, 1991) is given by:

$$f(X)=\begin{cases}\dfrac{C}{A}\left(\dfrac{X-B}{A}\right)^{C-1}\exp\left[-\left(\dfrac{X-B}{A}\right)^{C}\right], & X>B\\[4pt] 0, & \text{otherwise}\end{cases} \quad (1)$$

Based on the probability density function, the cumulative distribution function of the three parameter Weibull distribution is expressed as:

$$F(X)=\begin{cases}1-\exp\left[-\left(\dfrac{X-B}{A}\right)^{C}\right], & X>B,\ A>0,\ C>0\\[4pt] 0, & X\le B\end{cases} \quad (2)$$

The variance of the three parameter Weibull distribution follows from $\mathrm{Var}(X)=E(X^{2})-\bigl(E(X)\bigr)^{2}$ with $E(X)=B+A\,\Gamma\!\left(1+\tfrac{1}{C}\right)$, which gives:

$$\mathrm{Var}(X)=A^{2}\left[\Gamma\!\left(1+\frac{2}{C}\right)-\Gamma^{2}\!\left(1+\frac{1}{C}\right)\right] \quad (3)$$
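As a numeric illustration (not part of the paper), Eqs. (1)-(3) can be checked against SciPy's `weibull_min`, whose `c`, `loc` and `scale` arguments play the roles of C, B and A; the parameter values below are the January estimates from Table 1, used purely as an example:

```python
from math import gamma

import numpy as np
from scipy.stats import weibull_min

A, B, C = 9.65, -0.36, 2.12            # scale, location (threshold), shape
dist = weibull_min(c=C, loc=B, scale=A)

x = 8.9
pdf_manual = (C / A) * ((x - B) / A) ** (C - 1) * np.exp(-((x - B) / A) ** C)
cdf_manual = 1 - np.exp(-((x - B) / A) ** C)

mean_manual = B + A * gamma(1 + 1 / C)                           # E(X)
var_manual = A**2 * (gamma(1 + 2 / C) - gamma(1 + 1 / C) ** 2)   # Eq. (3)

print(pdf_manual, dist.pdf(x))    # Eq. (1) vs. library
print(cdf_manual, dist.cdf(x))    # Eq. (2) vs. library
print(mean_manual, var_manual)
```

The hand-coded formulas and the library agree to floating-point precision, which is a quick consistency check on the reconstruction of Eqs. (1)-(3).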

B.2. Parameter Estimation with Theoretical Method

The theoretical method used to estimate the parameters A, B, and C is the maximum likelihood method (Cohen et al., 1984). Maximum likelihood estimates can be obtained by solving the first derivative equations of the likelihood function of the Weibull distribution. The likelihood function is expressed as:

$$L(A,B,C)=\prod_{i=1}^{N}\frac{C}{A}\left(\frac{X_i-B}{A}\right)^{C-1}\exp\left[-\left(\frac{X_i-B}{A}\right)^{C}\right]=C^{N}A^{-NC}\prod_{i=1}^{N}(X_i-B)^{C-1}\exp\left[-\sum_{i=1}^{N}\left(\frac{X_i-B}{A}\right)^{C}\right] \quad (4)$$

By taking the natural logarithm, the likelihood function can be expressed as:

$$K=\ln L(A,B,C)=N\ln C-NC\ln A+(C-1)\sum_{i=1}^{N}\ln(X_i-B)-\sum_{i=1}^{N}\left(\frac{X_i-B}{A}\right)^{C}$$

B.3. Goodness of Fit Test

Hypothesis testing is a procedure for making a decision about the correctness of a hypothesis, aiming at a decision to accept or reject it based on the analysis of certain samples. One such sample analysis, used to test whether an assumed theoretical distribution is justified or refuted, is the Kolmogorov-Smirnov goodness-of-fit test.
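A minimal sketch of such a test with SciPy, assuming hypothetical fitted parameters (here the January values from Table 1) and a small synthetic sample standing in for the observed counts:

```python
from scipy.stats import kstest, weibull_min

A, B, C = 9.65, -0.36, 2.12          # hypothetical fitted parameters
sample = weibull_min.rvs(c=C, loc=B, scale=A, size=19, random_state=1)

# The statistic D_N is the largest gap between the empirical CDF S_N(X)
# and the theoretical CDF F*(X); compare it with D(N=19, alpha=0.05) = 0.301.
res = kstest(sample, weibull_min(c=C, loc=B, scale=A).cdf)
print(res.statistic, res.pvalue)
```

If the statistic stays below the critical value (equivalently, the p-value exceeds 0.05), the assumed Weibull distribution cannot be rejected.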

C. RESULT AND DISCUSSION

C.1. Probability Analysis of Rainy Event Using Theoretical Methods

C.1.1. Description of Data Observation

After processing the primary rainy-event data for January to December from 1991 to 2009, it can be seen that there are some differences in rainy events at each level of the distribution.

Figure 2. Distribution of monthly rainy events (minimum, mean, and maximum) from January to December.

Judging from the number of rainy events, the lowest minimum number of monthly rainy events occurs in January, February, May, and July. The highest average number of rainy events occurs in November, and the maximum number of rainy events occurs in October and November. From this, it can be seen that the high rainfall potential occurs in October, November, and December, while the low rainfall potential occurs in February and June. Based on this, the start of the effective season for planting and fertilizing in the field can be identified.

Figure 3. Probability distribution levels of rainy events at the minimum, maximum, and average.

At the minimum level of rainy events, the highest probability level, larger than 99.06%, occurs in April; at the average level the highest probability level, larger than 53.77%, occurs in September, and the lowest, 34.14%, in July. This means that the lowest weather-prediction error will occur in September and the largest in July. At the maximum level of monthly rainy events, the highest probability level, larger than 4.97%, occurs in November. The differences in the probability level of Pmean for rainy events in each month may be caused by different coefficients of variation: the larger the coefficient of variation and the lower the average number of rainy events, the lower the probability level of Pmean.

The probability ranges of the number of rainy events from 1991 to 2009 are described as follows.

Table 1. Probability ranges of the number of rainy events.

Month      i     ii    iii     iv      v       vi     vii
January    Min    1    51.8%    9.65    -0.36    2.12   98.4%
January    Mean   9    51.8%    9.65    -0.36    2.12   39.2%
January    Max   20    51.8%    9.65    -0.36    2.12    0.8%
February   Min    1    41.2%    9.04    -2.48    3.90   97.6%
February   Mean   6    41.2%    9.04    -2.48    3.90   45.9%
February   Max   10    41.2%    9.04    -2.48    3.90    3.0%
March      Min    5    36.6%    9.95    13.24    3.64   99.2%
March      Mean  10    36.6%    9.95    13.24    3.64   41.3%
March      Max   20    36.6%    9.95    13.24    3.64    1.6%
April      Min    6    27.9%    4.46     5.80    1.49   99.1%
April      Mean  10    27.9%    4.46     5.80    1.49   40.1%
April      Max   16    27.9%    4.46     5.80    1.49    3.2%
May        Min    1    41.7%    5.58     2.67    1.60   98.9%
May        Mean   8    41.7%    5.58     2.67    1.60   39.5%
May        Max   16    41.7%    5.58     2.67    1.60    2.8%
June       Min    2    33.1%   46.95   -39.34   26.18   96.5%
June       Mean   7    33.1%   46.95   -39.34   26.18   49.1%
June       Max   10    33.1%   46.95   -39.34   26.18    2.5%
July       Min    1    52.7%    5.91     1.78    1.45   99.2%
July       Mean   8    52.7%    5.91     1.78    1.45   34.1%
July       Max   19    52.7%    5.91     1.78    1.45    0.9%
August     Min    3    50.2%    5.13     2.83    1.26   98.6%
August     Mean   8    50.2%    5.13     2.83    1.26   36.4%
August     Max   18    50.2%    5.13     2.83    1.26    2.0%
September  Min    3    34.1%    9.17     1.30    2.73   99.0%
September  Mean  10    34.1%    9.17     1.30    2.73   53.8%
September  Max   17    34.1%    9.17     1.30    2.73    1.3%
October    Min    2    49.3%   10.69     0.46    2.02   98.0%
October    Mean  11    49.3%   10.69     0.46    2.02   37.9%
October    Max   21    49.3%   10.69     0.46    2.02    2.4%
November   Min    4    36.0%   14.77    -0.14    3.07   98.0%
November   Mean  14    36.0%   14.77    -0.14    3.07   41.7%
November   Max   21    36.0%   14.77    -0.14    3.07    5.0%
December   Min    5    31.3%    9.59     3.54    2.41   98.9%
December   Mean  13    31.3%    9.59     3.54    2.41   38.0%
December   Max   20    31.3%    9.59     3.54    2.41    2.6%

Legend: i. rate (Min/Mean/Max); ii. number of rainy events; iii. coefficient of variation; iv. scale parameter; v. location parameter; vi. shape parameter; vii. probability.
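Column vii matches the upper-tail (survival) probability of the fitted Weibull at the listed number of rainy events. For instance, the three January rows can be reproduced with SciPy and the January parameters from Table 1:

```python
from scipy.stats import weibull_min

# January parameters from Table 1: shape C, location B, scale A.
jan = weibull_min(c=2.12, loc=-0.36, scale=9.65)

for level, x in [("Min", 1), ("Mean", 9), ("Max", 20)]:
    print(f"{level:4s} P(X > {x:2d}) = {jan.sf(x):.1%}")
```

This prints 98.4%, 39.2% and 0.8%, exactly the three January entries in column vii.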

C.1.2. Goodness of Fit Test

Based on the probability results obtained, the Kolmogorov-Smirnov test was carried out using the ratio of the number of rainy events to the total number that occurred; the maximum difference (DN) between the observed cumulative distribution function SN(X) and the theoretical cumulative distribution function F*(X) was then determined.

Table 2. Kolmogorov-Smirnov test

Month      |DN|    D(N=19, α=0.05)  P-value  α
January    0.132   0.301            0.404    0.05
February   0.129   0.301            0.906    0.05
March      0.153   0.301            0.612    0.05
April      0.127   0.301            0.917    0.05
May        0.119   0.301            0.948    0.05
June       0.182   0.301            0.486    0.05
July       0.095   0.301            0.976    0.05
August     0.112   0.301            0.907    0.05
September  0.112   0.301            0.969    0.05
October    0.109   0.301            0.802    0.05
November   0.077   0.301            0.991    0.05
December   0.161   0.301            0.702    0.05

Critical value: D(N=19, α=0.05) = 0.301.


Figure 4. Fitted three parameter Weibull distribution and Weibull distribution plot.

From the Kolmogorov-Smirnov test in Table 2 and Figure 4 above, it can be seen that |DN| of the Weibull distribution is lower than the critical value D(N=19, α=0.05) = 0.301 in every month. This means the distribution of the number of rainy events in each month from 1991 to 2009 gives a representative result. The P-value of the Kolmogorov-Smirnov test for each distribution is larger than α = 0.05, so it can be assumed that the probability distribution of each month is representative of and symmetric in the data.

C.1.3. Differences between P75 and Pmean

Table 3. Differences between P75 and Pmean

Month      Pmean  P75  Standard Deviation  Coefficient of Variation
January     9      5   4.24                51.83%
February    6      4   2.35                41.16%
March      10      7   3.64                36.59%
April      10      7   2.75                27.95%
May         8      5   3.20                41.66%
June        7      5   2.19                33.06%
July        8      4   3.77                52.73%
August      8      4   3.82                50.24%
September  10      7   4.23                34.10%
October    11      6   4.90                49.31%
November   14     10   4.71                36.03%
December   13      8   3.77                31.27%

Based on this comparison, March, April, and September have similar characteristics in the potential of rainy events. A similar phenomenon also occurs in May and July.

C.1.4. Correlation between P75 and Pmean

Table 4. Correlation between P75 and Pmean

Regression equation: P75 = 0.7213 · Pmean - 0.8525; Sig-F = 0.000; Sig-t = 0.000; coefficient of correlation = 0.914.

The correlation between P75 and Pmean is strong, 0.914, so the regression equation above can be used to facilitate the use of the three parameter Weibull distribution at a probability level larger than 75%.
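The regression and correlation in Table 4 follow directly from the Pmean and P75 columns of Table 3; a quick check with NumPy:

```python
import numpy as np

# Monthly Pmean and P75 values, January to December (Table 3).
pmean = np.array([9, 6, 10, 10, 8, 7, 8, 8, 10, 11, 14, 13])
p75 = np.array([5, 4, 7, 7, 5, 5, 4, 4, 7, 6, 10, 8])

slope, intercept = np.polyfit(pmean, p75, 1)
r = np.corrcoef(pmean, p75)[0, 1]
print(f"P75 = {slope:.4f}*Pmean {intercept:+.4f}, r = {r:.3f}")
# P75 = 0.7213*Pmean -0.8525, r = 0.914
```

The recovered slope, intercept and correlation coefficient match the values reported in Table 4.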

D. CONCLUSION

According to the simulation results for the probability of the number of rainy events with the three parameter Weibull distribution, it can be concluded that:

1. The three parameter Weibull distribution can be used to determine the probability of the number of rainy events.

2. The three parameter Weibull distribution has a symmetric pattern in the density function of the data.

3. High rainfall potential occurs from October to December, and low rainfall potential occurs in February and June.

4. At the minimum level of rainy events, the highest probability level, larger than 99.06%, occurs in April; at the average level the highest probability level, larger than 53.77%, occurs in September and the lowest, 34.14%, in July. This means that the lowest weather-prediction error will occur in September and the largest in July. At the maximum level of monthly rainy events, the highest probability level, larger than 4.97%, occurs in November.

5. In the comparison of P75 and Pmean, March, April, and September have similar characteristics in the potential of rainy events; a similar phenomenon also occurs in May and July. The correlation between P75 and Pmean is 0.914.

E. REFERENCES

Bain, L.J. and Engelhardt, M. 1991. Introduction to Probability and Mathematical Statistics. California: Duxbury Press.

Casella, G. and Berger, R.L. 1990. Statistical Inference. California: Brooks/Cole Publishing Company.



Cohen, A. C., Whitten, B. and Ding, Y. (1984). Modified Moment Estimation for the Three-Parameter Weibull Distribution. Journal of Quality Technology 16, pp. 159-167.

Haan, C.T. 1977. Statistical Methods in Hydrology. Ames, Iowa: The Iowa State University Press. 378 pp.

Hines, W.W. and Montgomery, D.C. 1996. Probability and Statistics in Engineering and Management Science. John Wiley & Sons, Inc.

Kao, J.H.K. (1959). A Graphical Estimation of Mixed Weibull Parameters in Life Testing of Electron Tubes. Technometrics 1, pp. 389-407.

Mann, N. and Fertig, K. (1975). A Goodness-of-Fit Test for the Two-Parameter vs. Three-Parameter Weibull; Confidence Bounds for Threshold. Technometrics 17, pp. 237-245.

Sellers, W.D. 1965. Physical Climatology. University of Toronto Press, 272 pp.

Weibull, W. (1951). A Statistical Distribution Function of Wide Applicability. Journal of Applied Mechanics 18, pp. 294-298.


Calculating Area of Earth's Surface Based on Discrete GPS Data

Alexander A S Gunawan (1), Aripin Iskandar (2)
(1) Mathematics Department, Binus University, Jakarta, Indonesia (Email: aagun…@binus.edu)
(2) Computer Engineering Lab, Binus University, Jakarta, Indonesia (Email: aripin_8…)

ABSTRACT
In this paper, a method is presented to calculate the area of Earth's surface based on discrete GPS (Global Positioning System) data. The obtained discrete GPS data take the format of latitude and longitude. To utilize the GPS data, the data must be projected onto a flat plane or 2 dimensional view. Once the projected coordinates are obtained, points, lines and polygons can be quantified. The objective of the paper is to manipulate the discrete GPS data to define a polygon by using the Lambert Conformal Conic Projection, and then to determine the projected area of the polygon by using Green's Theorem. The resulting algorithm is implemented in software.

KEYWORDS
Discrete GPS data; Earth surface; Area; Lambert Conformal Conic Projection; Green's Theorem.

I. INTRODUCTION
Area is a quantity expressing the two-dimensional size of a defined part of a surface, typically a region bounded by a closed curve. For certain closed curves, such as the square, parallelogram, and regular polygon, it is simple to calculate area by definite formulae. Unfortunately, area calculation on the Earth's surface is complicated by the fact that the latitude and longitude line segments are actually curves and that the distance between points separated by the same number of degrees of longitude varies according to the cosine of the latitude. Furthermore, the Earth is not a sphere. The Earth is a little flat at the poles (the distance from the center to a pole is shorter than the distance from the center to anywhere on the equator) and is a little rough. Thus, to calculate the area of Earth's surface, first we must consider the Earth's shape and then derive the area calculation algorithm.

The proposed approach here is to exploit the availability of microwave signals from the thirty satellites of the Global Positioning System (GPS). The monitoring of GPS data will allow the detection of the latitude and longitude on the Earth's surface, as in [1]. By considering the Earth's shape, the discrete GPS data of the Earth's surface must be projected onto a flat plane or 2 dimensional views. In this paper, the Lambert Conformal Conic Projection is used to determine the projected coordinates, and based on the projected coordinates, points, lines and polygons can be quantified. The next step is to calculate the projected area of the polygon based on the discrete GPS data of the Earth's surface by using Green's Theorem.

II. AREA CALCULATION
Lambert Conformal Conic Projection
A Lambert conformal conic projection as in [4] is a conic map projection. In essence, the projection superimposes a cone over the sphere of the Earth, with two standard parallels to the globe and intersecting it. This minimizes the distortion from projecting a three dimensional surface to a two-dimensional surface. There is no distortion along the standard parallels, but distortion increases further from the chosen parallels.

Figure 1. Lambert Conformal Conic Projection

As the name indicates, maps using this projection are conformal, meaning the mapping preserves angles.

To consider the Earth's shape, GRS 80, or Geodetic Reference System 1980, is used; this is a geodetic reference system consisting of a global reference ellipsoid. A reference ellipsoid, customarily chosen to be the same size (volume) as the figure of the Earth, is

described by its equatorial radius a and eccentricity e. For GRS 80, these are: a = 6,378,137 m; e = 0.081819191.

Let λ be the longitude, λ0 the reference longitude, φ the latitude, φ0 the reference latitude, and φ1 and φ2 the standard parallels. Using this reference system, several additional parameters need to be computed before the transformation can be undertaken, that is:

$$n=\frac{\ln m_1-\ln m_2}{\ln t_1-\ln t_2} \quad (1)$$

$$F=\frac{m_1}{n\,t_1^{\,n}} \quad (2)$$

where

$$m=\frac{\cos\varphi}{\sqrt{1-e^{2}\sin^{2}\varphi}} \quad (3)$$

$$t=\tan\!\left(\frac{\pi}{4}-\frac{\varphi}{2}\right)\Big/\left[\frac{1-e\sin\varphi}{1+e\sin\varphi}\right]^{e/2} \quad (4)$$

$$\rho=a\,F\,t^{\,n} \quad (5)$$

Note: m1 and m2 are obtained by evaluating m (Eq. 3) using the standard parallels φ1 and φ2; t0, t1 and t2 are obtained by evaluating t (Eq. 4) using φ0, φ1 and φ2; ρ0 is obtained by evaluating ρ (Eq. 5) using t0.

Then the transformation of spherical coordinates to the plane via the Lambert conformal conic projection is given by:

$$x=\rho\sin\bigl(n(\lambda-\lambda_0)\bigr) \quad (6)$$

$$y=\rho_0-\rho\cos\bigl(n(\lambda-\lambda_0)\bigr) \quad (7)$$

In the application, we can choose the reference coordinate (φ0, λ0) as the average of all data and the standard parallels φ1 and φ2 as the minimum and the maximum latitude in the data.

Green's Theorem
Green's theorem is a very interesting theorem which shows a relationship between a closed area and the path surrounding this closed area. Below are the integrals representing Green's theorem as in [2]; on the left there is an area integration (double integral) and on the right a line integration (single integral):

$$\iint_A\left(\frac{\partial M}{\partial x}-\frac{\partial L}{\partial y}\right)dA=\oint\left(L\,dx+M\,dy\right) \quad (8)$$

where L and M are two functions L(x, y) and M(x, y). The area of any real region is defined as:

$$A=\iint_A dA \quad (9)$$

If the two functions L and M are chosen such that ∂M/∂x − ∂L/∂y = 1, Green's theorem can be used to calculate area. In this paper, we choose:

$$M=\frac{x}{2}\quad\text{and}\quad L=-\frac{y}{2} \quad (10)$$

Therefore, the formula for area calculation is:

$$A=\iint_A dA=\frac{1}{2}\oint\left(x\,dy-y\,dx\right) \quad (11)$$

The line integral can be evaluated using the following discrete approximation as in [5]:

$$A=\frac{1}{2}\sum_{i=0}^{n-1}\left(x_i\,y_{i+1}-x_{i+1}\,y_i\right) \quad (12)$$

In the application, the area is typically a region bounded by a closed loop polygon, and thus for convenience the initial point (x0, y0) of Eq. (12) is set equal to the end point (xn, yn) of this closed loop polygon. The sign of the calculated area depends on the orientation of the closed loop polygon, that is: (+) for
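Putting Eqs. (1)-(7) and Eq. (12) together, the projection-then-area pipeline can be sketched as follows. The GRS 80 constants are the ones given above; the helper names and the sample polygon near Jakarta are our own invention for illustration:

```python
import math

A_RADIUS = 6378137.0   # GRS 80 equatorial radius a, in metres
ECC = 0.081819191      # GRS 80 eccentricity e

def m_of(phi):         # Eq. (3)
    return math.cos(phi) / math.sqrt(1.0 - (ECC * math.sin(phi)) ** 2)

def t_of(phi):         # Eq. (4)
    return math.tan(math.pi / 4.0 - phi / 2.0) / (
        (1.0 - ECC * math.sin(phi)) / (1.0 + ECC * math.sin(phi))) ** (ECC / 2.0)

def lambert_xy(lat, lon, lat0, lon0, lat1, lat2):
    """Eqs. (1)-(7): project (lat, lon), in radians, onto the plane."""
    n = (math.log(m_of(lat1)) - math.log(m_of(lat2))) / (
        math.log(t_of(lat1)) - math.log(t_of(lat2)))            # Eq. (1)
    F = m_of(lat1) / (n * t_of(lat1) ** n)                      # Eq. (2)
    rho = A_RADIUS * F * t_of(lat) ** n                         # Eq. (5)
    rho0 = A_RADIUS * F * t_of(lat0) ** n
    return (rho * math.sin(n * (lon - lon0)),                   # Eq. (6)
            rho0 - rho * math.cos(n * (lon - lon0)))            # Eq. (7)

def shoelace_area(pts):
    """Eq. (12); the wrap-around pairing makes (x0, y0) follow (xn, yn)."""
    s = sum(x0 * y1 - x1 * y0
            for (x0, y0), (x1, y1) in zip(pts, pts[1:] + pts[:1]))
    return abs(s) / 2.0    # sign encodes orientation; abs() discards it

# A made-up ~1.1 km x 1.1 km quadrilateral near Jakarta, in decimal degrees.
corners = [(-6.20, 106.80), (-6.20, 106.81), (-6.21, 106.81), (-6.21, 106.80)]
lats = [math.radians(lat) for lat, _ in corners]
lons = [math.radians(lon) for _, lon in corners]
lat0, lon0 = sum(lats) / len(lats), sum(lons) / len(lons)
lat1, lat2 = min(lats), max(lats)

xy = [lambert_xy(la, lo, lat0, lon0, lat1, lat2) for la, lo in zip(lats, lons)]
print(f"projected area ~ {shoelace_area(xy):.0f} m^2")
```

Since the projection is conformal with standard parallels bracketing the data, the local scale is close to 1 and the result lands near the ground-truth area of roughly 1.2 square kilometres for this polygon.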


counterclockwise orientation and (-) for clockwise orientation.

GPS (Global Positioning System)
The Global Positioning System (GPS) as in [3] is a space-based global navigation satellite system that provides reliable location and time information in all weather, at all times, anywhere on or near the Earth where there is an unobstructed line of sight to four or more GPS satellites. It is maintained by the United States government and is freely accessible by anyone with a GPS receiver. A GPS receiver calculates its position by precisely timing the signals sent by the GPS satellites high above the Earth. The research underlying this paper used a particular GPS receiver, the EM-406 from GlobalSat. The GPS data format of this receiver, for both latitude and longitude, is the "degrees, minutes, and decimal minutes" format, ddmm.mmmm. However, the Lambert conformal conic projection requires longitude and latitude to be expressed in decimal degrees (dd.dddddd) with a corresponding sign (negative for south latitude and west longitude). Hence, we need to perform some conversions.

Figure 2. GPS Receiver

The following procedure converts the ddmm.mmmm format to decimal degrees format:
1. For example, we have the "degrees, minutes, and decimal minutes" format, ddmm.mmmm, that is (-73° 59' 14.64").
2. Divide the decimal minutes by 60 (14.64 / 60 = 0.244).
3. Add the result to the minutes and divide by 60 (59.244 / 60 = 0.9874).
4. The result is the decimal part of the degrees (0.9874); combining it with the degrees, using the symbol for degrees (°), gives (-73.9874°).
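The four steps above can be sketched as a small helper (the function name is ours, not from the paper):

```python
def ddmm_to_decimal(degrees, minutes, seconds):
    """Convert a degrees/minutes/seconds reading to signed decimal
    degrees (dd.dddddd), following the four conversion steps above."""
    fraction = (minutes + seconds / 60.0) / 60.0   # steps 2-3
    return degrees - fraction if degrees < 0 else degrees + fraction

value = ddmm_to_decimal(-73, 59, 14.64)   # the worked example above
print(value)
```

For the worked example this returns approximately -73.9874, matching step 4.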

In the software implementation, the above conversion procedure must be applied to convert all discrete GPS data to decimal degrees (dd.dddddd) format.

Software Implementation
Software implementation is the final, and most involved, step in the research. We chose Visual Basic as the programming language, and two data formats, namely MMC (Microsoft Media Catalog) files and Excel files. The software, called "arealator", is developed based on the Lambert conformal conic projection and Green's Theorem for earth's area calculation. The discrete GPS data are first converted to decimal degrees (dd.dddddd) format before the earth's area is calculated. The figure below shows the GUI (Graphical User Interface) of the software:

Figure 3. Arealator GUI

Furthermore, the software is linked to Google Maps via www.gpsvisualizer.com; hence we can see the actual location of the GPS data where the area is calculated.


Figure 4. Google Map

III. CONCLUSIONS
There are advantages and disadvantages to our approach of solving the earth's area calculation problem by projecting the latitude and longitude data (ignoring elevation) onto 2 dimensional views. The advantage is, of course, that calculus can be used to derive the area calculation algorithm. Nevertheless, the main disadvantage of the approach is that it does not account for the impact of elevation data on the area calculation. Therefore, the approach proposed in this paper is not suitable for measuring area in mountainous regions. This limitation will be the first issue in our next research step.

REFERENCES
[1] C.G. Carlson, "What do latitude and longitude readings from a DGPS receiver mean?", South Dakota State University, Brookings, 1999.

[2] D. Varberg, E.J. Purcell, "Calculus", 9th ed., Prentice Hall, 2006.
[3] J-M. Zogg, "GPS Basics: Introduction to the System", u-blox AG, Thalwil, Switzerland, 2002.
[4] O.S. Adams, "General Theory of the Lambert Conformal Conic Projection", Washington Government Printing Office, 1918.
[5] S.C. Chapra, R.P. Canale, "Numerical Methods for Engineers", 5th ed., McGraw Hill, 2006.


Study on Application of Machine Vision using Least-Mean-Square (LMS)

Hendro Nurhadi (1) and Irhamah (2)
(1) Department of Mechanical Eng., Institut Teknologi Sepuluh Nopember (ITS), Surabaya, East Java, INDONESIA (Email: [email protected])
(2) Department of Statistics, Institut Teknologi Sepuluh Nopember (ITS), Surabaya, East Java, INDONESIA (Email: [email protected])

ABSTRACT
In recent years, implementations of image processing for machine vision using LMS (Least-Mean-Square), PCA (Principal-Component-Analysis) and SVD (Singular-Value-Decomposition) in industrial applications have become widespread. This study discusses how significantly a fitted linear function can change due to an irregularity in one of the data points. The study focuses on three cases: video enhancement using content-adaptive LMS filters; a machine vision system for the automatic identification of robot kinematics parameters; and visual servo auto-focusing using recursive weighted least-squares for machine vision inspection. In conclusion, LMS is a robust tool that can serve as an appropriate approach for several applications in machine vision.

KEYWORDS
Machine vision; least square; component analysis; value decomposition; image processing; video enhancement; identification.

I. INTRODUCTION
LMS, PCA and SVD are common tools used to solve many problems in machine vision applications. A fitted linear function can differ significantly due to an irregularity in one of the data points, and the LMS approach also cannot obtain proper results if there is a miss in the experiment. In this study, we examine how reliable the LMS method is for such machine vision applications. We discuss three cases: Video Enhancement Using Content-adaptive Least Mean Square Filters [1], a Machine Vision System for the Automatic Identification of Robot Kinematics Parameters [2], and Visual Servo Auto-focusing using Recursive Weighted Least-Squares for Machine Vision Inspection [3]. The main goal of this study is to examine the use of one of these user-friendly methods in machine vision applications; this report therefore focuses on using LMS in machine vision applications.

II. Video Enhancement Using Content-adaptive Least Mean Square Filters
The current television system is evolving towards an entirely digital, high resolution and high picture rate broadcasting system. As compatibility with the past weighs heavily for a popular consumer service, this evolution progresses rather slowly [4]-[5]. The progress is particularly slow compared to the revolution we observe in modern display technology, which in the last decade eliminated the traditional bottleneck in television picture quality. Methods that can bridge the resulting gap between broadcast and display picture quality are therefore increasingly important and may profit from the rapid developments in semiconductor technology. This last trend enables steadily increasing processing power at a given price level, and makes the implementation of complex video enhancement algorithms feasible in consumer equipment [6]-[7].

We started by reviewing image and video resolution enhancement techniques proposed in scientific papers and the patent literature. This review revealed two interesting approaches, both derived from image restoration theory, i.e. the Least Mean Square (LMS) or Wiener filtering approach, and the Bayesian image restoration approach [8]-[9]. We focused on the LMS-filtering methods, as Kondo's classification-based LMS filtering approach for video resolution up-conversion in particular attracted our attention due to its high performance and relatively simple hardware implementation. Furthermore, we showed that although the LMS filtering approaches are based on objective metrics, particularly the MSE criterion, it is still possible, for video up-scaling, to design subjectively more pleasing filters using the LMS metric.


We also concluded that a direct mapping of the content-adaptive LMS filtering to the problem of intra-field de-interlacing does not outperform all existing intra-field methods. However, the classification based LMS filtering approach was shown to successfully replace the heuristics in two other de-interlacing designs, including the vertical temporal filtering and the data mixing for hybrid deinterlacing combining motion-compensated and edge-adaptive techniques.

Figure 1. The training process according to Kondo [1]. The SD (standard definition) video signal is a down-scaled version of the HD (high definition) signal. Sample pairs are extracted from the training material, and classification of samples is done by ADRC (Adaptive Dynamic Range Coding). Using the LMS (Least Mean Square) algorithm within every class, optimal coefficients are computed and stored in a Look-Up Table (LUT).

Finally, we investigated the classification-based LMS filtering approach for chrominance resolution enhancement and coding-artifact reduction. In the chrominance up-conversion problem, we combined luminance and chrominance information in an innovative classification of the local image patterns, leading to improved performance compared to earlier designs. In coding-artifact reduction, the relative position of the pixel inside the coding block, along with the local image structure used for classification, was shown to give an interesting performance and elegant optimization. This reveals a more general usage of LMS filtering for video enhancement, which can avoid many heuristic optimizations commonly seen in this area. The classification-based LMS filtering algorithm is described in a stack of about 50 patents invented by Kondo and owned by Sony. We explored the options for classification-based LMS filtering applied in de-blocking of severely compressed digital video.

When encoding each pixel into 1 bit with ADRC:

$$Q=\left\lfloor\frac{F_{SD}-F_{MIN}}{F_{MAX}-F_{MIN}}+0.5\right\rfloor \quad (1)$$

Here $F_{SD}$ is the luminance value of the SD pixel, and $F_{MAX}$ and $F_{MIN}$ are the maximum and minimum luminance values of the pixels in the classification aperture, respectively; $\lfloor\cdot\rfloor$ is the floor operator. Let $F_{HD}$ be the luminance value of the original (not the up-converted) HD pixels and $F_{HI}$ the value of the interpolated ones, which is a weighted sum of the nine SD pixels in the interpolation window. The equation to interpolate pixels at position A is:

$$F_{HI}(2i,2j)=\sum_{k=0}^{2}\sum_{l=0}^{2}w_{kl,c}\,F_{SD}(2i-2k-1,\ 2j-2l-1) \quad (2)$$

where $w_{kl,c}$ are the weights for class c. Suppose one class contains in total t samples in the training process; then the error of the p-th interpolation sample is:

$$e_{p,c}=F_{HD,p}-F_{HI,p}=F_{HD,p}-\sum_{k=0}^{2}\sum_{l=0}^{2}w_{kl,c}\,F_{SD,p}(2i-2k-1,\ 2j-2l-1),\qquad p=1,2,\ldots,t \quad (3)$$

Consequently, the total (squared) error of this class can be expressed as:

$$e_c^{2}=\sum_{p=1}^{t}e_{p,c}^{2} \quad (4)$$

To find the minimum, we calculate the first derivative of $e_c^{2}$ with respect to each weight $w_{kl,c}$:


$$\frac{\partial e_c^{2}}{\partial w_{kl,c}}=\sum_{p=1}^{t}2\,e_{p,c}\,\frac{\partial e_{p,c}}{\partial w_{kl,c}}=-2\sum_{p=1}^{t}F_{SD,p}(2i-2k-1,\ 2j-2l-1)\,e_{p,c},\qquad k=0,1,2;\ l=0,1,2 \quad (5)$$

The minimum occurs when the first derivative is zero, which leads to the following system of equations for each class:

$$\begin{bmatrix}X_{00,00}&X_{00,01}&\cdots&X_{00,22}\\X_{10,00}&X_{10,01}&\cdots&X_{10,22}\\\vdots&&&\vdots\\X_{22,00}&X_{22,01}&\cdots&X_{22,22}\end{bmatrix}\begin{bmatrix}w_{00,c}\\w_{01,c}\\\vdots\\w_{22,c}\end{bmatrix}=\begin{bmatrix}Y_{0}\\Y_{1}\\\vdots\\Y_{8}\end{bmatrix} \quad (6)$$

The coefficients $w_{kl,c}$ can be obtained by solving the above equation for each class. Here,

$$X_{kl,qr}=\sum_{p=1}^{t}F_{SD,p}(2i-2k-1,\ 2j-2l-1)\,F_{SD,p}(2i-2q-1,\ 2j-2r-1),\qquad k,q=0,1,2;\ l,r=0,1,2 \quad (7)$$

and

$$Y_{3k+l}=\sum_{p=1}^{t}F_{SD,p}(2i-2k-1,\ 2j-2l-1)\,F_{HD,p},\qquad k=0,1,2;\ l=0,1,2 \quad (8)$$

The classification-based global LMS optimization with a 3 × 3 aperture gives a better performance/price ratio than other LMS-filtering-based up-conversion methods. Off-line LMS optimization improves the computational efficiency and provides additional flexibility. The classification-based global LMS optimization can be used to build classification-based sharpening filters, i.e. classification-based subjectively optimal LMS filters. Image quality remains a subjective issue and no objective measure can reliably reflect

the subjective impression. However, the LMS-based filter design for image up-conversion based on the objective MSE (mean-square-error) metric can effectively avoid heuristic approaches in the optimal filter design.
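Per class, Eqs. (6)-(8) are simply the normal equations of an ordinary least-squares problem. A toy sketch with NumPy, using random stand-in data instead of real SD/HD training pairs (all names here are ours, and the data are noise-free so the weights are recovered exactly):

```python
import numpy as np

rng = np.random.default_rng(0)

t = 500                     # training samples in one class
F_SD = rng.random((t, 9))   # 3x3 SD aperture per sample, flattened
w_true = rng.random(9)      # the class weights we hope to recover
F_HD = F_SD @ w_true        # HD target pixels (noise-free toy case)

# Normal equations: X = F_SD^T F_SD and Y = F_SD^T F_HD, as in
# Eqs. (7)-(8), then solve the 9x9 system X w = Y of Eq. (6).
X = F_SD.T @ F_SD
Y = F_SD.T @ F_HD
w = np.linalg.solve(X, Y)
print(np.allclose(w, w_true))   # recovers the class weights
```

With noisy training pairs the solve instead yields the MSE-optimal weights for that class, which is exactly what gets stored in the Look-Up Table during Kondo-style training.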

Figure 2. An image portion from up-converted Bicycle using: original image, (A) standard TK (Kondo) method, (B) JT method, (C) standard TK method with classification-based sharpness enhancement, (D) classification-based subjective optimal up-conversion using 3 × 3 aperture, (E) classification-based subjective optimal up-conversion using the diamond-shaped aperture, (F) subjective optimal up-conversion based on 3 × 3 aperture using the localized approach.


Figure 3. An image portion from up-converted Siena using: original image, (A) standard TK method, (B) JT method, (C) standard TK method with classification-based sharpness enhancement, (D) classification-based subjective optimal up-conversion using 3 × 3 aperture, (E) classification-based subjective optimal up-conversion using the diamond-shaped aperture, (F) subjective optimal up-conversion based on 3 × 3 aperture using the localized approach.

From this overview, Kondo's classification-based LMS filtering approach for video resolution up-conversion attracted our attention due to its high performance and relatively simple hardware implementation. It is also recognized that content-adaptive LMS filtering, especially Kondo's classification-based LMS filtering, which outperforms Li's localized LMS filtering, can be used for a broad area of video enhancement applications. Examples of applications elaborated in this thesis include resolution up-conversion, de-interlacing, chrominance resolution up-conversion and coding-artifact reduction. Furthermore, it was shown that although the LMS filtering approaches are based on objective metrics, particularly the MSE criterion, it is still possible, for video up-scaling, to design subjectively more pleasing filters using the LMS metric. In conclusion, a direct mapping of content-adaptive LMS filtering to the problem of intra-field de-interlacing does not outperform all existing intra-field methods. However, the classification-based LMS filtering approach was shown to successfully replace the heuristics in two other de-interlacing designs, including vertical temporal filtering and data mixing for hybrid de-interlacing combining motion-compensated and edge-adaptive techniques.

III. Machine Vision System for the Automatic Identification of Robot Kinematics Parameters

This paper presents an efficient, noncontact measurement technique for the automatic identification of the real kinematics parameters of an industrial robot. The technique is based on least-squares analysis and on the Hayati and Mirmirani kinematics modeling convention for closed kinematics chains. The measurement system consists of a single camera mounted on the robot’s wrist. The camera measures position and orientation of a passive target in six degrees of freedom.

Figure 4. Kinematic configuration of the system [2]

Target position is evaluated by applying least-squares analysis to an overdetermined system of equations based on the quaternion representation of the finite rotation formula. To enhance the accuracy of the measurement, a variety of image processing functions including sub-pixel interpolation are applied. The unknown kinematics parameters are identified by solving

    M_1 M_2 M_3 M_4 = I   (9)

or


    F_i = M_1 - M_4^{-1} M_3^{-1} M_2^{-1} = 0.   (10)

The machine vision system is used to determine the position and orientation of the target with respect to the camera. The camera used is a high-resolution charge-coupled-device (CCD) camera.

Figure 5. Hayati and Mirmirani kinematic model

The target is passive in nature, implying that the only light emanating from the target is ambient reflected light. Ideally, the target is composed of a 3-D array of precision spheres accurately positioned in three-dimensional space. To facilitate the identification of each sphere within the image, one of the spheres is intentionally oversized. In order to derive three-dimensional information from a single (2-D) perspective view, the coordinates of the centers of the spheres must be known, a priori, relative to the target reference frame. Although it is not mathematically required, it is preferable from an accuracy point of view if the centers of the spheres do not all lie on the same plane. This introduces a "perspective effect" which accentuates the visual consequences of small rotations about axes parallel to the camera plane. The position of the target with respect to the camera is determined by analyzing a single clear image of the target. The target is composed of an array of spheres, which appear as circles when projected onto the 2-D image sensor. Various image-processing algorithms are applied in order to accurately determine the center of each of the circles in the image. A Sobel filter is applied to isolate the edges of the image, i.e., those areas with an elevated spatial gradient. Further image segmentation (i.e., thresholding) yields a binary image containing only edge information. A contour tracking algorithm is then applied to the binary image in order to identify the contours of the objects within the image. A number of form characteristics including area and centroid are simultaneously evaluated. The form characteristics are used to resolve the correspondence problem, i.e., relating the 2-D objects in the image to the three-dimensional spheres in the target. Following this, the maximum gradient pixels P(i,j) are identified for each contour in the image. For each pixel P(i,j), a sub-pixel interpolation technique is applied to determine the exact position of the maximum gradient with sub-pixel accuracy. A detailed description of this technique is presented in [2]. To summarize the approach, the gradient values G(P(i+m,j+n)) of a 5×5 kernel around pixel P(i,j) are used for the sub-pixel interpolation. The Gauss function S(x,y), represented by

    S(x, y) = Σ_{m=-2}^{2} Σ_{n=-2}^{2} G(P(i+m, j+n)) e^{-[(x-m)^2 + (y-n)^2]/2},   (11)

must then be maximized with respect to the x and y coordinates measured from the center of the pixel P(i,j). The sub-pixel interpolation technique generates a set of n points with coordinates (x_i, y_i) which accurately describe the contour of each object. The objects being circular, the equation of a circle

    F_i(a_c, b_c, r_c) = (x_i - a_c)^2 + (y_i - b_c)^2 - r_c^2   (12)

combined with a least-squares technique allows the identification of the center coordinates (a_c, b_c) as well as the radius r_c of each circle corresponding to one sphere of the target.

IV. Visual Servo Auto-focusing using Recursive Weighted Least-Squares for Machine Vision Inspection

Auto-focusing of an electronic camera is an important task in automated machine vision applications. It helps to obtain clear images for accurate inspection. For example, in the electronic assembly industry, the camera focus needs to be constantly adjusted according to the size of different components to obtain sharp images, so that accurate data can be extracted from the pool of available information without much processing power.


In this paper, we propose a visual servo auto-focusing mechanism using a moments-based focus measure. A recursive weighted least-squares focusing algorithm is developed to drive a camera to the focused position. It is shown that the proposed auto-focusing scheme can guide the camera to the focused position at high speed regardless of the initial position and the object to be focused.

Figure 6. Flowchart of the proposed visual servo auto-focusing [3]

Now we develop the recursive algorithm to estimate the quadratic curve and the camera focused position z_i. Considering the noise pollution in practice, the obtained focus measure φ_i is corrupted by a Gaussian zero-mean random noise n_i. Thus, we have

    φ_i = H_i^T x_i + n_i   (13)

where

    H_i = [ z_i^2  z_i  1 ]^T   and   x_i = [ a_i  b_i  c_i ]^T.

It is desired to obtain an estimate of x_i recursively when additional information becomes available in the form of a new moments feature. To do this, the moments features extracted from two consecutive camera locations, i-1 and i, are partitioned and the moments estimation error cost function is minimized.
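This recursive minimization has the standard recursive weighted least-squares structure; the following is a minimal numerical sketch of such an update and of the subsequent search for the profile minimum (illustrative variable names and values, assuming a scalar weight R; not the authors' code):

```python
import numpy as np

def rwls_update(x_hat, P, H, phi, R=1.0):
    # One recursive weighted least-squares step for the quadratic profile
    # phi(z) = a*z^2 + b*z + c, with regressor H = [z^2, z, 1]^T.
    K = P @ H @ np.linalg.inv(H.T @ P @ H + R)   # gain (scalar denominator here)
    x_hat = x_hat + K @ (phi - H.T @ x_hat)
    P = P - K @ H.T @ P
    return x_hat, P

# Fit a noisy focus-measure profile, then take the vertex -b/(2a) as the
# estimated focused position (true minimum at z = 2 in this toy example).
rng = np.random.default_rng(0)
a, b, c = 2.0, -8.0, 10.0
x_hat, P = np.zeros((3, 1)), 1e6 * np.eye(3)     # large initial P: weak prior
for z in np.linspace(0.0, 4.0, 50):
    H = np.array([[z**2], [z], [1.0]])
    phi = a*z**2 + b*z + c + 0.01 * rng.standard_normal()
    x_hat, P = rwls_update(x_hat, P, H, phi)
z_focus = -x_hat[1, 0] / (2.0 * x_hat[0, 0])
```

With R = 1 this reduces to ordinary recursive least squares; a larger R down-weights each new sample.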

Hence, the RWLS estimate of the quadratic curve parameters is obtained as

    x̂_i = x̂_{i-1} + P_{i-1} H_i (H_i^T P_{i-1} H_i + R)^{-1} (φ_i - H_i^T x̂_{i-1})   (14)

and

    P_i = P_{i-1} - P_{i-1} H_i (H_i^T P_{i-1} H_i + R)^{-1} H_i^T P_{i-1}   (15)

where R is a constant weight assigned to the RWLS and P_i is the covariance matrix of n_i. A visual servo mechanism for auto-focusing has been presented. The approach involves moments feature extraction, quadratic curve parameter estimation using recursive weighted least squares, profile construction, searching for the minimum point of the estimated quadratic curve, and moving the camera toward the focused position. The proposed scheme does not require any prior knowledge about the assembly process or the object to be focused on; it is a robust auto-focusing scheme suitable for a wide range of applications.

III. CONCLUSIONS

Content-adaptive Least Mean Square (LMS) filters, i.e. content-adaptive filters designed via linear regression, have been investigated in this thesis for applications in video and image enhancement. Our interest in these filters was triggered when creating an overview of state-of-the-art video up-scaling and resolution up-conversion techniques using both objective and subjective evaluation. From this overview, Kondo's classification-based LMS filtering approach for video resolution up-conversion attracted our attention due to its high performance and relatively simple hardware implementation. Li's localized LMS filtering approach for resolution up-conversion was investigated in a comparative study as an alternative. We recognized that content-adaptive LMS filtering, especially Kondo's classification-based LMS filtering, which outperforms Li's localized LMS filtering, can be used for a broad area of video enhancement applications. Examples of applications elaborated in this thesis include resolution up-conversion, de-interlacing, chrominance resolution up-conversion and coding artefact reduction.


Finally, we investigated the classification-based LMS filtering approach for chrominance resolution enhancement and coding-artefact reduction. In the chrominance up-conversion problem, we combined luminance and chrominance information in an innovative classification of the local image patterns, leading to an improved performance compared to earlier designs. In coding-artefact reduction, the relative position information of the pixel inside the coding block, along with the local image structure for classification, was shown to give interesting performance and elegant optimisation.

REFERENCES

[1] Meng Zhao, "Video Enhancement Using Content-adaptive Least Mean Square Filters", PhD Dissertation, Technische Universiteit Eindhoven, 2006. ISBN 90-386-0774-1, ISBN 978-90-386-0774-0.

[2] Patrick Rousseau, Alain Desrochers, and Nicholas Krouglicof, “Machine Vision System for the Automatic Identification of Robot Kinematic Parameters”, IEEE Transactions on Robotics and Automation, Vol.17, No.6, Dec.2001, pp.972-978.

[3] Chee Hwa Chin, Ying Zhang, Changyun Wen, “Visual Servo Auto-focusing using Recursive Weighted Least-Squares for Machine Vision Inspection”, IEEE Electronics Packaging Technology, 2003

5th Conference (EPTC 2003) 10-12 Dec. 2003, pp.777-780.

[4] J. Whitaker, Standard Handbook of Video and Television Engineering, 4th Edition, ISBN: 0071411801, McGraw-Hill companies, 2003.

[5] H. Kaneko and T. Ishiguro, “Digital television transmission using bandwidth compression techniques”, IEEE Communications Magazine, Vol. 18, No. 4, pp. 14-22, Jul. 1980.

[6] U. Reimers, Digital video broadcasting – the international standard for digital television, Springer-Verlag Berlin Heidelberg New York, ISBN: 3-540-60946-6, 2001.

[7] U. Reimers, “Digital video broadcasting”, IEEE Communications Magazine, Vol. 36, No. 6, pp. 104-110, Jun. 1998.

[8] T. Sikora, “MPEG Digital Video-Coding Standards”, IEEE Signal Processing Magazine, Vol. 14, No. 5, pp. 82-100, Sep. 1997.

[9] P. Gastaldo, S. Rovetta, R. Zunino, “Objective quality assessment of MPEG-2 video streams by using CBP neural networks”, IEEE Transactions on Neural Networks, Vol. 13, No. 4, pp. 939-947, Jul. 2002.


Cooperative Linear Quadratic Game for Descriptor System

Salmah Department of Mathematics, Gadjah Mada University, Yogyakarta, DI Yogyakarta, Indonesia

(Email: [email protected])

ABSTRACT In this paper the cooperative linear quadratic game problem is considered. In the game, either there is one individual who has multiple objectives, or there are a number of players, all facing a linear quadratic control problem, who cooperate to obtain an optimal result. For the ordinary system case the set of Pareto efficient equilibria can be determined for these games. In this paper the result is generalized to descriptor systems. KEYWORDS Dynamic, game, cooperative, descriptor, system.

I. INTRODUCTION

Dynamic game theory brings together three key elements present in many situations in economics, ecology, and elsewhere: optimizing behavior, the presence of multiple agents, and the consequences of decisions. Therefore this theory has been used to study various policy problems, especially in macro-economics. In applications one often encounters systems described by differential equations subject to algebraic constraints. Descriptor systems give a realistic model for such systems. In policy coordination problems, questions arise as to whether policies are coordinated and which information the parties have. One scenario is the cooperative open-loop game. In this scenario, the parties cannot react to each other's policies, and the only information the players know is the model structure and the initial state.

II. PRELIMINARIES

In this paper we consider a linear open-loop dynamic game in which the players satisfy a linear descriptor system and minimize a quadratic objective function. For the finite horizon problem, the solution of a generalized Riccati differential equation is studied. If the planning horizon is extended to infinity, the differential Riccati equation becomes an algebraic Riccati equation. Formally, the players are assumed to minimize the performance criteria

    J_i(u_1, u_2, ..., u_N) = (1/2) ∫_0^T [ x^T(t) Q_i x(t) + u_i^T(t) R_i u_i(t) ] dt,   (1)

with all matrices constant, Q_i = Q_i^T, and R_i positive definite. The inclusion of player j's control effort in player i's cost function is dropped: due to the open-loop information structure, this term drops out in the analysis. The players apply their control vectors to the system

    E ẋ(t) = A x(t) + B_1 u_1(t) + B_2 u_2(t),   x(0) = x_0,   (2)

with E, A ∈ R^{(n+r)×(n+r)}, B_i ∈ R^{(n+r)×m_i}, and x(t) a descriptor vector of dimension n+r, while u_i(t), i = 1, 2, is the control vector of dimension m_i applied by the i-th player. Matrix E is in general singular, with rank E = n. The initial vector x_0 is assumed to be a consistent initial state. We recall the following result from the theory of descriptor systems (see [7]) for the differential algebraic equation

    E ẋ(t) = A x(t) + f(t),   x(0) = x_0,   (DAE)

and the associated pencil

    λE - A.   (3)

System (DAE) is called regular if the characteristic polynomial det(λE - A) is not identically zero. If the system (DAE) is not regular, then consistent initial conditions do not uniquely determine solutions (see [13]). If the system (DAE) is regular, the roots of the characteristic polynomial are the finite eigenvalues of the pencil. If E is singular, the pencil (3) is said to have infinite eigenvalues, which are the zero eigenvalues of the inverse


pencil E - μA. In the next discussion we recall the Weierstrass canonical form.

Theorem 1 If (3) is regular, then there exist nonsingular matrices X and Y such that

    Y^T E X = [ I_n  0 ; 0  N ]   and   Y^T A X = [ J  0 ; 0  I_r ],   (4)

where J is a matrix in Jordan form whose elements are the finite eigenvalues, I_k ∈ R^{k×k} is the identity matrix, and N is a nilpotent matrix also in Jordan form. J and N are unique up to permutation of Jordan blocks. If (3) is regular, the solutions of (DAE) take the form x(t) = X_1 z_1(t) + X_2 z_2(t), where, with X = [X_1  X_2] and Y^T = [Y_1  Y_2]^T, X_1, Y_1 ∈ R^{(n+r)×n}, X_2, Y_2 ∈ R^{(n+r)×r}, and

    z_1(t) = e^{Jt} z_1(0) + ∫_0^t e^{J(t-s)} Y_1^T f(s) ds,   z_1(0) = [ I_n  0 ] X^{-1} x_0,

    z_2(t) = - Σ_{i=0}^{k-1} N^i Y_2^T (d^i f(t) / dt^i),

under the consistency condition

    [ 0  I_r ] X^{-1} x_0 = - Σ_{i=0}^{k-1} N^i Y_2^T (d^i f / dt^i)(0).

Here k is the degree of nilpotency of N, that is, the integer k for which N^k = 0 and N^{k-1} ≠ 0. The index of the pencil (3) and of the descriptor system (DAE) is the degree of nilpotency k of N. If E is nonsingular, we define the index to be zero. From the above formulae, the solution

x(t) will not contain derivatives of the function f if and only if k ≤ 1. In that case the solution x(t) is called impulse free. In general, the solution x(t) involves derivatives of order k-1 of the forcing function f if (3) has index k.

Next, let [V  W] be an orthogonal matrix such that the image of V equals the null space N(E^T) of E^T and the image of W equals the null space of E. Then E = [ E_1  0 ][ V  W ]^T with E_1 = EV, where E_1 is of full column rank. The next lemma characterizes pencils which have an index of at most one.

Lemma 1 The following statements are equivalent:
(i) the pencil (3) is regular and has at most index one;
(ii) rank [ E ; V^T A ] = n + r = rank( E + V V^T A );
(iii) rank [ E  A W ] = n + r = rank( E + A W W^T ).

Since we do not want to consider derivatives of the input function in this paper, we restrict the analysis to regular index one systems here. The above discussion motivates the next assumptions.

Assumption 1 Throughout this section the next assumptions are made w.r.t. system (2):

1. matrix E is singular;
2. det(λE - A) is not identically zero;
3. rank [ E  A W ] = n + r.
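Assumption 1 can be verified numerically for a concrete pair (E, A); below is a small sketch for a hypothetical 2 × 2 descriptor system (n = r = 1), not taken from the paper:

```python
import numpy as np

# Hypothetical descriptor pair: one differential and one algebraic equation
E = np.array([[1.0, 0.0],
              [0.0, 0.0]])                 # singular, rank E = n = 1
A = np.array([[0.5, 1.0],
              [1.0, -1.0]])

# Regularity: det(lambda*E - A) must not vanish identically; sampling a few
# lambda values suffices for a polynomial that is not identically zero.
regular = any(abs(np.linalg.det(l * E - A)) > 1e-12 for l in (0.0, 1.0, 2.0))

# W: orthonormal basis of ker E, taken from the SVD of E
U, s, Vt = np.linalg.svd(E)
rank_E = int(np.sum(s > 1e-12))
W = Vt[rank_E:].T                          # columns span the null space of E

# Index at most one iff rank [E  AW] = n + r (condition 3 of Assumption 1)
index_at_most_one = np.linalg.matrix_rank(np.hstack([E, A @ W])) == E.shape[0]
```

For this toy pair, det(λE - A) = λ - 1.5, so the pencil is regular, and the rank condition holds.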

In the cooperative game, each player has a set of possible outcomes, and one outcome (which in general does not coincide with a player's overall lowest cost) is cooperatively selected. It is reasonable that the cooperative decision is a strategy with the property that, if a different strategy is chosen, then at least one of the players has higher costs. Or, stated differently, it is a solution such that the players' outcomes cannot be improved upon by all players simultaneously. Such a strategy is called Pareto efficient.


Definition 1 Let U be the set of admissible strategies. A set of strategies γ̂ is called Pareto efficient if the set of inequalities

    J_i(γ) ≤ J_i(γ̂),   i = 1, 2, ..., N,

where at least one of the inequalities is strict, does not allow for any solution γ ∈ U. The corresponding point (J_1(γ̂), ..., J_N(γ̂)) ∈ R^N is called a Pareto solution. The set of all Pareto solutions is called the Pareto frontier. Usually there is more than one Pareto solution. It is noted that here we make no definiteness assumptions on the matrices Q_i. We follow the lines of [5], which derives Pareto solutions for the linear quadratic game with ordinary systems without definiteness assumptions on Q_i, while [4] contains a simpler discussion of Pareto solutions of the linear quadratic game under definiteness assumptions on Q_i.

III. THE FINITE PLANNING HORIZON

Let us consider the game (1), (2) under the assumption that T is finite. The game (1), (2) has a set of open-loop Nash equilibrium actions (u_1(·), u_2(·)) if and only if (u_1(·), u_2(·)) are open-loop Nash equilibrium actions for the game

    [ ẋ_1(t) ; 0 ] = [ J  0 ; 0  I_r ] [ x_1(t) ; x_2(t) ] + [ Y_1^T ; Y_2^T ] B_1 u_1(t) + [ Y_1^T ; Y_2^T ] B_2 u_2(t),   ( x_1(0) ; x_2(0) ) = X^{-1} x_0,   (5)

where player i has the quadratic cost functional

    J_i(u_1, u_2, ..., u_N) = (1/2) ∫_0^T { [ x_1^T(t)  x_2^T(t) ] X^T Q_i X [ x_1(t) ; x_2(t) ] + u_i^T(t) R_i u_i(t) } dt.   (6)

From (5) it follows that

    x_2(t) = -Y_2^T ( B_1 u_1(t) + B_2 u_2(t) ).   (7)

Substitution of (7) into the cost functions (6) shows that (u_1(·), u_2(·)) are open-loop Nash equilibrium actions for the game (1), (2) if and only if (u_1(·), u_2(·)) are open-loop Nash equilibrium actions for the game

    ẋ_1(t) = J x_1(t) + Y_1^T B_1 u_1(t) + Y_1^T B_2 u_2(t),   x_1(0) = [ I_n  0 ] X^{-1} x_0,   (8)

with cost functionals J_i(u_1, u_2) for the players given by

    J_i(u_1, u_2) = (1/2) ∫_0^T { [ x_1^T(t)  u_1^T(t)  u_2^T(t) ] [ I_n  0  0 ; 0  -Y_2^T B_1  -Y_2^T B_2 ]^T X^T Q_i X [ I_n  0  0 ; 0  -Y_2^T B_1  -Y_2^T B_2 ] [ x_1(t) ; u_1(t) ; u_2(t) ] + u_i^T(t) R_i u_i(t) } dt

                = (1/2) ∫_0^T [ x_1^T(t)  u_1^T(t)  u_2^T(t) ] M_i [ x_1(t) ; u_1(t) ; u_2(t) ] dt,   (9)

where

    M_i = [ Q̄_i    V_i     W_i ;
            V_i^T   R_{i1}  N_i ;
            W_i^T   N_i^T   R_{i2} ]   (10)

with


    Q̄_i := X_1^T Q_i X_1,   V_i := -X_1^T Q_i X_2 Y_2^T B_1,   W_i := -X_1^T Q_i X_2 Y_2^T B_2,

    N_i := B_1^T Y_2 X_2^T Q_i X_2 Y_2^T B_2,

    R_{11} := R_1 + B_1^T Y_2 X_2^T Q_1 X_2 Y_2^T B_1,   R_{21} := B_1^T Y_2 X_2^T Q_2 X_2 Y_2^T B_1,

    R_{12} := B_2^T Y_2 X_2^T Q_1 X_2 Y_2^T B_2,   R_{22} := R_2 + B_2^T Y_2 X_2^T Q_2 X_2 Y_2^T B_2.

IV. SOLUTION OF COOPERATIVE GAME

In the following discussion, the set of parameters

    A := { α = (α_1, ..., α_N) : α_i ≥ 0 and Σ_{i=1}^N α_i = 1 }

plays a crucial role. The next two lemmas give a characterization of Pareto efficient solutions. The first lemma identifies Pareto efficient solutions without convexity assumptions, while the second states how to find Pareto efficient solutions under convexity assumptions. The proofs of both lemmas can be found in [4].

Lemma 2 Let α_i ∈ (0,1), with Σ_{i=1}^N α_i = 1. Assume γ̂ ∈ U is such that

    γ̂ ∈ argmin_{γ ∈ U} Σ_{i=1}^N α_i J_i(γ).

Then γ̂ is Pareto efficient.

Lemma 3 Assume that the strategy space U is convex. Moreover, assume that the payoffs J_i are convex, i = 1, ..., N. Then, if γ̂ is Pareto efficient, there exists an α ∈ A such that, for all γ ∈ U,

    Σ_{i=1}^N α_i J_i(γ) ≥ Σ_{i=1}^N α_i J_i(γ̂).

Note that if the J_i are convex, then Σ_{i=1}^N α_i J_i(γ) is convex too, for arbitrary α ∈ A. In [5] a characterization is given of when the space of admissible controls U is convex.
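Lemma 2's weighted-sum construction can be illustrated on a toy static problem with two convex costs (purely illustrative; this is not the game (1), (2)):

```python
import numpy as np

# Two convex scalar costs: J1(u) = (u - 1)^2, J2(u) = (u + 1)^2.
# With both costs convex, the Pareto frontier is traced by minimizing
# alpha*J1 + (1 - alpha)*J2 over alpha in (0, 1).
def J1(u): return (u - 1.0) ** 2
def J2(u): return (u + 1.0) ** 2

frontier = []
for alpha in np.linspace(0.05, 0.95, 19):
    u_hat = 2.0 * alpha - 1.0        # closed-form minimizer of the weighted sum
    frontier.append((J1(u_hat), J2(u_hat)))
```

Every point of `frontier` trades J1 against J2: no admissible u improves both costs simultaneously, which is exactly the Pareto efficiency property of Definition 1.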

Next we consider conditions under which the cost functions J_i in (1) are convex. From [5] we recall some preliminary results.

Lemma 4 Assume that U is convex. Consider the linear quadratic cost function

    J(s, T, u, x_0) = ∫_s^T { [ x^T(t)  u^T(t) ] M [ x(t) ; u(t) ] + 2 [ p_1^T(t)  p_2^T(t) ] [ x(t) ; u(t) ] + c(t) } dt,   (11)

subject to the state dynamics

    ẋ(t) = A x(t) + B u(t) + c(t),   x(s) = x_0,   (12)

where u, p_i and c_i are such that (11) and (12) have a solution, and

    M = [ Q  V ; V^T  R ].   Let

x_0 ∈ R^n. Then J(0, T, u, x_0) is convex as a function of u if and only if J(0, T, v, 0) ≥ 0 for all v, where

    J(0, T, v, 0) = ∫_0^T [ z^T(t)  v^T(t) ] M [ z(t) ; v(t) ] dt,   with   ż(t) = A z(t) + B v(t),   z(0) = 0.   (13)

Note that the second equivalence does not depend on x_0. In particular it follows that if J is convex for one x_0, then it is convex

for all x_0.

Lemma 5 Assume that U is convex. Consider the linear quadratic cost function (11) and (12). Then if J(0, T, u, x_0) is convex for some x_0, J(s, T, u, x_0) is convex for all s ≥ 0 and for all x_0.

Corollary 1 Consider the linear quadratic cost function (11) and (12). Then J(s, T, u, x_0) is convex for all s ≥ 0 and x_0 if and only if J(0, T, u, x_0) is convex for all (any) x_0, which holds if and only if


J(0, T, v, 0) ≥ 0 for all v, where J is given by (13). From [5] we get the following theorem.

Theorem 1 Assume that either T is finite or (A, B) is stabilizable. Then J(0, T, u, x_0) is convex for all (any) x_0 if and only if inf_v J(0, T, v, 0) exists.

Note 1 In [5] one can find a theorem that considers the existence of the infimum of the following cost function:

    J = ∫_0^T [ x^T(t)  u^T(t) ] M [ x(t) ; u(t) ] dt,   with   ẋ(t) = A x(t) + B u(t),   x(0) = x_0.   (14)

The following Riccati equations play an important role in the next discussion.

    K̇(t) + A^T K(t) + K(t) A - (K(t) B + V) R^{-1} (B^T K(t) + V^T) + Q = 0,   K(T) = 0;   (DRE)

    0 = A^T K + K A - (K B + V) R^{-1} (B^T K + V^T) + Q.   (ARE)

Let B̃ := [ B_1  B_2 ]. Combining the results of Lemmas 2 and 3, Theorem 1 and Note 1, we can derive existence results and computational algorithms for both the finite and the infinite horizon game problem. We state only the infinite horizon case.

Corollary 2 Consider the cooperative game (1), (2) with T = ∞, U = L_2^{s,e}, and (A, B) stabilizable. Assume that (ARE), with

    V := [ V_i  W_i ]   and   R := [ R_{i1}  N_i ; N_i^T  R_{i2} ],

has a stabilizing solution K_i for i = 1, 2, respectively. Then all Pareto efficient solutions are obtained by determining, for all α ∈ A,

    u(α) = argmin_{u ∈ U} ( α_1 J_1 + α_2 J_2 ),   subject to (12).

The following theorem gives a procedure to calculate the Pareto efficient solutions of the game with a descriptor system.

Theorem 2 Consider the cooperative game (9), (2) with T = ∞, U = L_2^{s,e}, and (A, B)

stabilizable. For α ∈ A let

    M̃(α) := α_1 M_1 + α_2 M_2 =: [ Q̃  Ṽ ; Ṽ^T  R̃ ],

where

    Q̃ = α_1 Q̄_1 + α_2 Q̄_2,   Ṽ = [ α_1 V_1 + α_2 V_2,  α_1 W_1 + α_2 W_2 ],

and

    R̃ = α_1 [ R_{11}  N_1 ; N_1^T  R_{12} ] + α_2 [ R_{21}  N_2 ; N_2^T  R_{22} ].

Furthermore, let S̃ := B̃ R̃^{-1} B̃^T. Assume that (15) below has a stabilizing solution X_s:

    0 = A^T X + X A - (X B̃ + Ṽ) R̃^{-1} (B̃^T X + Ṽ^T) + Q̃.   (15)

Then the set of all cooperative Pareto solutions is given by

    { (J_1(u(α)), J_2(u(α))) : α ∈ A }. Here

    u(t) = -R̃^{-1} (B̃^T X_s + Ṽ^T) x(t) - R̃^{-1} B̃^T ∫_t^T e^{A_cl^T (s - t)} X_s c(s) ds,

where X_s is the stabilizing solution of (15) and, with

    A_cl := A - B̃ R̃^{-1} Ṽ^T - S̃ X_s,

the closed-loop system is

    ẋ(t) = A_cl x(t) + S̃ m(t) + c(t),   x(0) = x_0.

In case c(·) = 0, with these actions the corresponding costs are

    J_i(x_0, u) = x_0^T M̃_i x_0,

where M̃_i is the unique solution of the Lyapunov equation


    A_cl^T M̃_i + M̃_i A_cl + [ I ; -R̃^{-1} (B̃^T X_s + Ṽ^T) ]^T M_i [ I ; -R̃^{-1} (B̃^T X_s + Ṽ^T) ] = 0.
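For a given α, the stabilizing solution of an algebraic Riccati equation with cross term, such as (15), can be computed with a standard CARE solver; a sketch with hypothetical matrices standing in for A, B̃, Q̃, Ṽ, R̃ (not data from the paper):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical reduced-system data: 2 states, 2 inputs
A  = np.array([[0.0, 1.0], [-2.0, -3.0]])
Bt = np.array([[0.0, 0.0], [1.0, 0.5]])   # role of B~
Qt = np.eye(2)                            # role of Q~
Vt = np.zeros((2, 2))                     # cross term V~ (zero in this sketch)
Rt = np.diag([1.0, 2.0])                  # R~ > 0

# Solves 0 = A^T X + X A - (X B~ + V~) R~^{-1} (B~^T X + V~^T) + Q~
Xs = solve_continuous_are(A, Bt, Qt, Rt, s=Vt)

# Feedback u = -R~^{-1}(B~^T Xs + V~^T) x and closed-loop matrix
F = np.linalg.solve(Rt, Bt.T @ Xs + Vt.T)
Acl = A - Bt @ F
stable = bool(np.all(np.linalg.eigvals(Acl).real < 0))
```

With Ṽ = 0 the closed-loop matrix A - B̃F coincides with A_cl as defined above, and stability of A_cl confirms that the computed X_s is the stabilizing solution.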

V. CONCLUSIONS

In this paper we considered the solution of the cooperative linear quadratic game for descriptor systems. Using the theory of convex analysis we obtained results for the finite and infinite horizon game. Basically, the results were obtained by reformulating the game as an ordinary linear quadratic differential game. Following the lines of, and combining the results of, [5] and [7], similar conclusions can be derived. The above results can be generalized straightforwardly to the N-player game.

REFERENCES

[1] Basar, T., and Olsder, G.J., Dynamic Noncooperative Game Theory, second Edition, Academic Press, London, San Diego, 1995.

[2] Dai, L., Singular Control Systems, Springer Verlag, Berlin, 1989.

[3] Engwerda J., “On the Open-loop Nash Equilibrium in LQ-games”, Journal of Economic Dynamics and Control, Vol. 22, 1998, pp 729-762.

[4] Engwerda, J.C., LQ Dynamic Optimization and Differential Games Chichester: John Wiley & Sons, 2005, pp 229-260.

[5] Engwerda, J.C., “A Note on Cooperative Linear Quadratic Control”, CentER Discussion Paper, Tilburg University, The Netherlands, No 2007-15, 2007.

[6] Engwerda, J.C., “Necessary and Sufficient Conditions for Solving Cooperative Differential Games”, CentER Discussion Paper, Tilburg University, The Netherlands, No 2007-42, 2007.

[7] Engwerda and Salmah, “The Open-Loop Linear Quadratic Differential Game for index one Descriptor Systems”, Automatica, Vol. 45, Number 2, 2009.

[8] Katayama, T., and Minamino, K., “Linear Quadratic Regulator and Spectral Factorization for Continuous Time Descriptor Systems”, Proceedings of the 31st Conference on Decision and Control, Tucson, Arizona, 1992, pp 967-972.

[9] Lewis, F.L., “A survey of Linear Singular Systems”, Circuits System Signal Process, vol.5, no.1, 1986, pp 3-36.

[10] Minamino, K., “The Linear Quadratic Optimal Regulator and Spectral Factorization for Descriptor Systems”, Master Thesis, Department of Applied Mathematics and Physics, Faculty of Engineering, Kyoto University, 1992.

[11] Salmah, Bambang, S., Nababan, S.M., and Wahyuni, S., “Non-Zero-Sum Linear Quadratic Dynamic Game with Descriptor Systems”, Proceeding Asian Control Conference, Singapore, 2002, pp 1602-1607.

[12] Salmah, “N-Player Linear Quadratic Dynamic Game for Descriptor System”, Proceeding of International Conference Mathematics and its Applications SEAMS-GMU, Gadjah Mada University, Yogyakarta, Indonesia, 2007.


ARMA Model Identification using Genetic Algorithm (An Application to Arc Tube Low Power Demand Data)

Irhamah(1), Dedy Dwi Prastyo(2), and M. Nasrul Rohman(3)

(1), (2)Department of Statistics, Institut Teknologi Sepuluh Nopember, Surabaya, East Java, Indonesia Email: [email protected], [email protected]

ABSTRACT The most crucial steps in building ARIMA time series models are to identify and build the model based on available data. One of the common identification methods is the correlogram, but when the data sets have a mixed ARMA effect, the correlogram cannot provide clear lags to identify. Therefore, in this paper we study the use of the Genetic Algorithm (GA), an effective search and optimisation method, as an identification method for ARMA models. The best model from GA is compared to the best model from the correlogram. Their application to Arc Tube low power demand data shows that GA is better in terms of smaller MSE. KEYWORDS Time series; ARIMA; Identification; Correlogram; Genetic Algorithm; MSE.

I. INTRODUCTION

A time series is a time-ordered sequence of observations. Time series occur in a variety of fields: agriculture, business and economics, engineering, geophysics, medical studies, meteorology, etc. One of the most popular methods in time series analysis is the Autoregressive Integrated Moving Average (ARIMA) model developed by Box and Jenkins [1]. The stages of building ARIMA models consist of Model Identification, Model Estimation, Diagnostic Checking and Forecasting. The most crucial steps are to identify and build a model based on the available data.

One of the commonly used identification methods is the correlogram, which uses the sample autocorrelation function (ACF) and the sample partial autocorrelation function (PACF). The goal is to match patterns in the sample ACF and sample PACF with the known patterns of the theoretical ACF and PACF of ARIMA models. However, when the time series data sets have a mixed ARMA effect, the plots cannot provide clear lags to identify. The orders of a mixed ARMA model usually involve subjective judgement, which results in many candidate models in the identification step [3].

In the last decade, artificial intelligence concepts (fuzzy systems, neural networks, genetic algorithms) have been proposed as forecasting tools. As is known, GAs are an effective search and optimisation method that simulates the process of natural selection, or survival of the fittest. GAs are a population-to-population approach, can escape from local optima, and are very effective in global search. A GA can also handle any kind of objective function and constraints. Reference [8] studied how to build an ARMA model using GA. Reference [9] applied GA as an alternative method to identify ARIMA models.

In this paper, we study the use of the Genetic Algorithm (GA) as an identification method for ARMA models. The best ARMA model from GA is compared to the best ARMA model from the correlogram identification method. The comparative study is applied to the bi-weekly Arc Tube low power demand data [11] from January to November 2005, based on in-sample RMSE and out-of-sample MAPE. The software used in this study is Minitab and Matlab; the GA toolbox developed by [2] is available in Matlab.

II. MIXED AUTOREGRESSIVE MOVING AVERAGE (ARMA) MODELS

A stationary and invertible process can be represented either in an MA form or in an AR form. A problem with either representation is that it may contain too many parameters, even for a finite-order MA and a finite-order AR model, because a high-order model is often needed for good approximation. A large number of parameters reduces efficiency in estimation [12]. Thus, in model building, it may be necessary to include both AR and MA terms simultaneously in an ARMA model:


    φ_p(B) Z_t = θ_q(B) a_t   (1)

where

    φ_p(B) = 1 - φ_1 B - ... - φ_p B^p   (2)

and

    θ_q(B) = 1 - θ_1 B - ... - θ_q B^q.   (3)

A mixed process of considerable practical importance is the ARMA(1,1) (when p = q = 1).

III. ARMA MODEL IDENTIFICATION AND MODEL ESTIMATION

Model identification refers to the methodology of identifying the required transformations, such as variance-stabilising transformations and differencing transformations. The decision is to find the proper orders p and q. The procedure in the correlogram identification method is to match patterns in the sample ACF and sample PACF with the known patterns of the theoretical ACF and PACF of ARIMA models. Table 1 summarizes the important results for selecting p and q [12].

Table 1. Characteristics of theoretical ACF and PACF for stationary processes

Process     | ACF                                                | PACF
AR(p)       | Tails off as exponential decay or damped sine wave | Cuts off after lag p
MA(q)       | Cuts off after lag q                               | Tails off as exponential decay or damped sine wave
ARMA(p, q)  | Tails off after lag (q - p)                        | Tails off after lag (p - q)
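The patterns in Table 1 can be checked numerically; below is a small sketch computing the sample ACF of a simulated MA(1) series, whose ACF should cut off after lag 1 (illustrative parameter values):

```python
import numpy as np

def sample_acf(z, nlags):
    # r_k = c_k / c_0, with c_k the lag-k sample autocovariance
    z = np.asarray(z, dtype=float)
    d = z - z.mean()
    c0 = np.sum(d * d)
    return np.array([np.sum(d[k:] * d[:len(z) - k]) / c0 for k in range(nlags + 1)])

# MA(1): Z_t = a_t - 0.7 a_{t-1}; theoretical ACF: rho_1 = -0.7/1.49 ~ -0.47,
# rho_k = 0 for k >= 2 ("cuts off after lag q = 1", Table 1).
rng = np.random.default_rng(42)
a = rng.standard_normal(5000)
z = a[1:] - 0.7 * a[:-1]
r = sample_acf(z, 5)
band = 1.96 / np.sqrt(len(z))   # approximate 95% band for zero autocorrelation
```

Lags 2-5 should fall inside the significance band, while lag 1 is clearly nonzero, reproducing the MA(1) row of Table 1.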

After identifying a tentative model, the next step is to estimate the parameters in the model. Consider a simple ARMA(1,1) model

    Z_t = φ_1 Z_{t-1} + a_t - θ_1 a_{t-1}.   (4)

To calculate a_t, we note that

    a_t = Z_t - φ_1 Z_{t-1} + θ_1 a_{t-1}
        = Z_t - φ_1 Z_{t-1} + θ_1 (Z_{t-1} - φ_1 Z_{t-2} + θ_1 a_{t-2})
        = Z_t - (φ_1 - θ_1) Z_{t-1} - θ_1 φ_1 Z_{t-2} + θ_1^2 a_{t-2}
        = ...   (5)
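The recursion (5) is straightforward to evaluate numerically, conditional on zero initial values; a minimal sketch for an ARMA(1,1), with simulated data and illustrative parameter values:

```python
import numpy as np

def arma11_residuals(z, phi, theta):
    # a_t = Z_t - phi*Z_{t-1} + theta*a_{t-1}, with Z_0 = a_0 = 0 (conditional)
    a = np.zeros(len(z))
    for t in range(len(z)):
        z_prev = z[t - 1] if t > 0 else 0.0
        a_prev = a[t - 1] if t > 0 else 0.0
        a[t] = z[t] - phi * z_prev + theta * a_prev
    return a

# Simulate ARMA(1,1) with phi = 0.6, theta = 0.4 and compare the conditional
# sum of squares S = sum_t a_t^2 at the true and at wrong parameter values.
rng = np.random.default_rng(1)
n, phi, theta = 2000, 0.6, 0.4
a_true = rng.standard_normal(n)
z = np.zeros(n)
for t in range(1, n):
    z[t] = phi * z[t - 1] + a_true[t] - theta * a_true[t - 1]
S_true = float(np.sum(arma11_residuals(z, phi, theta) ** 2))
S_wrong = float(np.sum(arma11_residuals(z, -0.6, 0.4) ** 2))
```

The sum of squares is markedly smaller at the true parameters, which is exactly what the conditional least-squares criterion exploits.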

The equation is clearly nonlinear in the parameter. Hence, for a general ARMA model, a nonlinear least squares estimation procedure must be used to obtain estimates. We can use conditional least squares (CLS) procedure to find the least squares estimates that minimize the error sum of squares:

2

~ ~~ ~ ~~1

( , , ) ( , , , , , )n

tinit initt

S a Z a Z

(6)

where Z_init and a_init are initial values. One of the algorithms commonly used to achieve proper and fast convergence in nonlinear estimation is Levenberg-Marquardt.

V. GENETIC ALGORITHM FOR ARMA IDENTIFICATION

Genetic Algorithm (GA) is a population-based search method. GAs are categorized as global search heuristics, classed under Evolutionary Algorithms, that use techniques inspired by evolutionary biology: the principles of natural selection and genetics. Reference [6] developed the ideas and concepts behind the GA, and many authors have refined this initial approach [10]. Researchers who have used GAs to solve complex real-world problems report good results from these techniques, for example the study of [7].

Solutions in a GA are represented by data structures, often fixed-length vectors, called chromosomes. Chromosomes are composed of genes whose values are called alleles, and the position of a gene in the chromosome is called its locus [5]. Chromosomes have been represented by vectors, strings, arrays, and tables. Alleles have been represented by a simple binary system of zeroes and ones, integers, real numbers, letters of the alphabet, or other symbols [4]. Each chromosome has a fitness value which indicates the suitability of the chromosome as a solution to the problem [5].

In this research, the steps of GA for ARMA model identification are:

Step 0. [Define] Define the GA operator settings suitable for the problem. This GA uses binary encodings that represent the orders p and q. For


example, the chromosome (10010,11010) represents ARMA([1,4],[1,2,4]).

Step 1. [Initialization] Create an initial population P of PopSize chromosomes.

Step 2. [Fitness] Evaluate the fitness f(Ci) of each chromosome Ci in the population. The fitness is 1/MSE, so high fitness corresponds to low MSE.

Step 3. [Selection] Apply Roulette Wheel Selection to produce Mating Population M with size PopSize.

Step 4. [Crossover] Pair all the chromosomes in M at random forming PopSize /2 pairs. Apply crossover with probability pc to each pair and form PopSize chromosomes of offspring.

Step 5. [Mutation] With a probability of mutation pm, mutate the offspring.

Step 6. [Replace] Evaluate fitness of new offspring. Replace the old population with newly generated population.

Step 7. [Test] If the stopping criterion is met (no further improvement in fitness) then stop and return the best solution in the current population, else go to Step 2.
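The steps above can be sketched compactly (roulette-wheel selection, one-point crossover, bit-flip mutation). The fitness function here is a stand-in: in the paper it is 1/MSE of the ARMA model encoded by the chromosome, but fitting ARMA models is outside this sketch, so a hypothetical `fitness` counting matching bits against a target pattern is used instead:

```python
import random

def roulette(pop, fit):
    """Roulette-wheel selection: pick a chromosome proportionally to fitness."""
    r = random.uniform(0, sum(fit))
    acc = 0.0
    for chrom, f in zip(pop, fit):
        acc += f
        if acc >= r:
            return chrom
    return pop[-1]

def ga(fitness, n_bits=10, pop_size=20, pc=0.8, pm=0.05, max_gen=50):
    """Basic GA following Steps 1-7 (pop_size assumed even for pairing)."""
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(max_gen):
        fit = [fitness(c) for c in pop]                       # Step 2
        mating = [roulette(pop, fit) for _ in range(pop_size)]  # Step 3
        offspring = []
        for i in range(0, pop_size, 2):                       # Step 4: crossover
            p1, p2 = mating[i][:], mating[i + 1][:]
            if random.random() < pc:
                cut = random.randint(1, n_bits - 1)
                p1[cut:], p2[cut:] = p2[cut:], p1[cut:]
            offspring += [p1, p2]
        for c in offspring:                                   # Step 5: mutation
            for h in range(n_bits):
                if random.random() < pm:
                    c[h] = 1 - c[h]
        pop = offspring                                       # Step 6: replace
        best = max(pop + [best], key=fitness)
    return best

# stand-in fitness: count bits matching a target (would be 1/MSE in the paper)
target = [1, 0, 0, 1, 0, 1, 1, 0, 1, 0]
fitness = lambda c: 1 + sum(b == t for b, t in zip(c, target))
print(ga(fitness))
```

In the paper's setting, decoding a chromosome into AR and MA lag subsets and estimating that model by CLS would replace the stand-in fitness.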

VI. RESULTS

The data analysis is divided into two major parts: building ARMA models using the correlogram and using GA as identification methods.

6.1. MODEL FROM CORRELOGRAM

The data plot in Figure 1 suggests that the series is possibly nonstationary in mean and variance.

Figure 1. Time series plot of data

The Box-Cox analysis as shown in Figure 2 indicates that no transformation is needed and the series is stationary in variance, since the optimal lambda is 1.

Figure 2. Box-Cox transformation analysis of data (estimate 0.79974; 95% CI 0.56870 to 1.06275; rounded best value 1.00)

The mean, however, is indicated to be nonstationary, suggesting that differencing is needed. The sample ACF and PACF of the differenced data are shown in Figures 3-4. These plots indicate some tentative models: ARIMA(1,1,0), ARIMA(2,1,0), ARIMA(0,1,1), and ARIMA(0,1,2).

Figure 3. Sample ACF of differenced data

Figure 4. Sample PACF of differenced data

After identifying tentative models, the next step is to estimate and test the parameters. The parameter estimates and tests summarized in Table 2 show that the second-order AR and MA terms are both not significant. This step suggests two simple models: ARIMA(1,1,0) and ARIMA(0,1,1). We check the adequacy of the models: white-noise and normally distributed residuals. From


Table 3, only ARIMA(1,1,0) satisfies the white-noise assumption. From the normality test, this model also has normally distributed residuals (p-value from the Kolmogorov-Smirnov normality test: 0.081).

Table 2. Parameter Estimation and Testing

Model          Parameter   Coeff     p-value
ARIMA(1,1,0)   φ1          -0.5501   0.000
ARIMA(2,1,0)   φ1          -0.5948   0.000
               φ2          -0.0915   0.262
ARIMA(0,1,1)   θ1           0.5940   0.000
ARIMA(0,1,2)   θ1           0.6242   0.000
               θ2          -0.1198   0.164

Table 3. White-noise and normality tests

Model          Ljung-Box lag:   12      24      36     White noise / normal?
ARIMA(1,1,0)   p-value          0.441   0.348   0.558  Yes / Yes
ARIMA(0,1,1)   p-value          0.026   0.001   0.001  No / -

Thus the final model from identification using the correlogram is ARIMA(1,1,0):

z_t = z_{t-1} - 0.5501 (z_{t-1} - z_{t-2}) + a_t    (7)
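Written this way, the fitted ARIMA(1,1,0) gives a direct one-step-ahead forecast rule: the next change in the series is the AR coefficient times the last change. A small sketch (the sample values are illustrative):

```python
def forecast_arima110(z_prev, z_prev2, phi=-0.5501):
    """One-step forecast for ARIMA(1,1,0): the forecast change equals
    phi times the last observed change, so
    z_hat = z_{t-1} + phi * (z_{t-1} - z_{t-2})."""
    return z_prev + phi * (z_prev - z_prev2)

print(forecast_arima110(12.0, 10.0))
```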

6.2. MODEL FROM GA

The second part of the data analysis is the application of GA as the identification method. GA is applied to the differenced data. The analysis is conducted with different numbers of chromosomes: 10, 20, 40 and 100. The number of generations at which the process stopped, the parameter estimates and the MSE for each N are reported in Table 4. The best model based on minimum MSE is ARMA([1,3,5],[2,3,4,5]):

z_t = -0.548 z_{t-1} + 0.439 z_{t-3} + 0.007 z_{t-5}
      + a_t - 0.236 a_{t-2} - 0.793 a_{t-3} + 0.533 a_{t-4} - 0.362 a_{t-5}    (8)

from the combination of N = 100 and G = 67. Similar to the steps in 6.1, the best model from GA is tested against the white-noise and normality assumptions. The model satisfies the normal-residual assumption (p-value > 0.15), but its residuals are not a white-noise process. We therefore proceed with modeling the residuals using ARMA. The steps of building an ARMA model for the residuals follow Box-Jenkins ARIMA modeling as mentioned before: identification, parameter estimation and testing, and adequacy checking. These suggest an ARMA(5,4) model to represent the residuals.

The final model that satisfies the best-model selection criteria is:

z_t = -0.548 z_{t-1} + 0.439 z_{t-3} + 0.007 z_{t-5}
      + a_t - 0.236 a_{t-2} - 0.793 a_{t-3} + 0.533 a_{t-4} - 0.362 a_{t-5} + X_t    (9)

where

X_t = 1.93 e_{t-1} - 1.612 e_{t-2} + 0.965 e_{t-3} - 0.492 e_{t-4} - 0.201 e_{t-5}
      + r_t - 1.99 r_{t-1} + 1.64 r_{t-2} - 0.356 r_{t-3} + 0.192 r_{t-4}    (10)

Here e represents the AR part and r the MA part of the residual model. The final model satisfies the white-noise and normal-residual assumptions.

Table 4. GA Model Identification Results

N     G    AR estimates (lags 1-5)                  MA estimates (lags 1-5)                  MSE
10    15   -0.527, -0.205,  0.202, -0.135,  0.083    0,      0,      0.624, -0.514,  0.565   7.90E+08
20    43    0,      0,     -0.049, -0.631,  0        0,     -0.065,  0.237, -0.948,  0.306   9.64E+08
40    83   -0.544,  0,      0.434, -0.012,  0.003    0,      0.227,  0.778, -0.523,  0.374   7.87E+08
40     9   -0.536,  0,      0.427, -0.023,  0        0,      0.225,  0.773, -0.522,  0.378   7.78E+08
100   79   -0.556,  0,      0.440,  0,      0        0,      0.239,  0.792, -0.536,  0.362   7.76E+08
100   40   -0.544,  0,      0.434, -0.012,  0.003    0,      0.227,  0.778, -0.523,  0.374   7.87E+08
100   67   -0.548,  0,      0.439,  0,      0.007    0,      0.236,  0.793, -0.533,  0.362   7.73E+08

N = number of chromosomes; G = number of generations when convergence is reached.

VII. THE COMPARISON

The best model from the correlogram is compared to the best model from GA based on in-sample RMSE and out-of-sample MAPE. The results are summarized in Table 5.

Table 5. RMSE and MAPE from Correlogram and GA

        Correlogram   GA
RMSE    34,286.086    25,243.370
MAPE    12.47%        10.657%

VIII. CONCLUSION

The ARMA identification method using GA produces smaller in-sample RMSE and out-of-sample MAPE than the correlogram identification method. This study is limited by the use of MSE as the single criterion in the fitness function. Further research can include other criteria in the fitness function.

ACKNOWLEDGEMENTS

The authors are very grateful to Penelitian Produktif ITS Tahun Anggaran 2010 and Institut Teknologi Sepuluh Nopember, which kindly funded this research.

REFERENCES

[1] Box, G.E.P., Jenkins, G.M. and Reinsel, G.C. 1994. Time Series Analysis: Forecasting and Control, 3rd Edition. Englewood Cliffs: Prentice Hall.

[2] Chipperfield, A.J. and Fleming, P.J. 1995. The MATLAB Genetic Algorithm Toolbox. Applied Control Techniques Using MATLAB, IEE Colloquium, 101-104.

[3] Cryer, J.D. and Chan, Kung-Sik. 2008. Time Series Analysis, with Applications in R, Second Edition. Springer.

[4] Gen, M. and Cheng, R. 2000. Genetic Algorithms and Engineering Optimization. Canada: John Wiley & Sons, Inc.

[5] Goldberg, D. E. 1989. Genetic Algorithms in Search, Optimization and Machine Learning. New York, USA: Addison-Wesley.

[6] Holland, J.H. 1975. Adaptation in Natural and Artificial Systems. Ann Arbor: The University of Michigan Press.

[7] Irhamah and Ismail, Z. 2009. A Breeder Genetic Algorithm for Vehicle Routing Problem with Stochastic Demands. Journal of Applied Sciences Research, 5(11), 1998-2005.

[8] Minerva, T. and Poli, I. 2001. Building ARMA model with Genetic Algorithms. Lecture Notes in Computer Science, 2037, 335-342

[9] Ong, C.S., Huang, J.J. and Tzeng, G.H. 2005. Model identification of ARIMA family using genetic algorithms. Applied Mathematics and Computation, 164, 885-912.

[10] Ortiz, F., Simpson, J. R., Pignatiello, J. J., and Langner, A. H. 2004. A Genetic Algorithm approach to Multiple-Response Optimization. Journal of Quality Technology. 36 (4).

[11] Sudarsono, I. 2008. Peramalan Arc Tube Dalam Pembuatan Lampu Light Capsule Super (Lcs) Dengan Metode Arima Box-Jenkins Di Pt. Panasonic Lighting Indonesia. Final project: Institut Teknologi Sepuluh Nopember

[12] Wei, W.W.S. 2006. Time Series Univariate and Multivariate Methods, Second Edition. Canada: Addison Wesley Publishing Company, Inc.


A Particle Swarm Optimization for Employee Placement Problems in the Competency Based Human Resource Management System

Joko Siswanto(1) and The Jin Ai(2)

(1)Industrial Management Research Group, Industrial Technology Faculty, Institut Teknologi Bandung, INDONESIA

(Email: [email protected]) (2)Department of Industrial Engineering, Universitas Atma Jaya Yogyakarta, INDONESIA

(Email: [email protected])

ABSTRACT

This paper discusses how particle swarm optimization can be applied to solve employee placement problems in competency based human resource management. The employee placement problem is the problem of simultaneously placing many people into many jobs in an organization. After the particle swarm mechanism for solving the problem is defined and explained, a simple case study is presented to illustrate the capability of the proposed method.

KEYWORDS

Employee placement problem; Human resource management; Competency based human resource management system; Particle swarm optimization; Evolutionary computing.

I. INTRODUCTION

Competency Based Human Resource Management (CBHRM) System is a standardized process to manage people optimally in an organization, from recruitment, selection and placement up to the termination process, based on job competency profiles and individual competencies, in order to achieve the organization's goals, missions and vision [1]. One function of a CBHRM system is placement: the process of putting the right persons in the right places at the right time, which is critical for the success of any modern organization. Usually, a placement problem involves a multi-criteria decision making process. In the simplest case, an employee can be rotated or promoted to a certain job within an organization one by one, sequentially, based on a set of criteria of past performance, current competencies and future expectations. But sometimes, in more complex problems, an organization needs to place many people into many jobs, even for the whole organization, simultaneously. This paper demonstrates the application of Particle Swarm

Optimization (PSO) to these employee placement problems. PSO is a population-based search method proposed by Kennedy and Eberhart [2], motivated by the behavior of group organisms such as bee swarms, fish schools, and bird flocks. PSO imitates the physical movements of the individuals in the swarm as a searching method, together with their cognitive and social behavior as local and global exploration abilities. One advantage of PSO is the simplicity of its iteration step, which consists only of updating two sets of equations. PSO is widely used as a solution methodology for numerous combinatorial optimization problems such as job shop scheduling [3], vehicle routing [4], and project scheduling [5]. Due to its simplicity and its unexplored potential in the HRM area, this paper discusses how particle swarm optimization can be applied to solve the employee placement problems in the CBHRM system. Specifically, it describes how the solution of the problem, which is the placement of the employees, can be represented as a multi-dimensional particle. The decoding method for translating a particle into an employee placement is also explained. A simple case study is presented at the end of this paper to illustrate the capability of the proposed particle swarm optimization algorithm for solving the employee placement problem. The advantages and disadvantages of this algorithm are also discussed, together with opportunities for improvement and extension.


II. EMPLOYEE PLACEMENT PROBLEM

A. Problem Definition

The employee placement problem (EPP) in an organization can be defined as the problem of placing many employees into many jobs simultaneously based on a set of criteria of past performance, current competencies and future expectations. Regarding the competency criterion, the employees' competencies should be aligned with the job competency profiles. The job competency profiles provide a list of competencies and the minimal scores on those competencies required to hold the jobs, while the employees' competencies are the quantitative scores of each employee on those competencies. The minimal score on a competency is the quantification of the capability required on that competency. Therefore, an employee with lower scores than the minimal required scores of a certain job position is not qualified to hold that job [6]. For the placement criteria, the closeness between an employee's competencies and a job competency profile is the basis for measuring the competency performance score of an employee on that particular job. The generic EPP can be defined as the problem of placing a set of m potential employees into a set of n available jobs in order to maximize the total weighted score of the criteria, subject to the required competencies. In this generic definition, it is assumed that a job can be filled by at most a single employee. Each criterion may comprise many sub-criteria, which can be defined in hierarchical form. For example, the competency criterion can be divided into major competency, supporting competency, and field competency, where the major competency comprises five sub-competencies. Using a proper methodology, such as the analytic hierarchy process (AHP), the weights of each competency and sub-competency can be determined. In the mathematical formulation defined below, these weights are utilized for obtaining the total weighted score of the criteria as the objective function of the decision problem.
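Given a placement, the total weighted score can be evaluated directly. A minimal sketch (the scores, the 1-based IDs, and the unweighted α + β + γ combination are illustrative assumptions):

```python
def total_score(placement, alpha, beta, gamma):
    """Total weighted score of a placement {employee: job}: for each assigned
    pair (i, j), add past performance alpha[i], competency performance
    beta[i][j], and future expectation gamma[i][j]."""
    return sum(alpha[i] + beta[i][j] + gamma[i][j] for i, j in placement.items())

# hypothetical scores for 2 employees and 2 jobs (1-based IDs)
alpha = {1: 0.6, 2: 0.8}
beta = {1: {1: 0.5, 2: 0.7}, 2: {1: 0.9, 2: 0.4}}
gamma = {1: {1: 0.3, 2: 0.6}, 2: {1: 0.5, 2: 0.2}}
print(total_score({1: 2, 2: 1}, alpha, beta, gamma))
```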

B. Mathematical Formulation

The EPP can be formulated as the following integer programming problem:

Maximize

Z = Σ_{i=1..m} Σ_{j=1..n} (α_i + β_ij + γ_ij) x_ij    (1)

Subject to

Σ_{i=1..m} x_ij ≤ 1, for all j    (2)

Σ_{j=1..n} x_ij ≤ 1, for all i    (3)

x_ij = 0, for every employee i that is not qualified to hold job j    (4)

x_ij ∈ {0, 1}, for all i, j    (5)

where:
m    : number of potential employees
n    : number of available jobs
x_ij : binary assignment variable; x_ij = 1 if employee i is assigned to job j, x_ij = 0 otherwise
i    : index of employee, i = 1 ... m
j    : index of job, j = 1 ... n
α_i  : past performance score of employee i
β_ij : current competency performance score of employee i on job j
γ_ij : future expectation score of employee i on job j

The objective function in Eq. 1 shows that the higher the past performance of an employee, the bigger the chance of that employee being placed in any job. It also implies that the placement tends to put an employee in the job that maximizes the current competency performance and future expectation scores over all jobs. Eq. 2 states that each job is to be filled by at most one employee; whenever no employee is qualified to hold a job, it is not necessary to place any employee in that job. Eq. 3 shows that each employee is placed in at most one job. In Eq. 4, the binary assignment variables are limited by the employees' qualifications for the available jobs. The variable domain is defined in Eq. 5. By nature, the mathematical formulation of the EPP has m·n binary variables. Therefore, if the EPP is solved using a total enumeration


technique, there are 2^(m·n) alternative solutions that should be evaluated.

III. DATA PREPROCESSING

There are three steps in preprocessing the human resource data into the parameters required in the EPP: job and employee set definition, CBHRM data extraction, and criteria evaluation.

A. Job and Employee Sets Definition

In the first step, the number of available jobs is identified. For a job with many available positions, every position is defined as a different job so that each job can be filled only by a single employee. The job ID is assigned based on its importance or rank, i.e. the first job (j = 1) is the most important or highest-rank job and the last job (j = n) is the least important or lowest-rank job. After the job set is defined, the employee set is defined, i.e. by listing the candidates that could be placed in at least one job in the job set.

B. CBHRM Data Extraction

In this step, the CBHRM data is extracted to find the job competency profiles for each job in the job set, the past performance of each employee in the employee set, the current competency performances of the employees, and the future expectations of each employee placed into a particular job.

C. Criteria Evaluation

Using the AHP method, the CBHRM data is processed into the score criteria: the past performance score of employee i (α_i), the current competency performance score of employee i on job j (β_ij), and the future expectation score of employee i on job j (γ_ij). At the end of this step, all parameters required in the EPP are available, so the EPP is ready to be solved.

IV. PSO METHOD FOR SOLVING EPP

A. PSO Algorithm [7]

As mentioned before, PSO is a population-based search method that imitates the physical movements of the individuals in a swarm as a searching method. In PSO, a swarm of L particles serves as the searching agent for a specific problem solution. A particle's position (θ_l), which consists of H dimensions, represents a solution of the problem. The ability of a particle to search for a solution is represented by its velocity vector (ω_l), which drives the particle movement. In each PSO iteration step, every particle moves from one position to the next based on its velocity. Moving from one position to another, a particle evaluates a different prospective solution of the problem. The basic particle movement equation is:

θ_lh(τ+1) = θ_lh(τ) + ω_lh(τ+1)    (6)

where:
θ_lh(τ+1) : position of the l-th particle in the h-th dimension in the (τ+1)-th iteration
θ_lh(τ)   : position of the l-th particle in the h-th dimension in the τ-th iteration
ω_lh(τ+1) : velocity of the l-th particle in the h-th dimension in the (τ+1)-th iteration

PSO also imitates the swarm's cognitive and social behavior as local and global search abilities. In the basic version of PSO, the particle's personal best position (ψ_l) and the global best position (ψ_g) are always updated and maintained. The personal best position of a particle, which expresses the cognitive behavior, is defined as the position that gives the best objective function among the positions that have been visited by the particle. Once a particle reaches a position that has a better objective function than the previous best objective function for this particle, i.e. Z(θ_l) > Z(ψ_l), the personal best position is

updated. The global best position, which expresses the social behavior, is the position that gives the best objective function among the positions that have been visited by all particles in the swarm. Once a particle reaches a position with a better objective function than the previous best for the whole swarm, i.e. Z(θ_l) > Z(ψ_g), the global best position is also updated. The personal best and global best positions are used for updating the particle velocity. In each iteration step, the velocity is updated based on three terms: the inertia, cognitive learning and social learning terms. The inertia term forces the particle to move in the same direction as in the previous iteration. This term is calculated as the product of the current velocity with an inertia


weight (w). The cognitive term forces the particle back toward its personal best position. This term is calculated as the product of a random number (u), the personal best acceleration constant (c_p), and the difference between the personal best position ψ_l and the current position θ_l. The social term forces the particle toward the global best position. This term is calculated as the product of a random number (u), the global best acceleration constant (c_g), and the difference between the global best position ψ_g and the current position θ_l. More specifically, the velocity updating equation is:

ω_lh(τ+1) = w·ω_lh(τ) + c_p·u·(ψ_lh(τ) − θ_lh(τ)) + c_g·u·(ψ_gh(τ) − θ_lh(τ))    (7)

where:
ω_lh(τ) : velocity of the l-th particle in the h-th dimension in the τ-th iteration
ψ_lh(τ) : personal best position of the l-th particle in the h-th dimension in the τ-th iteration
ψ_gh(τ) : global best position in the h-th dimension in the τ-th iteration

In the velocity-updating formula, random numbers are incorporated in order to randomize the particle movement. Hence, two different particles may move to different positions in the subsequent iteration even though they have similar position, personal best, and global best.

Algorithm 1: Basic PSO Algorithm

Step 1: Initialization
- Set the PSO parameters: T, L, w, c_p, c_g.
- Set the iteration counter, τ = 1.
- Generate L particles with random initial positions (θ_l) and zero velocity (ω_l = 0).
- Set the initial personal best position of each particle to its position (ψ_l = θ_l).

Step 2: Iteration - Particle Movement
- Decode each particle into a problem-specific solution and evaluate the objective function of the solution. Set the objective function value as the fitness value of the particle, Z(θ_l).
- Update the personal best position of each particle: set ψ_l = θ_l if Z(θ_l) > Z(ψ_l).
- Update the global best position: set ψ_g = θ_l if Z(θ_l) > Z(ψ_g).
- Move each particle based on Eq. 6, after updating the particle velocity based on Eq. 7.

Step 3: Termination
- If the terminating criterion is reached, i.e. τ = T, stop the iteration. The solution corresponding to the last global best position is the best solution found by this algorithm.
- Otherwise, set the iteration counter τ = τ + 1 and go back to Step 2.
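A minimal sketch of Algorithm 1 on a toy continuous maximization problem (the sphere-like objective, the search bounds, and the parameter values are illustrative; the EPP decoding step is replaced by direct evaluation of the position):

```python
import random

def pso(objective, dim, T=200, L=30, w0=0.9, w1=0.4, cp=2.0, cg=2.0,
        lo=-5.0, hi=5.0):
    """Basic PSO (Algorithm 1): maximize `objective` over [lo, hi]^dim."""
    theta = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(L)]
    omega = [[0.0] * dim for _ in range(L)]        # zero initial velocity
    pbest = [p[:] for p in theta]                  # personal bests
    gbest = max(pbest, key=objective)[:]           # global best
    for tau in range(T):
        w = w0 + (w1 - w0) * tau / (T - 1)         # decreasing inertia weight
        for l in range(L):                         # update personal/global bests
            if objective(theta[l]) > objective(pbest[l]):
                pbest[l] = theta[l][:]
            if objective(theta[l]) > objective(gbest):
                gbest = theta[l][:]
        for l in range(L):                         # Eq. (7), then Eq. (6)
            for h in range(dim):
                omega[l][h] = (w * omega[l][h]
                               + cp * random.random() * (pbest[l][h] - theta[l][h])
                               + cg * random.random() * (gbest[h] - theta[l][h]))
                theta[l][h] += omega[l][h]
    return gbest

# toy objective with its maximum (0) at the origin
sphere = lambda x: -sum(v * v for v in x)
best = pso(sphere, dim=3)
print(best, sphere(best))
```

For the EPP, `objective` would decode the position into a placement (Section IV.C) and return the total weighted score Z.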

B. Solution Representation

In PSO, a problem-specific solution is represented by the position of a particle in multi-dimensional space. The proposed solution representation of an EPP with m employees and n jobs is an m-dimensional particle. Each particle dimension is encoded as a real number. These m dimensions correspond to the employees, with each employee represented by one dimension. The position value in each particle dimension represents the priority weight of its corresponding employee to be placed into jobs in the decoding step.

Figure 1. A Solution Representation of EPP (m=5)

C. Decoding Method

The decoding method is required to transform a particle (represented by its position) into a problem-specific solution, which in the EPP is the placement of employees into jobs. As mentioned before, the first step of the decoding method is the extraction of employee priority weights from the position values. Each employee is given a priority weight from its corresponding particle dimension. For example, the particle depicted in Fig. 1 can be transformed into the following priority weights for five employees: 1.075, 0.344, 3.150, 4.593, 2.728. The priority of an employee to be placed into jobs corresponds to its priority


weight. Therefore, an employee with a higher priority weight is given more priority than an employee with a lower priority weight. So, continuing the example, the fourth employee is given the first priority and the second employee the last priority. The complete information related to employee priority can be kept in a list as illustrated in Table 1.

Table 1. An Employee Priority List

Employee ID   Priority Weight   Priority Rank
4             4.593             1
3             3.150             2
5             2.728             3
1             1.075             4
2             0.344             5

After the employee priority list is created, the placement of employees into jobs is performed. One by one, each employee in the employee priority list, starting from the first rank, is placed into a job considering the rank of the job, the availability of the job, and the employee's qualifications. An employee is placed at the highest-rank job that matches the employee's qualifications and has not yet been assigned to another employee. It is possible to have a situation where there is no available job left for an employee. Finally, the complete employee placement can be conducted and the result displayed as illustrated in Table 2.

Table 2. An Employee Placement

Employee ID   Job ID
4             2
3             -
5             1
1             3
2             -

It is implied from the example illustrated in Table 2 that the fourth employee is not qualified for the first job, so this employee is assigned to the second job. Also, the third employee is qualified only for the second job; since the second job is already assigned to the fourth employee, this employee cannot be placed in any job. The fifth employee is qualified for the first job, and the first employee meets the qualification of the third job. Therefore, no job is left available for the second employee.
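The decoding steps can be sketched as follows (the qualification sets are illustrative assumptions chosen to reproduce the example of Tables 1-2):

```python
def decode(weights, n_jobs, qualified):
    """Decode a particle (one priority weight per employee) into a placement.
    `qualified[i]` is the set of job IDs employee i may hold (1-based IDs).
    Returns {employee_id: job_id} for the employees that get a job."""
    # rank employees by descending priority weight (Table 1)
    order = sorted(range(1, len(weights) + 1),
                   key=lambda i: weights[i - 1], reverse=True)
    placement, taken = {}, set()
    for emp in order:
        # highest-rank (lowest job ID) qualified job still available
        for job in range(1, n_jobs + 1):
            if job not in taken and job in qualified[emp]:
                placement[emp] = job
                taken.add(job)
                break
    return placement

# particle from Fig. 1 and hypothetical qualification sets
weights = [1.075, 0.344, 3.150, 4.593, 2.728]
qualified = {1: {3}, 2: {1, 2, 3}, 3: {2}, 4: {2, 3}, 5: {1, 2}}
print(decode(weights, 3, qualified))
```

With these assumed qualifications the sketch reproduces Table 2: employees 4, 5, 1 are placed in jobs 2, 1, 3 respectively, while employees 3 and 2 remain unplaced.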

V. CASE STUDY

A simple case study is conducted to illustrate the capability of the proposed particle swarm optimization algorithm for solving the employee placement problem. The case comprises a problem of placing five employees into three available jobs. Hypothetical CBHRM data is used, consisting of job competency profiles, the past performance of each employee, the current competency performances of the employees, and the future expectations of each employee placed into a particular job. To test the performance of the proposed PSO, the algorithm is coded into a computer program in the C# language. The PSO parameters used to solve this case are: number of particles L = 30, number of iterations T = 200, inertia weight w decreasing from 0.9 to 0.4, personal best acceleration constant c_p = 2, and global best acceleration constant c_g = 2. Since PSO is stochastic, five replications of the algorithm are run. For comparison purposes, total enumeration of the possible solutions is performed: all possible solutions are evaluated, so that the best employee placement can be determined. Among the five PSO replications performed, three give the same result as the best employee placement, and the other replications give solutions whose objective functions are very close to the objective function of the best employee placement.

VI. CONCLUSIONS AND FURTHER WORKS

The simple case study above shows that the proposed solution representation and decoding method are effective for solving the EPP using basic PSO. The effectiveness of this method still needs to be confirmed on larger and real-world problems. It is noted that the results in this paper are gained by the pure PSO algorithm. Hence, it may be possible to improve the results with more sophisticated PSO variants and features. It is also possible to hybridize this PSO with other techniques, e.g. local search methods. It is also possible to improve the performance of the


proposed algorithm by parameter optimization and programming implementation. Integrating this EPP-solving module inside the CBHRM system, including automated data extraction, is the ultimate direction of this research.

REFERENCES

[1]. Siswanto, J (2007). "Integrated Competency Based Human Resource Management System: Implemented Model in Indonesian Crown Corporations," Proceedings of the 8th Asia Pacific Industrial Engineering and Management Systems (APIEMS 2007) Conference, Kaoshiung, Taiwan.

[2]. Kennedy, J and Eberhart, R (1995). "Particle Swarm Optimization," Proceedings of IEEE International Conference on Neural Networks, Vol 4, pp 1942-1948.

[3]. Pongchairerks, P and Kachitvichyanukul, V (2009). "A two-level Particle Swarm Optimisation algorithm on Job-Shop Scheduling Problems," International Journal of Operational Research, Vol 4, No 2, pp 390-411.

[4]. Ai, TJ and Kachitvichyanukul, V (2009). "Particle swarm optimization and two solution representations for solving the capacitated vehicle routing problem," Computers and Industrial Engineering, Vol 56, No 1, pp 380-387.

[5]. Zhang, H, Li, H and Tam, CM (2006). "Particle swarm optimization for resource-constrained project scheduling," International Journal of Project Management, Vol 24, No 1, pp 83-92.

[6]. Siswanto, J, Cahyono, E and Wangi, S (2009). "An optimal method of job employee matching in corporate organization," Proceedings of the 4th International Conference on Mathematics and Statistics (ICoMS 2009), Bandar Lampung, Indonesia.

[7]. Ai, TJ (2008). "Particle swarm optimization for generalized vehicle routing problem," Dissertation, Asian Institute of Technology, Thailand


Measuring Similarity between Wavelet Function and Transient in a Signal with Symmetric Distance Coefficient

Nemuel Daniel Pah(1)

(1)Department of Electrical Engineering, Universitas Surabaya, Surabaya, Indonesia (Email: [email protected])

ABSTRACT Wavelet transform has been developed and applied in various areas of science and engineering. It offers better signal processing capabilities due to the existence of a large number of wavelet functions. The superiority comes with a requirement of methods to select a proper wavelet function for each signal or application. Selection methods based on signal properties are not always applicable due to the lack of mathematical definition of most analyzed signals. Wavelet function can be selected based on its shape similarity to interested transient in input signal. The selection method is subjective since there is no quantified parameter to measure this similarity. This paper introduces a new parameter, symmetric distance coefficient (SDC) to measure similarity between wavelet function and transient in a signal. It is based on a fact that wavelet coefficients of a transient that has similar shape and similar time support to a wavelet function are always symmetric. The parameter measures similarity by measuring the degree of symmetry in wavelet coefficients. The paper also reports an experiment to demonstrate procedures to measure this similarity using SDC. KEYWORDS Wavelet; symmetric; transient; transformation. I.INTRODUCTION Wavelet Transform (WT) is a recent development in signal processing [1, 2]. Wavelet transform of a signal f L2(R) is defined as a correlation between the signal and a dilated-shifted wavelet function ψs,σ.

\[ Wf(s,\sigma) = \langle f, \psi^{*}_{s,\sigma} \rangle = \frac{1}{\sqrt{s}} \int_{-\infty}^{\infty} f(t)\, \psi^{*}\!\left(\frac{t-\sigma}{s}\right) dt \]  (1)

The magnitude of the wavelet coefficient, |Wf(s, σ)|, indicates how well the wavelet function and the signal correlate around t = σ, and is determined by the match between the properties of the wavelet function and those of the analyzed signal. The WT offers a large selection of wavelet functions. This flexibility comes with a need for methods to select a wavelet function.

Common wavelet selection methods are based on matching the properties of the wavelet function to the properties of the analyzed signal [1-8]. These methods require both the wavelet function and the analyzed signal to be defined mathematically. All wavelet functions are defined mathematically, but most analyzed signals are natural signals that lack a mathematical expression. Another selection method is based on visual shape similarity between a transient of interest in the input signal and the wavelet function [9-14]. That method is not accurate, since the similarity is judged only by visual inspection.

This paper introduces a new parameter to measure similarity between a wavelet function and a transient in a signal by measuring the degree of symmetry in the wavelet coefficients. The new parameter is called the 'symmetric distance coefficient' (SDC) [15]. This paper is organized into five sections. Following this introduction, Section II elaborates the issue of measuring shape similarity. Section III gives the definition of the SDC, while Section IV presents the procedure and experiments to measure shape similarity between a wavelet function and transients in a signal. Finally, Section V gives the concluding remarks.

II. MEASURING SHAPE SIMILARITY
Shape similarity between a wavelet function and a transient in a signal can be observed in the symmetric pattern of the wavelet coefficients in the neighborhood of the center of the transient, t = tc [16]. It is based on the fact that the correlation function of two similarly shaped functions is always symmetric (even-symmetric). Consequently, if a wavelet function has a shape similar to a transient at t = tc, the wavelet coefficients in the neighborhood of σ = tc are even-symmetric. This symmetry feature is not altered by signal amplification. There is a difficulty in using this symmetric nature to measure shape similarity: in the real world it is nearly impossible to find a wavelet function that has exactly the same shape as a transient in the input signal. One can only find a wavelet function that is almost similar, which produces nearly even-symmetric wavelet coefficients. To measure this 'nearly similar' condition, a parameter is needed that measures symmetry in gradual levels. Unfortunately, symmetry is usually defined only as a discrete property (i.e., symmetric or asymmetric). The new parameter introduced in this paper was developed to measure symmetry in gradual levels, and it is applied here to the case of wavelet function selection.

III. SYMMETRIC DISTANCE COEFFICIENT
A function f(t) is considered symmetric around t = tc if

\[ f(t_c + \tau) = f(t_c - \tau), \quad \forall \tau \in [-t_s, t_s] \]  (2)

where τ is any value in the time support [-t_s, t_s]. Equation (2) strictly distinguishes between symmetric and asymmetric functions. The SDC measures the degree of symmetry in gradual levels. It is inspired by the concept of symmetric distance in image analysis, which measures the minimum effort required to modify a pattern in an image into its symmetric shape [17-19].

SDC is the ratio between the energy of the difference between left and right side of a function around its center t = tc, and the energy of the function within the same time support of t = [tc-ts , tc+ts ]. It is illustrated in Fig. 1.

\[ SDC(t_c, t_s) = \frac{\int_{0}^{t_s} \left[ f(t_c + \tau) - f(t_c - \tau) \right]^2 d\tau}{\int_{t_c - t_s}^{t_c + t_s} f(\tau)^2\, d\tau} \]  (3)

Figure 1. A nearly even-symmetric function at t = tc with time support [tc - ts, tc + ts]. The SDC(tc, ts) is the ratio of the energy of (A - B) to the total energy of A and B.

The range of the SDC in (3) is [0, 2], which is counter-intuitive. The equation is therefore modified to shift its range to [-1, 1], where -1 represents odd-symmetry, 0 represents asymmetry, and 1 represents even-symmetry, as in (4):

\[ SDC(t_c, t_s) = 1 - \frac{\int_{0}^{t_s} \left[ f(t_c + \tau) - f(t_c - \tau) \right]^2 d\tau}{\int_{t_c - t_s}^{t_c + t_s} f(\tau)^2\, d\tau} \]  (4)

The SDC is independent of input-signal amplification. Suppose a signal f(t) is even-symmetric around t = 0, i.e. f(t) = f(-t) for all t in [-t_s, t_s]. Another signal, g(t), is equal to f(t) but asymmetric due to an added term e(t), t in (0, t_s]:

\[ g(t) = \begin{cases} f(t), & t \in [-t_s, 0] \\ f(t) + e(t), & t \in (0, t_s] \end{cases} \]  (5)

then the SDC at t = 0 with support [-t_s, t_s] is:

\[ SDC(0, t_s) = 1 - \frac{\int_{0}^{t_s} e(\tau)^2\, d\tau}{\int_{0}^{t_s} \left[ 2 f(\tau)^2 + 2 f(\tau) e(\tau) + e(\tau)^2 \right] d\tau} \]  (6)

If the signal is then amplified by a factor of A:

\[ g(t) = \begin{cases} A f(t), & t \in [-t_s, 0] \\ A f(t) + A e(t), & t \in (0, t_s] \end{cases} \]  (7)


the SDC becomes

\[ SDC(0, t_s) = 1 - \frac{\int_{0}^{t_s} A^2 e(\tau)^2\, d\tau}{\int_{0}^{t_s} A^2 \left[ 2 f(\tau)^2 + 2 f(\tau) e(\tau) + e(\tau)^2 \right] d\tau} \]  (8)

By canceling the amplification factor A², equation (8) shows that the SDC measures the level of symmetry independently of the strength of the input signal or transient.

The ability to measure symmetry independently of signal strength may create problems if the input signal has many unexpected small ripples (see Fig. 3). To avoid measuring the SDC of unwanted small transients, a threshold level th is added to the equation so that transients with energy less than the threshold are ignored.

\[ SDC_{th}(t_c, t_s, th) = \begin{cases} SDC(t_c, t_s), & \text{if } \int_{t_c - t_s}^{t_c + t_s} f(\tau)^2\, d\tau \geq th \\ 0, & \text{else} \end{cases} \]  (9)

Equations (8) and (9) are the mathematical definition of the SDC.
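The definition above is straightforward to discretize. The sketch below is a minimal NumPy illustration of (4) and (9); the function name and sample-index interface are ours, not the paper's, and the integrals are approximated by plain sample sums, so the threshold th is expressed in those units as well.

```python
import numpy as np

def sdc(f, tc, ts, th=0.0):
    """Symmetric distance coefficient of signal f around sample index tc.

    Discrete sketch of Eqs. (4) and (9): one minus the ratio of the
    energy of the left/right difference to the windowed signal energy.
    Returns a value in [-1, 1]: 1 for even-symmetry, -1 for odd-symmetry,
    0 for asymmetry (or when the windowed energy is below th).
    """
    f = np.asarray(f, dtype=float)
    left = f[tc - ts:tc + 1][::-1]      # f(tc - tau), tau = 0..ts
    right = f[tc:tc + ts + 1]           # f(tc + tau), tau = 0..ts
    energy = np.sum(f[tc - ts:tc + ts + 1] ** 2)
    if energy < th:                     # Eq. (9): ignore weak transients
        return 0.0
    return 1.0 - np.sum((right - left) ** 2) / energy

# An even-symmetric bump scores near +1; an odd transient scores near -1.
t = np.linspace(-1, 1, 2001)
bump = np.exp(-(t / 0.1) ** 2)
print(round(sdc(bump, tc=1000, ts=200, th=1e-6), 3))   # 1.0
```

Because the ratio is scale-free, multiplying f by any factor A leaves the returned value unchanged, mirroring the cancellation in (8).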

Fig. 2 shows SDCth(t, 0.25, 0.1) for even-symmetric, nearly even-symmetric, odd-symmetric, nearly odd-symmetric, and asymmetric transients located at t = 0.75 s, 1.75 s, 2.75 s, 3.75 s, and 4.75 s, respectively. The SDC of the first transient, SDCth(0.75, 0.25, 0.1), is 0.99, meaning that the transient is 99% even-symmetric. The second transient is only 90% even-symmetric, while the third to fifth transients are 99% odd-symmetric, 92% odd-symmetric, and 0% symmetric, respectively. The figure shows how the SDC can measure various types and levels of symmetry.

Fig. 3 shows the necessity of using a threshold. It shows the SDC of a signal with a symmetric transient at t = 0.475 s plus small noise. When the threshold was set to zero (no threshold), the SDC of the investigated transient was obscured by a large number of SDC values from the noise (Fig. 3b). When a threshold value of 5 was used, the SDC values of the noise were removed (Fig. 3c).

Figure 2. The SDC of various types and levels of symmetry.

Figure 3. a) A signal with an even-symmetric transient added with noise. b) The SDCth(t,0.125,0). c) The SDCth(t,0.125,5).

IV. MEASURING PROCEDURES AND EXPERIMENT
The procedure to measure similarity between a wavelet function ψ_{s,σ} and a transient located at t = tc in an input signal f(t) is as follows:
1. Apply the WT to f(t) using the selected wavelet function ψ_{s,σ} over the whole time span of f(t).
2. Calculate SDCth(tc, ts, th) of the wavelet coefficients at time location σ = tc. The time support parameter ts should be half of the selected wavelet function's time support, or half of the time support of the transient of interest. Use an appropriate threshold th to ignore small ripples in the wavelet coefficients.
3. The value of SDCth(tc, ts, th) indicates the degree of shape similarity between the selected wavelet function and the transient. The closer the value is to 1, the more similar the two functions.
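The three steps can be sketched end to end in plain NumPy. No implementation is given in the paper, so the fragment below is only illustrative: the single-scale wavelet transform is computed directly as a correlation with a sampled Mexican-hat wavelet (even-symmetric, like the Mexh function used later), and the SDC of the resulting coefficients is evaluated at the transient center. The sampling rate, scale, and support values are invented for the demonstration.

```python
import numpy as np

fs = 1000.0                             # sampling rate in Hz (assumed)
t = np.arange(0, 1, 1 / fs)

# Input signal: a single even-symmetric transient centered at t = 0.5 s
f = np.exp(-((t - 0.5) / 0.02) ** 2)

# Step 1: wavelet coefficients at one scale, via direct correlation
# with a sampled Mexican-hat wavelet.
tau = (np.arange(201) - 100) / fs       # kernel support [-0.1, 0.1] s
s = 0.02                                # scale chosen to match the bump
psi = (1 - (tau / s) ** 2) * np.exp(-0.5 * (tau / s) ** 2)
coeffs = np.correlate(f, psi, mode='same')

# Step 2: SDC of the coefficients around the transient center
tc, ts = 500, 100                       # center index, half support
left = coeffs[tc - ts:tc + 1][::-1]
right = coeffs[tc:tc + ts + 1]
num = np.sum((right - left) ** 2)
den = np.sum(coeffs[tc - ts:tc + ts + 1] ** 2)
similarity = 1.0 - num / den

# Step 3: a value close to 1 indicates a good shape match
print(round(similarity, 3))             # 1.0 for this matched pair
```

Repeating step 1 with a poorly matched or asymmetric kernel lowers the value, which is exactly the ranking used in the ECG experiment below.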

The parameter and procedure were applied to an electrocardiograph (ECG) signal [7, 20] to find the wavelet function with the most similar shape to T-waves (marked 'T' in Fig. 4a). An ECG signal downloaded from the MIT-BIH PhysioBank database, record aami3a.dat (AAMI-EC13) [21], was used in this experiment (Fig. 4a). The signal was re-sampled at a sampling rate of 360 Hz. Wavelet coefficients were calculated using various wavelet functions; the results of only three wavelet functions (db2 at scale 28, Gaus2 at scale 8, and Mexh at scale 12) are presented in this paper due to their significant results. The SDC calculation was then applied to all wavelet coefficients with a half time support of 0.158 s and a threshold value of 0.1. The results (Fig. 4b-d) suggest that db2 at scale 28 is the wavelet function with the most similar shape to the T-waves. The average SDCth(t, 0.158, 0.1) of the four T-waves calculated using the db2 wavelet at scale 28 was 0.915, higher than that of the Gaus2 wavelet at scale 8 and the Mexh wavelet at scale 12 (0.88 and 0.82, respectively). The experiment suggests that the parameter and procedure can be used to find the wavelet function with the most similar shape to a transient of interest.

V. CONCLUSION
This paper introduces a new parameter called the 'symmetric distance coefficient' (SDC) and a procedure to measure shape similarity between a wavelet function and a transient in a signal. The parameter measures the similarity by measuring the symmetric pattern in the wavelet coefficients. It is based on the fact that the wavelet transform of a transient that has the same shape as a wavelet function is always symmetric. The parameter is useful as a guide for selecting a suitable wavelet function for a specific transient in a signal or application.

The paper also presents an experiment to demonstrate the procedure and the parameter's ability to measure shape similarity between a wavelet function and transients in a signal. The experiment demonstrates how the parameter can be used to find the wavelet function with the most similar shape to T-waves in an ECG signal.

Figure 4. a) ECG signal (MIT-BIH aami3a.dat - AAMI-EC13) with sampling rate 360 Hz; b)-d) SDCth(t,0.158,0.1) calculated on wavelet coefficients using db2 at scale 28, Gaus2 at scale 8, and Mexh at scale 12, respectively.

REFERENCES
[1] I. Daubechies, Ten Lectures on Wavelets. Pennsylvania: SIAM, 1992.
[2] S. Mallat, A Wavelet Tour of Signal Processing. London: Academic Press, 1999.

[3] S. Mallat and W. L. Hwang, "Singularity Detection and Processing with Wavelets," IEEE Transaction on Information Theory, vol. 38, pp. 617-643, 1992.

[4] N. D. Pah and D. K. Kumar, "Thresholding Wavelet Networks for Signal Classification," International Journal of Wavelets, Multiresolution, and Information Technology, vol. 1, pp. 243-261, 2003.

[5] D. K. Kumar and N. D. Pah, "Neural Network and Wavelet Decomposition for Classification of Surface Electromyography," Electromyography and Clinical Neurophysiology, vol. 40, 2000.

[6] J.-K. Zhang, T. N. Davidson, and K. M. Wong, "Efficient Design of Orthonormal Wavelet Bases for Signal Representation," IEEE Transaction on Signal Processing, vol. 52, pp. 1983-1996, 2004.

[7] C. Li, C. Zheng, and C. Tai, "Detection of ECG Characteristic Points Using Wavelet Transform," IEEE Transaction on Biomedical Engineering, vol. 42, pp. 21-28, 1995.

[8] S. Karlsson, J. Yu, and M. Akay, "Time-Frequency Analysis of Myoelectric Signals During Dynamic Contractions: A Comparative Study," IEEE Transaction on Biomedical Engineering, vol. 47, pp. 228-238, 2000.

[9] S. Kadambe and G. F. Boudreaux-Bartels, "Application of the Wavelet Transform for Pitch Detection of Speech Signals," IEEE Transaction on Information Theory, vol. 38, pp. 917-924, 1992.

[10] A. J. Hoffman and C. J. A. Tollig, "The Application of Classification Wavelet Networks to the Recognition of Transient Signals," IEEE, pp. 407-410, 1999.

[11] V. J. Samar, "Wavelet Analysis of Neuroelectric Waveforms: A Conceptual Tutorial," Brain and Language, pp. 7-60, 1999.

[12] R. Q. Quiroga, O. W. Sakowitz, E. Basar, and M. Schurmann, "Wavelet Transform in the Analysis of the Frequency Composition of Evoked Potentials," Brain Research Protocols, vol. 8, pp. 16-24, 2001.

[13] R. Zhang, G. McAllister, B. Scotney, S. McClean, and G. Houston, "Combining Wavelet Analysis and Bayesian Networks for Classification of Auditory Brainstem Response," IEEE Transaction on Information Technology in Biomedicine, vol. 10, pp. 458-466, 2006.

[14] P. Zhou and W. Z. Raymer, "Motor Unit Action Potential number Estimation in the Surface Electromyogram: Wavelet matching Method and Its Performance Boundary," presented at The 1st International IEEE EMBS Conference on Neural Engineering, Capri Island, Italy, 2003.

[15] N. D. Pah, "Measuring Wavelet Suitability with Symmetric Distance Coefficient," presented at Seminar Nasional Soft Computing, Intelligent System and Information Technology, Surabaya, Indonesia, 2005.

[16] P. Abry and P. Flandrin, "Multiresolution Transient Detection," presented at Proceedings of the IEEE-SP International Symposium on Time-Frequency and Time-Scale Analysis, 1994.

[17] Y. Keller and Y. Shkolnisky, "A Signal Processing Approach to Symmetry Detection," IEEE Transaction on Image Processing, vol. 15, pp. 2198-2206, 2006.

[18] I. Choi and S.-I. Chien, "A Generalized Symmetry Transform with Selective Attention Capability for Specific Corner Angles," IEEE Signal Processing Letters, vol. 11, pp. 255-257, 2004.

[19] A. Imiya, T. Ueno, and I. Fermin, "Symmetry Detection by Random Sampling and Voting Process for Motion Analysis," International Journal of Pattern Recognition, vol. 17, pp. 83-125, 2003.

[20] J. S. Sahambi, S. N. Tandon, and R. K. P. Bhatt, "Using Wavelet Transform for ECG Characterization," in IEEE Engineering in Medicine and Biology, 1997, pp. 77-83.

[21] A. L. Goldberger, L. A. N. Amaral, L. Glass, J. M. Hausdorff, P. C. Ivanov, R. G. Mark, J. E. Mietus, G. B. Moody, C. K. Peng, and H. E. Stanley, "PhysioBank, PhysioToolkit, and PhysioNet: Components of a New Research Resource for Complex Physiologic Signals," Circulation, vol. 101, pp. e215-e220, 2000.


An Implementation of Investment Analysis using Fuzzy Mathematics

Novriana Sumarti and Qino Danny
Industrial and Financial Mathematics Research Group, Institut Teknologi Bandung, Indonesia

Email: [email protected]

ABSTRACT
Buckley [1] developed fuzzy analogues of problems in financial mathematics, such as the fuzzy present value and fuzzy future value of fuzzy cash amounts, using fuzzy interest rates. A fuzzy approach to problems in financial mathematics accommodates the real-world fact that future cash amounts and interest rates are quite often only estimates. In this paper we implement the fuzzy analysis in valuing a new investment in a photocopy service in Bandung. In valuing this investment, we analyze several scenarios related to the choice of machine types, the appropriate time to buy goods, and the need for an additional sales activity on top of the main business. Using data from an existing, similar business, we can determine the best investment scenario to choose.

KEYWORDS
Financial mathematics; cash flow analysis; investment valuation; fuzzy logic.

I. INTRODUCTION
Uncertainty and vagueness in investment valuation are sometimes difficult to deal with. The theory of fuzzy sets was specifically designed to represent them mathematically and to provide formalized tools for dealing with this imprecision. The main characteristic of a fuzzy system is its suitability for uncertain or approximate reasoning, especially for systems whose mathematical model is difficult to derive. Furthermore, fuzzy logic allows decision making with estimated values under incomplete or uncertain information. Fuzzy systems have been used in decision-making processes in engineering since they were proposed by Lotfi Zadeh in his paper [7]. In financial mathematics, Buckley [1] did the pioneering work, starting in 1987, explaining the fuzzy point of view of some important definitions and equations of financial mathematics. Nowadays there is much research on the implementation of fuzzy logic in this area, such as option pricing using the Black-Scholes formula [4], real options valuation [2], insurance [6], and the determination of factors in macroeconomics [3]. In this paper, fuzzy analysis is used in valuing a new investment in a photocopy service, a promising business in Indonesia, especially around schools and universities. The valuation of this business uses data from an existing business in Bandung, Indonesia.

II. FINANCE IN FUZZY TERMS

Denote by \bar{A} = (a_1, a_2, a_3) and \bar{B} = (b_1, b_2, b_3) two triangular fuzzy numbers. Another way to write fuzzy numbers [5] uses the differences between the middle number and the left and right numbers: \bar{A} = (a_2, a_2 - a_1, a_3 - a_2), or simply \bar{A} = (a, \alpha_1, \alpha_2), and \bar{B} = (b_2, b_2 - b_1, b_3 - b_2), or \bar{B} = (b, \beta_1, \beta_2). The arithmetic operations between them become

\[ \bar{A} \oplus \bar{B} = (a + b,\ \alpha_1 + \beta_1,\ \alpha_2 + \beta_2), \]
\[ \bar{A} \ominus \bar{B} = (a - b,\ \alpha_1 + \beta_2,\ \alpha_2 + \beta_1), \]
\[ \bar{A} \otimes \bar{B} = (ab,\ a\beta_1 + b\alpha_1,\ a\beta_2 + b\alpha_2), \]
\[ \frac{1}{\bar{A}} = \left( \frac{1}{a},\ \frac{\alpha_2}{a(a + \alpha_2)},\ \frac{\alpha_1}{a(a - \alpha_1)} \right). \]
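Numerically, the spread-notation addition and subtraction above reduce to componentwise sums. The tuple layout (a, alpha1, alpha2) and the helper names below are our own sketch, not code from the paper.

```python
# Triangular fuzzy numbers in spread notation (a, alpha1, alpha2),
# i.e. support [a - alpha1, a + alpha2] with peak at a.

def f_add(A, B):
    """Fuzzy addition: middles add, spreads add side by side."""
    a, a1, a2 = A
    b, b1, b2 = B
    return (a + b, a1 + b1, a2 + b2)

def f_sub(A, B):
    """Fuzzy subtraction: the left spread picks up B's right spread."""
    a, a1, a2 = A
    b, b1, b2 = B
    return (a - b, a1 + b2, a2 + b1)

A = (23, 0.5, 1)   # e.g. the expensive machine price, in million IDR
B = (10, 0.5, 1)   # the cheap machine price
print(f_add(A, B))  # (33, 1.0, 2)
print(f_sub(A, B))  # (13, 1.5, 1.5)
```

Note how subtraction widens the result: uncertainty never cancels, which is the main practical difference from crisp arithmetic.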

Now we discuss the future value after n periods: \bar{S}_n = \bar{A} \otimes (1 \oplus \bar{r})^n, where \bar{A} is the fuzzy amount invested today at fuzzy rate \bar{r} per period. Note that \otimes and \oplus are the fuzzy multiplication and addition operations. In present value analysis, we define two formulae:
1. PV_1(\bar{S}, n) = \bar{A} if and only if \bar{A} is a fuzzy number and \bar{A} \otimes (1 \oplus \bar{r})^n = \bar{S}.
2. PV_2(\bar{S}, n) = \bar{A} if and only if \bar{A} is a fuzzy number and \bar{A} = \bar{S} \otimes (1 \oplus \bar{r})^{-n}.


Theorem. Let \mu_1(x \mid \bar{S}, n) and \mu_2(x \mid \bar{S}, n) denote the membership functions of PV_1(\bar{S}, n) and PV_2(\bar{S}, n), respectively. If \bar{S} is negative, PV_1(\bar{S}, n) exists and its membership function \mu_1(x \mid \bar{S}, n) is determined by

\[ f_i(y \mid \bar{A}) = f_i(y \mid \bar{S}) \left( 1 + f_i(y \mid \bar{r}) \right)^{-n}, \quad i = 1, 2. \]

On the other hand, if \bar{S} is positive, PV_2(\bar{S}, n) exists and its membership function \mu_2(x \mid \bar{S}, n) is determined by

\[ f_i(y \mid \bar{A}) = f_i(y \mid \bar{S}) \left( 1 + f_{3-i}(y \mid \bar{r}) \right)^{-n}, \quad i = 1, 2. \]

The proof of the above theorem can be found in [1]. In the valuation of an investment we need to compute the Net Present Value (NPV), the difference between the present value of the future cash flows and the amount paid initially for the investment. If its value is positive, the investment should be made; otherwise it traditionally should not be made. A real option analysis can be performed in addition to the NPV analysis if the value is negative. The fuzzy formula of the NPV is defined as in [1]. Let \bar{A} = (\bar{A}_0, \bar{A}_1, \ldots, \bar{A}_n) denote the cash flow, where \bar{A}_0 is negative and \bar{A}_j, j = 1, 2, \ldots, n, can be either positive or negative. Let \bar{r} denote a positive interest rate. The fuzzy NPV is

where 0A is positive and , 1,2, ,jA j n can be either positive of negative. Let r denotes a positive interest rate. The fuzzy NPV is

0 ( )1

( , ) ( , )n

ik ii

NPV A n A PV A i

,

where is fuzzy addition and

1, negative( )

2, positive

i

i

Ak i

A

.
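A minimal sketch of this NPV recipe (our own illustration, not the paper's code): cash amounts are triangular fuzzy numbers stored as endpoint triples (a1, a2, a3), and for simplicity the interest rate is crisp, in which case PV1 and PV2 coincide and every amount is discounted endpoint-wise by (1 + r)^(-i).

```python
# Fuzzy NPV sketch: triangular fuzzy cash amounts as endpoint triples
# (a1, a2, a3), discounted at a crisp rate r (a simplification of the
# fuzzy-rate formulas in the text).

def pv(S, r, n):
    """Present value of a fuzzy amount S due after n periods."""
    d = (1.0 + r) ** (-n)
    return tuple(s * d for s in S)      # d > 0 preserves a1 <= a2 <= a3

def f_add(A, B):
    """Fuzzy addition: componentwise on endpoint triples."""
    return tuple(a + b for a, b in zip(A, B))

def npv(cash_flows, r):
    """cash_flows[0] is the initial amount; later ones are discounted."""
    total = cash_flows[0]
    for i, A in enumerate(cash_flows[1:], start=1):
        total = f_add(total, pv(A, r, i))
    return total

# Hypothetical outlay of ~50 million IDR, then two uncertain inflows
flows = [(-51, -50, -49), (19, 20, 21), (29, 30, 31)]
print(npv(flows, r=0.10))
```

The result is itself a fuzzy number, so "NPV is positive" becomes a graded statement: the wider the spread, the less certain the accept/reject decision, which is exactly how the scenario ranges below should be read.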

Another analysis used to value an investment is the Internal Rate of Return (IRR). This is the rate of return at which the NPV equals zero for a given number of periods n. The higher the IRR, the more desirable it is to undertake the investment. Alternatively, it can be used to calculate the time at which the initial capital for the investment is paid back by the cash flows. In the next section we discuss the two main plans designed for this business. Plan A considers the type of the photocopy machines, while Plan B considers the time and manner of purchasing goods for the business. Using cash inflow and outflow data from an existing photocopy business in Bandung, Indonesia, all business plans can be valuated.

III. VALUATION OF PLAN A
Plan A consists of three choices of the photocopy machines to be used. There are two types of photocopy machines based on their prices. The expensive type is the IRR 5000, with price (23, 0.5, 1) million IDR; the cheap one is the 6050, with price (10, 0.5, 1) million IDR. However, there is a significant difference in operating cost: the cheap type incurs a higher cost than the expensive one. The three choices in Plan A are shown in Figure 1 below.

Figure 1. Plan A

The cash outflow consists of the cost of purchasing two machines, the venue rental, electricity, the workers' salaries, and equipment for book binding. In addition to the photocopy service, there is a small shop selling stationery, snacks, and beverages, including ice cream. Heuristically, this small shop can significantly increase the income, because the snacks and ice cream attract customers who wait for their work to be finished. The cost of the shop is also considered a cash outflow. Analyzing choices 1 to 3 yields several conclusions. The initial capital needed for the choice of two IRR 5000 machines is the largest among the choices; the second highest is the mixed choice of one IRR 5000 and one 6050. In terms of the time for the capital to be paid back by the cash flows, both choices need 8 years. On the other hand,

Type of machines: 1. two IRR 5000; 2. two 6050; 3. one IRR 5000 and one 6050.


the choice of two 6050 machines could pay it back in year 6. From these results, the latter is chosen in designing the scenarios in the next section.

IV. VALUATION OF PLAN B
In Plan B we consider four different scenarios, described in Table 1. The capital owned by the investor is only 50 million IDR; the shortage of cash in each plan is covered by a credit scheme from a bank.

Table 1. Scenarios of Plan B (yI = first year, yIII = third year)

Description        | Scenario 1 | Scenario 2       | Scenario 3       | Scenario 4
2 x 6050 machines  | Cash       | Cash, yI & yIII  | Cash, yI & yIII  | 3y credit
2 workers          | yI         | yI & yIII        | yI & yIII        | yI
1 motorcycle       | Cash       | 2y credit, yI    | 2y credit, yI    | 2y credit, yI
1 car              | Cash       | 5y credit, yI    | 5y credit, yI    | 5y credit, yI
Small shop         | Cash       | Cash             | Cash, yIII       | 2y credit, yI

Scenario 1: everything is paid in cash in the first year.
Scenario 2: cash is paid in the first year for one machine, one worker, and the shop equipment; in the third year, cash is paid for another machine and worker. A motorcycle and a car are bought on credit schemes of 2 and 5 years, respectively.
Scenario 3: the same as scenario 2, except that the shop opens in the third year.
Scenario 4: all goods are bought in the first year on credit schemes of different periods.

Calculating cash inflow and outflow over 8 years, the results show that scenario 1 gives an NPV of around 88-616 million IDR, scenario 2 gives 60-576 million IDR, scenario 3 gives 107-542 million IDR, and scenario 4 gives 75-610 million IDR. To conclude, the highest profit comes from scenario 1, where all goods are purchased and paid for in cash from the first year. However, the debt arising from the cash shortage in this scenario is also the highest, meaning that its risk is also the highest.

V. CONCLUSIONS
Fuzzy analysis of the valuation of an investment in a photocopy service gives more realistic results due to the characteristic of fuzzy logic that allows uncertainty and imprecision in the data. The results show that the best choice of machine type is two of the cheap machines, and that the best scenario, with the highest profit, also comes from the riskiest investment.

REFERENCES
[1] J.J. Buckley, The fuzzy mathematics of finance, Fuzzy Sets and Systems 21 (3) (1987) 257-273.

[2] C. Carlsson, A fuzzy approach to real option valuation, Fuzzy Sets and Systems 139 (2003) 297-312.

[3] C. Kahraman (editor), Fuzzy Engineering Economics with Applications, Springer-Verlag, Berlin, 2008.

[4] C.F. Lee, G.H. Tzeng and S.Y. Wang, A new application of fuzzy set theory to the Black-Scholes option pricing model, Expert Systems with Applications 29 (2005) 330-342.

[5] G.J. Klir and B. Yuan, Fuzzy Sets and Fuzzy Logic: Theory and Applications, Prentice-Hall, New Jersey, 1995.

[6] A.F. Shapiro, Fuzzy logic in insurance, Insurance: Mathematics and Economics 35 (2004) 399-424.

[7] L.A. Zadeh, "Fuzzy sets", Information and Control 8 (3) (1965) 338-353.


Simulation of Susceptible Areas to the Impact of Storm Tide Flooding along Northern Coasts of Java

Nining Sari Ningsih(1), Safwan Hadi(2), Dwi F. Saputri(3), Farrah Hanifah(4), and Amanda P. Rudiawan(5)

(1)Research Group of Oceanography, Faculty of Earth Sciences and Technology, ITB, Bandung, West Java,

Indonesia Email: [email protected]

(2)Research Group of Oceanography, Faculty of Earth Sciences and Technology, ITB, Bandung, West Java, Indonesia

Email: [email protected] (3)Research Group of Oceanography, Faculty of Earth Sciences and Technology, ITB, Bandung, West Java,

Indonesia Email: [email protected]

(4)Study Program of Oceanography, Faculty of Earth Sciences and Technology, ITB, Bandung, West Java, Indonesia

Email: [email protected] (5)Master Program in Geodetic and Geomatic Engineering, Faculty of Earth Sciences and Technology, ITB,

Bandung, West Java, Indonesia Email: [email protected]

ABSTRACT
A storm tide event, the sum of the astronomical tides and the storm surges generated by the Cyclones Hagibis and Mitag in November 2007, has been simulated to quantify wave height, run-up, and inundation along the northern coasts of Java using a two-dimensional ocean model. The model was run for the period from 12 November 2007 until 13 December 2007. The model domain covers the South China Sea, Philippine waters, and the Pacific Ocean, the regions where both cyclones were generated, as well as the Java Sea and the coastal areas of the northern part of Java. The storm tide event was simulated by imposing tidal elevations at the open boundaries together with 6-hourly wind and air pressure data. The tidal elevation data were derived from the tide model driver (TMD) of Padman and Erofeeva (2005), while the wind and air pressure data were obtained from NCEP (National Centers for Environmental Prediction). The simulation results show that the storm tides attacked and inundated coastal areas of the northern part of Java. Along these coastal areas, the highest surge occurred at Losari, in Central Java Province. The maximum distances of storm tide flooding in each province of Java occurred at Banten Bay (Banten Province), Penjaringan (DKI Jakarta Province), Cilamaya (West Java Province), Semarang (Central Java Province), and Surabaya and Sampang (East Java Province).

KEYWORDS
Storm surges; storm tide; inundation; run-up.

I. INTRODUCTION
Severe storm surges generated by tropical cyclones in the South China Sea and the Pacific and Indian Oceans have often hit the northern coasts of Java. For example: (1) severe flooding caused by high astronomical tides and the Cyclones Hagibis and Mitag inundated coastal areas of the northern part of Java during 24-27 November 2007, including Jakarta, Semarang, Tegal, and Brebes, damaging houses and infrastructure, disturbing access to the Soekarno-Hatta International Airport in Jakarta, and forcing people to leave their homes; and (2) in March 2008, the Karimunjawa and Jepara waters in the Java Sea were battered by the Cyclone Nicholas, generated in Australian waters, damaging fishing ships, causing several ship accidents, and disturbing tourism activities and food deliveries to Karimunjawa Island due to the lack of sea transportation. In the case of the flooding of 24-27 November 2007, the flooding along the coastal area was worse than in previous flooding events, because the surges coincided with high tides caused by spring tide and perigee conditions. In that case, the advancing surge generated by the cyclones


Hagibis and Mitag combined with the astronomical tide to create a storm tide. Storm tides are the sum of the astronomical tides (the daily changes in water level due to gravitational interactions between the earth, moon, and sun), the storm surges (the transient changes due to the effects of a storm), and long-term changes (sea level rise, seasonal and decadal changes). A storm tide can cause severe flooding in coastal regions, especially in low-lying areas, frequently causing loss of life and substantial economic damage. The degree of flooding depends on the storm intensity, fluctuations in the astronomically generated tides, and the slope of the continental shelf. Since coastal zones are at risk from storm tides, it is necessary to study the crucial coastal zones affected by storm tide flooding, especially along the northern coasts of Java. Moreover, accurate estimates of the maximum storm surge (MSS) and of the areas inundated (flooded) by storm tides are necessary, primarily from the viewpoint of flood defense. As far as we know, storm tide investigations along the northern coasts of Java are still limited, especially studies of coastal zones vulnerable to storm tide flooding. Therefore, we are interested in studying the influence of storm surges combined with the astronomical tide along the northern coasts of Java. In this study we considered only storm tides as the sum of the astronomical tides and storm surges; hence, we neglected the influence of the long-term changes. Estimates of surge height, water run-up height, and inundation areas due to the impact of tides and storm surges were obtained by considering the Cyclone Hagibis during 19-27 November 2007 and the Cyclone Mitag during 20-27 November 2007 as the generating forces of the storm surges, and by using a barotropic hydrodynamic numerical model (the 2D version of Mike 21, developed by [3]).

II. MATERIAL AND METHODS
A finite volume method is used in the applied model.
The method allows a substantially flexible discretization of the model domain. An unstructured grid comprising triangular or quadrilateral elements was used in the horizontal plane [3]. The computational area was set from 17°05' S - 20°30' N and 99°10' E - 140°00' E, which covers the South China Sea, Philippine waters, and the Pacific Ocean, the regions where both cyclones were generated, as well as the Java Sea and the coastal areas of the northern part of Java (as shown in Figure 1). The grid sizes vary from about 200 km in the Pacific Ocean, South China Sea, and Philippine waters to a fine resolution of about 5 km in the coastal area. Since the present study focuses on the northern coasts of Java, a higher grid resolution of about 150 m is used there in order to include inundation processes in the model simulation.

Figure 1. Computational domain, bathymetry (in meters), and model meshes. (Source of map: http://www.wikimapia.org)

The storm tide event, generated by the tides, the Cyclone Hagibis (Figure 2a) with wind speeds of about 17.99-35.98 m/s, and the Cyclone Mitag (Figure 2b) with wind speeds of about 17.99-41.12 m/s, was simulated by imposing tidal elevations at the open boundaries, winds, and air pressures. The tidal elevation data were derived from the tide model driver (TMD) of [5]. The TMD is a Matlab

package for accessing the harmonic constituents, and for making predictions of tide height and currents. It has 1/4° x 1/4° resolution and eight tidal constituents (M2, S2, N2, K2, K1, O1, P1, Q1) and was used for predicting the tidal elevations at the open boundaries. We used 6-hourly wind and air pressure data with 2.5° x 2.5° resolution obtained from NCEP (National Centers for Environmental Prediction).

Figure 2. Depression tracks of (a) the Cyclone Hagibis during 19-27 November 2007 and (b) the Cyclone Mitag during 20-27 November 2007. (Source: http://agora.ex.nii.ac.jp/digital typhoon/year/wnp/2007.html.en)

The numerical simulation was carried out for 31 days (12 November - 13 December 2007), covering the occurrence of both cyclones. Zero normal flow was applied at solid boundaries, while along the open boundaries tidal elevations varying in both time and space were specified to accommodate the tidal forcing, and a radiation boundary condition was used for currents. When wind forcing was included, the tilt facility of the applied model was used to improve the boundary conditions. Tilting provides a correction of the water level at each point along the boundary based on the steady-state Navier-Stokes equations; a detailed description of this tilt facility can be found in [3]. Further, the flooding and drying (FAD) capabilities of the model were enabled to simulate the water run-up and inundation processes of the storm tides caused by the tides and both cyclones. Bathymetry was generated using the 1-minute resolution GEBCO data [4], and topography was resolved using the 90 m resolution Digital Elevation Model (DEM) of the Shuttle Radar Topography Mission (SRTM) [7]. However, a modified coastline (taken from navigational maps) was constructed for the northern coasts of Java to take into account specific topographic features there, which are not described accurately enough in the standard topography. The parameterization of bottom friction is based on Manning's approach, with the friction coefficient n ranging from 0.01 for smooth concrete to 0.06 for poor natural channels [1]. Since no other information on bottom friction is available over the computational domain, we used a constant Manning number of 0.03125 m^(-1/3) s, as suggested by [3]. Horizontal diffusion is needed for numerical stability; it is of Smagorinsky type with a Smagorinsky constant of 0.28.

III. SIMULATION RESULTS
3.1. Model Verification
Following [6], we compared the simulated storm surges, obtained by removing the tidal part from the storm tide simulation, with weekly TOPEX Poseidon Sea Level Anomaly (SLA) data (http://apdrc.soest.hawaii.edu). A sea-level anomaly is the difference between the total sea level and the average sea level for that time of year [2].
Figure 3 shows the validation of the simulated storm surge height against the TOPEX Poseidon SLA over the whole computational domain, whereas an example of the surge height validation at a single location, Penjaringan (marked by 5 in Figure 5), can be seen in Figure 4.


(Source of map: http://www.wikimapia.org)


Proceeding of Conf. on Industrial and Appl. Math., Bandung-Indonesia 2010


Figure 3. Validation of the computed storm surge height at 00:00 UTC on December 5, 2007. (a) Simulated results (in cm); arrows indicate speed and direction of winds (m/s). (b) The TOPEX Poseidon SLA (in cm). (Source: www.apdrc.soest.hawaii.edu)

In general, the storm surge heights obtained from the simulation show good agreement with those of the TOPEX Poseidon SLA. However, the simulated storm surge was smaller than that of the TOPEX Poseidon SLA, probably because the estimates of the parameters used in the model (e.g., the bottom- and wind-drag coefficients) are not adequate.

Figure 4. Validation of the computed storm surge height at 00:00 UTC on 21 and 28 November 2007, and on 5 and 12 December 2007, at Penjaringan (marked by Point 5 in Figure 5). Blue line: simulation results; red dots: TOPEX Poseidon SLA; red square: period of the cyclone Hagibis; green square: period of the cyclone Mitag.

3.2. Surge Heights and Inundation Areas
Figure 5 shows the 36 areas most affected by the storm surge generated by the Cyclones Hagibis and Mitag, namely: (a) along the northern coast of Banten Province: (1) Merak, (2) Banten Bay, (3) Tirtayasa, and (4) Mauk; (b) along the northern coast of Jakarta Province: (5) Penjaringan and (6) Cilincing; (c) along the northern coast of West Java Province: (7) Karawang Cape, (8) Tambaksumur, (9) Sungaibuntu, (10) Cilamaya, (11) Pamanukan, (12) Indramayu, (13) Balongan, and (14) Cirebon; (d) along the northern coast of Central Java Province: (15) Losari, (16) Tegal, (17) Pekalongan, (18) Gringsing, (19) Semarang, (20) Jepara, (21) Bumimulyo, and (22) Rembang; (e) along the northern coast of East Java Province-1 (Main Land): (23) Tambakboyo, (24) Labuhan, (25) Pangkah Cape, (26) Surabaya, (27) Sidoarjo, (28) Bangil, (29) Probolinggo, (30) Gending, (31) Gerinting Cape, and (32) Panarukan; (f) along the northern coast of East Java Province-2 (Madura Island): (33) Pamekasan, (34) Sampang, (35) Modung Cape, and (36) Sapulu.



In addition, Figure 5 also shows the maximum storm surge height (MSSH) at the 36 locations (marked by Points 1 – 36) which are most prone to the impact of the storm surge generated by the Cyclones Hagibis and Mitag. It can be seen from Figure 5 that Losari (marked 15 in the figure), located on the northern coast of Central Java Province, experienced the highest surge among the stations, about 17.4 cm, at 5:45 UTC on 5 December 2007.

Figure 5. The maximum storm surge heights (in cm) at the 36 locations which are most susceptible to the impact of the storm surge generated by the Cyclones Hagibis and Mitag in November 2007. Names of locations: (1) Merak, (2) Banten Bay, (3) Tirtayasa, (4) Mauk, (5) Penjaringan, (6) Cilincing, (7) Karawang Cape, (8) Tambaksumur, (9) Sungaibuntu, (10) Cilamaya, (11) Pamanukan, (12) Indramayu, (13) Balongan, (14) Cirebon, (15) Losari, (16) Tegal, (17) Pekalongan, (18) Gringsing, (19) Semarang, (20) Jepara, (21) Bumimulyo, (22) Rembang, (23) Tambakboyo, (24) Labuhan, (25) Pangkah Cape, (26) Surabaya, (27) Sidoarjo, (28) Bangil, (29) Probolinggo, (30) Gending, (31) Gerinting Cape, (32) Panarukan, (33) Pamekasan, (34) Sampang, (35) Modung Cape, and (36) Sapulu. The highest surge, 17.4 cm, occurred at Losari (marked 15). (Source of map: Google Earth)

Figure 6. Astronomic tide, storm tide, and storm surge heights at Penjaringan (marked 5 in Figure 5); brown dashed square: period of the cyclone Hagibis; blue dashed square: period of the cyclone Mitag.


Figure 6 shows an example of the calculated heights of astronomic tide, storm tide, and storm surge at Penjaringan (marked 5 in Figure 5). Based on the simulated results of this study, the derived storm surge heights are of the order of a few centimeters (< 20 cm), so the Cyclones Hagibis and Mitag in November 2007 did not generate high storm surges along the northern coasts of Java. However, as will be shown later, the coasts of northern Java were still severely flooded. Table 1 shows the detailed maximum horizontal distance of storm tide flooding, vertical run-up heights, and total water depth around the location of maximum inundation distance at six locations, which represent the areas along the northern coast of Java most susceptible to storm tide flooding in each province, namely Banten Bay (Banten Province, marked 2 in Figure 5), Penjaringan (Jakarta Province, marked 5), Cilamaya (West Java Province, marked 10), Semarang (Central Java Province, marked 19), and Surabaya and Sampang (East Java Province, marked 26 and 34, respectively). Among the 36 locations in Figure 5, Rembang (marked 22) experienced the minimum distance of storm tide flooding (table and figure not shown). An analysis of the topographic slope shows that the slope may contribute to the distance of storm tide flooding. For instance, the topography at Sampang (marked 34) is gentle, about 0.012°, compared with about 0.545° at Rembang. This difference may explain the long storm tide flooding distance at Sampang, whereas the steep slope at Rembang limits the flooding.
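The slope figures quoted above can be recovered from an elevation rise over a horizontal run; the rise/run pairs below are hypothetical values chosen only to reproduce the quoted angles, not measurements from the DEM.

```python
import math

def slope_deg(rise_m, run_m):
    """Topographic slope in degrees from elevation rise over horizontal run."""
    return math.degrees(math.atan2(rise_m, run_m))

gentle = slope_deg(1.37, 6552.3)  # ~0.012 deg, a Sampang-like profile
steep = slope_deg(5.0, 525.5)     # ~0.545 deg, a Rembang-like profile
```

For such small angles the slope in degrees is nearly linear in rise/run, so a forty-fold steeper profile yields a roughly forty-fold larger angle.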

Table 1. Maximum horizontal distance of storm tide flooding, vertical run-up heights, and total water depth around the location of maximum distance of inundation

No. Location      | Max. horizontal distance | Vertical run-up at location | Total water depth around location
                  | of inundation (m)        | of max. inundation (m)      | of max. inundation (m)
                  | Storm+Tides Tides  Storm | Storm+Tides Tides  Storm    | Storm+Tides Tides Storm

Banten Province:
 2  Banten Bay       558.5   558.5    0.0       0.483   0.428  0.055          0.15  0.13  0.02
Jakarta Province:
 5  Penjaringan     6275.3  5958.7  316.6       0.537   0.483  0.054          0.33  0.28  0.05
West Java Province:
10  Cilamaya        2967.6  2967.6    0.0       0.535   0.489  0.046          0.12  0.08  0.04
Central Java Province:
19  Semarang        1278.3   972.4  305.9       0.570   0.472  0.098          0.17  0.12  0.05
East Java Province-1 (Main Land):
26  Surabaya        5305.4  5305.4    0.0       2.099   2.037  0.062          0.27  0.19  0.08
East Java Province-2 (Madura Island):
34  Sampang         6552.3  6552.3    0.0       1.559   1.494  0.065          0.07  0.05  0.02

In this study, the model results for surge height have been verified quantitatively and qualitatively against the TOPEX Poseidon SLA. In addition, the simulated results have shown the coastal areas of northern Java that are most prone to storm tide flooding. The simulated susceptible areas are qualitatively similar to those reported extensively in both electronic and print media. The media reported that the areas prone to storm tide flooding in November 2007 were the northern coasts of Jakarta, Brebes, Tegal, and Semarang, which match several of the storm tide inundation areas obtained in this study, namely Penjaringan and Cilincing in Jakarta (marked by 5 and 6 in Figure 5, respectively), Losari (near Brebes, marked 15 in Figure


5), Tegal (marked 16 in Figure 5), and Semarang (marked 19 in Figure 5). The waves generated by the tides and cyclones attacked and inundated coastal areas of northern Java. In Banten Province, the maximum distance of storm tide flooding occurred at Banten Bay (marked 2 in Figure 5), about 558.5 m, with a run-up height at the location of maximum inundation of about 0.483 m. In Jakarta Province, the maximum distance of storm tide flooding reached about 6275.3 m at Penjaringan (marked 5 in Figure 5), with a run-up height of about 0.537 m. In West Java Province, it occurred at Cilamaya (marked 10 in Figure 5), with a flooding distance and water run-up height of about 2967.6 m and 0.535 m, respectively. In Central Java Province, the maximum distance of storm tide flooding occurred at Semarang (marked 19 in Figure 5), about 1278.3 m, with a run-up height of about 0.570 m. In East Java Province, it occurred at Surabaya and Sampang (marked 26 and 34 in Figure 5); at Surabaya the flooding distance and water run-up height are about 5305.4 m and 2.099 m, respectively, while at Sampang they are about 6552.3 m and 1.559 m (Table 1). In Figures 7 and 8, we present examples of the simulated inundation area and the total water depth along the storm-tide-inundated area at Penjaringan (marked 5 in Figure 5) and at Semarang (marked 19 in Figure 5), respectively. The boundaries of the simulated inundation area after the storm tide event are shown as well.

Figure 7. The simulated inundation area (distance and height of flooding, in meters) at Penjaringan (marked 5 in Figure 5), at 7:15 UTC on 26 November 2007. The figure shows topography contours of 0 and 3 m measured from the mean sea level (MSL); (a) inundation area due to tides; (b) inundation area due to storms and tides; and (c) total water depth of the simulated storm tide inundation area (in meters).



Figure 8. The simulated inundation area (distance and height of flooding, in meters) at Semarang (marked 19 in Figure 5), at 19:30 UTC on 30 November 2007. The figure shows topography contours of 0 and 3 m measured from the MSL; (a) inundation area due to tides; (b) inundation area due to storms and tides; and (c) total water depth of the simulated storm tide inundation area (in meters).

IV. CONCLUSIONS

This paper investigated the wave height, run-up, and inundation of storm tide along the northern coasts of Java, generated by tides and the Cyclones Hagibis and Mitag in November 2007, using a 2D hydrodynamic model. The simulated results showed that although the Cyclones Hagibis and Mitag did not generate high storm surges along the coasts of Java (surge heights < 20 cm), the coasts were still severely flooded. The highest surge occurred at Losari (northern coast of Central Java). Meanwhile, the maximum distances of storm tide flooding occurred at Banten Bay (Banten Province), Penjaringan (Jakarta Province), Cilamaya (West Java Province), Semarang (Central Java Province), and Surabaya and Sampang (East Java Province). The storm tide investigation performed in this study is important for identifying areas prone to storm tide flooding, such as the northern coasts of Java. The results of this study could be valuable for designing both proper management plans and investment policies for the northern coasts of Java.

ACKNOWLEDGEMENTS

Parts of this research were funded by KK Research of ITB in 2010. We gratefully acknowledge the ITB support. We also greatly appreciate the assistance of D. A. Utami, N. Rakhmaputeri, and H. Timotius, whose contributions to the literature on storm surge aspects have been invaluable.

REFERENCES
[1] Arcement, G.J. Jr, and Schneider, V.R. (1984). "Guide for selecting Manning's roughness coefficients for natural channels and flood plains," Technical Report, US Department of Transportation, Federal Highway Administration, Report no. FHWA-TS-84-204, 62 p.

[2] Chambers, D. (1988). "Monitoring El Niño with satellite altimetry," The University of Texas at Austin, Center for Space Research. Webpage: http://www.tsgc.utexas.edu/info/teamweb.html

[3] DHI Water and Environment (2005). “MIKE 21 & MIKE 3 flow model FM,” Hydrodynamics and Transport Module, Scientific Documentation, Denmark.



[4] IOC, IHO, and BODC (2003). “General bathymetric chart of the oceans,” Centenary Edition of the GEBCO Digital Atlas, BODC, Liverpool, U.K.

[5] Padman, L, and Erofeeva, S (2005). “Tide model driver (TMD) manual,” Earth & Space Research.

[6] Tkalich, P, Kolomiets, P, and Zheleznyak, M (2009). "Simulation of wind-induced anomalies in the South-China Sea." Paper presented at the Asia Oceania Geosciences Society (AOGS) 2009 Annual General Meeting, Singapore.

[7] SRTM (2007). "Shuttle Radar Topography Mission X-SAR/SRTM," Webpage: http://www.dlr.de/srtm.


Fuzzy Finite Difference on the Calculation of an Individual's Bank Deposits

Novriana Sumarti and Siti Mardiah Industrial and Financial Mathematics Research Group, Institut Teknologi Bandung, Indonesia

Email: [email protected]

ABSTRACT

Fuzzy analogues of problems in financial mathematics were first developed more than two decades ago by Buckley [1], such as the fuzzy present and future values of fuzzy cash amounts using fuzzy compound interest rates. Because future cash amounts and interest rates are quite often estimated values, the fuzzy approach yields solutions whose values lie in the intervals where they might be. In this paper we implement fuzzy finite difference formulae for calculating the balance of an individual's bank deposits [2]. This involves not only compound interest but also macroeconomic factors that change the value of money over different time periods, for instance inflation, taxation, and the purchasing power of the capital. Real data taken from an employee of a company have been used to estimate the time when some significant plans can be carried out in the future.

KEYWORDS
Financial mathematics; finite difference; compound interest rate; macroeconomics; fuzzy set theory.

I. INTRODUCTION

One of the main characteristics of a fuzzy system is its suitability for uncertain or approximate reasoning, especially for systems whose mathematical model is difficult to derive. Furthermore, fuzzy logic allows decision making with estimated values under incomplete or uncertain information. Since it was proposed by Lotfi Zadeh in his paper [3], there have been many applications of fuzzy set theory to decision making processes in engineering, economics, and financial mathematics. Pioneering work in financial mathematics was done by Buckley [1], explaining the fuzzy point of view of some important definitions and equations of mathematical finance. Nowadays there is much research on the implementation of fuzzy logic in this area, such as option pricing using the Black–Scholes formula [4], real options valuation [5], insurance [6], and

determination of factors in macroeconomics [7]. In this paper, fuzzy set theory is first explained in Section II, together with definitions of the macroeconomic factors which potentially affect the value of money.

II. FUZZY APPROACH ON FINANCE

Now we discuss the fuzzy membership function and the fuzzy arithmetic operations used in this research. A triangular fuzzy number $A = (a_1, a_2, a_3)$ can be described by its membership function

$$\mu_A(x) = \begin{cases} \dfrac{x - a_1}{a_2 - a_1}, & a_1 \le x \le a_2, \\[4pt] \dfrac{a_3 - x}{a_3 - a_2}, & a_2 \le x \le a_3, \\[4pt] 0, & \text{otherwise}. \end{cases} \qquad (2)$$

It can also be described by its α-cuts,

$$A(\alpha) = [A_L(\alpha), A_R(\alpha)], \qquad (3)$$

where $A_L(\alpha)$ and $A_R(\alpha)$ are functions of $\alpha \in [0, 1]$,

$$A_L(\alpha) = a_1 + (a_2 - a_1)\alpha, \qquad A_R(\alpha) = a_3 - (a_3 - a_2)\alpha. \qquad (4)$$

Figure 1: Description of a fuzzy number A (triangular membership function with support $[a_1, a_3]$ and peak at $a_2$).

Denote by $B(\alpha) = [B_L(\alpha), B_R(\alpha)]$ another fuzzy number. The membership functions of the numbers resulting from arithmetic operations between these fuzzy numbers are determined by a method [2, 9, 10] equivalent

Page 187: Proc.CIAM2010[1]

Proceeding of Conf. on Industrial and Appl. Math., Bandung-Indonesia 2010

180

to the extension principle method, which is written below:

$$(A + B)(\alpha) = (A_L(\alpha) + B_L(\alpha),\; A_R(\alpha) + B_R(\alpha)),$$

$$(A - B)(\alpha) = (A_L(\alpha) - B_R(\alpha),\; A_R(\alpha) - B_L(\alpha)),$$

$$(A \cdot B)(\alpha) = [\min\{A_L B_L,\, A_L B_R,\, A_R B_L,\, A_R B_R\},\; \max\{A_L B_L,\, A_L B_R,\, A_R B_L,\, A_R B_R\}],$$

where all endpoints are evaluated at $\alpha$. If $0 \notin B(\alpha)$ for all $\alpha$, then

$$(A / B)(\alpha) = \left[\min\left\{\frac{A_L}{B_L}, \frac{A_L}{B_R}, \frac{A_R}{B_L}, \frac{A_R}{B_R}\right\},\; \max\left\{\frac{A_L}{B_L}, \frac{A_L}{B_R}, \frac{A_R}{B_L}, \frac{A_R}{B_R}\right\}\right].$$

One way to write fuzzy numbers [8] is in terms of the differences between the middle number and the left and right numbers. For example, $A = (a_2 - a_1, a_2, a_3 - a_2)$, or simply $A = (\Delta_1, a, \Delta_2)$. (4)

In this paper we will calculate the balance of an individual bank account over a period of time in which some macroeconomic factors influence the value of money. The factors considered here are inflation, taxation, and the purchasing power of the capital. Inflation is an increase in the price of a basket of goods and services that is representative of the economy as a whole; we use data on Indonesia's inflation during 2008 – 2010. Taxation is a charge levied on the subjects of a state by the government, or on the members of a corporation or company, by the proper authority; tax also applies to the interest on bank deposits if the balance is more than IDR 7.5 million. Finally, purchasing power is the value of money as measured by the quantity and quality of products and services it can buy. The data used for the purchasing power is the consumer price index (CPI) of Indonesia during 2008 – 2010.
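The α-cut representation (3)–(4) and the interval arithmetic above can be sketched directly; the function names below are ours for illustration, not from [2].

```python
def alpha_cut(tri, alpha):
    """Alpha-cut [A_L(alpha), A_R(alpha)] of a triangular fuzzy number (a1, a2, a3)."""
    a1, a2, a3 = tri
    return (a1 + (a2 - a1) * alpha, a3 - (a3 - a2) * alpha)

def add(A, B):
    # Endpoint-wise sum of two alpha-cut intervals.
    return (A[0] + B[0], A[1] + B[1])

def mul(A, B):
    # Product interval: min/max over the four endpoint products.
    p = (A[0] * B[0], A[0] * B[1], A[1] * B[0], A[1] * B[1])
    return (min(p), max(p))

A = alpha_cut((1.0, 2.0, 3.0), 0.5)  # (1.5, 2.5)
B = alpha_cut((0.0, 1.0, 2.0), 0.5)  # (0.5, 1.5)
S = add(A, B)                        # (2.0, 4.0)
P = mul(A, B)                        # (0.75, 3.75)
```

At α = 1 each interval collapses to the crisp middle value, and at α = 0 it spans the full support of the triangular number.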

III. FINITE DIFFERENCE OF THE BALANCE OF A BANK ACCOUNT

The balance of a bank deposit increases due to interest paid by the bank. If the bank applies compound interest, the balance calculated by the finite difference (recursive) formula [2] is

$$\tilde{F}_{n+1} = (1 + \tilde{I})\,\tilde{F}_n, \qquad n \in \mathbb{N}, \qquad (5)$$

where $\tilde{F}_n$ is the fuzzy amount of the account balance at time $n$ and $\tilde{I}$ is the fuzzy compound interest rate. Using (3) and writing $F_n(\alpha) = [F_{n,L}, F_{n,R}]$, we get

$$[F_{n,L}, F_{n,R}] = [F_{0,L}, F_{0,R}]\,[1 + I_L,\; 1 + I_R]^n,$$

with $\alpha \in [0, 1]$. Rewriting all fuzzy numbers in the form of (4), such as $F = (F_1, F, F_2)$ and $I = (I_1, I, I_2)$, the above formula becomes

$$[F_{n,L}, F_{n,R}] = \big[(F - F_1(1-\alpha))(1 + I - I_1(1-\alpha))^n,\; (F + F_2(1-\alpha))(1 + I + I_2(1-\alpha))^n\big].$$
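The endpoint form above can be evaluated numerically. A minimal sketch, assuming strictly positive balances and rates so the endpoint-wise recursion is valid; the numbers are illustrative only, not the paper's data.

```python
def fuzzy_balance(F0, I, alpha, n):
    """Interval [F_{n,L}, F_{n,R}] of a fuzzy balance after n compounding periods.

    F0 and I are triangular fuzzy numbers (a1, a2, a3); the recursion
    F_{n+1} = (1 + I) F_n is applied to the alpha-cut endpoints.
    """
    FL, FR = F0[0] + (F0[1] - F0[0]) * alpha, F0[2] - (F0[2] - F0[1]) * alpha
    IL, IR = I[0] + (I[1] - I[0]) * alpha, I[2] - (I[2] - I[1]) * alpha
    for _ in range(n):
        FL *= 1.0 + IL
        FR *= 1.0 + IR
    return FL, FR

# Twelve periods from an initial fuzzy balance of (2, 2.7, 3) at a fuzzy
# annual rate of (0.0525, 0.060722, 0.0725), evaluated at alpha = 0.8.
lo, hi = fuzzy_balance((2.0, 2.7, 3.0), (0.0525, 0.060722, 0.0725), alpha=0.8, n=12)
```

The interval [lo, hi] widens as n grows, reflecting the accumulated uncertainty in the fuzzy rate.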

Now we consider that the balance can decrease due to macroeconomic factors: the inflation $\tilde{J}$, the tax portion $p$ on the interest rate, and the CPI $\tilde{H}$. To maintain the balance, deposits $\tilde{b}_n$ are added during the life of the account. If money income stays the same but the prices of most goods go up, the effective purchasing power of that income falls; falling purchasing power can thus be part of inflation. Let us denote the real purchasing power by

$$\tilde{Y}_n = \frac{\tilde{F}_n}{\tilde{H}},$$

so the finite difference becomes

$$\tilde{Y}_{n+1} = \frac{(1 + (1-p)\tilde{I})\,\tilde{F}_n + (1-p)\,\tilde{b}_n}{\tilde{H}\,(1 + \tilde{J})} = \frac{1 + (1-p)\tilde{I}}{1 + \tilde{J}}\,\tilde{Y}_n + \frac{(1-p)\,\tilde{b}_n}{\tilde{H}\,(1 + \tilde{J})}, \qquad n \in \mathbb{N}.$$

Using the same procedure explained previously, we get


$$Y_{n,l} = \left(\frac{1 + (1-p_r)I_l}{1 + J_r}\right)^{\!n} Y_{0,l} + \sum_{s=0}^{n-1}\left(\frac{1 + (1-p_r)I_l}{1 + J_r}\right)^{\!s}\frac{(1-p_r)\,b_l}{H_r(1 + J_r)},$$

$$Y_{n,r} = \left(\frac{1 + (1-p_l)I_r}{1 + J_l}\right)^{\!n} Y_{0,r} + \sum_{s=0}^{n-1}\left(\frac{1 + (1-p_l)I_r}{1 + J_l}\right)^{\!s}\frac{(1-p_l)\,b_r}{H_l(1 + J_l)}. \qquad (6)$$

Detailed steps in the above process can be found in [2]. In the next section we implement the above formula in the design of important plans of an individual's life in the future.

IV. IMPLEMENTING SCENARIO

A scenario is designed to show the use of the finite difference formula (6) from the previous section. The chosen value of α is 0.8. Data are taken from the main salary and additional income of a fresh-graduate employee of a large government-owned company. His take-home salary is IDR 4,672,000 per month. In the month of his religious celebration day, Eid Fitr in his case, one month's salary is added to his account. Three years after his first day at the office, a one-year bonus will be paid by the employer every June. He commits to saving around IDR 2,700,000 per month from his main salary. If bonuses are received, he will save 50% to 100% of the total bonuses. With this financial condition, he plans to buy a house priced at IDR 350,000,000 and to cover the expense of his wedding party, worth around IDR 100,000,000 to 200,000,000. The fuzzy finite difference formula (6) is used to determine when he could take a mortgage to buy a house and when he could hold a quite big wedding party without financial help from his parents and relatives. Assume that he starts to save from January 2010. Now we define the parameters (in million IDR):

$$\tilde{b}_n = (2, 2.7, 3), \qquad n = 0, 1, 2, \ldots,$$

with additional deposits

$$\tilde{b}_n = \begin{cases} \tilde{b}_1 = (2.336, 3, 4.672) & \text{if } n = j_{Fitr}, \\ \tilde{b}_2 = (50, 54, 56) & \text{if } n = 7 \text{ after 3 years}, \\ \tilde{b}_1 + \tilde{b}_2 & \text{if } n = j_{Fitr} = 7 \text{ after 3 years}, \\ 0 & \text{otherwise}, \end{cases}$$

where $j_{Fitr}$ is the month of his religious celebration day. The month of $j_{Fitr}$ starts from September 2010, so $j_{Fitr} = 9$. From year 2 onwards, the months of $j_{Fitr}$ will be 8, 8, 8, 9, 9, 9, 10, 10, 10, ... This arrangement agrees with the Islamic calendar, in which Eid Fitr comes about 11 days earlier than in the previous year.
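The recursion for the real purchasing power in Section III can be sketched with crisp midpoint values; the fuzzy version applies the same recursion to the interval endpoints. The default parameter values below are illustrative midpoints, and the tax rate p = 0.20 is an assumed placeholder, not the paper's calibration.

```python
def real_balance(Y0, n, I=0.060722, p=0.20, J=0.10994, H=0.230846, b=2.7):
    """Real purchasing power Y_n (million IDR) from the recursion
    Y_{n+1} = (1 + (1 - p) I) Y_n / (1 + J) + (1 - p) b / (H (1 + J)).

    I: interest rate, p: tax portion, J: inflation, H: CPI, b: monthly deposit.
    """
    Y = Y0
    for _ in range(n):
        Y = (1.0 + (1.0 - p) * I) * Y / (1.0 + J) + (1.0 - p) * b / (H * (1.0 + J))
    return Y

balance_year1 = real_balance(2.7, 12)
```

With inflation above the after-tax interest rate the multiplicative factor is below one, so the long-run balance is driven mainly by the recurring deposits.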

Fuzzification of the macroeconomic data in the Appendix gives the following. The interest rate of the mortgage is $I = (0.0525, 0.060722, 0.0725)$, the CPI is $H = (0.1008, 0.230846429, 0.6401)$, and the inflation is $J = (0.0214, 0.10994, 0.1214)$. The balances of the bank account are shown in the tables below. In Table 1, the tax applies from the 4th month of year 1 and the first type of bonus is added in the 9th month.

Yi,j   Yn_Left        Yn_Right
1,1    2,202,449.15   2,396,750.57
1,2    4,512,910.71   4,756,459.52
1,3    6,854,644.18   7,125,149.31
1,4    9,217,341.37   9,517,090.09
1,5    9,748,395.30   13,768,894.82
1,6    13,520,184.90  14,630,674.51
1,7    16,143,712.89  16,712,726.06
1,8    18,543,693.72  19,051,488.84
1,9    19,251,736.06  25,328,515.42
1,10   21,570,017.26  31,040,733.39

Table 1: Year 1

Yi,j   Yn_Left         Yn_Right
4,4    113,081,653.29  120,230,577.44
4,5    116,592,656.94  124,114,841.36
4,6    128,503,202.85  159,670,382.92
4,7    57,764,208.52   79,167,635.88
4,8    55,291,707.38   84,054,477.35
4,9    54,541,915.90   85,561,648.29


4,10   64,213,442.80   73,464,253.45
4,11   66,849,938.85   71,820,082.13
4,12   68,131,492.52   72,359,956.12

Table 2: Year 4

The house mortgage, with rates as shown in the Appendix, needs IDR 100 million as a down-payment and a monthly payment of [2,183,165.914, 2,264,552.576] for 10 years. Notice that he needs to maintain his balance at a decent amount after he takes the mortgage and pays the monthly installments. Taking this into consideration, he takes a mortgage in the 7th month of year 4, as shown in Table 2.
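The monthly installment interval quoted above follows from the standard level-annuity formula applied to the endpoints of the fuzzy mortgage rate. A sketch; the principal below (house price minus down-payment) and the rate endpoints are assumptions for illustration and do not reproduce the paper's exact figures.

```python
def monthly_payment(principal, annual_rate, years):
    """Level annuity payment: P * r / (1 - (1 + r)^(-n)), with monthly rate r."""
    r = annual_rate / 12.0
    n = years * 12
    return principal * r / (1.0 - (1.0 + r) ** (-n))

# IDR 350 million house price minus the IDR 100 million down-payment,
# evaluated at the two ends of the fuzzy mortgage rate interval.
low = monthly_payment(250e6, 0.0525, 10)
high = monthly_payment(250e6, 0.0725, 10)
```

Evaluating the formula at the lower and upper rate endpoints yields the interval of possible monthly installments, in the same spirit as the bracketed payment above.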

Yi,j   Yn_Left         Yn_Right
8,5    242,817,812.55  271,025,964.98
8,6    209,654,062.28  289,218,014.15
8,7    78,403,973.47   123,056,558.28
8,8    77,465,352.27   126,858,691.37
8,9    92,800,394.98   111,628,988.83
8,10   99,131,867.71   112,174,729.18
8,11   103,805,191.59  116,137,939.78
8,12   108,243,959.32  120,921,526.88
9,1    112,726,452.11  125,982,250.51
9,2    117,309,594.02  131,209,342.39
9,3    122,007,626.59  136,583,243.21
9,4    126,826,653.75  142,103,744.28
9,5    131,771,322.29  147,774,853.57
9,6    151,538,633.97  198,691,011.26
9,7    152,570,638.09  243,496,494.54

Table 3: Years 8 and 9

In the 6th month of year 8 the money is enough to pay back the mortgage, so this is done in the 7th month. Finally, he can hold a wedding party starting in the 6th month of year 9, as shown in Table 3.

V. CONCLUSION

The fuzzy finite difference equation can be used to determine when the balance of the bank account is sufficient to take a mortgage and to spend on a wedding party. In the case observed in this research, a fresh-graduate employee could buy a house of IDR 350 million in year 4 and hold a wedding party of IDR 100 – 200 million in year 9.

REFERENCES
[1] J.J. Buckley, The fuzzy mathematics of finance, Fuzzy Sets and Systems 21 (3) (1987) 257–273.
[2] K.A. Chrysafis, B.K. Papadopoulos, G. Papaschinopoulos, On the fuzzy difference equations of finance, Fuzzy Sets and Systems 159 (2008) 3259–3270.
[3] L.A. Zadeh, Fuzzy sets, Information and Control 8 (3) (1965) 338–353.
[4] C.F. Lee, G.H. Tzeng, S.Y. Wang, A new application of fuzzy set theory to the Black–Scholes option pricing model, Expert Systems with Applications 29 (2005) 330–342.
[5] C. Carlsson, A fuzzy approach to real option valuation, Fuzzy Sets and Systems 139 (2003) 297–312.
[6] A.F. Shapiro, Fuzzy logic in insurance, Insurance: Mathematics and Economics 35 (2004) 399–424.
[7] C. Kahraman (editor), Fuzzy Engineering Economics with Applications, Springer-Verlag, Berlin, 2008.
[8] W.H. Steeb, Y. Hardy, R. Stoop, The Nonlinear Workbook, Singapore (2005) 554–555.
[9] G.J. Klir, B. Yuan, Fuzzy Sets and Fuzzy Logic: Theory and Applications, Prentice-Hall, New Jersey, 1995.


Appendix

Bank mortgage rates:

Bank                         Dec 2009  Jan 2010  Feb 2010
BTN (Kredit Ringan Batara)   9.75%     10.50%    9.75%
BTN (Platinum)               11.25%    11.75%    11.25%
BTN (Kredit Griya Utama)     12%       12%       12%
BRI                          12.25%    12.25%    12.25%
BCA                          10.05%    10.50%    10.05%
BII                          11.49%    11.49%    10.49%
CIMB NIAGA                   10.05%    10.50%    10.50%
OCBC NISP                    12.25%    12.25%    12%
PERMATA                      9.90%     12.25%    9.90%
MEGA                         13.99%    13.99%    13.99%
MANDIRI                      11.05%    11.75%    11.05%
BNI                          11.50%    11.50%    11.50%
DANAMON                      11.99%    11.99%    11.99%
PANIN                        10.80%    10.80%    10.80%

Monthly inflation and CPI:

Month-Year       Inflation  CPI
April 2010       3.91%      0.1378
March 2010       3.43%      0.1402
February 2010    3.81%      0.1427
January 2010     3.72%      0.1392
December 2009    2.78%      0.1397
November 2009    2.41%      0.141
October 2009     2.57%      0.1461
September 2009   2.83%      0.1525
August 2009      2.75%      0.1646
July 2009        2.71%      0.1668
June 2009        3.65%      0.1665
May 2009         6.04%      0.1703
April 2009       7.31%      0.1801
March 2009       7.92%      0.1836
February 2009    8.60%      0.1819
January 2009     9.17%      0.1837

Bank account interest rates:

Year   Rates
2009   5.25%, 5.75%, 5.50%, 6%
2010   6.25%, 6.50%, 6.75%, 7%, 7.25%


An Implementation of Fuzzy Linear System in Economics

Novriana Sumarti and Cucu Sukaenah Industrial and Financial Mathematics Research Group, Institut Teknologi Bandung, Indonesia

Email: [email protected]

ABSTRACT

Many problems in mathematical models can be written as linear systems. Especially in numerical analysis, problems in differential equations are usually rewritten as linear systems so they can be solved numerically. In modelling real-world problems involving vagueness and imprecision, a fuzzy approach can tolerate facts which are quite often captured as estimated values. In this paper we implement methods of solving systems of equations $Ax = b$ in which the value of every element of the matrix $A$ and vector $b$ lies in an interval. In finding supply and demand curves based on data on Indonesia's rice production during 1987 – 1997, we build fuzzy linear systems using a method of fuzzy linear regression on the observed data. The classic solution cannot be found from these linear systems. Based on a method in [1,3], a vector solution can be found using the α-cuts method and Cramer's rule, so the equilibrium point between the supply and demand data can be determined.

KEYWORDS
Linear systems; fuzzy arithmetic; α-cuts method; Cramer's rule; economics.

I. INTRODUCTION

The fuzzy sets theory was specifically designed to represent mathematically the uncertainty and vagueness in the quantification of real-world problems and to provide formalized tools for dealing with imprecision. Due to the main characteristics of a fuzzy system, namely its suitability for uncertain or approximate reasoning and its support for decision making with estimated values under incomplete or uncertain information, it has been used in decision making processes in engineering since it was proposed by Lotfi Zadeh in his paper [4]. In economics, fuzzy sets theory was first used to bridge the gap between mathematically and language-oriented

economists by [5], where it measures some concepts of inflation, unemployment, recession, market-dominating companies, money, the gross national product (GNP), the national income, investment, households, and so on. In [6], the efficiency of the fuzzy approach in economics was put under observation, and the result was positive: the approach can make a significant contribution to the foundations of economics. Some other works implementing fuzzy analysis in finance and economics can be found in [2,7,8]. In this paper, we explain concepts of fuzzy linear systems and linear regression in Section II. An implementation of these concepts is given in Section III, where the demand and supply curves for data on Indonesia's rice production during 1987 – 1997 are built. In the process of finding their equilibrium point, it turns out that the classic solution cannot be found, so the method of the vector solution, explained in Section IV, is needed. Finally, the chosen equilibrium point and its interpretation in economics are explained in Section V.

II. FUZZY LINEAR SYSTEM AND REGRESSION METHOD

Consider a fuzzy linear system of equations of the form:

$$A x = b \qquad (1)$$

where the elements $A_{ij}$ and $B_j$, $i, j = 1, \ldots, n$, of the matrix $A$ and vector $b$, respectively, are triangular fuzzy numbers. A triangular fuzzy number $A = (a_1, a_2, a_3)$ can be described by its membership function


$$\mu_A(x) = \begin{cases} \dfrac{x - a_1}{a_2 - a_1}, & a_1 \le x \le a_2, \\[4pt] \dfrac{a_3 - x}{a_3 - a_2}, & a_2 \le x \le a_3, \\[4pt] 0, & \text{otherwise}. \end{cases} \qquad (2)$$

It can also be described by its α-cuts,

$$A(\alpha) = [A_L(\alpha), A_R(\alpha)], \qquad (3)$$

where $A_L(\alpha)$ and $A_R(\alpha)$ are functions of $\alpha \in [0, 1]$,

$$A_L(\alpha) = a_1 + (a_2 - a_1)\alpha, \qquad A_R(\alpha) = a_3 - (a_3 - a_2)\alpha. \qquad (4)$$

Figure 1: Description of a fuzzy number A (triangular membership function with support $[a_1, a_3]$ and peak at $a_2$).

Denote by $B(\alpha) = [B_L(\alpha), B_R(\alpha)]$ another fuzzy number. The membership functions of the numbers resulting from arithmetic operations between these fuzzy numbers are determined by a method [2, 9, 10] equivalent to the extension principle method, which is written below:

$$(A + B)(\alpha) = (A_L(\alpha) + B_L(\alpha),\; A_R(\alpha) + B_R(\alpha)),$$

$$(A - B)(\alpha) = (A_L(\alpha) - B_R(\alpha),\; A_R(\alpha) - B_L(\alpha)),$$

$$(A \cdot B)(\alpha) = [\min\{A_L B_L,\, A_L B_R,\, A_R B_L,\, A_R B_R\},\; \max\{A_L B_L,\, A_L B_R,\, A_R B_L,\, A_R B_R\}].$$

If $0 \notin B(\alpha)$ for all $\alpha$, then $(A / B)(\alpha) = \big(A \cdot \tfrac{1}{B}\big)(\alpha)$ with $\tfrac{1}{B}(\alpha) = \left[\tfrac{1}{B_R(\alpha)},\, \tfrac{1}{B_L(\alpha)}\right]$.

Now we discuss fuzzy linear regression in order to analyze the linear relationship between two variables, $\tilde{Y}$ and $X$, in the equation

$$\tilde{Y} = \tilde{A}_0 + \tilde{A}_1 X \qquad (5)$$

where the parameters $\tilde{A}_i$, $i = 0, 1$, are estimated from the data. Here we need two linear equations of the above form. In the first equation, $\tilde{Y}$ and $X$ are respectively the quantities of supply and the prices of a good. In the second equation, $\tilde{Y}$ and $X$ are respectively the quantities of demand and the prices of a good. At the end, we will find the equilibrium point between the quantities of supply and demand. According to [10], the parameters in (5) are found by minimizing the fuzzy least squares, that is, the total square error of the output. The objective function to be minimized is

$$J = \sum_{j=1}^{N} \Big[ \big(Y_L(x_j) - \hat{Y}_L(x_j)\big)^2 + \big(Y_R(x_j) - \hat{Y}_R(x_j)\big)^2 \Big] \qquad (6)$$

where $\hat{Y}_L, \hat{Y}_R$ are the estimators of $Y_L, Y_R$.

The quantities $M_X$ and $R_X$ are the midpoint and radius of the corresponding fuzzy number $X$, where

$$M_X = \frac{X_L + X_R}{2}, \qquad R_X = \frac{X_R - X_L}{2}.$$

The constraints of the optimization problem (6) are

$$M_{A_0} + M_{A_1} M_{X_j} + R_{A_1} R_{X_j} \ge M_{Y_j},$$

$$R_{A_0} + M_{A_1} R_{X_j} + R_{A_1} M_{X_j} \ge R_{Y_j},$$

where $(X_j, Y_j)$, $j = 1, \ldots, N$, are the observed data samples. A detailed explanation of the above method can be found in [10]. Many optimization methods are available to solve the above problem; in this paper we use the simulated annealing method, which is explained in detail and implemented on another optimization problem in [11].

III. SOLVING THE FUZZY LINEAR SYSTEM CLASSICALLY

The data in the Appendix, taken from Statistik Bulog 1998, are fuzzified. An example of the resulting fuzzy numbers of the rice


production can be found in the Appendix. The numbers are also scaled to simplify the computation. The results are put into the regression process, and then we determine the equilibrium point of the demand and supply curves. The two resulting curves of the form (5) are rewritten as the linear equation system below:

$$X_1 + (-0.6125,\ -0.5200,\ -0.4275)\,X_2 = (0.3817,\ 0.4559,\ 0.5301),$$

$$X_1 + (-0.3157,\ -0.1615,\ -0.0073)\,X_2 = (0.6833,\ 0.7357,\ 0.7881), \qquad (7)$$

where $X_1 = (x_{11}, x_{12}, x_{13})$ is the quantity of rice production in the situation in which supply and demand are the same, and $X_2 = (x_{21}, x_{22}, x_{23})$ is the price in that situation. Here we need the condition $x_{i1} \le x_{i2} \le x_{i3}$, $i = 1, 2$. Writing the system in its α-cut representation (3) and using the arithmetic operations of section II, we find that $x_{11} = 1.0026$, $x_{13} = 0.7926$, $x_{21} = 0.6140$, $x_{23} = 1.0138$. Here $x_{11} > x_{13}$, which is not acceptable. So the classic solution of the fuzzy linear system is not available.

IV. VECTOR SOLUTION METHOD

Buckley [1] proposed a new method called the vector solution method. First we define a new linear equation system of the same dimension, coming from restating all fuzzy numbers in system (7) in the corresponding forms of (4). Notice that system (7) involves a fuzzy matrix $\tilde{A}$ of dimension 2×2 and a fuzzy vector $\tilde{b}$ of dimension 2. Using (4), denote

$$\tilde{a}_{ij}(\alpha) = [a_{ijL}(\alpha),\ a_{ijR}(\alpha)]$$

the $ij$-th element of the matrix $\tilde{A}$. Here

$$a_{ijL}(\alpha) = a_{ij1} + (a_{ij2} - a_{ij1})\alpha, \qquad a_{ijR}(\alpha) = a_{ij3} - (a_{ij3} - a_{ij2})\alpha.$$

Define new elements, for $i, j = 1, 2$,

$$A_{ij}(\alpha) = a_{ijL}(\alpha) + [a_{ijR}(\alpha) - a_{ijL}(\alpha)]\,k$$

and

$$B_i(\alpha) = b_{iL}(\alpha) + [b_{iR}(\alpha) - b_{iL}(\alpha)]\,k,$$

where $k \in [0, 1]$, with $k = k_1, \ldots, k_4$ for $\tilde{A}$ and $k = k_5, k_6$ for $\tilde{b}$. If the dimension of $\tilde{A}$ is $n \times n$, then $k = k_1, \ldots, k_{n^2 + n}$. The system (7) becomes

$$\begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} B_1 \\ B_2 \end{pmatrix}. \qquad (8)$$

Use Cramer's rule to solve system (8). The solutions of (8) are functions of $\alpha \in [0, 1]$. The plots of $x_1$ and $x_2$ for all values of α can be found in figures 1 and 2 respectively.
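For each fixed α and endpoint selection, system (8) becomes a crisp 2×2 linear system that Cramer's rule solves directly. A sketch with illustrative coefficients (not the paper's regression values):

```python
# Sketch: Cramer's rule for a crisp 2x2 system
#   [[a11, a12], [a21, a22]] (x1, x2)^T = (b1, b2)^T,
# the building block of the vector-solution method for each alpha.

def cramer_2x2(a11, a12, a21, a22, b1, b2):
    det = a11 * a22 - a12 * a21
    if det == 0:
        raise ZeroDivisionError("singular coefficient matrix")
    x1 = (b1 * a22 - b2 * a12) / det
    x2 = (a11 * b2 - a21 * b1) / det
    return x1, x2

x1, x2 = cramer_2x2(2.0, 1.0, 1.0, 3.0, 5.0, 10.0)   # -> (1.0, 3.0)
```

Sweeping α (and the endpoint parameters $k$) through [0, 1] and collecting the resulting $(x_1, x_2)$ pairs traces out the plotted solution bands.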

Figure 1: Quantity of product on equilibrium point

Figure 2: Price on equilibrium point

After scaling back to the original magnitude and choosing α = 1, we conclude that the equilibrium point occurs at a production of 3.2083 × 10¹⁰ kg and a price of IDR 378.1.

V. CONCLUSIONS

The classic solution of a fuzzy linear system may not exist. The alternative method is to calculate a vector solution using the α-cuts method and Cramer's rule. Finding the equilibrium point of the supply and demand curves on the data of rice production during 1987–1997 is a good example of the implementation of this alternative method.

REFERENCES

[24]. J.J. Buckley, Y. Qu, Solving systems of linear fuzzy equations, Fuzzy Sets and Systems 43 (1991), 33–43.

[25]. C. Ponsard, Fuzzy mathematical models in economics, Fuzzy Sets and Systems 28 (3), 1988, 273-283.

[26]. G.J. Klir and G. Yuan, Fuzzy Sets and Fuzzy Logic, Theory and Applications, Prentice-Hall, New Jersey, 1995.

[27]. A. Bisserier, R. Boukezzoula, S. Galichet, An interval approach for fuzzy linear regression with imprecise data,IFSA-EUSFLAT 2009, 1305-1310.

[28]. P. Luic, D. Teodorovic, Simulated annealing for the multi-objective aircrew rostering problem, Transportation Research Part A 33(1) 1999, 19-45.

[29]. C. Kahraman (editor), Fuzzy Engineering Economics with Applications, Springer-Verlag, Berlin, 2008.

[30]. J.J. Buckley, The fuzzy mathematics of finance, Fuzzy Sets and Systems 21 (3) (1987) 257–273.

[31]. S. Muzzioli, H. Reynaerts, Fuzzy linear systems of the form $A_1 x + b_1 = A_2 x + b_2$, Fuzzy Sets and Systems 157 (2006), 939–951.

[32]. Zadeh, L.A. (1965). "Fuzzy sets", Information and Control 8 (3): 338–353.

[33]. A. Pfeilsticker, The systems approach and fuzzy set theory bridging the gap between mathematical and language-oriented economists, Fuzzy Sets and Systems 6 (3) 1981, 209–233.

[34]. M. Fedrizzi, M. Fedrizzi, W. Ostasiewicz, Towards fuzzy modelling in economics, Fuzzy Sets and Systems 54 (3) 1993, 259–268.

APPENDIX

Indonesia's rice production and consumption, 1987–1997

Fuzzification of the production data

Production (kg)                        Price (IDR)
(2.259e+10, 2.404e+10, 2.552e+10)      (140.6, 175, 209.4)
(2.404e+10, 2.552e+10, 2.698e+10)      (175, 209.4, 243.8)
(2.552e+10, 2.698e+10, 2.843e+10)      (175, 209.4, 243.8)
(2.552e+10, 2.698e+10, 2.843e+10)      (209.4, 243.8, 278.1)
(2.552e+10, 2.698e+10, 2.843e+10)      (243.8, 278.1, 312.5)
(2.698e+10, 2.843e+10, 2.991e+10)      (243.8, 278.1, 312.5)
(2.698e+10, 2.843e+10, 2.991e+10)      (312.5, 346.9, 381.3)
(2.698e+10, 2.843e+10, 2.991e+10)      (312.5, 346.9, 381.3)
(3.284e+10, 3.43e+10, 3.577e+10)       (312.5, 346.9, 381.3)
(3.43e+10, 3.577e+10, 3.723e+10)       (381.3, 415.6, 450)
(3.43e+10, 3.577e+10, 3.723e+10)       (415.6, 450, 484.4)

Year   Production (kg)   Consumption (kg)   Price (IDR)
1987   24047398200       23932084680        175
1988   25006390800       24650879870        210
1989   26834862600       27670245620        210
1990   27107762400       25938534960        250
1991   26811343200       25821981000        270
1992   28945521000       29962225150        295
1993   28909125000       27245040800        330
1994   27983538000       26833493200        340
1995   34823747700       27566148280        360
1996   35767191600       31328276850        400
1997   35438970000       27721333500        450


Compact Finite Difference Method for Solving Discrete Boltzmann

Equation

PRANOWO1 & A. GATOT BINTORO2

1 Senior Lecturer of Informatics Engineering, Atma Jaya University, Yogyakarta, Indonesia. (Email: [email protected])

2 Lecturer of Industrial Engineering, Atma Jaya University, Yogyakarta, Indonesia

ABSTRACT A fourth-order compact finite difference (FD) method for solving the two-dimensional Discrete Boltzmann Equation (DBE) for the simulation of fluid flows is proposed in this paper. The solution procedure is carried out in an Eulerian framework. The BGK (Bhatnagar–Gross–Krook) scheme is adopted to approximate the collision term. The convective terms are discretized using the 4th-order compact finite difference method to improve accuracy and stability. The semidiscrete equations are updated using the 4th-order explicit Runge-Kutta method. Preliminary results of the method applied to the Taylor-Green vortex flow benchmark are presented. We compared the numerical results with other numerical results, i.e. explicit 2nd- and 4th-order FD, and exact solutions. The comparisons showed excellent agreement.

KEYWORDS Compact finite difference; Boltzmann; BGK; Taylor vortex

I. INTRODUCTION

In the last decade the lattice-Boltzmann method (LBM) has attracted much attention in the simulation of fluid dynamics problems. Unlike conventional computational fluid dynamics methods, which discretize the macroscopic governing equations directly, the LBM solves the gas kinetic equation at the mesoscopic scale, i.e. the discrete Boltzmann equation with the Bhatnagar–Gross–Krook (BGK) relaxation for the collision operator. The BGK relaxation process allows the recovery of the Navier-Stokes equations through the Chapman-Enskog expansion at low Knudsen number. In gas kinetic theory, the evolution of the single-particle density distribution function $f(\mathbf{x}, \mathbf{e}, t)$, which represents the probability

density of a particle with unit mass moving with velocity $\mathbf{e}$ at point $\mathbf{x}$ at time $t$, is governed by the Boltzmann equation:

$$\frac{\partial f}{\partial t} + \mathbf{e} \cdot \nabla f = -\frac{f - f^{eq}}{\tau} \qquad (1)$$

where $f^{eq}$ is the equilibrium distribution and $\tau$ is the relaxation time. After discretizing the velocity space $\mathbf{e}$ into various directions, the 2-D Boltzmann equation for the velocity distribution function $f_i$ may be written as the discrete Boltzmann equation

$$\frac{\partial f_i}{\partial t} + \mathbf{e}_i \cdot \nabla f_i = -\frac{f_i - f_i^{eq}}{\tau}. \qquad (2)$$

The discrete velocities $\mathbf{e}_i$ are expressed as:

$$\mathbf{e}_i = \begin{cases} (0, 0), & i = 1, \\ \big(\cos\theta_i,\ \sin\theta_i\big), \quad \theta_i = (i - 2)\dfrac{\pi}{2}, & i = 2, 3, 4, 5, \\ \sqrt{2}\,\big(\cos\theta_i,\ \sin\theta_i\big), \quad \theta_i = (i - 6)\dfrac{\pi}{2} + \dfrac{\pi}{4}, & i = 6, 7, 8, 9. \end{cases}$$

The macroscopic density and velocity are recovered from the moments

$$\rho = \sum_{i} f_i, \qquad (3a)$$

$$\rho u_j = \sum_{i} f_i\, e_{ij}, \qquad (3b)$$

and the equilibrium distribution is

$$f_i^{eq} = \omega_i \rho \left[ 1 + \frac{\mathbf{e}_i \cdot \mathbf{u}}{c_s^2} + \frac{(\mathbf{e}_i \cdot \mathbf{u})^2}{2 c_s^4} - \frac{\mathbf{u} \cdot \mathbf{u}}{2 c_s^2} \right]$$

with $\omega_1 = 4/9$, $\omega_2 = \cdots = \omega_5 = 1/9$, and $\omega_6 = \cdots = \omega_9 = 1/36$. The

pressure can be calculated from $p = \rho c_s^2$, with speed of sound $c_s = 1/\sqrt{3}$ in lattice units, and the kinematic viscosity of the fluid is $\nu = \tau/3$. In the lattice Boltzmann method, eq. (2) is solved in the form

$$f_i(\mathbf{x} + \mathbf{e}_i \Delta t,\ t + \Delta t) = f_i(\mathbf{x}, t) - \frac{1}{\tau}\big[f_i(\mathbf{x}, t) - f_i^{eq}(\mathbf{x}, t)\big] \qquad (4)$$

using $\Delta x = \Delta y = \Delta t = 1$. The use of unit square mesh elements is restrictive, and several extensions of the LBM have been developed to overcome this restriction. Reference [1] used the finite difference method (FDM) with 2nd-order upwind discretization for the convective terms; there the FDM is extended to curvilinear coordinates with non-uniform grids. Unfortunately the 2nd-order upwind scheme makes the stencil longer, so it is not easy to handle the boundary conditions. Reference [2] used the FDM on non-uniform grids with implicit temporal discretization to improve stability. Many modified FDMs have been proposed to improve stability and numerical accuracy. Upwind FDM suffers from large dissipation error and the standard 2nd-order scheme suffers from large dispersion error. The spectral method [5] offers exact differentiation but suffers from low flexibility in the treatment of boundary conditions. In this paper, the 4th-order compact FDM is proposed to discretize the convective terms of eq. (2). The method is preferred due to its high accuracy and flexibility [4]. To maintain stability, the 4th-order explicit Runge-Kutta method is used to integrate the semi-discrete equation.

II. DISCRETIZATION

The linear convective terms of equation (2) are discretized using the 4th-order compact finite difference method:

$$\frac{1}{6}\left(\frac{\partial f_i}{\partial x}\right)_{k-1,l} + \frac{2}{3}\left(\frac{\partial f_i}{\partial x}\right)_{k,l} + \frac{1}{6}\left(\frac{\partial f_i}{\partial x}\right)_{k+1,l} = \frac{f_{i,k+1,l} - f_{i,k-1,l}}{2\Delta x} \qquad (5a)$$

$$\frac{1}{6}\left(\frac{\partial f_i}{\partial y}\right)_{k,l-1} + \frac{2}{3}\left(\frac{\partial f_i}{\partial y}\right)_{k,l} + \frac{1}{6}\left(\frac{\partial f_i}{\partial y}\right)_{k,l+1} = \frac{f_{i,k,l+1} - f_{i,k,l-1}}{2\Delta y} \qquad (5b)$$
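The compact scheme (5a) couples the unknown derivatives along a grid line through a tridiagonal system (cyclic for periodic boundaries). A minimal sketch, with our own helper name; the matrix is assembled densely for clarity, not efficiency:

```python
import numpy as np

# Sketch: periodic 4th-order compact first derivative, Eq. (5a).
# Solves (1/6) f'_{k-1} + (2/3) f'_k + (1/6) f'_{k+1} = (f_{k+1} - f_{k-1}) / (2 dx).
def compact_ddx(f, dx):
    n = len(f)
    A = np.zeros((n, n))
    for k in range(n):
        A[k, (k - 1) % n] = 1.0 / 6.0   # periodic wrap-around
        A[k, k] = 2.0 / 3.0
        A[k, (k + 1) % n] = 1.0 / 6.0
    rhs = (np.roll(f, -1) - np.roll(f, 1)) / (2.0 * dx)
    return np.linalg.solve(A, rhs)

# quick accuracy check: d/dx sin(x) should be close to cos(x)
x = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
err = np.max(np.abs(compact_ddx(np.sin(x), x[1] - x[0]) - np.cos(x)))
```

In practice a cyclic tridiagonal (Thomas-type) solver would replace the dense solve; the coefficients 1/6, 2/3, 1/6 are exactly those of the scheme above.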

After discretizing the convective terms using the 4th-order compact FD, we obtain the semi-discrete form of eq. (2):

$$\frac{\partial f_i}{\partial t} = -\mathbf{e}_i \cdot \nabla f_i - \frac{f_i - f_i^{eq}}{\tau} \equiv L(f_i). \qquad (6)$$

Then the time update is performed using the classical 4th-order explicit Runge-Kutta method.

III. ANALYSIS OF DISCRETIZATION

The analyses of the spatial and temporal discretizations are given in this section. For simplicity, the linear advection equation is taken as an example:

$$\frac{\partial u}{\partial t} + a\frac{\partial u}{\partial x} = 0, \qquad u(x, 0) = e^{ikx}, \qquad x \in (0, 2\pi], \qquad (7)$$

Figure 2. Finite Difference Stencil

Figure 1. Velocities in 2-D Lattice Boltzmann model (D2Q9)


where $a$ is a constant velocity. The exact solution of eq. (7) is easily computed:

$$u(x, t) = e^{i(kx - \omega t)}, \qquad (8)$$

where $k$ is the wave number and $\omega$ is the angular frequency. From the exact solution we obtain the exact dispersion relation $\omega = ak$. By substituting the local solution $u_i(x, t) = U_i\, e^{i(k^* x - \omega t)}$ into eq. (6), we obtain the numerical dispersion relation [5]:

$$i k^* \Delta x = \frac{1.5\, i \sin(k \Delta x)}{1 + \frac{1}{2}\cos(k \Delta x)}. \qquad (9)$$
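The quoted properties of Eq. (9) can be checked numerically; a quick sketch (variable names are our own):

```python
import numpy as np

# Sketch: modified wavenumber of the 4th-order compact scheme, Eq. (9):
#   k* dx = 1.5 sin(k dx) / (1 + 0.5 cos(k dx))
theta = np.linspace(0.0, np.pi, 100001)          # theta = k dx
kstar = 1.5 * np.sin(theta) / (1.0 + 0.5 * np.cos(theta))

# k* dx is purely real, so the scheme has no dissipation error;
# its maximum bounds the spectral radius of the spatial operator
kmax = np.max(kstar)                              # close to sqrt(3) = 1.732...

# largest k dx with dispersion error |k dx - k* dx| <= 1e-2
ok = theta[np.abs(theta - kstar) <= 1e-2]
threshold = ok[-1]                                # close to the paper's 1.0893
```

The maximum value near √3 matches the quoted eigenvalue bound ±1.732 a/Δx, and the dispersion-error threshold near 1.09 reproduces the quoted PPW ≈ 5.77.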

From eq. (9) it can be seen that the numerical dispersion relation of the 4th-order compact FD has no imaginary component, so it can be concluded that the 4th-order compact FD is conservative and has no dissipation error. For an acceptable dispersion error $|k\Delta x - k^*\Delta x| \le 10^{-2}$, we find that $k\Delta x \le 1.0893$. Therefore the number of spatial grid points per wavelength (PPW) is $\mathrm{PPW} = 2\pi/1.0893 = 5.7683$. From the right-hand side of eq. (9), the eigenvalues of the 4th-order compact FD can be calculated; they are purely imaginary, covering $[-1.732\,a/\Delta x,\ 1.732\,a/\Delta x]$. The stability of the 4th-order explicit Runge-Kutta scheme can be analyzed by considering eq. (6) in the form

$$\frac{\partial f_i}{\partial t} = L(f_i), \qquad (10)$$

where $L$ is the residual term containing the spatial terms of the governing equations. Substituting $f_i = \hat{f}_i(t)\, e^{ikx}$ into eq. (6), we obtain

$$\frac{\partial \hat{f}_i}{\partial t} = \lambda \hat{f}_i, \qquad (11)$$

where $\hat{f}_i$ is the Fourier coefficient, $i = \sqrt{-1}$, and $\lambda$ is a complex number. Expanding eq. (11) using the 4th-order explicit Runge-Kutta scheme gives the amplification factor:

$$G = \frac{\hat{f}_i^{\,n+1}}{\hat{f}_i^{\,n}} = 1 + \lambda\Delta t + \frac{(\lambda\Delta t)^2}{2} + \frac{(\lambda\Delta t)^3}{6} + \frac{(\lambda\Delta t)^4}{24}. \qquad (12)$$

The stability condition requires that the amplification factor be bounded, $|G| \le 1$ (13). The stability region in the complex plane can be seen in figure 4. The eigenvalues of the 4th-order RK scheme on the imaginary axis cover

Figure 3. Dispersion error

Figure 4. The stability region in complex plane


$\lambda\Delta t \in [-2.828\,i,\ 2.828\,i]$; the stability condition of the fully discrete equation is therefore

$$\mathrm{CFL} = \frac{a\,\Delta t}{\Delta x} \le \frac{2.828}{1.732} = 1.6328, \qquad (14)$$

where CFL is the Courant-Friedrichs-Lewy number.

IV. NUMERICAL RESULTS

The performance of the numerical method is tested by application to two benchmark problems. The first problem to be solved is the 1-D linear convection problem [3]:

$$\frac{\partial u}{\partial t} + 2\pi \frac{\partial u}{\partial x} = 0, \qquad u(x, 0) = e^{\sin x}, \qquad x \in (0, 2\pi],$$

with periodic boundary conditions. The exact solution of the above equation is a right-moving wave of the form

$$u(x, t) = e^{\sin(x - 2\pi t)},$$

and we use $\Delta x = 0.0982,\ 0.0491,\ 0.0245$.

We compare the results with explicit 2nd- and 4th-order FD. We take a constant $\Delta t = 5 \times 10^{-3}$. From figure 5 we can see that the 4th-order compact FD has the lowest error among the three methods. The accuracy orders are in good agreement with the theoretical results. The second problem to be solved is the 2-D decaying Taylor-Green vortex flow [5]. The Taylor-Green vortex flow has the following analytic solution of the incompressible Navier-Stokes equations in 2-D:

$$u_x(x, y, t) = -U_0 \cos(k_x x)\sin(k_y y)\, e^{-(k_x^2 + k_y^2)\nu t},$$

$$u_y(x, y, t) = U_0\, \frac{k_x}{k_y}\, \sin(k_x x)\cos(k_y y)\, e^{-(k_x^2 + k_y^2)\nu t},$$

$$p(x, y, t) = p_0 - \frac{U_0^2}{4}\left[\cos(2 k_x x) + \frac{k_x^2}{k_y^2}\cos(2 k_y y)\right] e^{-2(k_x^2 + k_y^2)\nu t},$$

where $U_0$ is the initial velocity amplitude and $k_x$, $k_y$ are the wave numbers in the $x$ and $y$ directions. We use a 2-D system of size 32 × 32 with periodic boundary conditions in both directions. The simulation parameters are $U_0 = 0.01$, $k_x = k_y = 2$, $\Delta t = 0.005$, and $\nu = 0.0018$. The initial condition of the velocity distribution

Figure 5. Convergence history for the linear convection problem

Figure 6. Accuracy order

Figure 7. Velocity fields at t=2


function $f_i$ is actually unknown; it is not easy to generate a consistent initial condition for $f_i$, and research on this is still in progress [6]. In this paper we use a simple approach: the equilibrium distribution function $f_i^{eq}$ is used to initialize $f_i$.

Figure 8. Density distribution at t=2.

Figures 7 and 8 show the computed velocity field and density at $t = 2$. Numerical and exact solutions of the vertical velocity for $t = 2$ and $t = 150$ are compared in figure 8, showing excellent agreement. We compared the vertical velocity error of the 4th-order compact FD with the explicit 2nd- and 4th-order FD; the comparisons show that the 4th-order compact FD is much more accurate than the 2nd-order FD and slightly more accurate than the explicit 4th-order FD. Figures 9a and 9b show the comparisons. Figure 10 shows the evolution of the averaged error for the 4th-order compact, 2nd-order and explicit 4th-order FD schemes. It can be seen that the averaged errors of the 4th-order compact FD and the explicit 4th-order FD are almost equal, and the averaged error of the 2nd-order FD scheme is higher than the others.

Figure 10. Convergence history of the Taylor-Green vortex problem

Figure 8. Comparison of vertical velocity at t=2 and t=150

Figure 9a. Vertical velocity error at t=2 and y=3.043

Figure 9b. Vertical velocity error at t=150 and y=3.043


V. CONCLUSIONS

In this paper we have presented a 4th-order compact finite difference method for solving the two-dimensional Discrete Boltzmann Equation. The proposed method has been verified on the 1-D convection equation and the Taylor-Green vortex flow benchmark. The excellent agreement with the exact solutions and with the results of the 2nd-order and explicit 4th-order FD shows the excellent accuracy and stability of the proposed method.

ACKNOWLEDGEMENTS

The work described in this paper was fully supported by a grant from LPPM of Atma Jaya Yogyakarta University.

REFERENCES

[1]. Mei, R, and Shyy, W. (1998). "On the

Finite Difference-Based Lattice Boltzmann Method in Curvilinear Coordinates," Journal of Computational Physics, 143, pp 426-448.

[2]. Toelke, J. et al.(1998). "Implicit discretization and non-uniform mesh refinement approaches for FD discretizations of LBGK models," Int. Journal of Modern Physics C, 9(8), pp 1143-1157.

[3]. Hesthaven, JS, Gottlieb, S, and Gottlieb D

(2004). "Spectral Methods for Time Dependent Problems," Cambridge University Press, UK.

[4]. Lele, SK (1992). "Compact Finite Difference Schemes with Spectral-like Resolution," Journal of Computational Physics, 103, pp 16-42.

[5]. Wilson, RV, and Demuren, A O (2004). “Higher-Order Compact Schemes for Numerical Simulation of Incompressible Flows,” NASA/CR-1998-206922, ICASE Report No. 98-13, Hampton, USA.

[6]. Mei, R et al. (2006). "Consistent initial conditions for lattice Boltzmann simulations," Computers & Fluids, 35, pp 855-862.


Natural convection heat transfer with Al2O3 nanofluids at low Rayleigh number

Zailan Siri(1), Ishak Hashim(2) and Rozaini Roslan(3 ) (1)Institute of Mathematical Sciences, University of Malaya, 50603 Kuala Lumpur, Malaysia

(Email: [email protected]) (2) Centre for Modelling & Data Analysis, School of Mathematical Sciences, Universiti Kebangsaan Malaysia,

43600 Bangi, Selangor, Malaysia (Email: [email protected])

(3)Centre for Research in Applied Mathematics, Faculty of Science, Arts & Heritage Universiti Tun Hussein Onn Malaysia, 86400 Parit Raja, Johor, Malaysia

ABSTRACT Natural convection flow with variable sidewall temperatures in a square enclosure heated from the side is investigated. The left wall has a variable hot temperature and the right wall has a constant cold temperature. The top and bottom horizontal walls are kept adiabatic. The fluid in the enclosure is a water-based nanofluid containing Al2O3 nanoparticles. A finite difference method is applied to solve the nonlinear governing differential equations. It is shown that the addition of nanoparticles decreases the maximum absolute value of the stream function, and therefore natural convection decreases.

KEYWORDS Natural convection; variable wall temperatures; nanofluids; Al2O3 nanoparticles; finite difference method.

I. INTRODUCTION

Natural convection flow in a square enclosure is an important phenomenon in engineering systems due to its wide applications in electronic cooling and solar energy collection [17]. Choi [3] reported that heat transfer can be enhanced by using solid particles of sizes 10-50 nm in the base fluid (i.e. nanofluids). Nanofluids pose a promising alternative for heat transfer enhancement. However, the viscosity of nanofluids was found to increase abnormally, which suppresses the buoyancy current [22]. The classical Maxwell-Garnett model was the first to explain the thermal conductivity behavior of nano-scale solid particles in a base fluid, but it does not consider important mechanisms of heat transfer in nanofluids such as Brownian motion and the effects of temperature and nanoparticle size. Chon et al. [5] were the first to conduct experiments on Al2O3 nanofluids over a wide range of temperatures (21-71 °C) and nanoparticle sizes (11-150 nm). In addition, they derived a correlation for the thermal conductivity ratio as a function of nanoparticle size and temperature. They concluded that a higher temperature and a smaller nanoparticle size increase the thermal conductivity of nanofluids. The Brinkman model for the viscosity of nanofluids was shown to underestimate the effective viscosity [15,19]; it also does not consider the nanofluid temperature and nanoparticle size, and it was derived for particles much larger than nanoparticles. Therefore, Nguyen et al. [15] responded with extensive experiments over a wide range of temperatures and several nanoparticle sizes to determine the effective viscosity of nanofluids. They demonstrated that viscosity drops sharply with temperature, especially at high concentrations of nanoparticles, and that smaller nanoparticle sizes decrease the viscosity of nanofluids. They also derived a viscosity ratio correlation. Abu-Nada et al. [1] applied the thermal conductivity correlation of Chon [5] and the viscosity data of Nguyen et al. [15] to predict the heat transfer behaviour in a differentially heated rectangular enclosure. They observed that heat transfer was reduced by increasing the volume fraction of nanoparticles above 5% at high Rayleigh number. This new finding corrects findings on the effect of nanoparticle concentration on heat transfer enhancement reported by [11,20,18,2,14,16,6,12]. However, Abu-Nada et al. [1] tested only a fixed nanoparticle size and a rectangular enclosure.


The purpose of this paper is to analyze numerically the heat transfer behaviour in a square enclosure filled with nanofluids, with variable sidewall temperatures. Al2O3 nanoparticles are used in this study.

II. MATHEMATICAL FORMULATION

Fig. 1 shows a schematic diagram of the differentially heated square enclosure. The left wall has a variable hot temperature ($T_h$) and the right wall has a constant cold temperature ($T_c$). The top and bottom horizontal walls are kept adiabatic. The fluid in the enclosure is a water-based nanofluid containing Al2O3 nanoparticles. The continuity, momentum, and energy equations for laminar, steady-state natural convection in the two-dimensional enclosure can be written in dimensional form as follows (Ghasemi and Aminossadati [8,9]):

Figure 1. Physical model

$$\frac{\partial u}{\partial x} + \frac{\partial v}{\partial y} = 0, \qquad (1)$$

$$u\frac{\partial u}{\partial x} + v\frac{\partial u}{\partial y} = -\frac{1}{\rho_{nf}}\frac{\partial p}{\partial x} + \nu_{nf}\left(\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2}\right), \qquad (2)$$

$$u\frac{\partial v}{\partial x} + v\frac{\partial v}{\partial y} = -\frac{1}{\rho_{nf}}\frac{\partial p}{\partial y} + \nu_{nf}\left(\frac{\partial^2 v}{\partial x^2} + \frac{\partial^2 v}{\partial y^2}\right) + \frac{(\rho\beta)_{nf}}{\rho_{nf}}\, g\,(T - T_c), \qquad (3)$$

$$u\frac{\partial T}{\partial x} + v\frac{\partial T}{\partial y} = \alpha_{nf}\left(\frac{\partial^2 T}{\partial x^2} + \frac{\partial^2 T}{\partial y^2}\right), \qquad (4)$$

where

$$\rho_{nf} = (1 - \phi)\rho_f + \phi\rho_s,$$

$$\alpha_{nf} = \frac{k_{nf}}{(\rho c_p)_{nf}},$$

$$(\rho c_p)_{nf} = (1 - \phi)(\rho c_p)_f + \phi(\rho c_p)_s,$$

$$(\rho\beta)_{nf} = (1 - \phi)(\rho\beta)_f + \phi(\rho\beta)_s.$$

The dynamic viscosity ratio of water-Al2O3 nanofluids at ambient conditions was derived by Nguyen et al. [15] as

$$\frac{\mu_{nf}}{\mu_f} = 0.904\, e^{0.1483\phi} \qquad \text{and} \qquad \frac{\mu_{nf}}{\mu_f} = 1 + 0.025\phi + 0.015\phi^2$$

for 47 nm and 36 nm particle sizes respectively, while the thermal conductivity ratio of water-Al2O3 nanofluids calculated by Chon et al. [5] is

$$\frac{k_{nf}}{k_f} = 1 + 64.7\,\phi^{0.7460}\left(\frac{d_f}{d_p}\right)^{0.3690}\left(\frac{k_f}{k_p}\right)^{0.7476}\mathrm{Pr}^{0.9955}\,\mathrm{Re}^{1.2321}$$

with

$$\mathrm{Pr} = \frac{\mu_f}{\rho_f\,\alpha_f}, \qquad \mathrm{Re} = \frac{\rho_f\, k_b\, T}{3\pi\,\mu_f^2\, l_f}, \qquad \mu_f = 2.414\times10^{-5}\times 10^{\,247.8/(T - 140)},$$

where $k_b = 1.3807\times10^{-23}$ J/K is the Boltzmann constant, $l_f = 0.17$ nm is the mean free path of the fluid particles, $d_f = 0.384$ nm is the molecular diameter of water, and $\mu_f$ is the temperature-dependent viscosity of water. The thermo-physical properties of water and Al2O3, as in Abu-Nada et al. [1], can be found in Table 1.

Table 1. Thermo-physical properties of water and Al2O3

Physical property     Water    Al2O3
Cp (J/kg·K)           4179     765
ρ (kg/m³)             997.1    3970
k (W/m·K)             0.613    25
β × 10⁻⁵ (1/K)        21       0.85
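A sketch of evaluating the quoted correlations with the Table 1 constants (function names are our own; the temperature and particle diameter passed in are illustrative inputs, and the units of φ follow the quoted correlations):

```python
import math

# Sketch: nanofluid property correlations quoted above.
K_B = 1.3807e-23      # Boltzmann constant (J/K)
L_F = 0.17e-9         # mean free path of water (m)
D_F = 0.384e-9        # molecular diameter of water (m)

def mu_water(T):
    """Temperature-dependent water viscosity, mu = 2.414e-5 * 10^(247.8/(T-140))."""
    return 2.414e-5 * 10.0 ** (247.8 / (T - 140.0))

def viscosity_ratio_47nm(phi):
    """Nguyen et al. correlation for 47 nm Al2O3 particles."""
    return 0.904 * math.exp(0.1483 * phi)

def conductivity_ratio(phi, d_p, T, k_f=0.613, k_p=25.0, rho_f=997.1, cp_f=4179.0):
    """Chon et al. correlation, with Pr and Re defined as in the text."""
    mu = mu_water(T)
    alpha_f = k_f / (rho_f * cp_f)
    pr = mu / (rho_f * alpha_f)
    re = rho_f * K_B * T / (3.0 * math.pi * mu ** 2 * L_F)
    return 1.0 + 64.7 * phi ** 0.7460 * (D_F / d_p) ** 0.3690 \
               * (k_f / k_p) ** 0.7476 * pr ** 0.9955 * re ** 1.2321
```

At φ = 0 both ratios reduce to the base-fluid values (0.904 is the correlation's fitted offset for the 47 nm data), and the conductivity ratio grows with φ and temperature, as the text describes.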

The appropriate initial and boundary conditions are:

$$t = 0:\quad u = v = 0,\quad T = T_c,\quad 0 \le x \le L,\ 0 \le y \le L, \qquad (5)$$

$$t > 0:\quad u = v = 0,\quad \frac{\partial T}{\partial y} = 0 \quad \text{at } y = 0 \text{ and } y = L, \qquad (6)$$

$$u = v = 0,\quad T = T_h(y) \quad \text{at } x = 0, \qquad (7)$$

$$u = v = 0,\quad T = T_c \quad \text{at } x = L, \qquad (8)$$

where $T_h(y)$ is the prescribed variable hot-wall temperature profile.

Introducing the stream function and vorticity, defined as

$$u = \frac{\partial \psi}{\partial y}, \qquad v = -\frac{\partial \psi}{\partial x}, \qquad \omega = \frac{\partial v}{\partial x} - \frac{\partial u}{\partial y}, \qquad (9)$$

the stream function (9) satisfies the continuity Eq. (1). The vorticity equation is found by eliminating the pressure between the two momentum equations, i.e. by taking the y-derivative of Eq. (2) and subtracting the x-derivative of Eq. (3). This gives:

$$\frac{\partial \psi}{\partial y}\frac{\partial \omega}{\partial x} - \frac{\partial \psi}{\partial x}\frac{\partial \omega}{\partial y} = \nu_{nf}\left(\frac{\partial^2 \omega}{\partial x^2} + \frac{\partial^2 \omega}{\partial y^2}\right) + \frac{(\rho\beta)_{nf}}{\rho_{nf}}\, g\, \frac{\partial T}{\partial x}, \qquad (10)$$

$$\frac{\partial \psi}{\partial y}\frac{\partial T}{\partial x} - \frac{\partial \psi}{\partial x}\frac{\partial T}{\partial y} = \alpha_{nf}\left(\frac{\partial^2 T}{\partial x^2} + \frac{\partial^2 T}{\partial y^2}\right), \qquad (11)$$

$$\frac{\partial^2 \psi}{\partial x^2} + \frac{\partial^2 \psi}{\partial y^2} = -\omega. \qquad (12)$$

Now we introduce the following non-dimensional variables:

$$X = \frac{x}{l}, \quad Y = \frac{y}{l}, \quad U = \frac{ul}{\alpha_f}, \quad V = \frac{vl}{\alpha_f}, \quad \Psi = \frac{\psi}{\alpha_f}, \quad \Omega = \frac{\omega l^2}{\alpha_f}, \quad \Theta = \frac{T - T_c}{T_h - T_c}, \qquad (13)$$

where the subscript "0" stands for the reference state, taken at ambient conditions. Using the dimensionless parameters in Eq. (13) together with Eqs. (9)-(12), we can rewrite the governing equations in dimensionless form, where

$$\mathrm{Ra} = \frac{g\,\beta_f\,(T_h - T_c)\,l^3}{\nu_{f0}\,\alpha_{f0}}$$

is the Rayleigh number and

$$\Pr = \frac{\nu_{f0}}{\alpha_{f0}}$$

is the Prandtl number.

The boundary conditions for the variable sidewall temperature [10] are:

$$\Psi = 0 \quad \text{for all solid boundaries}, \qquad (13)$$

$$\Omega = -\frac{\partial^2 \Psi}{\partial X^2} \quad \text{and} \quad \Theta = \Theta_h(Y) \quad \text{on the left hot wall}, \qquad (14)$$

$$\Omega = -\frac{\partial^2 \Psi}{\partial X^2} \quad \text{and} \quad \Theta = 0 \quad \text{on the right cold wall}, \qquad (15)$$

$$\Omega = -\frac{\partial^2 \Psi}{\partial Y^2} \quad \text{and} \quad \frac{\partial \Theta}{\partial Y} = 0 \quad \text{on the adiabatic walls}, \qquad (16)$$

where $\Theta_h(Y)$ is the prescribed dimensionless hot-wall temperature profile.

Once the temperature is known, we can obtain the rate of heat transfer from the hot wall, given in terms of the average Nusselt number as:

$$\overline{\mathrm{Nu}} = -\int_{0}^{1} \frac{k_{nf}}{k_f}\, \frac{\partial \Theta}{\partial X}\bigg|_{X=0}\, dY. \qquad (17)$$

III. SOLUTION APPROACH

We employed the finite difference method to solve Eqs. (10)-(12) subject to the boundary conditions (13)-(16). The central difference method is applied for discretizing the equations, and the resulting algebraic equations are solved by Gauss-Seidel iteration. The unknowns Θ, Ψ and Ω are iterated until the following convergence criterion is fulfilled:

$$\frac{\displaystyle\sum_{i,j}\left|\xi_{i,j}^{\,n+1} - \xi_{i,j}^{\,n}\right|}{\displaystyle\sum_{i,j}\left|\xi_{i,j}^{\,n+1}\right|} \le \epsilon, \qquad (18)$$

where ξ stands for Θ, Ψ or Ω, n is the iteration number and ε is the convergence criterion, set at $10^{-6}$ in this study. Different mesh sizes from 11×11 to 101×101 were used to carry out this study. It is clear from the grid independence test shown in Fig. 2 that a 41×41 uniform grid is enough to investigate the current problem.
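A minimal sketch of the iteration on the Poisson part (12) alone, with the stopping rule (18); the grid size and the manufactured vorticity field below are illustrative, not the paper's setup:

```python
import numpy as np

# Sketch: central-difference Gauss-Seidel for d2Psi/dX2 + d2Psi/dY2 = -Omega
# with Psi = 0 on the boundary, stopped by the relative-change rule (18).
def solve_psi(omega, h, eps=1e-6, max_iter=20000):
    n = omega.shape[0]
    psi = np.zeros_like(omega)
    for _ in range(max_iter):
        old = psi.copy()
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                psi[i, j] = 0.25 * (psi[i + 1, j] + psi[i - 1, j]
                                    + psi[i, j + 1] + psi[i, j - 1]
                                    + h * h * omega[i, j])
        denom = np.sum(np.abs(psi))
        if denom > 0 and np.sum(np.abs(psi - old)) / denom < eps:  # criterion (18)
            break
    return psi

n = 21
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")
# manufactured Omega so the exact solution is Psi = sin(pi X) sin(pi Y)
omega = 2.0 * np.pi ** 2 * np.sin(np.pi * X) * np.sin(np.pi * Y)
psi = solve_psi(omega, h)
```

In the full scheme the same sweep-and-test loop also updates Θ and Ω from the discretized Eqs. (10)-(11); the manufactured solution here lets the accuracy of the sweep be checked directly (its maximum should be close to 1).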

The dimensionless governing equations referred to above are

$$\frac{\partial \Psi}{\partial Y}\frac{\partial \Omega}{\partial X} - \frac{\partial \Psi}{\partial X}\frac{\partial \Omega}{\partial Y} = \frac{\Pr\,(\mu_{nf}/\mu_f)}{(1-\phi) + \phi\,\rho_p/\rho_f}\left(\frac{\partial^2 \Omega}{\partial X^2} + \frac{\partial^2 \Omega}{\partial Y^2}\right) + \mathrm{Ra}\,\Pr\,\frac{(\rho\beta)_{nf}}{\rho_{nf}\,\beta_f}\,\frac{\partial \Theta}{\partial X}, \qquad (10)$$

$$\frac{\partial \Psi}{\partial Y}\frac{\partial \Theta}{\partial X} - \frac{\partial \Psi}{\partial X}\frac{\partial \Theta}{\partial Y} = \frac{k_{nf}/k_f}{(1-\phi) + \phi\,(\rho c_p)_p/(\rho c_p)_f}\left(\frac{\partial^2 \Theta}{\partial X^2} + \frac{\partial^2 \Theta}{\partial Y^2}\right), \qquad (11)$$

$$\frac{\partial^2 \Psi}{\partial X^2} + \frac{\partial^2 \Psi}{\partial Y^2} = -\Omega. \qquad (12)$$


Figure 2. Nu for different grid system

IV. RESULTS AND DISCUSSION

A numerical analysis has been conducted to investigate the effect of an Al2O3-water nanofluid filling a two-dimensional square enclosure. We present numerical results for streamlines and isotherms at $\mathrm{Ra} = 10^3$ and $0\% \le \phi \le 5\%$. We limited our investigation to volume fractions $0\% \le \phi \le 5\%$ as explained by [7,22,4,23,13,24]: in those investigations, nanofluids were found to exhibit substantially higher thermal properties, particularly thermal conductivity, even when the concentrations of suspended nanoparticles are very low, i.e. $\phi \le 5\%$. The flow and energy transport in the enclosure for the water-based Al2O3 nanofluid in the case of variable sidewall temperature with $\mathrm{Ra} = 10^3$ is shown in Fig. 3 for various solid volume fractions. The flow is single-cellular and the shape of the main cell is circular; the addition of nanoparticles affects the diameter of the cell. In all cases, the flow rotates in the clockwise direction. As shown by Fig. 3, the maximum absolute value of the stream function, $|\Psi|_{\max}$, decreases as the volume fraction increases, so natural convection is weakened.

V. CONCLUSIONS

The problem of natural convection heat transfer in a square enclosure filled with Al2O3 nanofluids has been studied numerically. Various solid volume fractions at low Rayleigh number have been considered for a variable sidewall temperature, and the flow and temperature fields as well as the heat transfer rate have been analyzed. We conclude that heat transfer within the enclosure depends on the volume fraction of nanoparticles. The theoretical predictions in this paper are hoped to be a useful guide for experimentalists studying natural convection heat transfer in square enclosures filled with nanofluids.

(a)

(b)

(c)

Figure 3. Contour plots of the stream function (left) and isotherms (right) for (a) pure water, (b) φ = 0.04 and (c) φ = 0.05

ACKNOWLEDGEMENTS

The authors would like to acknowledge the financial support received from Grant UKM-GUP-BTT-07-25-173 and the University of Malaya.

REFERENCES

[1]. Abu-Nada, E, Masoud, Z, Oztop, HF, and Campo, A (2009). "Effect of nanofluid variable properties on natural convection in enclosures," Int J Thermal Sci, in press.

[2]. Aminossadati, SM, and Ghasemi, B (2009). "Natural convection cooling of a localised heat source at the bottom of a nanofluid-filled enclosure," Eur J Mech B/Fluids, Vol 28, pp 630-640.

[3]. Choi, SUS (1995). "Enhancing thermal conductivity of fluids with nanoparticles," ASME Fluids Eng Div, Vol 231, pp 99-105.

[4]. Choi, SUS, Zhang, ZG, Yu, W, Lockwood, FE, and Grulke, EA (2001). "Anomalous thermal conductivity enhancement in nanotube suspensions," Appl Phys Lett, Vol 79, pp 2252-2254.

[5]. Chon, CH, Kihm, KD, Lee, SP, and Choi, SUS (2005). "Empirical correlation finding the role of temperature and particle size for nanofluid (Al2O3) thermal conductivity enhancement," Appl Phys Lett, Vol 87, pp 1-3.

[6]. Das, MK, and Ohal, PS (2009). "Natural convection heat transfer augmentation in a partially heated and partially cooled square cavity utilizing nanofluids," Int J Numer Meth Heat Fluid Flow, Vol 19, pp 411-431.

[7]. Eastman, JA, Choi, SUS, Li, S, and Thompson, LJ (1997). "Enhanced thermal conductivity through the development of nanofluids," Proceedings of the Symposium on Nanophase and Nanocomposite Materials II, Vol 457, pp 3-11.

[8]. Ghasemi, B, and Aminossadati, SM (2009). "Natural convection heat transfer in an inclined enclosure filled with a water-CuO nanofluid," Numerical Heat Transfer, Part A, Vol 59, pp 807-823.

[9]. Ghasemi, B, and Aminossadati, SM (2010). "Periodic natural convection in a nanofluid-filled enclosure with oscillating heat flux," Int J Thermal Sci, Vol 49, pp 1-9.

[10]. Kandaswamy, P, and Eswaramurthi, M (2008). "Density maximum effect on buoyancy-driven convection of water in a porous cavity with variable side wall temperatures," Int J Heat Mass Transfer, Vol 51, pp 1955-1961.

[11]. Khanafer, K, Vafai, K, and Lightstone, M (2003). "Buoyancy-driven heat transfer enhancement in a two-dimensional enclosure utilizing nanofluids," Int J Heat Mass Transfer, Vol 46, pp 3639-3653.

[12]. Kumar, S, Prasad, SK, and Banerjee, J (2009). "Analysis of flow and thermal field in nanofluid using a single phase thermal dispersion model," Appl Math Model, in press.

[13]. Li, CH, and Peterson, GP (2006). "Experimental investigation of temperature and volume fraction variations on the effective thermal conductivity of nanoparticle suspensions (nanofluids)," J Appl Phys, Vol 99, 084314.

[14]. Muthtamilselvan, M, Kandaswamy, P, and Lee, J (2009). "Heat transfer enhancement of copper-water nanofluids in a lid-driven enclosure," Commun Nonlinear Sci Numer Simulat, in press.

[15]. Nguyen, CT, Desgranges, F, Roy, G, Galanis, N, Mare, T, Boucher, S, and Angue Mintsa, H (2007). "Temperature and particle-size dependent viscosity data for water-based nanofluids: hysteresis phenomenon," Int J Heat Fluid Flow, Vol 28, pp 1492-1506.

[16]. Ogut, EB (2009). "Natural convection of water-based nanofluids in an inclined enclosure with a heat source," Int J Thermal Sci, in press.

[17]. Ostrach, S (1988). "Natural convection in enclosures," J Heat Transfer, Vol 110, pp 1175-1190.

[18]. Oztop, HF, and Abu-Nada, E (2008). "Numerical study of natural convection in partially heated rectangular enclosures filled with nanofluids," Int J Heat Fluid Flow, Vol 29, pp 1326-1336.

[19]. Polidori, G, Fohanno, S, and Nguyen, CT (2007). "A note on heat transfer modelling of Newtonian nanofluids in laminar free convection," Int J Thermal Sci, Vol 46, pp 739-744.

[20]. Tiwari, RK, and Das, MK (2007). "Heat transfer augmentation in a two-sided lid-driven differentially heated square cavity utilizing nanofluids," Int J Heat Mass Transfer, Vol 50, pp 2002-2018.

[21]. Wang, X, Xu, X, and Choi, SUS (1994). "Thermal conductivity of nanoparticle-fluid mixture," J Thermophysics Heat Transfer, Vol 13, pp 474-480.

[22]. Wang, X, Xu, X, and Choi, SUS (1999). "Thermal conductivity of nanoparticle-fluid mixture," J Thermophys Heat Transfer, Vol 13, pp 474-480.

[23]. Xuan, Y, Li, Q, and Yu, W (2003). "Aggregation structure and thermal conductivity of nanofluids," AIChE J, Vol 49, pp 1038-1043.

[24]. Zhu, H, Zhang, C, Liu, S, Tang, Y, and Yin, Y (2006). "Effects of nanoparticle clustering and alignment on thermal conductivities of Fe3O4 aqueous nanofluids," Appl Phys Lett, Vol 89, 023123.


NOMENCLATURE

Cp   specific heat capacity
g    gravitational acceleration
k    thermal conductivity
Nu   Nusselt number
Pr   Prandtl number
Ra   Rayleigh number
T    fluid temperature
U, V dimensionless velocity components in the X- and Y-directions
X, Y dimensionless space coordinates

Greek symbols
α    thermal diffusivity
β    thermal expansion coefficient
ν    kinematic viscosity
ø    solid volume fraction
Ψ    dimensionless stream function
Θ    dimensionless temperature
Ω    dimensionless temperature
ρ    density of fluid
μ    dynamic viscosity

Subscripts
c    cold
f    fluid
h    hot
nf   nanofluid
0    reference value
p    particle

Superscripts
—    mean


Optimization model for estimating productivity growth in Malaysian food manufacturing industry

Nordin Hj. Mohamad(1), and Fatimah Said(2) (1)Institute of Mathematical Sciences, University of Malaya, 56300 Kuala Lumpur, Malaysia

(Email: [email protected]) (2)Faculty of Economics and Administration, University of Malaya, 56300 Kuala Lumpur, Malaysia

(Email: [email protected]).

ABSTRACT
In this study, a mathematical programming-based optimization technique known as data envelopment analysis (DEA) is used to compute and analyze the decomposition of the Malmquist index of total factor productivity (TFP) into technological change, technical efficiency change and scale efficiency change, using an output-oriented DEA model under the assumptions of constant and variable returns to scale. The methodology is applied to selected 5-digit Malaysian food manufacturing industries using annual time-series data for the period 2002-2007. The results suggest that TFP growth is largely due to positive technological change rather than technical efficiency change.

KEYWORDS
Data envelopment analysis, Malmquist productivity index, technological change, technical efficiency change.

I. INTRODUCTION
Decomposing productivity growth is important in identifying and understanding the sources of growth, so as to provide direction to decision makers in their policy making. Traditionally, productivity is defined mathematically as the relationship between a set of input values and the output values. This gives rise to the concept of partial productivity, which represents the change in output produced corresponding to each input used, such as labour productivity and capital productivity. However, as technology progresses, it becomes possible to produce more from fewer inputs by adopting better means and methods of production. It is therefore essential to analyze productivity trends as well as technological changes in order to understand the industrial situation and the productivity dynamics. Thus, output or productivity growth is not attributed to growth in inputs only. Improvements in input quality, efficient use of production processes, adoption of new technology and other non-physical factors also contribute to the dynamics of productivity growth. This non-physical contributor is known as total factor productivity (TFP). In short, TFP captures any effect on total output not caused by inputs or economies of scale, and is often found to be a significant contributor to output growth. Improvement in TFP enables an industry to generate a larger output from the same resources, shifting it to a higher frontier. The technological change component of productivity growth captures shifts in the frontier technology and can be interpreted as a measure of innovation. Technical efficiency improvement, or the catching-up effect, is measured by the difference between the frontier output and the realized output. The decomposition of TFP into technological change and technical efficiency change is therefore useful in distinguishing innovation, or the adoption of new technology by best-practice firms, from the diffusion of technology. The rest of the paper is organized as follows. The next section reviews selected literature on productivity performance analysis in the manufacturing sector. This is followed by the definition of the DEA output distance function for two time periods, and the formulation and decomposition of the Malmquist TFP growth index. The methodology is applied to a set of 32 selected Malaysian food manufacturing sub-industries for the period 2002-2007. Results and findings are presented, followed by concluding remarks in the final section.

II. LITERATURE REVIEW
Various techniques have been used in the study of productivity growth. These include the Divisia index model, the growth accounting approach, aggregate or frontier production function estimates, the stochastic varying coefficient approach and the non-parametric DEA.


An empirical analysis focusing on the convergence hypothesis and scale effects in explaining different productivity growth rates within the manufacturing sectors of Canada and the United States is given by Mullen and Williams [7]. Fare et al. [2] analyze productivity growth in 16 of Taiwan's manufacturing industries during the period 1978-1992, utilizing the DEA approach to compute the Malmquist TFP index. Mahadevan and Kim [4] examine the sources of output growth of four selected South Korean manufacturing industries from 1980 to 1994 using firm-level data within each industry. On the Malaysian scene, Zulaifah and Maisom [8] utilized the Dollar and Sokoloff model and an econometric approach to determine the intensity of use of factors of production in manufacturing industries. They obtained an estimated growth rate of 4.32 percent per annum for all industries for the period 1985-1998, with light and medium industries exhibiting the highest TFP growth rate of 17.3 percent per annum, followed by heavy industries at 11.7 percent per annum and resource-based industries at 6.4 percent per annum. However, no analysis was conducted on the components or sources of TFP growth. A recent study by Ismail [3] found that technical efficiency change in the food-based industry for the period 1985-2003 was low, with a negative overall mean of -0.976 percent per annum. Only twenty percent of the twenty-five sub-industries investigated experienced positive technical efficiency change. However, all the sub-industries experienced positive technical or technological change, with a mean value of 1.036, i.e. a 3.6 percent change. The TFP index exhibits positive growth with an overall mean of 1.011 (ranging from 0.949 to 1.073).

In this study we utilize the concept of the DEA Malmquist productivity index to decompose and analyze the TFP change into technical efficiency change and technology (frontier) shift. The technical efficiency change is further decomposed into pure technical efficiency change and scale efficiency change. The model is applied to thirty-two selected Malaysian food manufacturing sub-industries for the period 2002-2007.

III. DEA MALMQUIST INDEX
Fare et al. [1] construct the DEA-based Malmquist productivity index as the geometric mean of two Malmquist productivity indexes

which are defined by distance functions relative to two different time periods.

Definition 1. Consider $K$ decision making units (DMUs), each utilizing inputs $X_k^{(t)} \in \mathbb{R}_+^N$ to produce outputs $Y_k^{(t)} \in \mathbb{R}_+^M$, $k = 1, 2, \ldots, K$, at time $t$. The output distance function for DMU-$k$ with respect to two different time periods, $D_k^{(t)}\big(X_k^{(t+1)}, Y_k^{(t+1)}\big)$, under the assumption of constant returns to scale (CRS), is defined by the output-oriented DEA linear programming problem

$$\Big[ D_k^{(t)}\big(X_k^{(t+1)}, Y_k^{(t+1)}\big) \Big]^{-1} = \max \phi_k \qquad (1)$$

subject to

$$\sum_{j=1}^{K} \lambda_j X_{nj}^{(t)} \le X_{nk}^{(t+1)}, \qquad n = 1, 2, \ldots, N, \qquad (2)$$

$$\sum_{j=1}^{K} \lambda_j Y_{mj}^{(t)} \ge \phi_k Y_{mk}^{(t+1)}, \qquad m = 1, 2, \ldots, M, \qquad (3)$$

$$\lambda_j \ge 0, \qquad j = 1, 2, \ldots, K. \qquad (4)$$
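As an illustration (not part of the original paper), the output-oriented CRS problem (1)-(4) is an ordinary linear program and can be handed to any LP solver. The sketch below uses SciPy's `linprog` on made-up toy data; the function name `output_distance_crs` and the example values are our own, not the authors'.

```python
import numpy as np
from scipy.optimize import linprog

def output_distance_crs(X, Y, k, Xk=None, Yk=None):
    """Inverse output distance function for DMU k (CRS, output-oriented).

    X: (N, K) inputs and Y: (M, K) outputs spanning the reference frontier.
    Xk, Yk: the evaluated input/output vectors (default: DMU k's own data);
    passing another period's data gives the mixed-period distance in (1)-(4).
    Returns phi = 1/D; phi >= 1 means output could be expanded by factor phi.
    """
    N, K = X.shape
    M = Y.shape[0]
    Xk = X[:, k] if Xk is None else Xk
    Yk = Y[:, k] if Yk is None else Yk
    # decision vector: [phi, lambda_1, ..., lambda_K]; maximize phi
    c = np.r_[-1.0, np.zeros(K)]
    # inputs:  sum_j lambda_j X_nj <= X_nk            -- constraint (2)
    A_in = np.hstack([np.zeros((N, 1)), X])
    # outputs: phi*Y_mk - sum_j lambda_j Y_mj <= 0    -- constraint (3)
    A_out = np.hstack([Yk.reshape(M, 1), -Y])
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[Xk, np.zeros(M)],
                  bounds=[(0, None)] * (K + 1))
    return res.x[0]

# toy example: 3 DMUs, 1 input, 1 output; DMU 1 has the best output/input ratio
X = np.array([[2.0, 4.0, 6.0]])
Y = np.array([[2.0, 5.0, 4.0]])
print([round(output_distance_crs(X, Y, k), 3) for k in range(3)])  # → [1.25, 1.0, 1.875]
```

With one input and one output under CRS, the frontier is the best output/input ratio (here DMU 1's 1.25), so each DMU's maximal output expansion is that ratio times its input, divided by its output.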

A similar definition applies to $D_k^{(t)}\big(X_k^{(t)}, Y_k^{(t)}\big)$, $D_k^{(t+1)}\big(X_k^{(t)}, Y_k^{(t)}\big)$ and $D_k^{(t+1)}\big(X_k^{(t+1)}, Y_k^{(t+1)}\big)$.

Definition 2. The Malmquist index of TFP for DMU-$k$, $M_k\big(X_k^{(t+1)}, Y_k^{(t+1)}, X_k^{(t)}, Y_k^{(t)}\big)$, can be specified as the geometric mean of the productivity changes between the two time periods:

$$M_k\big(X_k^{(t+1)}, Y_k^{(t+1)}, X_k^{(t)}, Y_k^{(t)}\big) = \left[ \frac{D_k^{(t)}\big(X_k^{(t+1)}, Y_k^{(t+1)}\big)}{D_k^{(t)}\big(X_k^{(t)}, Y_k^{(t)}\big)} \cdot \frac{D_k^{(t+1)}\big(X_k^{(t+1)}, Y_k^{(t+1)}\big)}{D_k^{(t+1)}\big(X_k^{(t)}, Y_k^{(t)}\big)} \right]^{1/2}. \qquad (5)$$

A value of Mk > 1 indicates positive TFP growth or gain, Mk < 1 indicates TFP decline or loss, and Mk = 1 implies stagnation or no change in TFP for DMU-k from time t to t+1. Equation (5) can also be equivalently written as


$$M_k(\cdot) = \frac{D_k^{(t+1)}\big(X_k^{(t+1)}, Y_k^{(t+1)}\big)}{D_k^{(t)}\big(X_k^{(t)}, Y_k^{(t)}\big)} \left[ \frac{D_k^{(t)}\big(X_k^{(t+1)}, Y_k^{(t+1)}\big)}{D_k^{(t+1)}\big(X_k^{(t+1)}, Y_k^{(t+1)}\big)} \cdot \frac{D_k^{(t)}\big(X_k^{(t)}, Y_k^{(t)}\big)}{D_k^{(t+1)}\big(X_k^{(t)}, Y_k^{(t)}\big)} \right]^{1/2}. \qquad (6)$$

The first component,

$$\mathrm{TEC}_k = \frac{D_k^{(t+1)}\big(X_k^{(t+1)}, Y_k^{(t+1)}\big)}{D_k^{(t)}\big(X_k^{(t)}, Y_k^{(t)}\big)}, \qquad (7)$$

measures the change in technical efficiency over the two periods, i.e. whether or not DMU-$k$ is getting closer to its efficiency frontier over time. This term is normally referred to as the catching-up effect, since it measures the degree of catching up to the best-practice frontier over time. The second component,

$$\mathrm{FS}_k = \left[ \frac{D_k^{(t)}\big(X_k^{(t+1)}, Y_k^{(t+1)}\big)}{D_k^{(t+1)}\big(X_k^{(t+1)}, Y_k^{(t+1)}\big)} \cdot \frac{D_k^{(t)}\big(X_k^{(t)}, Y_k^{(t)}\big)}{D_k^{(t+1)}\big(X_k^{(t)}, Y_k^{(t)}\big)} \right]^{1/2}, \qquad (8)$$

measures the technology change over the two periods, i.e. whether or not the frontier is shifting out over time. It can be viewed as a geometric mean of the change or shift in the frontier of technology (innovation) experienced by DMU-$k$ from time $t$ to $t+1$. Hence the Malmquist TFP index is simply the product of technical efficiency change and technological change; that is,

TFP growth = (TEC) · (FS). (9)

Fare et al. [1] further propose an enhanced decomposition of TEC (measured relative to the CRS frontier) into a pure technical efficiency change component, PTEC (measured relative to a variable returns-to-scale, VRS, frontier), and a residual scale efficiency change component, SEC, which captures changes in the deviation between the VRS and CRS technologies. That is,

TEC = (PTEC) · (SEC). (10)

The complete decomposition for DMU-$k$ thus becomes

$$M_k(\cdot) = (\mathrm{TFP\ growth})_k = (\mathrm{TEC})_k \cdot (\mathrm{FS})_k = (\mathrm{PTEC})_k \cdot (\mathrm{SEC})_k \cdot (\mathrm{FS})_k, \qquad k = 1, \ldots, K. \qquad (11)$$

For evaluation under the assumption of VRS, an additional convexity constraint,

$$\sum_{j=1}^{K} \lambda_j = 1,$$

is imposed when solving the linear programming problem (1)-(4).

IV. EMPIRICAL IMPLEMENTATION
Data source
The data used in the study are annual time-series data for 32 selected 5-digit Malaysian food manufacturing sub-industries for the period 2002-2007, compiled from the Annual Survey of Malaysian Manufacturing Industries, published by the Department of Statistics, Malaysia [5]. Table 1 lists these sub-industries and their 5-digit Malaysian Standard Industrial Classification (MSIC) codes. A single measure of output, value added deflated by the consumer price index for food, is used. Cost of inputs, total number of workers and total fixed assets constitute the three measures of input. The cost of inputs and total fixed assets were deflated by the producer price index for manufactured goods in the domestic economy. Both deflators, with 2000 as the base year, were obtained from the Economic Report published by the Ministry of Finance, Malaysia [6].
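To make the decomposition concrete, here is a hedged sketch (not the authors' code) that chains the CRS distance-function LPs into the Malmquist index (5) and its TEC and FS components (7)-(8). It uses SciPy, and the two-DMU, two-period data are invented toy values.

```python
import numpy as np
from scipy.optimize import linprog

def phi_crs(Xref, Yref, x_eval, y_eval):
    """1/D: maximal output expansion of (x_eval, y_eval) on the CRS frontier
    spanned by reference data Xref (N, K) and Yref (M, K); cf. problem (1)-(4)."""
    N, K = Xref.shape
    M = Yref.shape[0]
    c = np.r_[-1.0, np.zeros(K)]                              # maximize phi
    A = np.vstack([np.hstack([np.zeros((N, 1)), Xref]),       # inputs, (2)
                   np.hstack([y_eval.reshape(M, 1), -Yref])]) # outputs, (3)
    b = np.r_[x_eval, np.zeros(M)]
    return linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * (K + 1)).x[0]

def malmquist(Xt, Yt, Xt1, Yt1, k):
    """Malmquist TFP index and its TEC/FS decomposition, eqs. (5)-(9), for DMU k.
    D = 1/phi, so ratios of distance functions are inverse ratios of phi."""
    D_t_t   = 1.0 / phi_crs(Xt,  Yt,  Xt[:, k],  Yt[:, k])    # D^t(x^t, y^t)
    D_t_t1  = 1.0 / phi_crs(Xt,  Yt,  Xt1[:, k], Yt1[:, k])   # D^t(x^{t+1}, y^{t+1})
    D_t1_t  = 1.0 / phi_crs(Xt1, Yt1, Xt[:, k],  Yt[:, k])    # D^{t+1}(x^t, y^t)
    D_t1_t1 = 1.0 / phi_crs(Xt1, Yt1, Xt1[:, k], Yt1[:, k])   # D^{t+1}(x^{t+1}, y^{t+1})
    tec = D_t1_t1 / D_t_t                                     # catching up, (7)
    fs = np.sqrt((D_t_t1 / D_t1_t1) * (D_t_t / D_t1_t))       # frontier shift, (8)
    return tec * fs, tec, fs                                  # M_k = TEC * FS, (9)

# toy data: 2 DMUs, 1 input, 1 output; the frontier shifts out between periods
Xt,  Yt  = np.array([[1.0, 2.0]]), np.array([[1.0, 1.5]])
Xt1, Yt1 = np.array([[1.0, 2.0]]), np.array([[1.2, 1.8]])
M, tec, fs = malmquist(Xt, Yt, Xt1, Yt1, 0)
print(round(M, 3), round(tec, 3), round(fs, 3))
```

In this toy case DMU 0 stays on the frontier while the frontier itself shifts out, so TEC is 1 and all TFP growth shows up in FS, mirroring the paper's finding that growth can be entirely innovation-driven.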

Table 1. The Malaysian food manufacturing sub-industries

5-Digit MSIC | Sub-industry
15111 | Production of poultry and poultry products
15119 | Production of meat and meat products
15120 | Production of fish and fish products
15131 | Pineapple canning
15139 | Canning of other fruits and vegetables
15141 | Manufacture of coconut oil
15142 | Manufacture of crude palm oil
15143 | Manufacture of refined palm oil
15144 | Manufacture of palm kernel oil
15149 | Manufacture of other oils and fats
15201 | Manufacture of ice cream
15202 | Manufacture of milk
15311 | Rice milling
15312 | Flour milling
15319 | Manufacture of other mill products
15322 | Manufacture of glucose and maltose
15323 | Manufacture of sago and tapioca products
15330 | Manufacture of animal feeds
15411 | Manufacture of biscuits and cookies
15412 | Manufacture of bakery products
15420 | Manufacture of sugar
15431 | Manufacture of cocoa products
15432 | Manufacture of chocolate and sugar products
15440 | Manufacture of macaroni and similar products
15491 | Manufacture of ice (excluding dry ice)
15492 | Manufacture of coffee
15493 | Manufacture of tea
15494 | Manufacture of spices and curry powder
15495 | Manufacture of nut and nut products
15496 | Manufacture of sauces
15497 | Manufacture of snack (cracker/chips)
15499 | Manufacture of other food products

Next, we solve the DEA output-oriented model under the assumptions of CRS and VRS for each year. Results for the mean efficiency scores and returns to scale are summarized in Table 2.

Technical and scale efficiency
Out of the 32 sub-industries, only one (15491 Manufacture of ice) obtains a scale efficiency score of 100 percent, implying that it is technically efficient in all years under evaluation and is operating on the frontier at the most productive scale size (mpss). Two other sub-industries (15420 Manufacture of sugar and 15496 Manufacture of sauces) are close to mpss; in fact, both achieved a scale efficiency score of 100 percent in five of the six years under consideration. The results also imply that, in general, more than 90 percent of the sub-industries were operating inefficiently and need to increase their output (or reduce their inputs) to become efficient. The average PTE score during 2002-2007 was 80.93 percent, suggesting that if these sub-industries were operating efficiently, they could have produced 19.07 percent more output. Nevertheless, more than sixty percent of the sub-industries were more than 90.0 percent scale efficient.

Returns to scale
Apart from inefficiency in the conversion process, the inefficiency of the inefficient units could also be attributed to their scale of operations. DMUs that do not operate at the most efficient (or productive) scale size cannot be fully efficient. The inefficiency may arise because a DMU is operating under decreasing returns to scale (drs) or increasing returns to scale (irs). Whether a DMU is operating under irs or drs can be determined by observing its TE and PTE scores, such that
• if TE = PTE, CRS prevails;
• if TE ≠ PTE, then $\sum_{j=1}^{K}\lambda_j < 1$ implies irs and $\sum_{j=1}^{K}\lambda_j > 1$ implies drs.
The last column in Table 2 records the returns to scale based on the most frequent occurrence observed during the years under consideration.

Table 2. Mean efficiency scores, 2002-2007

5-Digit MSIC | Technical efficiency | Pure technical efficiency | Scale efficiency | Returns to scale
15111 | 0.5259 | 0.5823 | 0.9093 | drs
15119 | 0.5299 | 0.5868 | 0.9242 | drs
15120 | 0.6235 | 0.7749 | 0.8071 | drs
15131 | 0.5327 | 0.6088 | 0.8853 | irs
15139 | 0.7509 | 0.7706 | 0.9723 | irs
15141 | 0.5172 | 0.9947 | 0.5196 | irs
15142 | 0.5520 | 1.0000 | 0.5520 | drs
15143 | 0.8482 | 0.9750 | 0.8671 | drs
15144 | 0.7438 | 0.7615 | 0.9764 | irs
15149 | 0.5668 | 0.6029 | 0.9296 | drs
15201 | 0.7973 | 0.8200 | 0.9734 | irs
15202 | 0.8931 | 1.0000 | 0.8931 | drs
15311 | 0.3780 | 0.3978 | 0.9595 | drs
15312 | 0.8299 | 0.8588 | 0.9607 | drs
15319 | 0.7638 | 0.8517 | 0.9035 | irs
15322 | 0.6287 | 1.0000 | 0.6287 | irs
15323 | 0.6081 | 0.7035 | 0.8841 | irs
15330 | 0.6026 | 0.7194 | 0.8360 | drs
15411 | 0.6213 | 0.7376 | 0.8454 | drs
15412 | 0.7194 | 0.9836 | 0.7326 | drs
15420 | 0.9957 | 0.9986 | 0.9970 | mpss
15431 | 0.6182 | 0.6525 | 0.9482 | irs
15432 | 0.8817 | 0.9260 | 0.9503 | drs
15440 | 0.6864 | 0.7458 | 0.9176 | drs
15491 | 1.0000 | 1.0000 | 1.0000 | mpss
15492 | 0.8174 | 0.8364 | 0.9797 | drs
15493 | 0.5854 | 0.7295 | 0.7855 | irs
15494 | 0.6568 | 0.6701 | 0.9800 | drs
15495 | 0.8205 | 0.8930 | 0.9142 | irs
15496 | 0.9415 | 0.9545 | 0.9837 | mpss
15497 | 0.7156 | 0.7609 | 0.9379 | drs
15499 | 0.9463 | 1.0000 | 0.9463 | drs
Average | 0.7093 | 0.8093 | 0.8844 |
Std. dev | 0.1577 | 0.1602 | 0.1223 |
Maximum | 1.0000 | 1.0000 | 1.0000 |
Minimum | 0.3780 | 0.3978 | 0.5196 |

Note: drs and irs refer to decreasing and increasing returns to scale respectively.
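The TE/PTE decision rule above is easy to mechanize. A minimal sketch follows; the function name, tolerance and example scores are our own illustrations, not taken from the paper.

```python
def returns_to_scale(te, pte, lam_sum, tol=1e-6):
    """Classify a DMU's returns to scale from its CRS efficiency score (TE),
    its VRS score (PTE), and the sum of intensity weights (sum of lambda_j)
    in the optimal CRS solution."""
    if abs(te - pte) < tol:
        return "crs"   # scale efficient: operating at the most productive scale size
    # scores differ: the intensity sum decides the direction of scale inefficiency
    return "irs" if lam_sum < 1 else "drs"

# illustrative calls: scale-efficient, above-optimal-scale, below-optimal-scale
print(returns_to_scale(0.85, 0.85, 1.0))  # → crs
print(returns_to_scale(0.38, 0.40, 1.4))  # → drs
print(returns_to_scale(0.52, 0.99, 0.6))  # → irs
```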


As mentioned earlier, three (9.375%) sub-industries appeared to be operating at their mpss. Eighteen (56.25%) of the sub-industries exhibited drs; these should scale down their operations if they are to operate on the frontier. The remaining eleven (34.375%) exhibited irs; these should expand their scale of operations in order to become scale efficient. The average scale efficiency score in the sample for the period 2002-2007 was 88.48 percent, ranging from a minimum of 51.96 percent to a maximum of 100 percent.

Malmquist productivity change
Table 3 presents a summary of the annual geometric means of the Malmquist productivity index and its components. As can be observed, on average, the TFP for the food manufacturing sub-industries showed a small increase of 0.30 percent per annum, ranging from -10.8 percent to 9.3 percent. The technological (frontier) shift recorded a positive change for all sub-industries, implying that growth was largely attributable to innovation. Technological change (frontier shift or innovation) improved by 3.4 percent (ranging from 0.1% to 10.3%) per annum, while technical efficiency regressed by 3.0 percent (ranging from -15.3% to 5.4%) per annum. Hence catching up is a problem facing most sub-industries. Only six (18.75%) of the sub-industries showed improvement in all components: two which exhibited mpss, three drs and one irs.

Table 3. Mean Malmquist productivity index change, 2002-2007

5-Digit MSIC | Mk(.) (TFP growth) | FSk | TECk | PTECk | SECk
15111 | 1.015 | 1.009 | 1.006 | 0.986 | 1.021
15119 | 0.982 | 1.044 | 0.941 | 0.887 | 1.061
15120 | 0.980 | 1.024 | 0.957 | 0.944 | 1.014
15131 | 0.892 | 1.053 | 0.847 | 0.788 | 1.075
15139 | 1.029 | 1.043 | 0.987 | 0.973 | 1.014
15141 | 1.049 | 1.071 | 0.979 | 1.000 | 0.979
15142 | 1.027 | 1.016 | 1.011 | 1.000 | 1.011
15143 | 1.019 | 1.033 | 0.988 | 1.033 | 0.956
15144 | 0.984 | 1.041 | 0.946 | 0.959 | 0.986
15149 | 0.939 | 1.015 | 0.925 | 0.966 | 0.958
15201 | 1.091 | 1.097 | 0.995 | 1.004 | 0.991
15202 | 0.991 | 1.038 | 0.955 | 1.000 | 0.955
15311 | 1.013 | 1.036 | 0.977 | 0.963 | 1.015
15312 | 0.974 | 1.005 | 0.969 | 0.972 | 0.997
15319 | 0.999 | 1.010 | 0.989 | 0.997 | 0.992
15322 | 0.975 | 1.075 | 0.907 | 1.000 | 0.907
15323 | 1.086 | 1.103 | 0.985 | 0.915 | 1.076
15330 | 1.075 | 1.038 | 1.036 | 1.031 | 1.005
15411 | 1.009 | 1.012 | 0.996 | 1.024 | 0.973
15412 | 0.989 | 1.011 | 0.979 | 1.011 | 0.968
15420 | 1.046 | 1.041 | 1.005 | 1.002 | 1.003
15431 | 1.069 | 1.079 | 0.991 | 0.985 | 1.006
15432 | 0.976 | 1.054 | 0.926 | 0.946 | 0.979
15440 | 0.940 | 1.001 | 0.939 | 0.975 | 0.963
15491 | 1.041 | 1.041 | 1.000 | 1.000 | 1.000
15492 | 1.040 | 1.007 | 1.033 | 1.009 | 1.024
15493 | 1.093 | 1.037 | 1.054 | 1.003 | 1.051
15494 | 0.992 | 1.008 | 0.984 | 0.997 | 0.987
15495 | 0.932 | 1.022 | 0.912 | 0.935 | 0.975
15496 | 0.946 | 1.013 | 0.934 | 0.952 | 0.981
15497 | 0.923 | 1.016 | 0.909 | 0.934 | 0.973
15499 | 0.976 | 1.009 | 0.967 | 1.000 | 0.967
Average | 1.003 | 1.034 | 0.970 | 0.975 | 0.996
Std. dev | 0.051 | 0.027 | 0.043 | 0.048 | 0.036
Maximum | 1.093 | 1.103 | 1.054 | 1.033 | 1.076
Minimum | 0.892 | 1.001 | 0.847 | 0.788 | 0.907

Note: All Malmquist index averages are geometric means.

Fifteen (46.87%) of the sub-industries showed positive TFP growth, while the other seventeen (53.13%) recorded negative growth. The highest TFP growth comes from sub-industry 15493 Manufacture of tea (9.3 percent per annum), while the lowest is from sub-industry 15131 Pineapple canning (-10.8 percent per annum).

Technological change (frontier shift)
All sub-industries, on average, experienced technological progress, since the FSk index attains a value greater than one for all k = 1,...,32. The average score was 1.034, indicating 3.4 percent technological progress per annum. The highest technological progress of 10.3 percent per annum was achieved by sub-industry 15323 Manufacture of sago and tapioca products, while the lowest innovative improvement of 0.1 percent per annum was recorded by sub-industry 15440 Manufacture of macaroni and similar products.

Technical efficiency change (catching-up effect)
Only seven (21.9%) sub-industries showed improvement in technical efficiency, with sub-industry 15493 Manufacture of tea attaining the highest score of 1.054 (a catching-up rate of 5.4% per annum). Twenty-five (78.1%) sub-industries appeared to be lagging behind, with sub-industry 15131 Pineapple canning recording the lowest score of 0.847 (a decline of -15.3% per annum). On average, the group regressed at -3.0% per annum, indicating that technical efficiency is not improving in line with technological progress; in other words, the gap to the efficient frontier is widening.
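The Table 3 note stresses that Malmquist averages are geometric, not arithmetic, means: the constant annual factor that yields the same cumulative change over the period. A quick sketch, with illustrative index values that are made up rather than taken from the table:

```python
import math

def geometric_mean(indexes):
    """Geometric mean of annual index numbers: the constant annual factor
    that reproduces the same cumulative change over the whole period."""
    return math.prod(indexes) ** (1.0 / len(indexes))

# illustrative annual TFP indexes over five year-to-year changes
annual = [1.02, 0.97, 1.05, 0.99, 1.01]
print(round(geometric_mean(annual), 4))
```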


Pure technical efficiency change
As mentioned earlier, TEC is the product of PTEC and SEC. Eight (25%) of the sub-industries indicated an increase in pure technical efficiency, with sub-industry 15143 Manufacture of refined palm oil taking the lead with an improvement of 3.3 percent per annum. Eighteen (56.25%) indicated a decrease, with sub-industry 15131 Pineapple canning recording the lowest score, a negative growth of -21.2 percent per annum. The remaining six (18.75%) sub-industries showed no change during the period under consideration, as indicated by their PTEC score of unity.

Scale efficiency change
The SEC of fourteen (43.75%) of the sub-industries contributed positively to the productivity change, since their scores exceed one. Sub-industry 15323 Manufacture of sago and tapioca products recorded the highest score of 1.076 (a change of 7.6 percent per annum), while sub-industry 15322 Manufacture of glucose and maltose recorded the lowest score of 0.907 (a change of -9.3 percent per annum). The average score for the group is 0.996 (a small decrease of 0.4 percent per annum).

Observations
From the above discussion, we can highlight a few observations.
• The Malmquist TFP index for the Malaysian food manufacturing industry indicated only a small increase of 0.3 percent per annum.
• The TFP growth is largely due to innovation (a positive shift in the frontier) rather than technical efficiency change (the catching-up effect).
• The decrease in TEC is attributable to decreases in both PTEC and SEC.
• Sub-industry 15493 Manufacture of tea achieved the highest TFP growth, with all components indicating positive changes.
• Sub-industry 15131 Pineapple canning recorded the lowest TFP growth with the lowest TEC score, despite an encouraging improvement in the FS score (above the group average). The low TEC was due to the lowest PTEC, despite a relatively high SEC.

V. CONCLUSIONS
In this study, we have estimated the Malmquist TFP index and its decomposition using output-oriented DEA distance functions for 32 Malaysian food manufacturing sub-industries for the period 2002-2007. The findings indicate that TFP grew at a slow rate of only 0.3 percent per annum, despite an encouraging frontier shift or innovative improvement of 3.4 percent per annum. This is due to a decline in the catching-up effect, or TEC, of -3.0 percent per annum, which is further attributable to decreases in both PTEC and SEC. Only three sub-industries were found to be operating efficiently (exhibiting mpss), while twenty-nine exhibit variable returns to scale, indicating the need for operational adjustments. The findings suggest that eighteen of these sub-industries should scale down, and eleven should expand, their scale of operations if they are to operate on the efficient frontier. The study is not without limitations. DEA is non-stochastic and does not capture random noise, and may therefore have overestimated the magnitude of inefficiencies. The data utilized in the study are aggregated sub-industry data, not firm-level data, because firm-level data are not easily accessible. The study also assumes that all sub-industries under evaluation are fairly homogeneous, utilizing a similar set of inputs to produce identical outputs; strictly, this holds only when evaluating a group of firms with similar business activities, such as banking or financial institutions, or hospitals. Lastly, the study provides avenues for further exploration: the methodology can be revised, expanded and applied to other public and private organizations.

REFERENCES
[1]. Fare, R, Grosskopf, S, Norris, M, and Zhang, Z (1994). "Productivity growth, technical progress and efficiency change in industrialized countries," American Economic Review, Vol 84, No 1, pp 66-83.
[2]. Fare, R, Grosskopf, S, and Lee, W (2001). "Productivity and technical change: the case of Taiwan," Applied Economics, Vol 33, pp 1911-1925.
[3]. Ismail, R (2009). "Technical efficiency, technical change and demand for skills in Malaysian food-based industry," European Journal of Social Sciences, Vol 9, No 3, pp 504-515.
[4]. Mahadevan, R, and Kim, S (2003). "Is output growth of Korean manufacturing firms productivity driven?," Journal of Asian Economics, Vol 14, pp 669-678.
[5]. Malaysia (various years). Annual Survey of Manufacturing Industries, Department of Statistics, Malaysia.


[6]. Malaysia (various years). Economic Report, Ministry of Finance, Malaysia.
[7]. Mullen, JK, and Williams, M (1994). "Convergence, scale and relative production performance of Canadian-US manufacturing industries," Applied Economics, Vol 26, pp 739-750.
[8]. Zulaifah, O, and Maisom, A (2001). "Pattern of total factor productivity (TFP) growth in Malaysian manufacturing industries, 1985-1995," in Yew, TS and Alias, R (eds.), Selected Readings on Economic Analysis of Industries and Natural Resources, Universiti Putra Malaysia Press, pp 72-105.


Numerical study of natural convection in a porous cavity with transverse magnetic field and non-uniform internal heating

Habibis Saleh(1), Ishak Hashim(2) and Rozaini Roslan(3) (1,2)Modelling & Data Analysis Research Centre, School of Mathematical Sciences

Universiti Kebangsaan Malaysia, 43600 Bangi, Selangor, Malaysia (Email: [email protected] (1))

(3)Centre for Research in Applied Mathematics, Faculty of Science, Arts & Heritage Universiti Tun Hussein Onn Malaysia, 86400 Parit Raja, Johor, Malaysia

ABSTRACT
The effect of a transverse magnetic field on the steady convective flow within a square region filled with a fluid-saturated porous medium, having non-uniform internal heat generation at a rate proportional to a power of the temperature difference, is studied numerically. The vertical side walls are maintained isothermally at different temperatures, while the top and bottom horizontal walls are kept adiabatic. The porous medium is modeled according to Darcy's law. The governing differential equations are nondimensionalized and solved by the finite difference method. It is found that, with the application of an external magnetic field, the flow field, temperature field and heat transfer rate are significantly modified.

KEYWORDS
Natural convection; Darcy's law; Magnetic field; Non-uniform heat generation; Finite difference method.

I. INTRODUCTION
Convective flows in porous media have occupied center stage in many fundamental heat transfer analyses and have received considerable attention over the last few decades. This interest is due to their wide range of applications, for example high-performance insulation for buildings, chemical catalytic reactors, packed sphere beds, grain storage, and such geophysical problems as frost heave. Porous media are also of interest in relation to the underground spread of pollutants, solar power collectors, and geothermal energy systems. Convective flows can develop within these materials if they are subjected to some form of temperature gradient. In some situations of considerable practical importance, the porous material provides its own source of heat, giving an alternative way in which a convective flow can be set up through local heat generation within the porous material. Such a situation can arise through radioactive decay or, in the present context, through a relatively weak exothermic reaction within the porous material. This can happen, for example, in the self-induced heating of coal stockpiles and bagasse piles. Convective flows are governed by nonlinear partial differential equations representing conservation laws for mass, momentum and energy; these equations are usually solved numerically or, where possible, by analytic methods. Natural convection in a porous enclosure with uniform internal heating was first investigated numerically in Ref. [1], and then solved analytically in Ref. [2]. Recently, Ref. [3] moved from the study of uniform internal heating to that of non-uniform internal heating, solving the problem both numerically and analytically. Natural convection in the presence of a magnetic field in an enclosure filled with a viscous, incompressible fluid has been studied extensively, for example in Refs. [4] and [5]. However, there are very few studies on natural convection of a conducting fluid saturating a porous medium in the presence of a magnetic field in an enclosure. To the best of our knowledge, the first investigation of this problem is due to Ref. [6], who studied the effect of an electromagnetic field in the horizontal direction in a rectangular porous cavity when the side walls are asymmetrically heated. Ref. [7] extended this to an inclined magnetic field with symmetrically heated side walls. In reality, convective flow in a porous medium driven by asymmetric heating with an internal heat source is a more appropriate model for practical problems. Therefore, the present paper investigates the effect of an


inclined magnetic field on steady natural convection in a square cavity filled with a porous medium saturated with an electrically conducting fluid having internal heat generation. The rate of internal heat generation is assumed proportional to a power of the temperature difference.

II. MATHEMATICAL FORMULATION
We consider the steady, two-dimensional natural convection flow in a square region filled with an electrically conducting fluid-saturated porous medium; see Figure 1, which also depicts the coordinate system employed. The top and bottom surfaces of the convective region are assumed to be thermally insulated, and the side surfaces are heated and cooled at constant temperatures $T_h$ and $T_c$, respectively. Heat is assumed to be generated internally within the porous medium at a rate proportional to $(T - T_c)^p$, $(p \ge 1)$. This relation, as explained by Ref. [3], is an approximation of the state of some exothermic process. A uniform and constant magnetic field $\vec{B}$ is applied normal to the direction of gravity. Viscous, radiation and Joule heating effects are neglected. The resulting convective flow is governed by the combined mechanism of the driving buoyancy force and the retarding effect of the magnetic field. The magnetic Reynolds number is assumed to be small, so that the induced magnetic field can be neglected in favor of the applied magnetic field. Under the above assumptions, the conservation equations for mass, momentum (under the Darcy approximation), energy and electric charge transfer are given by:

$$\nabla \cdot \vec{V} = 0, \qquad (1)$$

$$\vec{V} = \frac{K}{\mu}\left(-\nabla P + \rho \vec{g} + \vec{I} \times \vec{B}\right), \qquad (2)$$

$$(\vec{V} \cdot \nabla) T = \alpha_m \nabla^2 T + \frac{q_0'''}{\rho c_p}\,(T - T_c)^p, \qquad (3)$$

$$\nabla \cdot \vec{I} = 0, \qquad (4)$$

$$\vec{I} = \sigma\left(-\nabla\phi + \vec{V} \times \vec{B}\right), \qquad (5)$$

$$\rho = \rho_0\left[1 - \beta(T - T_c)\right], \qquad (6)$$

Figure 1. Schematic representation of the model

where $\vec{V} = (u, v)$ is the fluid velocity vector, $T$ is the fluid temperature, $P$ is the pressure, $\vec{B}$ is the external magnetic field, $\vec{I}$ is the electric current, $\phi$ is the electric potential, $\vec{g}$ is the gravitational acceleration vector, $K$ is the permeability of the porous medium, $\alpha_m$ is the effective thermal diffusivity, $\rho$ is the density, $\mu$ is the dynamic viscosity, $\beta$ is the coefficient of thermal expansion, $c_p$ is the specific heat at constant pressure, $\sigma$ is the electrical conductivity, $\rho_0$ is the reference density and $-\nabla\phi$ is the associated electric field. Eqs. (4) and (5) reduce to $\nabla^2\phi = 0$, whose unique solution is $\nabla\phi = 0$ since there is always an electrically insulating boundary around the enclosure; thus the electric field vanishes everywhere [4]. Furthermore, eliminating the pressure term in Eq. (2) in the usual way, the governing Eqs. (1)-(6) can be written as


\frac{\partial u}{\partial x} + \frac{\partial v}{\partial y} = 0, \qquad (7)

\frac{\partial u}{\partial y} - \frac{\partial v}{\partial x} = -\frac{g K \beta}{\nu}\frac{\partial T}{\partial x} + \frac{\sigma K B_0^2}{\mu}\frac{\partial v}{\partial x}, \qquad (8)

u\frac{\partial T}{\partial x} + v\frac{\partial T}{\partial y} = \alpha_m\left(\frac{\partial^2 T}{\partial x^2} + \frac{\partial^2 T}{\partial y^2}\right) + \frac{q_0'''}{(\rho c)_p}\,(T - T_c)^p, \qquad (9)

where $B_0$ is the magnitude of $\mathbf{B}$ and $\nu$ is the kinematic viscosity of the fluid. The following are the boundary conditions:

\text{on } x = L:\ u = 0,\ T = T_c; \quad \text{on } x = 0:\ u = 0,\ T = T_h; \quad \text{on } y = 0, L:\ v = 0,\ \partial T/\partial y = 0. \qquad (10)

Now we introduce the following non-dimensional variables:

X = \frac{x}{L}, \quad Y = \frac{y}{L}, \quad U = \frac{uL}{\alpha_m}, \quad V = \frac{vL}{\alpha_m}, \quad \theta = \frac{T - T_c}{\Delta T}, \qquad (11)

where $\Delta T = T_h - T_c$. Introducing the stream function $\psi$ defined by $U = \partial\psi/\partial Y$ and $V = -\partial\psi/\partial X$, and using expressions (11) in Eqs. (7)-(9), we obtain the following partial differential equations in non-dimensional form:

\left(1 + Ha^2\right)\frac{\partial^2 \psi}{\partial X^2} + \frac{\partial^2 \psi}{\partial Y^2} = -Ra\,\frac{\partial \theta}{\partial X}, \qquad (12)

\frac{\partial \psi}{\partial Y}\frac{\partial \theta}{\partial X} - \frac{\partial \psi}{\partial X}\frac{\partial \theta}{\partial Y} = \frac{\partial^2 \theta}{\partial X^2} + \frac{\partial^2 \theta}{\partial Y^2} + G\,\theta^p, \qquad (13)

subject to the boundary conditions

\text{on } X = 1:\ \psi = 0,\ \theta = 0; \quad \text{on } X = 0:\ \psi = 0,\ \theta = 1; \quad \text{on } Y = 0, 1:\ \psi = 0,\ \partial \theta/\partial Y = 0, \qquad (14)

where $Ra = g K \beta L \Delta T / (\nu \alpha_m)$ is the Rayleigh number, $G = q_0''' L^2 (\Delta T)^{p-1} / (\alpha_m (\rho c)_p)$ is the internal heat generation parameter and $Ha^2 = \sigma K B_0^2 / \mu$ defines the Hartmann number $Ha$ for the porous medium. Once we know the temperature, we can obtain the rate of heat transfer from each of the vertical walls, given in terms of the mean Nusselt number as

Nu_h = -\int_0^1 \left.\frac{\partial \theta}{\partial X}\right|_{X=0} \mathrm{d}Y \qquad (15)

at the hot wall and

Nu_c = -\int_0^1 \left.\frac{\partial \theta}{\partial X}\right|_{X=1} \mathrm{d}Y \qquad (16)

at the cold wall.

III. FINITE DIFFERENCE METHOD

We employed the finite difference method to solve Eqs. (12) and (13) subject to (14). Central differences were used to discretize the equations, and the resulting algebraic equations were solved by Gauss-Seidel iteration with relaxation. The unknowns $\psi$ and $\theta$ are iterated until the following convergence criterion is fulfilled:

\frac{\max_{i,j}\left|\chi_{i,j}^{n+1} - \chi_{i,j}^{n}\right|}{\max_{i,j}\left|\chi_{i,j}^{n+1}\right|} \le \varepsilon, \qquad (17)

where $\chi$ is either $\psi$ or $\theta$, $n$ represents the iteration number and $\varepsilon$ is the convergence criterion, set in this study to $10^{-6}$. The mean Nusselt number for each vertical wall was calculated using the trapezoidal rule.

IV. RESULT AND DISCUSSION

Figure 2 shows the streamlines and isotherms for the strong heating case (G=5) with Ra=300, p=1 and a range of values of Ha. In


the absence of a magnetic field, the fluid motion shown in Figure 2(a) is described as follows. Since the temperature of the left wall is higher than that of the fluid inside the enclosure, the wall transmits heat to the fluid and raises the temperature of fluid particles adjoining the left wall. As its temperature rises, the fluid moves from the left (hot) wall to the right (cold) wall, falling along the cold wall and rising again at the hot wall. This movement creates a clockwise vortex cell inside the enclosure, and the isotherms start either from the hot wall or from the bottom wall and end at the top wall or the cold wall. When the magnetic field is strengthened (Figure 2(b)), the central streamline cells become upright and the maximum temperature drifts towards the center of the top wall. The internal heat generation enhances the flow near the cold wall and forces the streamlines into a denser distribution. On the other hand, the negative buoyancy caused by G produces a vertically downward flow in the vicinity of the top corner of the hot wall. The maximum temperature increases above that on the heated wall because the hot fluid reaching the top left corner of the enclosure is unable to reject energy, since the velocities there are small. Finally, for large Ha (Figure 2(c)), the core vortex is elongated vertically and the isotherms are almost parallel to the vertical walls, implying that conduction is dominant. The variations of the mean Nusselt number along the hot wall and the cold wall with Ha for several values of G are shown in Figure 3. Naturally, the heat transfer is maximum in the absence of a magnetic field, because convection is then strongest. In general, the mean Nusselt number along the hot wall and the cold wall initially decreases steeply with Ha. As the value of Ha is made larger, the strength of the convective motion is progressively suppressed, and for G=0 the mean Nusselt number along both walls tends towards unity. For a fixed moderate value of Ha, increasing G has the effect of decreasing the mean Nusselt number along the hot wall and increasing the mean Nusselt number along the cold wall.
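The solution procedure of Section III (central differences for Eqs. (12)-(13), Gauss-Seidel sweeps with relaxation, the max-norm stopping test (17), and trapezoidal integration of the wall temperature gradient for the Nusselt number) can be sketched in Python. This is a minimal illustrative sketch, not the authors' code: the grid size, relaxation factor and iteration limit are assumed values.

```python
import numpy as np

def solve_cavity(Ra=300.0, Ha=0.0, G=5.0, p=1, N=41, relax=0.8,
                 tol=1e-6, max_iter=20000):
    """Gauss-Seidel solution of the non-dimensional system:
    (1 + Ha^2) psi_XX + psi_YY = -Ra * theta_X,
    psi_Y theta_X - psi_X theta_Y = theta_XX + theta_YY + G * theta^p,
    with theta = 1 on X = 0 (hot wall), theta = 0 on X = 1 (cold wall),
    insulated walls at Y = 0, 1 and psi = 0 on all boundaries."""
    h = 1.0 / (N - 1)
    psi = np.zeros((N, N))            # psi[i, j] ~ psi(X_i, Y_j)
    theta = np.zeros((N, N))
    theta[0, :] = 1.0                 # hot wall
    a = 1.0 + Ha ** 2
    for _ in range(max_iter):
        psi_old, theta_old = psi.copy(), theta.copy()
        # sweep for the stream function equation
        for i in range(1, N - 1):
            for j in range(1, N - 1):
                tx = (theta[i + 1, j] - theta[i - 1, j]) / (2 * h)
                new = (a * (psi[i + 1, j] + psi[i - 1, j])
                       + psi[i, j + 1] + psi[i, j - 1]
                       + h * h * Ra * tx) / (2 * a + 2)
                psi[i, j] += relax * (new - psi[i, j])
        # sweep for the energy equation
        for i in range(1, N - 1):
            for j in range(1, N - 1):
                px = (psi[i + 1, j] - psi[i - 1, j]) / (2 * h)
                py = (psi[i, j + 1] - psi[i, j - 1]) / (2 * h)
                tx = (theta[i + 1, j] - theta[i - 1, j]) / (2 * h)
                ty = (theta[i, j + 1] - theta[i, j - 1]) / (2 * h)
                lap = (theta[i + 1, j] + theta[i - 1, j]
                       + theta[i, j + 1] + theta[i, j - 1])
                src = G * theta[i, j] ** p
                new = (lap - h * h * (py * tx - px * ty - src)) / 4.0
                theta[i, j] += relax * (new - theta[i, j])
        theta[:, 0] = theta[:, 1]     # insulated walls: d(theta)/dY = 0
        theta[:, -1] = theta[:, -2]
        # max-norm convergence test applied to both unknowns
        err = max(np.abs(psi - psi_old).max() / max(np.abs(psi).max(), 1e-12),
                  np.abs(theta - theta_old).max() / np.abs(theta).max())
        if err < tol:
            break
    # mean Nusselt number at the hot wall: one-sided gradient at X = 0
    # integrated over Y with the trapezoidal rule
    grad = -(theta[1, :] - theta[0, :]) / h
    Nu_h = h * (grad[0] / 2 + grad[1:-1].sum() + grad[-1] / 2)
    return psi, theta, Nu_h
```

In the conduction limit (Ra = G = 0) the sketch recovers the linear temperature profile and a mean Nusselt number of unity, which is the limit the text reports for large Ha with G = 0.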

Figure 2. Contour plots of the stream function and temperature for Ra=300, G=5, p=1 and (a) Ha=0, (b) Ha=5, (c) Ha=20.


Figure 3. Plots of the mean Nusselt number along the hot wall (a) and along the cold wall (b) against Ha for the values of G labeled on the figure, with Ra=100, p=1.

V. CONCLUSIONS

The present numerical study exhibits many interesting features concerning the effect of an inclined magnetic field on natural convection in a square enclosure filled with a porous medium having a non-uniform internal heat source. The results of the numerical analysis lead to the following conclusions. In general, the effect of the magnetic field is to retard the flow circulation and to make the temperature distribution more uniform. The combination of a large applied magnetic field and a strong internal heat source was found to be most effective in suppressing the rate of heat transfer along the hot wall, whereas the combination of a large applied magnetic field and a weak internal heat source was most effective in suppressing the rate of heat transfer along the cold wall.

REFERENCES

[1]. Haajizadeh, M, Ozguc, AF, and Tien, CL (1984). "Natural convection in a vertical porous enclosure with internal heat generation," Int J Heat Mass Transfer, Vol 27, pp 1893-1902.

[2]. Joshi, MV, Gaitonde, UN, and Mitra, SK (2006). "Analytical study of natural convection in a cavity with volumetric heat generation," J Heat Transfer, Vol 128, pp 176-182.

[3]. Mealey, LR, and Merkin, JH (2009). "Steady finite Rayleigh number convective flows in a porous medium with internal heat generation," Int J Thermal Sci, Vol 48, pp 1068-1080.

[4]. Alchaar, S, Vasseur, P, and Bilgen, E (1995). "Natural convection heat transfer in a rectangular enclosure with a transverse magnetic field," J Heat Transfer, Vol 117, pp 668-673.

[5]. Al-Najem, NM, Khanafer, KM, and El-Refaee, MM (1998). "Numerical study of laminar natural convection in tilted enclosure with transverse magnetic field," Int J Numer Meth Heat Fluid Flow, Vol 8, pp 651-672.

[6]. Bian, W, Vasseur, P, Bilgen, E, and Meng, F (1996). "Effect of an electromagnetic field on natural convection in an inclined porous layer," Int J Heat Fluid Flow, Vol 17, pp 36-44.

[7]. Grosan, T, Revnic, C, Pop, I, and Ingham, DB (2009). "Magnetic field and internal heat generation effects on the free convection in a rectangular cavity filled with a porous medium," Int J Heat Mass Transfer, Vol 52, pp 1525-1533.


Proceeding of Conf. on Industrial and Appl. Math., Bandung-Indonesia 2010


THE DISTRIBUTION PATTERN AND ABUNDANCE OF ASTEROID AND ECHINOID AT RINGGUNG WATERS, SOUTH LAMPUNG

Arwinsyah Arka*, Agus Purwoko*, Oktavia

* Department of Biology, Faculty of Mathematics and Natural Sciences, Sriwijaya University. 0818561648

ABSTRACT

The research on the distribution pattern and abundance of asteroids and echinoids was carried out from 28 June to 11 July 2008 in the Ringgung waters, South Lampung. The aim of the research was to collect information about the distribution pattern and abundance of asteroids and echinoids in these waters. The research applied the square transect method at five stations, and the results were analyzed descriptively. The research showed that there were two asteroid species, Archaster typicus and Culcita novaeguineae, and two echinoid species, Diadema setosum and Laganum sp.; their distribution pattern is clumped and their abundance is about 0.2-5.7 ind/m². The factors that influenced the distribution pattern were differences in habitat and the living habits of each species.

Key words: distribution pattern, abundance, Asteroidea and Echinoidea, Ringgung waters

INTRODUCTION

Echinodermata consists of five classes: Asteroidea, Echinoidea, Holothuroidea, Ophiuroidea and Crinoidea. Sea urchin gonads can be consumed; people in China, Hong Kong, Korea, Japan and America farm sea urchins, while sea stars have decorative value. Sea urchins are abundant in the Ringgung area (South Lampung), but no scientific information is available. This research surveys the sea urchin population.

METHOD

The research was carried out from 28 June to 11 July 2008 at Ringgung, Kabupaten Lampung Selatan, Lampung Province. Ringgung is a recreation site and fish-farming area. The sampling method was purposive sampling: at each of five stations a 100-meter transect was laid out, consisting of 10 sampling plots (1 x 1 m). The coordinates of the five stations are:

Station I (sandy beach): S 05°33'29.12", E 105°15'13.72"
Station II (sandy beach): S 05°33'23.18", E 105°15'8.24"
Station III (muddy beach): S 05°33'17.24", E 105°15'11.19"
Station IV (muddy beach): S 05°33'10.30", E 105°15'14.15"
Station V (muddy beach): S 05°33'4.36", E 105°15'17.10"

Asteroidea and Echinoidea were observed and counted. Species names were matched against the monograph of Clark & Rowe. Diversity indices were applied for the data analysis.

RESULTS AND DISCUSSION

We found at least four species in the Ringgung area, classified as follows:

1. Phylum: Echinodermata
   Class: Asteroidea
   Order: Valvatida
   Family: Archasteridae
   Genus: Archaster
   Species: Archaster typicus, Muller & Troschel


Figure 1. Archaster typicus, Muller & Troschel

2. Fylum : Echinodermata Class : Asteroidea Ordo : Valvatida Family : Oreasteridae Genus : Culcita Spesies : Culcita novaeguineae

2. Phylum: Echinodermata
   Class: Asteroidea
   Order: Valvatida
   Family: Oreasteridae
   Genus: Culcita
   Species: Culcita novaeguineae

Figure 2. Culcita novaeguineae, Muller & Troschel

3. Phylum: Echinodermata
   Class: Echinoidea
   Order: Diadematoida
   Family: Diadematidae
   Genus: Diadema
   Species: Diadema setosum

Figure 3. Diadema setosum, Leske


4. Phylum: Echinodermata
   Class: Echinoidea
   Order: Clypeasteroida
   Family: Laganidae
   Genus: Laganum
   Species: Laganum sp.

Figure 4. Laganum sp.

Table 1. Morisita index at each station

The distribution pattern is clustered (clumped).
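The clumped classification in Table 1 follows from Morisita's index of dispersion, which exceeds 1 for aggregated counts. A minimal sketch (the per-plot counts used below are illustrative, not the paper's data):

```python
def morisita_index(plot_counts):
    """Morisita's index of dispersion, I_d = n * sum(x*(x-1)) / (N*(N-1)),
    for n plots with counts x and total N. I_d > 1 indicates a clumped
    pattern, I_d = 1 a random pattern, I_d < 1 a uniform pattern."""
    n = len(plot_counts)
    total = sum(plot_counts)
    if total < 2:
        return float("nan")
    return n * sum(x * (x - 1) for x in plot_counts) / (total * (total - 1))

# Illustrative: all individuals in one of five 1x1 m plots is strongly
# clumped (I_d = 5.0), while an even spread across the plots gives
# I_d below 1, indicating a uniform pattern.
```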

Table 2. Density of Asteroidea and Echinoidea at each station (ind/m²)

No.  Species               St. I   St. II   St. III   St. IV   St. V
1    Archaster typicus      3.6     3.5      4.4       4.9      4.3
2    Culcita novaeguineae    -       -       0.2       0.3       -
3    Diadema setosum        3.3     3.7      4.6       5.7      3.5
4    Laganum sp.             -       -       1.0       1.4      0.6

The highest total density was found at station IV and the lowest at station I.

Table 3. Diversity indices at each station

Station   Shannon index (H')   Evenness index (e)   Dominance index (C)
I         0.99 (low)           0.99                 0.50
II        0.99                 0.99                 0.50
III       1.48 (average)       0.74                 0.39
IV        1.53                 0.76                 0.39
V         1.29 (low)           0.82                 0.44
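The indices in Table 3 can be recomputed from the per-species densities in Table 2; the reported values are consistent with base-2 logarithms (an assumption on our part, since the paper does not state the base). A sketch, not the authors' code:

```python
import math

def diversity_indices(abundances):
    """Shannon diversity H' (base-2 logs), evenness e = H'/log2(S) and
    Simpson dominance C = sum(p_i^2) from a list of per-species abundances."""
    present = [x for x in abundances if x > 0]
    total = sum(present)
    props = [x / total for x in present]
    h = -sum(p * math.log2(p) for p in props)
    s = len(present)
    e = h / math.log2(s) if s > 1 else 0.0
    c = sum(p * p for p in props)
    return h, e, c

# Station I densities from Table 2 (A. typicus 3.6, D. setosum 3.3)
# give H' ~ 0.99, e ~ 0.99, C ~ 0.50, matching the first row of Table 3.
```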

CONCLUSIONS

1. The distribution of Asteroidea and Echinoidea is clustered (clumped).
2. Archaster typicus density is 3.5-4.9 ind/m² and Culcita novaeguineae density is 0.2-0.3 ind/m².
3. The sea urchin Diadema setosum density is 3.3-5.7 ind/m² and Laganum sp. density is 0.6-1.4 ind/m².
4. Archaster typicus and Diadema setosum were found at all stations; they are well adapted to sandy and muddy habitats.

REFERENCES

Clark, A. M. and F. W. E. Rowe. 1971. Monograph of Shallow-Water Indo-West Pacific Echinoderms. Trustees of The British Museum (Nat. Hist.), London. 238 pp.


Low biomass of macrobenthic fauna at a tropical mudflat: an effect of latitude?

Agus Purwoko a, b & Wim J. Wolff a, *

a Dept. of Marine Benthic Ecology and Evolution, University of Groningen, P.O. Box 14, 9750 AA Haren, The

Netherlands E-mail addresses: [email protected]

b Dept. of Biology, Sriwijaya University, Palembang, Indonesia E-mail addresses: [email protected]

ABSTRACT

The macrobenthic animal biomass of the intertidal area of the Sembilang peninsula, South Sumatra, Indonesia, was studied in 2004. Monthly (March - August), 21 core samples were taken at each of six sampling stations. Macrobenthic fauna was identified to the lowest taxonomic level possible and counted. Biomass was measured as ash-free dry weight (afdw). The average biomass over all stations and months was 3.62 g afdw m-2; the highest biomass (14.09 g afdw m-2) found at a station in one month was due to the abundant occurrence of the bivalve Anadara granosa. The low biomass of macrobenthic fauna at the Sembilang peninsula cannot easily be explained but is in line with low biomasses found elsewhere in the tropics. For that reason we analyzed a data set of 268 soft-bottom intertidal biomasses collected world-wide to look for a relationship with latitude. The average biomass of intertidal macrobenthic fauna in the tropics was significantly lower (p<0.05) than that at non-tropical sites, and a significant second-order relationship between biomass of macrobenthic fauna and latitude was established.

Keywords: biomass, macrobenthos, intertidal fauna, tropics, Indonesia, Sembilang

1. Introduction

Warwick & Ruswahyuni (1987) and Rehm (2006) studied the benthic fauna at the north coast of Central Java, Indonesia, and found low intertidal biomasses. As an explanation they suggested that in tropical waters, with more or less continuous phytoplankton production, zooplankton grazers are able to remain in phase with the phytoplankton and prevent a large part of the primary production from reaching the bottom as food for benthic fauna. This should be in contrast to temperate latitudes, where phytoplankton production is highly seasonal and much of the spring bloom settles at the bottom before the zooplankton population has built up sufficiently to graze it. Hence, they postulated a relationship between intertidal benthic biomass and latitude, suggesting this to result in lower biomasses in tropical areas compared to temperate areas. We studied the benthic fauna of the intertidal flats at the Sembilang peninsula, South Sumatra, Indonesia, in 2004 and found low biomasses as well. Hence, in this paper we report our results and investigate the possible existence of a positive relationship between latitude and biomass of macrobenthic fauna. For temperate and subtropical estuaries, Kalejta and Hockey (1991) found a negative relationship between estuarine invertebrate production and distance from the equator and a positive relationship between production and mean annual ambient temperature. However, production is not a good predictor of biomass, so their paper does not allow generalizations about a relationship between benthic biomass and latitude. Riese et al. (1994) compared two temperate mudflats with a tropical one and concluded that biomass was lowest at the tropical site, although no biomass values were given. Piersma et al. (1993) compared benthic biomass in 20 estuaries between 57°N and 34°S and concluded that there was no relationship between biomass and latitude, although their Fig. 2B suggests otherwise. Ricciardi and Bourget (1999) analyzed a data set from 245 sedimentary shores and found that macroinvertebrate biomass for sedimentary shores did not vary linearly with latitude, although peak values were found for northern and southern temperate areas. All analyses suffered from the fact that data from the tropics were less abundant than those from higher latitudes (Alongi 1990). We decided to reanalyze the data sets of Piersma et al. (1993)


and of Ricciardi and Bourget (1999) to investigate whether the low biomass at the Sembilang tidal flats could be attributed to a biomass-latitude relationship.

2. Methods

2.1. Area description

(Figure 1 about here)

The Sembilang peninsula (Fig. 1: 1°59' to 2°15' S and 104°45' to 104°53' E) is located at the eastern coast of South Sumatra, Indonesia, and is part of Sembilang National Park. It is influenced by the Musi River, the Musibanyuasin River and some smaller tributaries. Originally, the land was covered by mangrove and swampy forest. The most seaward belt of the mangrove vegetation is still formed by Sonneratia and Avicennia where the sediment is sandy; where the sediment is muddy, Rhizophora occurs. Nypa palms appear where the sediment is strongly influenced by fresh water. In the mangrove area at the Sembilang peninsula, approximately 200 to 500 meters landward of the beach, there are 4 000 ha of shrimp ponds. The coastline of the area directly faces Bangka Strait and the South China Sea. The area is characterized by a monsoon climate. The annual rainfall is 2 000 - 2 500 mm. The maximum rainfall occurs in November and December (260 - 275 mm per month) and the minimum in July and August (140 - 200 mm per month). We have distinguished a wet season from October to April and a dry season from May to September. Daily temperature ranges between 20 and 32°C and the humidity varies from 70 to 90 percent (source: Stasiun Klimatologi Klas I Kenten Palembang). The sediment of the tidal flats ranges from fine to medium sands to very soft mud with high organic content. Mostly, the sediment is acidic and the C:N ratio is high (Soeroyo & Suryaso 1999). There is one high tide and one low tide daily. The tide reaches a maximum height of 4 meters above low-tide level. Further, the tide is also influenced by the monsoon. During the rainy season, the high tide will be up to 1-2 m higher. Low-tide levels are also higher in that period, meaning that the tidal flats emerge slowly (tide table from Daftar pasang surut Sungsang, computer program TideWizard, and field observation). Salinity measured at low tide was 12-13 in the wet season and 17-18 in the dry season (Purwoko, unpublished observations).

2.2. Description of sampling stations

We selected 6 sampling stations (Fig. 1) with varied characteristics:

1. Station 1: Tj. (Tanjung) Carat (S: 2°16.324' & E: 104°55.117' by Gekko 202 Garmin GPS). Located in the estuary of the Musi river; the sediment is sandy, and the station is strongly influenced by fresh water.
2. Station 2: S. (Sungai) Bungin (S: 2°14.955' & E: 104°50.714'). Located in the estuary of the Musibanyuasin river. The surface layer of the sediment is soft, high in organic matter, and more than 1 meter thick. Young Avicennia occur on the sediment.
3. Station 3: Solok Buntu (S: 2°11.063' & E: 104°54.764'). Located near the estuary of the Musibanyuasin river. The muddy surface layer reaches 40 cm depth. Mangrove vegetation consists mainly of Avicennia.
4. Station 4: S. Barong Kecil (S: 2°9.872' & E: 104°54.587'). The site is near the minor estuary of S. Barong Kecil. The sediment is muddy and this layer reaches 50 cm depth.
5. Station 5: S. Siput (S: 2°5.824' & E: 104°54.102'). The sediment is muddy and the soft layer reaches 40 cm; the adjacent mangrove vegetation is mostly Avicennia; close to the site young trees grow.
6. Station 6: S. Dinding (S: 2°1.924' & E: 104°50.573'). The sediment is muddy and contains many dead shells. The depth of the soft layer reaches 40 cm. The site directly faces the


South China Sea, so it is exposed to high waves.

2.3. Sampling procedure

Each station consisted of 3 parallel line transects, each consisting of 7 sampling points. The distance between the line transects varied from 5 to 10 m; the distance between the sampling points was randomly chosen and varied between 3 and 20 m. The first core sample of the first transect was taken at the lowest water level, and the second to the seventh were taken at increasing distance towards the mangrove. In the field we could not distinguish beforehand between areas of normal animal density and areas of extremely high density (e.g., beds of bivalve filter feeders); hence, we could not stratify our samples accordingly. Samples were taken with a circular corer of 15 cm diameter to a sampling depth of 30 cm. At each sampling point 1 core sample was taken. The core samples were sieved directly in the field through a double layer of 1-mm sieves, and the animals were collected from the sieve by hand. Sometimes sampling was difficult: rough seas made it hard to approach the sampling sites, and big waves disturbed sampling activities, especially during sieving of the core samples. This may have reduced to some extent the number of animals obtained. To overcome this problem, on some occasions intact core samples were broken into pieces before sieving and animals, especially worms, were collected directly from the core when we saw them. Macrobenthic fauna collected was preserved in 70% alcohol mixed with 3% formalin. Sampling activities took place during low tide at Sungsang, determined by a tide table (TideWizard), in March, May, June, July and August 2004. The animal fauna in the samples was identified to the lowest taxonomic level possible (Dharma 1988, 1992; computer program: Poly Key), counted, and the ash-free dry weight (afdw) (Winberg & Duncan 1971) was measured at the laboratory at Palembang. The statistical calculations were carried out with the Statistics 7 program.

2.4. Reanalysis of literature data

We used the data set collected by Ricciardi and Bourget (1999) and kindly made available by Dr. A. Ricciardi as a starting point for our analysis. This data set comprises 245 soft-bottom localities. To these soft-bottom data we added our own data, in which we treated each of our sampling stations as one locality, resulting in 6 more locations. Finally, we added the data from Piersma et al. (1993: Table 1), excluding those also reported by Ricciardi & Bourget (1999). In total our data set consisted of 268 biomass values from soft-bottom tidal flats in tropical, subtropical, temperate and polar areas. To obtain normality we converted our biomass data by log10(afdw + 0.1). The Statistics 7 program was used for all data analyses.

3. Results

3.1. Biomass at Sembilang

(about here Tables 1 and 2)

Tables 1 and 2 (see also Figure 1) summarize our results for the macrobenthic biomass of the intertidal area at Sembilang. The animal biomass averaged over the 6 stations and the 5 months of sampling amounted to 3.62 g afdw m-2. The highest biomass of macrobenthic animals occurred at station 4 in June and July 2004 and was caused mainly by a high biomass of small individuals of the fast-growing bivalve Anadara granosa (Broom 1982). The biomass at station 4 (afdw: 14.09 g m-2) was significantly higher than those at the other stations (ANOVA: p<0.05). Most of the biomass derived from bivalves. Other animals, such as gastropods, crabs and polychaetes, reached their highest biomass levels in June, July and May 2004, respectively (ANOVA: p<0.05) (Table 2). Taxa influenced by place (stations) and time (months) were Tellina remies, T. timorensis, Clithon oualaniensis, Nereididae, Maldanidae and Lumbrineridae. Their biomasses were significantly different between stations and between months, as indicated by the different small letters (in brackets) in Tables 1 and 2.


3.2. Biomass - latitude relationship

The analysis of the Piersma et al. (1993) data set, keeping apart data from northern and southern latitudes, did result in positive correlations between latitude and biomass, but none was significant (regression analysis: North: R² = 0.15, p = 0.12; South: R² = 0.55, p = 0.26). The outcome improved when using absolute values of latitude but remained non-significant (R² = 0.16, p = 0.07). When we analysed our main data set of biomasses while keeping apart data from northern and southern latitudes, the t-test and regression analysis showed that the mean values of macrobenthic biomass in tropical and non-tropical areas were not significantly different for either northern (R² = 0.002, p = 0.60; t-test p = 0.13) or southern (R² = 0.09, p = 0.62; t-test p = 0.32) latitudes. However, when we classified our data without differentiating between southern and northern latitudes, Fig. 2 suggests that average biomass differs by latitude. Indeed, the mean biomass of tropical areas (defined as between the Tropic of Cancer at 23°N and the Tropic of Capricorn at 23°S) was significantly (t-test, p = 0.01) different from the biomass of non-tropical areas. (About here Figure 2) Further regression analysis indicated that the hypothesis of a linear relationship was rejected (R² = 0.00, p = 0.74). The best fit of the relationship between biomass of macrobenthic fauna and latitude zone was obtained by polynomial regression (R² = 0.032, p = 0.003). The equation of the regression is log10(afdw + 0.1) = 0.2846 + 0.0241x - 0.0003x², where x is latitude in degrees (Figure 3). (About here Figure 3) As may be expected, in our large data set water temperature is coupled significantly to latitude (R² = 0.78, p = 0.00). Biomass (as afdw) shows a weak negative linear correlation with water temperature (R² = 0.011, p = 0.097), with zero biomass at about 37°C.
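The quoted quadratic fit can be checked directly: the parabola's maximum lies at x = -b/(2c), which reproduces the 40.17° quoted in the discussion. A short sketch (the function names are ours, not from the paper):

```python
def predicted_log_biomass(lat):
    """Second-order biomass-latitude regression reported in the text:
    log10(afdw + 0.1) = 0.2846 + 0.0241*x - 0.0003*x**2."""
    return 0.2846 + 0.0241 * lat - 0.0003 * lat ** 2

def predicted_afdw(lat):
    """Back-transform to biomass (g afdw per m^2), undoing log10(afdw + 0.1)."""
    return 10 ** predicted_log_biomass(lat) - 0.1

# Vertex of the parabola: latitude of maximum predicted biomass, -b / (2c)
best_lat = -0.0241 / (2 * -0.0003)   # ~40.17 degrees
```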

4. Discussion

4.1. Biomass per station at Sembilang

The highest biomass occurred at station 4, which differed significantly from all other stations with by far the highest value (14.09 g m-2) of ash-free dry weight of macrobenthic animals. This is mainly the result of the abundant occurrence of the bivalve Anadara granosa, which was rare or lacking at the other stations. A. granosa is a fast-growing species occurring in dense beds. We encountered especially 1 - 1.5 cm sized individuals, which according to (ref.) are less than 1 year old. However, their growth is not fast enough to explain their occurrence in our samples in June without a previous occurrence in May. Apparently, we missed the A. granosa bed when sampling in May. This assumption is supported by the strongly clustered distribution we observed in June. When we omitted the biomass values of Anadara, statistical analysis showed that there were no significant differences among the 6 stations for the total biomass of the macrobenthic fauna. Polychaete biomass was significantly different between stations, with the highest value at station 4. Further analysis showed that the polychaete families Nereididae, Maldanidae and Lumbrineridae each had their highest biomass at a different station: station 2, station 6 and station 5, respectively.

4.2. Biomass per month at Sembilang

There was also an influence of the month of sampling on the biomass of the macrobenthic fauna at the Sembilang peninsula. Biomass of the total macrobenthic fauna was not significantly different over the 5-month sampling period. However, bivalves, gastropods and polychaetes showed significant differences. The highest bivalve and gastropod biomass was found in June, and the highest polychaete biomass in May 2004. Biomass of Tellina remies was lowest in August, whereas biomass of T. timorensis was highest in July and August 2004. Omitting the Anadara biomass value did not influence the ANOVA result for bivalves and total macrobenthic fauna


biomass, and there was no time influence for the Sembilang peninsula tidal flat. Maldanidae and Lumbrineridae had their highest biomass values in May, but Nereididae in August 2004.

4.3. Comparison of Sembilang biomass with other studies

We found an average biomass over all stations and all months of 3.62 g ash-free dry weight m-2. Biomass values ranged from 2.95 to 36.31 g afdw at east Java (Erftemeijer & Swennen 1990) and 2.89 g at central Java (Warwick & Ruswahyuni 1987). Broom (1981) reported a biomass of 23.45 g afdw at a Malaysian mudflat, so the Sembilang biomass was almost the same as the Java values but lower than the Malaysian one. Parulekar et al. (1980) reported that the biomass of the macrobenthic fauna at a tropical estuary in India was 54.2 g m-2 wet weight; converted to afdw (Ricciardi & Bourget 1998), this is 3.34 g afdw m-2, very close to our Sembilang value. In Mauritania (Africa, 20° latitude), the biomass was much higher (17.00 g ash-free dry weight m-2) (Wolff et al. 1993). Ricciardi and Bourget (1999) reviewed a large number of intertidal biomass studies and found for tropical sedimentary shores average biomasses of about 7.00 g m-2 ash-free dry weight (0 - 20°N) and about 8.50 g m-2 (0 - 20°S). We conclude that the value for the Sembilang peninsula is on the low side. Ricciardi and Bourget (1999) also suggest that macrobenthic biomass at soft-sediment shores may be lowered by very muddy sediment, a steep shore slope, high exposure to waves, and small waves. The Sembilang tidal flats are indeed very muddy, but they neither have a steep slope nor, in most cases, are they highly exposed to waves. The same authors also suggest a relationship with latitude (see below). They do not consider biological factors, which of course may be connected to the physical factors mentioned. Exclusion experiments showed that at Sembilang shorebirds were important predators of the macrobenthic community. Resident birds, mainly storks and herons, prey especially on fish, whereas migratory birds, mainly waders, feed on macrobenthic fauna;

however, during long drought periods local ducks also searched for food at the Sembilang peninsula tidal flats. However, shorebird numbers at Sembilang are not exceptionally high (Purwoko, in prep.). Although we have no records on predation by fish, there is evidence that some species prey on macrobenthic fauna. During field observations at low tide, we observed tracks of fish disturbance left on the substrate. Humans can be important predators as well. However, at Sembilang before 2005 Anadara and other shellfish were not harvested. On the other hand shrimps were caught with small trawls and standing nets. These activities may have caused disturbance of the bottom leading to lower macrobenthic biomass. 4.4. Biomass - latitude study The mean biomass of macrobenthic fauna in the tropics was shown to be significantly lower compared to the mean biomass of macrobenthic fauna at non-tropical latitudes. However, we could not demonstrate a significant linear relationship between benthic biomass and latitude. Instead we established a significant second-order relationship with the maximum biomass predicted at 40.17 degree of latitudes. This is in line with our earlier conclusion of low biomasses in the tropics, but we find it difficult to suggest a biological explanation for this relationship. Tropical beaches and tidal flats are subject to climatic and physical disturbance (review by (Alongi 1990). Large rain fall, storms, sun exposure, and high temperatures disturb the macrobenthic habitat. However, physical disturbance is also found at higher latitudes and it is not immediately clear why it should have less effect on benthic biomass. Maybe, our latitude – biomass relationship is the result of two stress gradients. One gradient might be related to the occurrence of freezing temperatures and ice scour, leading to a negative relationship between latitude and biomass from the poles towards temperate latitudes. 
The other one could be related to increasing temperatures leading to increasing heat stress on intertidal flats going from temperate latitudes to the equator.
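The quoted maximum can be verified from the fitted equation reported with Figure 3, y = 0.2846 + 0.0241x - 0.0003x^2 with y = log10(afdw + 0.1): the parabola peaks where its derivative vanishes. A quick check:

```python
# Fitted second-order model from Fig. 3:
# log10(afdw + 0.1) = c + b*x + a*x**2, with x = degrees of latitude
c, b, a = 0.2846, 0.0241, -0.0003

# The biomass maximum sits at the vertex of the parabola, x = -b / (2a).
x_max = -b / (2 * a)
print(round(x_max, 2))  # 40.17
```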


Other possible explanations are ecological in nature. Warwick & Ruswahyuni (1987) speculated that in the tropics less of the primary production by phytoplankton reaches the bottom because production is more or less continuous and in phase with zooplankton grazers. In temperate regions, on the other hand, they argue, primary production is highly seasonal and much of the spring bloom settles to the bottom before the zooplankton population has built up sufficiently to graze it. This might be true at Sembilang as well: phytoplankton occurs in low densities at Sembilang (Sutomo 1999), possibly because of the high abundance of zooplankton at the same time (Romimohtarto 1999). However, data are lacking to test the generality of this explanation. The same applies to the suggestion that tropical flats are subjected to higher exploitation pressures by humans. One of us (WJW) has seen tidal flats all over the world and observed that in many developing tropical countries coastal ecosystems are exploited very intensively. Although we cannot support this observation quantitatively, it might mean that tropical mudflats on average are exploited more heavily than non-tropical flats.

Acknowledgments

First, AP would like to thank Sriwijaya University, Palembang, Indonesia, for sponsoring his study. AP thanks Ake and Usup for their assistance during sampling and Ismail for acting as a speedboat driver. He is also grateful that Edi P. assisted in measuring the AFDW. Finally, special thanks are due to Dr. Harry ten Hove, who guided us in identifying the macrobenthic fauna, and to Dr. Anthony Ricciardi, who supplied his world-wide data set on macrobenthic biomasses.

References

Alongi, D. M. (1990). "The ecology of tropical soft-bottom benthic ecosystems." Oceanography and Marine Biology: An Annual Review 28: 381-496.

Broom, M. J. (1982). "Structure and seasonality in a Malaysian mudflat community." Estuarine, Coastal and Shelf Science 15(2): 135-150.

Dharma, B. (1988). Siput dan kerang Indonesia (Indonesian Shells). Jakarta, PT. Sarana Graha: 111 pages.

--- (1992). Siput dan kerang Indonesia (Indonesian Shells II). Wiesbaden, Germany, Verlag Christa Hemmen: 135 pages.

Erftemeijer, P. and C. Swennen (1990). "Density and biomass of macrobenthic fauna of some intertidal areas in Java, Indonesia." Wallaceana 59-60: 1-6.

Kalejta, B. and P. A. R. Hockey (1991). "Distribution, abundance and productivity of benthic invertebrates at the Berg River estuary, South Africa." Estuarine Coastal and Shelf Science 33(2): 175-191.

Parulekar, A. H., V. K. Dhargalkar and Y. S. S. Singbal (1980). "Benthic Studies in Goa Estuaries: Part III - Annual Cycle of Macrofaunal Distribution, Production and Trophic Relations." Indian Journal of Marine Science 9: 189-200.

Piersma, T., de Goeij, P. and Tulp, I. (1993). "An evaluation of intertidal feeding habitats from a shorebird perspective: Towards relevant comparisons between temperate and tropical mudflats." Netherlands Journal of Sea Research 31(4): 503-512.

Rehm, P., Thatje, S., Siegel, M. U., and Brandt, A. (2006). "Composition and distribution of the peracarid crustacean fauna along a latitudinal transect of Victoria Land (Ross Sea, Antarctica) with special emphasis on the Cumacea." Polar Biol: 11 pages.

Ricciardi, A. and E. Bourget (1998). "Weight-to-weight conversion factors for marine benthic macroinvertebrates." Marine Ecology Progress Series (MEPS) 163: 245-251.

--- (1999). "Global patterns of macroinvertebrate biomass in marine intertidal communities." Marine Ecology Progress Series (MEPS) 185: 21-35.

Reise, K., E. Herre and M. Sturm (1994). "Biomass and abundance of macrofauna in intertidal sediments of Königshafen in the northern Wadden Sea." Helgoländer Meeresuntersuchungen 48(2-3): 201-215.

Romimohtarto, K. (1999). Komposisi dan sebaran zooplankton (in Indonesian). Ekosistem perairan sungai Sembilang, Musibanyuasin, Sumatera Selatan. K. Romimohtarto, Djamali, A. and Soeroyo (editors). Jakarta, Pusat Penelitian dan Pengembangan Oseanologi-LIPI (Lembaga Ilmu Pengetahuan Indonesia/Indonesian Scientific Institute): 37-54.

Soeroyo and Suryaso (1999). Sifat-sifat kimia tanah mangrove (in Indonesian). Ekosistem perairan sungai Sembilang, Musibanyuasin, Sumatera Selatan. K. Romimohtarto, Djamali, A. and Soeroyo (editors). Jakarta, Pusat Penelitian dan Pengembangan Oseanologi-LIPI (Lembaga Ilmu Pengetahuan Indonesia/Indonesian Scientific Institute): 15-20.

Sutomo (1999). Kondisi klorofil dan seston (in Indonesian). Ekosistem perairan sungai Sembilang, Musibanyuasin, Sumatera Selatan. K. Romimohtarto, Djamali, A. and Soeroyo (editors). Jakarta, Pusat Penelitian dan Pengembangan Oseanologi-LIPI (Lembaga Ilmu Pengetahuan Indonesia/Indonesian Scientific Institute): 29-36.

Warwick, R. M. and Ruswahyuni (1987). "Comparative study of the structure of some tropical and temperate marine soft-bottom macrobenthic communities." Marine Biology 95(4): 641-649.

Winberg, G. G. and A. Duncan (1971). Methods for the estimation of production of aquatic animals. London and New York, Academic Press: 174 pages.

Wolff, W. J., G. A. Duiven, P. Duiven, P. Esselink, A. Gueye, A. Meijboom, G. Moerland and J. Zegers (1993). "Biomass of macrobenthic tidal flat fauna of the Banc d'Arguin, Mauritania." Hydrobiologia 258: 151-163.


Average AFDW (g/m2, monthly) at Sembilang peninsula

Species                  Station 1   Station 2   Station 3   Station 4   Station 5   Station 6   Average
Bivalves
1 Anadara granosa        0.0431      0.0000      1.1113      12.2685     0.0000      0.2047      2.2713
2 Hecuba scortum         0.0122      0.0098      0.0391      0.0095      0.0289      0.0046      0.0173
3 Solen sp.              0.1533      0.0000      0.0000      0.0000      0.0000      0.0000      0.0256
4 Tellina remies         0.0210(a)   0.0892(a)   0.5047(b)   0.8736(c)   0.6869(bc)  0.2765(a)   0.4087
5 Tellina timorensis     0.2631(a)   1.8528(b)   0.0302(a)   0.0841(a)   0.1461(a)   0.0815(a)   0.4096
Total Bivalves           0.4927(a)   1.9518(a)   1.6852(a)   13.2357(b)  0.8619(a)   0.5673(a)   3.1324
Gastropods
1 Clithon oualaniensis   0.0350(a)   0.1648(b)   0.0114(a)   0.0734(a)   0.0177(a)   0.0430(a)   0.0575
2 Littorina melanostoma  0.0000      0.0004      0.0000      0.0002      0.0000      0.0005      0.0002
3 Nassa serta            0.0517      0.0637      0.0333      0.0636      0.0337      0.0135      0.0432
4 Thais buccinea         0.0111      0.0151      0.0306      0.0096      0.0038      0.0298      0.0167
Total Gastropods         0.0978(bc)  0.2440(c)   0.0753(bc)  0.1467(c)   0.0551(a)   0.0868(bc)  0.1176
Decapods (Crabs)
1 Ocypodidae             0.0231      0.0551      0.2012      0.0463      0.2535      0.0350      0.1024
2 Leucociidae            0.0000      0.0116      0.0000      0.0000      0.0493      0.0000      0.0101
Total Crabs              0.0231      0.0667      0.2012      0.0463      0.3028      0.0350      0.1125
Polychaetes
1 Nereididae             0.1111(b)   0.1182(b)   0.0609(a)   0.0654(a)   0.0685(a)   0.0721(a)   0.0827
2 Maldanidae             0.0040(a)   0.0103(bc)  0.0060(ab)  0.0250(c)   0.0153(c)   0.0277(d)   0.0147
3 Lumbrineridae          0.0079(a)   0.0230(ab)  0.0288(ab)  0.0553(b)   0.1133(c)   0.0291(ab)  0.0429
4 Capitellidae           0.0027      0.0127      0.0034      0.0071      0.0064      0.0092      0.0069
5 Sternaspidae           0.0124      0.0111      0.0527      0.5013      0.0042      0.0075      0.0982
6 Unidentified worms     0.0085      0.0082      0.0142      0.0089      0.0000      0.0240      0.0106
Total Worms              0.1465(a)   0.1835(a)   0.1660(a)   0.6629(b)   0.2077(a)   0.1698(a)   0.2561
Total all species        0.7601(a)   2.4460(a)   2.1278(a)   14.0915(b)  1.4275(a)   0.8590(a)   3.62

Table 1. The average biomass in g ash-free dry weight (g m-2) of macrobenthic fauna at different stations (see Fig. 1) at Sembilang peninsula from March to August 2004. Different letters (in brackets) after biomass values in the same row (species) indicate significant differences (ANOVA: p<0.05).
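The Average column of Table 1 is the arithmetic mean of the six station values, as a quick check for the Anadara granosa row shows:

```python
# Station biomasses (g afdw per m2) for Anadara granosa, Table 1:
stations = [0.0431, 0.0000, 1.1113, 12.2685, 0.0000, 0.2047]
row_average = sum(stations) / len(stations)
print(round(row_average, 4))  # 2.2713, matching the Average column
```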

Average AFDW (g/m2) at Sembilang peninsula

Species                  March       May         June        July        August      Average
Bivalves
1 Anadara granosa        0.1706      0.0000      7.7037      3.4820      0.0000      2.2713
2 Hecuba scortum         0.0224      0.0067      0.0269      0.0115      0.0191      0.0173
3 Solen sp.              0.0000      0.0000      0.1278      0.0000      0.0000      0.0256
4 Tellina remies         0.5746(b)   0.5163(b)   0.4092(b)   0.3420(ba)  0.2011(a)   0.4087
5 Tellina timorensis     0.0629(a)   0.1427(a)   0.1996(a)   0.8470(b)   0.7960(b)   0.4096
Total Bivalves           0.8305(a)   0.6657(a)   8.4672(b)   4.6824(b)   1.0163(a)   3.1324
Gastropods
1 Clithon oualaniensis   0.0224(a)   0.0673(ab)  0.0673(ab)  0.1115(b)   0.0190(a)   0.0575
2 Littorina melanostoma  0.0000      0.0000      0.0009      0.0000      0.0000      0.0002
3 Nassa serta            0.0853      0.0269      0.0853      0.0075      0.0112      0.0433
4 Thais buccinea         0.0067      0.0125      0.0375      0.0193      0.0072      0.0167
Total Gastropods         0.1145(a)   0.1068(a)   0.1911(b)   0.1383(ab)  0.0375(a)   0.1176
Decapods (Crabs)
1 Ocypodidae             0.0584      0.0147      0.0640      0.1446      0.0771      0.0718
2 Leucociidae            0.0000      0.0571      0.0213      0.0579      0.0675      0.0408
Total Crabs              0.0584      0.0718      0.0853      0.2025      0.1446      0.1126
Polychaetes
1 Nereididae             0.0224(a)   0.0673(b)   0.0494(b)   0.1020(c)   0.1723(d)   0.0827
2 Maldanidae             0.0013(a)   0.0269(bc)  0.0269(c)   0.0062(a)   0.0121(b)   0.0147
3 Lumbrineridae          0.0180(a)   0.0629(b)   0.0269(a)   0.0594(b)   0.0474(b)   0.0429
4 Capitellidae           0.0011      0.0028      0.0180      0.0029      0.0099      0.0069
5 Sternaspidae           0.0016      0.4169      0.0314      0.0253      0.0158      0.0982
6 Unidentified worms     0.0131      0.0196      0.0135      0.0018      0.0053      0.0106
Total Worms              0.0575(a)   0.5964(b)   0.1661(a)   0.1976(a)   0.2628(a)   0.2561
Total all species        1.0609      1.4408      8.9097      5.2208      1.4612      3.62

Table 2. Average biomass in g ash-free dry weight (g m-2) of macrobenthic fauna in different months at Sembilang peninsula from March to August 2004. Different letters (in brackets) after biomass values in the same row (species) indicate significant differences (ANOVA: p<0.05).
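The significance letters in Tables 1 and 2 derive from one-way ANOVA on replicate biomass values, followed by a post-hoc comparison. A minimal pure-Python sketch of the F statistic; the replicate values below are hypothetical, since the per-core data are not listed here:

```python
def f_oneway(*groups):
    """One-way ANOVA F statistic (between-group MS / within-group MS)."""
    k = len(groups)                                # number of groups
    n = sum(len(g) for g in groups)                # total observations
    grand_mean = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical per-transect total biomasses (g afdw per m2) for three stations;
# a large F signals that at least one station mean differs.
F = f_oneway([1.7, 2.1, 1.9], [13.0, 14.5, 14.8], [0.8, 0.9, 0.9])
print(F > 10)  # True
```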

Figure 1. Sampling stations along the coast of Sembilang peninsula. Each station consists of three transects each with seven sampling points.
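Given the design in Figure 1 and the five sampling months listed in the Methods, the total sampling effort is a simple tally:

```python
stations, transects, points_per_transect = 6, 3, 7
months = 5  # March, May, June, July, August 2004
cores_per_month = stations * transects * points_per_transect
total_cores = cores_per_month * months
print(cores_per_month, total_cores)  # 126 cores per month, 630 in total
```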


Figure 2. Average biomass (g afdw m-2) per 10-degree latitude class, northern and southern latitudes combined. (Bar chart; x-axis: latitude classes 0-9.9 to >60; y-axis: average biomass, 0-18.)

Figure 3. The relationship between latitude and biomass. (Scatter plot; x-axis: latitude, -10 to 80; y-axis: log10(afdw + 0.1), -1.5 to 3.0.) The relationship shown fits the equation y = 0.2846 + 0.0241x - 0.0003x^2 (0.95 conf. int.) with y = log10(afdw + 0.1).
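Back-transforming the fitted curve gives predicted biomass in g afdw m-2 directly; a short sketch comparing the equator with the predicted maximum near 40 degrees:

```python
def predicted_afdw(latitude):
    # Fitted model from Fig. 3: log10(afdw + 0.1) = 0.2846 + 0.0241x - 0.0003x^2
    y = 0.2846 + 0.0241 * latitude - 0.0003 * latitude ** 2
    return 10 ** y - 0.1

print(round(predicted_afdw(0.0), 2))    # ~1.83 g afdw per m2 at the equator
print(round(predicted_afdw(40.17), 2))  # ~5.77 g afdw per m2 at the predicted maximum
```

By this model, predicted biomass near 40 degrees latitude is roughly three times the equatorial value, consistent with the conclusion that tropical biomasses are comparatively low.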


Density and biomass of the macrobenthic fauna of the intertidal area in Sembilang national park, South Sumatra, Indonesia

Agus Purwoko a, b & Wim J. Wolff a, *

a Dept. of Marine Benthic Ecology and Evolution, University of Groningen, P.O. Box 14, 9750 AA Haren, The Netherlands. E-mail address: [email protected]

b Dept. of Biology, Sriwijaya University, Palembang, Indonesia. E-mail address: [email protected]

ABSTRACT

The intertidal area of the Sembilang peninsula, South Sumatra, Indonesia, was studied in 2004. Three replicate transect lines were located at each of six sampling stations; each transect line consisted of 7 randomly placed core samples taken monthly. Macrobenthic fauna was identified to the lowest taxonomic level possible and counted, and biomass was measured as ash-free dry weight. The data were analysed statistically; both station and month showed significant effects. The most abundant macrobenthic animal found was the bivalve Tellina timorensis. Biomass of the total macrobenthic fauna was not significantly different over the 5-month sampling period; however, Anadara showed the highest biomass value.

Introduction

Intertidal areas on sedimentary shores are places of dynamic biological processes. In many areas the intertidal is important for sustaining the economic and ecological functions of coastal areas for plants, animals, and local people. One of the ecological indicators of the state of the intertidal ecosystem is macrobenthic animal biomass. Observations on the biomass of intertidal areas on sedimentary shores have been made mostly in temperate and subtropical regions (e.g., Piersma et al., 1993; Ricciardi & Bourget, 1999). Alongi (1990) also reports that there is little information on intertidal macrobenthic fauna in the tropics; most studies of tropical macrobenthos have been carried out in lagoons and estuaries in India. More recent studies on the macrobenthos of tropical tidal flats include those by Wolff et al. (1993), Dittmann (1995 & 2000), Pepping et al. (1999), De Boer (2000), and Piersma et al. (2005) in Mauritania, Northeast Australia, Western Australia, Mozambique, and Western Australia, respectively.

The Sembilang tidal flat area in South-east Sumatra, Indonesia, is an important stop-over site for migrating shorebirds. The area supplies food for resident and migrating birds. Starting in 1988, Danish and Netherlands researchers collaborated with the Environmental Study Center staff of Sriwijaya University in exploring the Sembilang coastal area, censusing the bird populations and measuring socio-economic aspects (Danielsen, 1990 & Djamali, 1999). Some researchers, such as Simanjuntak (1999), Soeroyo (1999), and Suryaso (1999) from the Indonesian National Institute of Oceanography (Lembaga Oceanografi Nasional - Lembaga Ilmu Pengetahuan Indonesia), determined the bio-geophysical and chemical characteristics of the mangrove areas of Sembilang and the Bangka strait. The first inventory of the macrobenthic fauna of the Sembilang tidal flats was conducted by Purwoko (1992). The same author (1996) also studied the macrobenthos biomass at S. (Sungai) Tengkorak, a river mouth on the Sembilang peninsula. The objective of our quantification of the macrobenthos biomass is to determine the potential food supply for migrating shorebirds and any other predators (e.g., fish and fishermen) on the shores of Sembilang National Park.

Material and Methods

Area description

The Sembilang peninsula (Fig. 1: 1° 59' to 2° 15' S and 104° 45' to 104° 53' E) is located on the eastern coast of South Sumatra, Indonesia, and is part of the Sembilang National Park. It is influenced by the Musi River, the Musibanyuasin River and some smaller tributaries. Originally, the land was covered by mangrove and swampy forest. The most seaward belt of the mangrove vegetation is still formed by Sonneratia and Avicennia


where the soil is sandy; however, where the soil is muddy, Rhizophora occurs. Nypa palms appear where the soil is strongly influenced by fresh water. In the mangrove area of the Sembilang peninsula, approximately 200 to 500 metres from the beach, there are 4,000 ha of shrimp ponds. The coastline of the area directly faces the Bangka strait and the South China Sea. Two villages are nearby: Sungsang village at the southern part and Sembilang village at the northern end of the peninsula. Reaching Sungsang from the capital city (Palembang) takes 2.5 hours by boat (90 km).

The area is characterized by a monsoon climate. The annual rainfall is 2,000-2,500 mm. The maximum rainfall occurs in November and December (260-275 mm per month) and the minimum in July and August (140-200 mm per month). We have distinguished a wet season from October to April and a dry season from May to September. Daily temperature ranges between 20 and 32 °C and the humidity varies from 70 to 90 percent (source: Stasiun Klimatologi Klas I Kenten Palembang). The sediment of the tidal flats is varied: it ranges from fine to medium sands to very soft mud with high organic content. Mostly, the soil is acid and the C:N ratio is high (Suryaso, 1999). There is one high tide and one low tide daily. The tide reaches a maximum height of 4 metres above low-tide level. Further, the tide is also influenced by the monsoon: during the rainy season, the high tide will be up to 1-2 m higher. Low tide levels are also higher in that period, meaning that the tidal flats emerge slowly (Daftar pasang surut Sungsang, and computer program: TideWizard).

Description of sampling stations

We selected 6 sampling stations (Fig. 1) with varied characteristics:

1. Station 1: Tj. (Tanjung) Carat (S: 2° 16.324' & E: 104° 55.117', by Gekko 202 Garmin GPS). Located at the estuary of the Musi river; the soil is sandy, and the station is strongly influenced by fresh water.
2. Station 2: S. (Sungai) Bungin (S: 2° 14.955' & E: 104° 50.714'). Located in the estuary of the Musibanyuasin river. The surface layer of the sediment is soft, high in organic matter, and more than 1 metre thick. Young Avicennia occur on the sediment.
3. Station 3: Solok Buntu (S: 2° 11.063' & E: 104° 54.764'). Located near the estuary of the Musibanyuasin river. The surface layer of the soil is muddy and reaches 40 cm depth. Mangrove vegetation consists mainly of prepat (Avicennia). At sea nearby there are some pole houses.
4. Station 4: S. Barong Kecil (S: 2° 9.872' & E: 104° 54.587'). The site is near the minor estuary of S. Barong Kecil. The soil is muddy and the depth of this layer reaches 50 cm.
5. Station 5: S. Siput (S: 2° 5.824' & E: 104° 54.102'). The soil is muddy and the soft layer reaches 40 cm; the adjacent mangrove vegetation is mostly Avicennia; young trees grow close to the site.
6. Station 6: S. Dinding (S: 2° 1.924' & E: 104° 51.838'). The sediment is muddy and contains many dead shells. The depth of the soft layer reaches 40 cm. The site directly faces the South China Sea.

Salinity at these stations was measured in 2005 and 2006 by argentometric titration of field samples. The results show that the intertidal area of the Sembilang peninsula is a brackish-water environment with major differences between the wet season (June) and the dry season (October) (Table 1). The stations show minor differences in salinity, both in the water and in the sediment.

Sampling procedure

Each station consisted of 3 parallel line transects, each consisting of 7 sampling points. The distance between the line transects was 5 m; the distance between the sampling points was randomly chosen and varied between 3 and 20 m. The first core sample of each transect was taken at the lowest water level, and the second to the seventh were taken at increasing distances towards the mangrove. Samples were taken with a circular corer.
Core diameter was 15 cm and the sampling depth 30 cm. At each sampling point 1 core sample was taken. The core samples were sieved directly in the field over a 1 mm sieve and the animals were collected from the sieve by hand. The macrobenthic fauna collected was preserved in 70% alcohol mixed with 3% formalin. Sampling activities took place during low tide, determined by a tide table for the Sungsang mouth (TideWizard), in March, May, June, July and August 2004. The animals in the samples were identified to the lowest taxonomic level possible (Dharma 1988; Dharma 1992; computer program: Poly Key) and counted, and Ash-Free Dry Weight (AFDW) (Winberg & Duncan 1971) was measured at the laboratory in Palembang. The weather data were collected from the weather station at Palembang. The water quality data were gathered by other researchers (Simanjuntak 1999). The statistical calculations were carried out with the Statistica 7 program.

Results

Species composition and density

We caught 17 taxa of macrobenthic fauna at the Sembilang peninsula. The composition of the fauna and the average density of the animals are shown in Table 2. The most abundant macrobenthic species are the bivalve Tellina timorensis and the gastropod Clithon oualaniensis. T. timorensis comprises 27.9 percent of the total macrobenthic fauna; its density is 1737.4 individuals per m2.

Comparison of stations

The density of animals at the 6 stations is significantly different (p < 0.05). The highest density of macrobenthic animals occurred at station 2 in July and August 2004 (Figure 2, Table 3). The distribution of the macrobenthic fauna is contagious (Morisita index P = 2.250). Figure 3 shows the percentages of the main taxonomic groups at the stations at the Sembilang peninsula. Polychaete worms were the most abundant animals at 5 out of 6 sampling stations, and the bivalves usually came second. Tables 4a and 4b illustrate the distribution of biomass as ash-free dry weight (afdw).
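The densities per square metre reported here scale up from counts per core using the corer geometry given in the Methods (15 cm diameter, one core per sampling point):

```python
import math

core_diameter_m = 0.15  # corer diameter from the Methods
core_area_m2 = math.pi * (core_diameter_m / 2) ** 2
per_core_to_per_m2 = 1 / core_area_m2

print(round(core_area_m2, 4))        # 0.0177 m2 sampled per core
print(round(per_core_to_per_m2, 1))  # 56.6: one animal per core ~ 56.6 per m2
```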

The highest density of macrobenthic fauna, mainly bivalves, occurred at station 2; however, the highest biomass occurred at station 4 (afdw: 14.1 g m-2), which is significantly higher than at the other stations (p<0.05).

Comparison over time

The highest standing biomass of macrobenthic fauna occurred in June, but it was not significantly different from that in the other months of 2004. Although the highest biomass occurred in June, the highest number of animals occurred in July 2004. Most of the animal biomass derived from bivalves. Other animals reached their highest biomass level at a different time: gastropods in March, crabs in August and worms in May 2004 (Table 4b). Animals influenced by place (stations) and time (months) were Tellina remies, T. timorensis, Clithon oualaniensis, Nereididae, Maldanidae and Lumbrineridae. They were significantly different, as indicated by the different small letters (in brackets) in Tables 4a and 4b.

Discussion

Quality of the data

Sometimes sampling was difficult. Rough seas sometimes made approaching the sampling sites difficult, and big waves disturbed sampling activities, especially during sieving of the core samples. This may have reduced the number of animals obtained. To overcome this problem, on some occasions intact core samples were broken into pieces before sieving, and animals, especially worms, were collected directly from the core when we saw them. Due to the fine sieve (0.5 mm mesh) used to sieve the sediment, worms broke into fragments. This may have had an impact on the number of worms counted.

Comparison of densities per station

The average densities of all species together over the entire period differed significantly between stations (Table 3). This was also true for the average densities of all species per month, except for March (Table 3). Species densities also differed. For example, Tellina timorensis and Clithon oualaniensis were found especially at station 2. Soft sediment and a high organic matter content at station 2 might be favourable for these two species.

Comparison of densities per month

Table 3 shows that average total densities per station differed significantly (Duncan test). Wet (May and June 2004) and dry (July and August 2004) seasons were expected to be correlated with the density of macrobenthic fauna (sampling stations, ANOVA: p < 0.05); however, further tests (Duncan test and t-test) could not show a significant difference between the wet and dry season in 2004. When we look at the macrobenthic fauna at a lower taxonomic level, we find that Tellina timorensis and Clithon oualaniensis show significantly different densities (Duncan test; p<0.05) in the dry and wet seasons. They were more abundant in the wet season. It might be hypothesized that they settled more in that period because they require some resource connected to the lower salinity during the wet season.

Comparison of densities with other studies

The average density of macrobenthic species at our study site (6,219.6 individuals per m2) was lower than that (22,591 to 52,914 individuals per m2) in a mangrove area in Florida, USA (Sheridan 1997). However, the proportion of bivalves and worms (82%) was higher at the Sembilang peninsula than in the Zuari estuary in India (70%) (Parulekar et al., 1980).

Comparison of biomass per station

Although the highest density of macrobenthic fauna occurred at station 2, the highest biomass occurred at station 4. This station differed significantly from all other stations. Station 4, which had a low density (286.06 individuals per m2) compared to station 2 (474.61 individuals per m2), contained by far the highest value (14.1 g m-2) of ash-free dry weight of macrobenthic animals. This is mainly the result of the abundant occurrence of the bivalve Anadara granosa, which was rare or lacking at the other stations.
The biomass of Anadara did not differ significantly between our 6 stations, but it influenced the result of the statistical analysis. When we omitted the biomass values of Anadara, the statistical analysis showed that, although the highest biomass of bivalves occurred at station 2 (p<0.05), there were no significant differences among the 6 stations for the total biomass of the macrobenthic fauna. Other bivalves like Tellina remies and T. timorensis showed significant differences in biomass, with the biomass of T. remies highest at station 4 and that of T. timorensis at station 2. Like T. timorensis, Clithon had a high biomass at station 2. Polychaete biomass was significantly different between stations; the highest value was found at station 4. Further analysis showed that the polychaete families Nereididae, Maldanidae and Lumbrineridae each had their highest biomass at a different station: station 2, station 6 and station 5, respectively. This may imply that the Nereididae preferred the habitat which is frequently influenced by fresh water, but that the others avoided that habitat.

Comparison of biomass per month

There was also an influence of the month of sampling on the biomass of the macrobenthic fauna at the Sembilang peninsula. Biomass of the total macrobenthic fauna was not significantly different over the 5-month sampling period. However, bivalves, gastropods and worms showed significant differences. The highest biomasses of bivalves, gastropods and worms were found in July, June and May 2004, respectively. Biomass of Tellina remies was lowest in August, whereas biomass of T. timorensis was highest in July and August 2004. Omitting the Anadara biomass value did not influence the analysis of bivalve biomass, since the ANOVA gave the same result. Maldanidae and Lumbrineridae had their highest biomass value in May, but Nereididae in July.

Comparison of biomass with other studies

We found an average biomass over all stations and all months of 3.6 g ash-free dry weight m-2. Parulekar et al. (1980) reported that the biomass of the macrobenthic fauna in a tropical estuary in India was 54.2 g m-2 (wet weight). When the Sembilang ash-free dry weight value is converted to wet weight (Ricciardi and Bourget, 1998), the Sembilang average biomass is 58.8 g m-2, very close to the Indian value. Ricciardi and Bourget (1999) reviewed a large number of intertidal biomass studies and found for tropical sedimentary shores average biomasses of about 7.0 g m-2 ash-free dry weight (0-20° N) and about 8.5 g m-2 (0-20° S). We conclude that the value for the Sembilang peninsula is on the low side.

Relationship with environmental factors

Intertidal benthic biomass is the result of recruitment, growth and mortality. Among the factors influencing these processes are water quality, sediment quality and biotic factors such as predation. Water quality includes salinity. High river discharges may impede settlement and growth of marine and many estuarine animals as well as cause high mortality because of too-low salinities. Hence, it may be hypothesized that biomass decreases from the most seaward location (station 6) to the most riverward locations (stations 1 and 2). However, there is no clear trend in the salinity data (Table 1) or in the biomass data (Table 4a); only station 4 has a significantly higher biomass, whereas all other stations do not differ significantly. Simanjuntak (1999) reported that the wet season caused reduced salinity. This was confirmed by our study. The sediment at station 1 is sandy, whereas all other stations are very muddy. This difference does not result in a clear difference between these stations with regard to benthic biomass. With the exception of station 4, all stations have average biomasses that do not differ significantly. We have insufficient data on predation of the benthic fauna. We have made bird censuses (Purwoko, in prep.) and we observed that Anadara granosa was fished near stations 4, 5 and 6 in 2005. We do not have any information on fish and other nekton feeding in the intertidal at high tide.

Acknowledgements

First, AP would like to thank Sriwijaya University for sponsoring his study. AP also thanks Ake for his assistance during sampling and Ismail for acting as a speedboat driver.
He is also grateful that Edi P. assisted in measuring the AFDW. Finally, special thanks are due to Harry ten Hove, who guided the identification of the macrobenthic fauna.

References

Alongi, D. M., 1990. The ecology of tropical soft-bottom benthic ecosystems. Oceanography and Marine Biology Annual Review 28: 381-496.

Danielsen, F. & W. Verheugt, 1990. Integrating conservation and land-use planning in the coastal region of South Sumatra. Asian Wetland Bureau-PHPA, Bogor, Indonesia.

De Boer, W. F., 2000. Between the tides. The impact of human exploitation on an intertidal ecosystem, Mozambique. PhD thesis, University of Groningen.

Dharma, B., 1988. Siput dan kerang Indonesia (Indonesian Shells)., PT. Sarana Graha, Jakarta.

Dharma, B., 1992. Siput dan kerang Indonesia (Indonesian Shells II). Verlag Christa Hemmen. Wiesbaden, Germany.

Dittmann, S., 1995. Benthos structure on tropical tidal flats of Australia. Helgoländer Meeresuntersuchungen 49: 539-551.

Dittmann, S., 2000. Zonation of benthic communities in a tropical tidal flat of northeast Australia. Journal of Sea Research 43: 33-51.

Djamali, A. & Sutomo, 1999. Kondisi sosial ekonomi budaya dan perikanan. In Romimohtarto K., A. Djamali, & Soeroyo (eds), Ekosistem perairan sungai Sembilang, Musibanyuasin, Sumatera Selatan. Pusat Penelitian dan Pengembangan Oseanologi-LIPI, Jakarta.

Parulekar, A. H., V. K. Dhargalkar, & Y. S. S. Singbal, 1980. Benthic Studies in Goa Estuaries: Part III - Annual Cycle of Macrofaunal Distribution, Production and Trophic Relations. Indian Journal of Marine Science 9: 189-200.

Pepping, M., T. Piersma, G. Pearson & M. Lavaleye, 1999. Intertidal sediments and benthic animals of Roebuck Bay, Western Australia. NIOZ-Report 1999-3: 212 pp. Netherlands Institute for Sea Research, Texel.

Piersma, T., P. de Goeij, & I. Tulp, 1993. An evaluation of intertidal feeding habitats from a shorebird perspective: Towards relevant comparisons between temperate and tropical mudflats. Netherlands Journal of Sea Research 31: 503-512.

Piersma, T., G. B. Pearson, R. Hickey & M. Lavaleye, 2005. The Long Mud. Benthos and shorebirds of the foreshore of Eighty-mile Beach, Western Australia. NIOZ-Report 2005-2: 218 pp. Royal Netherlands Institute for Sea Research, Texel.

Purwoko, A., 1992. The inventory of macrobenthos in S. Sembilang - Sungsang. Lembaga Penelitian Univ. Sriwijaya. Palembang, Indonesia.

Purwoko, A., 1996. The carrying capacity of intertidal area: a feeding habitat at the stop-over of migratory birds in the Sembilang peninsula, South Sumatra, Indonesia. Environmental Science, Wageningen Agricultural University, Wageningen.

Ricciardi, A. & E. Bourget, 1999. Global patterns of macroinvertebrate biomass in marine intertidal communities. Marine Ecology Progress Series 185: 21-35.

Simanjuntak, M., 1999. Kondisi kimia oseanografi perairan hutan mangrove. In Romimohtarto, K., A. Djamali, & Soeroyo (eds), Ekosistem perairan sungai Sembilang, Musibanyuasin, Sumatera Selatan. Pusat Penelitian dan Pengembangan Oseanologi-LIPI, Jakarta.

Soeroyo, 1999. Struktur, komposisi, zonasi dan produksi serasah mangrove. In Romimohtarto, K., A. Djamali, & Soeroyo (eds), Ekosistem perairan sungai Sembilang, Musibanyuasin, Sumatera Selatan. Pusat Penelitian dan Pengembangan Oseanologi-LIPI, Jakarta.

Suryaso, 1999. Peranan proses fisika dalam evolusi mangrove. In Romimohtarto, K., A. Djamali, & Soeroyo (eds), Ekosistem perairan sungai Sembilang, Musibanyuasin, Sumatera Selatan. Pusat Penelitian dan Pengembangan Oseanologi-LIPI, Jakarta.

Winberg, G. G. & A. Duncan, 1971. Methods for the estimation of production of aquatic animals. Academic Press, London and New York.

Wolff, J. W., G. A. Duiven, P. Duiven, P. Esselink, A. Gueye, A. Meijboom, G. Moerland & J. Zegers, 1993. Biomass of macrobenthic tidal flat fauna of the Banc d'Arguin, Mauritania. Hydrobiologia 258: 151-163.


Figure 1: Location of the 6 sampling stations at Sembilang peninsula. Source: Sutaryo et al. (2001)

Figure 2: The density of macrobenthic fauna at Sembilang peninsula in 2004 at the 6 sampling stations (Station 1, Tj. Carat; Station 2, S. Bungin; Station 3, Solok Buntu; Station 4, S. Barong Kecil; Station 5, S. Siput; Station 6, S. Dinding) from March to August 2004. [Bar chart; y-axis: individuals/square metre, 0-1000; series: March, May, June, July, August.]


Figure 3: Distribution of main taxonomic groups of macrobenthic animals at Sembilang peninsula in 2004

Average salinity (psu)

Stations                 June 2006 (water)   Oct. 2006 (water)   2005 (sediment)
Station 1: Tj. Carat     13.01               17.86               25.50
Station 2: S. Bungin     12.54               17.34               26.50
Station 3: Solok Buntu   12.33               18.12               26.33
Station 4: S. Barong K   12.43               18.54               26.50
Station 5: S. Siput      12.53               17.66               27.00
Station 6: S. Dinding    12.43               18.34               27.34

Table 1. The salinity of the intertidal area of Sembilang peninsula. Samples were taken at low tide.


TAXA                          Animal density (2004)
                              Individuals/m²   Percentage
Bivalves
1 Anadara granosa             21.54            0.3
2 Hecuba scortum              202.02           3.2
3 Solen sp.                   48.48            0.8
4 Tellina remies              705.72           11.3
5 Tellina timorensis          1737.37          27.9
Total bivalves                2715.15          43.7
Gastropods
1 Clithon oualaniensis        870.03           14.0
2 Littorina melanostoma       16.16            0.3
3 Nassa serta                 145.45           2.3
4 Thais buccinea              70.03            1.1
Total gastropods              910.44           14.6
Crabs
1 Ocypodidae                  148.15           2.4
2 Leucociidae                 2.69             0.0
Total crabs                   150.84           2.4
Worms
1 Nereididae                  630.30           10.1
2 Maldanidae                  595.29           9.6
3 Lumbrineridae               692.26           11.1
4 Capitellidae                258.59           4.2
5 Sternaspidae                439.06           7.1
6 Unidentified worms          355.56           5.7
Total worms                   2443.10          39.3
Total macrobenthic fauna      6219.53          100.0

Table 2. Average density of each macrobenthic taxon at Sembilang peninsula over all stations and months in 2004.

Density of macrobenthic fauna in 2004 (individuals/m²)

Stations                      March       May          June        July        August      Average (station)
Station 1 (Tj. Carat)         204.71      226.26 (c)   183.16 (b)  145.45 (b)  142.76 (b)  180.47 (c)
Station 2 (S. Bungin)         140.07      180.47 (c)   296.30 (a)  929.29 (a)  826.94 (a)  474.61 (a)
Station 3 (Solok Buntu)       220.88      293.60 (bc)  175.08 (b)  167.00 (b)  169.70 (b)  205.25 (c)
Station 4 (S. Barong Kecil)   167.00      414.81 (b)   366.33 (a)  266.67 (b)  215.49 (b)  286.06 (b)
Station 5 (S. Siput)          142.76      568.35 (a)   96.97 (b)   258.59 (b)  274.75 (b)  268.28 (bc)
Station 6 (S. Dinding)        156.23      261.28 (c)   339.39 (a)  177.78 (b)  204.71 (b)  227.88 (bc)
Average (month)               171.94 (c)  324.13 (ab)  242.87 (ac) 324.13 (b)  305.72 (ab)

Table 3: Average density of macrobenthic fauna at the 6 stations in 5 months at Sembilang peninsula. Different letters (in brackets) after density values in the same column (May to August, and the station averages) indicate significantly different values (Duncan test: p < 0.05); the last row is excluded from the columns. Different letters after values in the last row indicate significant differences (p < 0.05) between months; the March values were not significantly different.

Macrob. species               Average AFDW (g/m², monthly) at Sembilang peninsula
                              Station 1   Station 2    Station 3   Station 4    Station 5   Station 6   Average
Bivalves
1 Anadara granosa             0.0431      0.0000       1.1113      12.2685      0.0000      0.2047      2.2713
2 Hecuba scortum              0.0122      0.0098       0.0391      0.0095       0.0289      0.0046      0.0173
3 Solen sp.                   0.1533      0.0000       0.0000      0.0000       0.0000      0.0000      0.0256
4 Tellina remies              0.0210(c)   0.0892(c)    0.5047(b)   0.8736(a)    0.6869(ba)  0.2765(c)   0.4087
5 Tellina timorensis          0.2631(b)   1.8528(a)    0.0302(b)   0.0841(b)    0.1461(b)   0.0815(b)   0.4096
Total bivalves                0.4927(b)   1.9518(b)    1.6852(b)   13.2357(a)   0.8619(b)   0.5673(b)   3.1324
Gastropods
1 Clithon oualaniensis        0.0350(b)   0.1648(a)    0.0114(b)   0.0734(b)    0.0177(b)   0.0430(b)   0.0575
2 Littorina melanostoma       0.0000      0.0004       0.0000      0.0002       0.0000      0.0005      0.0002
3 Nassa serta                 0.0517      0.0637       0.0333      0.0636       0.0337      0.0135      0.0432
4 Thais buccinea              0.0111      0.0151       0.0306      0.0096       0.0038      0.0298      0.0167
Total gastropods              0.0978(bc)  0.2440(abc)  0.0753(a)   0.1467(ab)   0.0551(c)   0.0868(bc)  0.1176
Decapods (crabs)
1 Ocypodidae                  0.0231      0.0551       0.2012      0.0463       0.2535      0.0350      0.1024
2 Leucociidae                 0.0000      0.0116       0.0000      0.0000       0.0493      0.0000      0.0101
Total crabs                   0.0231      0.0667       0.2012      0.0463       0.3028      0.0350      0.1125
Worms
1 Nereididae                  0.1111(a)   0.1182(a)    0.0609(b)   0.0654(b)    0.0685(b)   0.0721(b)   0.0827(b)
2 Maldanidae                  0.0040(d)   0.0103(bc)   0.0060(dc)  0.0250(b)    0.0153(b)   0.0277(a)   0.0147
3 Lumbrineridae               0.0079(c)   0.0230(bc)   0.0288(bc)  0.0553(b)    0.1133(a)   0.0291(bc)  0.0429
4 Capitellidae                0.0027      0.0127       0.0034      0.0071       0.0064      0.0092      0.0069
5 Sternaspidae                0.0124      0.0111       0.0527      0.5013       0.0042      0.0075      0.0982
6 Unidentified worms          0.0085      0.0082       0.0142      0.0089       0.0000      0.0240      0.0106
Total worms                   0.1465(b)   0.1835(b)    0.1660(b)   0.6629(a)    0.2077(b)   0.1698(b)   0.2561
Total macrobenthic animals    0.7601(b)   2.4460(b)    2.1278(b)   14.0915(a)   1.4275(b)   0.8590(b)   3.62

Table 4a: The average biomass in g ash-free dry weight (g/m²) of macrobenthic fauna at different locations at Sembilang peninsula from March to August 2004. Different letters (in brackets) after biomass values in the same row (species) indicate significant differences (p < 0.05).

Macrob. species               Average AFDW (g/m²) at Sembilang peninsula
                              March       May         June        July        August      Average (month)
Bivalves
1 Anadara granosa             0.1706      0.0000      7.7037      3.4820      0.0000      2.2713
2 Hecuba scortum              0.0224      0.0067      0.0269      0.0115      0.0191      0.0173
3 Solen sp.                   0.0000      0.0000      0.1278      0.0000      0.0000      0.0256
4 Tellina remies              0.5746(a)   0.5163(a)   0.4092(a)   0.3420(ba)  0.2011(b)   0.4087
5 Tellina timorensis          0.0629(b)   0.1427(b)   0.1996(b)   0.8470(a)   0.7960(a)   0.4096
Total bivalves                0.8305(b)   0.6657(b)   8.4672(ab)  4.6824(a)   1.0163(b)   3.1324
Gastropods
1 Clithon oualaniensis        0.0224(b)   0.0673(ab)  0.0673(ab)  0.1115(a)   0.0190(b)   0.0575
2 Littorina melanostoma       0.0000      0.0000      0.0009      0.0000      0.0000      0.0002
3 Nassa serta                 0.0853      0.0269      0.0853      0.0075      0.0112      0.0433
4 Thais buccinea              0.0067      0.0125      0.0375      0.0193      0.0072      0.0167
Total gastropods              0.1145(b)   0.1068(b)   0.1911(a)   0.1383(ab)  0.0375(b)   0.1176
Decapods (crabs)
1 Ocypodidae                  0.0584      0.0147      0.0640      0.1446      0.0771      0.0718
2 Leucociidae                 0.0000      0.0571      0.0213      0.0579      0.0675      0.0408
Total crabs                   0.0584      0.0718      0.0853      0.2025      0.1446      0.1126
Worms
1 Nereididae                  0.0224(d)   0.0673(c)   0.0494(c)   0.1020(b)   0.1723(a)   0.0827
2 Maldanidae                  0.0013(c)   0.0269(ab)  0.0269(a)   0.0062(c)   0.0121(b)   0.0147
3 Lumbrineridae               0.0180(b)   0.0629(a)   0.0269(b)   0.0594(a)   0.0474(a)   0.0429
4 Capitellidae                0.0011      0.0028      0.0180      0.0029      0.0099      0.0069
5 Sternaspidae                0.0016      0.4169      0.0314      0.0253      0.0158      0.0982
6 Unidentified worms          0.0131      0.0196      0.0135      0.0018      0.0053      0.0106
Total worms                   0.0575(b)   0.5964(a)   0.1661(b)   0.1976(b)   0.2628(b)   0.2561
Total macrobenthic animals    1.0609      1.4408      8.9097      5.2208      1.4612      3.62

Table 4b: Average biomass in g ash-free dry weight (g/m²) of macrobenthic fauna in different months at Sembilang peninsula from March to August 2004. Different letters (in brackets) after biomass values in the same row (species) indicate significant differences (p < 0.05).


Intelligent traffic light system for AMJ highway

Nur Ilyana Anwar Apandi(1), Puteri Nurul Fareha M. Ahmad Mokhtar(2), Nur Hazahsha Shamsudin, Anis Niza Ramani(3) and Mohd Safirin Karis

(1)Mathematics Lecturer, Faculty of Electrical Engineering, Universiti Teknikal Malaysia Melaka (UTeM), Melaka, Malaysia.

Email: [email protected] (2) Department of Control, Instrumentation & Automation, Faculty of Electrical Engineering

Email: [email protected] (3) Department of Industrial Power, Faculty of Electrical Engineering, UTeM, Malaysia.

Email: [email protected], [email protected], [email protected]

ABSTRACT
A fuzzy logic system to reduce long-term average waiting times is discussed. A fuzzy logic control scheme to regulate the flow of heavy morning traffic approaching a set of three intersections along the Alor Gajah-Melaka-Jasin (AMJ) highway is presented. The signal timing parameters, green phase splits and offset, are adjusted based on the actual traffic approaching each intersection. An adaptive fuzzy logic traffic controller is used to adjust the green phase splits on the approaches of each traffic signal. The system uses a fuzzy rule base for its decision making to coordinate all three intersections simultaneously. It provides the controller with the traffic densities in the lanes and allows a better assessment of changing traffic patterns. As the traffic distribution fluctuates, the fuzzy controller can change the signal lights accordingly.

KEYWORDS
Intelligent traffic light system; Fuzzy logic control system; Multiplan system.

I. INTRODUCTION
The traffic light system on the AMJ (Alor Gajah - Melaka Tengah - Jasin) highway consists of many intersections, each with its own traffic light, and these are certainly busy during peak hours. During these hours, the increased number of vehicles coming from different origins must pass through the sequence of traffic lights, and even a slight delay in the green-light changeover can cause congestion. This paper considers the section of the AMJ highway from the Jalan Duyung-Ayer Keroh junction to Jalan Melaka Sentral, where the main route from rural areas to the city is located [1]. Traffic lights are used to regulate and optimize the flow of converging traffic. The growing number of road users leads to increasing travel times, and traffic in a city is very much affected by its traffic light controllers. The most common problem with current traffic light systems is the long waiting time for vehicles standing at a traffic light before passing on to the next intersection, especially during peak hours. Many of the lights seem to be set for longer red phases, causing vehicles to stand still for unnecessarily long periods of time, which wastes the drivers' time and fuel. Many conventional controllers for traffic light systems have been discussed in previous research [2]-[5] to minimize the waiting times at traffic lights. Wiering et al. [2, 3] used multi-agent Reinforcement Learning (RL) to train traffic signal controllers, with the goal of minimizing the overall waiting time of cars in a city. Fuzzy logic has been introduced and successfully applied to a wide range of automatic control tasks. A fuzzy controller changes the cycle time of the lights depending on the observed accumulation of cars behind the green and red lights and on the current cycle time; it allows the implementation of real-life rules similar to the way humans think [4]. An intelligent traffic light control system using fuzzy logic technology can mimic human intelligence in controlling traffic lights. Fuzzy logic traffic light control is an alternative to conventional traffic light control that can handle a wider range of traffic patterns at an intersection. A fuzzy-logic-controlled traffic light uses sensors that count cars, instead of proximity sensors that only indicate the presence of cars.


A multiagent system was proposed [6] to overcome the lack of interaction between neighbouring intersections. Here, a set of three intersections is coordinated using local fuzzy logic controllers located at each traffic signal. A multiagent system employing a fuzzy knowledge base is then proposed to coordinate the three intersections based on the traffic conditions at all three of them. The intelligent traffic light system for the AMJ highway is designed to keep traffic running smoothly during peak hours: at a constant speed, vehicles can travel smoothly through a sequence of traffic lights without having to stop and wait long before the next leg of the trip.

II. SYSTEM DESIGN

The controller analyzes the vehicle flow through a sequence of traffic lights standing at various distances, and then controls the time duration of these traffic lights while trying to keep the vehicle flow at the maximum speed allowed on the road. Figure 1 shows the traffic flow approaching three intersections. The flow approaching the three intersections is regulated by adjusting the green phase: a fuzzy logic controller adjusts the green phase of the individual traffic signals, and the three intersections are coordinated by adjusting their respective green phases. The restriction used in this system is a maximum detection of 100 vehicles, a figure based on the case study that was carried out.

Figure 1: Traffic flow approaching three intersections

Based on Figure 1, the system is designed to let vehicles flow smoothly from A to D through the three intersections. The fuzzy logic traffic controller uses counter sensors that count the number of vehicles in each lane and load detectors that detect the weight of the vehicles standing on them. The load detector is placed under the road surface behind each traffic light. It detects the weight of the vehicles in the queue area, and the extension of the green light depends on this weight. The sensors located behind the load detector count the number of vehicles coming to the intersection.

This design aims to overcome traffic flow problems during peak hours: when the load detector detects a heavy traffic load and the sensor at the arrival area counts the vehicles crossing it, a longer green-light extension is applied so that vehicles keep travelling smoothly through traffic light A continuously.

III. TRAFFIC FUZZY CONTROLLER

A. Fuzzy Logic Control Design
There are a few steps in designing a fuzzy logic controller. The inputs and outputs of the system must be identified before the controller is designed. The inputs and output of the fuzzy logic controller are listed in Table 1.

Table 1. Inputs and output of the controller

No   Input                          Output
1    Number of queueing vehicles    Extension of green light
2    Number of arriving vehicles
3    Speed of arriving vehicles

B. Fuzzy Inference System Editor
The Fuzzy Inference System (FIS) Editor describes the traffic light controller, for which the Mamdani method is used [7]. The Mamdani (max-min) method is used in the processing or inference engine stage, while the centroid method is used in the defuzzification stage. The traffic light system uses three fuzzy input variables and one fuzzy output variable. The inputs are the quantity of traffic on the queue side, the quantity of traffic on the arrival side, and the speed of the vehicles approaching the three intersections. The output fuzzy variable is the extension of time needed for the green light.

C. Membership Function Editor
In constructing the membership functions, the input and output variables have to be quantized into several fuzzy subsets. For this traffic light control system, four membership functions are used for input 1 (queue): many (MY), medium (MD), small (SM) and few (FW); four membership functions for input 2 (arrival): many (MY), medium (MD), small (SM) and few (FW); three membership functions for input 3 (speed): fast (FS), constant (CNT) and slow (SL); and three membership functions for the output (extension of green light): long (LO), medium (MD) and short (SH).

Figure 2. The membership functions for the fuzzy sets: (a) the number of queueing vehicles, (b) the number of arriving vehicles, (c) the speed of arriving vehicles and (d) the extension of the green light. [Panels omitted.]

The Membership Function Editor, as in Figure 2, shows that the queue and arrival inputs represent numbers of vehicles. All variables use the triangular membership function (trimf). As shown in Figure 2, the queue side ranges from 0 to 70 vehicles and the arrival side from 0 to 30 vehicles, while the vehicle speed is in the range 0 to 80 km/h. The output fuzzy variable is the extension of the green light, with a range of 0 to 60 seconds. These ranges are based on the case study that was carried out.

D. Rule Editor
The fuzzy rule base serves for the insertion of the fuzzy rules used in this design, as follows:

1. If (queue is MD) or (arrival is MY) or (speed is CNT) then (extension_of_green_light is LO)
2. If (queue is FW) or (arrival is MY) or (speed is FS) then (extension_of_green_light is SH)
3. If (queue is SM) or (arrival is SM) or (speed is CNT) then (extension_of_green_light is SH)
4. If (queue is FW) or (arrival is FW) or (speed is FS) then (extension_of_green_light is SH)
5. If (queue is MY) or (arrival is MD) or (speed is CNT) then (extension_of_green_light is LO)
6. If (queue is SM) or (arrival is MD) or (speed is FS) then (extension_of_green_light is MD)
7. If (queue is MY) or (arrival is MY) or (speed is SL) then (extension_of_green_light is LO)
8. If (queue is FW) or (arrival is FW) or (speed is CNT) then (extension_of_green_light is SH)
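A rule base of this shape can be exercised with a small Mamdani-style inference sketch. The membership-function breakpoints below are illustrative assumptions (the paper gives only the variable ranges: queue 0-70, arrival 0-30, speed 0-80 km/h, extension 0-60 s), and the function and set names are ours, not the authors' implementation:

```python
def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b (shoulders allowed)."""
    if x < a or x > c:
        return 0.0
    if x == b:
        return 1.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Fuzzy sets; breakpoints are assumptions spread over each stated range.
QUEUE   = {"FW": (0, 0, 20),  "SM": (10, 25, 40), "MD": (30, 45, 60), "MY": (50, 70, 70)}
ARRIVAL = {"FW": (0, 0, 8),   "SM": (4, 10, 16),  "MD": (12, 20, 28), "MY": (22, 30, 30)}
SPEED   = {"SL": (0, 0, 30),  "CNT": (20, 40, 60), "FS": (50, 80, 80)}
EXT     = {"SH": (0, 0, 20),  "MD": (10, 30, 50), "LO": (40, 60, 60)}

# (queue, arrival, speed) -> extension, as listed in the Rule Editor.
RULES = [("MD", "MY", "CNT", "LO"), ("FW", "MY", "FS", "SH"),
         ("SM", "SM", "CNT", "SH"), ("FW", "FW", "FS", "SH"),
         ("MY", "MD", "CNT", "LO"), ("SM", "MD", "FS", "MD"),
         ("MY", "MY", "SL", "LO"), ("FW", "FW", "CNT", "SH")]

def green_extension(queue, arrival, speed):
    """Mamdani inference: 'or' antecedents via max, max aggregation of clipped
    consequents, centroid defuzzification over a discretized 0-60 s axis."""
    strength = {}
    for q, a, s, out in RULES:
        w = max(tri(queue, *QUEUE[q]),
                tri(arrival, *ARRIVAL[a]),
                tri(speed, *SPEED[s]))
        strength[out] = max(strength.get(out, 0.0), w)
    xs = [i * 0.5 for i in range(121)]  # 0 .. 60 seconds
    ys = [max(min(w, tri(x, *EXT[out])) for out, w in strength.items())
          for x in xs]
    area = sum(ys)
    return sum(x * y for x, y in zip(xs, ys)) / area if area else 0.0
```

Under these assumed sets, a heavy slow queue (70 queueing, 30 arriving, 20 km/h) receives a longer extension than an empty fast approach, which is the qualitative behaviour the rule base encodes.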

IV. RESULTS & DISCUSSION
Based on the case study, at most about 100 vehicles can flow smoothly through an intersection during peak hour. Table 2 below shows the green-light extension data for the current traffic situation.

Table 2: Case study data for the current traffic situation

Number of vehicles passing the intersection   Extension of green light (seconds)
100                                           60
80                                            48
60                                            39
30                                            30

Based on the data, the number of vehicles passing the intersection was not separated into queue and arrival sides. The numbers of vehicles were counted from video recorded at the study area. The real case study in Table 2 shows that a 60-second extension of the green light is needed for 100 vehicles to travel through the intersection. A comparison of the green-light extension between two speed situations was also carried out. Table 3 shows the extension of the green light when the fuzzy logic controller system is applied. The data were taken under constant and varying speed conditions.

Table 3: Extension of green light using the fuzzy logic controller system

                   Input                                   Output
Queue       Arrival     Constant      Various      Extension of      Extension of
(no. of     (no. of     speed*        speed        green light*      green light
vehicles)   vehicles)   (km/h)        (km/h)       (seconds)         (seconds)
70          30          60            20           41.1              53.6
70          0           60            20           36.1              36.1
30          30          60            40           34.3              35
20          30          60            70           30.5              32.1
50          10          60            50           32.1              36.1
40          20          60            50           36.5              34
0           30          60            80           30.1              19.9
70          10          60            50           37.5              36.7
50          20          60            30           38.2              38.4

* The first extension column corresponds to the constant-speed input; the second to the various-speed input.
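The waiting-time reductions discussed in the results follow from comparing the fuzzy extension against the 60-second baseline of the current system (Table 2); a quick check of the arithmetic:

```python
def reduction(baseline_s, fuzzy_s):
    """Percentage reduction of the green-light extension versus the baseline."""
    return round((baseline_s - fuzzy_s) / baseline_s * 100, 1)

# 'Many' condition: 41.1 s (constant 60 km/h) and 53.6 s (20 km/h) vs 60 s.
print(reduction(60, 41.1))  # 31.5
print(reduction(60, 53.6))  # 10.7
# 'Medium' condition: 32.1 s and 36.1 s vs 60 s.
print(reduction(60, 32.1))  # 46.5
print(reduction(60, 36.1))  # 39.8
```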

As shown in Table 3, with the fuzzy logic controller system, for the 'many' condition at a constant speed of 60 km/h the green-light extension is only 41.1 seconds, which reduces the waiting time by 31.5%. For the 'many' condition at a speed of 20 km/h, the extension is 53.6 seconds, a 10.7% reduction in waiting time. For the 'medium' condition, with up to 60 vehicles travelling through the intersection, the extensions are 32.1 seconds and 36.1 seconds, reducing the waiting time by 46.5% and 39.8% for the constant and various speed conditions respectively; in the current case study, 39 seconds is needed for 60 vehicles to travel through the intersection. The extension of the green light depends on the varying vehicle speeds and on the number of vehicles coming on the arrival side.

V. CONCLUSIONS
Fuzzy logic control of the traffic lights is presented to reduce the long-term average waiting times. The fuzzy logic traffic light controller performed better than the Multiplan system owing to its flexibility, which involves the number of vehicles sensed at the incoming junction and the extension of the green light. The traffic flowing through a set of three intersections is optimized by adjusting the signal timing parameter, the extension of the green light; the adjustments are made automatically in response to the traffic situation. A simulation was carried out to compare the performance of the fuzzy logic controller with the Multiplan system. Our extension analysis shows that the flow densities of the fuzzy logic controller system are better than those of the Multiplan system, because the waiting and moving times are shorter, which helps keep the traffic consistent. Based on the results, the proposed fuzzy logic controller system will improve flow on the AMJ highway, especially during peak hours. It will give road users a smooth drive and help avoid traffic problems. The results show that the fuzzy logic controller system provides better performance in terms of total waiting time as well as total moving time. Less waiting time not only reduces fuel consumption but also reduces air and noise pollution.

ACKNOWLEDGEMENTS
The authors wish to express their thanks to Universiti Teknikal Malaysia Melaka (UTeM) for support under UTeM PJP Grant (PJP/2010/FKE (11) S33).

REFERENCES
[1]. http://maps.google.com.my/maps
[2]. Marco Wiering (2003), Intelligent Traffic Light Control, ERCIM News No. 53, April.

[3]. M. Wiering, J. Vreeken, J. V. Veenen and A. Koopman, (2004) Simulation and Optimization of Traffic in a City. IEEE Intelligent Vehicles Symposium, University of Parma, pp. 453-458.

[4]. K. K. Tan, M. Khalid and R. Yusof, (1996) Intelligent Traffic Light Control by Fuzzy Logic. Malaysian Journal of Computer Science, Vol. 9-2, pp. 29-35.

[5]. L. F. Pedraza, C. A. Hernandez and O. Salcedo (2008), Intelligent Model Traffic Light for the City of Bogota. Electronics, Robotics and Automotive Mechanics Conference, pp. 354-359.

[6]. M. Masoud (2006), Multi-agent System for Intelligent Control of Traffic Signals. International Conference on Computational Intelligence for Modelling, Control and Automation.

[7]. E. H. Mamdani (1974), Application of fuzzy algorithms for control of simple dynamic plant, Proc. IEE, Vol. 121, No. 12.


Goodness of Fit Test for Gumbel Distribution Based on Kullback-Leibler Information using Several Different Estimators

S. A. Al-Subh (a), K. Ibrahim (b), M. T. Alodat (c), A. A. Jemain (d)

(a, b, d) School of Mathematical Sciences, Universiti Kebangsaan Malaysia, Selangor, Malaysia
(c) Department of Statistics, Yarmouk University, Irbid, Jordan

Email: [email protected] , [email protected] , [email protected] , [email protected]

ABSTRACT
In this paper, our objective is to test the statistical hypothesis $H_0: F(x) = F_0(x)$ for all $x$ against $H_1: F(x) \ne F_0(x)$ for some $x$, where $F_0(x)$ is a known distribution function. In this study, a goodness of fit test statistic for the Gumbel distribution based on Kullback-Leibler information is studied. The Gumbel parameters are estimated by several methods of estimation, such as maximum likelihood, order statistics, moments, L-moments and LQ-moments. The critical value of the statistic based on the Kullback-Leibler information, under the assumption that $H_0$ is true, is computed using Monte Carlo simulations. The performance of the test under simple random sampling is investigated. Ten different distributions are considered under the alternative hypothesis. Based on Monte Carlo simulations, for all the distributions considered, it is found that the test statistics based on the moment and L-moment estimators have the highest power, except for the Laplace, Student-t(4) and Cauchy distributions.

KEYWORDS
Goodness of fit test; Kullback-Leibler information; entropy; Gumbel distribution; maximum likelihood; order statistics; moments; L-moments.

I. INTRODUCTION
There are many areas of application of the Gumbel distribution, such as environmental sciences, system reliability and hydrology. In hydrology, for example, the Gumbel distribution may be used to represent the distribution of the minimum level of a river in a particular year based on the minimum values of the past few years. It is useful for predicting the occurrence of an extreme earthquake, flood or other natural disaster. The potential applicability of the Gumbel distribution to represent the distribution of minima relates to extreme value theory, which indicates that it is likely to be useful if the distribution of the underlying sample data is of the normal or exponential type.

Many studies have examined goodness of fit tests using Kullback-Leibler information. Kinnison (7) tested the Gumbel distribution using a correlation-coefficient-type statistic. Arizono and Ohta (2) proposed a test of normality based on an estimate of the Kullback-Leibler information. Song (11) presented a general methodology for developing asymptotically distribution-free goodness of fit tests based on the Kullback-Leibler information; he also showed that the tests are omnibus within an extremely large class of nonparametric global alternatives and have good local power. Ibrahim et al. (1) found that the goodness of fit test based on Kullback-Leibler information supports results indicating that the chi-square test is more powerful under RSS than under SRS for some selected order statistics.

In this paper, we introduce a goodness of fit test for the Gumbel distribution based on the Kullback-Leibler information. We estimate the Gumbel parameters by several methods of estimation: maximum likelihood, order statistics, moments and L-moments. According to Hosking (5), L-moments have theoretical advantages over conventional moments: they can characterize a wider range of distributions and, when estimated from a sample, are more robust to the presence of outliers in the data. Also, the parameter estimates obtained from L-moments are sometimes more accurate in small samples than the maximum likelihood estimates. We compute the percentage points and the power of the statistic based on the Kullback-Leibler information using Monte Carlo simulations.

This paper is organized as follows. In Section 2, we define the test statistic. We define the estimators of the Gumbel distribution in Section 3. In Section 4, we give two algorithms to calculate the percentage points and the power function of the test statistic under an alternative distribution. In Section 5, a simulation study is conducted to examine the power of the test statistic, and we state our conclusions in Section 6.

II. TEST STATISTICS

Let $X_1, X_2, \ldots, X_n$ be a random sample from the distribution function $F(x)$ with quantile function $Q(u) = F^{-1}(u)$, and let $X_{1:n} \le X_{2:n} \le \cdots \le X_{n:n}$ denote the corresponding order statistics. We are interested in testing the hypothesis

$$H_0: F(x) = F_0(x) \ \text{for all}\ x \quad \text{vs.} \quad H_1: F(x) \ne F_0(x) \ \text{for some}\ x,$$

where $F_0(x)$ is a Gumbel distribution function of the form

$$F_0(x; \mu, \sigma) = \exp\left[-\exp\left(-\frac{x-\mu}{\sigma}\right)\right], \qquad (1)$$

and its density function is

$$f_0(x; \mu, \sigma) = \frac{1}{\sigma} \exp\left(-\frac{x-\mu}{\sigma}\right) \exp\left[-\exp\left(-\frac{x-\mu}{\sigma}\right)\right], \qquad (2)$$

where $\mu$ is a location parameter and $\sigma$ is a scale parameter, with $x, \mu \in (-\infty, \infty)$ and $\sigma > 0$.

We employ the Kullback-Leibler information, which is given by

$$I(f, f_0) = \int_{-\infty}^{\infty} f(x) \log \frac{f(x)}{f_0(x; \mu, \sigma)} \, dx. \qquad (3)$$

The quantity $I(f, f_0)$ describes the amount of information lost when $f_0(x)$ is used to approximate $f(x)$: the larger the value of $I(f, f_0)$, the greater the disparity between $f(x)$ and $f_0(x)$. It is known that $I(f, f_0) \ge 0$, with equality if and only if $f(x) = f_0(x)$ for almost all $x$. Hence the test can be designed as follows: reject $H_0$ in favour of $H_1$ if the sample estimate of $I(f, f_0)$ is large. Writing

$$I(f, f_0) = \int f(x) \log f(x) \, dx - \int f(x) \log f_0(x; \mu, \sigma) \, dx,$$

Vasicek (12) and Song (11) lead to the estimate

$$-\frac{1}{n} \sum_{i=1}^{n} \log\left[\frac{n}{2m}\left(x_{i+m:n} - x_{i-m:n}\right)\right] - \frac{1}{n} \sum_{i=1}^{n} \log f_0(x_i; \hat{\mu}, \hat{\sigma}), \qquad (4)$$

where $m$, called the window size, is a positive integer with $m \le n/2$, and $x_{i:n} = x_{1:n}$ for $i < 1$, $x_{i:n} = x_{n:n}$ for $i > n$. For the Gumbel distribution, the resulting estimate of $I(f, f_0)$, denoted $I_{mn}$, is given by

$$I_{mn} = -\frac{1}{n} \sum_{i=1}^{n} \log\left[\frac{n}{2m}\left(X_{i+m:n} - X_{i-m:n}\right)\right] + \log \hat{\sigma} + \frac{1}{n} \sum_{i=1}^{n} \left[\frac{x_i - \hat{\mu}}{\hat{\sigma}} + \exp\left(-\frac{x_i - \hat{\mu}}{\hat{\sigma}}\right)\right]. \qquad (5)$$
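The statistic $I_{mn}$ of equation (5) transcribes almost directly into code. The sketch below is ours, in plain Python; it assumes a continuous sample with no ties (so all $m$-spacings are positive) and clamps the shifted order statistics to the sample extremes, as in the definition of the window:

```python
import math

def gumbel_kl_statistic(x, mu, sigma, m):
    """Estimate I_mn, eq. (5): the Vasicek-type Kullback-Leibler statistic
    between the sample and a Gumbel(mu, sigma) density.
    x: sample values; m: window size (positive integer, m <= n/2)."""
    n = len(x)
    xs = sorted(x)
    # Clamp: x_{i:n} = x_{1:n} for i < 1 and x_{i:n} = x_{n:n} for i > n.
    lo = lambda i: xs[max(i, 0)]
    hi = lambda i: xs[min(i, n - 1)]
    # Entropy part: -(1/n) sum log[(n/(2m)) (x_{i+m:n} - x_{i-m:n})].
    ent = -sum(math.log(n * (hi(i + m) - lo(i - m)) / (2 * m))
               for i in range(n)) / n
    # Cross part: log(sigma) + (1/n) sum [(x_i - mu)/sigma + exp(-(x_i - mu)/sigma)].
    z = [(xi - mu) / sigma for xi in xs]
    cross = math.log(sigma) + sum(zi + math.exp(-zi) for zi in z) / n
    return ent + cross
```

As expected for a discrepancy measure, the statistic grows when the hypothesized parameters move away from the values that generated the data.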

Many estimators for $\mu$ and $\sigma$ are considered. The estimators derived are maximum likelihood, order statistics, moments and L-moments. The purpose is to find the estimator which gives the best power.

III. ESTIMATORS OF $\mu$ AND $\sigma$

We will introduce four different types of estimators for $\mu$ and $\sigma$: maximum likelihood ($mle$), moments ($me$), order statistics ($os$) and L-moments ($lm$).

i) Maximum likelihood estimator ($mle$): We denote the $mle$ of $\mu, \sigma$ as $\hat{\mu}, \hat{\sigma}$ respectively. Let $X_1, X_2, \ldots, X_n$ be a random sample from (2). The log-likelihood function is given by

$$l(\mu, \sigma) = -n \log \sigma - \sum_{i=1}^{n} \frac{x_i - \mu}{\sigma} - \sum_{i=1}^{n} \exp\left(-\frac{x_i - \mu}{\sigma}\right). \qquad (6)$$

After taking the derivatives with respect to $\mu$ and $\sigma$ and equating them to 0, we obtain the equations

$$\hat{\sigma} = \bar{x} - \sum_{i=1}^{n} x_i w_i \quad \text{and} \quad \hat{\mu} = -\hat{\sigma} \log \bar{z}, \qquad (7)$$

where

$$z_i = \exp\left(-\frac{x_i}{\hat{\sigma}}\right), \quad \bar{z} = \frac{1}{n} \sum_{i=1}^{n} z_i, \quad \text{and} \quad w_i = \frac{z_i}{n \bar{z}}.$$

ii) Method of moments estimator ($me$): The mean and variance of the Gumbel distribution are given by

$$\mu_X = \mu + \gamma \sigma \quad \text{and} \quad \sigma_X^2 = \frac{\pi^2 \sigma^2}{6}. \qquad (8)$$

The moment estimators of the two parameters are

$$\hat{\sigma} = \frac{\sqrt{6}}{\pi} s \quad \text{and} \quad \hat{\mu} = \bar{x} - \gamma \hat{\sigma}, \qquad (9)$$

where $s$ and $\bar{x}$ are the sample standard deviation and mean respectively, and $\gamma \approx 0.57721566$ is Euler's constant.

iii) Order statistics estimator ($os$): The $p$th quantile of the Gumbel distribution is

$$Q(p; \mu, \sigma) = F^{-1}(p) = \mu - \sigma \log(-\log p), \quad 0 < p < 1. \qquad (10)$$

Taking $p = 0.25$ and $p = 0.75$, the estimators of $\mu$ and $\sigma$ are

$$\hat{\mu} = \frac{0.3266 \, \hat{F}^{-1}(0.75) + 1.2459 \, \hat{F}^{-1}(0.25)}{1.5725} \qquad (11)$$

and

$$\hat{\sigma} = \frac{1}{1.5725} \left[\hat{F}^{-1}(0.75) - \hat{F}^{-1}(0.25)\right], \qquad (12)$$

where $\hat{F}^{-1}(p)$ denotes the sample $p$th quantile.

iv) L-moment estimator ($lm$): The L-moment $\lambda_r$, as explained in Hosking (5), is

$$\lambda_r = \frac{1}{r} \sum_{j=0}^{r-1} (-1)^j \binom{r-1}{j} \mu_{r-j:r}, \quad r = 1, 2, \ldots, \qquad (13)$$

where

$$\mu_{r:n} = \int_{-\infty}^{\infty} x \, f_{r:n}(x) \, dx \qquad (14)$$

is the expectation of the $r$th order statistic in a sample of size $n$. Using $\mu_{r:n}$, it can be shown that for the Gumbel distribution

$$\mu_{2:2} = \mu + 1.2704\,\sigma, \qquad \mu_{1:2} = \mu - 0.1159\,\sigma,$$

and $\mu_{1:1} = \mu + \gamma \sigma$. The L-moment estimators of $\sigma$ and $\mu$ are given by

$$\hat{\sigma} = \frac{\hat{\mu}_{2:2} - \hat{\mu}_{1:2}}{2 \log 2} = \frac{\hat{\mu}_{2:2} - \hat{\mu}_{1:2}}{1.3863} \quad \text{and} \quad \hat{\mu} = \hat{\mu}_{1:1} - \gamma \hat{\sigma}, \qquad (15)$$

respectively, where

$$\hat{\mu}_{1:2} = \frac{2}{n(n-1)} \sum_{i=1}^{n} (n-i) \, x_{i:n}, \qquad \hat{\mu}_{2:2} = \frac{2}{n(n-1)} \sum_{i=1}^{n} (i-1) \, x_{i:n}, \qquad \hat{\mu}_{1:1} = \bar{x}. \qquad (16)$$

1:1ˆ =x, (16) IV. ALGORORITHM FOR POWER COMPARISON Consider 0.05 , random samples of size

12, 18, 24, 36,n and window of size 1, 2,3,4.m Let 1 2, ,..., nX X X be a

random sample from Gumbel with 0, 1, ( ;0,1).oF x We estimate the

parameters and from the sample by maximum likelihood (7), Method of Moment Estimator (9), Order Statistics Estimator (11) and (12), and L-moment Estimator (15) and (16). Calculate the test statistics mnT I using equation (2.5). We repeat the previous steps 40, 000 times to get 1 40,000,..., .T T

Determine the percentage point $d$ of $T$, which is given by the $(1-\alpha)100$th quantile of the distribution of $T_1, \ldots, T_{40000}$.

To calculate the power of $T$ at different alternative distributions: Let $X_1, X_2, \ldots, X_n$ be a random sample from $H$, a distribution under $H_1$. We estimate the parameters $\mu$ and $\sigma$ from the sample by the maximum likelihood (7), method of moments (9), order statistics (11)-(12) and L-moment (15)-(16) estimators. Calculate the test statistic $T = I_{mn}$ in formula (5). We repeat the previous steps 40,000 times to get $T_1, \ldots, T_{40000}$. To calculate the power, determine

$$\widehat{\mathrm{power}}(T \mid H) = \frac{1}{40000}\sum_{t=1}^{40000} I(T_t \geq d),$$

where $I(\cdot)$ stands for the indicator function.
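The steps above can be sketched in Python. This is an illustrative implementation under stated assumptions: it uses only the L-moment estimators (15)-(16), far fewer replications than the paper's 40,000, and a standard normal alternative as an example; all function names are ours. Extreme spacings in the entropy term follow the convention $X_{i:n} = X_{1:n}$ for $i < 1$ and $X_{i:n} = X_{n:n}$ for $i > n$.

```python
import math
import random

def gumbel_lmom_est(x):
    """L-moment estimators of (mu, sigma), following eqs. (15)-(16)."""
    n = len(x)
    xs = sorted(x)
    c = 2.0 / (n * (n - 1))
    m12 = c * sum((n - (i + 1)) * xs[i] for i in range(n))  # mu_hat_{1:2}
    m22 = c * sum(i * xs[i] for i in range(n))              # mu_hat_{2:2}: weights (i-1), 1-based
    sigma = (m22 - m12) / (2.0 * math.log(2.0))
    mu = sum(x) / n - 0.57721566 * sigma                    # mu_hat_{1:1} = sample mean
    return mu, sigma

def I_mn(x, m, mu, sigma):
    """Test statistic of eq. (5): spacings entropy term plus fitted Gumbel term."""
    n = len(x)
    xs = sorted(x)
    ent = sum(
        math.log(n * (xs[min(i + m, n - 1)] - xs[max(i - m, 0)]) / (2.0 * m))
        for i in range(n)
    ) / n
    z = [(xi - mu) / sigma for xi in x]
    return -ent + math.log(sigma) + sum(zi + math.exp(-zi) for zi in z) / n

def rgumbel(rng, n):
    """Gumbel(0, 1) sample by inversion of F(x) = exp(-exp(-x))."""
    return [-math.log(-math.log(rng.random())) for _ in range(n)]

def percentage_point(n, m, reps=2000, alpha=0.05, seed=0):
    """Monte Carlo (1 - alpha)100th quantile d of T = I_mn under H0."""
    rng = random.Random(seed)
    stats = []
    for _ in range(reps):
        x = rgumbel(rng, n)
        stats.append(I_mn(x, m, *gumbel_lmom_est(x)))
    stats.sort()
    return stats[int((1.0 - alpha) * reps) - 1]

def power(sampler, n, m, d, reps=2000, seed=1):
    """Fraction of replications with T >= d, i.e. the estimated power(T | H)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(reps):
        x = sampler(rng, n)
        if I_mn(x, m, *gumbel_lmom_est(x)) >= d:
            hits += 1
    return hits / reps
```

For instance, `d = percentage_point(12, 1)` followed by `power(lambda rng, k: [rng.gauss(0, 1) for _ in range(k)], 12, 1, d)` estimates the power against a standard normal alternative; with the paper's 40,000 replications the critical points should approach the values in Table 1.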

V. SIMULATION RESULTS

The power of each test is approximated based on a Monte Carlo simulation of 40,000 iterations according to the algorithm of Section IV. We compare the efficiency of the tests for different sample sizes $n = 12, 18, 24, 36$, different window sizes $m = 1, 2, 3, 4$, and different alternative distributions: Normal(0, 1), Logistic(0, 0.7), Laplace(0, 1), StudentT(12), StudentT(4), Cauchy(0, 1), Exponential(1), Weibull(1.5, 2), Weibull(2, 0.5) and Lognormal(0.2, 0.4). The simulation results are presented in Tables 1 and 2.

Table 1. Percentage points of the test statistic $I_{mn}$ for sample sizes $n = 12, 18, 24, 36$, window sizes $m = 1, 2, 3, 4$ and $\alpha = 0.05$.

 n   m    mle    me     os     lm
 12  1   .831   .820  1.091   .810
 12  2   .614   .629   .889   .615
 12  3   .561   .613   .882   .447
 12  4   .566   .623   .881   .596
 18  1   .655   .679   .850   .659
 18  2   .460   .495   .692   .475
 18  3   .422   .467   .660   .449
 18  4   .430   .472   .669   .451
 24  1   .571   .672   .706   .580
 24  2   .389   .421   .531   .400
 24  3   .352   .385   .496   .368
 24  4   .346   .386   .585   .364
 36  1   .487   .512   .567   .497
 36  2   .318   .339   .403   .327
 36  3   .269   .295   .362   .283
 36  4   .259   .284   .347   .270

Table 2. Power estimates of the $I_{mn}$ statistic under different alternative distributions, with sample sizes $n = 12, 18, 24, 36$, window sizes $m = 1, 2, 3, 4$ and $\alpha = 0.05$.

            Normal(0, 1)             Logistic(0, 0.7)          Laplace(0, 1)
 n  m |  mle   me    os    lm  |  mle   me    os    lm  |  mle   me    os    lm
12  1 | .098  .113  .114  .114 | .083  .057  .174  .146 | .122  .218  .282  .197
12  2 | .120  .163  .125  .142 | .125  .196  .179  .174 | .156  .260  .304  .244
12  3 | .135  .183  .132  .164 | .149  .214  .184  .205 | .175  .277  .561  .381
12  4 | .164  .218  .133  .201 | .190  .245  .196  .230 | .212  .304  .313  .283
18  1 | .120  .188  .177  .161 | .126  .058  .273  .218 | .196  .333  .449  .330
18  2 | .169  .237  .179  .208 | .180  .293  .279  .271 | .250  .389  .472  .365
18  3 | .173  .268  .188  .229 | .194  .314  .296  .284 | .251  .397  .474  .385
18  4 | .183  .289  .198  .255 | .225  .333  .296  .302 | .247  .400  .477  .378
24  1 | .134  .255  .183  .211 | .157  .063  .309  .291 | .284  .436  .531  .432
24  2 | .189  .317  .202  .265 | .231  .515  .331  .345 | .357  .484  .557  .485
24  3 | .232  .354  .211  .308 | .248  .407  .343  .371 | .361  .500  .567  .490
24  4 | .237  .364  .224  .325 | .260  .416  .350  .388 | .346  .492  .565  .490
36  1 | .182  .355  .225  .296 | .247  .070  .427  .406 | .431  .595  .702  .586
36  2 | .254  .446  .288  .391 | .341  .533  .453  .482 | .525  .657  .723  .651
36  3 | .319  .509  .301  .425 | .376  .547  .467  .520 | .565  .665  .729  .673
36  4 | .348  .512  .316  .459 | .408  .563  .485  .535 | .548  .669  .732  .663

Table 2. (Cont.)
            StudentT(12)             StudentT(4)               Cauchy(0, 1)
 n  m |  mle   me    os    lm  |  mle   me    os    lm  |  mle   me    os    lm
12  1 | .095  .139  .147  .125 | .102  .197  .241  .184 | .467  .580  .671  .563
12  2 | .123  .181  .157  .161 | .128  .241  .249  .226 | .498  .584  .687  .575
12  3 | .145  .207  .154  .189 | .161  .263  .262  .248 | .468  .579  .684  .559
12  4 | .181  .135  .171  .218 | .205  .281  .252  .272 | .452  .549  .683  .537
18  1 | .109  .220  .232  .192 | .150  .310  .363  .287 | .679  .755  .842  .742
18  2 | .160  .269  .230  .248 | .208  .351  .376  .337 | .711  .776  .854  .785
18  3 | .197  .295  .245  .265 | .251  .367  .381  .343 | .690  .751  .850  .733
18  4 | .210  .320  .251  .285 | .256  .368  .378  .352 | .616  .709  .849  .686
24  1 | .146  .291  .260  .260 | .201  .389  .429  .368 | .808  .854  .924  .849
24  2 | .213  .357  .274  .314 | .281  .435  .452  .419 | .849  .875  .931  .870
24  3 | .238  .386  .287  .336 | .325  .463  .456  .434 | .836  .863  .923  .856
24  4 | .254  .401  .300  .385 | .325  .464  .463  .436 | .793  .841  .925  .686
36  1 | .220  .409  .356  .357 | .310  .523  .574  .519 | .929  .949  .982  .949
36  2 | .311  .494  .392  .434 | .398  .591  .601  .576 | .955  .965  .984  .961
36  3 | .355  .535  .406  .483 | .470  .605  .605  .589 | .959  .962  .985  .960
36  4 | .372  .552  .404  .492 | .482  .602  .603  .589 | .794  .955  .984  .953

Table 2. (Cont.)
            Exponential(1)           Weibull(1.5, 2)           Weibull(2, 0.5)
 n  m |  mle   me    os    lm  |  mle   me    os    lm  |  mle   me    os    lm
12  1 | .133  .177  .073  .171 | .223  .278  .120  .263 | .120  .071  .042  .058
12  2 | .168  .219  .065  .218 | .318  .323  .124  .332 | .062  .061  .044  .062
12  3 | .198  .190  .061  .200 | .292  .301  .155  .308 | .065  .058  .041  .062
12  4 | .141  .137  .039  .151 | .186  .223  .064  .243 | .065  .061  .042  .066
18  1 | .246  .270  .098  .264 | .382  .424  .166  .410 | .167  .071  .045  .063
18  2 | .322  .327  .103  .359 | .500  .518  .189  .530 | .069  .063  .047  .066
18  3 | .354  .353  .097  .371 | .519  .528  .390  .530 | .071  .068  .044  .071
18  4 | .310  .309  .082  .327 | .445  .482  .136  .494 | .077  .070  .050  .069
24  1 | .324  .365  .198  .363 | .491  .557  .333  .553 | .198  .065  .049  .066
24  2 | .471  .463  .217  .483 | .649  .675  .412  .686 | .077  .073  .052  .073
24  3 | .485  .481  .218  .503 | .675  .697  .475  .719 | .087  .075  .053  .080
24  4 | .470  .475  .216  .488 | .638  .674  .394  .690 | .087  .078  .047  .085
36  1 | .468  .538  .364  .520 | .700  .756  .584  .733 | .312  .070  .051  .068
36  2 | .659  .673  .452  .692 | .845  .871  .672  .866 | .089  .081  .053  .085
36  3 | .710  .721  .487  .737 | .877  .897  .717  .905 | .102  .089  .064  .091
36  4 | .698  .736  .489  .735 | .885  .901  .701  .903 | .104  .103  .059  .101

Table 2. (Cont.)
            Lognormal(0.2, 0.4)
 n  m |  mle   me    os    lm
12  1 | .085  .091  .051  .089
12  2 | .101  .100  .056  .089
12  3 | .096  .085  .047  .091
12  4 | .084  .067  .039  .067
18  1 | .099  .125  .045  .104
18  2 | .128  .146  .045  .137
18  3 | .127  .189  .041  .135
18  4 | .106  .118  .038  .117
24  1 | .108  .157  .070  .125
24  2 | .164  .187  .070  .173
24  3 | .170  .189  .066  .181
24  4 | .162  .179  .068  .162
36  1 | .144  .210  .098  .167
36  2 | .210  .265  .100  .227
36  3 | .226  .269  .106  .258
36  4 | .215  .278  .093  .256

From the above tables, we make the following remarks:
1. The percentage points for the test decrease as the sample size n increases.
2. The power increases as the sample size n increases.
3. The power increases as the window size m increases.
4. In the case of the moment estimator, the test has the highest power for the Normal, Logistic, StudentT(12) and Exponential distributions.
5. In the case of the L-moment estimator, the test has the highest power for the Weibull(1.5, 2) and Lognormal distributions.
6. In the case of the maximum likelihood estimator, the test has the highest power for the Weibull(2, 0.5) distribution.
7. Over all the estimators considered, the test has the lowest power for Weibull(2, 0.5) and the highest power for Cauchy.

VI. CONCLUSION

Accurate estimation of the parameters of the Gumbel distribution is important in statistical analysis. In this paper, we have introduced a goodness-of-fit test statistic for the Gumbel distribution based on the Kullback-Leibler information measure. We considered ten different distributions under the alternative hypothesis. It is found that the test statistics based on the moment and order statistics estimators have the highest power, except for the Weibull and Lognormal distributions. In the case of Cauchy, the test is found to have the highest power for all estimators, while the test has the lowest power for Weibull(2, 0.5). The theory developed could be extended easily to other distributions. Also, we can apply RSS, extreme RSS and median RSS to this test statistic.

ACKNOWLEDGEMENTS

The authors would like to thank Universiti Kebangsaan Malaysia for supporting this work under research grant UKM-ST-06-FRGS0096-2009.


REFERENCES

[1]. Ibrahim, K., M.T. Alodat, A.A. Jemain & S.A. Al-Subh (2009). "Chi-square Test for Goodness of Fit Using Ranked Set Sampling," Tenth Islamic Countries Conference on Statistical Sciences, December 20-23, 2009, Cairo, Egypt.

[2]. Arizono, I. and Ohta, H. (1989). "A Test for Normality Based on Kullback-Leibler Information," Amer. Statistician 43, pp. 20-22.

[3]. D'Agostino, R. and Stephens, M. (1986). Goodness-of-Fit Techniques, Marcel Dekker Inc., New York.

[4]. David, H.A. and H.N. Nagaraja (2003). Order Statistics, 3rd Edn., Wiley, New Jersey, ISBN: 0-471-38926-9.

[5]. Hosking, J.R.M. (1990). "L-moments: Analysis and estimation of distributions using linear combinations of order statistics," J. Roy. Statist. Soc. B 52, pp. 105-124.

[6]. Gumbel, E.J. (1958). Statistics of Extremes, Columbia University Press, New York, ISBN: 0486436047, 377 pp.

[7]. Kinnison, R. (1989). "Correlation coefficient goodness of fit test for the extreme value distribution," Amer. Statistician 43, pp. 98-100.

[8]. Kullback, S. (1959). Information Theory and Statistics, Wiley, New York.

[9]. Kullback, S. and Leibler, R.A. (1951). "On information and sufficiency," Ann. Math. Statist. 22, pp. 79-86.

[10]. Paulino, P.R., Humberto, V.H. and Jose, V.A. (2009). "A Goodness-of-Fit Test for the Gumbel Distribution Based on Kullback-Leibler Information," Comm. in Statistics - Theory and Methods 38, pp. 842-855.

[11]. Song, K.S. (2002). "Goodness-of-fit tests based on Kullback-Leibler discrimination information," IEEE Trans. Inform. Theory 48, pp. 1103-1117.

[12]. Vasicek, O. (1976). "A test for normality based on sample entropy," J. Roy. Statist. Soc. B 38, pp. 54-59.


Impact of shrimp pond development on biomass of intertidal macrobenthic fauna: a case study at Sembilang, South Sumatra, Indonesia

Agus Purwoko1, 2, Arwinsyah Arka2 and Wim J. Wolff1

1 Dept. of Benthic Marine Ecology and Evolution, University of Groningen, P.O. Box 14, 9750 AA Haren, The

Netherlands E-mail addresses: [email protected]

2 Dept. of Biology, Sriwijaya University, Palembang, Indonesia

E-mail addresses: [email protected]

ABSTRACT

The macrobenthic animal biomass of the intertidal area of the Sembilang peninsula, South Sumatra, Indonesia, was studied in 1996 and from 2004 to 2006. In 1996 (monthly, August-October), 25 core samples were taken at each of two sampling plots; in 2004 (March-August), 21 core samples were taken at each of six sampling stations; in 2005 (March-August), 30 core samples were taken at each of six sampling stations; and in 2006 (June and October), 30 core samples were taken at each of six sampling stations. Macrobenthic fauna was identified at the lowest taxonomic level possible and counted. Biomass was measured as ash-free dry weight (AFDW). Shrimp pond development matched the long-term (1996 to 2006) decline in macrozoobenthic biomass significantly (ANOVA: p < 0.05). Three years of observation (2004-2006) show that sites close to the shrimp ponds (pond sites) had significantly lower macrobenthic biomass than sites further from the ponds (non-pond sites). Mangrove organic matter probably was not the main source of food for the macrobenthic fauna, and pond effluent might not be responsible for the declining macrozoobenthic biomass. It seems that disturbance of the macrozoobenthos habitat by fisheries coincided with the decreasing biomass.

Key words: impact, shrimp pond, biomass, macrobenthos, intertidal fauna, Indonesia, Sumatra, Sembilang

INTRODUCTION

Around 1990, Indonesian mangroves occupied 4.25 million ha, which represented about 20% of the world's mangroves. The largest part was found in Irian Jaya (2.94 million ha), and 1.31 million ha occurred in more populated areas such as Java, Sumatra, and Kalimantan. South Sumatra counted 195,000 ha (Choong et al. 1990), with 77,500 ha (Danielsen & Verheugt 1990) in our study area, Sembilang National Park.

Annual mangrove litter production at some nearby Indo-Pacific locations averaged 1107 g dry wt m-2 at Hong Kong (Lee 1989, 1989a) and 497 g dry wt m-2 at Sri Lanka (Amarasinghe & Balasubramaniam 1992), and Bunt (1995) found on average 1752 g dry wt m-2 annually at Papua New Guinea. Annual litter production at Sembilang, Indonesia, was 1376 g dry wt m-2 (Soeroyo 1999). It has been demonstrated that benthic animals, mostly crustaceans, feed on this resource (Odum and Heald, 1972, 1975). In mangrove forests, crabs (Lee 1989), gastropods and polychaetes (Alongi & Sesakumar 1992) feed on mangrove leaves. Lee (1989) found that crabs consumed 50% of mangrove leaves. Part of the net canopy production of mangrove forest is exported. Odum and Heald (1972, 1975) suggested that this may amount to about 50% of the production, and Alongi and Sesakumar (1992) and Alongi et al. (2004) found an export of 40% and 25%, respectively. This exported material may enhance the benthic fauna of adjoining aquatic habitats such as tidal flats. However, in a review, Lee (1995) questioned whether benthic macrofaunal biomass had a positive relationship with the availability of detritus originating from mangrove leaves. Indeed, Lee (1999) experimentally found no relationship between mangrove leaves and the biomass of macrobenthic animals. Also, Bouillon et al. (2002) and Hsieh et al. (2002) demonstrated with stable isotopes that many benthic animals in mangrove forests derived their food from sources other than mangrove leaves and litter. Conversion of mangrove forest to ponds involves cutting the mangrove forest, dredging and dumping, and discharging water into estuaries.


Those activities may have impacts on the benthic biomass of the tidal flat. Dredging and dumping are important activities which lead to turbidity and enhanced sediment deposition, and the growth of bivalves may be impaired by elevated concentrations of suspended matter (Karel 1999). A three-year study on the effects of reclamation activities on the macrobenthic fauna of a Singapore coastal area found that macrobenthos abundance significantly decreased over time close to the reclaimed area but increased again with distance from this area (Lu et al. 2002). Kenny and Rees (1994) reported that a dredged site had not fully recovered after 7 months, but Lu and Wu (2000) found that the benthic community fully recovered in less than 15 months. Pond effluents lead to reduced water quality by increasing suspended particles (Jackson et al. 2003), total nitrogen and phosphate (Jackson et al. 2004; Lemonnier & Faninoz 2006), salinity and acidity (Cowan et al. 1999), biochemical oxygen demand (Trott & Alongi 2000; Cowan et al. 1999) and dissolved organic matter (Lemonnier & Faninoz 2006), by causing eutrophication (McKinnon et al. 2002; Alongi et al. 2000) and by adding antibiotics (Le & Munekage 2004; Le et al. 2005). Phytoplankton may benefit from the increasing nutrients; nutrients may also improve the shoot biomass of mangrove seedlings (Rajendran & Kathiresan 1996). Two families from Lampung (southern South Sumatra) Province established 4 ha of shrimp ponds in the Sembilang National Park in 1996. The local government caught the investors, but from 1997 to 2002 the government did not pay attention to developments at Sembilang. About 2,000 families established 4,000 ha of (traditional) shrimp ponds from 1998 to 2002. They left some mangrove area as a green belt, and their ponds are connected to some small rivers (Fig. 1). Purwoko (1996) described the abundance and biomass of macrobenthic fauna at Sembilang before this large-scale pond construction; his site of observation was about 10 km from the first established pond.
During and after the pond development at the Sembilang peninsula, no study on the impacts of shrimp pond development on the coastal ecosystems was conducted at Sembilang. Our study objective was to determine the positive or negative impact of pond construction and operation on the macrobenthic fauna of the tidal flats at that site. We did so by comparing the biomass in the pre-pond-construction period (1996) and the post-pond-construction period (2004-2006), and by comparing locations close to the shrimp ponds with sites at a further distance after pond construction.

METHODS

Area description

The Sembilang peninsula (Fig. 1: 1°59' to 2°15' S and 104°45' to 104°53' E) is located on the eastern coast of South Sumatra Province, Indonesia, and is part of the Sembilang National Park. It is influenced by the Musi River, the Musibanyuasin River and some smaller tributaries. Originally, all land was covered by mangrove and swampy forest. The most seaward belt of the mangrove vegetation is still formed by Sonneratia and Avicennia where the soil is sandy; however, where the soil is muddy, Rhizophora occurs. Nypa palms appear where the soil is highly influenced by fresh water. In the mangrove area at the Sembilang peninsula, approximately 200 to 500 meters from the beach, there are 4,000 ha of shrimp ponds. The coastline of the area directly faces Bangka Strait and the South China Sea. Two villages are nearby: Sungsang village at the southern part and Sembilang village at the northern end of the peninsula.

Description of sampling stations

For our 2004-2006 study we selected 6 sampling stations (Fig. 1) with varied characteristics:
1. Station 1: Tj. (Tanjung) Carat (S: 2°16.324' & E: 104°55.117', by Gekko 202 Garmin GPS). Located in the estuary of the Musi river; the soil is sandy, and the station is strongly influenced by fresh water.
2. Station 2: S. (Sungai) Bungin (S: 2°14.955' & E: 104°50.714'). Located in the estuary of the Musibanyuasin river. The surface layer of the sediment is soft and high in organic matter, and has a thickness of more than 1 meter. On the sediment, young Avicennia occur.
3. Station 3: Solok Buntu (S: 2°11.063' & E: 104°54.764'). Located near the estuary of the Musibanyuasin river. The surface layer of the soil is muddy and reaches 40 cm depth. Mangrove vegetation consists mainly of Avicennia. At sea nearby, there are some pole houses.
4. Station 4: S. Barong Kecil (S: 2°9.872' & E: 104°54.587'). The site is near the minor estuary of S. Barong Kecil. The soil is muddy and the depth of this layer reaches 50 cm.
5. Station 5: S. Siput (S: 2°5.824' & E: 104°54.102'). The soil is muddy and the soft layer reaches 40 cm; the adjacent mangrove vegetation is mostly Avicennia; close to the site, young trees grow. At this station we also made our 1996 observations.
6. Station 6: S. Dinding (S: 2°1.924' & E: 104°50.573'). The sediment is muddy and contains many dead shells. The depth of the soft layer reaches 40 cm. The site directly faces the South China Sea, so it is exposed to high waves.

Stations 1, 2 and 6 were further removed from the shrimp ponds; stations 3, 4 and 5 were relatively close to ponds, or at least close to estuaries of small rivers that served as outlets of pond discharges (Fig. 1). So, our after-construction comparison has been made by comparing stations 1, 2 and 6 on the one hand with stations 3, 4 and 5 on the other hand.

Sampling procedure

In 1996, two plots (100 * 100 meters) were established at station 5. Plot x was situated at the upper half of the shore and plot y was set at the lower shore. Sampling started in August and took place three times; the periods between two samplings were four weeks. There were 25 core samples collected at random, guided by random numbers, at each plot. Samples were taken with a circular corer. The core diameter was 15 cm and the sampling depth 30 cm. The core samples were sieved directly in the field (5 mm mesh) and the animals were collected from the sieve by hand. The macrobenthic fauna collected was preserved in 70% alcohol mixed with 3% formalin. Sampling activities took place during low tide at Sungsang.
The fauna in the samples was identified to the lowest taxonomic level possible (Dharma 1988, 1992) and counted, and the ash-free dry weight (AFDW) (Winberg & Duncan 1971) was measured at the laboratory at Palembang. In 2004-2006, each station consisted of 3 parallel line transects, each consisting of 7 sampling points in 2004 and 10 sampling points in 2005 and 2006. The distance between the line transects varied from 5 to 10 m; the distance between the sampling points was randomly chosen and varied between 3 and 20 m. The first core sample of each transect was taken at the lowest water level, and the second till the seventh were taken at increasing distance towards the mangrove. The circular core diameter and sampling depth were the same as for the 1996 core samples. At each sampling point, 1 core sample was taken. The core samples were sieved directly in the field through a double layer of 1 mm sieves and the animals were collected from the sieve by hand. The macrobenthic fauna collected was preserved in 70% alcohol mixed with 3% formalin. Sampling activities took place during low tide at Sungsang in March, May, June, July and August 2004; March, April, May, June, July and August 2005; and June and October 2006. The animals in the samples were identified to the lowest taxonomic level possible (Dharma 1988, 1992; computer program: Poly Key) and counted, and the ash-free dry weight (AFDW) was measured as for the 1996 samples. In order to compare years, data sets were taken from the same month of each year. In this case, the data sets for June in 2004, 2005 and 2006 were comparable. The statistical calculations, such as analysis of variance (ANOVA), the Duncan test and the T-test, were carried out with the Statistics 7 program.

RESULTS

Comparison of pre-construction and post-construction period

Table 1, from Purwoko (1996), gives the biomasses found at the two plots at station 5 in 1996, that is, in the pre-construction period of the ponds at the Sembilang peninsula. The higher biomass of macrobenthic fauna found at plots x and y in October 1996 differs significantly from the biomasses in August and September 1996.
However, the biomass of macrobenthic animals at plot x was not significantly different from that at plot y from August to October 1996 (no letters shown). In 1996, 17 taxa were identified and the average density at the plots was 690 individuals m-2. Table 2 presents the biomass data per station averaged over the period 2004-2006. Generally, the total biomasses of macrobenthic fauna at stations 1, 3 and 5 were significantly lower (ANOVA: p < 0.05) than those at stations 2, 4 and 6. The same differences occurred for the total biomass of bivalves. The highest biomass of polychaetes was found at station 1, differing significantly from the other stations. However, each family of polychaetes had a different pattern of distribution of biomass compared to the total biomass of all polychaetes. The biomass of gastropods at stations 1, 3, 4 and 5 was significantly lower than the biomass values at stations 2 and 6.

Comparison of sites in the post-construction period

Table 2 gives the biomass data for the period after pond construction. It is, however, based on averages for different months and different numbers of months. To better compare the different years, Table 3 compares the biomass values obtained at each station in June of each year. We found in general low biomass values, but with significant differences between years. The stations also differ significantly from each other (ANOVA: p < 0.05; not shown in the table) per year. Table 4 gives biomasses per species averaged over all stations in June of each year. At the Sembilang peninsula, the total biomass of macrobenthic fauna was significantly lower in 2005 compared to 2004 and 2006. There were also significant differences among the stations from 2004 to 2006 (Table 3). For example, at station 5, the biomass of macrobenthic animals in 2006 was significantly higher than in 2004 and 2005. Bivalves, gastropods, crabs and some polychaete species were significantly different between 2004 and 2005. Biomasses of bivalves and gastropods reached significantly higher values in 2004, but crabs (Ocypodidae) in 2006. Most polychaetes (Nereididae, Lumbrineridae, Sternaspidae and unidentified worms) showed higher biomass values in 2006 (Table 4).
Generally, from 2004 to 2006 the pond sites had significantly lower biomass of macrobenthic animals compared to the non-pond sites. All species of macrobenthic fauna which showed a significant difference had lower biomass at the pond sites (Table 5).

DISCUSSION

Pre-construction compared to post-construction period of ponds at Sembilang peninsula

In August and October 1996, the biomass of macrobenthic animals at station 5 was significantly higher than in August and October 2004-2006. This biomass of macrobenthic fauna derived mainly from Anadara, followed by Tellina remies. Biomasses at plot x and plot y were not significantly different in 1996, meaning that the macrobenthic fauna biomass was not influenced by the time of inundation and beach steepness (Jiang & Li 1995; Ricciardi & Bourget 1999). Due to the larger sieving mesh in 1996 (5 mm) compared to the sieving mesh applied in 2004 to 2006 (1 mm), the biomass values in 1996 are supposed to have been even higher than the biomasses actually recorded. Small polychaetes apparently passed through the sieve, since we did not find any Maldanidae, Capitellidae and Sternaspidae in 1996. These animals contributed on average approximately 2 percent to the total biomass value in 2004-2006 (Table 4). In 1996, the macrobenthic fauna was larger in number of taxa and density than in 2004, 2005 and 2006. Species found in 1996 and absent in 2004, 2005 and 2006 were Cerithium sp., Clypeomorus sp., Vexilles sp., and Natica sp. We conclude that macrobenthic animal biomass at Sembilang decreased by nearly an order of magnitude between 1996, before the construction of shrimp ponds, and 2004-2006, well after the pond construction.

Comparison of stations in the after-pond-construction period

A three-year observation (2004 to 2006) showed a significantly lower value of macrobenthic animal biomass at stations 1, 3 and 5 compared to stations 2, 4 and 6 (Table 2). This detailed evaluation does not support our hypothesis that the established ponds caused the decrease of the biomass of macrobenthic fauna. However, based on a comparison of non-pond and pond sites, we could show that the biomass of macrobenthic fauna at pond sites on average was significantly lower than the biomass value at non-pond sites (Table 5). Also, all macrobenthic species which showed significant differences had lower biomass at the pond sites (Table 5). This supports our hypothesis that the biomass of macrobenthic animals declined significantly due to pond establishment. Mangrove litter fall was believed to be a source of organic matter for macrobenthic animals both in the mangrove area and on the tidal flat nearby (Odum & Heald 1975; Lui et al. 2002; Bosire et al. 2004). In a review, however, Lee (1995) questioned whether benthic biomass had a positive relationship with the availability of detritus originating from mangrove leaves. This is supported by the results of Bouillon et al. (2002) and Hsieh et al. (2002), who demonstrated by means of stable isotopes that most benthic organisms do not feed on mangrove-derived organic matter. This is in line with the conclusion that macrozoobenthic biomass does not depend only on food availability (mangrove litter fall) but also on food quality. Mangrove litter contains tannin, which is not palatable for most macrozoobenthic species (Alongi & Sesakumar 1992), and Lee (1999) found a negative relationship between soluble tannin and macrozoobenthic biomass in the sediment. These conclusions affect the credibility of our hypothesis and force us to look into other explanations. Discharge water contains nutrients, organic carbon and suspended solids, and might contain antibiotics. Options to prevent and reduce those pollutants entering the environment are to provide settling ponds (Jackson et al. 2003; Gautier et al. 2001; Lee 1993; Halide et al. 2003), ponds with vegetation (Sansanayuth et al. 1996; Halide et al. 2003) and mangrove forest as a filter (Nielsen et al. 2003). The last of these options has been put into practice at Sembilang.
However, whenever pond effluent was discharged directly into creeks, water quality recovered by 1 km downstream and within 1-2 months after discharge ceased (Trott & Alongi 2000). Trott et al. (2004) found that discharge of pond waste carbon (C) and nitrogen (N) during shrimp harvest periods did not cause eutrophication further downstream.

Hence, it is unlikely that at Sembilang waste water from shrimp ponds has been responsible for a difference between the pond sites and the non-pond area. We therefore consider the possibility that fishing may have influenced the biomass of the intertidal macrobenthic fauna. Fishing at Sembilang aims at shrimp, fish and bivalves. The number of active fishing vessels amounts to about 1970, of which 40-60 are active daily in our tidal flat area (Djamali and Sutomo, 1999). Before 1999, fishermen used gill nets to catch shrimps for export abroad. They hardly harvested shrimps near the intertidal areas (Djamali & Sutomo 1999). After 1999, almost all fishing boats have been equipped with modified mini trawls to catch shrimps (Purwoko, unpubl. obs.). These boats operate in the intertidal areas during high tide. The trawls move over and through the muddy top layer of the sediment, supposedly disturbing the macrozoobenthic habitats. Since shrimp vessels operate at all stations, they may have contributed to the general decrease of biomass over time. Comparing June 2004, 2005 and 2006 (Table 3), a significantly lower average biomass was found at the Sembilang peninsula in June 2005. Further, stations 1, 4 and 5 had lower biomass of macrobenthic fauna, too. This was probably due to disturbance of the habitat of macrobenthic animals by a fishery for bivalves. Starting in March 2005, fishermen harvested Anadara at the intertidal flats of the Sembilang peninsula. At station 2, they collected T. timorensis instead of Anadara. In May 2005, fishermen harvested Anadara at station 4 and in June 2005 they collected Anadara at station 5; after June, they moved to intertidal areas further north (station 6). It is assumed that a month after massive disturbance, the biomass of macrobenthic animals could not have recovered completely, but it could recover after 2 months, as shown by the significantly higher biomass of macrobenthic animals at station 3, which was harvested in March and April 2005.
At least three fishing boats operated by shoveling the sediment and collecting Anadara every day at high tide. However, no fishing boats operated on station 1. Kenny and Rees (1994) reported that the benthic community at a dredged site had not fully recovered after 7 months and Lu and Wu (2000) supported by

Page 258: Proc.CIAM2010[1]

Proceeding of Conf. on Industrial and Appl. Math., Bandung-Indonesia 2010

251

demonstrating that the benthic community fully recovered in less than 15 months. The mesh width of the fishermen's nets was 2 cm, and they collected Anadara larger than 2 cm (Djamali 1999). In addition, a group of on average 20 people manually collected Anadara in intertidal areas at low tide. In 1999/2000 fishermen caught 130 tons wet weight of Anadara, equivalent to about 3 tons afdw, at intertidal and subtidal areas (Djamali & Sutomo 1999); for 2005 we estimate a catch of 32 tons wet weight (0.8 ton afdw) per month from stations 3 to 6 in the period March to June (Purwoko, unpubl. obs.). This can be compared to the average biomass of Anadara at our stations (0.898 g afdw m-2; Table 2), which implies a standing stock for the Sembilang tidal flats of about 24 tons afdw. Further, Tellina remies showed no significant difference in biomass from 2004 to 2006. Field observations showed that T. remies migrated in August 2005: we observed about 15 to 20 individuals m-2 floating at the water surface during high tide, moving north. Massive harvesting of Anadara probably triggered the migration of T. remies. This behaviour might explain why T. remies showed no significant differences over time. We concluded that converting mangrove to ponds coincided with decreasing biomass of macrobenthic animals from 1996 to 2006, and that from 2004 to 2006 the nearby presence of shrimp ponds coincided with low biomass of the macrobenthic fauna. However, loss of mangrove organic matter and discharge of shrimp pond water might not be responsible for the decreasing macrozoobenthic biomass; habitat disturbance by fisheries is probably the most reasonable explanation for the decline.

Acknowledgements
AP would like to thank Sriwijaya University, Palembang, Indonesia, for sponsoring his study, Ake and Usup for their assistance during sampling, Ismail for acting as speedboat driver, and Edi P. for assisting with the AFDW measurements. Finally, special thanks are addressed to Dr. Harry ten Hove, who guided AP in identifying the macrobenthic fauna.
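The standing-stock and catch figures discussed above can be checked with a short back-of-the-envelope computation. The sketch below is illustrative only: the implied tidal-flat area (roughly 27 km2) and the wet-weight-to-afdw ratio (about 0.025) are back-calculated from the quoted numbers, not stated in the text.

```python
# Figures quoted in the discussion (Table 2; Purwoko, unpubl. obs.).
mean_anadara_biomass = 0.898   # g afdw per m2, station average
standing_stock_ton = 24.0      # tons afdw, stated for the Sembilang tidal flats

# Implied tidal-flat area (not stated in the text; inferred here):
# 24e6 g / 0.898 g m-2, converted from m2 to km2.
area_km2 = standing_stock_ton * 1e6 / mean_anadara_biomass / 1e6
print(f"implied tidal-flat area: {area_km2:.1f} km2")  # ~26.7 km2

# Wet-weight-to-afdw conversion implied by the quoted catches.
print(f"1999/2000 ratio: {3 / 130:.3f}")   # 130 tons wet ~ 3 tons afdw
print(f"2005 ratio:      {0.8 / 32:.3f}")  # 32 tons wet ~ 0.8 ton afdw
```

Both catch estimates are consistent with a wet-weight-to-afdw conversion of roughly 2-2.5%, which supports comparing them directly to the afdw standing stock.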

REFERENCES

Alongi, D. M., D. J. Johnston and T. T. Xuan (2000). "Carbon and nitrogen budgets in shrimp ponds of extensive mixed shrimp-mangrove forestry farms in the Mekong delta, Vietnam." Aquaculture Research 31(4): 387-399.

Alongi, D. M., A. Sasekumar, V. C. Chong, J. Pfitzner, L. A. Trott, F. Tirendi, P. Dixon and G. J. Brunskill (2004). "Sediment accumulation and organic material flux in a managed mangrove ecosystem: estimates of land-ocean-atmosphere exchange in peninsular Malaysia." Marine Geology (special issue: Material Exchange Between the Upper Continental Shelf and Mangrove Fringed Coasts with Special Reference to the N. Amazon-Guianas Coast) 208(2-4): 383-402.

Alongi, D. M. and A. Sasekumar (1992). Benthic communities. Coastal and estuarine studies: Tropical mangrove ecosystems. A. I. Robertson and D. M. Alongi. Washington, DC, American Geophysical Union: 137-171.

Amarasinghe, M. D. and S. Balasubramaniam (1992). "Net primary productivity of two mangrove forest stands on the northwestern coast of Sri Lanka." Hydrobiologia 247(1): 37-47.

Bosire, J. O., F. Dahdouh-Guebas, J. G. Kairo, S. Cannicci and N. Koedam (2004). "Spatial variations in macrobenthic fauna recolonisation in a tropical mangrove bay." Biodiversity and Conservation 13(6): 1059-1074.

Bouillon, S., N. Koedam, A. Raman and F. Dehairs (2002). "Primary producers sustaining macro-invertebrate communities in intertidal mangrove forests." Oecologia 130(3): 441-448.

Bunt, J. S. (1995). "Continental scale patterns in mangrove litter fall." Hydrobiologia 295(1): 135-140.

Choong, E. T., R. S. Wirakusumah and S. S. Achmadi (1990). "Mangrove forest resources in Indonesia." Forest Ecology and Management 33-34: 45-57.

Cowan, V. J., K. Lorenzen and S. J. Funge-Smith (1999). "Impact of culture intensity and monsoon season on water quality in Thai commercial shrimp ponds." Aquaculture Research 30(2): 123-133.

Danielsen, F. and W. Verheugt (1990). Integrating conservation and land-use planning in the coastal region of South Sumatra. Bogor, Indonesia, Asian Wetland Bureau-PHPA: 172 pp.

Dharma, B. (1988). Siput dan kerang Indonesia (Indonesian Shells). Jakarta, PT. Sarana Graha: 111 pp.

--- (1992). Siput dan kerang Indonesia (Indonesian Shells II). Wiesbaden, Germany, Verlag Christa Hemmen: 135 pp.

Djamali, A. (1999). Sebaran dan Komposisi Fauna Bentik. Ekosistem perairan sungai Sembilang, Musibanyuasin, Sumatera Selatan. K. Romimohtarto, A. Djamali and Soeroyo. Jakarta, Pusat Penelitian dan Pengembangan Oseanologi-LIPI: 49-54.

Djamali, A. and Sutomo (1999). Sosial Ekonomi Budaya dan Perikanan. Ekosistem Perairan Sungai Sembilang, Musibanyuasin, Sumatera Selatan. K. Romimohtarto, A. Djamali and Soeroyo. Jakarta, Pusat Penelitian dan Pengembangan Oseanologi-LIPI: 67-76.

Gautier, D., J. Amador and N. Federico (2001). "The use of mangrove wetland as a biofilter to treat shrimp pond effluents: preliminary results of an experiment on the Caribbean coast of Colombia." Aquaculture Research 32(10): 787-799.

Halide, H., P. V. Ridd, E. L. Peterson and D. Foster (2003). "Assessing sediment removal capacity of vegetated and non-vegetated settling ponds in prawn farms." Aquacultural Engineering 27(4): 295-314.

Hsieh, H. L., C. P. Chen, Y. G. Chen and H. H. Yang (2002). "Diversity of benthic organic matter flows through polychaetes and crabs in a mangrove estuary: δ13C and δ34S signals." Marine Ecology Progress Series 227: 145-155.

Jackson, C. J., N. Preston, M. A. Burford and P. J. Thompson (2003). "Managing the development of sustainable shrimp farming in Australia: the role of sedimentation ponds in treatment of farm discharge water." Aquaculture (special issue: Management of Aquaculture Effluents) 226(1-4): 23-34.

Jackson, C. J., N. Preston and P. J. Thompson (2004). "Intake and discharge nutrient loads at three intensive shrimp farms." Aquaculture Research 35: 1053-1061.

Jiang, J. X. and R. G. Li (1995). "An ecological study on the Mollusca in mangrove areas in the estuary of the Jiulong River." Hydrobiologia 295(1): 213-220.

Karel, E. (1999). "Ecological effects of dumping of dredged sediments; options for management." Journal of Coastal Conservation 5: 69-80.

Kenny, A. J. and H. L. Rees (1994). "The effects of marine gravel extraction on the macrobenthos: Early post-dredging recolonization." Marine Pollution Bulletin 28(7): 442-447.

Le, T. X. and Y. Munekage (2004). "Residues of selected antibiotics in water and mud from shrimp ponds in mangrove areas in Viet Nam." Marine Pollution Bulletin 49(11-12): 922-929.

Le, T. X., Y. Munekage and S.-i. Kato (2005). "Antibiotic resistance in bacteria from shrimp farming in mangrove areas." Science of The Total Environment 349(1-3): 95-105.

Lee, S. Y. (1989). "The importance of sesarminae crabs Chiromanthes spp. and inundation frequency on mangrove (Kandelia candel (L.) Druce) leaf litter turnover in a Hong Kong tidal shrimp pond." Journal of Experimental Marine Biology and Ecology 131(1): 23-43.

--- (1989a). "Litter production and turnover of the mangrove Kandelia candel (L.) Druce in a Hong Kong tidal shrimp pond." Estuarine, Coastal and Shelf Science 29(1): 75-87.

--- (1993). "The management of traditional tidal ponds for aquaculture and wildlife conservation in Southeast Asia: Problems and prospects." Biological Conservation 63(2): 113-118.

--- (1995). "Mangrove outwelling: a review." Hydrobiologia 295(1): 203-212.

--- (1999). "The Effect of Mangrove Leaf Litter Enrichment on Macrobenthic Colonization of Defaunated Sandy Substrates." Estuarine, Coastal and Shelf Science 49(5): 703-712.

Lemonnier, H. and S. Faninoz (2006). "Effect of water exchange on effluent and sediment characteristics and on partial nitrogen budget in semi-intensive shrimp ponds in New Caledonia." Aquaculture Research 37: 938-948.

Lu, L., P. L. B. Goh and M. L. Chou (2002). "Effects of coastal reclamation on riverine macrobenthic infauna (Sungei Punggol) in Singapore." Journal of Aquatic Ecosystem Stress and Recovery 9: 127-135.

Lu, L. and R. S. S. Wu (2000). "An experimental study on recolonization and succession of marine macrobenthos in defaunated sediment." Marine Biology 136(2): 291-302.


Lui, T. H., S. Y. Lee and Y. Sadovy (2002). "Macrobenthos of a tidal impoundment at the Mai Po Marshes Nature Reserve, Hong Kong." Hydrobiologia 468(1): 193-211.

McKinnon, A. D., L. A. Trott, D. M. Alongi and A. Davidson (2002). "Water column production and nutrient characteristics in mangrove creeks receiving shrimp farm effluent." Aquaculture Research 33(1): 55-73.

Nielsen, O. I., E. Kristensen and D. J. Macintosh (2003). "Impact of fiddler crabs (Uca spp.) on rates and pathways of benthic mineralization in deposited mangrove shrimp pond waste." Journal of Experimental Marine Biology and Ecology 289(1): 59-81.

Odum, W. E. and E. J. Heald (1975). Mangrove forests and aquatic productivity. In: A. D. Hasler (ed.), Coupling of Land and Water Systems. Berlin, Springer: 129-136.

Purwoko, A. (1996). The carrying capacity of intertidal area: a feeding habitat at the stopover of migratory birds in the Sembilang peninsula, South Sumatra, Indonesia. Environmental Science. Wageningen, Wageningen Agricultural University: 45 pp.

Rajendran, N. and K. Kathiresan (1996). "Effect of effluent from a shrimp pond on shoot biomass of mangrove seedlings." Aquaculture Research 27: 745-747.

Ricciardi, A. and E. Bourget (1999). "Global patterns of macroinvertebrate biomass in marine intertidal communities." Marine Ecology Progress Series 185: 21-35.

Sansanayuth, P., A. Phadungchep, S. Ngammontha, S. Ngdngam, P. Sukasem, H. Hoshino and M. S. Tabucanon (1996). "Shrimp pond effluent: Pollution problems and treatment by constructed wetlands." Water Science and Technology 34(11): 93-98.

Soeroyo (1999). Struktur, komposisi, zonasi dan produksi serasah mangrove. Ekosistem perairan sungai Sembilang, Musibanyuasin, Sumatera Selatan. K. Romimohtarto, Djamali, A. and Soeroyo. Jakarta, Pusat Penelitian dan Pengembangan Oseanologi-LIPI: 55-66.

Trott, L. A. and D. M. Alongi (2000). "The Impact of Shrimp Pond Effluent on Water Quality and Phytoplankton Biomass in a Tropical Mangrove Estuary." Marine Pollution Bulletin 40(11): 947-951.

Trott, L. A., A. D. McKinnon, D. M. Alongi, A. Davidson and M. A. Burford (2004). "Carbon and nitrogen processes in a mangrove creek receiving shrimp farm effluent." Estuarine, Coastal and Shelf Science 59(2): 197-207.

Winberg, G. G. and A. Duncan (1971). Methods for the estimation of production of aquatic animals. London and New York, Academic Press: 174 pp.


AFDW of macrobenthic fauna at station 5 (S. Siput) in 1996

Treatment   August      Sept        Oct
plot x      16.33 (a)   14.71 (a)   37.74 (b)
plot y      20.44 (a)   15.32 (a)   33.38 (b)

Table 1. Average biomass in g ash-free dry weight (g m-2) of macrobenthic fauna at plots x and y of station 5 at Sembilang peninsula in different months. Different letters (in brackets) after biomass values in the same row indicate significant differences (ANOVA: p < 0.05).

WAFDW (gram/m2), Location: Sembilang peninsula in 2004-2006

                         Station 1   Station 2   Station 3   Station 4   Station 5   Station 6
Bivalves
1 Anadara granosa        0.0144      0.0000      0.3704      4.1691      0.0000      0.8413
2 Hecuba scortum         0.0041      0.0358      0.1490      0.0085      0.2251      0.1136
3 Solen sp.              0.0588 (b)  0.0774 (b)  0.0425 (b)  0.0377 (b)  0.0039 (a)  0.0410 (b)
4 Tellina remies         0.0184 (a)  0.0801 (a)  0.2533 (a)  0.5155 (a)  0.7711 (a)  8.5615 (b)
5 Tellina timorensis     0.1112 (a)  4.3512 (b)  0.2700 (a)  0.1933 (a)  0.1908 (a)  0.3260 (a)
Total Bivalves           0.2069 (a)  4.5445 (b)  1.0852 (a)  4.9241 (b)  1.1908 (a)  9.8833 (c)
Gastropods
1 Clinthon oualaniensis  0.0117      0.0576      0.0039      0.0250      0.0059      0.0153
2 Littorina melanostoma  0.0000      0.0001      0.0000      0.0001      0.0000      0.0002
3 Nassa serta            0.0176      0.0956      0.0128      0.0221      0.0164      0.0907
4 Thais buccinea         0.0037      0.0066      0.0118      0.0032      0.0016      0.0115
Total Gastropods         0.0329 (a)  0.1599 (b)  0.0286 (a)  0.0504 (a)  0.0240 (a)  0.1176 (c)
Decapods (Crabs)
1 Ocypodidae             0.0373 (a)  0.0184 (a)  0.1885 (a)  0.0412 (a)  1.5910 (b)  0.0271 (a)
2 Leucociidae            0.0799      0.0174      0.0346      0.0193      0.0189      0.0076
Total Decapods           0.1171 (a)  0.0358 (a)  0.2231 (a)  0.0605 (a)  1.6099 (b)  0.0347 (a)
Polychaetes (Worms)
1 Nereididae             1.6800 (b)  0.3009 (a)  0.1455 (a)  0.3151 (a)  0.1716 (a)  0.2186 (a)
2 Maldanidae             0.0039 (a)  0.0445 (b)  0.0069 (a)  0.0158 (a)  0.0176 (a)  0.0301 (a)
3 Lumbrineridae          0.0065 (a)  0.0154 (b)  0.0196 (b)  0.0288 (b)  0.0456 (c)  0.0416 (c)
4 Capitellidae           0.0009      0.0143      0.0114      0.0026      0.0022      0.0036
5 Sternaspidae           0.0041 (a)  0.0394 (a)  0.0839 (b)  0.2346 (b)  0.0344 (a)  0.0469 (a)
6 Unidentified worms     0.0189 (a)  0.0370 (a)  0.0051 (a)  0.0041 (a)  0 (a)       0.2609 (b)
Total Polychaetes        1.7145 (c)  0.4034 (a)  0.2724 (a)  0.6009 (b)  0.2715 (a)  0.6017 (b)
Total macrob. animals    2.0715 (a)  5.1436 (b)  1.6093 (a)  5.6359 (b)  3.0962 (a)  10.6374 (c)

Table 2. Average biomass in g ash-free dry weight (g m-2) of macrobenthic fauna taxa at six sampling stations at Sembilang peninsula in 2004 to 2006. Different letters (in brackets) after biomass values in the same row (species of macrobenthic animals) indicate significant differences (ANOVA: p < 0.05).

AFDW of macrobenthic fauna at Sembilang peninsula in June, 2004 to 2006, over 6 stations

Location    2004        2005        2006
Station 1   2.23 (a)    3.45 (a)    5.34 (b)
Station 2   0.62 (a)    1.48 (b)    2.24 (b)
Station 3   1.35 (a)    1.90 (b)    1.42 (a)
Station 4   47.45 (b)   1.05 (a)    1.08 (a)
Station 5   0.87 (a)    0.49 (a)    9.95 (b)
Station 6   0.94 (a)    2.82 (b)    2.83 (b)

Table 3. Average biomass in g ash-free dry weight (g m-2) of macrobenthic fauna at different stations at Sembilang peninsula in June 2004 to 2006. Different letters (in brackets) after biomass values in the same row indicate significant differences between years (ANOVA: p < 0.05).



WAFDW (gram/m2), Sembilang peninsula in June, 2004 to 2006

                         2004        2005        2006
Bivalves
1 Anadara granosa        7.7037      0.1163      0.0000
2 Hecuba scortum         0.0269      0.1131      0.0007
3 Solen sp.              0.1278 (a)  0.1854 (b)  0.0725 (a)
4 Tellina remies         0.4092 (b)  0.3645 (a)  0.0283 (a)
5 Tellina timorensis     0.1996      0.1688      0.1012
Total Bivalves           8.4672 (b)  0.9481 (a)  0.2027 (a)
Gastropods
1 Clinthon oualaniensis  0.0673      0.0000      0.0000
2 Littorina melanostoma  0.0009      0.0000      0.0000
3 Nassa serta            0.0853 (b)  0.0019 (a)  0.0134 (ab)
4 Thais buccinea         0.0375      0.0000      0.0000
Total Gastropods         0.1911 (b)  0.0019 (a)  0.0134 (a)
Decapods (Crabs)
1 Ocypodidae             0.0640 (a)  0.2985 (a)  1.6395 (b)
2 Leucociidae            0.0000      0.0000      0.0000
Total Decapods           0.0640 (a)  0.2985 (a)  1.6395 (b)
Worms
1 Nereididae             0.0494 (a)  0.5594 (a)  1.4588 (b)
2 Maldanidae             0.0269      0.0094      0.0126
3 Lumbrineridae          0.0269 (a)  0.0251 (a)  0.0430 (b)
4 Capitellidae           0.0180      0.0003      0.0000
5 Sternaspidae           0.0314 (a)  0.0220 (a)  0.1571 (b)
6 Unidentified worms     0.0135 (ab) 0.0011 (a)  0.2828 (b)
Total Worms              0.1661      0.6174      1.9543
Total macrob. animals    8.91 (b)    1.87 (a)    3.81 (b)

Table 4. Average biomass in g ash-free dry weight (g m-2) of macrobenthic fauna taxa at Sembilang peninsula in June 2004 to 2006. Different letters (in brackets) after biomass values in the same row indicate significant differences (ANOVA: p < 0.05).

WAFDW (gram/m2), Location: Sembilang peninsula in 2004-2006

                         Non-pond sites  Pond sites
Bivalves
1 Anadara granosa        1.8032          1.5132
2 Hecuba scortum         0.1920          0.1275
3 Solen sp.              0.1325 **       0.0280
4 Tellina remies         3.4328 **       0.5133
5 Tellina timorensis     3.3016 **       0.2180
Total Bivalves           8.8621 **       2.4001
Gastropods
1 Clinthon oualaniensis  0.0628          0.0116
2 Littorina melanostoma  0.0002          0.0000
3 Nassa serta            0.1228 **       0.0171
4 Thais buccinea         0.0163          0.0055
Total Gastropods         0.2020 **       0.0343
Decapods (Crabs)
1 Ocypodidae             0.6530 **       0.6069
2 Leucociidae            0.0917 **       0.0242
Total Decapods           0.7447 **       0.6312
Worms
1 Nereididae             1.6042 **       0.2107
2 Maldanidae             0.0558 **       0.0134
3 Lumbrineridae          0.0598          0.0314
4 Capitellidae           0.0168          0.0054
5 Sternaspidae           0.1623 **       0.1176
6 Unidentified worms     0.1273 **       0.0031
Total Worms              1.9941 **       0.3816
Total macrob. animals    11.8030 **      3.4471


Table 5. Average biomass in g ash-free dry weight (g m-2) of macrobenthic fauna taxa at non-pond sites (stations 1, 2 and 6) and pond sites (stations 3, 4 and 5) at Sembilang peninsula in 2004 to 2006. ** after biomass values in the same row (species of macrobenthic animals) indicates significant differences (ANOVA: p < 0.05).

Figure 1. Sampling stations along the coast of Sembilang peninsula. Each station consists of three transects, each with seven (2004) or ten (2005 & 2006) sampling points; plots x and y (100 x 100 m, 1996) at station 5.