Partial Differential Equations Numerical Modeling Using Dynamic Neural Networks

Rita Fuentes¹, Alexander Poznyak¹, Isaac Chairez², and Tatyana Poznyak³

¹ CINVESTAV-IPN, Automatic Control Department, México, D.F. [email protected]
² UPIBI-IPN, Bioelectronics Department, México, D.F.
³ ESIQIE-IPN, Postgraduate Division, México, D.F.

Abstract. In this paper a strategy based on differential neural networks (DNN) for the identification of the parameters in a mathematical model described by partial differential equations is proposed. The identification problem is reduced to finding an exact expression for the weights dynamics using the DNN properties. The adaptive laws for the weights ensure the convergence of the DNN trajectories to the PDE states. To investigate the qualitative behavior of the suggested methodology, the non-parametric modeling problem for a distributed parameter plant is analyzed: the anaerobic digestion system.

Keywords: Neural Networks, Adaptive Identification, Distributed Parameter Systems, Partial Differential Equations, Practical Stability.

1 Introduction

Many problems in science and engineering are reduced to a set of partial differential equations (PDEs) through a process of mathematical modeling. For instance, linear second-order parabolic PDEs appear in time-dependent diffusion problems, such as the transient flow of heat conduction. These equations define a state in both space and time. It is not easy to obtain their exact solutions, so numerical methods must be resorted to. Many techniques are available, such as the finite difference method (FDM) [1] and the finite element method (FEM) [2]. These numerical techniques require a large number of iterations and process the data serially. Besides, all those methods are well defined only if the PDE structure is perfectly known. In fact, most suitable numerical solutions can be achieved only when the PDE is linear. Nevertheless, there are few methods to solve or approximate the PDE solution when its structure (even in the linear case) is uncertain.

It is well known that Radial Basis Function Neural Networks (RBFNN) and Multi-Layer Perceptrons (MLP) are universal approximators [3]: any continuous function defined on a compact set can be approximated to arbitrary accuracy by such neural networks [4]. Since the solutions to the PDEs of interest are known to be uniformly continuous and the viable sets that arise in safety problems are often compact, neural networks seem like ideal candidates for approximating viability problems.

C. Alippi et al. (Eds.): ICANN 2009, Part II, LNCS 5769, pp. 552–562, 2009. © Springer-Verlag Berlin Heidelberg 2009


There are, however, relatively few works that exploit neural networks to solve PDEs. [5] proposed a method for solving PDEs defined on orthogonal boxes that relies upon an approximation composed of an MLP network added to the boundary condition. The method is illustrated by solving a variety of model problems and comparing the results with the exact solutions. Convergence is a difficult issue in this case. The results of [6] show that their feed-forward neural network (FFNN) method for solving an elliptic PDE in 2D required 1000 iterations for convergence. Another method for solving PDEs is presented in [7], based on a multiquadric RBFNN. The proposed procedure showed high accuracy of the solution. It is important to note, however, that the accuracy of the RBFNN solution is heavily dependent on parameters such as the "width" of the basis functions, for which there is no systematic method of determining values. These approaches to approximating PDE solutions by neural networks have also been applied to some problems in control theory. In [8] such a method is used to solve a class of first-order partial differential equations that arise in input-to-state linearizable control systems. The solution of the PDE, together with its Lie derivatives, yields the change of coordinates required for feedback linearization. In [9] a method similar to that of [5] has been successfully applied to steady-state heat transfer problems.

Within the NN framework, the differential neural network (DNN) methodology avoids many problems related to global extremum searching [10]. If the continuous mathematical model of the considered process is incomplete or partially known, the DNN methodology provides an effective instrument to analyze a wide range of problems in control theory, such as identification, state estimation, trajectory tracking, etc. [11]. Most real systems are difficult to control because of the lack of information on their internal structure and/or their current state trajectories. The paper is organized as follows: in Section II, we introduce a model given by a partial differential equation with unknown structure and formulate the problem. In Section III the DNN identifier is proposed. Simulation results are presented in Section IV to show its performance. Section V finishes the paper with some particular conclusions.

2 Distributed Parameters Plant and NN Approximation

Let us consider the partial differential equation

ut (x, t) = f (u (x, t) , ux (x, t) , uxx (x, t)) (1)

for x ∈ (0, 1), t > 0, with boundary conditions:

u(0, t) = u0, u(x, 0) = c, ux(0, t) = 0 (2)
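For instance, the one-dimensional heat equation belongs to the class (1): choosing $f(u, u_x, u_{xx}) = \alpha u_{xx}$ with a constant diffusivity $\alpha > 0$ gives $u_t(x,t) = \alpha u_{xx}(x,t)$, and a reaction–diffusion law $f = \alpha u_{xx} + g(u)$ fits the same structure.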

Let $f(x,t)$ be piecewise continuous in $t$. Suppose that the uncertain nonlinear function $f(x,t)$ satisfies the Lipschitz condition $\|f(t,x) - f(t,y)\| \le L \|x - y\|$ for all $x, y \in B := \{x \in \mathbb{R}^n \mid \|x - x_0\| \le r\}$ and all $t \in [t_0, t_1]$, where $L$ is a constant and $\|f\|^2 = (f, f)$, just to ensure that there exists some $\delta > 0$ such that the state equation $\dot{x} = f(x, t)$ with $x(t_0) = x_0$ has a unique solution over $[t_0, t_0 + \delta]$ [12]. The norm used in (8) below is given in a Sobolev space.

Definition (Sobolev space [13], $H^{m,p}(\Omega)$). Let $\Omega$ be an open set in $\mathbb{R}^n$ and let $u \in C^m(\Omega)$. Define a norm on $u$ by

$$\|u\|_{m,p} := \sum_{0 \le |\alpha| \le m} \left( \int_{\Omega} |D^{\alpha} u(x)|^{p} \, dx \right)^{1/p}, \qquad 1 \le p < \infty$$

This is the Sobolev norm, in which the integration is in the Lebesgue sense. The completion of $\{u \in C^m(\Omega) : \|u\|_{m,p} < \infty\}$ with respect to $\|\cdot\|_{m,p}$ is the Sobolev space $H^{m,p}(\Omega)$. For $p = 2$ the Sobolev space is a Hilbert space.

Now let us consider a function $h_0(\cdot)$ in $H^{m,2}(\Omega)$. By [14], $h_0(\cdot)$ can be rewritten as

$$h_0(x) = \sum_{i} \sum_{j} \theta_{ij} \Psi_{ij}(x), \qquad \theta_{ij} = \int_{-\infty}^{+\infty} h_0(x)\, \Psi_{ij}(x)\, dx, \quad \forall\, i, j \in \mathbb{Z}$$

The last expression is called a function series expansion of $h_0(x)$. Based on this series expansion, a neural network takes the following mathematical structure

$$\hat{h}_0(x, \theta) := \sum_{i=M_1}^{M_2} \sum_{j=N_1}^{N_2} \theta_{ij} \Psi_{ij}(x) = \Theta^{\top} W(x)$$

which can be used to approximate any nonlinear function $h_0(x) \in S$ with an adequate selection of the integers $M_1, M_2, N_1, N_2 \in \mathbb{Z}$, where

$$\Theta = [\theta_{M_1 N_1}, \ldots, \theta_{M_1 N_2}, \ldots, \theta_{M_2 N_1}, \ldots, \theta_{M_2 N_2}]^{\top}$$
$$W(x) = [\Psi_{M_1 N_1}, \ldots, \Psi_{M_1 N_2}, \ldots, \Psi_{M_2 N_1}, \ldots, \Psi_{M_2 N_2}]^{\top}$$

Following the Stone–Weierstrass theorem [15], if $\varepsilon(M_1, M_2, N_1, N_2) = h_0(x) - \hat{h}_0(x, \theta)$ is the NN approximation error, then for any arbitrary positive constant $\bar{\varepsilon}$ there are constants $M_1, M_2, N_1, N_2 \in \mathbb{Z}$ such that

$$\|\varepsilon(M_1, M_2, N_1, N_2)\|^2 \le \bar{\varepsilon} \qquad (3)$$

for all $x \in X \subset \mathbb{R}$. In the case when $x \in X^n \subset \mathbb{R}^n$ ($x := [x_1, x_2, \ldots, x_n]^{\top}$), the argument of $\Psi_{ij}(x)$ should be modified to $(x, c) = c^{\top} x = \sum_{i=1}^{n} x_i c_i$, with $c \in X^n$ a weighting constant vector.

Remark 1. The appropriate selection of the functions $\Psi_{ij}(\cdot)$ is an important task in constructing an adequate approximation of nonlinear functions. Many functions reported in the literature [16] have achieved remarkable results in approximating unknown nonlinear functions. Which basis is most suitable in a practical application depends on the particular design specifications.
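As a concrete illustration of the expansion $\hat{h}_0(x, \theta) = \Theta^{\top} W(x)$, the following Python sketch fits $\Theta$ by least squares using a Gaussian radial basis for $W(x)$. This is only one admissible choice of $\Psi_{ij}$; the target function, the number of basis elements, and the width are illustrative assumptions, not values from the paper:

```python
import numpy as np

# One admissible basis choice: Gaussian bumps Psi_j(x) = exp(-(x - c_j)^2 / (2 w^2));
# the centers c_j and the width w are design parameters in the sense of Remark 1.
def basis_matrix(x, centers, width):
    # Rows: samples of x; columns: the entries of W(x).
    return np.exp(-(x[:, None] - centers[None, :]) ** 2 / (2.0 * width ** 2))

h0 = lambda x: np.sin(2 * np.pi * x) * np.exp(-x)   # illustrative target function

x = np.linspace(0.0, 1.0, 200)        # samples of the spatial domain (0, 1)
centers = np.linspace(0.0, 1.0, 15)   # 15 basis functions
W = basis_matrix(x, centers, width=0.08)

# Least-squares fit of Theta so that Theta^T W(x) approximates h0(x)
theta, *_ = np.linalg.lstsq(W, h0(x), rcond=None)

print("max approximation error:", np.max(np.abs(W @ theta - h0(x))))
```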


Remark 2. The parameters $M_1, M_2, N_1, N_2$ in the neural network design are closely related to the approximation quality $\varepsilon(M_1, M_2, N_1, N_2)$. Nevertheless, the NN has been demonstrated to be effective in reproducing uncertain nonlinear functions which satisfy the Lipschitz condition.

Following the methodology of differential neural networks, we assume that there exists a set of parameters $W_1^{i,*} \in \mathbb{R}^{s_1}$, $W_2^{i,*} \in \mathbb{R}^{s_2}$, $W_3^{i,*} \in \mathbb{R}^{s_3}$ such that

$$u^i - \int_0^t \left( A^i u^i + [W_1^{i,*}]^{\top} \sigma(x^i) + [W_2^{i,*}]^{\top} \varphi(x^i)\, u^{i-1} - [W_3^{i,*}]^{\top} \gamma(x^i)\, u^{i-2} - \tilde{f}^i \right) d\tau = 0$$

where the functions $\sigma(u^i) \in \mathbb{R}^{s_1}$, $\varphi(u^i) \in \mathbb{R}^{s_2}$, $\gamma(u^i) \in \mathbb{R}^{s_3}$ obey the following sector conditions:

$$\|\sigma(v^i) - \sigma(\bar{v}^i)\|^2 \le L_{\sigma} \|v^i - \bar{v}^i\|^2, \qquad \|\varphi(v^i) - \varphi(\bar{v}^i)\|^2 \le L_{\varphi} \|v^i - \bar{v}^i\|^2$$
$$\|\gamma(v^i) - \gamma(\bar{v}^i)\|^2 \le L_{\gamma} \|v^i - \bar{v}^i\|^2$$

and are bounded in $U$, i.e., $\|\sigma(\cdot)\|^2 \le \sigma^+$, $\|\varphi(\cdot)\|^2 \le \varphi^+$, $\|\gamma(\cdot)\|^2 \le \gamma^+$. Note that, since one requires $\partial u(x,t)/\partial t$ in (1), the NN weights are selected to be time varying. However, here $\sigma(x^i)$, $\varphi(x^i)$, $\gamma(x^i)$ are NN activation vectors, not a set of eigenfunctions. That is, the NN approximation property significantly simplifies the specification of $\sigma(\cdot)$, $\varphi(\cdot)$, $\gamma(\cdot)$.
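A concrete choice satisfying these conditions (one admissible option; the paper does not fix the activations) is a componentwise saturated sigmoid such as $\sigma_k(v) = a_k \tanh(b_k v)$, which is globally Lipschitz and uniformly bounded, so the sector constants $L_{\sigma}$ and the bound $\sigma^+$ follow directly from the slopes $a_k b_k$ and the amplitudes $a_k$.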

The terms $\tilde{f}^i$ are called the modeling errors of each NN applied in the approximation of the PDE, that is,

$$\tilde{f}^i := f^i - \left( A^i u^i + [W_1^{i,*}]^{\top} \sigma(x^i) + [W_2^{i,*}]^{\top} \varphi(x^i)\, u^{i-1} + [W_3^{i,*}]^{\top} \gamma(x^i)\, u^{i-2} \right)$$

– Assumption 1. The modeling errors $\tilde{f}^i$ satisfy the following group of inequalities:

$$\|\tilde{f}^i\|^2 \le f_0^i \|u^i\|^2 + f_1^i \|u^{i-1}\|^2 + f_2^i \|u^{i-2}\|^2 + f_3^i \qquad (4)$$

and

$$\|\bar{f}^i\|^2 \le F_0^i \|u^i\|^2 + F_1^i \|u^{i-1}\|^2 + F_2^i \|u^{i-2}\|^2 + F_3^i \|\Delta^i(t,x)\|^2 + F_4^i, \quad \text{where } \|\bar{f}^i\|^2 := \|\tilde{f}^i + A^i \Delta^i(t,x)\|^2$$

– Assumption 2. The modeling error gradient is bounded as follows:

$$\|\nabla_x \tilde{f}^i\|^2 \le f_4^i \|u^i\|^2 + f_5^i \|u^{i-1}\|^2 + f_6^i \|u^{i-2}\|^2 + f_7^i$$

yielding

$$\|\nabla_x \bar{f}^i\|^2 := \|\nabla_x \tilde{f}^i + A^i \Delta_x^i(t,x)\|^2 \le F_5^i \|u^i\|^2 + F_6^i \|u^{i-1}\|^2 + F_7^i \|u^{i-2}\|^2 + F_8^i \|\Delta_x^i(t,x)\|^2 + F_9^i$$

where $\Delta^i(t,x) := u^i(x,t) - \hat{u}^i(x,t)$ and $f_j^i$, $F_k^i$ ($j = \overline{0,7}$, $k = \overline{0,9}$) are constants.


3 DNN Neural Identifier for Distributed Systems

Consider the following structure of the adaptive identifier:

$$\frac{d}{dt}\hat{u}^i(x,t) = A^i \hat{u}^i(x,t) + [W_{1,t}^i]^{\top} \sigma(\hat{u}^i) + [W_{2,t}^i]^{\top} \varphi(\hat{u}^i)\, \hat{u}^{i-1}(x,t) + [W_{3,t}^i]^{\top} \gamma(\hat{u}^i)\, \hat{u}^{i-2}(x,t) \qquad (5)$$

$\forall\, i \in [3, N]$, where $A^i \in \mathbb{R}^-$, the time-varying matrices satisfy $W_{1,t}^i \in \mathbb{R}^{s_1}$, $W_{2,t}^i \in \mathbb{R}^{s_2}$, $W_{3,t}^i \in \mathbb{R}^{s_3}$, and $\hat{u}^i(x,t)$ is the estimate of $u^i(x,t)$. The weights satisfy the matrix differential equations

$$\dot{W}_1^i(t) = -\frac{2}{K_1} \sum_{i=1}^{N} \hat{u}_i^{\top}(t,x)\, T^i \sigma(\hat{u}^i) - \frac{2}{K_1} \sum_{i=1}^{N} \left( \Delta^i(t,x) \right)^{\top} P^i \sigma(\hat{u}^i) - \alpha_m^i W_1^i(t) - \frac{2}{K_1} \sum_{i=1}^{N} \left( \Delta_x^i(t,x) \right)^{\top} S^i \nabla_x \sigma(\hat{u}^i)$$

$$\dot{W}_2^i(t) = -\frac{2}{K_2} \sum_{i=1}^{N} \hat{u}_i^{\top}(t,x)\, T^i \varphi(\hat{u}^i)\, \hat{u}^{i-1}(t,x) - \alpha_m^i W_2^i(t) - \frac{2}{K_2} \sum_{i=1}^{N} \left( \Delta^i(t,x) \right)^{\top} P^i \varphi(\hat{u}^i)\, \hat{u}^{i-1}(t,x) - \frac{2}{K_2} \sum_{i=1}^{N} \left( \Delta_x^i(t,x) \right)^{\top} S^i \nabla_x \varphi(\hat{u}^i)\, \hat{u}^{i-1}(t,x)$$

$$\dot{W}_3^i(t) = -\frac{2}{K_3} \sum_{i=1}^{N} \hat{u}_i^{\top}(t,x)\, T^i \gamma(\hat{u}^i)\, \hat{u}^{i-2}(t,x) - \alpha_m^i W_3^i(t) - \frac{2}{K_3} \sum_{i=1}^{N} \left( \Delta_x^i(t,x) \right)^{\top} S^i \nabla_x \gamma(\hat{u}^i)\, \hat{u}^{i-2}(t,x) - \frac{2}{K_3} \sum_{i=1}^{N} \left( \Delta^i(t,x) \right)^{\top} P^i \gamma(\hat{u}^i)\, \hat{u}^{i-2}(t,x) \qquad (6)$$

where $P^i$, $S^i$ and $T^i$ ($i = \overline{3,N}$) are positive definite solutions ($P^i > 0$, $S^i > 0$ and $T^i > 0$) of the algebraic Riccati equations given by

$$P^i A^i + [A^i]^{\top} P^i + P^i \Lambda_P^i P^i + \lambda_{\max}\!\left([\Lambda_P^i]^{-1}\right) F_3^i I_{n \times n} + Q_P^i = 0$$

$$S^i A^i + [A^i]^{\top} S^i + S^i \Lambda_S^i S^i + \lambda_{\max}\!\left([\Lambda_S^i]^{-1} F_8^i I_{n \times n}\right) + Q_S^i = 0$$

$$T^i A^i + [A^i]^{\top} T^i + T^i \Lambda_T^i T^i + Q_T^i + \lambda_{\max}\!\left([\Lambda_P^i]^{-1}\right) F_0^i I_{n \times n} + \left( \lambda_{\max}\!\left([\Lambda_S^i]^{-1} F_5^i\right) + \lambda_{\max}\!\left([\Lambda_T^i]^{-1}\right) f_0^i \right) I_{n \times n} = 0 \qquad (7)$$

The special class of Riccati equation $PA + A^{\top}P + PRP + Q = 0$ has a positive definite solution if and only if the following four conditions are fulfilled [10]:

– the matrix $A$ is stable;
– the pair $(A, R^{1/2})$ is controllable;
– the pair $(Q^{1/2}, A)$ is observable;
– the matrices $(A, Q, R)$ are selected in such a way that the following inequality is satisfied:

$$\frac{1}{4}\left( A^{\top} R^{-1} - R^{-1} A \right) R \left( A^{\top} R^{-1} - R^{-1} A \right)^{\top} + Q \le A^{\top} R^{-1} A$$

The last condition restricts the largest eigenvalue of $R$, ruling out the nonexistence of a positive definite solution of the Riccati equation. The state estimation problem for the uncertain nonlinear systems analyzed in this study can now be stated as follows:
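These conditions can be explored numerically. The sketch below solves $PA + A^{\top}P + PRP + Q = 0$ by a simple fixed-point iteration in which the quadratic term is frozen, so each step reduces to a Lyapunov equation. The iteration scheme and the matrices $A$, $R$, $Q$ are illustrative assumptions (the paper does not prescribe a solver), and the loop may fail to converge when $R$ is too large, consistent with the eigenvalue restriction above:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def solve_riccati_prp(A, R, Q, iters=200, tol=1e-10):
    """Fixed-point iteration for P A + A^T P + P R P + Q = 0.
    Each step solves the Lyapunov equation A^T P + P A = -(Q + P_k R P_k)."""
    P = np.zeros_like(A)
    for _ in range(iters):
        M = Q + P @ R @ P
        P_next = solve_continuous_lyapunov(A.T, -M)  # solves A^T X + X A = -M
        if np.linalg.norm(P_next - P) < tol:
            return P_next
        P = P_next
    return P

# Illustrative data: A stable, R and Q symmetric positive definite (assumptions)
A = np.array([[-3.0, 1.0],
              [0.0, -4.0]])
R = 0.1 * np.eye(2)
Q = np.eye(2)

P = solve_riccati_prp(A, R, Q)
print("residual norm:", np.linalg.norm(P @ A + A.T @ P + P @ R @ P + Q))
```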


Problem Statement. For the nonlinear system, with an adequate selection of the matrices $A^i$ and with the neural network identifier structure supplied with the adjustment law (6) (including the selection of $W_i^*$, $i = 1, 2, 3$), the upper bound for the estimation error $\beta$, defined as

$$\beta := \lim_{t \to \infty} \|u(t,x) - \hat{u}(t,x)\|_P^2 \qquad (8)$$

with $P > 0$, $P = P^{\top} \in \mathbb{R}^{n \times n}$, must be obtained and, if possible, reduced to its least achievable value using any of the free parameters participating in the NN structure.

The following definition and proposition are needed for the main results of the paper. Consider the nonlinear ODE system

$$\dot{z}_t = g(z_t, v_t) + \varpi_t \qquad (9)$$

with $z_t \in \mathbb{R}^n$, $v_t \in \mathbb{R}^m$, and $\varpi_t$ an external perturbation or uncertainty such that $\|\varpi_t\|^2 \le \varpi^+$.

Definition 1 (Practical Stability). Assume that a time interval $T$ and a fixed function $v_t^* \in \mathbb{R}^m$ over $T$ are given. Given $\varepsilon > 0$, the nonlinear system (9) is said to be $\varepsilon$-practically stable over $T$ under the presence of $\varpi_t$ if there exists a $\delta > 0$ ($\delta$ depends on $\varepsilon$ and the interval $T$) such that $z_t \in B[0, \varepsilon]$, $\forall\, t \in T$, whenever $z_{t_0} \in B[0, \delta]$.

In analogy with the Lyapunov stability theory for nonlinear systems, the direct method is applied to the $\varepsilon$-practical stability of nonlinear systems, using $\varepsilon$-practical Lyapunov-like functions under the presence of external perturbations and model uncertainties. Note that these functions have properties differing significantly from the usual Lyapunov functions in classical stability theory.

The following proposition requires the following lemma.

Lemma 1. Let $V(t)$ be a nonnegative function satisfying the differential inequality $\dot{V}(t) \le -\alpha V(t) + \beta$, where $\alpha > 0$ and $\beta \ge 0$. Then

$$\left[ 1 - \mu \left( \sqrt{V(t)} \right)^{-1} \right]_+ \to 0, \qquad \mu = \sqrt{\beta/\alpha}$$

with the function $[\cdot]_+$ defined as

$$[z]_+ := \begin{cases} z & \text{if } z \ge 0 \\ 0 & \text{if } z < 0 \end{cases}$$

Proof. The proof of this lemma can be found in [17].
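The content of the lemma is easy to visualize numerically: integrating the worst case $\dot{V} = -\alpha V + \beta$ drives $V(t)$ to $\beta/\alpha$, so $\sqrt{V(t)} \to \mu$ and the bracket $[1 - \mu(\sqrt{V(t)})^{-1}]_+$ vanishes. A minimal sketch with arbitrary illustrative constants:

```python
import numpy as np

alpha, beta = 2.0, 0.5        # illustrative constants with alpha > 0, beta >= 0
mu = np.sqrt(beta / alpha)

V, dt = 10.0, 1e-3            # initial value and Euler step
for _ in range(int(20.0 / dt)):
    V += dt * (-alpha * V + beta)   # worst case of the differential inequality

bracket = max(1.0 - mu / np.sqrt(V), 0.0)   # the [.]_+ expression of Lemma 1
print(f"V(20) = {V:.6f}, beta/alpha = {beta/alpha}, bracket = {bracket:.2e}")
```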

Proposition 1. Given a time interval $T$ and a function $v(\cdot)$ over $T$, a continuously differentiable real-valued function $V(z,t)$ satisfying $V(0,t) = 0$, $\forall\, t \in T$, is said to be an $\varepsilon$-practical Lyapunov-like function over $T$ under $v$ if there exists a constant $\alpha > 0$ such that $\dot{V}(z,t) \le -\alpha V(z,t) + H(\varpi^+)$, with $H$ a bounded nonnegative nonlinear function with upper bound $H^+$. Moreover, the trajectories of $z_t$ belong to the zone $\varepsilon := H^+/\alpha$ when $t \to \infty$. In this proposition $\dot{V}(z_t, t)$ denotes the derivative of $V(z,t)$ along $z_t$, i.e., $\dot{V}(z,t) = V_z(z,t) \cdot (g(z_t, v_t) + \varpi_t) + V_t(z,t)$.


Proof. The proof follows directly from Lemma 1.

Definition 2. Given a time interval $T$ and a function $v(\cdot)$ over $T$, the nonlinear system (9) is $\varepsilon$-practically stable over $T$ under $v$ if there exists an $\varepsilon$-practical Lyapunov-like function $V(z,t)$ over $T$ under $v$.

Theorem. Consider the nonlinear system described by the unknown and perturbed (in state and output) PDE (1), with Dirichlet- and Neumann-type boundary conditions defined in (2). Suppose that the non-parametric adaptive identifier has the structure (5), with parameters adjusted by the adaptive law given in (6). If there exist positive definite matrices $Q_P^i$, $Q_S^i$ and $Q_T^i$ such that the Riccati equations (7) have positive definite solutions $P^i$, $S^i$ and $T^i$ ($i = \overline{3,N}$), then the upper bound

$$\lim_{t \to \infty} \|u^i(t,x) - \hat{u}^i(t,x)\|_P \le \rho$$

is ensured for the state non-parametric identification process, where

$$\rho := \sqrt{ \min_i (\alpha_m^i)^{-1}\, N \max_i \left( \lambda_{\max}\!\left([\Lambda_P^i]^{-1}\right) F_4^i + \lambda_{\max}\!\left([\Lambda_S^i]^{-1}\right) F_9^i \right) } + \sqrt{ \min_i (\alpha_m^i)^{-1}\, N \max_i \left( \lambda_{\max}\!\left([\Lambda_T^i]^{-1}\right) f_3^i \right) }$$

Moreover, the weights $W_{1,t}$, $W_{2,t}$ and $W_{3,t}$ are bounded, with bounds $\|W_{1,t}\| \le K_1 \rho$, $\|W_{2,t}\| \le K_2 \rho$, $\|W_{3,t}\| \le K_3 \rho$.

Proof. The detailed proof is given in the appendix.
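To make the identifier concrete, the sketch below implements a deliberately simplified scalar-per-node version of (5), with a single weight vector per node and a gradient-style learning law in the spirit of (6). The stable scalar playing the role of $A^i$, the gains, the tanh activation vector, and the Euler integration are all illustrative assumptions, not the full adaptive law with the $P^i$, $S^i$, $T^i$ weighting matrices:

```python
import numpy as np

def sigma(u):
    # Activation vector for one node; a bounded tanh basis with a bias entry
    # (one admissible choice of activations, not prescribed by the paper).
    return np.array([np.tanh(u), np.tanh(0.5 * u), 1.0])

N, dt, steps = 20, 1e-3, 20000
a = -5.0                   # stable scalar playing the role of A^i (assumption)
k1, alpha_m = 1.0, 0.01    # learning gain and damping (assumptions)

rng = np.random.default_rng(0)
u = rng.uniform(1.0, 2.0, N)   # "measured" PDE states at N spatial nodes
u_hat = np.zeros(N)            # identifier states
W1 = np.zeros((N, 3))          # one weight vector per node

for _ in range(steps):
    for i in range(N):
        s = sigma(u_hat[i])
        delta = u_hat[i] - u[i]                      # identification error Delta^i
        u_hat[i] += dt * (a * u_hat[i] + W1[i] @ s)  # cf. the structure of (5)
        W1[i] += dt * (-(2.0 / k1) * delta * s - alpha_m * W1[i])  # cf. (6)

print("mean identification error:", np.mean(np.abs(u_hat - u)))
```

Consistent with the theorem, the error does not vanish exactly: it settles into a bounded zone whose size shrinks as the damping term is reduced.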

4 Simulation Results

For the purpose of illustrating the main theoretical results derived in the previous sections, we consider an anaerobic degradation system realized in a fixed-bed reactor with a recirculation tank. The dynamics of the state variables in this process are described by the following energy and mass balance PDEs:

$$\frac{\partial X_1}{\partial t} = (\mu_1 - \varepsilon D) X_1, \qquad \frac{\partial X_2}{\partial t} = (\mu_2 - \varepsilon D) X_2$$

$$\frac{\partial S_1}{\partial t} = \frac{E_z}{H^2} \frac{\partial^2 S_1}{\partial x^2} - D \frac{\partial S_1}{\partial x} - k_1 \mu_1 X_1$$

$$\frac{\partial S_2}{\partial t} = \frac{E_z}{H^2} \frac{\partial^2 S_2}{\partial x^2} - D \frac{\partial S_2}{\partial x} + k_1 \mu_1 X_1 - k_2 \mu_2 X_2$$

$$\mu_1 = \frac{\mu_{1,\max} S_1}{S_1 + K_{S_1}}, \qquad \mu_2 = \frac{\mu_{2,\max} S_2}{S_2 + K_{S_2} + S_2^2 / K_{I_2}} \qquad (10)$$

In these equations, $x \in [0,1]$; $t\ [\mathrm{d}]$ is the evolution time of the digestion process; $E_z\ [\mathrm{m}^2\,\mathrm{d}^{-1}]$ is the axial dispersion coefficient; $D\ [\mathrm{d}^{-1}]$ is the dilution factor; $H\ [\mathrm{m}]$ is the length of the reactor; $X_1\ [\mathrm{g\,L}^{-1}]$ is the acidogenic biomass; $X_2\ [\mathrm{g\,L}^{-1}]$ is the methanogenic biomass; $S_1\ [\mathrm{g\,L}^{-1}]$ is the chemical oxygen demand; $S_2\ [\mathrm{g\,L}^{-1}]$ is the volatile acid concentration; and $\varepsilon$ is the fraction of bacteria in the liquid phase.


Fig. 1. Chemical oxygen demand dynamics: numerical trajectory produced by the mathematical model during 10 hours (a) and estimated trajectory produced by the DNN-based identifier (b). The two trajectories are very close to each other except during the first hour.

Fig. 2. Methanogenic bacteria dynamics: numerical trajectory produced by the mathematical model during 10 hours (a) and estimated trajectory produced by the DNN-based identifier (b). The two trajectories are very close to each other except during the first hour.


The biological reactor system (10) exhibits the trajectories shown for the chemical oxygen demand and the methanogenic bacteria concentration (Figs. 1a and 2a). The initial and boundary conditions used in these numerical simulations are $u(0,t) = \mathrm{rand}(1)$, $u(x,0) = 10$, $u_x(0,t) = 0$. The diffusion and velocity parameters are selected as $D = 0.001$, $v = 0.01$, $c = 0.0001$. The DNN identifier for the PDE produces trajectories very close to the real trajectories of the reactor model, as can be seen in Figs. 1b and 2b. There is a zone with a significant difference between the real and estimated trajectories; this dissimilarity corresponds to the learning period required to adjust the DNN identifier. The difference between the real PDE trajectories and the estimated state produced by the DNN identifier is perceptible only during the first instants; the error is close to zero for almost all $x$ and $t$. This shows the efficiency of the identification process provided by the DNN algorithm.
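For the reader who wants to reproduce qualitatively similar trajectories, the following method-of-lines sketch integrates the reactor model (10). All kinetic and transport values, the inlet concentrations, and the boundary handling are illustrative placeholders (the paper does not list them), so this is a sketch of the model class rather than the authors' exact simulation:

```python
import numpy as np
from scipy.integrate import solve_ivp

N = 50                                  # spatial nodes on x in [0, 1]
dx = 1.0 / (N - 1)

# Illustrative parameters (placeholders, not the paper's values)
Ez, H, D, eps = 1.0, 3.5, 0.5, 0.5
k1, k2 = 6.0, 16.0
mu1_max, KS1 = 1.2, 8.0
mu2_max, KS2, KI2 = 0.74, 2.5, 40.0

def rhs(t, y):
    X1, X2, S1, S2 = y.reshape(4, N)
    mu1 = mu1_max * S1 / (S1 + KS1)
    mu2 = mu2_max * S2 / (S2 + KS2 + S2 ** 2 / KI2)

    def derivs(S, inlet):
        # First/second spatial derivatives with a Dirichlet inlet at x = 0
        # and a zero-gradient outflow at x = 1 (a simple boundary choice).
        Sp = np.concatenate(([inlet], S, [S[-1]]))
        d1 = (Sp[2:] - Sp[:-2]) / (2.0 * dx)
        d2 = (Sp[2:] - 2.0 * Sp[1:-1] + Sp[:-2]) / dx ** 2
        return d1, d2

    d1S1, d2S1 = derivs(S1, inlet=15.0)   # assumed inlet concentrations
    d1S2, d2S2 = derivs(S2, inlet=5.0)

    dX1 = (mu1 - eps * D) * X1
    dX2 = (mu2 - eps * D) * X2
    dS1 = Ez / H ** 2 * d2S1 - D * d1S1 - k1 * mu1 * X1
    dS2 = Ez / H ** 2 * d2S2 - D * d1S2 + k1 * mu1 * X1 - k2 * mu2 * X2
    return np.concatenate([dX1, dX2, dS1, dS2])

y0 = np.concatenate([0.1 * np.ones(N), 0.1 * np.ones(N),
                     10.0 * np.ones(N), 1.0 * np.ones(N)])
sol = solve_ivp(rhs, (0.0, 10.0), y0, method="BDF", rtol=1e-6)
print("S1(x, t=10):", sol.y[2 * N:3 * N, -1].round(3))
```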

5 Conclusions

In this paper a new methodology to identify a class of nonlinear distributed parameter systems has been introduced. The suggested approach solves the problem of non-parametric identification of uncertain nonlinear systems described by partial differential equations. Convergence of the identification error has been demonstrated by applying a Lyapunov-like analysis using a special class of Lyapunov functional. Besides, the same analysis leads to the corresponding conditions for the upper bound of the weights involved in the identifier structure. Learning algorithms for the adjustable weights have been introduced, and practical stability results have been obtained to generate upper bounds for the weight trajectories. A numerical example on anaerobic digestion dynamics demonstrates the workability of this new methodology based on continuous neural networks.

References

1. Smith, G.D.: Numerical Solution of Partial Differential Equations: Finite Difference Methods. Clarendon Press, Oxford (1978)
2. Hughes, T.J.R.: The Finite Element Method. Prentice Hall, New Jersey (1987)
3. Haykin, S.: Neural Networks: A Comprehensive Foundation. IEEE Press, New York (1994)
4. Cybenko, G.: Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals and Systems 2, 303–314 (1989)
5. Lagaris, I.E., Likas, A., Fotiadis, D.I.: Artificial neural networks for solving ordinary and partial differential equations. IEEE Transactions on Neural Networks 9, 987–1000 (1998)
6. Dissanayake, M.W.M.G., Phan-Thien, N.: Neural-network-based approximations for solving partial differential equations. Communications in Numerical Methods in Engineering 10, 195–201 (1994)
7. Mai-Duy, N., Tran-Cong, T.: Numerical solution of differential equations using multiquadric radial basis function networks. Neural Networks 14, 185–199 (2001)
8. He, S., Reif, K., Unbehauen, R.: Multilayer neural networks for solving a class of partial differential equations. Neural Networks 13, 385–396 (2000)
9. Jaime, E.: Analytical approximation of the solution of partial differential equations by artificial neural networks: application to thermal simulation in microsystems. PhD thesis, Institut National des Sciences Appliquées de Toulouse (2004)
10. Poznyak, A., Sanchez, E., Yu, W.: Differential Neural Networks for Robust Nonlinear Control (Identification, State Estimation and Trajectory Tracking). World Scientific, Singapore (2001)
11. Lewis, F.L., Yesildirek, A., Liu, K.: Multilayer neural-net robot controller with guaranteed tracking performance. IEEE Trans. Neural Netw. 7, 1–11 (1996)
12. Khalil, H.K.: Nonlinear Systems. Prentice-Hall, Upper Saddle River (2002)
13. Adams, R., Fournier, J.: Sobolev Spaces, 2nd edn. Academic Press, New York (2003)
14. Delyon, B., Juditsky, A., Benveniste, A.: Accuracy analysis for wavelet approximations. IEEE Trans. Neural Networks 6, 332–348 (1995)
15. Cotter, N.E.: The Stone–Weierstrass theorem and its application to neural networks. IEEE Transactions on Neural Networks 1, 290–295 (1990)
16. Daubechies, I.: Ten Lectures on Wavelets. SIAM, Philadelphia (1992)
17. Poznyak, A.: Deterministic output noise effects in sliding mode observation. In: Sliding Modes: From Principles to Implementation, ch. 3, pp. 123–146. IEEE Press, Los Alamitos (2001)

Appendix

Consider the Lyapunov-like functional

$$V(t) := \sum_{i=1}^{N} \left( V_i(t,x) + \sum_{r=1}^{3} \mathrm{tr}\left\{ [W_r^i(t)]^{\top} K_r W_r^i(t) \right\} \right)$$
$$V_i(t,x) := \|\Delta^i(t,x)\|_{P^i}^2 + \|\Delta_x^i(t,x)\|_{S^i}^2 + \|\hat{u}^i(t,x)\|_{T^i}^2 \qquad (11)$$

Following the procedure of the second Lyapunov method, the time derivative of $V(t)$ is

$$\dot{V}(t) = 2 \sum_{i=1}^{N} \left( (\Delta^i(t,x))^{\top} P^i \frac{d}{dt}\Delta^i(t,x) + \sum_{r=1}^{3} \mathrm{tr}\left\{ [\dot{W}_r^i(t)]^{\top} K_r W_r^i(t) \right\} \right) + 2 \sum_{i=1}^{N} \left( [\Delta_x^i(t,x)]^{\top} S^i \frac{d}{dt}\Delta_x^i(t,x) + \hat{u}_i^{\top}(t,x)\, T^i \frac{d}{dt}\hat{u}^i(t,x) \right) \qquad (12)$$

Using these results and the matrix inequality $XY^{\top} + YX^{\top} \le X\Lambda X^{\top} + Y\Lambda^{-1}Y^{\top}$, valid for any $X, Y \in \mathbb{R}^{r \times s}$ and any $0 < \Lambda = \Lambda^{\top} \in \mathbb{R}^{s \times s}$, together with the Riccati equations defined in (7) and in view of the weight adjustment equations (6), the previous equality simplifies to

$$\dot{V}(t) \le -\alpha_m^i V(t) + \sum_{i=1}^{N} \lambda_{\max}\!\left( [\Lambda_S^i]^{-1} F_9^i + [\Lambda_T^i]^{-1} f_3^i + [\Lambda_P^i]^{-1} F_4^i \right)$$


Taking the maximum value over $i$, we obtain

$$\dot{V}(t) \le -\min_i(\alpha_m^i)\, V(t) + N \max_i \left( \lambda_{\max}\!\left( [\Lambda_S^i]^{-1} F_9^i + [\Lambda_T^i]^{-1} f_3^i + [\Lambda_P^i]^{-1} F_4^i \right) \right)$$

Applying Lemma 1, one has $\left[ 1 - \mu \left( \sqrt{V(t)} \right)^{-1} \right]_+ \to 0$, which completes the proof.