Nonlinear Analysis: Modelling and Control, 2014, Vol. 19, No. 1, 1–25

Exponential synchronization for reaction-diffusion neural networks with mixed time-varying delays via periodically intermittent control*

Qintao Gan^a, Hong Zhang^b, Jun Dong^c

^a Department of Basic Science, Shijiazhuang Mechanical Engineering College, Shijiazhuang 050003, China; [email protected]
^b School of Foreign Language, Hebei University of Science and Technology, Shijiazhuang 050000, China
^c State Key Laboratory of Complex Electromagnetic Environmental Effects on Electronics and Information System, Luoyang 471003, China

Received: 16 December 2012 / Revised: 27 April 2013 / Published online: 25 November 2013

Abstract. This paper deals with the exponential synchronization problem for reaction-diffusion neural networks with mixed time-varying delays and stochastic disturbance. By using stochastic analysis approaches and constructing a novel Lyapunov–Krasovskii functional, a periodically intermittent controller is first proposed to guarantee the exponential synchronization of reaction-diffusion neural networks with mixed time-varying delays and stochastic disturbance in terms of p-norm. The obtained synchronization results are easy to check and improve upon the existing ones. Particularly, the traditional assumptions on control width and time-varying delays are removed in this paper. This paper also presents two illustrative examples and uses simulated results of these examples to show the feasibility and effectiveness of the proposed scheme.

Keywords: synchronization, neural networks, mixed time-varying delays, reaction-diffusion, periodically intermittent control.

1 Introduction

In the past decade, there has been a great interest in various types of neural networks (for example, Hopfield neural networks, cellular neural networks, Cohen–Grossberg neural networks, bidirectional associative memory neural networks, competitive neural networks, etc.) due to their wide range of applications, such as signal processing, pattern recognition, image processing, associative memory, fault diagnosis, aerospace, defense,

* This work was supported by the National Natural Science Foundation of China (grants No. 10671209 and No. 11071254) and the Scientific Research Foundation for the Returned Overseas Chinese Scholars, State Education Ministry.

© Vilnius University, 2014


telecommunications, automatic control engineering, and combinatorial optimization. However, time delays are unavoidable in the information processing of neurons for various reasons [1–3]. For example, time delays can be caused by the finite switching speed of amplifier circuits in neural networks, or they can be deliberately introduced to accomplish tasks dealing with motion-related problems, such as moving image processing. Meanwhile, a neural network usually has a spatial nature due to the presence of a multitude of parallel pathways with a variety of axon sizes and lengths, so it is desirable to model it by introducing distributed delays. Therefore, both discrete and distributed delays, and especially discrete and distributed time-varying delays, should be taken into account when modeling realistic neural networks [4–7].

It has been reported that, if the parameters and mixed time-varying delays are appropriately chosen, neural networks can exhibit complicated behaviors, even strange chaotic attractors. Thus, the synchronization problems of chaotic neural networks with mixed time-varying delays have received much attention both in theory and in practice due to their potential applications in various technological fields, including chaos generator design, secure communications, chemical and biological systems, information processing, distributed computation, optics, social science, harmonic oscillation generation, human heartbeat regulation, power system protection, and so on [8–12].

In signal transmission, the signal becomes weak due to diffusion, so an external control should be added until the strength of the signal reaches an upper level. Then the external control can be removed in view of the cost. Therefore, in comparison with continuous control, discontinuous controllers, which include intermittent control and impulsive control, have attracted more interest due to their wide applications in engineering fields [13]. Intermittent control, which was first introduced to control linear econometric models in [14], has been used in practice for a variety of purposes such as manufacturing, transportation and communication. In [15–17], the synchronization problems for a class of chaotic neural networks with constant delay were investigated by designing periodically intermittent controllers based on the 2-norm. The periodically intermittent control approach was then applied to deal with the synchronization of chaotic neural networks without time delays in [18], also based on the 2-norm. In [19], Yu et al. investigated the synchronization problem of Cohen–Grossberg neural networks with time-varying delays by designing a periodically intermittent controller based on the ∞-norm. Moreover, a novel intermittent impulsive synchronization scheme was proposed to realize synchronization of two chaotic delayed neural networks in [20]. As pointed out by Hu et al. [21], most of the previous results presented in [15–17] were obtained by constructing Lyapunov functions and using two central differential inequalities, and the restriction that the control width is greater than the time delay was imposed in [15–17]. The condition that the non-control width should be greater than the time delay was also required in [16]. Evidently, the areas of application of the results obtained in [15–17] are limited because of these assumptions. Therefore, in order to reduce the possible conservatism for the sake of broader applications of the intermittent control technique, under the precondition that the derivative of the time-varying delay was smaller than one, Hu et al. [21] studied, for the first time, the exponential stabilization and synchronization for a class of neural networks with time-varying delays using a periodically intermittent control technique based on the p-norm and the ∞-norm, respectively.


The methods used in [21] were totally different from the corresponding previous works, and the obtained conditions were less conservative. Particularly, the traditional assumptions on the control width and the time delay were removed.

Actually, synaptic transmission in real neural networks can be viewed as a noisy process introduced by random fluctuations from the release of neurotransmitters and other probabilistic causes [22–25]. Hence, noise is unavoidable and should be taken into consideration in modeling. On the other hand, diffusion effects cannot be avoided in neural networks when electrons move in asymmetric electromagnetic fields, so we must consider that the activations vary in space as well as in time. From the above analysis, the stochastic noise perturbation and diffusion effects on the dynamic behaviors of neural networks cannot be neglected, so theoretical results on dynamic behaviors that include stochastic disturbance and diffusion parameters are more reasonable. With respect to reaction-diffusion neural networks with stochastic perturbation, a few results on dynamic analysis have been reported in the literature [26–35].

To the best of our knowledge, there are few results, or even no results, concerning the synchronization issues for neural networks with mixed time-varying delays, stochastic noise perturbation and reaction-diffusion terms in terms of p-norm by using periodically intermittent control. The issues of integrating mixed time-varying delays, stochastic noise perturbation and reaction-diffusion effects into the study of synchronization for neural networks require more complicated analysis. Therefore, it is interesting to study this problem both in theory and in applications.

Motivated by the above discussion, this paper is concerned with the exponential synchronization for reaction-diffusion neural networks with mixed time-varying delays and stochastic perturbation in terms of p-norm by using a periodically intermittent control approach. Some examples with numerical simulations are provided to show the feasibility and effectiveness of the proposed method.

The main contribution of this paper can be summarized as follows:

1. For the first time, an exponential synchronization criterion is established for reaction-diffusion neural networks with mixed time-varying delays and stochastic noise perturbation based on periodically intermittent control.

2. Unlike the existing results on synchronization for reaction-diffusion neural networks based on the 2-norm (see [36–39]), some new and useful conditions are obtained in this paper to guarantee the exponential synchronization of the proposed neural networks under the periodically intermittent control in terms of p-norm.

3. The restrictions on the periodically intermittent controller that the control width must be greater than the time delay and that the non-control width must also be greater than the time delay are removed, so the controller here is more general than the periodically intermittent controllers given in [15–17].

4. A novel Lyapunov–Krasovskii functional is proposed, and the restriction in [21] that the derivative of the time-varying delay should be smaller than one is removed.

5. In [40], the authors pointed out that it is quite difficult to find a chaotic attractor for reaction-diffusion delayed neural networks. Obviously, this is an important


and interesting open problem. In this paper, by using the classical implicit format for solving the partial differential equations and the method of steps for differential-difference equations, we find that, if the parameters are appropriately chosen, the reaction-diffusion neural networks can exhibit chaotic attractors.

The organization of this paper is as follows. In the next section, the problem statement and preliminaries are presented. In Section 3, a periodically intermittent controller is proposed to ensure exponential synchronization of reaction-diffusion neural networks with mixed time-varying delays and stochastic noise perturbation in terms of p-norm. Numerical simulations are given in Section 4 to demonstrate the effectiveness and feasibility of our theoretical results. We end this work with a conclusion in Section 5.

Notation. Throughout this paper, R^n and R^{n×m} denote the n-dimensional Euclidean space and the set of all n × m real matrices, respectively; C^{2,1}(R_+ × R^n; R_+) denotes the family of all nonnegative functions V(t, x(t)) on R_+ × R^n which are continuously twice differentiable in x and once differentiable in t; (Ω, F, P) is a complete probability space, where Ω is the sample space, F is the σ-algebra of subsets of the sample space and P is the probability measure on F; E{·} stands for the mathematical expectation operator with respect to the given probability measure P; "sgn" is the sign function.

2 Problem statement and preliminaries

In this paper, we are concerned with a class of reaction-diffusion neural networks with mixed time-varying delays, which can be described by the following integro-differential equations:

\[
\frac{d u_i(t,x)}{dt} = \sum_{k=1}^{l^*}\frac{\partial}{\partial x_k}\Big(D_{ik}\frac{\partial u_i(t,x)}{\partial x_k}\Big) - c_i u_i(t,x) + \sum_{j=1}^{n} a_{ij} f_j\big(u_j(t,x)\big) + \sum_{j=1}^{n} b_{ij} g_j\big(u_j(t-\tau_{ij}(t),x)\big) + \sum_{j=1}^{n} d_{ij}\int_{t-\tau^*_{ij}(t)}^{t} h_j\big(u_j(s,x)\big)\,ds + J_i, \quad (1)
\]

where i = 1, 2, . . . , n, and n is the number of neurons in the network; x = (x_1, x_2, . . . , x_{l*})^T ∈ Ω ⊂ R^{l*}, where Ω = {x = (x_1, x_2, . . . , x_{l*})^T : |x_k| < m_k, k = 1, 2, . . . , l*} is a bounded compact set with smooth boundary ∂Ω and mes Ω > 0 in the space R^{l*}; u(t,x) = (u_1(t,x), . . . , u_n(t,x))^T, where u_i(t,x) corresponds to the state of the ith neural unit at time t and in space x; c_i > 0 represents the decay rate of the ith neuron; a_{ij}, b_{ij} and d_{ij} are, respectively, the connection strength, the time-varying delay connection strength, and the distributed time-varying delay connection strength of the jth neuron on the ith neuron; f_j(·), g_j(·) and h_j(·) denote the activation functions; D_{ik} > 0 corresponds to the transmission diffusion operator along the ith neuron; 0 < τ_{ij}(t) ≤ τ and 0 < τ*_{ij}(t) ≤ τ* are the time-varying delay and the distributed time-varying delay along the axon of the jth unit from the ith unit, respectively; J_i denotes the bias of the ith neuron.


The boundary condition of system (1) is
\[ u_i(t,x)\big|_{\partial\Omega} = 0, \quad (t,x) \in [-\bar\tau, +\infty) \times \partial\Omega, \quad (2) \]
and the initial value of system (1) is
\[ u_i(s,x) = \phi_i(s,x), \quad (s,x) \in [-\bar\tau, 0] \times \Omega, \quad (3) \]

where τ̄ = max{τ, τ*}, φ(s,x) = (φ_1(s,x), . . . , φ_n(s,x))^T ∈ C is bounded and continuous, and C = C([−τ̄, 0] × Ω, R^n) is the Banach space of continuous functions mapping [−τ̄, 0] × Ω into R^n with the topology of uniform convergence and the p-norm (p is a positive integer) defined by

\[ \|\phi\|_p = \bigg(\int_\Omega \sum_{i=1}^{n} \sup_{-\bar\tau \le s \le 0}\big|\phi_i(s,x)\big|^p\,dx\bigg)^{1/p}. \]

Chaotic systems depend extremely sensitively on initial values, and even infinitesimal changes in the initial condition will lead to an asymptotic divergence of orbits. In order to observe the synchronization behavior of system (1), we introduce another delayed neural network, which is the response system of the drive system (1). However, the initial condition of the response system is defined to be different from that of the drive system. Therefore, the controlled response system of network (1) can be described by the following equations:

\[
\begin{aligned}
dv_i(t,x) = {}&\bigg[\sum_{k=1}^{l^*}\frac{\partial}{\partial x_k}\Big(D_{ik}\frac{\partial v_i(t,x)}{\partial x_k}\Big) - c_i v_i(t,x) + \sum_{j=1}^{n} a_{ij} f_j\big(v_j(t,x)\big) + \sum_{j=1}^{n} b_{ij} g_j\big(v_j(t-\tau_{ij}(t),x)\big) \\
&+ \sum_{j=1}^{n} d_{ij}\int_{t-\tau^*_{ij}(t)}^{t} h_j\big(v_j(s,x)\big)\,ds + J_i + w_i(t,x)\bigg]dt + \sum_{j=1}^{n}\sigma_{ij}\big(e_j(t,x),\, e_j(t-\tau_{ij}(t),x)\big)\,d\omega_j(t), \quad (4)
\end{aligned}
\]

where i = 1, 2, . . . , n, v(t,x) = (v_1(t,x), . . . , v_n(t,x))^T is an n-dimensional state vector of the neural networks, e(t,x) = (e_1(t,x), . . . , e_n(t,x))^T = v(t,x) − u(t,x) is the synchronization error signal, σ = (σ_{ij})_{n×n} is the diffusion coefficient matrix (or noise intensity matrix), and the stochastic disturbance ω(t) = (ω_1(t), . . . , ω_n(t))^T ∈ R^n is a Brownian motion defined on (Ω, F, P) with

\[ E\{d\omega(t)\} = 0, \qquad E\{d\omega^2(t)\} = dt. \]

This type of stochastic perturbation can be regarded as a result of internal errors that occur when the simulation circuits are constructed, such as inaccurate design of the coupling strength and of some other important parameters [41]; therefore, it depends on


the drive system (1). Here w(t,x) = (w_1(t,x), . . . , w_n(t,x))^T is an intermittent controller defined by

\[
w_i(t,x) = \begin{cases} \sum_{j=1}^{n} k_{ij}\big(v_j(t,x) - u_j(t,x)\big), & (t,x) \in [mT,\, mT+\delta) \times \Omega, \\ 0, & (t,x) \in [mT+\delta,\, (m+1)T) \times \Omega, \end{cases} \quad (5)
\]

where m = 0, 1, . . ., k_{ij} (i, j = 1, . . . , n) denotes the control strength, T > 0 denotes the control period and 0 < δ < T is called the control width.
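As a purely illustrative transcription of (5), the switching logic of the controller can be written in a few lines; the function below (Python/NumPy, hypothetical names, error vector evaluated at a single spatial point) is only a sketch of the gating, not the authors' implementation:

    import numpy as np

    def intermittent_control(t, e, K, T, delta):
        # e: synchronization error v(t, x) - u(t, x) at one spatial point, shape (n,)
        # K: feedback gain matrix (k_ij), shape (n, n); T: control period; delta: control width
        phase = t % T                      # position inside the current period [mT, (m+1)T)
        if phase < delta:                  # control window [mT, mT + delta)
            return K @ e
        return np.zeros_like(e)            # rest window [mT + delta, (m+1)T)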

The boundary condition and initial condition for the response system (4) are given in the forms
\[ v_i(t,x)\big|_{\partial\Omega} = 0, \quad (t,x) \in [-\bar\tau, +\infty) \times \partial\Omega, \quad (6) \]
and
\[ v_i(s,x) = \psi_i(s,x), \quad (s,x) \in [-\bar\tau, 0] \times \Omega, \quad (7) \]

where ψ_i(s,x) (i = 1, 2, . . . , n) are bounded and continuous on [−τ̄, 0] × Ω. Subtracting (1) from (4) yields the error system

\[
\begin{aligned}
de_i(t,x) = {}&\bigg[\sum_{k=1}^{l^*}\frac{\partial}{\partial x_k}\Big(D_{ik}\frac{\partial e_i(t,x)}{\partial x_k}\Big) - c_i e_i(t,x) + \sum_{j=1}^{n} a_{ij} f^*_j\big(e_j(t,x)\big) + \sum_{j=1}^{n} b_{ij} g^*_j\big(e_j(t-\tau_{ij}(t),x)\big) \\
&+ \sum_{j=1}^{n} d_{ij}\int_{t-\tau^*_{ij}(t)}^{t} h^*_j\big(e_j(s,x)\big)\,ds + w_i(t,x)\bigg]dt + \sum_{j=1}^{n}\sigma_{ij}\big(e_j(t,x),\, e_j(t-\tau_{ij}(t),x)\big)\,d\omega_j(t),
\end{aligned}
\]

where
\[ f^*_j\big(e_j(\cdot,x)\big) = f_j\big(v_j(\cdot,x)\big) - f_j\big(u_j(\cdot,x)\big), \quad g^*_j\big(e_j(\cdot,x)\big) = g_j\big(v_j(\cdot,x)\big) - g_j\big(u_j(\cdot,x)\big), \quad h^*_j\big(e_j(\cdot,x)\big) = h_j\big(v_j(\cdot,x)\big) - h_j\big(u_j(\cdot,x)\big). \]

In this paper, we give the following hypotheses:

(H1) We assume that there exist positive constants L_j, M_j and N_j such that the neuron activation functions f_j, g_j and h_j satisfy
\[ \big|f_j(v_j) - f_j(\tilde v_j)\big| \le L_j|v_j - \tilde v_j|, \quad \big|g_j(v_j) - g_j(\tilde v_j)\big| \le M_j|v_j - \tilde v_j|, \quad \big|h_j(v_j) - h_j(\tilde v_j)\big| \le N_j|v_j - \tilde v_j| \]
for all v_j, ṽ_j ∈ R (j = 1, 2, . . . , n).

(H2) The time-varying transmission delay τ_{ij}(t) satisfies τ̇_{ij}(t) ≤ ϱ < 1 or τ̇_{ij}(t) ≥ ϱ > 1 for all t, where ϱ is a constant.


(H3) There exist positive constants η_{ij} (i, j = 1, 2, . . . , n) such that
\[ \big|\sigma_{ij}(v_1,\tilde v_1) - \sigma_{ij}(v_2,\tilde v_2)\big|^2 \le \eta_{ij}\big(|v_1 - v_2|^2 + |\tilde v_1 - \tilde v_2|^2\big) \]
for any v_1, v_2, ṽ_1, ṽ_2 ∈ R, and σ_{ij}(0, 0) = 0, i, j = 1, 2, . . . , n.

Before ending this section, we introduce some notations, the notion of exponential synchronization for the reaction-diffusion neural networks (1) and (4) under the periodically intermittent controller (5) based on p-norm, and some lemmas, which will come into play later on.

For any u(t,x) = (u_1(t,x), . . . , u_n(t,x))^T ∈ R^n, define
\[ \|u(t,x)\|_p = \bigg(\int_\Omega \sum_{i=1}^{n}\big|u_i(t,x)\big|^p\,dx\bigg)^{1/p}. \]

Let PC = PC([−τ̄, 0] × Ω, R^n) denote the set of piecewise left-continuous functions φ: [−τ̄, 0] × Ω → R^n with the norm
\[ \|\phi\|_p = \bigg(\int_\Omega \sum_{i=1}^{n} \sup_{-\bar\tau \le s \le 0}\big|\phi_i(s,x)\big|^p\,dx\bigg)^{1/p}. \]

Definition 1. The reaction-diffusion neural networks (1) and (4) are said to be exponentially synchronized under the intermittent controller (5) based on p-norm if there exist constants μ > 0 and M ≥ 1 such that
\[ E\big\|v(t,x) - u(t,x)\big\|_p \le M\,E\big\{\|\psi - \phi\|_p\big\}\,e^{-\mu t} \]
for (t,x) ∈ [0, +∞) × Ω.

Lemma 1. (See [42].) Let p ≥ 2 be a positive integer, let m_k (k = 1, 2, . . . , l*) be positive constants, let Ω be the cube |x_k| < m_k, and let h(x) be a real-valued function belonging to C^1(Ω) which vanishes on the boundary ∂Ω of Ω, i.e., h(x)|_{∂Ω} = 0. Then
\[ \int_\Omega\big|h(x)\big|^p\,dx \le \frac{p^2 m_k^2}{4}\int_\Omega\big|h(x)\big|^{p-2}\bigg|\frac{\partial h}{\partial x_k}\bigg|^2 dx. \]

Lemma 2. (See [43].) Assume that there exist two continuous functions f(x), g(x): [a,b] → R and constants a, b, p, q satisfying b > a, p, q > 1 and 1/p + 1/q = 1. Then the following inequality holds:
\[ \int_a^b\big|f(x)g(x)\big|\,dx \le \bigg[\int_a^b\big|f(x)\big|^p\,dx\bigg]^{1/p}\bigg[\int_a^b\big|g(x)\big|^q\,dx\bigg]^{1/q}. \]


3 Exponential synchronization criterion

In this section, suitable T, δ and k_{ij} are designed to realize exponential synchronization between the reaction-diffusion neural networks (1) and (4) under the periodically intermittent controller (5) in terms of p-norm. For convenience, the following notations are introduced.

Let
\[
\begin{aligned}
\lambda_i = {}& p\Big[c_i - a_{ii}L_i - \tfrac{1}{2}(p-1)\eta_{ii}\Big] + \sum_{k=1}^{l^*}\frac{4(p-1)D_{ik}}{p m_k^2}
 - \sum_{\substack{j=1\\ j\neq i}}^{n}\sum_{l=1}^{p-1}|a_{ij}|^{p\xi_{lij}}L_j^{p\zeta_{lij}}
 - \frac{p-1}{2}\sum_{\substack{j=1\\ j\neq i}}^{n}\sum_{l=1}^{p-2}\eta_{ij}^{p\xi_{lij}}
 - \frac{p-1}{2}\sum_{j=1}^{n}\sum_{l=1}^{p-2}\eta_{ij}^{p\varepsilon^*_{lij}} \\
&- \sum_{j=1}^{n}\sum_{l=1}^{p-1}\Big(|b_{ij}|^{p\xi^*_{lij}}M_j^{p\zeta^*_{lij}} + |d_{ij}|^{p\xi^{**}_{lij}}N_j^{p\zeta^{**}_{lij}}\Big)
 - \sum_{\substack{j=1\\ j\neq i}}^{n}|a_{ji}|^{p\xi_{pji}}L_i^{p\zeta_{pji}}
 - \frac{p-1}{2}\sum_{\substack{j=1\\ j\neq i}}^{n}\Big(\eta_{ji}^{p\varepsilon_{(p-1)ji}} + \eta_{ji}^{p\varepsilon_{pji}}\Big)
 - \tau^*\sum_{j=1}^{n}\rho^{**}_{ij},
\end{aligned}
\]
\[
\omega_i = p k_{ii} + \sum_{\substack{j=1\\ j\neq i}}^{n}\sum_{l=1}^{p-1}|k_{ij}|^{p\varpi_{lij}} + \sum_{\substack{j=1\\ j\neq i}}^{n}|k_{ji}|^{p\varpi_{pji}},
\qquad
\rho_{ij} = \frac{1}{\alpha|1-\varrho|}\Big[|b_{ji}|^{p\xi^*_{pji}}M_i^{p\zeta^*_{pji}} + \frac{p-1}{2}\Big(\eta_{ji}^{p\varepsilon^*_{(p-1)ji}} + \eta_{ji}^{p\varepsilon^*_{pji}}\Big)\Big],
\]
\[
\rho^*_{ij} = \rho_{ij} - \alpha\rho_{ij}\operatorname{sgn}(1-\varrho), \qquad
\rho^{**}_{ij} = (\tau^*)^{p-1}|d_{ji}|^{p\xi^{**}_{pji}}N_i^{p\zeta^{**}_{pji}},
\]
where 0 < α < 1, and ξ_{lij}, ζ_{lij}, ξ*_{lij}, ζ*_{lij}, ξ**_{lij}, ζ**_{lij}, ϖ_{lij}, ε_{lij} and ε*_{lij} are nonnegative real numbers satisfying, respectively,
\[
\sum_{l=1}^{p}\xi_{lij} = 1,\quad \sum_{l=1}^{p}\zeta_{lij} = 1,\quad \sum_{l=1}^{p}\xi^*_{lij} = 1,\quad \sum_{l=1}^{p}\zeta^*_{lij} = 1,\quad \sum_{l=1}^{p}\xi^{**}_{lij} = 1,\quad \sum_{l=1}^{p}\zeta^{**}_{lij} = 1,\quad \sum_{l=1}^{p}\varpi_{lij} = 1,\quad \sum_{l=1}^{p}\varepsilon_{lij} = 1,\quad \sum_{l=1}^{p}\varepsilon^*_{lij} = 1.
\]

In the following, we will give an assumption:
(H4) λ_i − ω_i − Σ_{j=1}^{n} ρ_{ij} > 0 for i = 1, 2, . . . , n.

Consider the function
\[ F_i(\varepsilon_i) = \lambda_i - \omega_i - \varepsilon_i - \sum_{j=1}^{n}\rho_{ij}e^{\varepsilon_i\bar\tau}, \]


where ε_i ≥ 0. It is easy to see that
\[ F'_i(\varepsilon_i) = -1 - \sum_{j=1}^{n}\bar\tau\rho_{ij}e^{\varepsilon_i\bar\tau} < 0, \qquad F_i(0) = \lambda_i - \omega_i - \sum_{j=1}^{n}\rho_{ij} > 0. \]
On the other hand, F_i(ε_i) is continuous on [0, +∞) and F_i(ε_i) → −∞ as ε_i → +∞, so there exists a positive number ε̄_i such that F_i(ε̄_i) = 0 and F_i(ε_i) > 0 for ε_i ∈ (0, ε̄_i). Denoting ε = min_{i=1,...,n}{ε̄_i}, we then have
\[ F_i(\varepsilon) = \lambda_i - \omega_i - \varepsilon - \sum_{j=1}^{n}\rho_{ij}e^{\varepsilon\bar\tau} \ge 0. \]
It follows from assumption (H4) that there exists a positive number θ_i such that
\[ \lambda_i + \theta_i - \sum_{j=1}^{n}\rho_{ij} > 0 \]
for all i = 1, 2, . . . , n. In a similar way, we have
\[ G_i(\varepsilon) = \lambda_i + \theta_i - \varepsilon - \sum_{j=1}^{n}\rho_{ij}e^{\varepsilon\bar\tau} \ge 0, \quad (8) \]
and G_i(·) is decreasing.
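Because F_i is strictly decreasing with F_i(0) > 0 and F_i(ε_i) → −∞, the threshold ε̄_i can be located by a simple bisection; the sketch below (Python, hypothetical argument names) assumes the constants λ_i, ω_i, ρ_{ij} and τ̄ have already been computed:

    import math

    def find_epsilon_bar(lam_i, omega_i, rho_i, tau_bar, tol=1e-10):
        # F(eps) = lam_i - omega_i - eps - sum_j rho_ij * exp(eps * tau_bar)
        F = lambda eps: lam_i - omega_i - eps - sum(rho_i) * math.exp(eps * tau_bar)
        assert F(0.0) > 0.0, "assumption (H4) must hold"
        lo, hi = 0.0, 1.0
        while F(hi) > 0.0:                 # bracket the root F(eps_bar) = 0
            hi *= 2.0
        while hi - lo > tol:               # bisection on the bracketing interval
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if F(mid) > 0.0 else (lo, mid)
        return lo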

Theorem 1. Under assumptions (H1)–(H4), the noise-perturbed response system (4) and the drive system (1) can be exponentially synchronized under the periodically intermittent controller (5) based on p-norm with the exponential decay rate (εT − (T − δ)θ)/(pT), if the following condition is also satisfied:
(H5) ε − (T − δ)θ/T > 0, where θ = max_{i=1,...,n}{θ_i}.
The proof of Theorem 1 is given in the Appendix.

Remark 1. Ma et al. [37] investigated the synchronization problem for a class of stochastic reaction-diffusion neural networks with time-varying delays and Dirichlet boundary conditions in terms of the 2-norm by using linear feedback control, under the precondition that the derivative of the time-varying delay was smaller than one. Zhao and Deng studied the exponential synchronization of reaction-diffusion neural networks with continuously distributed delays and stochastic influence in terms of the 2-norm based on adaptive control in [44]. In [40], by using the Lyapunov functional method, many real parameters and inequality techniques, the global exponential synchronization for a class of delayed reaction-diffusion cellular neural networks with Dirichlet boundary conditions in terms of the 2k-norm (integer k > 0) was discussed. In contrast, our results are derived for the model with both discrete and distributed time-varying delays based on p-norm (p ≥ 2), which has a wider range of application. In this paper, this problem is addressed and some central criteria are derived by designing a periodically intermittent controller. Our results are more general, and they effectively complement or improve the previously known results in the literature, where only p = 2 or p = 2k was considered.


Remark 2. As pointed out in [21], there are few results concerning robust stability and robust synchronization schemes for complex networks, in particular stochastic complex networks and reaction-diffusion complex networks, based on p-norm and ∞-norm using intermittent control. This motivated us to write this paper. It is the first time that an exponential synchronization criterion has been established for neural networks with mixed time-varying delays, stochastic noise perturbation and reaction-diffusion effects in terms of p-norm. In this paper, the periodically intermittent control is generalized to study a more realistic neural network model, and the traditional restrictions in [15–17] that δ > τ and T − δ > τ are removed.

Remark 3. In Theorem 1, a novel Lyapunov–Krasovskii functional V(t,x) is employed to deal with the reaction-diffusion neural networks with mixed time-varying delays and stochastic perturbation. In the novel Lyapunov–Krasovskii functional (A.1), the integral term ∫_{t−τ̄}^{t} V_i(s,x) ds is divided into the two parts ρ_{ij} ∫_{t−τ_{ij}(t)}^{t} V_i(s,x) ds and ρ*_{ij} ∫_{t−τ̄}^{t−τ_{ij}(t)} V_i(s,x) ds, where ρ*_{ij} is chosen as ρ_{ij} − αρ_{ij} sgn(1 − ϱ) with 0 < α < 1; such a newly introduced variable may lead to potentially less conservative results on the upper bound of the time derivative of the time-varying delay.

Remark 4. The results in this paper show that the exponential synchronization criteria for reaction-diffusion neural networks depend on the time-varying delays, the diffusion effects and the stochastic noise fluctuations. Furthermore, we can observe a very interesting fact: as long as the diffusion coefficients D_{ik} in the system are large enough, assumption (H4) can always be satisfied. This shows that, under the boundary conditions (2) and (6), a large enough diffusion may always make the reaction-diffusion neural networks (1) and (4) globally exponentially synchronous under the intermittent controller (5) with condition (H5).

4 Numerical examples

In this section, by using the classical implicit format and the method of steps for differential-difference equations, we give some examples with numerical simulations to illustrate the effectiveness of the theoretical results obtained above.
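As a rough, purely illustrative sketch of such a scheme (backward-Euler in time with central differences in space for the diffusion part, the delayed and distributed terms read from stored history by the method of steps), a single-component step could look like the Python fragment below; all names, the discretization choices and the boundary handling are assumptions made for illustration, not the authors' code:

    import numpy as np

    def step_implicit(u_old, dt, dx, D, c, delayed_rhs):
        # One backward-Euler step for u_t = D u_xx - c u + delayed_rhs(x) on interior grid points,
        # with homogeneous Dirichlet boundary values as in (2); delayed_rhs is an array collecting
        # the delayed/distributed terms evaluated from stored history (method of steps).
        n_x = u_old.size
        r = D * dt / dx**2
        A = np.zeros((n_x, n_x))
        np.fill_diagonal(A, 1.0 + 2.0 * r + c * dt)      # implicit diffusion and decay
        idx = np.arange(n_x - 1)
        A[idx, idx + 1] = -r
        A[idx + 1, idx] = -r
        b = u_old + dt * delayed_rhs
        return np.linalg.solve(A, b)                      # solution at the new time level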

Example 1. For the sake of simplicity, we consider a reaction-diffusion neural network model described by

\[
\frac{d u_i(t,x)}{dt} = D_i\frac{\partial^2 u_i(t,x)}{\partial x^2} - c_i u_i(t,x) + \sum_{j=1}^{2} a_{ij}f_j\big(u_j(t,x)\big) + \sum_{j=1}^{2} b_{ij}g_j\big(u_j(t-\tau(t),x)\big) + \sum_{j=1}^{2} d_{ij}\int_{t-\tau^*(t)}^{t} h_j\big(u_j(s,x)\big)\,ds, \quad (9)
\]

where i = 1, 2, f_i(u_i) = 0.5(|u_i + 1| − |u_i − 1|) and g_i(u_i) = h_i(u_i) = tanh(u_i). Clearly, hypothesis (H1) is satisfied with L_i = M_i = N_i = 1


(i = 1, 2). The parameters of (9) are taken as c_1 = c_2 = 1, a_{11} = 0.2, a_{12} = −1, a_{21} = −0.3, a_{22} = 1, b_{11} = −1, b_{12} = −1.5, b_{21} = −0.2, b_{22} = −2, d_{11} = 0.8, d_{12} = 0.1, d_{21} = −0.5, d_{22} = −1, D_1 = 0.1, D_2 = 0.2, τ(t) = 0.7 + 0.1 sin(t) and τ*(t) = 1.3 + 0.6 cos(t). The initial condition of the drive system (9) is chosen as

\[
u_1(s,x) = 0.5\Big(1 + \frac{s-\tau(s)}{\pi}\Big)\sin\Big(\frac{x}{\pi}\Big), \qquad
u_2(s,x) = 0.3\Big(1 + \frac{s-\tau(s)}{\pi}\Big)\sin\Big(\frac{x}{\pi}\Big), \quad (10)
\]

where (s,x) ∈ [−1.9, 0] × Ω. The reaction-diffusion neural network (9) with boundary condition (2) and initial condition (10) exhibits chaotic behavior, as shown in Fig. 1.

The noise-perturbed response system is described by
\[
\begin{aligned}
dv_i(t,x) = {}&\bigg[D_i\frac{\partial^2 v_i(t,x)}{\partial x^2} - c_i v_i(t,x) + \sum_{j=1}^{2} a_{ij}f_j\big(v_j(t,x)\big) + \sum_{j=1}^{2} b_{ij}g_j\big(v_j(t-\tau(t),x)\big) \\
&+ \sum_{j=1}^{2} d_{ij}\int_{t-\tau^*(t)}^{t} h_j\big(v_j(s,x)\big)\,ds + w_i(t,x)\bigg]dt + \sum_{j=1}^{2}\sigma_{ij}\big(e_j(t,x),\, e_j(t-\tau(t),x)\big)\,d\omega_j(t), \quad (11)
\end{aligned}
\]

where
\[ \sigma_{11} = 0.1 e_1(t,x) + 0.2 e_1\big(t-\tau(t),x\big), \quad \sigma_{12} = 0, \quad \sigma_{21} = 0, \quad \sigma_{22} = 0.2 e_2(t,x) + 0.3 e_2\big(t-\tau(t),x\big). \]

Meanwhile, we set the initial condition for the response system (11) as
\[
v_1(s,x) = 0.1\Big(1 + \frac{s-\tau(s)}{\pi}\Big)\sin\Big(\frac{x}{\pi}\Big), \qquad
v_2(s,x) = 0.6\Big(1 + \frac{s-\tau(s)}{\pi}\Big)\sin\Big(\frac{x}{\pi}\Big)
\]
for (s,x) ∈ [−1.9, 0] × Ω.

By simple computation, we obtain τ̇(t) ≤ ϱ = 0.1 < 1, m_1 = 5, τ = 0.8, τ* = 1.9, τ̄ = 1.9, η_{11} = 0.04, η_{12} = η_{21} = 0 and η_{22} = 0.09. In addition, for convenience, we only consider the case p = 2. Choosing k_{11} = −7, k_{12} = 0, k_{21} = 0, k_{22} = −12, α = 0.95 and ξ_{lij} = ζ_{lij} = ξ*_{lij} = ζ*_{lij} = ξ**_{lij} = ζ**_{lij} = ϖ_{lij} = ε_{lij} = ε*_{lij} = 1/2 for l, i, j = 1, 2, we obtain
λ_1 = −7.865, λ_2 = −9.135, ω_1 = −14, ω_2 = −24,
ρ_{11} = 1.2163, ρ_{12} = 0.2339, ρ_{21} = 1.7543, ρ_{22} = 2.4444,
ρ*_{11} = 0.0608, ρ*_{12} = 0.0117, ρ*_{21} = 0.0887, ρ*_{22} = 0.1222.
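With these constants, assumption (H4) can be checked directly; the following sanity check (Python, restating only the numbers listed above) confirms λ_i − ω_i − Σ_j ρ_{ij} > 0 for i = 1, 2:

    lam = [-7.865, -9.135]
    omega = [-14.0, -24.0]
    rho = [[1.2163, 0.2339],
           [1.7543, 2.4444]]
    for i in range(2):
        h4 = lam[i] - omega[i] - sum(rho[i])
        print(f"(H4) for i = {i+1}: {h4:.4f} > 0")
    # i = 1: -7.865 + 14 - 1.4502 = 4.6848 > 0
    # i = 2: -9.135 + 24 - 4.1987 = 10.6663 > 0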



Fig. 1. Chaotic attractor of neural network model (9).

Fig. 2. Synchronization errors e1(t, x) and e2(t, x) between systems (9) and (11).

Hence, by computation, ε̄_1 = 1.4785, ε̄_2 = 1.4518 and θ > 23.9997. Therefore, ε = 1.4518, and in virtue of assumption (H5) we need δ > 18.7902 when T = 20 and θ = 24 are taken. Selecting δ = 19, condition (H5) is satisfied. According to Theorem 1, systems (9) and (11) are exponentially synchronized in terms of p-norm, as shown in Fig. 2.
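For the chosen constants, (H5) and the resulting decay rate of Theorem 1 can be re-checked in a couple of lines; the snippet below (Python) only restates values already given in this example, with p = 2:

    eps, theta, T, delta, p = 1.4518, 24.0, 20.0, 19.0, 2
    h5 = eps - (T - delta) * theta / T     # condition (H5): must be positive; here 0.2518
    mu = h5 / p                            # decay rate (eps*T - (T - delta)*theta)/(p*T); here 0.1259
    print(h5 > 0, mu)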

Example 2. Consider the neural networks with four dynamical nodes:

\[
\frac{d u_i(t,x)}{dt} = D_i\frac{\partial^2 u_i(t,x)}{\partial x^2} - c_i u_i(t,x) + \sum_{j=1}^{4} a_{ij}f_j\big(u_j(t,x)\big) + \sum_{j=1}^{4} b_{ij}g_j\big(u_j(t-\tau(t),x)\big) + \sum_{j=1}^{4} d_{ij}\int_{t-\tau^*}^{t} h_j\big(u_j(s,x)\big)\,ds + J_i, \quad (12)
\]

where i = 1, 2, 3, 4, f_i(u_i) = 0.5(|u_i + 1| − |u_i − 1|), g_i(u_i) = arctan(u_i) and h_i(u_i) = tanh(u_i). Similarly, we can derive that hypothesis (H1) is satisfied with L_i = M_i = N_i = 1 (i = 1, 2, 3, 4). The parameters of (12) are taken as c_i = D_i = 1, J_i = 0.1 (i = 1, 2, 3, 4), a_{11} = −1, a_{12} = 1, a_{13} = 0.5, a_{14} = −0.5, a_{21} = −0.3, a_{22} = −1, a_{23} = −1, a_{24} = −1, a_{31} = 0.6, a_{32} = 0.1, a_{33} = −1, a_{34} = −1, a_{41} = −0.2,



Fig. 3. Chaotic attractor of neural network model (12).

a_{42} = 0.1, a_{43} = −0.1, a_{44} = −0.5, b_{11} = 1, b_{12} = −0.1, b_{13} = −1, b_{14} = −0.1, b_{21} = −0.2, b_{22} = −1, b_{23} = −0.5, b_{24} = 0.1, b_{31} = 0.1, b_{32} = −0.1, b_{33} = 0.3, b_{34} = −0.4, b_{41} = −0.3, b_{42} = 0.1, b_{43} = −0.3, b_{44} = −0.5, d_{11} = 1.2, d_{12} = 0.1, d_{13} = 0.5, d_{14} = −0.4, d_{21} = −0.5, d_{22} = 0.5, d_{23} = −0.5, d_{24} = −0.1, d_{31} = −0.1, d_{32} = 0.6, d_{33} = 0.3, d_{34} = −0.4, d_{41} = −0.1, d_{42} = 0.1, d_{43} = 0.2, d_{44} = −0.3, τ(t) = 1.1t + 0.1 and τ* = 0.5. The initial condition of system (12) is chosen as

\[
u_1(s,x) = 0.5\Big(1 + \frac{s-\tau(s)}{\pi}\Big)\sin\Big(\frac{x}{\pi}\Big), \qquad
u_2(s,x) = 0.3\Big(1 + \frac{s-\tau(s)}{\pi}\Big)\sin\Big(\frac{x}{\pi}\Big),
\]
\[
u_3(s,x) = 0.2\Big(1 + \frac{s-\tau(s)}{\pi}\Big)\cos\Big(\frac{x}{\pi}\Big), \qquad
u_4(s,x) = 0.6\Big(1 + \frac{s-\tau(s)}{\pi}\Big)\cos\Big(\frac{x}{\pi}\Big), \quad (13)
\]

where (s,x) ∈ (−∞, 0] × Ω. The reaction-diffusion neural network (12) with boundary condition (2) and initial condition (13) exhibits chaotic behavior, as shown in Fig. 3.


The corresponding response system can be given as
\[
\begin{aligned}
dv_i(t,x) = {}&\bigg[D_i\frac{\partial^2 v_i(t,x)}{\partial x^2} - c_i v_i(t,x) + \sum_{j=1}^{4} a_{ij}f_j\big(v_j(t,x)\big) + \sum_{j=1}^{4} b_{ij}g_j\big(v_j(t-\tau(t),x)\big) \\
&+ \sum_{j=1}^{4} d_{ij}\int_{t-\tau^*}^{t} h_j\big(v_j(s,x)\big)\,ds + J_i + w_i(t,x)\bigg]dt + \sum_{j=1}^{4}\sigma_{ij}\big(e_j(t,x),\, e_j(t-\tau(t),x)\big)\,d\omega_j(t), \quad (14)
\end{aligned}
\]

where
\[ \sigma_{ii} = 0.1 e_i(t,x) + 0.2 e_i\big(t-\tau(t),x\big), \qquad \sigma_{ij} = 0, \quad i \ne j, \ i, j = 1, 2, 3, 4. \]

Meanwhile, the initial condition for the response system (14) is given by
\[
v_1(s,x) = 0.1\Big(1 + \frac{s-\tau(s)}{\pi}\Big)\sin\Big(\frac{x}{\pi}\Big), \qquad
v_2(s,x) = 0.6\Big(1 + \frac{s-\tau(s)}{\pi}\Big)\sin\Big(\frac{x}{\pi}\Big),
\]
\[
v_3(s,x) = 0.8\Big(1 + \frac{s-\tau(s)}{\pi}\Big)\cos\Big(\frac{x}{\pi}\Big), \qquad
v_4(s,x) = 0.2\Big(1 + \frac{s-\tau(s)}{\pi}\Big)\cos\Big(\frac{x}{\pi}\Big)
\]
for (s,x) ∈ (−∞, 0] × Ω.
In this case, τ̇(t) ≥ ϱ = 1.1 > 1. Choosing k_{11} = −5, k_{22} = −8, k_{33} = −10,

k_{44} = −12, k_{ij} = 0 (i ≠ j, i, j = 1, 2, 3, 4), T = 30 and δ = 29, similarly to Example 1, it follows from Fig. 4 that systems (12) and (14) are exponentially synchronized in terms of p-norm.

Remark 5. In [45], the authors studied the globally exponential synchronization for a class of reaction-diffusion neural networks with discrete variable delays and finite distributed constant delays based on periodically intermittent control under Dirichlet boundary conditions. Theorem 1 in [45] cannot be used to study this example with τ̇(t) > 1 for all t (fast-varying delay). However, after a simple computation, the conditions of our Theorem 1 hold. The numerical simulations clearly verify the effectiveness of the developed periodically intermittent controller for the exponential synchronization of reaction-diffusion neural networks with mixed time-varying delays and stochastic perturbation based on p-norm.

Remark 6. Note that the activation functions in [38, 46–50] are required to satisfy the condition
\[ 0 \le \frac{f_j(v_j) - f_j(\tilde v_j)}{v_j - \tilde v_j} \le L_j \]
for any v_j, ṽ_j ∈ R (j = 1, 2, . . . , n).


Fig. 4. Synchronization errors e1(t, x), e2(t, x), e3(t, x) and e4(t, x) between systems (12) and (14).

Obviously, this condition is stronger than the Lipschitz condition in (H1). Hence, the results in [38, 46–50] are not applicable to this example.

5 Conclusion

In this paper, a periodically intermittent controller has been proposed to ensure the exponential synchronization for a class of reaction-diffusion neural networks with mixed time-varying delays and stochastic noise perturbation under Dirichlet boundary conditions in terms of p-norm. The problem considered in this paper is more general in many aspects and incorporates as special cases various problems which have been studied extensively in the literature. Some remarks and numerical examples have been used to demonstrate the effectiveness of the obtained results.

It should be pointed out that there are many published papers focusing on the synchronization problems of chaotic neural networks, but mixed time-varying delays, stochastic perturbation and reaction-diffusion effects have never been taken into consideration together in the synchronization problem based on p-norm for a variety of neural networks. To the best knowledge of the authors, this is the first paper incorporating mixed time-varying delays, stochastic perturbation and reaction-diffusion effects into the problem of exponential synchronization for chaotic neural networks under periodically intermittent control in terms of p-norm.


From condition (H4) in Theorem 1, we see that, as long as the feedback strength parameters k_{ii} (i = 1, 2, . . . , n) are chosen small enough, condition (H4) always holds. Therefore, there always exists an appropriate periodically intermittent control input strategy for the response system (4) such that the drive-response systems (1) and (4) with boundary conditions (2) and (6) and initial conditions (3) and (7) are globally exponentially synchronized. However, the decomposition used in (A.5)–(A.10) and the inequality technique used in (A.11) may increase the conservatism of the criteria on the upper bound of the feedback strength k_{ii}. This is a problem that we will study in the future.

In fact, due to the different parameters, activation functions and neural network architectures that are unavoidable in real implementations, the master system and the response system are not identical, and the resulting synchronization is not exact. Therefore, it is important and challenging to study the synchronization problems of non-identical chaotic neural networks. Furthermore, our analysis is carried out under the assumption p ≥ 2 throughout this paper. Evidently, there is an interesting open problem concerning the exponential synchronization for non-identical reaction-diffusion neural networks with mixed time-varying delays and stochastic noise disturbance by using periodically intermittent control for p = 1 or based on the ∞-norm. This will be our future research direction.

Appendix. Proof of Theorem 1

Define the Lyapunov–Krasovskii functional V(t,x) ∈ C^{2,1}(R_+ × R^n; R_+) as
\[
\begin{aligned}
V(t,x) = \int_\Omega\sum_{i=1}^{n}\bigg[&V_i(t,x) + e^{\varepsilon\bar\tau}\sum_{j=1}^{n}\rho_{ij}\int_{t-\tau_{ij}(t)}^{t} V_i(s,x)\,ds + e^{\varepsilon\bar\tau}\sum_{j=1}^{n}\rho^*_{ij}\int_{t-\bar\tau}^{t-\tau_{ij}(t)} V_i(s,x)\,ds \\
&+ \sum_{j=1}^{n}\rho^{**}_{ij}\int_{-\tau^*_{ij}(t)}^{0}\int_{t+s}^{t} V_i(\eta,x)\,d\eta\,ds\bigg]dx, \quad (A.1)
\end{aligned}
\]
where
\[ V_i(t,x) = V_i\big(t, e(t,x)\big) = e^{\varepsilon t}\big|e_i(t,x)\big|^p, \quad i = 1, 2, \ldots, n. \]

By Itô's differential formula, we have the following stochastic differential:
\[ dV\big(t, e(t,x)\big) = \mathcal{L}V\big(t, e(t,x)\big)\,dt + V_e\big(t, e(t,x)\big)\sigma(t)\,d\omega(t), \quad (A.2) \]

where
\[ \mathcal{L}V\big(t, e(t,x)\big) = V_t\big(t, e(t,x)\big) + V_e\big(t, e(t,x)\big)\Phi + \frac{1}{2}\operatorname{trace}\big[\sigma^{\mathrm T}(t)\,V_{ee}\big(t, e(t,x)\big)\,\sigma(t)\big], \]
\[ V_t\big(t, e(t,x)\big) = \frac{\partial V(t, e(t,x))}{\partial t}, \]


\[ V_e\big(t, e(t,x)\big) = \bigg(\frac{\partial V(t, e(t,x))}{\partial e_1}, \ldots, \frac{\partial V(t, e(t,x))}{\partial e_n}\bigg), \qquad V_{ee}\big(t, e(t,x)\big) = \bigg(\frac{\partial^2 V(t, e(t,x))}{\partial e_i\,\partial e_j}\bigg)_{n\times n}, \qquad \Phi = (\Phi_1, \ldots, \Phi_n), \]
\[ \Phi_i = -c_i e_i(t,x) + \sum_{j=1}^{n} a_{ij}f^*_j\big(e_j(t,x)\big) + \sum_{j=1}^{n} b_{ij}g^*_j\big(e_j(t-\tau_{ij}(t),x)\big) + \sum_{j=1}^{n} d_{ij}\int_{t-\tau^*_{ij}(t)}^{t} h^*_j\big(e_j(s,x)\big)\,ds + w_i(t,x). \]

It follows from (A.2) and the Dini derivative that
\[
\begin{aligned}
D^+ E\{V(t,x)\} = {}& E\int_\Omega\sum_{i=1}^{n}\bigg\{\varepsilon V_i(t,x) + p e^{\varepsilon t}\big|e_i(t,x)\big|^{p-1} \\
&\times\bigg[-c_i\big|e_i(t,x)\big| + \sum_{j=1}^{n} a_{ij}f^*_j\big(\big|e_j(t,x)\big|\big) + \sum_{j=1}^{n} b_{ij}g^*_j\big(\big|e_j(t-\tau_{ij}(t),x)\big|\big) + \sum_{j=1}^{n} d_{ij}\int_{t-\tau^*_{ij}(t)}^{t} h^*_j\big(\big|e_j(s,x)\big|\big)\,ds + \sum_{j=1}^{n} k_{ij}\big|e_j(t,x)\big|\bigg] \\
&+ e^{\varepsilon\bar\tau}\sum_{j=1}^{n}\rho_{ij}\Big[V_i(t,x) - \big(1-\dot\tau_{ij}(t)\big)V_i\big(t-\tau_{ij}(t),x\big)\Big] + e^{\varepsilon\bar\tau}\sum_{j=1}^{n}\rho^*_{ij}\Big[\big(1-\dot\tau_{ij}(t)\big)V_i\big(t-\tau_{ij}(t),x\big) - V_i(t-\bar\tau,x)\Big] \\
&+ \sum_{j=1}^{n}\rho^{**}_{ij}\bigg[\tau^*_{ij}(t)V_i(t,x) - \int_{t-\tau^*_{ij}(t)}^{t} V_i(s,x)\,ds\bigg] + \frac{p(p-1)}{2}e^{\varepsilon t}\big|e_i(t,x)\big|^{p-2}\sum_{j=1}^{n}\sigma^2_{ij}\big(e_j(t,x),\, e_j(t-\tau_{ij}(t),x)\big)\bigg\}\,dx \\
&+ E\int_\Omega\sum_{i=1}^{n} p e^{\varepsilon t}\big|e_i(t,x)\big|^{p-1}\sum_{k=1}^{l^*}\frac{\partial}{\partial x_k}\Big(D_{ik}\frac{\partial|e_i(t,x)|}{\partial x_k}\Big)\,dx \quad (A.3)
\end{aligned}
\]
for (t,x) ∈ [mT, mT+δ) × Ω.


From the boundary conditions (2), (6) and Lemma 1, we can obtain [42]
\[ p\int_\Omega\big|e_i(t,x)\big|^{p-1}\sum_{k=1}^{l^*}\frac{\partial}{\partial x_k}\Big(D_{ik}\frac{\partial|e_i(t,x)|}{\partial x_k}\Big)\,dx \le -\sum_{k=1}^{l^*}\frac{4(p-1)D_{ik}}{p m_k^2}\int_\Omega\big|e_i(t,x)\big|^p\,dx. \quad (A.4) \]

Furthermore, it follows from (H1) and the fact
\[ a_1^p + a_2^p + \cdots + a_p^p \ge p\,a_1 a_2\cdots a_p, \qquad a_i \ge 0,\ i = 1, 2, \ldots, p, \]
that
\[
\begin{aligned}
p\big|e_i(t,x)\big|^{p-1}\sum_{\substack{j=1\\ j\neq i}}^{n} a_{ij}f^*_j\big(\big|e_j(t,x)\big|\big)
&\le p\big|e_i(t,x)\big|^{p-1}\sum_{\substack{j=1\\ j\neq i}}^{n}|a_{ij}|L_j\big|e_j(t,x)\big| \\
&= \sum_{\substack{j=1\\ j\neq i}}^{n} p\bigg[\prod_{l=1}^{p-1}\Big(|a_{ij}|^{\xi_{lij}}L_j^{\zeta_{lij}}\big|e_i(t,x)\big|\Big)\bigg]\Big(|a_{ij}|^{\xi_{pij}}L_j^{\zeta_{pij}}\big|e_j(t,x)\big|\Big) \\
&\le \sum_{\substack{j=1\\ j\neq i}}^{n}\sum_{l=1}^{p-1}|a_{ij}|^{p\xi_{lij}}L_j^{p\zeta_{lij}}\big|e_i(t,x)\big|^p + \sum_{\substack{j=1\\ j\neq i}}^{n}|a_{ij}|^{p\xi_{pij}}L_j^{p\zeta_{pij}}\big|e_j(t,x)\big|^p. \quad (A.5)
\end{aligned}
\]

Similarly, we have
\[
p\big|e_i(t,x)\big|^{p-1}\sum_{j=1}^{n} b_{ij}g^*_j\big(\big|e_j(t-\tau_{ij}(t),x)\big|\big) \le \sum_{j=1}^{n}\sum_{l=1}^{p-1}|b_{ij}|^{p\xi^*_{lij}}M_j^{p\zeta^*_{lij}}\big|e_i(t,x)\big|^p + \sum_{j=1}^{n}|b_{ij}|^{p\xi^*_{pij}}M_j^{p\zeta^*_{pij}}\big|e_j(t-\tau_{ij}(t),x)\big|^p, \quad (A.6)
\]
\[
p\big|e_i(t,x)\big|^{p-1}\sum_{j=1}^{n} d_{ij}\int_{t-\tau^*_{ij}(t)}^{t} h^*_j\big(\big|e_j(s,x)\big|\big)\,ds \le \sum_{j=1}^{n}\sum_{l=1}^{p-1}|d_{ij}|^{p\xi^{**}_{lij}}N_j^{p\zeta^{**}_{lij}}\big|e_i(t,x)\big|^p + \sum_{j=1}^{n}|d_{ij}|^{p\xi^{**}_{pij}}N_j^{p\zeta^{**}_{pij}}\bigg[\int_{t-\tau^*_{ij}(t)}^{t}\big|e_j(s,x)\big|\,ds\bigg]^p, \quad (A.7)
\]


\[
p\big|e_i(t,x)\big|^{p-1}\sum_{\substack{j=1\\ j\neq i}}^{n} k_{ij}\big|e_j(t,x)\big| \le \sum_{\substack{j=1\\ j\neq i}}^{n}\sum_{l=1}^{p-1}|k_{ij}|^{p\varpi_{lij}}\big|e_i(t,x)\big|^p + \sum_{\substack{j=1\\ j\neq i}}^{n}|k_{ij}|^{p\varpi_{pij}}\big|e_j(t,x)\big|^p, \quad (A.8)
\]
\[
p\big|e_i(t,x)\big|^{p-2}\sum_{\substack{j=1\\ j\neq i}}^{n}\eta_{ij}\big|e_j(t,x)\big|^2 \le \sum_{\substack{j=1\\ j\neq i}}^{n}\sum_{l=1}^{p-2}\eta_{ij}^{p\varepsilon_{lij}}\big|e_i(t,x)\big|^p + \sum_{\substack{j=1\\ j\neq i}}^{n}\Big(\eta_{ij}^{p\varepsilon_{(p-1)ij}} + \eta_{ij}^{p\varepsilon_{pij}}\Big)\big|e_j(t,x)\big|^p \quad (A.9)
\]
and
\[
p\big|e_i(t,x)\big|^{p-2}\sum_{j=1}^{n}\eta_{ij}\big|e_j(t-\tau_{ij}(t),x)\big|^2 \le \sum_{j=1}^{n}\sum_{l=1}^{p-2}\eta_{ij}^{p\varepsilon^*_{lij}}\big|e_i(t,x)\big|^p + \sum_{j=1}^{n}\Big(\eta_{ij}^{p\varepsilon^*_{(p-1)ij}} + \eta_{ij}^{p\varepsilon^*_{pij}}\Big)\big|e_j(t-\tau_{ij}(t),x)\big|^p. \quad (A.10)
\]

By applying (A.4)–(A.10), assumptions (H2)–(H3) and Lemmas 1–2 to (A.3), we have
\[
\begin{aligned}
D^+ E\{V(t,x)\} \le {}& E\int_\Omega\sum_{i=1}^{n}\Bigg\{\bigg[\varepsilon - p\Big(c_i - a_{ii}L_i - k_{ii} - \tfrac{1}{2}(p-1)\eta_{ii}\Big) - \sum_{k=1}^{l^*}\frac{4(p-1)D_{ik}}{p m_k^2} + \sum_{\substack{j=1\\ j\neq i}}^{n}\sum_{l=1}^{p-1}\Big(|a_{ij}|^{p\xi_{lij}}L_j^{p\zeta_{lij}} + |k_{ij}|^{p\varpi_{lij}}\Big) \\
&+ \sum_{j=1}^{n}\sum_{l=1}^{p-1}\Big(|b_{ij}|^{p\xi^*_{lij}}M_j^{p\zeta^*_{lij}} + |d_{ij}|^{p\xi^{**}_{lij}}N_j^{p\zeta^{**}_{lij}}\Big) + \frac{p-1}{2}\sum_{\substack{j=1\\ j\neq i}}^{n}\sum_{l=1}^{p-2}\eta_{ij}^{p\xi_{lij}} + \frac{p-1}{2}\sum_{j=1}^{n}\sum_{l=1}^{p-2}\eta_{ij}^{p\varepsilon^*_{lij}}\bigg]V_i(t,x) \\
&+ \bigg[\sum_{\substack{j=1\\ j\neq i}}^{n}\Big(|a_{ij}|^{p\xi_{pij}}L_j^{p\zeta_{pij}} + |k_{ij}|^{p\varpi_{pij}}\Big) + \frac{p-1}{2}\sum_{\substack{j=1\\ j\neq i}}^{n}\Big(\eta_{ij}^{p\varepsilon_{(p-1)ij}} + \eta_{ij}^{p\varepsilon_{pij}}\Big)\bigg]V_j(t,x) \\
&+ \sum_{j=1}^{n}\rho^{**}_{ij}\bigg[\tau^* V_i(t,x) - \int_{t-\tau^*_{ij}(t)}^{t} V_i(s,x)\,ds\bigg] + \sum_{j=1}^{n}|d_{ij}|^{p\xi^{**}_{pij}}N_j^{p\zeta^{**}_{pij}}\bigg[\int_{t-\tau^*_{ij}(t)}^{t}\big|e_j(s,x)\big|\,ds\bigg]^p \\
&+ \sum_{j=1}^{n}\bigg[|b_{ij}|^{p\xi^*_{pij}}M_j^{p\zeta^*_{pij}} + \frac{p-1}{2}\Big(\eta_{ij}^{p\varepsilon^*_{(p-1)ij}} + \eta_{ij}^{p\varepsilon^*_{pij}}\Big)\bigg]e^{\varepsilon\bar\tau}V_j\big(t-\tau_{ij}(t),x\big) \\
&+ e^{\varepsilon\bar\tau}\sum_{j=1}^{n}\rho_{ij}V_i(t,x) + e^{\varepsilon\bar\tau}\sum_{j=1}^{n}\big(-\alpha\rho_{ij}|1-\varrho|\big)V_i\big(t-\tau_{ij}(t),x\big)\Bigg\}\,dx \\
= {}& -E\int_\Omega\sum_{i=1}^{n}\bigg[\lambda_i - \omega_i - \varepsilon - \sum_{j=1}^{n}\rho_{ij}e^{\varepsilon\bar\tau}\bigg]V_i(t,x)\,dx \le 0, \quad (A.11)
\end{aligned}
\]

which implies that
\[ E\{V(t,x)\} \le E\{V(mT,x)\} \quad (A.12) \]
for (t,x) ∈ [mT, mT+δ) × Ω. Similarly, for (t,x) ∈ [mT+δ, (m+1)T) × Ω, we can get

\[
D^+ E\{V(t,x)\} \le -\int_\Omega\sum_{i=1}^{n}\bigg[\lambda_i + \theta_i - \varepsilon - e^{\varepsilon\bar\tau}\sum_{j=1}^{n}\rho_{ij}\bigg]V_i(t,x)\,dx + \int_\Omega\sum_{i=1}^{n}\theta_i V_i(t,x)\,dx \le \int_\Omega\sum_{i=1}^{n}\theta V_i(t,x)\,dx,
\]
which leads to
\[ E\{V(t,x)\} \le E\big\{V(mT+\delta,x)\big\}\exp\big\{\theta(t-mT-\delta)\big\} \]
for (t,x) ∈ [mT+δ, (m+1)T) × Ω.

Combining these two cases, we summarize:
(i) For (t,x) ∈ [0, δ) × Ω, it follows from (A.12) that
\[ E\{V(t,x)\} \le E\{V(0,x)\}. \]
(ii) For (t,x) ∈ [δ, T) × Ω, we have
\[ E\{V(t,x)\} \le E\big\{V(\delta,x)\big\}\exp\big\{\theta(t-\delta)\big\} \le E\big\{V(0,x)\big\}\exp\big\{\theta(t-\delta)\big\}. \]
(iii) For (t,x) ∈ [T, T+δ) × Ω, we get
\[ E\{V(t,x)\} \le E\{V(T,x)\} \le E\big\{V(0,x)\big\}\exp\big\{\theta(T-\delta)\big\}. \]
(iv) For (t,x) ∈ [T+δ, 2T) × Ω, we know
\[ E\{V(t,x)\} \le E\big\{V(T+\delta,x)\big\}\exp\big\{\theta(t-T-\delta)\big\} \le E\big\{V(0,x)\big\}\exp\big\{\theta(t-2\delta)\big\}. \]

Repeating this procedure, we obtain that, for (t,x) ∈ [mT, mT+δ) × Ω,
\[ E\{V(t,x)\} \le E\{V(mT,x)\} \le E\big\{V(0,x)\big\}\exp\big\{m\theta(T-\delta)\big\}. \quad (A.13) \]
Moreover, in the case of (t,x) ∈ [mT+δ, (m+1)T) × Ω, we have
\[ E\{V(t,x)\} \le E\big\{V(mT+\delta,x)\big\}\exp\big\{\theta(t-mT-\delta)\big\} \le E\big\{V(0,x)\big\}\exp\big\{\theta\big(t-(m+1)\delta\big)\big\}. \quad (A.14) \]
If (t,x) ∈ [mT, mT+δ) × Ω, we have m ≤ t/T; then it follows from (A.13) that
\[ E\{V(t,x)\} \le E\big\{V(0,x)\big\}\exp\bigg\{\frac{(T-\delta)\theta}{T}\,t\bigg\}. \quad (A.15) \]
Similarly, if mT+δ ≤ t < (m+1)T, we have t/T < m+1, and then it follows from (A.14) that (A.15) holds for (t,x) ∈ [mT+δ, (m+1)T) × Ω. Hence, for any (t,x) ∈ [0, +∞) × Ω, (A.15) always holds.

Note that
\[
\begin{aligned}
E\{V(0,x)\} = {}& E\int_\Omega\sum_{i=1}^{n}\bigg[V_i(0,x) + e^{\varepsilon\bar\tau}\sum_{j=1}^{n}\rho_{ij}\int_{-\tau_{ij}(0)}^{0} V_i(s,x)\,ds + e^{\varepsilon\bar\tau}\sum_{j=1}^{n}\rho^*_{ij}\int_{-\bar\tau}^{-\tau_{ij}(0)} V_i(s,x)\,ds + \sum_{j=1}^{n}\rho^{**}_{ij}\int_{-\tau^*_{ij}(0)}^{0}\int_{s}^{0} V_i(\eta,x)\,d\eta\,ds\bigg]dx \\
\le {}& E\int_\Omega\sum_{i=1}^{n}\bigg[\big|e_i(0,x)\big|^p + e^{\varepsilon\bar\tau}\sum_{j=1}^{n}\rho_{ij}\int_{-\tau_{ij}(0)}^{0} e^{\varepsilon s}\big|e_i(s,x)\big|^p\,ds + e^{\varepsilon\bar\tau}\sum_{j=1}^{n}\rho^*_{ij}\int_{-\bar\tau}^{-\tau_{ij}(0)} e^{\varepsilon s}\big|e_i(s,x)\big|^p\,ds + \sum_{j=1}^{n}\tau^*\rho^{**}_{ij}\int_{-\tau^*_{ij}(0)}^{0} e^{\varepsilon s}\big|e_i(s,x)\big|^p\,ds\bigg]dx \\
\le {}& \bigg[1 + \bar\tau e^{\varepsilon\bar\tau}\max_{i=1,\ldots,n}\bigg\{\sum_{j=1}^{n}\big(\rho_{ij} + \rho^*_{ij}\big)\bigg\} + \max_{i=1,\ldots,n}\bigg\{(\tau^*)^2\sum_{j=1}^{n}\rho^{**}_{ij}\bigg\}\bigg]\,E\big\{\|\psi - \phi\|_p^p\big\} \quad (A.16)
\end{aligned}
\]

and
\[ E\{V(t,x)\} \ge E\int_\Omega\sum_{i=1}^{n} e^{\varepsilon t}\big|e_i(t,x)\big|^p\,dx = e^{\varepsilon t}E\big\{\big\|v(t,x) - u(t,x)\big\|_p^p\big\}. \quad (A.17) \]

Let
\[ M = \bigg[1 + \bar\tau e^{\varepsilon\bar\tau}\max_{i=1,\ldots,n}\bigg\{\sum_{j=1}^{n}\big(\rho_{ij} + \rho^*_{ij}\big)\bigg\} + \max_{i=1,\ldots,n}\bigg\{(\tau^*)^2\sum_{j=1}^{n}\rho^{**}_{ij}\bigg\}\bigg]^{1/p} \ge 1, \qquad \mu = \frac{1}{p}\bigg[\varepsilon - \frac{(T-\delta)\theta}{T}\bigg] > 0. \]

It follows from (A.15)–(A.17) that
\[ E\big\|v(t,x) - u(t,x)\big\|_p \le M\,E\big\{\|\psi - \phi\|_p\big\}\,e^{-\mu t}, \]
which implies that the noise-perturbed response system (4) and the drive system (1) can be exponentially synchronized under the intermittent controller (5) based on p-norm. This completes the proof of Theorem 1.

References

1. A. Arunkumar, R. Sakthivel, K. Mathiyalagan, S. Marshal Anthoni, Robust stability criteria for discrete-time switched neural networks with various activation functions, Appl. Math. Comput., 218:10803–10816, 2012.
2. K. Mathiyalagan, R. Sakthivel, S. Marshal Anthoni, Robust exponential stability and H∞ control for switched neutral-type neural networks, Int. J. Adapt. Control Signal Process., 15 pp., 2012, doi: 10.1002/acs.2332.


3. R. Sakthivel, K. Mathiyalagan, S. Marshal Anthoni, Delay dependent robust stabilization and H∞ control for neural networks with various activation functions, Phys. Scr., 85, 045801, 10 pp., 2012.
4. P. Balasubramaniam, V. Vembarasan, R. Rakkiyappan, Delay-dependent robust exponential state estimation of Markovian jumping fuzzy Hopfield neural networks with mixed random time-varying delays, Commun. Nonlinear Sci. Numer. Simul., 16(4):2109–2129, 2011.
5. X. Li, J. Shen, LMI approach for stationary oscillation of interval neural networks with discrete and distributed time-varying delays under impulsive perturbations, IEEE Trans. Neural Networks, 21(10):1555–1563, 2010.
6. V.N. Phat, H. Trinh, Exponential stabilization of neural networks with various activation functions and mixed time-varying delays, IEEE Trans. Neural Networks, 21(7):1180–1184, 2010.
7. K. Yuan, J. Cao, Robust stability of switched Cohen–Grossberg neural networks with mixed time-varying delays, IEEE Trans. Neural Networks, 36(6):1356–1363, 2006.
8. S. Chen, J. Cao, Projective synchronization of neural networks with mixed time-varying delays and parameter mismatch, Nonlinear Dyn., 67(2):1397–1406, 2012.
9. L. Sheng, H. Yang, Exponential synchronization of a class of neural networks with mixed time-varying delays and impulsive effects, Neurocomputing, 71(16–18):3666–3674, 2008.
10. Q. Song, Design of controller on synchronization of chaotic neural networks with mixed time-varying delays, Neurocomputing, 72(13–15):3288–3295, 2009.
11. Q. Song, J. Cao, Synchronization of nonidentical chaotic neural networks with leakage delay and mixed time-varying delays, Adv. Differ. Equ., 2011, Article ID 16, 17 pp., 2011.
12. C. Zhang, Y. He, M. Wu, Exponential synchronization of neural networks with time-varying mixed delays and sampled-data, Neurocomputing, 74(1–3):265–273, 2010.
13. X. Liu, T. Chen, Cluster synchronization in directed networks via intermittent pinning control, IEEE Trans. Neural Networks, 22(7):1009–1020, 2011.
14. C. Deissenberg, Optimal control of linear econometric models with intermittent controls, Econ. Plann., 16:49–56, 1980.
15. J. Huang, C. Li, Q. Han, Stabilization of delayed chaotic neural networks by periodically intermittent control, Circuits Syst. Signal Process., 28:567–579, 2009.
16. W. Xia, J. Cao, Pinning synchronization of delayed dynamical networks via periodically intermittent control, Chaos, 19, 013120, 8 pp., 2009.
17. X. Yang, J. Cao, Stochastic synchronization of coupled neural networks with intermittent control, Phys. Lett. A, 373(36):3259–3272, 2009.
18. W. Zhang, J. Huang, P. Wei, Weak synchronization of chaotic neural networks with parameter mismatch via periodically intermittent control, Appl. Math. Modelling, 35(2):612–620, 2011.
19. J. Yu, C. Hu, H. Jiang, Z. Teng, Exponential synchronization of Cohen–Grossberg neural networks via periodically intermittent control, Neurocomputing, 74(10):1776–1782, 2011.


20. X. Liu, X. Shen, H. Zhang, Intermittent impulsive synchronization of chaotic delayed neural networks, Differ. Equ. Dyn. Syst., 19(1–2):149–169, 2011.
21. C. Hu, J. Yu, H. Jiang, Z. Teng, Exponential stabilization and synchronization of neural networks with time-varying delays via periodically intermittent control, Nonlinearity, 23:2369–2391, 2010.
22. S. Haykin, Neural Networks, Prentice-Hall, Englewood Cliffs, NJ, 1994.
23. R. Sakthivel, U. Karthik Raja, K. Mathiyalagan, A. Leelamani, Design of a robust controller on stabilization of stochastic neural networks with time varying delays, Phys. Scr., 85, 035003, 10 pp., 2012.
24. R. Sakthivel, K. Mathiyalagan, S. Marshal Anthoni, Robust H∞ control for uncertain discrete-time stochastic neural networks with time-varying delays, IET Control Theory Appl., 6(9):1220–1228, 2012.
25. Q. Zhu, J. Cao, Robust exponential stability of Markovian jump impulsive stochastic Cohen–Grossberg neural networks with mixed time delays, IEEE Trans. Neural Networks, 21(8):1314–1325, 2010.
26. P. Balasubramaniam, C. Vidhya, Global asymptotic stability of stochastic BAM neural networks with distributed delays and reaction-diffusion terms, J. Comput. Appl. Math., 234(12):3458–3466, 2010.
27. P. Balasubramaniam, C. Vidhya, Exponential stability of stochastic reaction-diffusion uncertain fuzzy neural networks with mixed delays and Markovian jumping parameters, Expert Syst. Appl., 39(3):3109–3115, 2012.
28. Q. Gan, R. Xu, P. Yang, Stability analysis of stochastic fuzzy cellular neural networks with time-varying delays and reaction-diffusion terms, Neural Process. Lett., 32:45–57, 2010.
29. Q. Gan, R. Xu, P. Yang, Exponential synchronization of stochastic fuzzy cellular neural networks with time delay in the leakage term and reaction-diffusion, Commun. Nonlinear Sci. Numer. Simul., 17(4):1862–1870, 2012.
30. Z. Li, R. Xu, Global asymptotic stability of stochastic reaction-diffusion neural networks with time delays in the leakage terms, Commun. Nonlinear Sci. Numer. Simul., 17(4):1681–1986, 2012.
31. Z. Liu, J. Peng, Delay-independent stability of stochastic reaction-diffusion neural networks with Dirichlet boundary conditions, Neural Comput. Appl., 19:151–158, 2010.
32. Y. Lv, W. Lv, J. Sun, Convergence dynamics of stochastic reaction-diffusion recurrent neural networks with continuously distributed delays, Nonlinear Anal., Real World Appl., 9(4):1590–1606, 2008.
33. J. Peng, Z. Liu, Stability analysis of stochastic reaction-diffusion delayed neural networks with Lévy noise, Neural Comput. Appl., 20:535–541, 2011.
34. L. Wan, Q. Zhou, Exponential stability of stochastic reaction-diffusion Cohen–Grossberg neural networks with delays, Appl. Math. Comput., 206(2):818–826, 2008.
35. X. Xu, J. Zhang, W. Zhang, Mean square exponential stability of stochastic neural networks with reaction-diffusion terms and delays, Appl. Math. Lett., 24(1):5–11, 2011.


36. X. Liu, Synchronization of linearly coupled neural networks with reaction-diffusion terms and unbounded time delays, Neurocomputing, 73(13–15):2681–2688, 2010.
37. Q. Ma, S. Xu, Y. Zou, G. Shi, Synchronization of stochastic chaotic neural networks with reaction-diffusion terms, Nonlinear Dyn., 67:2183–2196, 2012.
38. L. Wang, W. Ding, Synchronization for delayed non-autonomous reaction-diffusion fuzzy cellular neural networks, Commun. Nonlinear Sci. Numer. Simul., 17(1):170–182, 2012.
39. F. Yu, H. Jiang, Global exponential synchronization of fuzzy cellular neural networks with delays and reaction-diffusion terms, Neurocomputing, 74(4):509–515, 2011.
40. K. Wang, Z. Teng, H. Jiang, Global exponential synchronization in delayed reaction-diffusion cellular neural networks with the Dirichlet boundary conditions, Math. Comput. Modelling, 52(1–2):12–24, 2010.
41. W. Lin, Y. He, Complete synchronization of the noise-perturbed Chua's circuits, Chaos, 15(2), 023705, 2005.
42. C. Hu, H. Jiang, Z. Teng, Impulsive control and synchronization for delayed neural networks with reaction-diffusion terms, IEEE Trans. Neural Networks, 21(1):67–81, 2010.
43. E. Hewitt, K. Stromberg, Real and Abstract Analysis. A Modern Treatment of the Theory of Functions of a Real Variable, Springer-Verlag, Berlin, 1969.
44. B. Zhao, F. Deng, Adaptive exponential synchronization of stochastic delay neural networks with reaction-diffusion, in: W. Yu, H. He, N. Zhang (Eds.), Advances in Neural Networks. Proceedings of the 6th International Symposium on Neural Networks, Wuhan, China, May 26–29, 2009, Part I, Lect. Notes Comput. Sci., Vol. 5551, Springer-Verlag, Berlin, Heidelberg, 2009, pp. 550–559.
45. C. Hu, J. Yu, H. Jiang, Z. Teng, Exponential synchronization for reaction-diffusion networks with mixed delays in terms of p-norm via intermittent driving, Neural Netw., 31:1–11, 2012.
46. J. Qiu, Exponential stability of impulsive neural networks with time-varying delays and reaction-diffusion terms, Neurocomputing, 70:1102–1108, 2007.
47. J. Qiu, J. Cao, Delay-dependent exponential stability for a class of neural networks with time delays and reaction-diffusion terms, J. Franklin Inst., 346(4):301–314, 2009.
48. Z. Wang, H. Zhang, Global asymptotic stability of reaction-diffusion Cohen–Grossberg neural networks with continuously distributed delays, IEEE Trans. Neural Networks, 21(1):39–49, 2010.
49. Z. Wang, H. Zhang, P. Li, An LMI approach to stability analysis of reaction-diffusion Cohen–Grossberg neural networks concerning Dirichlet boundary conditions and distributed delays, IEEE Trans. Syst. Man Cybern., Part B: Cybern., 40(6):1596–1606, 2010.
50. X. Zhang, S. Wu, K. Li, Delay-dependent exponential stability for impulsive Cohen–Grossberg neural networks with time-varying delays and reaction-diffusion terms, Commun. Nonlinear Sci. Numer. Simul., 16(3):1524–1532, 2011.
