Hindawi Publishing Corporation
Mathematical Problems in Engineering
Volume 2010, Article ID 403749, 13 pages
doi:10.1155/2010/403749
Research Article
A Nonlinear Projection Neural Network for Solving Interval Quadratic Programming Problems and Its Stability Analysis
Huaiqin Wu, Rui Shi, Leijie Qin, Feng Tao, and Lijun He
College of Science, Yanshan University, Qinhuangdao 066001,
China
Correspondence should be addressed to Huaiqin Wu,
[email protected]
Received 26 February 2010; Accepted 19 July 2010
Academic Editor: Jitao Sun
Copyright © 2010 Huaiqin Wu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
This paper presents a nonlinear projection neural network for solving interval quadratic programs subject to box-set constraints in engineering applications. Based on the saddle point theorem, the equilibrium point of the proposed neural network is proved to be equivalent to the optimal solution of the interval quadratic optimization problem. By employing the Lyapunov function approach, the global exponential stability of the proposed neural network is analyzed. Two illustrative examples are provided to show the feasibility and the efficiency of the proposed method.
1. Introduction
In many engineering applications, including regression analysis, image and signal processing, parameter estimation, filter design, robust control, and so forth [1], it is necessary to solve the following quadratic programming problem:

min (1/2)x^T Q x + c^T x
subject to x ∈ Ω,   (1.1)

where Q ∈ R^{n×n}, c ∈ R^n, and Ω is a convex set. When Q is a positive definite matrix, the problem (1.1) is said to be a convex quadratic program. When Q is a positive semidefinite matrix, the problem (1.1) is said to be a degenerate convex quadratic program. In general,
the matrix Q is not precisely known, but can only be enclosed in intervals, that is, Q̲ ≤ Q ≤ Q̄. Such a quadratic program with interval data is usually called an interval quadratic program. In recent years, there have been some projection neural network approaches for solving the problem (1.1); see, for example, [2–15] and the references therein. In [2], Kennedy and Chua presented a primal network for solving the convex quadratic program. This network contains a finite penalty parameter, so it converges to an approximate solution only. To avoid the penalty parameter, in [3, 4], Xia proposed several primal projection neural networks for solving the convex quadratic program and its dual, and analyzed the global asymptotic stability of the proposed neural networks when the constraint set Ω is a box set. In [5, 6], Xia et al. presented a recurrent projection neural network for solving the convex quadratic program and related linear piecewise equations, and gave some conditions for exponential convergence. In [7, 8], Yang and Cao presented a delayed projection neural network for solving problem (1.1), and analyzed the global asymptotic stability and exponential stability of the proposed neural networks when the constraint set Ω is an unbounded box set. In order to solve the degenerate convex quadratic program, Tao et al. [9] and Xue and Bian [10, 11] proposed two projection neural networks, and proved that the equilibrium point of the proposed neural networks is equivalent to the KT point of the quadratic programming problem. In particular, in [10], the proposed neural network was shown to have complete convergence and finite-time convergence, and the nonsingular part of the output trajectory with respect to Q has an exponentially convergent rate. In [12, 13], Hu and Wang designed a general projection neural network for solving monotone linear variational inequalities and extended linear-quadratic programming problems, and proved that the proposed network is exponentially convergent when the constraint set Ω is a polyhedral set.
In order to solve the interval quadratic program, in [14], Ding and Huang presented a new class of interval projection neural networks, and proved that the equilibrium point of these neural networks is equivalent to the KT point of a class of interval quadratic programs. Furthermore, some sufficient conditions to ensure the existence and global exponential stability of the unique equilibrium point of interval projection neural networks were given. To the best of the authors' knowledge, the work in [14] is the first to study solving the interval quadratic program by a projection neural network. However, the interval quadratic program discussed in [14] is only a quadratic program without constraints, and thus has many limitations in practice. It is well known that the quadratic program with constraints is more common.
Motivated by the above discussion, in the present paper a new projection neural network for solving the interval quadratic programming problem with box-set constraints is presented. Based on the saddle point theorem, the equilibrium point of the proposed neural network is proved to be equivalent to the KT point of the interval quadratic program. By using the fixed point theorem, the existence and uniqueness of an equilibrium point of the proposed neural network are analyzed. By constructing a suitable Lyapunov function, a sufficient condition to ensure the existence and global exponential stability of the unique equilibrium point of the interval projection neural network is obtained.
This paper is organized as follows. Section 2 describes the system model and gives some necessary preliminaries. Section 3 proves the existence of the equilibrium point of the proposed neural network and discusses its global exponential stability. Section 4 provides two numerical examples to demonstrate the validity of the obtained results. Some conclusions are drawn in Section 5.
2. A Projection Neural Network Model
Consider the following interval quadratic programming problem:
min (1/2)x^T Q x + c^T x
subject to g ≤ Dx ≤ h,
Q̲ ≤ Q ≤ Q̄,   (2.1)

where Q̲ = (q̲_ij)_{n×n}, Q̄ = (q̄_ij)_{n×n}, Q = (q_ij)_{n×n} ∈ R^{n×n}; c, g, h ∈ R^n; and D = diag(d_1, d_2, ..., d_n) is a positive definite diagonal matrix. Q̲ ≤ Q ≤ Q̄ means q̲_ij ≤ q_ij ≤ q̄_ij, i, j = 1, ..., n.
The Lagrangian function of the problem (2.1) is

L(x, u, η) = (1/2)x^T Q x + c^T x − u^T (Dx − η),  Q̲ ≤ Q ≤ Q̄,   (2.2)
where u ∈ R^n is referred to as the Lagrange multiplier and η ∈ X = {u ∈ R^n | g ≤ u ≤ h}. Based on the well-known saddle point theorem [1], x* is an optimal solution of (2.1) if and only if there exist u* and η* satisfying L(x*, u, η*) ≤ L(x*, u*, η*) ≤ L(x, u*, η), that is,

(1/2)x*^T Q x* + c^T x* − u^T (Dx* − η*)
≤ (1/2)x*^T Q x* + c^T x* − u*^T (Dx* − η*)
≤ (1/2)x^T Q x + c^T x − u*^T (Dx − η).   (2.3)
By the first inequality in (2.3), (u − u*)^T (Dx* − η*) ≥ 0 for all u ∈ R^n; hence Dx* = η*. Let f(x) = (1/2)x^T Q x + c^T x − u*^T Dx. By the second inequality in (2.3), f(x*) − f(x) ≤ u*^T (η − η*) for all x ∈ R^n, η ∈ X. If there exists x ∈ R^n such that f(x*) − f(x) > 0, then 0 < u*^T (η − η*) for all η ∈ X, which is a contradiction when η = η*. Thus, for any x ∈ R^n, it follows that f(x*) − f(x) ≤ 0 and u*^T (η − η*) ≥ 0 for all η ∈ X.
By using the projection formulation [16], the above inequality can be equivalently represented as η* = P_X(η* − u*), where P_X(u) = (P_X(u_1), P_X(u_2), ..., P_X(u_n))^T is a projection function and, for i = 1, 2, ..., n,

P_X(u_i) = { g_i,  u_i < g_i;   u_i,  g_i ≤ u_i ≤ h_i;   h_i,  u_i > h_i }.   (2.4)
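For a box set X, the projection (2.4) is just a componentwise clip. A minimal sketch in Python (NumPy is an assumed dependency, and `project_box` is an illustrative name, not from the paper):

```python
import numpy as np

def project_box(u, g, h):
    """Componentwise projection P_X onto the box X = {u : g <= u <= h},
    as in (2.4): each coordinate u_i is clipped into [g_i, h_i]."""
    return np.minimum(np.maximum(np.asarray(u, dtype=float), g), h)

# Example: the box X = [0, 1] x [0, 1]
g = np.array([0.0, 0.0])
h = np.array([1.0, 1.0])
print(project_box([-0.5, 0.3], g, h))  # first coordinate raised to 0, second kept
```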
On the other hand, f(x*) ≤ f(x) for all x ∈ R^n. This implies that

∇f(x*) = Qx* + c − Du* = 0.   (2.5)
Thus, x* is an optimal solution of (2.1) if and only if there exist u* and η* such that (x*, u*, η*) satisfies

Dx = η,
Qx + c − Du = 0,
η = P_X(η − u).   (2.6)
From (2.6), it follows that Dx = P_X(Dx − u). Hence, x* is an optimal solution of (2.1) if and only if there exists u* such that (x*, u*) satisfies

Qx + c − Du = 0,
Dx = P_X(Dx − u).   (2.7)

Substituting u = D^{-1}(Qx + c) into the equation Dx = P_X(Dx − u), we have

Dx = P_X(Dx − D^{-1}Qx − D^{-1}c),   (2.8)
where D^{-1} = diag(1/d_1, 1/d_2, ..., 1/d_n), and D^{-1}Q is the n×n matrix whose (i, j) entry is q_ij/d_i.   (2.9)
By the above discussion, we can obtain the following proposition.

Proposition 2.1. Let x* be a solution of the projection equation

Dx = P_X(Dx − D^{-1}Qx − D^{-1}c);   (2.10)

then x* is an optimal solution of the problem (2.1).
Figure 1: Architecture of the proposed neural network in (2.11). (Block diagram: summing junctions with weights m_ij and d_ij, projection operators P_X, integrators, and output states x_1, ..., x_n; figure not reproduced.)
In the following, we propose a neural network, which is said to be the interval projection neural network, for solving (2.1) and (2.10), whose dynamical equation is defined as follows:

dx(t)/dt = P_X(Dx − D^{-1}Qx − D^{-1}c) − Dx,  t > t_0,
x(t_0) = x_0,
Q̲ ≤ Q ≤ Q̄.   (2.11)
The neural network (2.11) can be equivalently written as

dx_i(t)/dt = P_X( d_i x_i − (1/d_i) Σ_{j=1}^n q_ij x_j − c_i/d_i ) − d_i x_i,  t > t_0,
x_i(t_0) = x_i0,
q̲_ij ≤ q_ij ≤ q̄_ij,  i = 1, ..., n,  j = 1, ..., n.   (2.12)
Figure 1 shows the architecture of the neural network (2.11), where M = (m_ij)_{n×n} = D − D^{-1}Q, C = D^{-1}c, and D = (d_ij)_{n×n}.
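The dynamics (2.11) can be simulated with a simple forward-Euler scheme. A minimal sketch, using an illustrative one-dimensional instance whose data D, Q, c, box bounds, step size, and horizon are all assumptions (chosen so the minimizer x* = 0.5 of the corresponding QP is easy to verify by hand, since 0.5x − 0.25 = 0 has its root inside the box):

```python
import numpy as np

def simulate(D, Q, c, g, h, x0, dt=0.01, T=20.0):
    """Forward-Euler integration of dx/dt = P_X(Dx - D^{-1}Qx - D^{-1}c) - Dx,
    where P_X is the componentwise clip onto the box [g, h]."""
    D_inv = np.diag(1.0 / np.diag(D))
    x = np.asarray(x0, dtype=float)
    for _ in range(int(T / dt)):
        arg = D @ x - D_inv @ Q @ x - D_inv @ c
        x = x + dt * (np.clip(arg, g, h) - D @ x)
    return x

# Illustrative instance: min 0.5*0.5*x^2 - 0.25*x subject to 0 <= x <= 1,
# whose minimizer x* = 0.5 lies in the interior of the box.
D = np.array([[1.0]]); Q = np.array([[0.5]]); c = np.array([-0.25])
x = simulate(D, Q, c, g=np.array([0.0]), h=np.array([1.0]), x0=np.array([3.0]))
print(x)  # ≈ [0.5]
```

A fixed-step Euler scheme is only a sketch; any standard ODE integrator could replace the loop.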
Definition 2.2. The point x* is said to be an equilibrium point of the interval projection neural network (2.11) if x* satisfies

0 = P_X(Dx* − D^{-1}Qx* − D^{-1}c) − Dx*.   (2.13)
By Proposition 2.1 and Definition 2.2, we have the following theorem.

Theorem 2.3. The point x* is an equilibrium point of the interval projection neural network (2.11) if and only if it is an optimal solution of the interval quadratic program (2.1).
Definition 2.4. The equilibrium point x* of the neural network (2.11) is said to be globally exponentially stable if the trajectory x(t) of the neural network (2.11) with the initial value x_0 satisfies

‖x(t) − x*‖ ≤ c_0 exp(−β(t − t_0)),  ∀t ≥ t_0,   (2.14)

where β > 0 is a constant independent of the initial value x_0 and c_0 > 0 is a constant dependent on the initial value x_0. Here ‖·‖ denotes the 1-norm on R^n, that is, ‖x‖ = Σ_{i=1}^n |x_i|.
Lemma 2.5 (see [17]). Let Ω ⊂ R^n be a closed convex set. Then

(v − P_Ω(v))^T (P_Ω(v) − u) ≥ 0,  ∀u ∈ Ω, v ∈ R^n,
‖P_Ω(u) − P_Ω(v)‖ ≤ ‖u − v‖,  ∀u, v ∈ R^n,   (2.15)

where P_Ω(u) is the projection onto Ω, given by P_Ω(u) = arg min_{y∈Ω} ‖u − y‖.
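Both properties in (2.15) can be spot-checked numerically for a box set, where the projection is a componentwise clip. A small sketch (the box, sample count, and random seed are arbitrary assumptions; the 1-norm is used for nonexpansiveness, which is valid for a box because the projection acts coordinate by coordinate):

```python
import numpy as np

rng = np.random.default_rng(0)
g, h = np.full(4, -1.0), np.full(4, 1.0)   # box X = [-1, 1]^4

def P(u):
    """Projection onto the box X (componentwise clip)."""
    return np.clip(u, g, h)

for _ in range(1000):
    u, v = 3 * rng.normal(size=4), 3 * rng.normal(size=4)
    # first property of (2.15): (v - P(v))^T (P(v) - u') >= 0 for u' in X;
    # here u' = P(u) is one arbitrary point of X
    assert (v - P(v)) @ (P(v) - P(u)) >= -1e-12
    # second property: nonexpansiveness, checked here in the 1-norm
    assert np.sum(np.abs(P(u) - P(v))) <= np.sum(np.abs(u - v)) + 1e-12
print("both properties of Lemma 2.5 hold on all sampled points")
```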
3. Stability Analysis
In order to obtain the results in this paper, we make the following assumption for the neural network (2.11):

H1:  q_ii ≤ d_i²,  d_i² − d_i/d* < q_ii − Σ_{j=1, j≠i}^n q*_ji < d_i²,  i = 1, 2, ..., n,   (3.1)

where d* = max_{1≤i≤n} 1/d_i and q*_ji = max{|q̲_ji|, |q̄_ji|}, and the conditions are required to hold for every admissible q_ii ∈ [q̲_ii, q̄_ii].
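The assumption H1 lends itself to a direct numerical check. A sketch, assuming the reading above that the diagonal condition must hold at both interval endpoints (which is how the examples in Section 4 verify it); `check_H1` is an illustrative name:

```python
import numpy as np

def check_H1(D_diag, Q_lo, Q_hi):
    """Numerically check assumption H1 of (3.1) for the interval matrix
    [Q_lo, Q_hi] and D = diag(D_diag).  The diagonal entry q_ii is tested
    at both interval endpoints so the condition holds for every admissible Q."""
    d = np.asarray(D_diag, dtype=float)
    d_star = np.max(1.0 / d)                          # d* = max_i 1/d_i
    q_star = np.maximum(np.abs(Q_lo), np.abs(Q_hi))   # q*_ji
    for i in range(d.size):
        s = q_star[:, i].sum() - q_star[i, i]         # sum over j != i of q*_ji
        for q_ii in (Q_lo[i, i], Q_hi[i, i]):
            if not (q_ii <= d[i] ** 2):
                return False
            if not (d[i] ** 2 - d[i] / d_star < q_ii - s < d[i] ** 2):
                return False
    return True

# Data satisfying H1 (the same numbers appear in Example 4.1 below):
Q_lo = np.array([[3.0, 0.1], [0.1, 0.7]])
Q_hi = np.array([[3.1, 0.2], [0.2, 0.8]])
print(check_H1([2.0, 1.0], Q_lo, Q_hi))  # True
```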
Theorem 3.1. If the assumption H1 is satisfied, then there exists a unique equilibrium point for the neural network (2.11).

Proof. Let T(x) = D^{-1} P_X(Dx − D^{-1}Qx − D^{-1}c), x ∈ R^n. By Definition 2.2, it is obvious that the neural network (2.11) has a unique equilibrium point if and only if T has a unique fixed point in R^n. In the following, by using the fixed point theorem, we prove that T has a unique
fixed point in R^n. For any x, y ∈ R^n, by Lemma 2.5 and the assumption H1, we can obtain that

‖T(x) − T(y)‖ = ‖D^{-1} P_X(Dx − D^{-1}Qx − D^{-1}c) − D^{-1} P_X(Dy − D^{-1}Qy − D^{-1}c)‖
≤ ‖D^{-1}‖ ‖P_X(Dx − D^{-1}Qx − D^{-1}c) − P_X(Dy − D^{-1}Qy − D^{-1}c)‖
≤ ‖D^{-1}‖ ‖(Dx − D^{-1}Qx − D^{-1}c) − (Dy − D^{-1}Qy − D^{-1}c)‖
= ‖D^{-1}‖ ‖(D − D^{-1}Q)(x − y)‖
≤ ‖D^{-1}‖ ‖D − D^{-1}Q‖ ‖x − y‖
= max_{1≤i≤n}(1/d_i) · max_{1≤i≤n}( d_i − (1/d_i)q_ii + (1/d_i) Σ_{j=1, j≠i}^n |q_ji| ) · ‖x − y‖
≤ d* max_{1≤i≤n}( d_i − (1/d_i)q_ii + (1/d_i) Σ_{j=1, j≠i}^n q*_ji ) ‖x − y‖
= max_{1≤i≤n} ε_i ‖x − y‖,   (3.2)

where ε_i = d*( d_i − (1/d_i)q_ii + (1/d_i) Σ_{j=1, j≠i}^n q*_ji ). By the assumption H1, 0 < d*( d_i − (1/d_i)q_ii + (1/d_i) Σ_{j=1, j≠i}^n q*_ji ) < 1, i = 1, 2, ..., n. This implies that 0 < max_{1≤i≤n} ε_i < 1. Equation (3.2) shows that T is a contractive mapping, and hence T has a unique fixed point. This completes the proof.
Proposition 3.2. If the assumption H1 holds, then for any x_0 ∈ R^n, there exists a solution with the initial value x(0) = x_0 for the neural network (2.11).

Proof. Let F = D(T − I), where I is the identity mapping; then F(x) = P_X(Dx − D^{-1}Qx − D^{-1}c) − Dx. By (3.2), we have

‖F(x) − F(y)‖ = ‖D(T − I)(x) − D(T − I)(y)‖ ≤ max_{1≤i≤n} d_i (1 + max_{1≤i≤n} ε_i) ‖x − y‖,  ∀x, y ∈ R^n.   (3.3)

Equation (3.3) means that the mapping F is globally Lipschitz. Hence, for any x_0 ∈ R^n, there exists a solution with the initial value x(0) = x_0 for the neural network (2.11). This completes the proof.

Proposition 3.2 shows the existence of the solution for the neural network (2.11).
Theorem 3.3. If the assumption H1 is satisfied, then the equilibrium point of the neural network (2.11) is globally exponentially stable.

Proof. By Theorem 3.1, the neural network (2.11) has a unique equilibrium point, which we denote by x*.

Consider the Lyapunov function V(t) = ‖x(t) − x*‖ = Σ_{i=1}^n |x_i(t) − x*_i|. Calculate the derivative of V(t) along the solution x(t) of the neural network (2.11). When t > t_0, we have
dV(t)/dt = Σ_{i=1}^n [(x_i(t) − x*_i)/|x_i(t) − x*_i|] · d(x_i(t) − x*_i)/dt
= Σ_{i=1}^n [(x_i(t) − x*_i)/|x_i(t) − x*_i|] · ( P_X( d_i x_i − (1/d_i) Σ_{j=1}^n q_ij x_j − c_i/d_i ) − d_i x_i )
= Σ_{i=1}^n [(x_i(t) − x*_i)/|x_i(t) − x*_i|] · ( P_X( d_i x_i − (1/d_i) Σ_{j=1}^n q_ij x_j − c_i/d_i ) − d_i x*_i + d_i x*_i − d_i x_i )
= −Σ_{i=1}^n d_i |x_i(t) − x*_i| + Σ_{i=1}^n [(x_i(t) − x*_i)/|x_i(t) − x*_i|] · ( P_X( d_i x_i − (1/d_i) Σ_{j=1}^n q_ij x_j − c_i/d_i ) − d_i x*_i )
≤ (−min_{1≤i≤n} d_i) Σ_{i=1}^n |x_i(t) − x*_i| + Σ_{i=1}^n | P_X( d_i x_i − (1/d_i) Σ_{j=1}^n q_ij x_j − c_i/d_i ) − d_i x*_i |
= (−min_{1≤i≤n} d_i) ‖x − x*‖ + ‖P_X(Dx − D^{-1}Qx − D^{-1}c) − Dx*‖.   (3.4)
Noting that Dx* = P_X(Dx* − D^{-1}Qx* − D^{-1}c), by Lemma 2.5 we have

‖P_X(Dx − D^{-1}Qx − D^{-1}c) − Dx*‖ = ‖P_X(Dx − D^{-1}Qx − D^{-1}c) − P_X(Dx* − D^{-1}Qx* − D^{-1}c)‖
≤ ‖(Dx − D^{-1}Qx − D^{-1}c) − (Dx* − D^{-1}Qx* − D^{-1}c)‖
= ‖(D − D^{-1}Q)(x − x*)‖
≤ ‖D − D^{-1}Q‖ ‖x − x*‖.   (3.5)
Hence,

dV(t)/dt ≤ ( −min_{1≤i≤n} d_i + max_{1≤i≤n}( d_i − (1/d_i)|q_ii| + (1/d_i) Σ_{j=1, j≠i}^n |q_ji| ) ) · ‖x − x*‖
≤ ( −1/d* + max_{1≤i≤n}( d_i − (1/d_i)q_ii + (1/d_i) Σ_{j=1, j≠i}^n q*_ji ) ) · ‖x − x*‖
= max_{1≤i≤n} ε′_i ‖x − x*‖,   (3.6)

where ε′_i = d_i − (1/d_i)q_ii + (1/d_i) Σ_{j=1, j≠i}^n q*_ji − 1/d*. By the assumption H1, ε′_i < 0; hence max_{1≤i≤n} ε′_i < 0. Let ε* = min_{1≤i≤n} |ε′_i|; then ε* > 0. Equation (3.6) can be rewritten as dV(t)/dt ≤ −ε*‖x − x*‖. It follows easily that ‖x(t) − x*‖ ≤ ‖x_0 − x*‖ exp(−ε*(t − t_0)) for all t > t_0. This shows that the equilibrium point x* of the neural network (2.11) is globally exponentially stable. This completes the proof.
4. Illustrative Examples
Example 4.1. Consider the interval quadratic program defined by D = diag(2, 1), g = (2, 2)^T, h = (3, 2)^T, c^T = (−1, −1), Q̲ = [3, 0.1; 0.1, 0.7], Q̄ = [3.1, 0.2; 0.2, 0.8], and Q* = (q*_ji) = [3.1, 0.2; 0.2, 0.8].

The optimal solution of this quadratic program is (1, 2) under Q = Q̲ or Q = Q̄. It is easy to check that

3.0 ≤ q_11 ≤ 3.1,  q_11 < d_1² = 4,
0.7 ≤ q_22 ≤ 0.8,  q_22 < d_2² = 1,
d_1² − d_1/d* = 2 < q_11 − q*_21 = 3.0 − 0.2 < d_1² = 4,
d_2² − d_2/d* = 0 < q_22 − q*_12 = 0.7 − 0.2 < d_2² = 1.   (4.1)

The assumption H1 holds. By Theorems 3.1 and 3.3, the neural network (2.11) has a unique equilibrium point which is globally exponentially stable, and the unique equilibrium point (1, 2) is the optimal solution of this quadratic programming problem.
In the case of Q = Q̲, Figure 2 reveals that the projection neural network (2.11) with random initial value (2.5, −0.5) has a unique equilibrium point (1, 2) which is globally exponentially stable. In the case of Q = Q̄, Figure 3 reveals that the projection neural network (2.11) with random initial value (−2.5, 3) has the same unique equilibrium point (1, 2), which is globally exponentially stable. These results are in accordance with the conclusions of Theorems 3.1 and 3.3.
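The convergence reported in this example can be reproduced with a forward-Euler simulation of (2.11) on the data of Example 4.1, with Q fixed at its lower endpoint and the initial value used for Figure 2; the step size and horizon are assumptions, not taken from the paper:

```python
import numpy as np

# Data of Example 4.1, with Q at the lower interval endpoint.
D = np.diag([2.0, 1.0])
Q = np.array([[3.0, 0.1], [0.1, 0.7]])
c = np.array([-1.0, -1.0])
g = np.array([2.0, 2.0]); h = np.array([3.0, 2.0])

D_inv = np.diag(1.0 / np.diag(D))
x = np.array([2.5, -0.5])            # initial value used for Figure 2
dt, T = 0.01, 20.0
for _ in range(int(T / dt)):         # Euler steps of dx/dt = P_X(...) - Dx
    arg = D @ x - D_inv @ Q @ x - D_inv @ c
    x = x + dt * (np.clip(arg, g, h) - D @ x)
print(x)  # ≈ [1., 2.], the optimal solution
```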
Figure 2: Convergence of the state trajectory of the neural network with random initial value (2.5, −0.5); Q = Q̲ in this example. (Plot of x_1(t) and x_2(t) over t ∈ [0, 20]; not reproduced.)
Figure 3: Convergence of the state trajectory of the neural network with random initial value (−2.5, 3); Q = Q̄ in this example. (Plot of x_1(t) and x_2(t) over t ∈ [0, 20]; not reproduced.)
Example 4.2. Consider the interval quadratic program defined by D = diag(1, 2, 2), g = (1, 2, 3)^T, h = (2, 3, 4)^T, c^T = (1, 1, 1), Q̲ = [0.8, 0.2, 0.3; 0.2, 3, 0.1; 0.3, 0.1, 3.5], Q̄ = [0.9, 0.3, 0.4; 0.3, 3.1, 0.2; 0.4, 0.2, 3.6], and Q* = (q*_ji) = [0.9, 0.3, 0.4; 0.3, 3.1, 0.2; 0.4, 0.2, 3.6].
Figure 4: Convergence of the state trajectory of the neural network with random initial value (−0.5, 0.6, −0.8); Q = Q̲ in this example. (Plot of x_1(t), x_2(t), and x_3(t) over t ∈ [0, 20]; not reproduced.)
The optimal solution of this quadratic program is (1, 1, 1.5) under Q = Q̲ or Q = Q̄. It is easy to check that

0.8 ≤ q_11 ≤ 0.9,  q_11 < d_1² = 1,
3.0 ≤ q_22 ≤ 3.1,  q_22 < d_2² = 4,
3.5 ≤ q_33 ≤ 3.6,  q_33 < d_3² = 4,
d_1² − d_1/d* = 0 < q_11 − (q*_21 + q*_31) = 0.8 − (0.3 + 0.4) < d_1² = 1,
d_2² − d_2/d* = 2 < q_22 − (q*_12 + q*_32) = 3.0 − (0.3 + 0.2) < d_2² = 4,
d_3² − d_3/d* = 2 < q_33 − (q*_13 + q*_23) = 3.5 − (0.4 + 0.2) < d_3² = 4.   (4.2)

The assumption H1 holds. By Theorems 3.1 and 3.3, the neural network (2.11) has a unique equilibrium point which is globally exponentially stable, and the unique equilibrium point (1, 1, 1.5) is the optimal solution of this quadratic programming problem.
In the case of Q = Q̲, Figure 4 reveals that the projection neural network (2.11) with random initial value (−0.5, 0.6, −0.8) has a unique equilibrium point (1, 1, 1.5) which is globally exponentially stable. In the case of Q = Q̄, Figure 5 reveals that the projection neural network (2.11) with random initial value (0.8, −0.6, 0.3) has the same unique equilibrium point (1, 1, 1.5), which is globally exponentially stable. These results are in accordance with the conclusions of Theorems 3.1 and 3.3.
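As with Example 4.1, the convergence in this example can be reproduced with a forward-Euler simulation of (2.11), using the data of Example 4.2 with Q fixed at its lower endpoint and the initial value used for Figure 4; step size and horizon are again assumptions:

```python
import numpy as np

# Data of Example 4.2, with Q at the lower interval endpoint.
D = np.diag([1.0, 2.0, 2.0])
Q = np.array([[0.8, 0.2, 0.3],
              [0.2, 3.0, 0.1],
              [0.3, 0.1, 3.5]])
c = np.array([1.0, 1.0, 1.0])
g = np.array([1.0, 2.0, 3.0]); h = np.array([2.0, 3.0, 4.0])

D_inv = np.diag(1.0 / np.diag(D))
x = np.array([-0.5, 0.6, -0.8])      # initial value used for Figure 4
dt, T = 0.01, 20.0
for _ in range(int(T / dt)):         # Euler steps of dx/dt = P_X(...) - Dx
    arg = D @ x - D_inv @ Q @ x - D_inv @ c
    x = x + dt * (np.clip(arg, g, h) - D @ x)
print(x)  # ≈ [1., 1., 1.5], the optimal solution
```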
Figure 5: Convergence of the state trajectory of the neural network with random initial value (0.8, −0.6, 0.3); Q = Q̄ in this example. (Plot of x_1(t), x_2(t), and x_3(t) over t ∈ [0, 20]; not reproduced.)
5. Conclusion
In this paper, we have developed a new projection neural network for solving interval quadratic programs; the equilibrium point of the proposed neural network is equivalent to the solution of the interval quadratic program. A condition is derived which ensures the existence, uniqueness, and global exponential stability of the equilibrium point. The results obtained are valuable in both theory and practice for solving interval quadratic programs in engineering.
Acknowledgment
This paper was supported by the Hebei Province Education Foundation of China (2009157).
References
[1] M. S. Bazaraa, H. D. Sherali, and C. M. Shetty, Nonlinear Programming: Theory and Algorithms, John Wiley & Sons, Hoboken, NJ, USA, 2nd edition, 1993.
[2] M. P. Kennedy and L. O. Chua, "Neural networks for nonlinear programming," IEEE Transactions on Circuits and Systems, vol. 35, no. 5, pp. 554–562, 1988.
[3] Y. Xia, "A new neural network for solving linear and quadratic programming problems," IEEE Transactions on Neural Networks, vol. 7, no. 6, pp. 1544–1547, 1996.
[4] Y. Xia and J. Wang, "A general projection neural network for solving monotone variational inequalities and related optimization problems," IEEE Transactions on Neural Networks, vol. 15, no. 2, pp. 318–328, 2004.
[5] Y. Xia, G. Feng, and J. Wang, "A recurrent neural network with exponential convergence for solving convex quadratic program and related linear piecewise equations," Neural Networks, vol. 17, no. 7, pp. 1003–1015, 2004.
[6] Y. Xia and G. Feng, "An improved neural network for convex quadratic optimization with application to real-time beamforming," Neurocomputing, vol. 64, no. 1–4, pp. 359–374, 2005.
[7] Y. Yang and J. Cao, "Solving quadratic programming problems by delayed projection neural network," IEEE Transactions on Neural Networks, vol. 17, no. 6, pp. 1630–1634, 2006.
[8] Y. Yang and J. Cao, "A delayed neural network method for solving convex optimization problems," International Journal of Neural Systems, vol. 16, no. 4, pp. 295–303, 2006.
[9] Q. Tao, J. Cao, and D. Sun, "A simple and high performance neural network for quadratic programming problems," Applied Mathematics and Computation, vol. 124, no. 2, pp. 251–260, 2001.
[10] X. Xue and W. Bian, "A project neural network for solving degenerate convex quadratic program," Neurocomputing, vol. 70, no. 13–15, pp. 2449–2459, 2007.
[11] X. Xue and W. Bian, "A project neural network for solving degenerate quadratic minimax problem with linear constraints," Neurocomputing, vol. 72, no. 7–9, pp. 1826–1838, 2009.
[12] X. Hu, "Applications of the general projection neural network in solving extended linear-quadratic programming problems with linear constraints," Neurocomputing, vol. 72, no. 4–6, pp. 1131–1137, 2009.
[13] X. Hu and J. Wang, "Design of general projection neural networks for solving monotone linear variational inequalities and linear and quadratic optimization problems," IEEE Transactions on Systems, Man, and Cybernetics, Part B, vol. 37, no. 5, pp. 1414–1421, 2007.
[14] K. Ding and N.-J. Huang, "A new class of interval projection neural networks for solving interval quadratic program," Chaos, Solitons and Fractals, vol. 35, no. 4, pp. 718–725, 2008.
[15] Y. Yang and J. Cao, "A feedback neural network for solving convex constraint optimization problems," Applied Mathematics and Computation, vol. 201, no. 1-2, pp. 340–350, 2008.
[16] D. Kinderlehrer and G. Stampacchia, An Introduction to Variational Inequalities and Their Applications, vol. 88 of Pure and Applied Mathematics, Academic Press, New York, NY, USA, 1980.
[17] C. Baiocchi and A. Capelo, Variational and Quasivariational Inequalities: Applications to Free Boundary Problems, John Wiley & Sons, New York, NY, USA, 1984.