Differential and Integral Equations, Volume 5, Number 6, November 1992, pp. 1307-1334.
PONTRYAGIN MAXIMUM PRINCIPLE FOR
SEMILINEAR SECOND ORDER ELLIPTIC PARTIAL
DIFFERENTIAL EQUATIONS AND VARIATIONAL
INEQUALITIES WITH STATE CONSTRAINTS
JIONGMIN YONG
Department of Mathematics, Fudan University, Shanghai 200433, China
(Submitted by: V. Barbu)
Abstract. We present a method for deriving first order necessary conditions of optimal controls for semilinear second order elliptic partial differential equations and variational inequalities with various kinds of state constraints. The main tools used are the Ekeland variational principle and the spike variation technique for elliptic equations which is a combination of the ideas from vector valued measure theory and the representation of solutions for second order partial differential equations via Green's functions.
1. Introduction. The purpose of this paper is to present a method for deriving a Pontryagin type maximum principle as a first order necessary condition of optimal controls for problems governed by semilinear elliptic partial differential equations and variational inequalities. We allow various kinds of constraints to be imposed on the state. To be more precise, let us take the case of semilinear variational inequalities as an example. Thus, we have the system
$$\begin{cases} Ay(x) + \beta(y(x)) \ni f(x, y(x), u(x)) & \text{in } \mathcal{D}'(\Omega),\\ y\big|_{\partial\Omega} = 0, \end{cases} \eqno(1.1)$$
where $\Omega$ is a bounded region in $\mathbb{R}^n$ with a Lipschitz boundary $\partial\Omega$, $A$ is the second order elliptic partial differential operator
$$Ay(x) = -\sum_{i,j=1}^{n} \partial_{x_i}\big(a_{ij}(x)\,\partial_{x_j}y(x)\big), \eqno(1.2)$$
$\beta \subset \mathbb{R}\times\mathbb{R}$ is a maximal monotone graph with $0 \in \mathrm{Dom}(\beta)$, $f: \Omega\times\mathbb{R}\times U \to \mathbb{R}$ is a given map and $U$ is a metric space in which the control variable $u(\cdot)$ takes values. Under certain conditions, which will be specified later, for each $u(\cdot) \in \mathcal{U} = \{u: \Omega \to U \mid u(\cdot)\ \text{measurable}\}$, there exists a $y(\cdot)$ in some function space $Y$ satisfying (1.1) in a suitable sense. We refer to such a $y(\cdot)$ as a state associated with the control $u(\cdot)$. Then, we can talk about the state constraint
$$y(\cdot) \in Q, \eqno(1.3)$$
Received for publication October 1991. AMS Subject Classification: 35J85, 49B22.
where $Q$ is a given subset of $Y$. We see that if, say, $Y = C(\overline\Omega)$ and $Q$ is given by, say,
$$Q = \{z(\cdot) \in C(\overline\Omega): z(x_i) = a_i,\ 1\le i\le m\}, \eqno(1.4)$$
where $x_i \in \Omega$, $a_i \in \mathbb{R}$, $1\le i\le m$, then (1.3) actually gives some pointwise state constraints on the state $y(\cdot)$. We will see that $Q$ can be very different and thus our problem will cover many interesting problems. Next, we let $\mathcal{A}_Q$ be the set of all pairs $(y(\cdot), u(\cdot)) \in Y\times\mathcal{U}$ with (1.1) and (1.3) being satisfied and with
(1.5)
where $f^0: \Omega\times\mathbb{R}\times U \to \mathbb{R}$ is another given map. We refer to any pair $(y(\cdot), u(\cdot)) \in \mathcal{A}_Q$ as an admissible pair and refer to $y(\cdot)$ and $u(\cdot)$ as an admissible state and control, respectively. For any $(y(\cdot),u(\cdot)) \in \mathcal{A}_Q$, we can define
$$J(y(\cdot), u(\cdot)) = \int_\Omega f^0(x, y(x), u(x))\,dx. \eqno(1.6)$$
This is called the cost functional. Our optimal control problem for semilinear variational inequalities (Problem SV for short) can be stated as follows.
Problem SV. Find $(\bar y(\cdot), \bar u(\cdot)) \in \mathcal{A}_Q$ such that
$$J(\bar y(\cdot), \bar u(\cdot)) = \inf_{\mathcal{A}_Q} J(y(\cdot), u(\cdot)). \eqno(1.7)$$
If such a pair $(\bar y(\cdot), \bar u(\cdot))$ exists, we refer to it as an optimal pair and refer to $\bar y(\cdot)$ and $\bar u(\cdot)$ as an optimal state and control, respectively.
Let us list some special cases of Problem SV which were studied by other authors.
1) $f(x,y,u) = f(x) + Bu$, $Q = Y$ and $f^0(x,y,u) = g(y) + h(u)$ with $h(u)$ being convex and lower semi-continuous ([3]).
2) $\beta = 0$, $f(x,y,u) = f(x,u)$ and $Q = Y$ ([24, 35]).
5) $Q = Y$ ([7]). We notice that in the above, $Q = Y$ means that there are no state constraints. We see that if $\beta = 0$, the problem is reduced to the one which has semilinear elliptic equations with distributed controls and state constraints, which is referred to as Problem SE. In [37], some results on the existence of optimal pairs of the above Problem SV were established under certain convexity conditions. In this paper, we will establish a Pontryagin type maximum principle as a first order necessary condition of optimal pairs for the above Problem SV as well as Problem SE. We will restrict ourselves to the case $n \ge 2$. The case $n = 1$ is of ordinary differential equation nature and the corresponding discussion is much easier.
This paper is organized as follows. In §2, we study Problem SE in detail. This section contains half of the main ideas of this paper. Ekeland's variational principle is used to treat the state constraint. As in [16, 25, 26], some sort of finite codimensionality condition is imposed to ensure the nontriviality of the multiplier. On the other hand, since there is no convexity condition being assumed on $U$ (actually,
it is only a metric space), to obtain the variation of the state resulting from the spike variation of the control, we adopt some results from vector measure theory [14] and the representation of solutions of second order elliptic partial differential equations via Green's functions [29]. This method is referred to as the spike variation technique for elliptic equations. A similar idea was used in [24, 25]; see also [16, 26, 36]. §3 is devoted to Problem SV. Here, the other half of the ideas of this paper is shown. Since the state equation is of variational inequality type, we need to regularize it first. But, it is notable that the state constraint is present at the same time. Thus, to make the Ekeland variational principle applicable to the present case, we impose a stability condition for the optimal value of the cost with respect to the variation of the state constraints. Such an idea was used by the author in [36]. It turns out that although this condition restricts the generality of our problem, it is satisfied in many interesting situations.
It seems to us that our approach is applicable to problems with boundary controls, quasilinear equations or variational inequalities. Also, evolutionary problems might be treated similarly. Some of these nontrivial extensions will be studied in our future publications.
There is a huge body of literature on optimal control problems. Among them, we could only list very few which are closely related to this work. For classical results on the Pontryagin Maximum Principle in finite dimensions, we refer the reader to [4, 12, 17, 30]. For the infinite dimensional version of it, see [6, 16, 24–26, 35, 36]. Some other results on optimal controls of partial differential equations and variational inequalities can be found in [3, 5, 7, 9, 10, 18–20, 27, 28, 33]. Also, all the references cited in the above works should be mentioned.
2. Optimal control of semilinear elliptic equations. In this section, we study Problem SE; namely, the optimal control problem with the state governed by a semilinear second order elliptic partial differential equation with a distributed control. Thus, our system reads
$$\begin{cases} Ay(x) = f(x, y(x), u(x)) & \text{in } \mathcal{D}'(\Omega),\\ y\big|_{\partial\Omega} = 0, \end{cases} \eqno(2.1)$$
with A being given by (1.2). Throughout this section, we make the following assumptions.
(E1) $\Omega \subset \mathbb{R}^n$ is a bounded region with a $C^{1,\lambda}$ boundary $\partial\Omega$ for some $\lambda \in (0,1]$; $U$ is a metric space.
(E2) The operator $A$ is defined by (1.2) with $a_{ij}(\cdot) \in C(\overline\Omega)$, $1\le i,j\le n$, and for some $\alpha > 0$,
$$\sum_{i,j=1}^{n} a_{ij}(x)\xi_i\xi_j \ge \alpha \sum_{i=1}^{n} \xi_i^2, \quad \forall x\in\overline\Omega,\ \xi\in\mathbb{R}^n. \eqno(2.2)$$
The assumptions on f(·, ·, ·) will be given a little later.
2.1. Some results for linear equations. This subsection is devoted to giving some preliminary results on linear equations which will be useful in the sequel. We
consider the following boundary value problem for a linear equation of divergence form:
$$\begin{cases} \mathcal{A}y = f_0 + \sum_{i=1}^{n} \partial_{x_i} f_i & \text{in } \mathcal{D}'(\Omega),\\ y\big|_{\partial\Omega} = 0, \end{cases} \eqno(2.3)$$
where
$$\mathcal{A}y(x) = -\sum_{i,j=1}^{n} \partial_{x_i}\big(a_{ij}(x)\partial_{x_j}y(x)\big) + \sum_{i=1}^{n} \partial_{x_i}\big(b_i(x)y(x)\big) + \sum_{i=1}^{n} c_i(x)\partial_{x_i}y(x) + d(x)y(x), \eqno(2.4)$$
with $a_{ij}(\cdot)$ satisfying (E2), $b_i(\cdot), c_i(\cdot), d(\cdot) \in L^\infty(\Omega)$,
$$\sum_{i=1}^{n} \partial_{x_i} b_i(x) + d(x) \ge 0 \quad \text{in } \mathcal{D}'(\Omega), \eqno(2.5)$$
$$-\sum_{i=1}^{n} \partial_{x_i} c_i(x) + d(x) \ge 0 \quad \text{in } \mathcal{D}'(\Omega), \eqno(2.6)$$
and $f_0, f_i \in L^p(\Omega)$ with $p \in [1,\infty)$. The following result is of interest in its own right.
Lemma 2.1. For any $p \in (1,\infty)$, problem (2.3) admits a unique solution $y(\cdot) \in W_0^{1,p}(\Omega)$ satisfying
$$\|y\|_{W_0^{1,p}(\Omega)} \le C\Big(\|f_0\|_{L^p(\Omega)} + \sum_{i=1}^{n}\|f_i\|_{L^p(\Omega)}\Big), \eqno(2.7)$$
with $C$ being independent of $f_0$ and the $f_i$'s. Furthermore, for the case $f_i = 0$, $1\le i\le n$, and $1 < p < n$,
$$\|y\|_{W_0^{1,p^*}(\Omega)} \le C\|f_0\|_{L^p(\Omega)}, \eqno(2.8)$$
with $p^* = \frac{np}{n-p}$. For the case $p = 1$ with $f_i = 0$ ($1\le i\le n$), (2.3) admits a unique solution $y(\cdot) \in W_0^{1,q}(\Omega)$ with $1\le q < n/(n-1)$, and (2.8) holds with $p^*$ replaced by any $q \in [1, \frac{n}{n-1})$.
Remark 2.2. The above result is not new. But we could not find a reference which explicitly stated and proved such a result. For the reader's convenience, we present a proof of it below. For the case $p > n$, it is standard that, by only assuming $a_{ij}(\cdot) \in L^\infty(\Omega)$, (2.3) admits a unique solution $y(\cdot) \in W_0^{1,2}(\Omega)\cap L^\infty(\Omega)$ which is Hölder continuous [21–23, 29, 34]. The case $p = 1$ was studied in [32] (see also [8, 31]). The main point of the above lemma is that for any $f_0, f_i \in L^p(\Omega)$ with any $p \in (1,\infty)$ (not just with $p > n$), we have a unique solution $y(\cdot) \in W_0^{1,p}(\Omega)$ satisfying estimate (2.7). This is useful in asserting the existence and uniqueness of the solutions of adjoint equations later.
Proof of Lemma 2.1. First of all, by [2], we know that if $y(\cdot) \in W_0^{1,p}(\Omega)$ is a solution of (2.3), then (2.7) holds with $C$ only depending on the $L^\infty$-norms of $a_{ij}(\cdot)$,
$b_i(\cdot)$, $c_i(\cdot)$, $d(\cdot)$, the ellipticity constant $\alpha$, the constant $p > 1$ and the modulus of continuity for $a_{ij}(\cdot)$. Now, we approximate all the data by smooth functions:
$$\begin{cases} a^k_{ij}(\cdot) \to a_{ij}(\cdot) \ \text{in } C(\overline\Omega), & k\to\infty,\\ b^k_i(x) \to b_i(x),\ c^k_i(x) \to c_i(x),\ d^k(x) \to d(x) \ \text{a.e. } x\in\Omega, & k\to\infty,\\ \|b^k_i(\cdot)\|_{L^\infty(\Omega)},\ \|c^k_i(\cdot)\|_{L^\infty(\Omega)},\ \|d^k(\cdot)\|_{L^\infty(\Omega)} \le C, & \forall k\ge 1,\\ f^k_0(\cdot) \to f_0(\cdot),\ f^k_i(\cdot) \to f_i(\cdot) \ \text{in } L^p(\Omega), & k\to\infty. \end{cases} \eqno(2.9)$$
Here, we may assume that (E2) and (2.5) hold for the smooth data. By the classical Schauder estimate ([21]), we know that there exists a classical solution $y^k(\cdot)$ of the approximating problem. Then, by the assumptions on the smooth data, we see that
$$\|y^k\|_{W_0^{1,p}(\Omega)} \le C\Big(\|f^k_0\|_{L^p(\Omega)} + \sum_{i=1}^{n}\|f^k_i\|_{L^p(\Omega)}\Big), \quad \forall k\ge 1, \eqno(2.10)$$
with $C$ being independent of $k\ge 1$. Hence, we may assume that
$$y^k(\cdot) \rightharpoonup y(\cdot) \quad \text{in } W_0^{1,p}(\Omega). \eqno(2.11)$$
Then, it is easy to show that $y(\cdot) \in W_0^{1,p}(\Omega)$ is the unique solution of (2.3) and that the estimate (2.7) holds. Now, let $f_i = 0$ for $1\le i\le n$ and let $1 < p < n$. Let $y(\cdot) \in W_0^{1,p}(\Omega)$ be the solution of (2.3) and let $v \in W_0^{1,(p^*)'}(\Omega)$ be the solution of the problem
$$\begin{cases} \mathcal{A}^* v = \sum_{i=1}^{n} \partial_{x_i} h_i & \text{in } \mathcal{D}'(\Omega),\\ v\big|_{\partial\Omega} = 0, \end{cases} \eqno(2.12)$$
with $h_i \in L^{(p^*)'}(\Omega)$ ($p^* = \frac{np}{n-p}$, $(p^*)' = \frac{np}{np-n+p}$) and
$$\mathcal{A}^* v(x) = -\sum_{i,j=1}^{n} \partial_{x_i}\big(a_{ij}(x)\partial_{x_j}v(x)\big) - \sum_{i=1}^{n} \partial_{x_i}\big(c_i(x)v(x)\big) - \sum_{i=1}^{n} b_i(x)\partial_{x_i}v(x) + d(x)v(x). \eqno(2.13)$$
Noting (2.6), by the fact just proved, we know that such a $v(\cdot)$ exists and is unique. Then, we have (note, by direct computation, $p' = ((p^*)')^*$)
$$\Big|\sum_{i=1}^{n}\int_\Omega h_i(x)\,\partial_{x_i}y(x)\,dx\Big| = \Big|\int_\Omega f_0(x)v(x)\,dx\Big| \le C\|f_0\|_{L^p(\Omega)}\|v\|_{W_0^{1,(p^*)'}(\Omega)} \le C\|f_0\|_{L^p(\Omega)} \sum_{i=1}^{n}\|h_i\|_{L^{(p^*)'}(\Omega)}.$$
Thus, (2.8) follows. Finally, the case $p = 1$ can be proved similarly to [8, 31, 32].
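The exponent bookkeeping in the above duality argument can be checked directly. The following sketch (ours, not part of the paper; the function names are our own) verifies, in exact rational arithmetic, that $(p^*)' = \frac{np}{np-n+p}$ and that $((p^*)')^* = p'$, which is the embedding $W_0^{1,(p^*)'}(\Omega) \hookrightarrow L^{p'}(\Omega)$ behind the estimate of $\int_\Omega f_0 v\,dx$.

```python
from fractions import Fraction

# Exponent identities used in the duality proof of Lemma 2.1 (our own check,
# not part of the paper): q* = nq/(n-q) is the Sobolev conjugate and
# q' = q/(q-1) the Holder conjugate.

def sobolev(q, n):
    """Sobolev conjugate q* = nq/(n-q), for 1 <= q < n."""
    return n * q / (n - q)

def conj(q):
    """Holder conjugate q' = q/(q-1), for q > 1."""
    return q / (q - 1)

n = Fraction(5)                                       # any dimension n >= 2 works
for p in (Fraction(3, 2), Fraction(2), Fraction(4)):  # sample exponents 1 < p < n
    p_star = sobolev(p, n)
    assert conj(p_star) == n * p / (n * p - n + p)    # (p*)' = np/(np-n+p)
    assert sobolev(conj(p_star), n) == conj(p)        # ((p*)')* = p'
print("exponent identities verified")
```

For instance, with $n = 5$ and $p = 2$: $p^* = 10/3$, $(p^*)' = 10/7$, and $((p^*)')^* = 2 = p'$.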
We note that condition (2.6) has nothing to do with the existence and the uniqueness of the solution to (2.3) and the estimate (2.7). It is needed only for obtaining (2.8), because the adjoint equation (2.12) has to be uniquely solvable. Combining (2.7) and (2.8), we see that for any $p > n/(n-1)$, the solution $y(\cdot)$ of (2.3) actually satisfies
$$\|y\|_{W_0^{1,p}(\Omega)} \le C\Big\{\|f_0\|_{L^{\frac{np}{n+p}}(\Omega)} + \sum_{i=1}^{n}\|f_i\|_{L^p(\Omega)}\Big\}. \eqno(2.15)$$
Now, we let
$$\tilde{\mathcal{A}}y(x) = -\sum_{i,j=1}^{n} \partial_{x_i}\big(\tilde a_{ij}(x)\partial_{x_j}y(x)\big) + \sum_{i=1}^{n} \partial_{x_i}\big(\tilde b_i(x)y(x)\big) + \sum_{i=1}^{n} \tilde c_i(x)\partial_{x_i}y(x) + \tilde d(x)y(x) \eqno(2.16)$$
be any elliptic operator with the coefficients $\tilde a_{ij}(\cdot)$, $\tilde b_i(\cdot)$, $\tilde c_i(\cdot)$, $\tilde d(\cdot)$ satisfying (2.2), (2.5)–(2.6). We introduce
It is clear that $\|\cdot\|_0$ is a norm (in an appropriate space). We have the following approximation result.
Lemma 2.3. Let $\tilde{\mathcal{A}}$ be the operator given by (2.16) with the coefficients satisfying (2.2), (2.5)–(2.6). Let $p > n/(n-1)$ and $f_0(\cdot), \tilde f_0(\cdot) \in L^{\frac{np}{n+p}}(\Omega)$, $f_i(\cdot), \tilde f_i(\cdot) \in L^p(\Omega)$ ($1\le i\le n$). Let $\tilde y(\cdot)$ be the unique solution of
$$\cdots + \sum_{i=1}^{n}\|f_i(\cdot) - \tilde f_i(\cdot)\|_{L^p(\Omega)} + \|\mathcal{A} - \tilde{\mathcal{A}}\|_0\,\|\tilde y(\cdot)\|_{W^{1,p}(\Omega)}\Big\}, \eqno(2.19)$$
where $C > 0$ is a constant only depending on the bounds of $a_{ij}(\cdot)$, $b_i(\cdot)$, $c_i(\cdot)$, $d(\cdot)$, the ellipticity constant $\alpha$ and the modulus of continuity for $a_{ij}(\cdot)$,
such that the unique solution y( ·) of (2.3) can be represented as
(2.22)
(2.23)
$$y(x) = \int_\Omega G(x,\xi)f_0(\xi)\,d\xi - \sum_{i=1}^{n}\int_\Omega G_{\xi_i}(x,\xi)\,f_i(\xi)\,d\xi, \quad \text{a.e. } x\in\Omega, \eqno(2.24)$$
provided $f_0(\cdot) \in L^p(\Omega)$ with $p\in[1,\infty)$ and $f_i(\cdot) \in L^p(\Omega)$, $1\le i\le n$, with $p\in(1,\infty)$. Moreover, if for some $\lambda \in (0,1)$,
$$\begin{cases} a_{ij}(\cdot),\ b_i(\cdot) \in C^{1,\lambda}(\overline\Omega), & 1\le i,j\le n,\\ c_i(\cdot),\ d(\cdot) \in C^{\lambda}(\overline\Omega), \end{cases} \eqno(2.25)$$
then (2.26) holds for some $C > 0$.
Proof. First of all, we consider the case $f_i(\cdot) = 0$, $1\le i\le n$, and $f_0(\cdot) \in L^p(\Omega) \subset L^1(\Omega)$ with $p\in[1,\infty)$. Let $q \in (1, \frac{n}{n-1})$. By (2.8), we see that the operator
$T: L^1(\Omega) \to W_0^{1,q}(\Omega)$, $f_0(\cdot) \mapsto y(\cdot)$, is linear and bounded. We notice that $W_0^{1,q}(\Omega)$ is a reflexive Banach space and thus it possesses the Radon–Nikodym property [14]. Hence, by [14], we can find a $G(\cdot,\cdot) \in L^\infty(\Omega; W_0^{1,q}(\Omega))$ such that
$$y(x) = \int_\Omega G(x,\xi)f_0(\xi)\,d\xi, \quad \text{a.e. } x\in\Omega. \eqno(2.27)$$
Similarly, by considering
$$\begin{cases} \mathcal{A}^* v(x) = h_0(x) & \text{in } \mathcal{D}'(\Omega),\\ v\big|_{\partial\Omega} = 0, \end{cases} \eqno(2.28)$$
with $\mathcal{A}^*$ being the adjoint operator of $\mathcal{A}$, we can find a $G^*(\cdot,\cdot) \in L^\infty(\Omega; W_0^{1,q}(\Omega))$ such that for $h_0(\cdot) \in L^1(\Omega)$, we have
$$v(x) = \int_\Omega G^*(x,\xi)h_0(\xi)\,d\xi, \quad \text{a.e. } x\in\Omega. \eqno(2.29)$$
Now, we let $f_0(\cdot), h_0(\cdot) \in L^2(\Omega)$. Then, we see immediately that
$$\int_\Omega\int_\Omega G(x,\xi)f_0(\xi)h_0(x)\,d\xi\,dx = \int_\Omega\int_\Omega G^*(\xi,x)f_0(\xi)h_0(x)\,d\xi\,dx. \eqno(2.30)$$
Thus, we have
$$G(x,\xi) = G^*(\xi,x), \quad \text{a.e. } (x,\xi)\in\Omega\times\Omega. \eqno(2.31)$$
Then, (2.22) is satisfied under the convention that
$$\nabla_\xi G(x,\xi) = \nabla_\xi G^*(\xi,x) \quad \text{in } \mathcal{D}'(\Omega\times\Omega), \eqno(2.32)$$
and the first inequality in (2.23) follows from Lemma 2.1 and (2.27) as well. Next, we let $f_0(\cdot) = 0$ and $\tilde f(\cdot) = (\tilde f_1(\cdot), \tilde f_2(\cdot), \dots, \tilde f_n(\cdot)) \in C_0^\infty(\Omega)^n$. Let $\tilde y(\cdot)$ be the corresponding solution of (2.3). Then, by the result just proved, we have
$$\tilde y(x) = \int_\Omega G(x,\xi)\,\nabla_\xi\cdot\tilde f(\xi)\,d\xi, \quad \text{a.e. } x\in\Omega.$$
Now, for any $h(\cdot) \in C_0^\infty(\Omega)$, by (2.7), we have
$$\Big|\int_\Omega\int_\Omega G(x,\xi)\,\nabla_\xi\cdot\tilde f(\xi)\,h(x)\,d\xi\,dx\Big| = \Big|\int_\Omega \tilde y(x)h(x)\,dx\Big| \eqno(2.33)$$
$$\le C\|h(\cdot)\|_{L^{p'}(\Omega)}\,\|\tilde f(\cdot)\|_{L^p(\Omega)^n}. \eqno(2.34)$$
Thus, the second inequality in (2.23) follows (note (2.32)) and
$$\int_\Omega \tilde y(x)h(x)\,dx = -\int_\Omega\int_\Omega \nabla_\xi G(x,\xi)\cdot\tilde f(\xi)\,d\xi\,h(x)\,dx, \quad \forall h(\cdot)\in C_0^\infty(\Omega). \eqno(2.35)$$
Hence,
$$\tilde y(x) = -\int_\Omega \nabla_\xi G(x,\xi)\cdot\tilde f(\xi)\,d\xi, \quad \text{a.e. } x\in\Omega. \eqno(2.36)$$
By letting $\tilde f_i(\cdot) \to f_i(\cdot)$ in $L^p(\Omega)$ and noting (2.23), we obtain the representation of the solution $y(\cdot)$ of (2.3) with $p > 1$ and $f_0 = 0$ as follows:
$$y(x) = -\int_\Omega \nabla_\xi G(x,\xi)\cdot f(\xi)\,d\xi, \quad \text{a.e. } x\in\Omega, \qquad f(\cdot) = (f_1(\cdot),\dots,f_n(\cdot)). \eqno(2.37)$$
Then, by the linearity of the equation, we obtain (2.24). Finally, by [14], we obtain (2.26), provided (2.25) holds.
It is clear that the function $G(\cdot,\cdot)$ obtained in the above lemma can be regarded as the Green's function of problem (2.3). The point is that, unlike the classical case as in [29], the above $G(\cdot,\cdot)$ has less regularity, due to the fact that the coefficients of the operator $\mathcal{A}$ are not regular enough. In this case, we do not know if (2.26) holds. As in [29], assuming (2.25) ensures (2.26). The following result is crucial in the sequel.
Lemma 2.5. Let (2.26) hold. Let $f_0(\cdot) \in L^p(\Omega)$ with $p > n$. Then, for any $\rho \in (0,1]$, there exists a measurable set $E_\rho \subset \Omega$ with
$$|E_\rho| = \rho\,|\Omega| \eqno(2.38)$$
($|S|$ is the Lebesgue measure of the set $S$) and
$$\frac{1}{\rho}\int_\Omega G(x,\xi)f_0(\xi)\chi_{E_\rho}(\xi)\,d\xi = \int_\Omega G(x,\xi)f_0(\xi)\,d\xi + r^\rho(x), \quad \text{a.e. } x\in\Omega, \eqno(2.39)$$
with
(2.40)
This result belongs, in essence, to vector measure theory. It is very closely related to the so-called Liapunoff convexity theorem if we take $W_0^{1,p}(\Omega)$ as the Banach space under consideration [14]. See Remark 2.10 for further comments.
Proof. Define $r^\rho(x)$ through (2.39). Then, we have
$$r^\rho_{x_i}(x) = \frac{1}{\rho}\int_\Omega G_{x_i}(x,\xi)f_0(\xi)\chi_{E_\rho}(\xi)\,d\xi - \int_\Omega G_{x_i}(x,\xi)f_0(\xi)\,d\xi. \eqno(2.41)$$
By (2.26), we know that since $p > n$, $p' = p/(p-1) < n/(n-1)$. Thus,
$$\int_\Omega |G_{x_i}(x,\xi)f_0(\xi)|\,d\xi \le \Big\{\int_\Omega |G_{x_i}(x,\xi)|^{p'}\,d\xi\Big\}^{1/p'}\|f_0(\cdot)\|_{L^p(\Omega)} \le C\|f_0(\cdot)\|_{L^p(\Omega)}. \eqno(2.42)$$
This means $G_{x_i}(\cdot,\cdot)f_0(\cdot) \in L^\infty(\Omega_x; L^1(\Omega_\xi))$. Thus, we can find a $\tilde G(\cdot,\cdot) \in C(\overline\Omega\times\overline\Omega)$ such that
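The averaging property asserted in Lemma 2.5 can be illustrated numerically in one dimension, where the Green's function is explicit. The sketch below (ours, not the paper's construction; the periodic slice sets are only one admissible choice of $E_\rho$) uses $G(x,\xi) = \min(x,\xi)\,(1-\max(x,\xi))$, the Green's function of $-y''$ on $\Omega = (0,1)$, and checks that as the set $E_\rho$ of measure $\rho|\Omega|$ is spread into finer slices, the remainder $r^\rho(x)$ in (2.39) becomes small.

```python
import numpy as np

# 1-D illustration of Lemma 2.5 (our sketch, not the paper's proof): pick
# E_rho as a union of evenly spaced slices with |E_rho| = rho * |Omega| and
# compare (1/rho) * int_{E_rho} G(x,s) f0(s) ds with int_Omega G(x,s) f0(s) ds.

def spike_set(n_slices, rho, grid):
    """Indicator of E_rho: the first rho-fraction of each of n_slices periods."""
    frac = (grid * n_slices) % 1.0
    return (frac < rho).astype(float)

n = 25600                                   # grid chosen so |E_rho| comes out exact
grid = (np.arange(n) + 0.5) / n             # midpoints of Omega = (0, 1)
dx = 1.0 / n
f0 = np.sin(3 * np.pi * grid) + 1.0         # a fixed f0 in L^p
x = 0.3
G_row = np.minimum(x, grid) * (1.0 - np.maximum(x, grid))  # G(x, s) for -y'' on (0,1)

full = np.sum(G_row * f0) * dx              # int_Omega G(x,s) f0(s) ds
rho = 0.25
errors = []                                 # |r^rho(x)| for successively finer E_rho
for n_slices in (4, 32, 256):
    chi = spike_set(n_slices, rho, grid)
    avg = np.sum(G_row * f0 * chi) * dx / rho
    errors.append(abs(avg - full))
print(errors)
```

As the slices are spread more finely, $E_\rho$ equidistributes and the remainder shrinks; the Liapunoff-type convexity argument of the lemma guarantees that suitable sets $E_\rho$ exist in general.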
2.2. Maximum principle. Now, let us return to problem (2.1). We make the following further assumption.
(E3) The maps $f, f^0: \Omega\times\mathbb{R}\times U \to \mathbb{R}$ satisfy the following:
(i) for all $(y,u) \in \mathbb{R}\times U$, $f(\cdot,y,u)$ and $f^0(\cdot,y,u)$ are measurable;
(ii) for all $(x,u) \in \Omega\times U$, $f(x,\cdot,u)$ and $f^0(x,\cdot,u)$ are continuously differentiable;
(iii) for all $(x,y) \in \Omega\times\mathbb{R}$, $f(x,y,\cdot)$, $f^0(x,y,\cdot)$, $f_y(x,y,\cdot)$ and $f^0_y(x,y,\cdot)$ are continuous.
Moreover, there exists a constant $L > 0$ such that for all $(x,y,u) \in \Omega\times\mathbb{R}\times U$,
$$|f(x,0,u)|,\ |f^0(x,0,u)|,\ |f^0_y(x,y,u)| \le L, \eqno(2.47)$$
$$-L \le f_y(x,y,u) \le 0. \eqno(2.48)$$
Combining the argument used in [6] and the above Lemma 2.1, we are able to prove the following result.
Proposition 2.6. Let (E1)–(E3) hold. Then, for any $p \in [1,\infty)$ and any $u(\cdot) \in \mathcal{U}$, (2.1) admits a unique solution $y(\cdot) = y(\cdot;u(\cdot)) \in W_0^{1,p}(\Omega)$ and
$$\|y(\cdot;u(\cdot))\|_{W_0^{1,p}(\Omega)} \le C, \quad \forall u(\cdot)\in\mathcal{U}, \eqno(2.49)$$
where $C$ is a constant independent of $u(\cdot) \in \mathcal{U}$.
We note that (E3) can be slightly relaxed (see [6]). If we relax (E3) as in [6], then the $p$ in the above proposition will be subject to some restrictions. We prefer not to get into such generality because the main result will essentially be the same. By Sobolev's embedding theorem, we know that the solution $y(\cdot;u(\cdot))$ obtained in Proposition 2.6 is actually in $C^\delta(\overline\Omega)$ for some $\delta \in (0,1)$. Concerning the state constraint, we introduce the following:
(E4) $Y$ is a separable Banach space containing $W_0^{1,p}(\Omega)$ for some $p > 1$ and $Q$ is a closed and convex subset of $Y$.
Hereafter, by saying $Y$ contains $W_0^{1,p}(\Omega)$, we mean that the embedding $W_0^{1,p}(\Omega) \hookrightarrow Y$ is continuous. We see that under (E1)–(E3), for any $u(\cdot) \in \mathcal{U}$, the solution $y(\cdot;u(\cdot))$ of (2.1) is in $Y$. Thus, a state constraint of type (1.3) makes sense. We should keep in mind that one of the most interesting examples of $Y$ is $C(\overline\Omega)$. Now, we let $\mathcal{A}$ be the set of all pairs $(y(\cdot),u(\cdot)) \in Y\times\mathcal{U}$ satisfying (2.1). Then, by (2.47), we see that $J(y(\cdot),u(\cdot))$ is well-defined. We let $\mathcal{A}_Q$ be the set of all pairs $(y(\cdot),u(\cdot)) \in \mathcal{A}$ such that the state constraint (1.3) holds. We will see that the notations $\mathcal{A}$ and $\mathcal{A}_Q$ have different meanings in different contexts since the state equations differ from time to time. But this will not cause any ambiguity. Now, we suppose $(\bar y(\cdot),\bar u(\cdot)) \in \mathcal{A}_Q$ solves Problem SE; i.e., the following holds:
J(y(·), u(·)) = inf J(y(·), u(·)). AQ.
(2.50)
For any $u(\cdot) \in \mathcal{U}$, we let $z(\cdot) = z(\cdot;u(\cdot)) \in W_0^{1,p}(\Omega) \subset Y$ be the unique solution of the problem
$$\begin{cases} Az(x) = f_y(x,\bar y(x),\bar u(x))z(x) + f(x,\bar y(x),u(x)) - f(x,\bar y(x),\bar u(x)) & \text{in } \mathcal{D}'(\Omega),\\ z\big|_{\partial\Omega} = 0. \end{cases} \eqno(2.51)$$
This system is referred to as the variational system along the optimal pair $(\bar y(\cdot),\bar u(\cdot))$. We define
$$\mathcal{R} = \{z(\cdot;u(\cdot)) : u(\cdot) \in \mathcal{U}\}. \eqno(2.52)$$
We call this set the reachable set of the variational system (2.51). It is clear that $\mathcal{R} \subset Y$. We denote by $\mathcal{R} - Q$ the algebraic difference of the sets $\mathcal{R}$ and $Q$. In the statement of the main result of this section, the following notion is necessary.
Definition 2.7. Let $Z$ be a Banach space. A set $S \subset Z$ is said to be finite codimensional in $Z$ if there exists a point $z \in S$ such that $Z_0 = \mathrm{span}\,(S - z)$ is a finite codimensional subspace of $Z$ and $\mathrm{co}\,(S - z)$ has a nonempty interior in $Z_0$.
Next, let us define the Hamiltonian
$$H(x,y,u,\psi^0,\psi) = \psi^0 f^0(x,y,u) + \psi f(x,y,u), \quad \forall (x,y,u,\psi^0,\psi) \in \Omega\times\mathbb{R}\times U\times\mathbb{R}\times\mathbb{R}. \eqno(2.53)$$
The main result of this section can be stated as follows.
Theorem 2.8. Let (E1)–(E4) hold. Let $(\bar y(\cdot),\bar u(\cdot)) \in \mathcal{A}_Q$ be an optimal pair of Problem SE. Let $\mathcal{R} - Q$ be finite codimensional in $Y$. Then, there exists a pair $(\psi^0,\psi) \in \big([-1,0]\times W_0^{1,p'}(\Omega)\big)\setminus\{0\}$ and a $\varphi \in Y^* \subset W^{-1,p'}(\Omega)$ ($1/p + 1/p' = 1$) such that
$$\langle \varphi, z(\cdot) - \bar y(\cdot)\rangle_{Y^*,Y} \le 0, \quad \forall z(\cdot) \in Q, \eqno(2.54)$$
$$\begin{cases} A\psi(x) = f_y(x,\bar y(x),\bar u(x))\psi(x) + \psi^0 f^0_y(x,\bar y(x),\bar u(x)) - \varphi & \text{in } \mathcal{D}'(\Omega),\\ \psi\big|_{\partial\Omega} = 0, \end{cases} \eqno(2.55)$$
$$H(x,\bar y(x),\bar u(x),\psi^0,\psi(x)) = \max_{u\in U} H(x,\bar y(x),u,\psi^0,\psi(x)), \quad \text{a.e. } x\in\Omega. \eqno(2.56)$$
We usually refer to (2.54), (2.55) and (2.56) as the transversality condition, the adjoint system (along the given optimal pair) and the maximum condition, respectively. We note that for $p \in (1,\infty)$, one has $p' \in (1,\infty)$. Thus, by Lemma 2.1, problem (2.55) admits a unique solution in $W_0^{1,p'}(\Omega)$. Then, (2.56) makes sense. An interesting case in our mind is $p \in (n,\infty)$, which allows $Y$ to be $C(\overline\Omega)$. In this case, $p' \in (1,\frac{n}{n-1})$ and thus our Lemma 2.1 applies here. However, the results of [21–23, 29, 34], for example, only treated the cases $p' > n$, which are not enough here.
2.3. Proof of Theorem 2.8. This subsection is devoted to the proof of Theorem 2.8.
Proof of Theorem 2.8. For any $(y(\cdot;u(\cdot)), u(\cdot)) \in \mathcal{A}$, we define
Here, for the time being, we assume that the dual $Y^*$ of $Y$ is strictly convex. Then, similar to [26], we know that for any $y(\cdot) \notin Q$, $\partial d_Q(y(\cdot))$ is single-valued. Thus, we can define $(\varphi^{0,\varepsilon}, \varphi^\varepsilon) \in [0,1]\times Y^*$ as
Then, from (2.91), we see that
$$\begin{cases} -\sqrt{\varepsilon}\,|\Omega| \le \varphi^{0,\varepsilon} z^{0,\varepsilon} + \langle \varphi^\varepsilon, z^\varepsilon(\cdot)\rangle_{Y^*,Y},\\ |\varphi^{0,\varepsilon}|^2 + \|\varphi^\varepsilon\|_{Y^*}^2 = 1. \end{cases}$$
On the other hand (we suppress the subscript in $\langle\cdot,\cdot\rangle_{Y^*,Y}$),
We see that the solution $z(\cdot)$ of (2.98) and $z^0$ defined by (2.99) depend on the choice of $u(\cdot) \in \mathcal{U}$. Thus, we denote them by $z(\cdot) = z(\cdot;u(\cdot))$ and $z^0 = z^0(u(\cdot))$, respectively. Combining (2.93), (2.96) and (2.97), we have
$$\varphi^{0,\varepsilon} z^0(u(\cdot)) + \langle \varphi^\varepsilon, z(\cdot;u(\cdot)) - (q(\cdot) - \bar y(\cdot))\rangle \ge -\delta_\varepsilon, \quad \forall q(\cdot)\in Q,\ u(\cdot)\in\mathcal{U}, \eqno(2.100)$$
with $\delta_\varepsilon \to 0$ as $\varepsilon \to 0$. Now, since the set $\mathcal{R} - Q$ is finite codimensional in $Y$, as in [26], we can find a sequence along which
(2.101)
holds. From (2.101), we have
$$\varphi^0 z^0(u(\cdot)) + \langle \varphi, z(\cdot;u(\cdot)) - (q(\cdot) - \bar y(\cdot))\rangle \ge 0, \quad \forall q(\cdot)\in Q,\ u(\cdot)\in\mathcal{U}. \eqno(2.102)$$
Thus, if we let $u(\cdot) = \bar u(\cdot)$, then the above gives the transversality condition (2.54). Now, we let
$$\psi^0 = -\varphi^0 \in [-1,0]. \eqno(2.103)$$
By taking $q(\cdot) = \bar y(\cdot)$ in (2.102), we obtain
$$\varphi^0 z^0(u(\cdot)) + \langle \varphi, z(\cdot;u(\cdot))\rangle \ge 0, \quad \forall u(\cdot)\in\mathcal{U}. \eqno(2.104)$$
We note that $\varphi \in Y^* \subset W^{-1,p'}(\Omega)$. Thus, by Lemma 2.1, (2.55) admits a unique solution $\psi(\cdot) \in W_0^{1,p'}(\Omega)$. By some direct computation, we can reduce (2.104) to
$$\int_\Omega \Big\{\psi^0\big[f^0(x,\bar y(x),u(x)) - f^0(x,\bar y(x),\bar u(x))\big] + \psi(x)\big[f(x,\bar y(x),u(x)) - f(x,\bar y(x),\bar u(x))\big]\Big\}\,dx \le 0, \quad \forall u(\cdot)\in\mathcal{U}. \eqno(2.105)$$
Hence, (2.56) follows. From (2.55), we see that the pair $(\psi^0,\psi(\cdot)) \ne 0$. Otherwise, we would end up with $(\varphi^0,\varphi) = 0$, which contradicts (2.101). Thus, we complete the proof for the case that $Y^*$ is strictly convex. Now for the general case. By the separability of $Y$, we know that there exists an equivalent norm under which the dual is strictly convex ([13]; see also [26]). This fact can be stated as follows. Let $|\cdot|_1$ be the original norm of $Y$ and $|\cdot|_2$ be the new norm of $Y$ under which the dual of $Y$ is strictly convex. Then the equivalence of these two norms implies that
there exists a linear bounded operator $T: (Y,|\cdot|_1) \to (Y,|\cdot|_2)$ such that for some constant $C$,
(2.106)
Then we start with the problem in $(Y,|\cdot|_2)$ and with the state constraint set $TQ$ (instead of $Q$). We see that $TQ$ is still closed and convex in $Y$. Then, we can still obtain $(\varphi^0,\varphi) \in [0,1]\times (Y,|\cdot|_2)^*$, not both zero, such that
$$\varphi^0 z^0(u(\cdot)) + \langle \varphi, z(\cdot;u(\cdot)) - (h(\cdot) - \bar y(\cdot))\rangle_2 \ge 0, \quad \forall h(\cdot)\in Q,\ u(\cdot)\in\mathcal{U}, \eqno(2.107)$$
where $\langle\cdot,\cdot\rangle_2$ represents the duality between $(Y,|\cdot|_2)$ and $(Y,|\cdot|_2)^*$. Then, via the operator $T$, we can go back to $(Y,|\cdot|_1)$ to obtain all the conclusions.
Remark 2.9. We should note that in the proof of Theorem 2.8, we have only used Lemmas 2.1 and 2.3 with $b_i = c_i = f_i = 0$. By [31], it is easy to see that in this case, to have the representation (2.17), we only need $a_{ij}(\cdot) \in L^\infty(\Omega)$. Thus, if we are dealing with the case $Y \supset W_0^{1,2}(\Omega)$, then the solvability of the adjoint system can be obtained via the $L^2$ theory alone. As a consequence, $a_{ij}(\cdot) \in L^\infty(\Omega)$ is enough. However, if $Y \supset W_0^{1,p}(\Omega)$ with $p > 2$, then, to ensure the solvability of the adjoint system (2.55), we do need $a_{ij}(\cdot) \in C(\overline\Omega)$. Also, it is easy to see that our result holds for the case when the operator $A$ is replaced by $\mathcal{A}$ of the form (2.4). In this case, the only difference is that in the adjoint equation (2.55), $A$ is replaced by $\mathcal{A}^*$.

Remark 2.10. Due to the fact that $U$ is not necessarily convex, we need to use the spike variation for the control. This idea originated in [30] for finite dimensional cases. Here, the modified spike variation technique for infinite dimensional cases was first used in [24] for some linear elliptic systems with no state constraints and in [25] for abstract evolution systems (see also [16, 26, 35]). We have seen that the Green's function for elliptic equations and the vector valued measure theory, which lead to Lemma 2.5, play crucial roles in our approach. We should note that due to the state constraint (1.3), we have to work with the penalty functional $F_\varepsilon$ in which $d_Q(\cdot)$, a distance induced by the norm of $Y$, is involved. Thus, we do need the expansion (2.88). To obtain this, we have to have the remainder term $r^\rho(\cdot)$ appearing in (2.39) satisfy (2.40). As far as Ekeland's variational principle is concerned, it is now a standard approach for treating problems with constraints.
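As a one-dimensional illustration of how Ekeland's variational principle operates (our sketch, not from the paper): $F(x) = e^{-x}$ on $[0,\infty)$ has $\inf F = 0$, which is not attained; given the $\varepsilon$-minimizer $x_\varepsilon = \log(1/\varepsilon)$, the principle produces a nearby point $\bar x$ that exactly minimizes the penalized functional $x \mapsto F(x) + \sqrt{\varepsilon}\,|x - \bar x|$. Here $\bar x = x_\varepsilon$ itself works, since $|F'(x)| \le \varepsilon \le \sqrt{\varepsilon}$ for $x \ge x_\varepsilon$.

```python
import numpy as np

# Ekeland's variational principle, 1-D illustration (ours, not the paper's):
# F(x) = exp(-x) on [0, inf) has inf F = 0, never attained.  With the
# eps-minimizer x_eps = log(1/eps), the point x_bar = x_eps exactly minimizes
# the penalized functional F(x) + sqrt(eps) * |x - x_bar|.

F = lambda x: np.exp(-x)
eps = 1e-4
x_eps = np.log(1.0 / eps)       # F(x_eps) = eps <= inf F + eps
x_bar = x_eps                   # the point produced by the principle

xs = np.linspace(0.0, 50.0, 200001)
penalized = F(xs) + np.sqrt(eps) * np.abs(xs - x_bar)
ok = bool(penalized.min() >= F(x_bar) - 1e-12)
print(ok)                       # prints True
```

The same mechanism is what converts the approximate minimization produced by the penalty functional $F_\varepsilon$ into the exact optimality relations used in the proof of Theorem 2.8.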
2.4. The state constraint sets and the properties of $\varphi$. In this subsection, we look at various possibilities for the state constraint sets $Q \subset Y$ covered by our result. We will also make some analysis of the functional $\varphi$. By the way, it is important to know [26] that if one of $Q$ and $\mathcal{R}$ is finite codimensional in $Y$, then so is $\mathcal{R} - Q$.
1) Let $Y = C(\overline\Omega)$, $x_i \in \Omega$, $a_i \in \mathbb{R}$, $1\le i\le m$. We take $Q$ as in (1.4):
$$Q = \{y(\cdot) \in C(\overline\Omega): y(x_i) = a_i,\ 1\le i\le m\}. \eqno(2.108)$$
This gives finitely many pointwise constraints for the states. It is easy to see that the codimension of $Q$ in $Y$ satisfies
$$\mathrm{codim}_Y Q \le m. \eqno(2.109)$$
Thus, our result is applicable. In this case, it is clear that $\varphi$ is a signed measure with support in the set $\{x_i: 1\le i\le m\}$. This is a case discussed in [5, 9].
2) Let $Y = C(\overline\Omega)$ and let
$$Q = \{y(\cdot) \in C(\overline\Omega): y(x) \ge 0,\ \forall x\in\overline\Omega\}. \eqno(2.110)$$
It is clear that
$$\mathrm{codim}_Y Q = 0. \eqno(2.111)$$
In this case, $\varphi$ is a nonpositive measure concentrated on the set $\{x\in\overline\Omega: \bar y(x) = 0\}$.
3) Let $Y = L^p(\Omega)$, $1\le p<\infty$, $a_i \in \mathbb{R}$, $h_i(\cdot) \in L^{p'}(\Omega)$, $1\le i\le m$, $1/p + 1/p' = 1$. We take
$$Q = \Big\{y(\cdot) \in L^p(\Omega): \int_\Omega y(x)h_i(x)\,dx = a_i,\ 1\le i\le m\Big\} \eqno(2.112)$$
and suppose it is nonempty. For such a $Q$, we also have (2.109). In this case, $\varphi \in L^{p'}(\Omega)$ with the property that
$$\int_\Omega \varphi(x)w(x)\,dx \le 0, \quad \forall w(\cdot)\in L^p(\Omega) \ \text{with} \ \int_\Omega w(x)h_i(x)\,dx = 0,\ 1\le i\le m. \eqno(2.113)$$
Thus, we have
$$\varphi(x) = \sum_{i=1}^{m} \lambda_i h_i(x), \quad \text{a.e. } x\in\Omega, \eqno(2.114)$$
with some $\lambda_i \in \mathbb{R}$, $1\le i\le m$.
4) Let $Y = W^{1,p}(\Omega)$, $1 < p < \infty$. We take $Q$ given by (2.115). For such a $Q$, (2.111) holds. In this case, $\varphi$ has no obvious expression.
5) Let $Y = C(\overline\Omega)$, $h_i(\cdot) \in L^1(\Omega)$ and $a_i \in \mathbb{R}$ ($1\le i\le m$) satisfying (2.116). Let
$$Q = \Big\{y(\cdot) \in Y: \int_\Omega y(x)h_i(x)\,dx = a_i,\ 1\le i\le m,\ |y(x)| \le 1,\ \forall x\in\overline\Omega\Big\}. \eqno(2.117)$$
By the Hahn–Banach theorem [38], we know that under condition (2.116), there exists a $\hat y(\cdot) \in L^\infty(\Omega)$ such that
$$\int_\Omega \hat y(x)h_i(x)\,dx = a_i, \quad 1\le i\le m. \eqno(2.118)$$
Now, we suppose a little more, namely that the set $Q$ is nonempty (i.e., the moment problem (2.118) admits a solution in $Y$). Then, we claim that the codimension of $Q$ in $Y$ is no more than $m$. To show this, we let $T: L^\infty(\Omega) \to \mathbb{R}^m$ be defined by
$$Tz(\cdot) = \int_\Omega z(x)h(x)\,dx, \quad \forall z(\cdot) \in L^\infty(\Omega), \eqno(2.119)$$
with $h(\cdot) = (h_1(\cdot),\dots,h_m(\cdot))$. Denote $a = (a_1,\dots,a_m)$. By the nonemptiness of $Q$, we see that there exists a $\bar z(\cdot) \in T^{-1}(a)$. Then, we see that $T^{-1}(a) - \bar z(\cdot)$ is a closed subspace of $L^\infty(\Omega)$. Let
By the Hahn–Banach theorem [38] again, there exists an $f \in L^\infty(\Omega)^*$, with unit norm, such that
$$f(-\bar z(\cdot)) = d, \eqno(2.121)$$
$$f\big(T^{-1}(a) - \bar z(\cdot)\big) = 0. \eqno(2.122)$$
Then, by the definition of $T^{-1}(a) - \bar z(\cdot)$, we see that
$$f\big|_Y = \sum_{i=1}^{m} \lambda_i h_i(\cdot)\,dx \eqno(2.123)$$
for some $\lambda_i \in \mathbb{R}$ ($1\le i\le m$). Then, by condition (2.116), we have
$$d = -\sum_{i=1}^{m}\lambda_i\int_\Omega \bar z(x)h_i(x)\,dx = -\sum_{i=1}^{m}\lambda_i a_i < \Big\|\sum_{i=1}^{m}\lambda_i h_i\Big\|_{L^1(\Omega)} \le \|f\| = 1. \eqno(2.124)$$
Hence, there exists a $\hat z(\cdot) \in T^{-1}(a)\cap Y$ such that
$$|\hat z(x)| < 1, \quad \forall x\in\overline\Omega. \eqno(2.125)$$
Then, as in example 3), we can conclude that (2.109) holds. A similar case was mentioned in [5].
We have seen that it is possible for us to cook up various feasible state constraints. We notice that, roughly speaking, the constraint set $Q$ will be finite codimensional in $Y$ provided it is constructed through a finite number of equality constraints and possibly any number of inequalities (which should not imply infinitely many equalities; e.g., $y(x) \le 0$ and $y(x) \ge 0$ for all $x\in\Omega$ imply $y(x) = 0$ for all $x\in\Omega$, and this is not allowed). It would be interesting to have an example for which $\mathcal{R} - Q$ is finite codimensional in $Y$, but neither $\mathcal{R}$ nor $Q$ is. We are not able to construct such an example at the present time.
3. Optimal control of semilinear variational inequalities. This section is devoted to the study of Problem SV; namely, the problem governed by semilinear variational inequalities with distributed controls and state constraints. For convenience, let us rewrite (1.1) here:
$$\begin{cases} Ay(x) + \beta(y(x)) \ni f(x,y(x),u(x)) & \text{in } \mathcal{D}'(\Omega),\\ y\big|_{\partial\Omega} = 0. \end{cases} \eqno(3.1)$$
Let us make the following assumptions, which are comparable with (E1)–(E4).
(V1) The same as (E1).
(V2) The operator $A$ is defined by (1.2) with $a_{ij}(\cdot) \in C^1(\overline\Omega)$, $1\le i,j\le n$, and for some $\alpha > 0$, (2.2) holds.
(V3) $\beta \subset \mathbb{R}\times\mathbb{R}$ is a maximal monotone graph with $0 \in \mathrm{Dom}(\beta)$.
(V4) The same as (E3).
(V5) $Y$ is a separable Banach space containing $W_0^{1,2}(\Omega)$ and $Q$ is a closed and convex subset of $Y$.
Here, for the operator $A$, we have assumed a little more than in §2. This will give a little more regularity of the solution $y(\cdot)$ of (3.1). We also see that $Y$ is different from the one given in §2. This is not satisfactory since it excludes $Y$ being $C(\overline\Omega)$ (for $n \ge 2$). At the present time, we are not able to obtain similar results for variational inequalities with $Y$ being the same as that given in §2. We will come back to this point later. As in [7], we have the following basic result.
Proposition 3.1. Let (V1)–(V4) hold. Then, for any $p \in [1,\infty)$ and any $u(\cdot) \in \mathcal{U}$, (3.1) admits a unique solution $y(\cdot) = y(\cdot;u(\cdot)) \in W^{2,p}(\Omega)\cap W_0^{1,2}(\Omega)$ and
$$\|y(\cdot;u(\cdot))\|_{W^{2,p}(\Omega)} \le C, \quad \forall u(\cdot)\in\mathcal{U}, \eqno(3.2)$$
where $C$ is a constant independent of $u(\cdot) \in \mathcal{U}$.
We let $\mathcal{A}$ be the set of all pairs $(y(\cdot), u(\cdot)) \in Y \times \mathcal{U}$ satisfying (3.1) and let $\mathcal{A}_Q$ be the set of all pairs $(y(\cdot), u(\cdot)) \in \mathcal{A}$ with the state constraint (1.3) satisfied. Our next assumption, (V6), concerns the stability of the optimal cost with respect to variations of the state constraint.
The above (V6) is technically necessary in our approach, due to the fact that we have to regularize the state equation in the presence of a state constraint. This condition was used in [36] for treating problems involving nonsmooth evolution systems with state constraints. Such a condition restricts the generality of our problem. However, for many interesting problems, (V6) actually holds. Let us point out some of them. The easiest one is the case $Q = Y$; i.e., there are no state constraints. In this case, $\mathcal{A} = \mathcal{A}_Q$ and thus (3.4) is trivially valid. Second, let $U$ be a convex set in $\mathbb{R}^m$, let $f^0(x, y, u)$ be convex (and coercive if $U$ is unbounded) in $u \in U$ and
\[
f(x, y, u) = f_1(x, y) + f_2(x, y)\, u, \qquad \forall (x, y, u) \in \Omega \times \mathbb{R} \times U.
\tag{3.5}
\]
Then, for any $(y_k(\cdot), u_k(\cdot)) \in \mathcal{A}$ with (3.3) holding, we may assume
\[
\begin{cases}
u_k(\cdot) \rightharpoonup \bar u(\cdot) & \text{in } L^\infty(\Omega; \mathbb{R}^m),\\
y_k(\cdot) \rightharpoonup \bar y(\cdot) & \text{in } W^{2,p}(\Omega), \ p \ge 1,\\
y_k(\cdot) \to \bar y(\cdot) & \text{in } C(\bar\Omega),
\end{cases}
\tag{3.6}
\]
and $(\bar y(\cdot), \bar u(\cdot)) \in \mathcal{A}_Q$. Thus, by the convexity of $f^0(x, y, \cdot)$ and Mazur's theorem [38], together with Fatou's lemma, we see that (3.7) holds. Thus, (V6) holds in this case. Finally, let us give the following more general result.
Proposition 3.2. Let (V1)-(V5) hold. Let
\[
\Lambda(x, y) = \{ (\lambda^0, \lambda) \in \mathbb{R}^2 : \lambda^0 \ge f^0(x, y, u), \ \lambda = f(x, y, u), \ \text{for some } u \in U \}.
\tag{3.8}
\]
Assume that for any $y(\cdot) \in Y$, $x \mapsto \Lambda(x, y(x))$ is a measurable multifunction [12, 37] (this is the case if $\Lambda(\cdot, \cdot)$ is, say, upper semicontinuous [4, 12, 37]) and takes convex and closed set values. Then, (V6) holds.
Proof. For any $(y_k(\cdot), u_k(\cdot)) \in \mathcal{A}$, by (V1)-(V4), we may let
\[
\begin{cases}
y_k(\cdot) \to \bar y(\cdot) & \text{in } C(\bar\Omega),\\
f(\cdot, y_k(\cdot), u_k(\cdot)) \rightharpoonup \bar f(\cdot) & \text{in } L^p(\Omega),\\
f^0(\cdot, y_k(\cdot), u_k(\cdot)) \rightharpoonup \bar f^0(\cdot) & \text{in } L^p(\Omega).
\end{cases}
\tag{3.9}
\]
Then, it is clear that
\[
\begin{cases}
A\bar y(x) + \beta(\bar y(x)) \ni \bar f(x) & \text{in } \mathcal{D}'(\Omega),\\
\bar y|_{\partial\Omega} = 0.
\end{cases}
\tag{3.10}
\]
On the other hand, by Mazur's theorem, we have
\[
(\bar f^0(x), \bar f(x)) \in \overline{\mathrm{co}}\, \Lambda(x, \bar y(x)) = \Lambda(x, \bar y(x)), \qquad \text{a.e. } x \in \Omega.
\tag{3.11}
\]
Then, by Filippov's lemma [4, 11, 12], there exists a $\bar u(\cdot) \in \mathcal{U}$ such that
\[
\begin{cases}
\bar f^0(x) \ge f^0(x, \bar y(x), \bar u(x)),\\
\bar f(x) = f(x, \bar y(x), \bar u(x)),
\end{cases}
\qquad \text{a.e. } x \in \Omega.
\tag{3.12}
\]
Hence, if we have (3.3) for the sequence $\{y_k(\cdot)\}$, then, from (3.10) and (3.12), we obtain
\[
(\bar y(\cdot), \bar u(\cdot)) \in \mathcal{A}_Q.
\tag{3.13}
\]
Thus, it follows from Fatou's lemma that
\[
\varliminf_{k \to \infty} J(y_k(\cdot), u_k(\cdot)) \ge \int_\Omega \bar f^0(x)\,dx \ge \int_\Omega f^0(x, \bar y(x), \bar u(x))\,dx
= J(\bar y(\cdot), \bar u(\cdot)) \ge \inf_{\mathcal{A}_Q} J(y(\cdot), u(\cdot)).
\tag{3.14}
\]
This gives (V6). $\square$
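As a side remark (a sketch, not part of the original proof), one can check directly that the affine case (3.5), with $f^0(x, y, \cdot)$ convex on a convex $U \subseteq \mathbb{R}^m$, satisfies the convexity hypothesis of Proposition 3.2.

```latex
% Convexity of \Lambda(x,y) under (3.5): let (\lambda_i^0, \lambda_i) \in \Lambda(x,y)
% be realized by u_i \in U (i = 1, 2), take \theta \in [0,1] and set
% u_\theta := \theta u_1 + (1-\theta) u_2 \in U (U convex). Then
\theta \lambda_1 + (1-\theta)\lambda_2
   = f_1(x,y) + f_2(x,y)\, u_\theta
   = f(x, y, u_\theta),
% and, by convexity of f^0(x,y,\cdot),
\theta \lambda_1^0 + (1-\theta)\lambda_2^0
   \;\ge\; \theta f^0(x,y,u_1) + (1-\theta) f^0(x,y,u_2)
   \;\ge\; f^0(x, y, u_\theta),
% so the convex combination again lies in \Lambda(x,y), realized by u_\theta.
```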
We see that the condition assumed in the above proposition is essentially the same as that assumed in the results on the existence of optimal pairs [4, 11, 37]. This shows that (V6) is very closely related to the existence theory and thus, in some sense, it is a reasonable hypothesis. We will see the role played by (V6) in the proof of our main result of this section. As stated in §1, Problem SV is concerned with finding a pair $(\bar y(\cdot), \bar u(\cdot)) \in \mathcal{A}_Q$ such that the cost functional $J(y(\cdot), u(\cdot))$ given by (1.6) is minimized. Our main result of this section is the following theorem.
Theorem 3.3. Let (V1)-(V6) hold and let $Q$ be finite codimensional in $Y$. Let $(\bar y(\cdot), \bar u(\cdot)) \in \mathcal{A}_Q$ be an optimal pair. Then, there exist a pair $(\psi^0, \varphi) \in ([-1, 0] \times Y^*) \setminus \{0\}$, a $\psi(\cdot) \in W_0^{1,2}(\Omega)$ and a $\mu \in Y^*$ such that
\[
\langle \varphi, z(\cdot) - \bar y(\cdot) \rangle_{Y^*, Y} \le 0, \qquad \forall z(\cdot) \in Q,
\tag{3.15}
\]
\[
\begin{cases}
A\psi(x) + \mu = f_y(x, \bar y(x), \bar u(x))\psi(x) + \psi^0 f^0_y(x, \bar y(x), \bar u(x)) - \varphi & \text{in } \mathcal{D}'(\Omega),\\
\psi|_{\partial\Omega} = 0,
\end{cases}
\tag{3.16}
\]
\[
H(x, \bar y(x), \bar u(x), \psi^0, \psi(x)) = \max_{u \in U} H(x, \bar y(x), u, \psi^0, \psi(x)), \qquad \text{a.e. } x \in \Omega,
\tag{3.17}
\]
with the Hamiltonian $H$ given by (2.35). Moreover, if the additional condition (3.18) holds, then the conclusion (3.19) holds as well.
Before getting into the proof, let us introduce the following regularization of the state equation. For $\varepsilon > 0$, we consider the semilinear equation
\[
\begin{cases}
Ay_\varepsilon(x) + \beta_\varepsilon(y_\varepsilon(x)) = f(x, y_\varepsilon(x), u(x)) & \text{in } \Omega,\\
y_\varepsilon|_{\partial\Omega} = 0.
\end{cases}
\tag{3.20}
\]
Here, $\beta_\varepsilon : \mathrm{Dom}(\beta_\varepsilon) \subseteq \mathbb{R} \to \mathbb{R}$ is a smooth nondecreasing function satisfying
\[
\begin{cases}
\mathrm{Dom}(\beta_\varepsilon) \supseteq \mathrm{Dom}(\beta),\\
z_1 \le \beta_\varepsilon(y) \le z_2, \qquad \forall z_1 \in \beta(y - \varepsilon), \ z_2 \in \beta(y + \varepsilon).
\end{cases}
\tag{3.21}
\]
From [7], such a $\beta_\varepsilon(\cdot)$ exists. By (V1)-(V4), for any $u(\cdot) \in \mathcal{U}$ and $p \in [1, \infty)$, there exists a unique $y_\varepsilon(\cdot) \equiv y_\varepsilon(\cdot; u(\cdot)) \in W^{2,p}(\Omega) \cap W_0^{1,2}(\Omega)$ solving (3.20). Moreover, we have the following result.
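To fix ideas, here is one elementary choice of $\beta_\varepsilon$ compatible with (3.21) (a sketch, not the specific construction of [7]) for a Heaviside-type graph with a single jump at the origin.

```latex
% Model graph: \beta(r) = \{0\} for r < 0, \ [0,1] for r = 0, \ \{1\} for r > 0.
% A C^1 nondecreasing regularization (a C^\infty variant can be obtained
% by mollification):
\beta_\varepsilon(r) =
\begin{cases}
0, & r \le -\varepsilon,\\[2pt]
\tfrac12\Bigl(1 + \sin\dfrac{\pi r}{2\varepsilon}\Bigr), & |r| < \varepsilon,\\[4pt]
1, & r \ge \varepsilon.
\end{cases}
% Check of (3.21): for r \ge \varepsilon, every z_1 \in \beta(r-\varepsilon)
% satisfies z_1 \le 1 = \beta_\varepsilon(r) and \beta(r+\varepsilon) = \{1\};
% symmetrically \beta_\varepsilon(r) = 0 for r \le -\varepsilon; and on
% |r| < \varepsilon the values stay between z_1 = 0 \in \beta(r-\varepsilon)
% and z_2 = 1 \in \beta(r+\varepsilon).
```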
Proposition 3.4. For any $p \in [1, \infty)$, there exists a constant $C > 0$ such that for any $u(\cdot) \in \mathcal{U}$ and any $\varepsilon > 0$, the solutions $y_\varepsilon(\cdot)$ and $y(\cdot)$ of (3.20) and (3.1) satisfy
\[
\| y_\varepsilon(\cdot) \|_{W^{2,p}(\Omega)} \le C,
\tag{3.22}
\]
\[
\| y_\varepsilon(\cdot) - y(\cdot) \|_{L^\infty(\Omega)} \le C\varepsilon,
\tag{3.23}
\]
\[
\| y_\varepsilon(\cdot) - y(\cdot) \|_{W^{1,2}(\Omega)} \le C\sqrt{\varepsilon}.
\tag{3.24}
\]
Proof. By [7], we obtain (3.22) and (3.23). Then, by Nirenberg's inequality [1], we obtain (3.24).
Thus, denoting by $\bar J_\varepsilon$ the infimum of the cost for the regularized problem, we have $\bar J_\varepsilon \le J(y_\varepsilon(\cdot), \bar u(\cdot))$. It follows (by Proposition 3.4) that
\[
\varlimsup_{\varepsilon \to 0} \bar J_\varepsilon \le J(\bar y(\cdot), \bar u(\cdot)) = \inf_{\mathcal{A}_Q} J(y(\cdot), u(\cdot)) \equiv \bar J.
\tag{3.33}
\]
On the other hand, for any $\varepsilon > 0$, one can find $(y_\varepsilon(\cdot), u_\varepsilon(\cdot)) \in \mathcal{A}_{Q_\varepsilon}$ such that (3.34) holds.
Let $(\tilde y_\varepsilon(\cdot), u_\varepsilon(\cdot)) \in \mathcal{A}$ (i.e., $\tilde y_\varepsilon(\cdot)$ is the solution of (3.1) corresponding to $u_\varepsilon(\cdot)$). Then, by Proposition 3.4, we have
\[
\| \tilde y_\varepsilon - y_\varepsilon \|_{W^{1,2}(\Omega)} \le C\sqrt{\varepsilon}, \qquad \forall \varepsilon > 0.
\tag{3.35}
\]
Hence, by $y_\varepsilon(\cdot) \in Q_\varepsilon$ and (3.35), we see that (3.36) holds. Then, by (V6), we obtain (3.37). Using the continuity of $f^0$, (3.34) and (3.35), we obtain
\[
\varliminf_{\varepsilon \to 0} \bar J_\varepsilon \ge \varliminf_{\varepsilon \to 0} J(\tilde y_\varepsilon(\cdot), u_\varepsilon(\cdot)) \ge \bar J.
\tag{3.38}
\]
Then, (3.30) follows from (3.33) and (3.38).
Proof of Theorem 3.3. Let $\varepsilon > 0$ be fixed. For any $u(\cdot) \in \mathcal{U}$, let $y_\varepsilon(\cdot; u(\cdot))$ be the unique solution of (3.20) corresponding to $u(\cdot)$. We define the corresponding functional $F_\varepsilon(u(\cdot))$.
Hence, by Ekeland's variational principle, there exists a $u^\varepsilon(\cdot) \in \mathcal{U}$ such that
\[
d(u^\varepsilon(\cdot), \bar u(\cdot)) \le \sqrt{\sigma(\varepsilon)}, \qquad
F_\varepsilon(u^\varepsilon(\cdot)) \le F_\varepsilon(\bar u(\cdot)),
\tag{3.41-3.42}
\]
\[
F_\varepsilon(u(\cdot)) - F_\varepsilon(u^\varepsilon(\cdot)) \ge -\sqrt{\sigma(\varepsilon)}\, d(u^\varepsilon(\cdot), u(\cdot)), \qquad \forall u(\cdot) \in \mathcal{U}.
\tag{3.43}
\]
Then, in almost the same way as in the previous section, we obtain the following (note that $\varepsilon > 0$ is fixed). Let $y^\varepsilon(\cdot) = y_\varepsilon(\cdot; u^\varepsilon(\cdot))$ and $y^{0,\varepsilon}(\cdot) = y^0_\varepsilon(u^\varepsilon(\cdot))$. Then, there exist $(\psi^0_\varepsilon, \varphi_\varepsilon)$ and $\psi_\varepsilon$ satisfying (3.44)-(3.48), together with
\[
\int_\Omega \bigl[ H(x, y^\varepsilon(x), u(x), \psi^0_\varepsilon, \psi_\varepsilon(x))
- H(x, y^\varepsilon(x), u^\varepsilon(x), \psi^0_\varepsilon, \psi_\varepsilon(x)) \bigr]\,dx
\le C\sqrt{\sigma(\varepsilon)}, \qquad \forall u(\cdot) \in \mathcal{U},
\tag{3.49}
\]
with the Hamiltonian $H$ defined by (2.35). This can be regarded as an approximating maximum condition. Our next goal is to pass to the limit to get the final result. To this end, we notice that $\varphi_\varepsilon \in Y^* \subseteq W^{-1,2}(\Omega)$; thus, from (3.48) and (3.44), we obtain (3.50). This implies
\[
\| \psi_\varepsilon \|_{W^{1,2}(\Omega)} \le C, \qquad \forall \varepsilon > 0.
\tag{3.51}
\]
Then, from (3.48), we get
\[
\| \beta'_\varepsilon(y^\varepsilon) \psi_\varepsilon \|_{W^{-1,2}(\Omega)} \le C.
\tag{3.52}
\]
Hence, we may let
\[
\begin{cases}
\psi_\varepsilon \rightharpoonup \psi & \text{in } W^{1,2}(\Omega),\\
\beta'_\varepsilon(y^\varepsilon)\psi_\varepsilon \rightharpoonup \mu & \text{in } W^{-1,2}(\Omega),\\
\psi^0_\varepsilon \to \psi^0,\\
\varphi_\varepsilon \rightharpoonup \varphi & \text{in } Y^*.
\end{cases}
\tag{3.53}
\]
Clearly, $\psi$ satisfies (3.16). By the finite codimensionality of $Q$ and (3.44), we know that the pair $(\psi^0, \varphi) \in \mathbb{R} \times Y^*$ is nontrivial. Letting $\varepsilon \to 0$ in (3.49), we obtain (3.17). Finally, if (3.18) holds, then, together with (3.15), we see that (3.19) holds. $\square$
We see that in the last step of the proof, we need estimates on $\psi_\varepsilon$ and $\beta'_\varepsilon(y^\varepsilon)\psi_\varepsilon$ that are uniform in $\varepsilon$. We are only able to obtain such estimates, like (3.51) and (3.52), using $L^2$ theory. This is the reason that we have to restrict ourselves to the case $Y \supseteq W_0^{1,2}(\Omega)$. Also, we see that (V6) ensures the convergence $\bar J_\varepsilon \to \bar J$, which leads to $\sigma(\varepsilon) \to 0$ in (3.42). This is crucial in applying the Ekeland variational principle.
Now, let us give an example to illustrate the meaning of condition (3.18). We let $n = 1$. Then $Y \supseteq W_0^{1,2}(\Omega) \subset C(\bar\Omega)$. We let
\[
Q = \{ y(\cdot) \in Y : y(x_i) = a_i, \ 1 \le i \le m \},
\tag{3.54}
\]
with $x_i \in \Omega \subset \mathbb{R}$ and $a_i \in \mathbb{R}$ ($1 \le i \le m$). Then, from (3.18), we see that (note $\mu \in W^{-1,2}(\Omega) \subset C(\bar\Omega)^*$) $\mu$ is a signed measure with
\[
\mu(\{x_i\}) = 0, \qquad 1 \le i \le m.
\tag{3.55}
\]
On the other hand, from (3.15), we know that $\varphi \in Y^* \subset C(\bar\Omega)^*$ is also a signed measure with
\[
\mathrm{supp}\,\varphi \subseteq \{ x_i : 1 \le i \le m \}.
\tag{3.56}
\]
Thus, (3.18) means that
\[
\mathrm{supp}\,\varphi \cap \mathrm{supp}\,\mu = \emptyset.
\tag{3.57}
\]
We also see that (by (3.53) and the convergence $y^\varepsilon(\cdot) \to \bar y(\cdot)$ in $C(\bar\Omega)$)
\[
\mathrm{supp}\,\mu \subseteq \{ x \in \bar\Omega : \bar y(x) \in \mathrm{Jump}(\beta) \},
\tag{3.58}
\]
where $\mathrm{Jump}(\beta)$ is the set of all points in $\mathrm{Dom}(\beta)$ at which $\beta$ is not single-valued. Thus, (3.57) is implied by
\[
\{ x_i : 1 \le i \le m \} \cap \{ x \in \bar\Omega : \bar y(x) \in \mathrm{Jump}(\beta) \} = \emptyset,
\tag{3.59}
\]
or
\[
\bar y(x_i) \notin \mathrm{Jump}(\beta), \qquad 1 \le i \le m.
\tag{3.60}
\]
A condition similar to (3.60) was used in [33].
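For example, for the Heaviside-type graph $\beta(r) = \{0\}$ for $r < 0$, $[0,1]$ at $r = 0$, $\{1\}$ for $r > 0$ (an illustrative choice, not a graph singled out by the paper), the nondegeneracy condition (3.60) reads:

```latex
% For this model graph the only multivalued point is the origin:
\mathrm{Jump}(\beta) = \{0\},
\qquad \text{so (3.60) becomes} \qquad
\bar y(x_i) \ne 0, \quad 1 \le i \le m,
% i.e., none of the interpolation points x_i may lie on the set
% where the optimal state \bar y hits the jump of the graph \beta.
```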
Acknowledgment. This work was partially supported by the NSF of China under grant 0188416 and the Chinese State Education Commission Science Foundation under grant 9024617. Part of the work on this paper was done while the author was a visiting scholar at INRIA Sophia-Antipolis and Rocquencourt, France. The author would like to thank Professors P. Bernhard, J.F. Bonnans and G. Chavent for their hospitality and many interesting discussions. The stimulating and suggestive discussions with Professor J.F. Bonnans on this work deserve a special acknowledgment.
REFERENCES
[1] R.A. Adams, "Sobolev Spaces," Academic Press, New York, 1975.
[2] S. Agmon, A. Douglis and L. Nirenberg, Estimates near the boundary for solutions of elliptic partial differential equations satisfying general boundary conditions I, Comm. Pure Appl. Math., 12 (1959), 623-727.
[3] V. Barbu, "Optimal Control of Variational Inequalities," Pitman, Boston, 1984.
[4] L.D. Berkovitz, "Optimal Control Theory," Springer-Verlag, New York, 1974.
[5] J.F. Bonnans and E. Casas, Optimal control of semilinear multistate systems with state constraints, SIAM J. Control & Optim., 27 (1989), 446-455.
[6] J.F. Bonnans and E. Casas, Un principe de Pontryagine pour le contrôle des systèmes semilinéaires elliptiques, J. Diff. Eqn., 90 (1991), 288-303.
[7] J.F. Bonnans and D. Tiba, Pontryagin's principle in the control of semilinear elliptic variational inequalities, Appl. Math. Optim., 23 (1991), 299-312.
[8] H. Brezis and W.A. Strauss, Semilinear second-order elliptic equations in $L^1$, J. Math. Soc. Japan, 25 (1973), 565-590.
[9] E. Casas, Control of an elliptic problem with pointwise state constraints, SIAM J. Control & Optim., 24 (1986), 1309-1318.
[10] E. Casas and L.A. Fernandez, Distributed control of systems governed by a general class of quasilinear elliptic equations, preprint.
[11] L. Cesari, "Optimization Theory and Applications, Problems with Ordinary Differential Equations," Springer-Verlag, New York, 1983.
[12] F.H. Clarke, "Optimization and Nonsmooth Analysis," Wiley, New York, 1983.
[13] M.M. Day, Strict convexity and smoothness of normed spaces, Trans. Amer. Math. Soc., 78 (1955), 516-528.
[14] J. Diestel and J.J. Uhl, Jr., "Vector Measures," AMS, Providence, R.I., 1977.
[15] I. Ekeland, Nonconvex minimization problems, Bull. Amer. Math. Soc. (New Series), 1 (1979), 443-474.
[16] H.O. Fattorini, A unified theory of necessary conditions for nonlinear nonconvex control systems, Appl. Math. & Optim., 15 (1987), 141-185.
[17] H. Frankowska, The maximum principle for an optimal solution to a differential inclusion with end point constraints, SIAM J. Control & Optim., 25 (1987), 145-157.
[18] A. Friedman, Optimal control for variational inequalities, SIAM J. Control & Optim., 24 (1986), 439-451.
[19] A. Friedman, S. Huang and J. Yong, Bang-bang optimal control for the dam problem, Appl. Math. Optim., 15 (1987), 65-85.
[20] A. Friedman, S. Huang and J. Yong, Optimal periodic control for the two-phase Stefan problem, SIAM J. Control & Optim., 26 (1988), 23-41.
[21] D. Gilbarg and N.S. Trudinger, "Elliptic Partial Differential Equations of Second Order," 2nd edition, Springer-Verlag, 1983.
[22] D. Kinderlehrer and G. Stampacchia, "An Introduction to Variational Inequalities and Their Applications," Academic Press, New York, 1980.
[23] O.A. Ladyzhenskaya and N.N. Ural'tseva, "Linear and Quasilinear Elliptic Equations," Academic Press, New York, 1968.
[24] X. Li, Vector-valued measure and the necessary conditions for the optimal control problems of linear systems, Proc. IFAC 3rd Symposium on Control of Distributed Parameter Systems, Toulouse, France, 1982.
[25] X. Li and Y. Yao, Maximum principle of distributed parameter systems with time lags, in "Distributed Parameter Systems," Lecture Notes in Control and Information Sciences, vol. 75, Springer-Verlag, New York, 1985, 410-427.
[26] X. Li and J. Yong, Necessary conditions of optimal control for distributed parameter systems, SIAM J. Control & Optim., 29 (1991), 895-908.
[27] J.L. Lions, "Optimal Control of Systems Governed by Partial Differential Equations," Springer-Verlag, New York, 1971.
[28] J.L. Lions, "Contrôle de systèmes distribués singuliers," Dunod, Paris, 1983.
[29] C. Miranda, "Partial Differential Equations of Elliptic Type," Springer-Verlag, New York, 1970.
[30] L.S. Pontryagin, V.G. Boltyanskii, R.V. Gamkrelidze and E.F. Mischenko, "Mathematical Theory of Optimal Processes," Wiley, New York, 1962.
[31] G. Stampacchia, Some limit cases of $L^p$-estimates for solutions of second order elliptic equations, Comm. Pure Appl. Math., 16 (1963), 505-510.
[32] G. Stampacchia, Le problème de Dirichlet pour les équations elliptiques du second ordre à coefficients discontinus, Ann. Inst. Fourier Grenoble, 15 (1965), 189-258.
[33] D. Tiba, Boundary control for a Stefan problem, in "Optimal Control of Partial Differential Equations," K.-H. Hoffmann and W. Krabs eds., Birkhäuser Verlag, 1983.
[34] G.M. Troianiello, "Elliptic Differential Equations and Obstacle Problems," Plenum Press, New York, 1987.
[35] Y. Yao, Optimal control for a class of elliptic equations, Control Theory & Appl., 1 (1984), 17-23 (in Chinese).
[36] J. Yong, A maximum principle for the optimal controls for a nonsmooth semilinear evolution system, in "Analysis and Optimization of Systems," A. Bensoussan and J.L. Lions eds., Lecture Notes in Control & Inform. Sci., vol. 144, Springer-Verlag, 1990, 559-569.
[37] J. Yong, Existence theory for optimal control of distributed parameter systems, Kodai Math. J., to appear.
[38] K. Yosida, "Functional Analysis," 6th ed., Springer-Verlag, Berlin, 1980.