Here we have used Definition 3.2.35 of the projection $P_h^{loc}$, which implies that $\int_{t^{n-1}}^{t^n}\langle r_p, v_{h,t}\rangle\,dt = 0$ and $(r_p^n, v^n) = 0$. Setting $v_h = r_{1h} \in U_h$ into (3.2.71), we use the incompressibility constraint to write $\int_{t^{n-1}}^{t^n} b(r_{1h}, \phi - \phi_{1h})\,dt = \int_{t^{n-1}}^{t^n} b(r_{1h}, \phi - q_h)\,dt$.
Summing inequalities (3.2.72), we obtain the estimate at the partition points and in $L^2[0,T;H^1(\Omega)]$ using the triangle inequality. Once the estimate for $\|r\|_{L^2[0,T;H^1(\Omega)]}$ is obtained, the estimate in $L^\infty[0,T;L^2(\Omega)]$ follows using the arguments of [32, Theorem 4.7], modified to handle the backwards-in-time Stokes equation.

Estimates (3) and (4): We turn our attention to the last two estimates. In order to obtain the improved rate in the $L^2[0,T;L^2(\Omega)]$ norm, we employ a duality argument to derive a better bound for the quantity $\|e_{1h}\|^2_{L^2[0,T;L^2(\Omega)]}$. For this purpose we generalize the duality argument of the proof of [14, Section 3] or [30, Lemma 4.3] in order to handle arbitrary order schemes and the discrete incompressibility constraint. We define a backwards-in-time evolutionary problem with right-hand side $e_{1h} \in L^2[0,T;L^2(\Omega)]$ and zero terminal data, i.e., for $n = 1,\dots,N$ and for all $v \in L^2[0,T;H^1(\Omega)] \cap H^1[0,T;H^{-1}(\Omega)]$, we seek $(z,\psi) \in W(0,T)\times L^2[0,T;L^2_0(\Omega)]$ satisfying
$$-z_t - \nu\Delta z + \nabla\psi = e_{1h}, \quad \operatorname{div} z = 0 \quad \text{in } (0,T)\times\Omega, \qquad z(T) = 0. \quad (3.2.73)$$
Note that since $e_{1h} \in L^\infty[0,T;W(\Omega)]$, Remark 2.2.8 implies that the following estimate holds:
$$\|z\|_{L^2[0,T;H^2(\Omega)]} + \|z_t\|_{L^2[0,T;L^2(\Omega)]} + \|\psi\|_{L^2[0,T;H^1(\Omega)]} \le C\|e_{1h}\|_{L^2[0,T;L^2(\Omega)]}. \quad (3.2.74)$$
The lack of regularity of the right-hand side of (3.2.73), due to the presence of discontinuities, implies that we cannot improve the regularity of $z$ on $[0,T]$. The associated discontinuous time-stepping scheme can be defined as follows: given terminal data $z^N_{h+} = 0$, we seek $(z_h,\psi_h) \in U_h \times Q_h$ such that for all $v_h \in \mathcal{P}_k[t^{n-1},t^n;Y_h]$, $q_h \in \mathcal{P}_k[t^{n-1},t^n;Q_h]$,
$$-(z^n_{h+}, v^n_h) + (z^{n-1}_{h+}, v^{n-1}_{h+}) + \int_{t^{n-1}}^{t^n}\big(\langle z_h, v_{h,t}\rangle + a(v_h, z_h) + b(v_h, \psi_h)\big)\,dt = \int_{t^{n-1}}^{t^n}(e_{1h}, v_h)\,dt, \qquad \int_{t^{n-1}}^{t^n} b(z_h, q_h)\,dt = 0. \quad (3.2.75)$$
Hence, using Lemma 3.1.23, we obtain $\|z_h\|_{L^\infty[0,T;H^1(\Omega)]} \le C_k\|e_{1h}\|_{L^2[0,T;L^2(\Omega)]}$. It is now clear that we have the following estimate for $z - z_h$, which is a straightforward application of the previous estimates in $L^2[0,T;H^1(\Omega)]$ and the approximation properties of the projections $P_h^{loc}$, $Q_h^{loc}$ of Lemma 3.2.37
206 3 Approximation and Numerical Analysis
(see for instance [32, Theorem 4.6]):
$$\nu\|z - z_h\|_{L^2[0,T;H^1(\Omega)]} \le C(h + \tau^{1/2})\big(\|z\|_{L^2[0,T;H^2(\Omega)]} + \|z_t\|_{L^2[0,T;L^2(\Omega)]} + \|\psi\|_{L^2[0,T;H^1(\Omega)]}\big) \le C(h + \tau^{1/2})\|e_{1h}\|_{L^2[0,T;L^2(\Omega)]}. \quad (3.2.76)$$
We note that the lack of regularity of the right-hand side restricts the rate of convergence to the rate of the lowest order scheme $l \ge 1$, $k = 0$, even if high order (in time) schemes are chosen. Setting $v_h = e_{1h}$ into (3.2.75) and using the fact that $\int_{t^{n-1}}^{t^n} b(e_{1h}, \psi_h)\,dt = 0$, we obtain
$$-(z^n_{h+}, e^n_{1h}) + \int_{t^{n-1}}^{t^n}\big(\langle z_h, e_{1h,t}\rangle + a(e_{1h}, z_h)\big)\,dt + (z^{n-1}_{h+}, e^{n-1}_{1h+}) = \int_{t^{n-1}}^{t^n}\|e_{1h}\|^2_{L^2(\Omega)}\,dt.$$
Integrating by parts in time, we deduce
$$-(z^n_{h+}, e^n_{1h}) + (z^n_h, e^n_{1h}) + \int_{t^{n-1}}^{t^n}\big(-(z_{h,t}, e_{1h}) + a(z_h, e_{1h})\big)\,dt = \int_{t^{n-1}}^{t^n}\|e_{1h}\|^2_{L^2(\Omega)}\,dt. \quad (3.2.77)$$
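The integration by parts used here is the standard per-interval identity, stated for completeness:

```latex
\int_{t^{n-1}}^{t^n} (z_h, e_{1h,t})\,dt
  = (z_h^{\,n}, e_{1h}^{\,n}) - (z_{h+}^{\,n-1}, e_{1h+}^{\,n-1})
  - \int_{t^{n-1}}^{t^n} (z_{h,t}, e_{1h})\,dt ,
```

so that substituting this into the previous relation cancels the $(z^{n-1}_{h+}, e^{n-1}_{1h+})$ terms and yields (3.2.77).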
Setting $v_h = z_h$ into (3.2.69), using $e = e_p + e_{1h}$, the definition of the projection $Q_h^{loc}$ of Definition 3.2.36, and the fact that $\int_{t^{n-1}}^{t^n} b(z_h, p - p_{1h})\,dt = \int_{t^{n-1}}^{t^n} b(z_h, p - q_h)\,dt$, we obtain
$$(e^n_{1h}, z^n_h) + \int_{t^{n-1}}^{t^n}\big(-(e_{1h}, z_{h,t}) + a(e_{1h}, z_h)\big)\,dt - (e^{n-1}_{1h}, z^{n-1}_{h+}) = -\int_{t^{n-1}}^{t^n}\big(a(e_p, z_h) + b(z_h, p - q_h)\big)\,dt. \quad (3.2.78)$$
Here we have also used the fact that the definition of the projection $Q_h^{loc}$ of Definition 3.2.36 implies that $(e^n_p, z^n_h) = 0$, $\int_{t^{n-1}}^{t^n}(e_p, v_{h,t})\,dt = 0$ and $(e^{n-1}_p, z^{n-1}_{h+}) = 0$. Using (3.2.77) to replace the first three terms of (3.2.78), we arrive at
$$(z^n_{h+}, e^n_{1h}) - (e^{n-1}_{1h}, z^{n-1}_{h+}) + \int_{t^{n-1}}^{t^n}\|e_{1h}\|^2_{L^2(\Omega)}\,dt = -\int_{t^{n-1}}^{t^n}\big(a(e_p, z_h) + b(z_h, p - q_h)\big)\,dt$$
$$= -\int_{t^{n-1}}^{t^n}\big(a(e_p, z_h - z) + a(e_p, z) + b(z_h - z, p - q_h)\big)\,dt$$
$$= -\int_{t^{n-1}}^{t^n}\big(a(e_p, z_h - z) + \nu(e_p, \Delta z) + b(z_h - z, p - q_h)\big)\,dt,$$
where in the last two equalities we have used integration by parts (in space) and the incompressibility constraint, which implies that $\int_{t^{n-1}}^{t^n} b(z, p - q_h)\,dt = 0$. Therefore,
$$\int_{t^{n-1}}^{t^n}\|e_{1h}\|^2_{L^2(\Omega)}\,dt + (z^n_{h+}, e^n_{1h}) - (e^{n-1}_{1h}, z^{n-1}_{h+}) \le \int_{t^{n-1}}^{t^n}\nu\|z_h - z\|_{H^1(\Omega)}\|e_p\|_{H^1(\Omega)}\,dt + \int_{t^{n-1}}^{t^n}\big(\|e_p\|_{L^2(\Omega)}\|\Delta z\|_{L^2(\Omega)} + \|z - z_h\|_{H^1(\Omega)}\|p - q_h\|_{L^2(\Omega)}\big)\,dt.$$
Then, summing the above inequalities, using the fact that $z^N_{h+} \equiv 0$ and $e^0_{1h} = 0$ (by definition), and rearranging terms, we obtain
$$\frac12\|e_{1h}\|^2_{L^2[0,T;L^2(\Omega)]} \le C\Big(\nu\|e_p\|_{L^2[0,T;L^2(\Omega)]}\|z\|_{L^2[0,T;H^2(\Omega)]} + \nu\|z_h - z\|_{L^2[0,T;H^1(\Omega)]}\big(\|e_p\|_{L^2[0,T;H^1(\Omega)]} + (1/\nu)\|p - q_h\|_{L^2[0,T;L^2(\Omega)]}\big)\Big)$$
$$\le C\Big(\nu\|e_p\|_{L^2[0,T;L^2(\Omega)]}\|e_{1h}\|_{L^2[0,T;L^2(\Omega)]} + (1/\nu)(h + \tau^{1/2})\|e_{1h}\|_{L^2[0,T;L^2(\Omega)]}\big(\|e_p\|_{L^2[0,T;H^1(\Omega)]} + (1/\nu)\|p - q_h\|_{L^2[0,T;L^2(\Omega)]}\big)\Big).$$
Here we have used the Cauchy-Schwarz inequality and the stability bounds (3.2.74) of the dual equation,
32 Error estimates 207
together with the error estimates (3.2.76) on $z_h - z$. Finally, the estimate on $\|r\|_{L^2[0,T;L^2(\Omega)]}$ follows by using a similar duality argument. $\square$
Remark 3.2.40. The combination of the last two theorems implies the "symmetric regularity free" structure of our estimate. In particular, suppose that the initial data $y_0 \in W(\Omega)$ and the forcing term $f \in L^2[0,T;H^{-1}(\Omega)]$, and define the natural energy norm $|(v_1,v_2)|_{W_S(0,T)} \equiv \|v_1\|_{W_S(0,T)} + \|v_2\|_{W_S(0,T)}$ endowed by the weak formulation. Then the estimate under minimal regularity assumptions can be written as follows:
$$|(e,r)|_{W_S(0,T)} \le C\big(|(e_p,r_p)|_{W_S(0,T)} + \|p - q_h\|_{L^2[0,T;L^2(\Omega)]} + \|\phi - q_h\|_{L^2[0,T;L^2(\Omega)]}\big).$$
The above estimate indicates that the error is as good as the approximation properties enable it to be under the natural parabolic regularity assumptions, and it can be viewed as the fully-discrete analogue of Céa's Lemma (see e.g. [34]). Hence the rates of convergence for $e$, $r$ depend only on the approximation and regularity results via the projection error $e_p$, as indicated in Lemma 3.2.37 and Remark 3.2.38. For example, if the Taylor-Hood element is used and $y \in L^2[0,T;V(\Omega)] \cap H^1[0,T;H^{-1}(\Omega)]$, $p \in L^2[0,T;L^2_0(\Omega)]$, then for $\tau \le Ch^2$ we obtain that
1. $\|e_p\|_{L^2[0,T;H^1(\Omega)]} \le C$, $\|p - q_h\|_{L^2[0,T;L^2(\Omega)]} \le C$,
2. $\|e_p\|_{L^2[0,T;L^2(\Omega)]} \le C\big(h\|y\|_{L^2[0,T;H^1(\Omega)]} + \tau^{1/2}\|y_t\|_{L^2[0,T;H^{-1}(\Omega)]}\big)$.
Therefore the above estimates and Theorem 3.2.39 imply $\|e\|_{L^2[0,T;L^2(\Omega)]} \approx O(h)$ for $\tau \le Ch^2$. Obviously, the estimate of Theorem 3.2.39 is applicable even in the case of more regular solutions. For example, for smooth solutions the Taylor-Hood element combined with the dG time-stepping scheme of order $k$ allows the following rates:
1. $\|e_p\|_{L^2[0,T;H^1(\Omega)]} \le C(h^2 + \tau^{k+1})$,
2. $\|e_p\|_{L^2[0,T;L^2(\Omega)]} \le C(h^3 + \tau^{k+1})$.
Thus Theorem 3.2.39 implies that for $\tau \le Ch^2$,
$$\|e\|_{L^2[0,T;H^1(\Omega)]} \approx O(h^2 + \tau^{k+1}), \qquad \|r\|_{L^2[0,T;H^1(\Omega)]} \approx O(h^2 + \tau^{k+1}),$$
$$\|e\|_{L^2[0,T;L^2(\Omega)]} \approx O(h^3 + \tau^{k+1}), \qquad \|r\|_{L^2[0,T;L^2(\Omega)]} \approx O(h^3 + \tau^{k+1}).$$
3.2.4.2 Symmetric estimates for the optimality system
It remains to compare the discrete optimality system (3.1.22)-(3.1.23)-(3.1.20) to the auxiliary system (3.2.66)-(3.2.67).
Lemma 3.2.41. Let $(y_h, p_h)$, $(\mu_h, \phi_h)$, $(w_h, p_{1h})$, $(z_h, \phi_{1h}) \in U_h \times Q_h$ be the solutions of the discrete optimality system (3.1.22)-(3.1.23)-(3.1.20) and of the auxiliary system (3.2.66)-(3.2.67), respectively. Denote by $e \equiv y - w_h$, $r \equiv \mu - z_h$, and let $e_{2h} \equiv w_h - y_h$, $r_{2h} \equiv z_h - \mu_h$. Then there exists an algebraic constant $C > 0$ such that
$$\|e_{2h}\|_{L^2[0,T;L^2(\Omega)]} + (1/\alpha^{1/2})\|r_{2h}\|_{L^2[0,T;L^2(\Omega)]} \le C(1/\alpha^{1/2})\|r\|_{L^2[0,T;L^2(\Omega)]}.$$
In addition, the following estimates hold:
$$\|e^N_{2h}\|^2_{L^2(\Omega)} + \sum_{i=0}^{N-1}\|[e^i_{2h}]\|^2_{L^2(\Omega)} + \nu\int_0^T\|e_{2h}\|^2_{H^1(\Omega)}\,dt \le (C/\alpha^{3/2})\int_0^T\|r\|^2_{L^2(\Omega)}\,dt,$$
$$\|r^0_{2h+}\|^2_{L^2(\Omega)} + \sum_{i=1}^{N}\|[r^i_{2h}]\|^2_{L^2(\Omega)} + \nu\int_0^T\|r_{2h}\|^2_{H^1(\Omega)}\,dt \le (C/\alpha^{1/2})\int_0^T\|r\|^2_{L^2(\Omega)}\,dt,$$
where $C$ is a constant depending only upon $\Omega$.
Proof. Subtracting (3.1.23) from (3.2.67), we obtain: for $n = 1,\dots,N$ and all $v_h \in \mathcal{P}_k[t^{n-1},t^n;Y_h]$, $q_h \in \mathcal{P}_k[t^{n-1},t^n;Q_h]$,
$$-(r^n_{2h+}, v^n) + (r^{n-1}_{2h+}, v^{n-1}_+) + \int_{t^{n-1}}^{t^n}\big(\langle r_{2h}, v_{h,t}\rangle + a(r_{2h}, v_h) + b(v_h, \phi_{1h} - \phi_h)\big)\,dt = \int_{t^{n-1}}^{t^n}(e_{2h}, v_h)\,dt,$$
$$\int_{t^{n-1}}^{t^n} b(r_{2h}, q_h)\,dt = 0. \quad (3.2.79)$$
Subtracting (3.1.22) from (3.2.66) and using (2.3.18)-(3.1.20), we obtain: for $n = 1,\dots,N$ and all $v_h \in \mathcal{P}_k[t^{n-1},t^n;Y_h]$, $q_h \in \mathcal{P}_k[t^{n-1},t^n;Q_h]$,
$$(e^n_{2h}, v^n) + \int_{t^{n-1}}^{t^n}\big(-\langle e_{2h}, v_{h,t}\rangle + a(e_{2h}, v_h) + b(v_h, p_{1h} - p_h)\big)\,dt = (e^{n-1}_{2h}, v^{n-1}_+) - \int_{t^{n-1}}^{t^n}(1/\alpha)(\mu - \mu_h, v_h)\,dt,$$
$$\int_{t^{n-1}}^{t^n} b(e_{2h}, q_h)\,dt = 0. \quad (3.2.80)$$
We set $v_h = e_{2h}$ into (3.2.79) and note that $\int_{t^{n-1}}^{t^n} b(e_{2h}, \phi_{1h} - \phi_h)\,dt = 0$ to obtain
$$-(r^n_{2h+}, e^n_{2h}) + \int_{t^{n-1}}^{t^n}\big(\langle r_{2h}, e_{2h,t}\rangle + a(r_{2h}, e_{2h})\big)\,dt + (r^{n-1}_{2h+}, e^{n-1}_{2h+}) = \int_{t^{n-1}}^{t^n}\|e_{2h}\|^2_{L^2(\Omega)}\,dt. \quad (3.2.81)$$
Setting $v_h = r_{2h}$ into (3.2.80) and noting $\int_{t^{n-1}}^{t^n} b(r_{2h}, p_{1h} - p_h)\,dt = 0$, we deduce
$$(e^n_{2h}, r^n_{2h}) + \int_{t^{n-1}}^{t^n}\big(-\langle e_{2h}, r_{2h,t}\rangle + a(e_{2h}, r_{2h})\big)\,dt - (e^{n-1}_{2h}, r^{n-1}_{2h+}) = \int_{t^{n-1}}^{t^n}\big(-(1/\alpha)\langle r, r_{2h}\rangle - (1/\alpha)\|r_{2h}\|^2_{L^2(\Omega)}\big)\,dt. \quad (3.2.82)$$
Integrating by parts with respect to time in (3.2.82) and subtracting the resulting equation from (3.2.81), we arrive at
$$(r^n_{2h+}, e^n_{2h}) - (e^{n-1}_{2h}, r^{n-1}_{2h+}) + \int_{t^{n-1}}^{t^n}\big(\|e_{2h}\|^2_{L^2(\Omega)} + (1/\alpha)\|r_{2h}\|^2_{L^2(\Omega)}\big)\,dt = -(1/\alpha)\int_{t^{n-1}}^{t^n}\langle r, r_{2h}\rangle\,dt. \quad (3.2.83)$$
Using Young's inequality to bound the right-hand side, adding the resulting inequalities from $1$ to $N$, and noting that $\sum_{n=1}^N\big((r^n_{2h+}, e^n_{2h}) - (e^{n-1}_{2h}, r^{n-1}_{2h+})\big) = 0$ (since $e^0_{2h} \equiv 0$, $r^N_{2h+} = 0$), we obtain the first estimate. For the second estimate we simply set $v_h = e_{2h}$ into (3.2.80) and use the previous estimate on $r_{2h}$. Finally, the third estimate follows easily by setting $v_h = r_{2h}$ into (3.2.79), using the estimate on $\|e_{2h}\|_{L^2[0,T;L^2(\Omega)]}$ and standard algebra. $\square$
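For completeness, the Young's inequality step applied to the right-hand side of (3.2.83) reads:

```latex
\frac{1}{\alpha}\,\big|\langle r, r_{2h}\rangle\big|
  \le \frac{1}{2\alpha}\|r\|^2_{L^2(\Omega)} + \frac{1}{2\alpha}\|r_{2h}\|^2_{L^2(\Omega)},
```

so the $(1/2\alpha)\|r_{2h}\|^2_{L^2(\Omega)}$ term is absorbed by the left-hand side, and summation over $n$ leaves $\|e_{2h}\|^2_{L^2[0,T;L^2(\Omega)]} + (1/2\alpha)\|r_{2h}\|^2_{L^2[0,T;L^2(\Omega)]} \le (1/2\alpha)\|r\|^2_{L^2[0,T;L^2(\Omega)]}$, which gives the first estimate.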
Various estimates can be derived using the results of Theorem 3.2.39 and Lemma 3.2.41 and standard
approximation theory results. We begin by stating an almost symmetric error estimate, which can be viewed as the analogue of the classical Céa's Lemma.
Theorem 3.2.42. Let $(y_h, p_h)$, $(\mu_h, \phi_h) \in U_h \times Q_h$ and $(y, p)$, $(\mu, \phi) \in W_S(0,T)\times L^2[0,T;L^2_0(\Omega)]$ denote the approximate solutions of the discrete and continuous optimality systems (3.1.22)-(3.1.23)-(3.1.20) and (2.3.16)-(2.3.17)-(2.3.18), respectively. Let $e_p = y - Q_h^{loc}y$, $r_p = \mu - P_h^{loc}\mu$ denote the projection errors, where $P_h^{loc}$, $Q_h^{loc}$ are defined in Definitions 3.2.35 and 3.2.36, respectively. Then the following estimate holds for the errors $e = y - y_h$ and $r = \mu - \mu_h$:
$$|(e,r)|_{W_S(0,T)} \le C(1/\alpha^{3/2})\big(|(e_p,r_p)|_{W_S(0,T)} + \|p - q_h\|_{L^2[0,T;L^2(\Omega)]} + \|\phi - q_h\|_{L^2[0,T;L^2(\Omega)]}\big),$$
where $C$ depends upon the constants of Theorem 3.2.39 and Lemma 3.2.41 and on $1/\nu^2$, and is independent of $\tau$, $h$, $\alpha$; here $q_h \in Q_h$ is arbitrary.
Proof. First we observe that estimates for $\|e_{2h}\|_{L^\infty[0,T;L^2(\Omega)]}$ and $\|r_{2h}\|_{L^\infty[0,T;L^2(\Omega)]}$ can be derived identically to [32, Theorem 4.6], since (3.2.79)-(3.2.80) are uncoupled due to the estimate of Lemma 3.2.41. Therefore the estimate follows by using the triangle inequality and the previous estimates of Theorem 3.2.39 and Lemma 3.2.41. $\square$
An improved estimate in the $L^2[0,T;L^2(\Omega)]$ norm for the state and adjoint follows by combining the estimates of Theorem 3.2.39 and the first estimate of Lemma 3.2.41.
Theorem 3.2.43. Suppose that $y_0 \in W(\Omega)$, $f \in L^2[0,T;H^{-1}(\Omega)]$, and that the assumptions of Theorem 3.2.39 and Lemma 3.2.41 hold. Let $e_p = y - Q_h^{loc}y$, $r_p = \mu - P_h^{loc}\mu$ denote the projection errors, where $P_h^{loc}$, $Q_h^{loc}$ are defined in Definitions 3.2.35 and 3.2.36, respectively. Then there exists a constant $C$ depending upon $\Omega$, $1/\nu$, such that
$$\|e\|_{L^2[0,T;L^2(\Omega)]} \le C(1/\alpha^{1/2})\big(\|e_p\|_{L^2[0,T;L^2(\Omega)]} + \|r_p\|_{L^2[0,T;L^2(\Omega)]} + \tau^{1/2}(\|e_p\|_{L^2[0,T;H^1(\Omega)]} + \|p - q_h\|_{L^2[0,T;L^2(\Omega)]}) + \tau^{1/2}(\|r_p\|_{L^2[0,T;H^1(\Omega)]} + \|\phi - q_h\|_{L^2[0,T;L^2(\Omega)]})\big),$$
$$\|r\|_{L^2[0,T;L^2(\Omega)]} \le C\big(\|e_p\|_{L^2[0,T;L^2(\Omega)]} + \|r_p\|_{L^2[0,T;L^2(\Omega)]} + \tau^{1/2}(\|e_p\|_{L^2[0,T;H^1(\Omega)]} + \|p - q_h\|_{L^2[0,T;L^2(\Omega)]}) + \tau^{1/2}(\|r_p\|_{L^2[0,T;H^1(\Omega)]} + \|\phi - q_h\|_{L^2[0,T;L^2(\Omega)]})\big).$$
Proof. The first estimate follows by using the triangle inequality and the previous estimates of Theorem 3.2.39 and Lemma 3.2.41; the second follows similarly. $\square$
We close this subsection by stating convergence rates in two cases for the Taylor-Hood element, depending on the available regularity. Obviously, a variety of other estimates can be derived depending on the chosen elements.
Proposition 3.2.44. Suppose that the assumptions of Theorem 3.2.39 and Lemma 3.2.41 hold.
1) Let $y_0 \in W(\Omega)$, $f \in L^2[0,T;H^{-1}(\Omega)]$, and suppose there exists $p \in L^2[0,T;L^2_0(\Omega)]$ such that the weak formulation (2.3.20) is valid. Assume that the Taylor-Hood element is used to construct the subspaces, with piecewise constant polynomials ($k = 0$) for the temporal discretization. Then for $\tau \le Ch^2$ we obtain
$$\|e\|_{L^2[0,T;L^2(\Omega)]} \le Ch \quad \text{and} \quad \|r\|_{L^2[0,T;L^2(\Omega)]} \le Ch.$$
2) Let $y, \mu \in L^2[0,T;H^3(\Omega)\cap V(\Omega)] \cap H^{k+1}[0,T;H^1(\Omega)]$, $p, \phi \in L^2[0,T;H^2(\Omega)\cap L^2_0(\Omega)]$. Suppose that the Taylor-Hood element combined with piecewise polynomials of degree $k$ for the temporal
discretization is used; then the following rates hold:
$$\|(e,r)\|_{W(0,T)} \le C(1/\alpha^{3/2})(h^2 + \tau^{k+1}),$$
$$\|e\|_{L^2[0,T;L^2(\Omega)]} \le C(1/\alpha^{1/2})\big(h^3 + \tau^{k+1} + \tau^{1/2}(h^2 + \tau^{k+1})\big),$$
$$\|r\|_{L^2[0,T;L^2(\Omega)]} \le C\big(h^3 + \tau^{k+1} + \tau^{1/2}(h^2 + \tau^{k+1})\big).$$
Proof. The rates directly follow from Theorem 3.2.39, Theorem 3.2.43, Lemma 3.2.37 and Remark 3.2.40. $\square$
3.2.4.3 Control constraints: the variational discretization approach
We demonstrate that the variational discretization approach of Hinze ([65]) can be used within our framework. In the variational discretization approach the control is not discretized explicitly; in particular, we define $A^d_{ad} \equiv A_{ad}$. Thus our discrete optimal control problem now becomes: minimize the functional
$$J_h(y_h(g), g) = \int_0^T\|y_h(g) - y_d\|^2_{L^2(\Omega)}\,dt + \alpha\int_0^T\|g\|^2_{L^2(\Omega)}\,dt$$
subject to (3.1.2), where $y_h(g) \in U_h$ denotes the solution of (3.1.2) with right-hand side given by the control $g \in L^2[0,T;L^2(\Omega)]$. The optimal control (abusing notation, denoted again by $g_h$) satisfies the following first order optimality condition:
$$J'_h(g_h)(u - g_h) \ge 0 \quad \text{for all } u \in L^2[0,T;L^2(\Omega)],$$
where $g_h$ takes the form $g_h = \mathrm{Proj}_{[g_a,g_b]}\big(-\tfrac{1}{\alpha}\mu_h(g_h)\big)$, similarly to the continuous case. We note that $g_h$ is not in general a finite element function corresponding to our finite element mesh; thus its algorithmic construction requires extra care (see e.g. [65]). However, in most cases the quantity of interest is the state variable and not the control. For the second derivative we easily obtain an estimate independent of $g$, $g_h$; in particular,
$$J''_h(u)(u,u) \ge \alpha\|u\|^2_{L^2[0,T;L^2(\Omega)]} \quad \text{for all } u \in L^2[0,T;L^2(\Omega)].$$
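At the implementation level, the variational discretization recovers the control pointwise from the discrete adjoint via the projection formula above. A minimal sketch (the nodal adjoint values, bounds and $\alpha$ below are hypothetical illustration data, not taken from the experiments):

```python
import numpy as np

def project_control(mu_h, alpha, g_a, g_b):
    """Pointwise projection Proj_[g_a,g_b](-mu_h/alpha) onto the admissible set.

    mu_h holds hypothetical nodal values of the discrete adjoint; the control
    is recovered from the adjoint without introducing a separate control mesh."""
    return np.clip(-mu_h / alpha, g_a, g_b)

# Hypothetical adjoint values with bounds g_a = -1, g_b = 1 and alpha = 0.1:
mu = np.array([-0.05, 0.2, -0.3])
g_h = project_control(mu, alpha=0.1, g_a=-1.0, g_b=1.0)
# -mu/alpha = [0.5, -2.0, 3.0], clipped to [0.5, -1.0, 1.0]
```

Because the clipping acts on the adjoint directly, the resulting control has kinks at the active-set boundaries and need not be a finite element function on the given mesh, as noted above.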
Theorem 3.2.45. Let $y_0 \in W(\Omega)$, $f \in L^2[0,T;H^{-1}(\Omega)]$, $y_d \in L^2[0,T;L^2(\Omega)]$, and suppose there exists an associated pressure $p \in L^2[0,T;L^2_0(\Omega)]$. Suppose that $A^d_{ad} \equiv A_{ad}$, and let $g$, $g_h$ denote the solutions of the corresponding continuous and discrete optimal control problems. Then the following estimate holds:
$$\|g - g_h\|_{L^2[0,T;L^2(\Omega)]} \le C(1/\alpha)\|\mu(g) - \mu_h(g)\|_{L^2[0,T;L^2(\Omega)]}$$
$$\le C\big(\|e_p\|_{L^2[0,T;L^2(\Omega)]} + \|r_p\|_{L^2[0,T;L^2(\Omega)]} + \tau^{1/2}(\|e_p\|_{L^2[0,T;H^1(\Omega)]} + \|p - q_h\|_{L^2[0,T;L^2(\Omega)]}) + \tau^{1/2}(\|r_p\|_{L^2[0,T;H^1(\Omega)]} + \|\phi - q_h\|_{L^2[0,T;L^2(\Omega)]})\big),$$
where $(\mu_h(g), \phi_h(g))$ and $(\mu(g), \phi)$ denote the solutions of (3.1.19) and (2.3.15) respectively, and $e_p \equiv y(g) - Q_h^{loc}y(g)$, $r_p = \mu(g) - P_h^{loc}\mu(g)$ are the corresponding projection errors. Furthermore, if $\tau \le Ch^2$,
$$\|g - g_h\|_{L^2[0,T;L^2(\Omega)]} \le Ch.$$
Proof. We note that $A^d_{ad} \equiv A_{ad}$, and hence the first order necessary conditions imply that
$$J'_h(g_h)(g - g_h) \ge 0 \quad \text{and} \quad J'(g)(g - g_h) \le 0. \quad (3.2.84)$$
Therefore, using the second order condition and the mean value theorem, we obtain for any $u \in L^2[0,T;L^2(\Omega)]$ (and hence for the one resulting from the mean value theorem), together with the inequalities (3.2.84),
$$\alpha\|g - g_h\|^2_{L^2[0,T;L^2(\Omega)]} \le J''_h(u)(g - g_h, g - g_h) = J'_h(g)(g - g_h) - J'_h(g_h)(g - g_h)$$
$$\le J'_h(g)(g - g_h) - J'(g)(g - g_h) = \int_0^T\int_\Omega(\mu(g) - \mu_h(g))(g - g_h)\,dx\,dt$$
$$\le C\|\mu(g) - \mu_h(g)\|_{L^2[0,T;L^2(\Omega)]}\|g - g_h\|_{L^2[0,T;L^2(\Omega)]},$$
which clearly implies the first estimate. A rate of convergence can now be obtained using arguments similar to Theorem 3.2.39: subtracting (3.1.19) from (2.3.15), setting $r = \mu_h(g) - \mu(g)$ and $e = y_h(g) - y(g)$, and using the estimates of Theorem 3.2.39 and the rates of Proposition 3.2.44, we obtain the desired estimate after noting the reduced regularity of $e$. $\square$
Having studied the convergence rates in the relevant norms for each of the studied problems, in the following chapters we describe the corresponding experimental results and verify the corresponding theoretical predictions.
4 Robin Boundary Control Experiment in Linear Parabolic PDEs
This chapter presents the theoretical principles and the experimental results for a boundary control problem for linear parabolic partial differential equations with Robin boundary conditions.
Contents
4.1 Robin boundary conditions: setting the model . . . . 214
4.1.1 Smooth initial data . . . . 214
4.1.2 Nonsmooth initial data . . . . 219
4.1.3 Experiment using linear polynomials in space and time . . . . 219
214 4 Robin Boundary Control Experiment in Linear Parabolic Pdes
4.1 Robin boundary conditions: setting the model
According to the theory of the previous chapters related to the Robin boundary control problem, we want to minimize the functional
$$J(y,g) = \frac12\int_0^T\|y - y_d\|^2_{L^2(\Omega)}\,dt + \frac{\alpha}{2}\int_0^T\|g\|^2_{L^2(\Gamma)}\,dt$$
subject to the constraints
$$y_t - \Delta y = f \quad \text{in } (0,T)\times\Omega,$$
$$y + \lambda^{-1}\frac{\partial y}{\partial n} = g \quad \text{on } (0,T)\times\Gamma, \quad (4.1.1)$$
$$y(0,x) = y_0 \quad \text{in } \Omega.$$
We consider numerical examples for the model problem on $\Omega\times I = \Omega\times[0,T] = [0,1]^2\times[0,0.1]$ in the cases of:
a) smooth initial data for the state variable (with known analytical solution), using constant polynomials in time and linear polynomials in space;
b) discontinuous initial data $y_0 \in L^2(\Omega)$; in this case no analytical solution is known, and we take as "exact" solution the numerical solution on the fine space-time mesh $dt = 2.71267\mathrm{e}{-5}$, $h = 5.20833\mathrm{e}{-3}$ (3687 and 37249 degrees of freedom, respectively);
c) smooth initial data for the state variable (with known analytical solution), using linear polynomials in time and space.
Note that the boundary control function does not have continuous first derivatives at certain points. We fix the regularization parameter of the functional at $\alpha = \pi^{-4}$. The boundary optimal control problem is solved with the software package FreeFem++ (see e.g. [64]), using a gradient algorithm, on a computer with four six-core AMD Opteron 8431 processors and 96 GB RAM.
4.1.1 Smooth initial data
Let $a = -\sqrt{5}$. We choose the force
$$f(t,x_1,x_2) = \pi^2 e^{a\pi^2 t}\Big(2(x_2^2 - x_2 + x_1^2)\cos(\pi x_1 x_2)\cos(\pi x_1(x_2 - 1)) - (2x_2^2 - 2x_2 + 2x_1^2 + a + 1)\sin(\pi x_1 x_2)\sin(\pi x_1(x_2 - 1))\Big)$$
and initial data $y_0(x_1,x_2) = \sin(\pi(1 + x_1x_2))\sin(\pi x_1(x_2 - 1))$, with optimal pair $(y,g)$ given by
$$y(t,x_1,x_2) = e^{a\pi^2 t}\sin(\pi(1 + x_1x_2))\sin(\pi x_1(x_2 - 1)),$$
while $g$ has been calculated using the Robin boundary condition on each side $\Gamma_i$, $i = 1,\dots,4$ (starting from the bottom), of the boundary of the square:
$$g(t,x_1,x_2) = e^{\pi^2 a t}\begin{cases}0 & \text{on } \Gamma_1,\\ \pi x_2\sin(\pi x_2 - \pi) + \pi(1 - x_2)\sin(\pi x_2)\cos(\pi(x_2 - 1)) & \text{on } \Gamma_2,\\ 0 & \text{on } \Gamma_3,\\ 0 & \text{on } \Gamma_4.\end{cases}$$
For these data and the target function $y_d(t,x_1,x_2) = 0.5$, the corresponding errors for the state variable and the control function on different meshes are shown in Table 4.1.

Table 4.1: Rates of convergence for the two-dimensional solution with $k = 0$, $\tau = h^2/2$, smooth initial data and $y_d = 0.5$.

τ = h²/2          ‖e‖_{L²[0,T;L²(Ω)]}   ‖e‖_{L²[0,T;H¹(Ω)]}   J(y,g)
h = 0.2357022     0.018310605           0.070340370           0.002395820
h = 0.1178511     0.004085497           0.031958661           0.001857961
h = 0.0589255     0.001335615           0.016375314           0.001738954
h = 0.0294627     0.000766443           0.008819160           0.001711876
h = 0.0147313     0.000676697           0.005626214           0.001705198
Rates             1.526118558           0.998546583           -
The convergence rates seen above agree with the theory: 1.5 for the $L^2[0,T;L^2(\Omega)]$ norm and 1 for the $L^2[0,T;H^1(\Omega)]$ norm ($O(\tau + h^{3/2})$ and $O(\tau + h)$, respectively, in accordance with the theoretical results of Proposition 3.2.13). In particular, the convergence rate 1.5 in the $L^2[0,T;L^2(\Omega)]$ norm is the best we can get with these boundary data, since by the projection definition it is the $L^2[0,T;L^2(\Gamma)]$ norm that limits the convergence rate on the boundary. So, instead of the rate 2 that we have in distributed control with zero Dirichlet boundary conditions, the convergence rate decreases to the value 1.5.
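The tabulated rates can be recomputed from successive rows: since $h$ halves between rows, the observed order between two consecutive rows is $\log_2$ of the error ratio. A minimal sketch using the $L^2[0,T;L^2(\Omega)]$ column of Table 4.1:

```python
import math

# L2[0,T;L2] errors from Table 4.1; h halves from row to row, so the
# observed order between consecutive rows is log2(e_coarse / e_fine).
errors = [0.018310605, 0.004085497, 0.001335615, 0.000766443, 0.000676697]
rates = [math.log2(errors[i - 1] / errors[i]) for i in range(1, len(errors))]
# Pre-asymptotic pairs exceed the theoretical order 1.5, while the last
# pair stagnates as integration and rounding errors take over.
```

The single number reported in the "Rates" row of the tables aggregates such pairwise observations over the whole mesh sequence.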
Similar results have been obtained for the target functions $y_d = 0$ and $y_d = 0.5\cos(\pi x_1)\cos(\pi x_2)$. More specifically, observing the results shown in Tables 4.1, 4.2, 4.3 for the different target functions, we can see almost the same convergence rates for the state variable errors in $L^2[0,T;L^2(\Omega)]$ and $L^2[0,T;H^1(\Omega)]$, and similar values for the functional.
Table 4.2: Convergence rates for the 2d solution with $k = 0$, $\tau = h^2/2$, smooth initial data and $y_d = 0$.

τ = h²/2          ‖e‖_{L²[0,T;L²(Ω)]}   ‖e‖_{L²[0,T;H¹(Ω)]}   J(y,g)
h = 0.2357022     0.018437187           0.070206813           0.003505277
h = 0.1178511     0.004163875           0.036356131           0.002718328
h = 0.0589255     0.001477032           0.017039099           0.002520912
h = 0.0294627     0.000961147           0.010077840           0.002473947
h = 0.0147313     0.000883837           0.007476681           0.002462163
Rates             1.420572191           0.875175799           -
The 3d Figure 4.1 shows, from a different view, how the errors vary in $L^2[0,T;H^1(\Omega)]$ and $L^2[0,T;L^2(\Omega)]$ as $\tau$, $h$ change. In particular, starting with $h = 0.2350722$ and $\tau = 0.05555449$, we have relatively large errors in the $L^2[0,T;H^1(\Omega)]$ norm and considerably smaller ones in $L^2[0,T;L^2(\Omega)]$: 0.070 and 0.018, respectively. As the experiment progresses the errors are reduced until they become 0.0056 and 0.00067, respectively, at which point they begin to stabilize because of the very dense spatial and temporal discretization,
Table 4.3: Convergence rates for the 2d solution with $k = 0$, $\tau = h^2/2$, smooth initial data and $y_d = 0.5\cos(\pi x_1)\cos(\pi x_2)$.

τ = h²/2          ‖e‖_{L²[0,T;L²(Ω)]}   ‖e‖_{L²[0,T;H¹(Ω)]}   J(y,g)
h = 0.2357022     0.018033381           0.070977749           0.004957926
h = 0.1178511     0.003666894           0.032317405           0.004953116
h = 0.0589255     0.001015930           0.016629768           0.004905743
h = 0.0294627     0.000821597           0.009086474           0.004909695
h = 0.0147313     0.000879346           0.005954120           0.004907448
Rates             1.485364815           0.988524738           -
Figure 4.1: Errors for the state and control variable for $\tau = h^2/2$.
and hence the integration and rounding errors. In the above graph it is also clear that the errors for the control function stabilize faster, since the gradient algorithm "works" more in the early steps to reach the desired control. The 2d Figure 4.2 shows how the norm $\|g(t)\|_{L^2(\Gamma)}$ of the control function varies with time on different meshes $\tau$, $h$. The left plot of Figure 4.3 shows how the distance from the target, $\|y(t) - y_d(t)\|_{L^2(\Omega)}$, varies with time on different meshes; in particular, the denser the mesh we use, the smaller the distance from the target.
Figure 4.2: Norm of the control function $\|g(t)\|_{L^2(\Gamma)}$.
Figure 4.3: Distance from the target $\|y(t) - y_d(t)\|_{L^2(\Omega)}$: a) smooth data; b) nonsmooth data (discontinuity).
Figure 4.4: Effect on the control $\|g(t)\|_{L^2(\Gamma)}$ as the regularization parameter $\alpha$ varies, with fixed $48\times48$ mesh.
Effect on the functional as the regularization parameter $\alpha$ changes. Figure 4.4 shows that for small values of $\alpha$ the gradient method uses large control values, and vice versa: small controls for large values of $\alpha$. We also noted that it is better to take $10^{-5} < \alpha < 10^{-1}$.
Distance between numerical solution and target function. An important observation is that we did not notice any change in the evolution of the distance of the numerical solution from the target for the different values of $\alpha$, as shown in Figure 4.5.
The algorithm for piecewise constant polynomials in time. For the above results we used the following algorithm, after initializing $n = 0$, $\varepsilon = 1$, the tolerance tol, and the initial control $g^0|_\Gamma$. We note that e.g. $y^n$ is a sequence of piecewise polynomials in time (and every term of this sequence is another sequence, piecewise in space) at the $n$th iteration of the gradient method.
• Step 0 (Initial state). For $g|_\Gamma = g^0|_\Gamma$, $y = y^0$, solve the system
$$y_t - \Delta y = f, \qquad y|_\Gamma + \lambda^{-1}\frac{\partial y}{\partial n} = g|_\Gamma, \qquad y(0,x) = y_0.$$
Figure 4.5: Effect on the distance $\|y(t) - y_d(t)\|_{L^2(\Omega)}$ between numerical solution and target function as $\alpha$ varies.
• Step 1 (Conjugate equation). Solve for $\mu = \mu^n$:
$$\mu_t + \Delta\mu = y - y_d, \qquad \mu|_\Gamma + \lambda^{-1}\frac{\partial\mu}{\partial n} = 0, \qquad \mu(T,x) = 0.$$
• Step 2 (New descent direction). Choose as descent direction the negative gradient of the cost functional:
$$-J'(g|_\Gamma) = -(\alpha g|_\Gamma + \mu|_\Gamma).$$
• Step 3 (Choosing the step $\varepsilon_n$). Find the optimal step size $\varepsilon_n$:
$$J\big(g^n|_\Gamma - \varepsilon_n(\alpha g^n|_\Gamma + \mu^n|_\Gamma)\big) = \min_{\varepsilon > 0} J\big(g^n|_\Gamma - \varepsilon(\alpha g^n|_\Gamma + \mu^n|_\Gamma)\big).$$
• Step 4 (New control function). Set
$$g^{n+1}|_\Gamma = g^n|_\Gamma - \varepsilon_n(\alpha g^n|_\Gamma + \mu^n|_\Gamma).$$
• Step 5 (New state). If $J_n \le J_{n-1}$, set $\varepsilon = 1.5\varepsilon$; if $J_n > J_{n-1}$, set $\varepsilon = 0.5\varepsilon$. Go to Step 0 with $g|_\Gamma = g^{n+1}|_\Gamma$ for $y = y^n$ and $n = n + 1$. Stop if $|J_n - J_{n-1}|/J_n \le \mathrm{tol}$.
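The loop of Steps 0-5 can be sketched in a finite-dimensional analogue, where the two PDE solves are replaced by a hypothetical linear solution operator $S$ (so the state is $y = Sg$, the adjoint yields the gradient $J'(g) = \alpha g + \mu$ with $\mu = S^{\top}(Sg - y_d)$, and the step doubling/halving of Step 5 is kept verbatim). The data $S$, $y_d$, $\alpha$ below are illustration values, not taken from the experiments:

```python
import numpy as np

rng = np.random.default_rng(0)
S = 0.3 * rng.standard_normal((8, 5))   # hypothetical control-to-state map
y_d = rng.standard_normal(8)            # hypothetical target
alpha, tol = 0.1, 1e-12

def J(g):
    r = S @ g - y_d
    return 0.5 * (r @ r) + 0.5 * alpha * (g @ g)

g, eps, J_prev = np.zeros(5), 1.0, J(np.zeros(5))   # Step 0: initial control
for n in range(5000):
    mu = S.T @ (S @ g - y_d)                 # Step 1: adjoint solve
    g_trial = g - eps * (alpha * g + mu)     # Steps 2-4: descend along -J'(g)
    J_trial = J(g_trial)
    if J_trial <= J_prev:                    # Step 5: accept, enlarge the step
        if abs(J_trial - J_prev) / J_trial <= tol:
            g, J_prev = g_trial, J_trial
            break
        g, J_prev, eps = g_trial, J_trial, 1.5 * eps
    else:                                    # Step 5: reject, shrink the step
        eps *= 0.5

grad_norm = np.linalg.norm(alpha * g + S.T @ (S @ g - y_d))
```

On the true problem, each evaluation of $\mu$ costs one state and one adjoint solve, which is why this cheap accept/reject step control is attractive here.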
Please note that for the solution of the state equation one needs to write the basic equation in a suitable discontinuous-in-time Galerkin form. Specifically, the approximation functions are piecewise constant polynomials in time, so the method turns into the modified backward Euler method (dG(0)):
$$(I + dt\,A)y^{i+1} + y^{i+1}|_\Gamma = y^i + g^{i+1}|_\Gamma + \int_{t_i}^{t_{i+1}} f\,ds.$$
Similarly, for the solution of the conjugate equation we need to write the backward-in-time equation in the form
$$(I + dt\,A)\mu^i + \mu^i|_\Gamma = \mu^{i+1} + \int_{t_i}^{t_{i+1}}(y^i - y_d)\,ds,$$
where the operator $A$ corresponds to the Laplace operator.
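To illustrate the structure of one such implicit step, here is a 1D finite-difference sketch of $(I + dt\,A)y^{i+1} = \dots$ with the Robin condition folded into the boundary rows. The grid, $\lambda$ and initial data below are hypothetical; the actual experiments assemble the analogous finite element matrices in FreeFem++:

```python
import numpy as np

m, lam, dt, steps = 41, 1.0, 1e-3, 50
h = 1.0 / (m - 1)
x = np.linspace(0.0, 1.0, m)

# System matrix for one backward Euler / dG(0) step: identity plus
# dt times the discrete (negative) Laplacian on interior rows.
M = np.zeros((m, m))
for j in range(1, m - 1):
    M[j, j - 1] = M[j, j + 1] = -dt / h**2
    M[j, j] = 1.0 + 2.0 * dt / h**2
# Robin rows y + (1/lam) dy/dn = g, with one-sided normal differences:
M[0, 0] = M[-1, -1] = 1.0 + 1.0 / (lam * h)
M[0, 1] = M[-1, -2] = -1.0 / (lam * h)

y = np.sin(np.pi * x) ** 2        # hypothetical initial state, f = 0
g0 = gN = 0.0                     # homogeneous Robin data
norms = [np.abs(y).max()]
for _ in range(steps):
    rhs = y.copy()
    rhs[0], rhs[-1] = g0, gN      # boundary rows carry the Robin data g
    y = np.linalg.solve(M, rhs)
    norms.append(np.abs(y).max())
```

With $g = 0$ the Robin condition lets heat leak out through the boundary, so the maximum norm decays monotonically; this is a cheap sanity check on the assembled matrix.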
4.1.2 Nonsmooth initial data
This experiment has the same $\Omega$, $T$ as the first example, i.e. $\Omega = [0,1]\times[0,1]$, $T = 0.1$. The difference is that the initial data $y_0$ is a discontinuous function defined by
$$y_0 = \begin{cases}\sin(\pi(1 + x_1x_2))\sin(\pi x_1(x_2 - 1)) & \text{if } x_1, x_2 \ge 0.5,\\ 10 + \sin(\pi(1 + x_1x_2))\sin(\pi x_1(x_2 - 1)) & \text{otherwise.}\end{cases}$$
For this experiment the error results are shown in Table 4.4, where the rate of $O(h)$ when $\tau \le Ch^2$ in the $L^2[0,T;L^2(\Omega)]$ norm is verified for the state and adjoint variables. Comparing the observed convergence rates with the expected ones, we see better rates because of the way the "exact solution" was constructed. Comparing this example with the smooth data example, we also observe that the functional takes larger values and the error, e.g. at $h = 0.014$, is also larger. The results give a little
Table 4.4: Convergence rates for the 2d solution with $k = 0$, $\tau = h^2/2$ and nonsmooth initial data.

τ = h²/2           ‖e‖_{L²[0,T;L²(Ω)]}   ‖r‖_{L²[0,T;L²(Ω)]}   J(y,g)
h = 0.2357022      0.4093275092          0.008552165422        0.9411555956
h = 0.1178511      0.1555909764          0.005056762072        0.8225865966
h = 0.0589255      0.0714820269          0.002440981965        0.7424795375
h = 0.0294627      0.0302970740          0.001179518135        0.7066657202
h = 0.01473139     0.0100448501          0.001097951813        0.6883517113
Rate               1.2520017243          0.952697386266        -
bit better rate of convergence due to the constructive way the state variable was obtained. Obviously the error norm $L^2[0,T;H^1(\Omega)]$ does not give a rate, since the data $y_0 \in L^2(\Omega)$ only, and the initial discontinuity is disseminated through characteristics into the whole exact solution. Finally, the right graph in Figure 4.3 shows how the distance from the target function decreases as time evolves; as expected, it is more difficult for the state variable to reach the target (under the effect of the control function).
4.1.3 Experiment using linear polynomials in space and time
To illustrate the potential applicability of higher order time stepping schemes, we consider a coarse time-stepping approach based on the $k = 1$ scheme. Here we return to the example of Section 4.1.1 with the known smooth solution $y$ given by $y(t,x_1,x_2) = e^{a\pi^2 t}\sin(\pi(1 + x_1x_2))\sin(\pi x_1(x_2 - 1))$, for $k = 1$, $l = 1$. Note that despite the fact that we have chosen a smooth state variable, the presence of a Robin boundary control limits the regularity, at least near the boundary, for the time derivative of the adjoint and control variables. However, overall we expect that parabolic regularity will appear as time progresses. Our best approximation type estimates for "smooth" state, adjoint and control variables yield a convergence rate with respect to the $L^2[0,T;H^1(\Omega)]$ norm of order $O(\tau^2 + h)$ when piecewise linears are considered for both time and space, i.e. $k = 1$, $l = 1$.
In the following experiments we present the rates based on a coarse time stepping approach. In particular, for $\tau = h^{1/2}$ and $\tau = h^{3/4}$, which correspond to very few time steps compared to the standard approaches, Tables 4.5, 4.6 clearly indicate that we still obtain a rate of almost $O(h)$. Of course, it is expected that the rate is suboptimal due to the lack of smoothness near the boundary. Please note that for the solution of the state equation one needs to write the state equation in a suitable
Table 4.5: Convergence rates for the 2d solution with $k = 1$, $l = 1$, $\tau = O(h^{3/4})$, smooth initial data and $y_d = 0$.

τ = h^{3/4}/16     ‖e‖_{L²[0,T;L²(Ω)]}   ‖e‖_{L²[0,T;H¹(Ω)]}   J(y,g)
h = 0.2357022      0.007064919           0.071348872           0.002392313
h = 0.1178511      0.002639725           0.031653985           0.002355530
h = 0.0589255      0.001462584           0.017397858           0.002305098
h = 0.0294627      0.000873854           0.009497292           0.002258746
h = 0.0147313      0.000566631           0.005500319           0.002230101
h = 0.0073656      0.000410072           0.003614028           0.002214837
Rate               0.910047586           0.924325857           -
Table 4.6: Convergence rates for the 2d solution with $k = 1$, $l = 1$, $\tau = O(h^{1/2})$, smooth initial data and $y_d = 0$.

τ = h^{1/2}/16     ‖e‖_{L²[0,T;L²(Ω)]}   ‖e‖_{L²[0,T;H¹(Ω)]}   J(y,g)
h = 0.2357022      0.008385394           0.068070558           0.002676642
h = 0.1178511      0.004769310           0.040332082           0.002579619
h = 0.0589255      0.002736129           0.019010050           0.002468955
h = 0.0294627      0.001954915           0.012117836           0.002384007
h = 0.0147313      0.001398719           0.008222888           0.002322462
h = 0.0073656      0.001003904           0.005980212           0.002276926
Rate               0.645943041           0.762328463           -
discontinuous-in-time Galerkin form, dG(1). Specifically, the approximation functions are piecewise linear polynomials in time and space, so the method turns into:
$$(y^{n-1}_{h+}, v^{n-1}_{h+}) + \int_{t^{n-1}}^{t^n}\big(\langle y_{h,t}, v_h\rangle + a(y_h, v_h) + \lambda\langle y_h, v_h\rangle_\Gamma\big)\,dt = (y^{n-1}_h, v^{n-1}_{h+}) + \int_{t^{n-1}}^{t^n}\big(\langle f, v_h\rangle + \lambda\langle g, v_h\rangle_\Gamma\big)\,dt \quad \forall v_h \in \mathcal{P}_k[t^{n-1},t^n;U_h],\ 1 \le n \le N,$$
$$y^0 = y_0.$$
So if $k = 1$, i.e. linear polynomials in time $t$, we can write $y_h(t) = Y^n_0 + Y^n_1(t - t^{n-1})/\tau$ on $(t^{n-1}, t^n]$, with $\tau = t^n - t^{n-1}$, and taking test functions $v_h = \tau^{-l}(s - t^{n-1})^l$ for $l = 0, 1$, after integration and denoting $Y^n_0 = Y_0$, $Y^n_1 = Y_1$, we obtain the appropriate system (see the following algorithm) with $y^i = Y_0 + Y_1$. Similarly, for the solution of the conjugate equation we need to write the backward-in-time equation with $\mu^i = M_0 + M_1$.
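The resulting $2\times 2$ block structure can be checked on a scalar model problem $y' + ay = 0$, with a hypothetical positive scalar $a$ standing in for the operator $A$ and the boundary and forcing terms dropped; one dG(1) step then solves the same two equations as Step 0 below, and the nodal value $Y_0 + Y_1$ is superconvergent:

```python
import numpy as np
from math import exp

a, dt, y_i = 1.0, 0.1, 1.0        # hypothetical model data
# One dG(1) step for y' + a*y = 0:
#   (1 + dt*a)  Y0 + (1 + dt*a/2)   Y1 = y_i
#   (dt*a/2)    Y0 + (1/2 + dt*a/3) Y1 = 0
M = np.array([[1.0 + dt * a,  1.0 + dt * a / 2.0],
              [dt * a / 2.0,  0.5 + dt * a / 3.0]])
Y0, Y1 = np.linalg.solve(M, np.array([y_i, 0.0]))
y_next = Y0 + Y1                  # nodal value at the end of the step
err = abs(y_next - exp(-a * dt))  # exact solution is e^{-a*dt}
```

For this step size the nodal error is several orders of magnitude below $dt^2$, consistent with the $O(\tau^{2k+1})$ nodal superconvergence of dG($k$).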
The algorithm for piecewise linear polynomials in time. Working similarly to the previous algorithm, for the above results we used the following code, after initializing $n = 0$, $\varepsilon = 1$, the tolerance tol, and the control function $g^0|_\Gamma$. We note that e.g. $y^n$ is a sequence of piecewise linear polynomials in time (and every term of this sequence is another sequence, piecewise in space) at the $n$th iteration of the gradient method.
• Step 0 (Initial state). For $g|_\Gamma = g^0|_\Gamma$, $y = y^0$, solve the system
$$(I + dt\,A)Y_0 + (I + \tfrac12 dt\,A)Y_1 + (Y_0 + \tfrac12 Y_1)|_\Gamma = g^{i+1}|_\Gamma + y^i + \int_{t_i}^{t_{i+1}} f\,ds,$$
$$\tfrac12 dt\,A\,Y_0 + (\tfrac12 I + \tfrac13 dt\,A)Y_1 + (\tfrac12 Y_0 + \tfrac13 Y_1)|_\Gamma = \tfrac12 g^{i+1}|_\Gamma + \tfrac{1}{dt}\int_{t_i}^{t_{i+1}}(s - t_i)f\,ds,$$
with $y = Y_0 + Y_1$.
• Step 1 (Conjugate equation). Solve for $\mu = \mu^n$:
$$(I + dt\,A)M_0 + (I + \tfrac12 dt\,A)M_1 + (M_0 + \tfrac12 M_1)|_\Gamma = \mu^i + \int_{t_i}^{t_{i+1}}(y^i - y_d)\,ds,$$
$$\tfrac12 dt\,A\,M_0 + (\tfrac12 I + \tfrac13 dt\,A)M_1 + (\tfrac12 M_0 + \tfrac13 M_1)|_\Gamma = \tfrac{1}{dt}\int_{t_i}^{t_{i+1}}(y^i - y_d)(s - t_i)\,ds,$$
with $\mu = M_0 + M_1$.
• Step 2 (New descent direction). Choose as descent direction the negative gradient of the cost functional:
$$-J'(g|_\Gamma) = -(\alpha g|_\Gamma + \mu|_\Gamma).$$
• Step 3 (Choosing the step $\varepsilon_n$). Find the optimal step size $\varepsilon_n$:
$$J\big(g^n|_\Gamma - \varepsilon_n(\alpha g^n|_\Gamma + \mu^n|_\Gamma)\big) = \min_{\varepsilon > 0} J\big(g^n|_\Gamma - \varepsilon(\alpha g^n|_\Gamma + \mu^n|_\Gamma)\big).$$
• Step 4 (New control). Set
$$g^{n+1}|_\Gamma = g^n|_\Gamma - \varepsilon_n(\alpha g^n|_\Gamma + \mu^n|_\Gamma).$$
• Step 5 (New state). If $J_n \le J_{n-1}$, set $\varepsilon = 1.5\varepsilon$; if $J_n > J_{n-1}$, set $\varepsilon = 0.5\varepsilon$. Go to Step 0 with $g|_\Gamma = g^{n+1}|_\Gamma$ for $y = y^n$ and $n = n + 1$. Stop if $|J_n - J_{n-1}|/J_n \le \mathrm{tol}$.
Remark 4.1.1. This gradient method is based on the steepest-descent/projected-gradient method. Its convergence is slow, but it is easy to implement and therefore suitable for numerical experiments. Moreover, since evolutionary problems require great computational effort because of the time dependence, gradient-type methods of higher convergence order, requiring fewer computational resources, are very useful alternatives. The projection step is necessary since the updated control need not be admissible. In particular, the projected negative gradient is used as search direction, and then we calculate the step in this direction. The step $\varepsilon_n$ is derived from a suitable line search strategy. A typical gradient method has good prospects of approaching the solution in the first iterations, while its effectiveness decreases in subsequent iterations. However, in the next section (distributed control in a semilinear parabolic problem) we will improve the code by using the strong Wolfe-Powell conditions and, instead of negative gradient directions, the Fletcher-Reeves direction.
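The Fletcher-Reeves direction mentioned above reuses the previous search direction, scaled by the ratio of squared gradient norms. A minimal sketch on a hypothetical quadratic test functional $J(g) = \tfrac12 g^{\top}Hg - b^{\top}g$ with exact line search, where the method coincides with linear conjugate gradients:

```python
import numpy as np

H = np.array([[3.0, 1.0], [1.0, 2.0]])   # hypothetical SPD Hessian
b = np.array([1.0, 1.0])                  # hypothetical linear term
grad = lambda g: H @ g - b

g = np.zeros(2)
r = grad(g)
d = -r                                    # first direction: steepest descent
for _ in range(20):
    step = (r @ r) / (d @ H @ d)          # exact line search on the quadratic
    g = g + step * d
    r_new = grad(g)
    beta = (r_new @ r_new) / (r @ r)      # Fletcher-Reeves coefficient
    d = -r_new + beta * d                 # new conjugate direction
    r = r_new
    if np.linalg.norm(r) < 1e-12:
        break
```

On an $n$-dimensional quadratic this terminates in at most $n$ iterations; on the nonquadratic control problem the same $\beta$ formula is used with an inexact (Wolfe-Powell) line search instead.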
Remark 4.1.2. In Figures 4.6 and 4.7 we present some instances of the state and conjugate variables.
Remark 4.1.3. It is reasonable to compare these results with those for smooth data in the cases k = 0 and k = 1. Specifically, let us compare the results of Tables 4.3 and 4.5. Although the convergence order is much smaller for the case k = 1, the errors, e.g. for h = 0.014, remain the same in the L2[0,T;H1(Ω)] norm (approximately equal to 0.005), while they are smaller in the L2[0,T;L2(Ω)] norm: 0.008 for k = 0 and 0.005 for k = 1, i.e. we obtain better results. It is also noteworthy that in the case k = 1, due to the coarse time stepping, although we use the same PC memory we can continue to denser partitions. This is possible since we have used less
222 4 Robin Boundary Control Experiment in Linear Parabolic Pdes
Figure 4.6: Instance of the state variable.
Figure 4.7: Instance of the conjugate variable.
data-storage memory, since the number of time points is much smaller, and this plays a crucial role in computer memory allocation. So we can continue to denser partitions, which allows us to obtain even better results for the L2[0,T;H1(Ω)] error norm, from 0.005 for k = 0 to 0.0036 for k = 1, and for the L2[0,T;L2(Ω)] norm, from 0.008 if k = 0 to 0.0004 if k = 1. Finally, note that for the minimized functional J, from 0.0049 (in the case of k = 0) we achieve a much lower value in the case k = 1, equal to 0.0022.
Next we report the degrees of freedom in the case of coarse time stepping, for example k = 1, τ = O(h^{3/4}) (see Table 4.5), for each of the 5 system variables (2 for the state, 2 for the conjugate problem and 1 for the control):

• for the spatial part, the degrees of freedom used successively on each mesh are 49, 169, 625, 2401, 9409, 37249 (148225);
• for the temporal part, the degrees of freedom used successively on each mesh are 5, 8, 14, 23, 38, 64 (108);

while in the case k = 0, τ = O(h²), for each of the 3 system variables:

• for the spatial part, the degrees of freedom used successively on each mesh are 49, 169, 625, 2401, 9409 (37249);
• for the temporal part, the degrees of freedom used successively on each mesh are 4, 15, 58, 231, 922 (3687).
5 Distributed Control Experiment In Semilinear Parabolic Pdes

This chapter presents the basic concepts and the numerical results for a semilinear parabolic equation with distributed control and zero Dirichlet boundary conditions.
Contents
5.1 Distributed control - Description of the model  226
5.1.1 Constant polynomials in time and linear in space  226
5.1.2 Strong Wolfe-Powell conditions  227
5.1 Distributed control - Description of the model
We described in previous chapters the theory for semilinear problems with distributed control. Now we will verify numerically the a priori error estimates for k = 0, l = 1, in the cases τ = h² and τ = h, for the errors of the control, state and conjugate variables, and we will introduce the strong Wolfe–Powell conditions.

We construct the following numerical example for the model problem, with known exact solution in Ω×(0,T) = (0,1)²×(0,0.1) and homogeneous Dirichlet boundary condition, similar to the one in the work [94]. Specifically, we minimize the functional
J(y,g) = (1/2) ∫_0^T ‖y − y_d‖²_{L2(Ω)} dt + (α/2) ∫_0^T ‖g‖²_{L2(Ω)} dt
subject to

y_t − Δy + (1/3)y³ = f + g in (0,T)×Ω,
y = 0 on (0,T)×Γ,
y(0,x) = y_0 in Ω.
We choose regularization parameter α = π⁻⁴ and force

f(t,x_1,x_2) = −π⁴ e^{−√5π²T} sin(πx_1) sin(πx_2) + (1/3)( (−1/(2−√5)) π² e^{−√5π²t} sin(πx_1) sin(πx_2) )³,
target function

y_d(t,x_1,x_2) = ( 2π² e^{−√5π²T} − (π⁴/(2−√5)²) (e^{−√5π²t} sin(πx_1) sin(πx_2))² (e^{−√5π²t} − e^{−√5π²T}) ) sin(πx_1) sin(πx_2),
and initial data y_0(x_1,x_2) = (−1/(2−√5)) π² sin(πx_1) sin(πx_2), in such a way that the optimal solution (y, μ, g) is

y(t,x_1,x_2) = (−1/(2−√5)) π² e^{−√5π²t} sin(πx_1) sin(πx_2)
μ(t,x_1,x_2) = (e^{−√5π²t} − e^{−√5π²T}) sin(πx_1) sin(πx_2)
g(t,x_1,x_2) = −π⁴ (e^{−√5π²t} − e^{−√5π²T}) sin(πx_1) sin(πx_2)
5.1.1 Constant polynomials in time and linear in space
We used the following code, after initializing n = 0, ε = 1, the tolerance tol, and the control g^0. We note that, e.g., y^n is a sequence of piecewise linear polynomials in time (and every term of this sequence is another sequence, piecewise in space) in the nth iteration of the gradient method.
• Step 0 (Initial state): For g = g^0, solve for y = y^0:

y_t − Δy + (1/3)y³ = g + f,  y|_Γ = 0,  y(0,x) = y_0
51 Distributed control - Description of the model 227
• Step 1 (Conjugate equation): Find μ = μ^n:

μ_t + Δμ + y²μ = y − y_d,  μ|_Γ = 0,  μ(T,x) = 0
• Step 2 (New descent direction): Choose as descent direction the negative gradient of the cost functional:

−J′(g) = −(αg + μ)
• Step 3 (Checking step ε_n): Find the optimal size of ε_n:

J(g^n + ε_n(αg^n + μ^n)) = min_{ε>0} J(g^n + ε(αg^n + μ^n))
• Step 4 (New control function): Set

g^{n+1} = g^n + ε_n(αg^n + μ^n)
• Step 5 (New state): Check whether J^n ≤ J^{n−1}; if so, set ε = 1.5ε. If J^n ≥ J^{n−1}, set ε = 0.5ε. Go to Step 0 with g = g^{n+1}, y = y^n and n = n + 1. Stop if |J^n − J^{n−1}|/J^n ≤ tol.
Note, similarly to the previous chapter, that for the solution of the state equation we need to write the basic equation in a suitable discontinuous-in-time Galerkin form:
(I + dtA)y_{i+1} + (1/3)y³_{i+1} = y_i + ∫_{t_i}^{t_{i+1}} (f + g) ds
and for the solution of the conjugate equation we need to write the backward-in-time equation in the form

(I + dtA)μ_i + y_i²μ_i = μ_{i+1} + ∫_{t_i}^{t_{i+1}} (y_i − y_d) ds
The semilinear term was handled both with linearization and with a fixed-point method, and we observed similar results.
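A minimal sketch of the fixed-point treatment of the semilinear term for a single dG(0) (backward-Euler-type) step: the cubic term is lagged and the remaining linear system is solved repeatedly. This is a 1-D finite-difference toy with illustrative grid, dt, and data, not the actual 2-d finite element setup:

```python
import numpy as np

# One step of (I + dt*A)y_{i+1} + (dt/3)y_{i+1}^3 = y_i + dt*(f + g),
# solved by fixed-point iteration on the cubic term (1-D toy problem).
n, dt = 50, 1e-3
h = 1.0 / (n + 1)
# A: finite-difference Laplacian with homogeneous Dirichlet conditions
A = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2
M = np.eye(n) + dt * A

x = np.linspace(h, 1 - h, n)
y_old = np.sin(np.pi * x)          # previous time-slab value y_i
rhs = y_old + dt * np.ones(n)      # y_i + dt*(f + g), with f + g = 1 here

y = y_old.copy()
for it in range(50):               # fixed point: lag the cubic term
    y_new = np.linalg.solve(M, rhs - dt * y**3 / 3)
    if np.linalg.norm(y_new - y, np.inf) < 1e-12:
        y = y_new
        break
    y = y_new
```

Since dt·y² is small, the fixed-point map is a strong contraction and a handful of iterations suffice; a Newton linearization of the cubic term behaves similarly here.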
Table 5.1: Convergence rates for the 2-d solution in the case of k = 0, l = 1 (h = τ).

h = τ         ‖e‖_{L2[0,T;H1_0(Ω)]}   ‖r‖_{L2[0,T;H1_0(Ω)]}   ‖g − g_h‖_{L2[0,T;L2(Ω)]}
0.02946280    3.631050                0.05551130              0.02498330
0.01473140    1.508560                0.02618430              0.01082740
0.00736570    0.772711                0.01454260              0.00561528
0.00368285    0.391391                0.00758848              0.00281426
Rate          1.071233                0.95696566              1.05004366
5.1.2 Strong Wolfe–Powell conditions
As in the previous chapter, we use an algorithm based on the steepest-descent (projected gradient) method. The projection step ε_n is necessary since the updated iterate might not be admissible.
228 5 Distributed Control Experiment In Semilinear Parabolic Pdes
Table 5.2: Convergence rates for the 2-d solution in the case of k = 0, l = 1 (h² = τ).

h (h² = τ)    ‖e‖_{L2[0,T;H1_0(Ω)]}   ‖r‖_{L2[0,T;H1_0(Ω)]}   ‖g − g_h‖_{L2[0,T;L2(Ω)]}
0.1178510     2.254550                0.04141390              0.07661170
0.0589256     1.003230                0.01943350              0.02208320
0.0294628     0.470049                0.00914215              0.00546600
0.0147314     0.229416                0.00445367              0.00135706
Rate          1.051790                1.06430666              1.89617666
We used the Fletcher–Reeves conjugate direction as search direction and then computed the step in this direction. The step ε_n is derived from a suitable line-search strategy. Note that in the experiments of this paragraph (see Table 5.3), specifically for k = 0, although we
Table 5.3: Convergence rates for the 2-d problem with k = 0, l = 1 (h² = τ).

h (h² = τ)    ‖e‖_{L2[0,T;H1_0(Ω)]}   ‖r‖_{L2[0,T;H1_0(Ω)]}   ‖g − g_h‖_{L2[0,T;L2(Ω)]}
0.1178510     2.195070                0.0411142               0.348617
0.0589256     0.989756                0.0192208               0.098052
0.0294628     0.467749                0.0091017               0.027175
0.0147314     0.229123                0.0044466               0.008308
Rate          1.086690                1.0695966               1.796943
spent more computational resources in memory, we were able to significantly reduce the number of iterations of the double iteration loop of the gradient method, from an average of 31 iterations to 23 (keeping almost the same convergence rates and similar results), by using the strong Wolfe–Powell conditions:
1. J(y_{k+1}, g_{k+1}) ≤ J(y_k, g_k) + σ ε_k J′_k^T d_k (Armijo rule)
2. |J′_{k+1}^T d_k| ≤ −ρ J′_k^T d_k

with 0 < ρ ≤ σ < 1 and d_{k+1} = −J′_{k+1} + β_{k+1} d_k, d_0 = −J′_0, choosing the Fletcher–Reeves conjugate directions β_k = (J′_k^T J′_k)/‖J′_{k−1}‖².
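These conditions can be exercised on a small quadratic stand-in for the discrete cost functional. The sketch below (hypothetical Q, b) uses SciPy's Wolfe line search (Armijo constant c1 = 1e-4, curvature constant c2 = 0.4) together with Fletcher–Reeves directions:

```python
import numpy as np
from scipy.optimize import line_search

# Fletcher-Reeves CG with a strong-Wolfe line search on a toy quadratic
# J(g) = 0.5 g^T Q g - b^T g, standing in for the cost functional.
Q = np.array([[3.0, 0.5], [0.5, 1.0]])
b = np.array([1.0, -2.0])
J = lambda g: 0.5 * g @ Q @ g - b @ g
dJ = lambda g: Q @ g - b

g = np.zeros(2)
d = -dJ(g)                          # d_0 = -J'_0
for k in range(50):
    grad = dJ(g)
    if np.linalg.norm(grad) < 1e-10:
        break
    # step satisfying the Armijo and strong curvature conditions
    step = line_search(J, dJ, g, d, c1=1e-4, c2=0.4)[0]
    g_new = g + step * d
    grad_new = dJ(g_new)
    beta = (grad_new @ grad_new) / (grad @ grad)   # Fletcher-Reeves
    d = -grad_new + beta * d
    g = g_new

print(g)  # ≈ the minimizer Q^{-1} b
```

On a quadratic the strong curvature condition forces near-exact line searches, so the conjugate directions recover the fast CG behavior that motivates their use in the thesis experiments.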
6 Experiment for Stokes Equations with Distributed Control

This chapter presents the basic concepts and the related test results for a distributed control problem in evolutionary Stokes equations with zero Dirichlet boundary conditions.
Contents
6.1 Distributed control in Stokes - description of the model  230
6.1.1 Smooth data  231
6.1.1.1 Time k = 0 and Taylor-Hood space discretization  231
6.1.1.2 Time k = 1 and Taylor-Hood space discretization  231
6.1.2 Rough initial data (discontinuity of y_0, y_d, g)  233
6.1.2.1 Discretization without control constraints  233
6.1.2.2 Discretization with control constraints  234
6.1 Distributed control in Stokes - description of the model
In this paragraph we examine the mathematical model and the theoretical rates of convergence studied in previous chapters, related to the evolutionary Stokes equations with distributed control.

The examples are based on [60, Section 3]. The pressure and the velocity need to be discretized in suitable finite element spaces satisfying the necessary inf-sup condition; such spaces include, e.g., the Taylor-Hood P2/P1 elements for the space approximation of the velocity/pressure pair. For the time approximation we will use dG time-stepping schemes with k = 0 and k = 1, i.e. piecewise constants and piecewise linears respectively. Our examples focus on the unconstrained and constrained control cases, where a classical bootstrap argument implies smooth solutions for the state and adjoint variables for smooth and nonsmooth data.
We consider numerical tests in the case k = 0, and some examples for the case k = 1, which is more difficult to compute but has better rates of convergence, for the model problem. Our domain is Ω×[0,T] = [0,2]²×[0,0.1], choosing y|_Γ = 0, with known exact solution
y = (y_1, y_2) = ((cos(2kx) − 1) sin(2my), sin(2mx)(1 − cos(2ky))) e^{−νt/2}

p = e^{−νt}((sin(kx)² sin(my)²)/k² + (cos(2kx) − 1)² sin(2my)² + sin(2mx)²(1 − cos(2ky))²)/2

g = (g_1, g_2), where

g_1 = (((kν sin(kx)² − kν cos(kx)² + kν) cos(my) sin(my) + ((−8km² − 8k³) sin(kx)² + (8km² + 8k³) cos(kx)² − 8km²) cos(my) sin(my))/k) e^{−νt/2}

g_2 = (((k²ν sin(2mx) cos(2ky) − k²ν sin(2mx)) + (−8k²m² − 8k⁴) sin(2mx) cos(2ky) + 8k²m² sin(2mx))/(2k²)) e^{−νt/2},
initial velocity y_0 = ((cos(2kx) − 1) sin(2my), sin(2mx)(1 − cos(2ky))) and target function y_d = (y_{d1}, y_{d2}) = (0.5, 0.5).

The force term f = (f_1, f_2) can easily be computed from the state equation by substituting the above exact solution into the equation; in particular
f_1 = (((cos(kx) sin(kx) sin(my)² + (16k² cos(kx) sin(kx)³ + (16k² cos(kx) − 16k² cos(kx)³) sin(kx)) cos(my)² sin(my)² + ((16km cos(mx) sin(mx)³ − 16km cos(mx)³ sin(mx)) cos(ky)² − 8km cos(mx) sin(mx)³ + 8km cos(mx)³ sin(mx)) sin(ky)² + (8km cos(mx) sin(mx)³ − 8km cos(mx)³ sin(mx)) cos(ky)² − 8km cos(mx) sin(mx)³ + 8km cos(mx)³ sin(mx))/k) e^{−νt}

f_2 = (((2m sin(kx)² cos(my) sin(my) + (−4k²m sin(2kx)² − 8k²m cos(2kx) + 8k²m) cos(2my) sin(2my) + (4k³ sin(2mx)² − 4k³ sin(2mx)² cos(2ky)) sin(2ky))/(2k²)) e^{−νt}
For the velocity we expect O(h³ + τ^{k+1}) and O(h² + τ^{k+1}) rates of convergence in the L2[0,T;L2(Ω)] and L2[0,T;H1(Ω)] norms respectively.
We choose a constant regularization parameter α = 10⁻⁴ in the functional, and the free parameters similar to [32]: ν = 1, k = π, m = π and λ = 1. The optimal control problem is solved with the
finite element software package FreeFem++ (see, e.g., [64]), using a gradient algorithm for the control function.
6.1.1 Smooth data
In this section we study the case of smooth initial data, where we know the exact optimal solution. We choose a larger step h = 0.47 compared to the previous examples because of the bigger Ω (a square with edge 2), so such a large step is acceptable. At the end of this chapter we present the related degrees of freedom.
All the examples present the expected (according to the theory) rates of convergence. In general it is difficult to solve the system numerically, especially for k = 1, where we have a system of 4 equations for the velocity vector alone (and similarly for each other variable).
6.1.1.1 Time k = 0 and Taylor-Hood space discretization
Example 1 (k = 0 for τ = h²/8). Let τ = h²/8. We expect

‖e‖_{L2[0,T;L2(Ω)]} = O(h²) and ‖e‖_{L2[0,T;H1(Ω)]} = O(h²).

For this mesh choice the related errors are shown in Table 6.1.

Table 6.1: Convergence rates with k = 0 and τ = h²/8.

h (τ = h²/8)   ‖e‖_{L2[0,T;L2(Ω)]}   ‖e‖_{L2[0,T;H1(Ω)]}   ‖g − g_h‖_{L2[0,T;L2(Ω)]}
0.4714050      0.110215              1.81853               5.33150
0.2357022      0.011512              0.43118               0.63211
0.1178511      0.002031              0.11109               0.11369
0.0589255      0.001255              0.02922               0.07081
Rate           2.152143              1.98600               2.07596
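The "Rate" row of these tables can be reproduced from the errors on successively halved meshes, since rate ≈ log(e_coarse/e_fine)/log(h_coarse/h_fine); for instance, using the L2[0,T;H1(Ω)] column of Table 6.1:

```python
import numpy as np

# Observed convergence rates from errors on successively halved meshes.
h = np.array([0.4714050, 0.2357022, 0.1178511, 0.0589255])
e = np.array([1.81853, 0.43118, 0.11109, 0.02922])
rates = np.log(e[:-1] / e[1:]) / np.log(h[:-1] / h[1:])
print(rates)  # each entry is close to 2, the expected O(h^2) behavior
```

Averaging the three entries reproduces the tabulated rate of about 1.986.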
6.1.1.2 Time k = 1 and Taylor-Hood space discretization
Example 2 (k = 1 for τ = h/16). Let τ = h/16. We expect

‖e‖_{L2[0,T;L2(Ω)]} = O(h²), ‖e‖_{L2[0,T;H1(Ω)]} = O(h²).

For this mesh choice the related errors are shown in Table 6.2. We emphasize that the coarse time stepping τ ≈ h still gives the expected theoretical rates, which highlights the implicit nature of dG time-stepping schemes. Here we also note that the penalty parameter satisfies α ≪ h for all mesh-size choices.
Table 6.2: Convergence rates with k = 1 and τ = h/16.

h (τ = h/16)   ‖e‖_{L2[0,T;L2(Ω)]}   ‖e‖_{L2[0,T;H1(Ω)]}   ‖g − g_h‖_{L2[0,T;L2(Ω)]}
0.4714050      0.108866              2.315120              5.470750
0.2357022      0.010535              0.453111              0.607322
0.1178511      0.001838              0.113375              0.083115
0.0589255      0.000832              0.028927              0.020270
Rate           2.343953              2.107000              2.686666
Example 3 (k = 1 for τ = h^{3/2}/10). Let τ = h^{3/2}/10. We expect

‖e‖_{L2[0,T;L2(Ω)]} = O(h³), ‖e‖_{L2[0,T;H1(Ω)]} = O(h²).

For this mesh choice the errors are shown in Table 6.3. Here we obtain the errors in the L2[0,T;L2(Ω)] norm with an almost coarse choice of time stepping.

Table 6.3: Convergence rates with k = 1 and τ = h^{3/2}/10.

h (τ = h^{3/2}/10)   ‖e‖_{L2[0,T;L2(Ω)]}   ‖e‖_{L2[0,T;H1(Ω)]}   ‖g − g_h‖_{L2[0,T;L2(Ω)]}
0.4714050            0.1138780             2.420150              5.718610
0.2357022            0.0104282             0.455479              0.610602
0.1178511            0.0014891             0.112681              0.082763
0.0589255            0.0004965             0.028212              0.020051
Rate                 2.6137833             2.140366              2.718333
Example 4 (k = 1 and τ = h²/8). Let τ = h²/8. We expect

‖e‖_{L2[0,T;L2(Ω)]} = O(h³), ‖e‖_{L2[0,T;H1(Ω)]} = O(h²).

For this mesh choice we obtain the results shown in Table 6.4.

Table 6.4: Convergence rates with k = 1 and τ = h²/8.

h (τ = h²/8)   ‖e‖_{L2[0,T;L2(Ω)]}   ‖e‖_{L2[0,T;H1(Ω)]}   ‖g − g_h‖_{L2[0,T;L2(Ω)]}
0.4714050      0.105817              2.251280              5.320290
0.2357022      0.010357              0.461360              0.618637
0.1178511      0.001298              0.112730              0.082865
0.0589255      0.000355              0.028156              0.020091
Rate           2.739333              2.106666              2.671000
Remark 6.1.1. Comparing the cases k = 0 and k = 1 (see, e.g., Tables 6.1 and 6.4), we notice almost the same errors in the L2[0,T;H1(Ω)] norm, approximately equal to 0.02922 for k = 0 and 0.028156 for k = 1. We also see smaller errors in the L2[0,T;L2(Ω)] norm, about 0.001 if k = 0 versus 0.0003 for k = 1. The functional is better minimized when k = 1: its value is 0.07 if k = 0, while if k = 1 it is 0.02.
6.1.2 Rough initial data (discontinuity of y_0, y_d, g)
Finally, we close this section by presenting a computational example with rough (discontinuous) data y_0, y_d and unknown true solution. Once again the model problem is posed in Ω×[0,T] = [0,2]²×[0,0.1]. Here the obvious choice for the discretization in time is piecewise constants (in time), k = 0, combined with the standard Taylor-Hood element in space.
As reference solution we take the one computed on the finest partition of the square (namely 96 × 96), comparing it with our computations on each of the previous meshes, using interpolation between the different spaces U_h.
6.1.2.1 Discretization without control constraints

We apply a discontinuity to the initial data and to the target function y_d too.
Example 5 (k = 0 for τ = h²/8, with discontinuity). The predicted convergence rates in this example are

‖e‖_{L2[0,T;L2(Ω)]} = O(h), ‖r‖_{L2[0,T;L2(Ω)]} = O(h).

We have force f = (f_1, f_2) as before, but with a discontinuity on the target function, the control, the state variable y and the conjugate variable μ, as below:

y_d(x_1,x_2) = (y_{d1}(x_1,x_2), y_{d2}(x_1,x_2)), where

y_{d1}(x_1,x_2) = y_{d2}(x_1,x_2) = { 0.5 + 6 if y ≥ 0.5 and x ≥ 0.5;  0.5 if y < 0.5 and x < 0.5 }

y_0(x_1,x_2) = (y_{01}(x_1,x_2), y_{02}(x_1,x_2)), where

y_{01}(x_1,x_2) = { 6 + (cos(2kx) − 1) sin(2my) if y ≥ 0.5 and x ≥ 0.5;  (cos(2kx) − 1) sin(2my) if y < 0.5 and x < 0.5 }

y_{02}(x_1,x_2) = { 6 + sin(2mx)(1 − cos(2ky)) if y ≥ 0.5 and x ≥ 0.5;  sin(2mx)(1 − cos(2ky)) if y < 0.5 and x < 0.5 }
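A quick sketch of this discontinuous initial datum (assuming k = m = π as in the smooth example; applying the jump only on the quadrant {x ≥ 0.5, y ≥ 0.5} is an illustrative convention for the regions the text leaves unspecified):

```python
import numpy as np

# Discontinuous initial velocity of Example 5: a jump of height 6 on the
# quadrant {x >= 0.5, y >= 0.5} of the domain [0,2]^2.
k = m = np.pi

def y0(x, y):
    jump = 6.0 * ((x >= 0.5) & (y >= 0.5))
    y01 = jump + (np.cos(2 * k * x) - 1.0) * np.sin(2 * m * y)
    y02 = jump + np.sin(2 * m * x) * (1.0 - np.cos(2 * k * y))
    return np.stack([y01, y02])
```

Evaluating on a grid (e.g. `y0(X, Y)` for meshgrid arrays) exposes the jump that degrades the convergence rate to the O(h) observed in Table 6.5.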
In order to start the gradient algorithm we used the initial control

g^0(x_1,x_2) = (g_{01}(x_1,x_2), g_{02}(x_1,x_2))

with

g_{01}(x_1,x_2) = { 6 + (((kν sin(kx)² − kν cos(kx)² + kν) cos(my) sin(my) − ((8km² + 8k³) sin(kx)² + (8km² + 8k³) cos(kx)² − 8km²) cos(my) sin(my))/k) if y ≥ 0.5 and x ≥ 0.5;
(((kν sin(kx)² − kν cos(kx)² + kν) cos(my) sin(my) − ((8km² + 8k³) sin(kx)² + (8km² + 8k³) cos(kx)² − 8km²) cos(my) sin(my))/k) if y < 0.5 and x < 0.5 }
g_{02}(x_1,x_2) = { 6 + (((k²ν sin(2mx) cos(2ky) − k²ν sin(2mx)) + (−8k²m² − 8k⁴) sin(2mx) cos(2ky) + 8k²m² sin(2mx))/(2k²)) if y ≥ 0.5 and x ≥ 0.5;
(((k²ν sin(2mx) cos(2ky) − k²ν sin(2mx)) + (−8k²m² − 8k⁴) sin(2mx) cos(2ky) + 8k²m² sin(2mx))/(2k²)) if y < 0.5 and x < 0.5 }
Table 6.5: Convergence rates with k = 0 and τ = h²/8, with discontinuity on the initial data and on the target function too.

h (τ = h²/8)   ‖e‖_{L2[0,T;L2(Ω)]}   ‖r‖_{L2[0,T;L2(Ω)]}   J(y,g)
0.4714050      0.126828              0.0079597             14.80282
0.235702       0.036255              0.0015081             9.742095
0.117851       0.014052              0.0004364             9.608375
0.058925       0.004472              0.0000703             9.619787
0.029462       -                     -                     9.612306
Rate           1.608596              2.2742714             -
6.1.2.2 Discretization with control constraints

In this subsection we study the case of rough initial data and rough target function, where the exact solution is unknown. In Examples 6, 7 we also examine the case of control constraints, in two settings: relaxed constraints −8.5 ≤ g_i ≤ 8.5 and more restrictive constraints −0.5 ≤ g_i ≤ 0.5. In both cases we apply the discontinuity to the data and to the target y_d as before.
Example 6 (k = 0 and τ = h²/8, with discontinuity and relaxed control constraints). The convergence rates predicted by the theory are

‖e‖_{L2[0,T;L2(Ω)]} = O(h), ‖r‖_{L2[0,T;L2(Ω)]} = O(h).

We choose f = (f_1, f_2) as in Example 5, with discontinuity on the target, the control, the state y and the conjugate variable μ (the results are in Table 6.6).

To start the gradient algorithm we used the control

g^0(x_1,x_2) = (g_{01}(x_1,x_2), g_{02}(x_1,x_2)) = (0, 0).
Example 7 (k = 0 for τ = h²/8, with discontinuity and strict control constraints). We again expect the convergence rates

‖e‖_{L2[0,T;L2(Ω)]} = O(h), ‖r‖_{L2[0,T;L2(Ω)]} = O(h).

We choose f = (f_1, f_2) as before, applying the discontinuity to the control and the state y but not to the target function (the results are in Table 6.7).

We started the gradient method using the initial control

g^0(x_1,x_2) = (g_{01}(x_1,x_2), g_{02}(x_1,x_2)) = (6, 6).
Table 6.6: Convergence rates with k = 0 and τ = h²/8, with discontinuity on the initial data and on the target function, and weak control constraints.

h (τ = h²/8)   ‖e‖_{L2[0,T;L2(Ω)]}   J(y,g)
0.471405       0.125484              14.35750
0.235702       0.038590              9.417572
0.117851       0.014412              9.289013
0.058925       0.004503              9.299375
0.029462       -                     9.291695
Rate           1.600097              -
Table 6.7: Convergence rates with k = 0 and τ = h²/8, with discontinuity on the initial data and strict control constraints.

h (τ = h²/8)   ‖e‖_{L2[0,T;L2(Ω)]}   J(y,g)
0.471405       0.125664              22.65422
0.235702       0.038621              14.78615
0.117851       0.014417              14.55425
0.058925       0.004504              14.55310
0.029462       -                     14.53629
Rate           1.600733              -
Remark 6.1.2. Concerning the examples with unconstrained controls and those with strict control constraints for rough initial data, as we can see in Tables 6.5 and 6.7, we notice similar values for the L2[0,T;L2(Ω)] error norm and the same convergence rate, as predicted by the theory, while the minimized functional takes larger values in the case of strict control constraints.
Remark 6.1.3. In Figures 6.1, 6.2 we show some snapshots of the state variable for the example with smooth data on two different meshes. Note that at the base of each figure the respective velocity vectors are shown, while the three-dimensional plot above the base represents the pressure. In Figures 6.3, 6.4, 6.5 we show some snapshots of the state and adjoint variables for the example with rough data, at the beginning as the algorithm starts and after some time, when the state variable is close to the target.
Remark 6.1.4. We mention that the degrees of freedom of the above examples for each partition develop as follows:

• If τ = O(h²)/8: [U_h ndof, P_h ndof, Time ndof] = [169, 49, 72], [625, 169, 288], [2401, 625, 1152], [9409, 2401, 4608] ([37249, 9409, 18432]) for each mesh.
• If τ = O(h)/16: [U_h ndof, P_h ndof, Time ndof] = [169, 49, 68], [625, 169, 136], [2401, 625, 272], [9409, 2401, 544] ([37249, 9409, 1087]).
• If τ = O(h^{3/2})/10: [U_h ndof, P_h ndof, Time ndof] = [169, 49, 43], [625, 169, 85], [2401, 625, 170], [9409, 2401, 340] ([37249, 9409, 679]).
We also note that if k = 0 we have to solve the system for 8 variables (3 for the state, 3 for the adjoint and 2 for the control), while for k = 1 we have to solve the system for 14 variables (6 for the state, 6 for the adjoint and 2 for the control). We recall that each variable is a sequence of polynomials in space (values at each grid point).
Figure 6.1: State variable snapshot on a 12×12 mesh with smooth initial data.
Figure 6.2: State variable snapshot on a 24×24 mesh with smooth initial data.
Figure 6.3: State variable snapshot for rough initial data as the algorithm starts.
Figure 6.4: State variable snapshot for rough initial data as the algorithm finishes.
Figure 6.5: Conjugate variable snapshot for rough initial data.
Remark 6.1.5. Finally, we recall that in the last examples (nonsmooth case) we considered as solution the one computed on the finest grid; the degrees of freedom on that grid are the numbers enclosed in parentheses in the remark above.
7 An Application In Biology: Experiment With Distributed Control in Semilinear Parabolic Systems Of Pdes

In this chapter we present the basic theoretical concepts and the experimental results for a distributed control problem with zero Dirichlet boundary condition for a FitzHugh-Nagumo system (parabolic equations).
Contents
7.1 Distributed control subject to FitzHugh-Nagumo systems  240
7.1.1 Introduction - Related results  240
7.1.2 Description of the model  241
7.1.3 Weak form  241
7.1.4 The fully discretized system  242
7.1.5 Numerical Experiments  243
7.1 Distributed control subject to FitzHugh-Nagumo systems

7.1.1 Introduction - Related results
Optimal control theory has many useful applications in scientific fields such as biology, medicine, engineering and sociology. Here we present an application related to biology which shows how important and directly applicable optimal control theory is to real problems.
One of the most important discoveries of the 20th century in biophysics is the understanding of the way that nerves carry information. The key finding relates the transport of sodium and potassium ions (also sodium and calcium) along the outer membrane of a nerve cell to electrical signals which may propagate along the membrane after appropriate stimulation. Alan Hodgkin and Andrew Huxley (working in the early 1950s) described the theory of ion transport, created a mathematical model, and interpreted the experimental data for electrical signals stimulated in squid giant axons; they were awarded the Nobel Prize in Physiology or Medicine in 1963. The original Hodgkin-Huxley model consists of a system of four ODEs. Simplifications of the basic model, modifications adaptable to other excitable media (e.g. muscle cells), and spatially dependent versions have been studied extensively.
One of the most significant simplifications of the Hodgkin-Huxley model was presented by Richard FitzHugh from the standpoint of mathematical and numerical analysis; an electrical circuit for this model was built by Jin-Ichi Nagumo. This two-state model, which is still used extensively, describes the qualitative electrical behavior of stimulated nerve cells, and it is the model we will study. However, we are far from fully understanding biological excitable media. Many modern studies focus on ion transport: live membranes contain various ion channels (along the membrane) and are selective to specific ions, and the transfers and switches that open and close ion channels are fundamental to the functioning of many biological processes. Also, nerve cell networks and other excitable media are ubiquitous in biology; the study of such networks can lead to an understanding of how the brain works. Mathematics is playing an increasingly important role in this interdisciplinary research area.
The state variable y_1 represents the voltage, also called the action or membrane potential, and y_2 is called the recovery variable (the voltage variable exhibits a cubic nonlinearity allowing regenerative self-stimulation through positive feedback, while the recovery variable has linear dynamics that provide a slower negative feedback).

The FitzHugh-Nagumo model is not constructed to make quantitative predictions but to capture the qualitative characteristics of the electrical activity along a neuron.

The most important prediction of the model (which agrees with experimental data) is the existence of a threshold: a sufficiently strong pulse stimulus produces travelling electrical voltage (and recovery) waves that propagate away from the spatial location of the stimulus. This travelling membrane-potential wave is the mechanism responsible for the transfer of information along the neuron.

The Hodgkin-Huxley circuit models the electrical activity at a single point of a nerve. The process of opening and closing ion channels is modeled by diffusion of the voltage (which corresponds to the dimensionless state y_1); the spatial dependence is modeled as diffusion, where δ is the diffusivity. Adding this term to the right-hand side of the circuit model, and rescaling the spatial variable, we obtain the dimensionless form of the FitzHugh-Nagumo equations, which model the spatial coupling between ion channels along the nerve.

It is noteworthy that for δ ≪ 1 our system is similar to that described in the recent work [78].
7.1.2 Description of the model

In this section we present a mathematical model corresponding to the above description. In particular, we want to minimize the functional
J(y,g) = (1/2) ∫_0^T ‖y_1 − y_{1d}‖²_{L2(Ω)} dt + (γ_1/2) ∫_0^T ‖g_1‖²_{L2(Ω)} dt + (1/2) ∫_0^T ‖y_2 − y_{2d}‖²_{L2(Ω)} dt + (γ_2/2) ∫_0^T ‖g_2‖²_{L2(Ω)} dt    (7.1.1)
subject to

∂y_1/∂t − Δy_1 + y_1³ − y_1 = −y_2 + g_1 + f_1 in (0,T]×Ω,  y_1 = 0 on (0,T]×Γ,
∂y_2/∂t − δΔy_2 + εa_1y_2 = εy_1 + g_2 + f_2 in (0,T]×Ω,  y_2 = 0 on (0,T]×Γ,    (7.1.2)
y_1(0,x) = y_{10},  y_2(0,x) = y_{20} in Ω,
and the control constraints

g_{ia} ≤ g_i(t,x) ≤ g_{ib} for a.e. (t,x) ∈ (0,T)×Ω, where g_{ia}, g_{ib} ∈ R, i = 1, 2.
7.1.3 Weak form

We begin by stating the weak formulation of the state equation. Given f_1, f_2 ∈ L2[0,T;H⁻¹(Ω)], controls g_1, g_2 ∈ L2[0,T;L2(Ω)] and initial states y_{10}, y_{20} ∈ L2(Ω), we seek y_1, y_2 ∈ L2[0,T;H1_0(Ω)] ∩ H1[0,T;H⁻¹(Ω)] such that for a.e. t ∈ (0,T] and for all v ∈ H1(Ω):

⟨y_{1t}, v⟩ + α(y_1, v) + ⟨y_1³ − y_1, v⟩ = ⟨f_1, v⟩ + ⟨g_1, v⟩ and (y_1(0), v) = (y_{10}, v),
⟨y_{2t}, v⟩ + δα(y_2, v) = ε(y_1 − a_1y_2, v) + ⟨g_2, v⟩ + ⟨f_2, v⟩ and (y_2(0), v) = (y_{20}, v).    (7.1.3)
An equivalent weak formulation, which is more suitable for the analysis of dG schemes, is the following. Let (y_{g_i}, g_i) ≡ (y_i, g_i) ∈ W(0,T) × A_ad, i = 1, 2, denote the unique optimal pairs. Then there exists an adjoint pair μ_1, μ_2 ∈ W(0,T) = L2[0,T;H1(Ω)] ∩ H1[0,T;H⁻¹(Ω)], satisfying μ_1(T) = μ_2(T) = 0, such that for all v ∈ L2[0,T;H1(Ω)] ∩ H1[0,T;H⁻¹(Ω)]:
(y1(T ) v(T )) +int T
0
(minus〈y1 vt〉+ α (y1 v) +
(y3
1 minus y1 v))dt
= (y10 v(0)) +int T
0(〈f1 minus y2 v〉)dt+
int T
0(〈g1 v〉)dt (714)
(y2(T ) v(T )) +int T
0(minus〈y2 vt〉+ δα (y2 v))dt
= (y20 v(0)) +int T
0(〈ε(y1 minus a1y2) v〉+ 〈f2 v〉)dt+
int T
0(〈g2 v〉)dt (715)
∫_0^T (⟨μ_1, v_t⟩ + α(μ_1, v) + ⟨(3y_1² − 1)μ_1, v⟩) dt = −(μ_1(0), v(0)) + ∫_0^T ((y_1 − y_{1d}, v)) dt    (7.1.6)

∫_0^T (⟨μ_2, v_t⟩ + α(μ_2, v) − ⟨εa_1μ_2, v⟩) dt = −(μ_2(0), v(0)) + ∫_0^T ((y_2 − y_{2d}, v)) dt    (7.1.7)
with the control constraints

∫_0^T ∫_Ω ((αg_1 + μ_1)(u_1 − g_1) + (αg_2 + μ_2)(u_2 − g_2)) dx dt ≥ 0  ∀u_1, u_2 ∈ A_ad.    (7.1.8)
In addition y_{it}, μ_{it} ∈ L2[0,T;H⁻¹(Ω)], and note that (7.1.8) is equivalent to

g_i(t,x) = Proj_{[g_{ia},g_{ib}]}( −(1/α)μ_i(t,x) )

for a.e. (t,x) ∈ (0,T]×Ω. In addition μ_i ∈ L2[0,T;H2(Ω)] and μ_{it} ∈ L2[0,T;L2(Ω)], i = 1, 2.
7.1.4 The fully discretized system

Let (y_h(g_{ih}), g_{ih}) ≡ (y_{ih}, g_{ih}) ∈ U_h × L2[0,T;U_h], i = 1, 2, denote the unique optimal pairs. Then there exists an adjoint pair μ_{1h}, μ_{2h} ∈ U_h, satisfying μ^N_{1h,+} = μ^N_{2h,+} = 0, such that for all υ_h ∈ P_k[t_{n−1}, t_n; U_h] and for all n = 1, ..., N:
(y_1^n, υ^n) + ∫_{t_{n−1}}^{t_n} (−⟨y_{1h}, υ_{ht}⟩ + α(y_{1h}, υ_h) + (y_{1h}³ − y_{1h}, υ_h)) dt = (y_1^{n−1}, υ_+^{n−1}) + ∫_{t_{n−1}}^{t_n} ⟨f_1 − y_{2h}, υ_h⟩ dt + ∫_{t_{n−1}}^{t_n} ⟨g_1, υ_h⟩ dt    (7.1.9)
(y_2^n, υ^n) + ∫_{t_{n−1}}^{t_n} (−⟨y_{2h}, υ_{ht}⟩ + δα(y_{2h}, υ_h)) dt = (y_2^{n−1}, υ_+^{n−1}) + ∫_{t_{n−1}}^{t_n} (⟨ε(y_{1h} − a_1y_{2h}), υ_h⟩ + ⟨f_2, υ_h⟩) dt + ∫_{t_{n−1}}^{t_n} ⟨g_2, υ_h⟩ dt    (7.1.10)
(μ_{1,+}^n, υ^n) + ∫_{t_{n−1}}^{t_n} (⟨μ_{1h}, υ_{ht}⟩ + α(μ_{1h}, υ_h) + ⟨(3y_{1h}² − 1)μ_{1h}, υ_h⟩) dt = −(μ_{1,+}^{n−1}, υ_+^{n−1}) + ∫_{t_{n−1}}^{t_n} ((y_{1h} − y_{1d}, υ_h)) dt    (7.1.11)
(μ_{2,+}^n, υ^n) + ∫_{t_{n−1}}^{t_n} (⟨μ_{2h}, υ_{ht}⟩ + α(μ_{2h}, υ_h) − ⟨εa_1μ_{2h}, υ_h⟩) dt = −(μ_{2,+}^{n−1}, υ_+^{n−1}) + ∫_{t_{n−1}}^{t_n} ((y_{2h} − y_{2d}, υ_h)) dt    (7.1.12)
with the control constraints

∫_0^T ∫_Ω ((αg_{1h} + μ_{1h})(u_{1h} − g_{1h}) + (αg_{2h} + μ_{2h})(u_{2h} − g_{2h})) dx dt ≥ 0  ∀u_{1h}, u_{2h} ∈ A_ad^d.    (7.1.13)
In addition, (7.1.13) is equivalent to

g_{ih}(t,x) = Proj_{[g_{ia},g_{ib}]}( −(1/α)μ_{ih}(t,x) ), i = 1, 2,

for a.e. (t,x) ∈ (0,T]×Ω.

Due to the bounds g_{ia}, g_{ib} on the control variable, a projection onto the set of admissible controls is needed, which is given by the cutoff function

P_{[g_{ia},g_{ib}]}(g) = max{g_{ia}, min{g_{ib}, g}}.
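The cutoff above is just a componentwise clip; a tiny sketch (the values of α, μ and the bounds are purely illustrative):

```python
import numpy as np

# P_[ga,gb](g) = max{ga, min{gb, g}}, applied to g = Proj(-(1/alpha) * mu)
def proj(g, ga, gb):
    return np.clip(g, ga, gb)       # identical to max(ga, min(gb, g))

alpha, ga, gb = 1e-4, -0.5, 0.5     # illustrative bounds and regularization
mu = np.array([-2e-4, 3e-5, 9e-5])  # illustrative adjoint values
g = proj(-mu / alpha, ga, gb)
print(g)  # [ 0.5 -0.3 -0.5]
```

The first and third components hit the bounds, which is exactly how active control constraints arise in Examples 6 and 7 of the previous chapter.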
7.1.5 Numerical Experiments

In this section we validate numerically the a priori error estimates for k = 0, l = 1 (constant in time and linear in space polynomials), in the cases τ = O(h²) and τ = O(h), for the state and conjugate variables in the L2[0,T;H1_0(Ω)] norm and for the control in the L2[0,T;L2(Ω)] norm.

We use an algorithm based on a steepest-descent (projected gradient) method, after initializing n = 0, ε = 1, tol, g_1^0 and g_2^0. We note that, e.g., y_i^n is a sequence of piecewise linear polynomials in time (and every term of this sequence is another sequence, piecewise in space) in the nth iteration of the gradient method.

In the case of unconstrained control we let g_{ia} → −∞, g_{ib} → ∞. Specifically, we use the following code:
• Step 0 (Initial state): For g_1 = g_1^0, g_2 = g_2^0, solve the system for y_1 = y_1^0, y_2 = y_2^0:

y_{1t} − Δy_1 + y_1³ − y_1 = −y_2 + g_1 + f_1
y_{2t} − δΔy_2 + εa_1y_2 = εy_1 + g_2 + f_2
y_1|_Γ = y_2|_Γ = 0,  y_1(0,x) = y_{10},  y_2(0,x) = y_{20}

• Step 1 (Conjugate equation): Find μ_1 = μ_1^n, μ_2 = μ_2^n by solving the system

μ_{1t} + Δμ_1 + (3y_1² − 1)μ_1 = y_1 − y_{1d}
μ_{2t} + δΔμ_2 + εa_1μ_2 = y_2 − y_{2d}
μ_1|_Γ = μ_2|_Γ = 0,  μ_1(T,x) = μ_2(T,x) = 0
• Step 2 (New descent direction): Choose as descent direction the negative gradient of the cost functional:

−J′(g_1, g_2) = −(γ_1g_1 + μ_1, γ_2g_2 + μ_2)

• Step 3 (Checking step ε_n): Find the optimal size of ε_n:

J( P_{[g_{1a},g_{1b}]}(g_1^n + ε_n(γ_1g_1^n + μ_1^n)), P_{[g_{2a},g_{2b}]}(g_2^n + ε_n(γ_2g_2^n + μ_2^n)) )
= min_{ε>0} J( P_{[g_{1a},g_{1b}]}(g_1^n + ε(γ_1g_1^n + μ_1^n)), P_{[g_{2a},g_{2b}]}(g_2^n + ε(γ_2g_2^n + μ_2^n)) )
• Step 4 (New control): Set
g_1^{n+1} = P_[g_1a, g_1b](g_1^n − ε_n(γ_1 g_1^n + μ_1^n)),
g_2^{n+1} = P_[g_2a, g_2b](g_2^n − ε_n(γ_2 g_2^n + μ_2^n)).
• Step 5 (New state): If J_n ≤ J_{n−1}, set ε = 1.5ε; if J_n > J_{n−1}, set ε = 0.5ε. Go to Step 0 with g_1 = g_1^{n+1}, g_2 = g_2^{n+1}, y_1 = y_1^n, y_2 = y_2^n, and set n = n + 1. Stop if
|J_n − J_{n−1}| / J_n ≤ tol.
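Steps 0–5 can be sketched on a scalar surrogate problem. Here the "state solve" y(g) = g and the functional J(g) = ½(y − y_d)² + (γ/2)g² stand in for the FitzHugh–Nagumo system and the cost functional; the step-size rule rejects steps that increase J, a slight simplification of Step 5. All names are illustrative, not from the text:

```python
def solve_state(g):
    # Placeholder "state solve": in the thesis every J-evaluation solves the
    # FitzHugh-Nagumo system; here the state is the scalar surrogate y(g) = g.
    return g

def J(g, y_d, gamma):
    y = solve_state(g)
    return 0.5 * (y - y_d) ** 2 + 0.5 * gamma * g ** 2

def grad_J(g, y_d, gamma):
    # For the surrogate the "conjugate variable" is mu = y - y_d,
    # so J'(g) = gamma*g + mu, matching Step 2.
    return gamma * g + (solve_state(g) - y_d)

def projected_gradient(y_d, gamma, g_a, g_b, g0=0.0, eps=1.0,
                       tol=1e-10, max_it=10000):
    clamp = lambda v: max(g_a, min(g_b, v))
    g = clamp(g0)
    J_old = J(g, y_d, gamma)
    for _ in range(max_it):
        # Steps 2-4: move along the negative gradient and project.
        g_new = clamp(g - eps * grad_J(g, y_d, gamma))
        J_new = J(g_new, y_d, gamma)
        if J_new <= J_old:            # Step 5: accept, enlarge the step
            if abs(J_new - J_old) <= tol * abs(J_new):
                return g_new          # relative-decrease stopping rule
            g, J_old, eps = g_new, J_new, 1.5 * eps
        else:                         # Step 5: reject, halve the step
            eps *= 0.5
    return g
```

For wide bounds the iterates approach the unconstrained minimizer y_d/(1 + γ); with active bounds the limit is clipped to the admissible interval.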
We consider the following numerical examples for the model problem with known analytical exact solutions on Ω × (0, T) = (0, 0.01)² × (0, 0.01) and homogeneous Dirichlet boundary conditions, similarly to Chapter 5 and the example presented in [27].
We choose the parameters δ = 4, a_1 = 2, L = 0.01, H = 0.01, ε = 0.0001, following the example in [24], and for the control regularization parameters in the functional we choose γ_1 = γ_2 = 10⁻⁴.
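For orientation, the meshes of the τ = O(h) runs reported below halve h at each level, starting from the coarsest size h₀ ≈ 0.002357022; that h₀ coincides with L·√2/6 for L = 0.01 (the diagonal spacing of a 6×6 grid), which is an inference on our part, not stated in the text:

```python
L = H = 0.01
h0 = 0.002357022                     # coarsest mesh size appearing in the tables
hs = [h0 / 2**k for k in range(5)]   # h is halved at each refinement level
taus = [h / 2 for h in hs]           # the tau = O(h) runs use h = 2*tau
```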
Example 1. We assume the target functions
y_1d(t, x_1, x_2) = −(e^{−εt}(ε sin(πx_2/H)H² sin(πx_1/L)L² − sin(πx_2/H)H² sin(πx_1/L)L²
+ π² sin(πx_2/H) sin(πx_1/L)L² − sin(πx_2/H)H² sin(πx_1/L)L²
+ π² sin(πx_2/H)H² sin(πx_1/L)) + 3 sin(πx_2/H)³H² sin(πx_1/L)³L² e^{−3εt}
− 3e^{−εT−2εt} sin(πx_2/H)³H² sin(πx_1/L)³L² + e^{−εT}(sin(πx_2/H)H² sin(πx_1/L)L²
− π² sin(πx_2/H) sin(πx_1/L)L² − π² sin(πx_2/H)H² sin(πx_1/L)))/(H²L²),
y_2d(t, x_1, x_2) = (((((2a_1ε² − 1) sin(πx_2/H)H² − 2δεπ² sin(πx_2/H)) sin(πx_1/L)
+ 2ε sin(πx_2/H)H² sin(πx_1/L))L² − 2δεπ² sin(πx_2/H)H² sin(πx_1/L))e^{T/(2ε)}
+ (2δεπ² e^{t/(2ε)} sin(πx_2/H) − 2a_1ε² e^{t/(2ε)} sin(πx_2/H)H²) sin(πx_1/L)L²
+ 2δεπ² e^{t/(2ε)} sin(πx_2/H)H² sin(πx_1/L))e^{−T/(2ε)−t/(2ε)}/(2εH²L²),
and the initial conditions
y_10(x_1, x_2) = sin(πx_1/L) sin(πx_2/H),  y_20(x_1, x_2) = sin(πx_1/L) sin(πx_2/H),
chosen so that the optimal solution triples (y_1, μ_1, g_1), (y_2, μ_2, g_2) of the above problem are given by
y_1(t, x_1, x_2) = e^{−εt} sin(πx_1/L) sin(πx_2/H),
y_2(t, x_1, x_2) = e^{−t/(2ε)} sin(πx_1/L) sin(πx_2/H),
μ_1(t, x_1, x_2) = (e^{εt} − e^{εT}) sin(πx_1/L) sin(πx_2/H),
μ_2(t, x_1, x_2) = (e^{t/(2ε)} − e^{T/(2ε)}) sin(πx_1/L) sin(πx_2/H),
g_1(t, x_1, x_2) = P_{Q_ad}(e^{−3εt−t/(2ε)} sin(πx_2/H) sin(πx_1/L)(e^{t/(2ε)} sin(πx_2/H)² sin(πx_1/L)² − εe^{2εt+t/(2ε)})),
g_2(t, x_1, x_2) = P_{Q_ad}(a_1ε e^{−t/(2ε)} sin(πx_2/H) sin(πx_1/L)).
We emphasize that the control is constrained; specifically, g_i ∈ [g_ia, g_ib].
7.1 Distributed control subject to FitzHugh–Nagumo systems
Table 7.1: Convergence rates for the 2-d solution with control constraints in the case of k = 0, l = 1 (τ = O(h), h = 2τ) for the control, state and conjugate variables.

h            ‖e‖_{L²[0,T;H¹₀(Ω)]}  ‖r‖_{L²[0,T;H¹₀(Ω)]}  ‖g − g_h‖_{L²[0,T;L²(Ω)]}
0.002357022  0.0439518             8.86349               4.25156e-005
0.001178511  0.0214931             3.14208               1.20440e-005
0.000589255  0.0108039             1.20744               4.41810e-006
0.000294627  0.0054238             0.555306              3.26909e-006
0.000147313  0.0027193             0.282740              3.07129e-006
Rate         1.0036512             1.24257               0.947767750
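The "Rate" rows can be reproduced (up to small differences from rounding of the tabulated errors) as the experimental order of convergence averaged over the successive mesh halvings; a quick check on the state-error column of Table 7.1:

```python
import math

def observed_rate(errors):
    """Mean experimental order of convergence over successive halvings of h:
    average of log2(e_i / e_{i+1})."""
    rates = [math.log2(e0 / e1) for e0, e1 in zip(errors, errors[1:])]
    return sum(rates) / len(rates)

# State-variable errors in L2[0,T;H1_0] from Table 7.1 (tau = O(h) run).
e_state = [0.0439518, 0.0214931, 0.0108039, 0.0054238, 0.0027193]
rate = observed_rate(e_state)   # close to the reported value 1.0036512
```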
Table 7.2: Convergence rates for the 2-d solution with control constraints in the case of k = 0, l = 1 (τ = O(h²), h = τ^{1/2}/1.6) for the control and conjugate variable.

h           ‖e‖_{L²[0,T;H¹₀(Ω)]}  ‖r‖_{L²[0,T;H¹₀(Ω)]}  ‖g − g_h‖_{L²[0,T;L²(Ω)]}
0.00235702  0.0448696             9.62116               4.3365e-005
0.00138889  0.0216560             2.53040               1.2195e-005
0.00058925  0.0109022             1.11981               4.4012e-006
0.00029462  0.0054459             0.571635              3.1558e-006
Rate        1.0141566             1.35768               1.26015110
Example 2. Here we consider an unconstrained control function, with forces
f_1(t, x_1, x_2) = e^{−3εt−t/(2ε)} sin(πx_2/H) sin(πx_1/L)(−e^{2εt+t/(2ε)}H²L² + e^{3εt}H²L²
+ π²e^{2εt+t/(2ε)}H² + π²e^{2εt+t/(2ε)}L²)/(H²L²),
f_2(t, x_1, x_2) = e^{−εt−t/(2ε)} sin(πx_2/H) sin(πx_1/L)(−e^{εt}H²L² − 2ε²e^{t/(2ε)}H²L²
+ 2π²δεe^{εt}H² + 2π²δεe^{εt}L²)/(2εH²L²),
the target functions
y_1d(t, x_1, x_2) = 2 − cos(πx_1/L) sin(πx_2/H),  y_2d(t, x_1, x_2) = 2 − sin(πx_1/L) cos(πx_2/H),
and the initial conditions
y_10(x_1, x_2) = sin(πx_1/L) sin(πx_2/H),  y_20(x_1, x_2) = sin(πx_1/L) sin(πx_2/H),
chosen so that the optimal solution pairs (y_1, g_1), (y_2, g_2) are
y_1(t, x_1, x_2) = e^{−εt} sin(πx_1/L) sin(πx_2/H),
y_2(t, x_1, x_2) = e^{−t/(2ε)} sin(πx_1/L) sin(πx_2/H),
g_1(t, x_1, x_2) = e^{−3εt−t/(2ε)} sin(πx_2/H) sin(πx_1/L)(e^{t/(2ε)} sin(πx_2/H)² sin(πx_1/L)² − εe^{2εt+t/(2ε)}),
g_2(t, x_1, x_2) = a_1ε e^{−t/(2ε)} sin(πx_2/H) sin(πx_1/L).
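As a sanity check on the transcription of f_1 above, one can verify numerically that the stated exact solution satisfies the first state equation ∂_t y_1 − Δy_1 + y_1³ − y_1 = −y_2 + g_1 + f_1. The sketch below uses finite differences and moderate stand-in parameter values (the identity holds for any ε, L, H; the thesis values ε = 10⁻⁴, L = H = 0.01 are numerically stiff):

```python
import math

eps, L, H = 0.1, 1.0, 1.0   # moderate stand-in values, not the thesis data
pi, sin, exp = math.pi, math.sin, math.exp

def y1(t, x, y): return exp(-eps*t) * sin(pi*x/L) * sin(pi*y/H)
def y2(t, x, y): return exp(-t/(2*eps)) * sin(pi*x/L) * sin(pi*y/H)

def g1(t, x, y):
    sH, sL = sin(pi*y/H), sin(pi*x/L)
    return exp(-3*eps*t - t/(2*eps)) * sH * sL * (
        exp(t/(2*eps)) * sH**2 * sL**2 - eps * exp(2*eps*t + t/(2*eps)))

def f1(t, x, y):  # the forcing term f1 as transcribed for Example 2
    sH, sL = sin(pi*y/H), sin(pi*x/L)
    return exp(-3*eps*t - t/(2*eps)) * sH * sL * (
        -exp(2*eps*t + t/(2*eps))*H**2*L**2 + exp(3*eps*t)*H**2*L**2
        + pi**2*exp(2*eps*t + t/(2*eps))*H**2
        + pi**2*exp(2*eps*t + t/(2*eps))*L**2) / (H**2 * L**2)

# Residual of y1_t - Lap(y1) + y1^3 - y1 = -y2 + g1 + f1 at a sample point.
t0, x0, z0, d = 0.3, 0.37, 0.61, 1e-5
u = y1(t0, x0, z0)
y1_t = (y1(t0 + d, x0, z0) - y1(t0 - d, x0, z0)) / (2*d)
lap = ((y1(t0, x0 + d, z0) - 2*u + y1(t0, x0 - d, z0))
       + (y1(t0, x0, z0 + d) - 2*u + y1(t0, x0, z0 - d))) / d**2
residual = (y1_t - lap + u**3 - u) - (-y2(t0, x0, z0) + g1(t0, x0, z0) + f1(t0, x0, z0))
```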
This optimal control problem was solved, as were the examples in the previous chapters, with the software package FreeFem++; see e.g. [64].
Table 7.3: Functional values and convergence rates for the 2-d solution without control constraints in the case of k = 0, l = 1 (τ = O(h), h = 2τ) for the control and state variable.

h            ‖e‖_{L²[0,T;H¹₀(Ω)]}  ‖g − g_h‖_{L²[0,T;L²(Ω)]}  J(y, g)
0.002357022  0.0544954             4.74548e-005               5.65672e-006
0.001178511  0.0219039             1.02414e-005               3.64340e-006
0.000589255  0.0107374             2.60774e-006               3.49583e-006
0.000294627  0.0054011             7.16507e-007               3.52582e-006
0.000147313  0.0027120             2.46111e-007               3.53950e-006
Rate         1.0815777             1.8972500000               —
Table 7.4: Functional values and convergence rates for the 2-d solution without control constraints in the case of k = 0, l = 1 (τ = O(h²), h = τ^{1/2}/2.2) for the control and state variable.

h            ‖e‖_{L²[0,T;L²(Ω)]}  ‖e‖_{L²[0,T;H¹₀(Ω)]}  ‖g − g_h‖_{L²[0,T;L²(Ω)]}  J(y, g)
0.002357022  6.28133e-005         0.0544269             4.73965e-005               5.64252e-006
0.001388890  1.30951e-005         0.0218849             1.02321e-005               3.63497e-006
0.000589250  3.27452e-006         0.0108686             2.63420e-006               3.55844e-006
0.000294627  8.19355e-007         0.0054478             7.20667e-007               3.55338e-006
Rate         2.0868133333         1.1068586             2.0131000000               —
Example 3. In this example the control function is constrained to the interval [g_a, g_b], with forces
f_1(t, x_1, x_2) = e^{−3εt−t/(2ε)}(e^{t/(2ε)} sin(πx_2/H)³H² sin(πx_1/L)³L² − εe^{2εt+t/(2ε)}
sin(πx_2/H)H² sin(πx_1/L)L² − e^{2εt+t/(2ε)} sin(πx_2/H)H² sin(πx_1/L)L²
+ e^{3εt} sin(πx_2/H)H² sin(πx_1/L)L² + π²e^{2εt+t/(2ε)} sin(πx_2/H) sin(πx_1/L)L²
− P_{Q_ad}(e^{−3εt} sin(πx_2/H) sin(πx_1/L)(sin(πx_2/H)² sin(πx_1/L)² − εe^{2εt}))
e^{3εt+t/(2ε)}H²L² + π²e^{2εt+t/(2ε)} sin(πx_2/H)H² sin(πx_1/L))/(H²L²),
f_2(t, x_1, x_2) = e^{−εt−t/(2ε)}(2a_1ε²e^{εt} sin(πx_2/H)H² sin(πx_1/L)L²
− e^{εt} sin(πx_2/H)H² sin(πx_1/L)L² − 2ε²e^{t/(2ε)} sin(πx_2/H)H² sin(πx_1/L)L²
+ 2π²δεe^{εt} sin(πx_2/H) sin(πx_1/L)L² − 2εP_{Q_ad}(a_1εe^{−t/(2ε)} sin(πx_2/H) sin(πx_1/L))
e^{εt+t/(2ε)}H²L² + 2π²δεe^{εt} sin(πx_2/H)H² sin(πx_1/L))/(2εH²L²),
with the same target functions and initial conditions as in Example 2, chosen so that the optimal solution pairs (y_1, g_1), (y_2, g_2) of the above problem are given by
y_1(t, x_1, x_2) = e^{−εt} sin(πx_1/L) sin(πx_2/H),
y_2(t, x_1, x_2) = e^{−t/(2ε)} sin(πx_1/L) sin(πx_2/H),
g_1(t, x_1, x_2) = P_{Q_ad}(e^{−3εt−t/(2ε)} sin(πx_2/H) sin(πx_1/L)(e^{t/(2ε)} sin(πx_2/H)² sin(πx_1/L)² − εe^{2εt+t/(2ε)})),
g_2(t, x_1, x_2) = P_{Q_ad}(a_1εe^{−t/(2ε)} sin(πx_2/H) sin(πx_1/L)).
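The only difference from Example 2 is the cutoff: wherever the unconstrained expression leaves Q_ad = [g_a, g_b], the constrained optimal control is clipped to the bound. A small numeric illustration for g_2; the bounds and parameter values below are hypothetical stand-ins (the text does not list them):

```python
import math

eps, a1, L, H = 0.1, 2.0, 1.0, 1.0   # moderate stand-in values
g_a, g_b = -0.05, 0.05               # hypothetical bounds for Q_ad

def g2_free(t, x, y):
    # Unconstrained expression inside P_Qad for g2.
    return (a1 * eps * math.exp(-t/(2*eps))
            * math.sin(math.pi*y/H) * math.sin(math.pi*x/L))

def g2(t, x, y):
    # Constrained control: pointwise cutoff of the free expression.
    return max(g_a, min(g_b, g2_free(t, x, y)))

print(g2(0.1, 0.5, 0.5))   # -> 0.05 (the upper bound is active here)
```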
For this choice of data, the corresponding errors for the state and control variables on different meshes are shown in Tables 7.5 and 7.6.
Table 7.5: Rates of convergence for the 2-d solution with k = 0, l = 1 (τ = O(h), h = τ).

h            ‖e‖_{L²[0,T;H¹₀(Ω)]}  ‖g − g_h‖_{L²[0,T;L²(Ω)]}  J(y, g)
0.002357022  0.0544956             5.26533e-005               5.65673e-006
0.001178511  0.0219040             1.20416e-005               3.64340e-006
0.000589255  0.0107375             3.21396e-006               3.49583e-006
0.000294620  0.0054011             1.06383e-006               3.52583e-006
0.000147310  0.0027120             3.96590e-007               3.53950e-006
Rate         1.0821677             1.7631825000               —
Table 7.6: Rates of convergence for the 2-d solution with k = 0, l = 1 (τ = O(h²), h = τ^{1/2}/2.2).

h            ‖e‖_{L²[0,T;L²(Ω)]}  ‖e‖_{L²[0,T;H¹₀(Ω)]}  ‖g − g_h‖_{L²[0,T;L²(Ω)]}  J(y, g)
0.002357022  6.28160e-005         0.0544271             5.25886e-005               5.64252e-006
0.001388890  1.30974e-005         0.0218850             1.20306e-005               3.63497e-006
0.000589250  3.27688e-006         0.0108686             3.24638e-006               3.55844e-006
0.000294627  8.21734e-007         0.0054478             1.06902e-006               3.55339e-006
Rate         2.0854400000         1.1068586             1.8734666666               —
Example 4. In this example the target functions take very large values, "far away" from the values of the state variables. We note that in this example the control is unconstrained. The forces on the right-hand side are
f_1(t, x_1, x_2) = (π²e^{−εt} sin(πx_2/H) sin(πx_1/L))/H² + e^{−3εt} sin(πx_2/H)³ sin(πx_1/L)³
− εe^{−εt} sin(πx_2/H) sin(πx_1/L) − e^{−εt} sin(πx_2/H) sin(πx_1/L) + e^{−t/(2ε)} sin(πx_2/H) sin(πx_1/L),
f_2(t, x_1, x_2) = (π²δe^{−t/(2ε)} sin(πx_2/H) sin(πx_1/L))/H² − εe^{−εt} sin(πx_2/H) sin(πx_1/L)
+ a_1εe^{−t/(2ε)} sin(πx_2/H) sin(πx_1/L) − (e^{−t/(2ε)} sin(πx_2/H) sin(πx_1/L))/(2ε),
the target functions
y_1d(t, x_1, x_2) = − sin(πx_2/H) sin(πx_1/L) e^{−εT−3εt}(−εH²L²e^{εT+2εt} − 2H²L²e^{εT+2εt} + π²L²e^{εT+2εt}
+ π²H²e^{εT+2εt} + 3 sin(πx_2/H)²H² sin(πx_1/L)²L²e^{εT}
− 3e^{εt} sin(πx_2/H)²H² sin(πx_1/L)²L² + e^{3εt}H²L² − π²e^{3εt}L² − π²e^{3εt}H²)/(H²L²),
y_2d(t, x_1, x_2) = e^{−T/(2ε)−t/(2ε)}/(2εH²L²) [(((2a_1ε² + 2ε + 1) sin(πx_2/H)H²
− 2π²δε sin(πx_2/H)) sin(πx_1/L)L² − 2π²δε sin(πx_2/H)H² sin(πx_1/L))e^{T/(2ε)}
+ (2π²δεe^{t/(2ε)} sin(πx_2/H) − 2a_1ε²e^{t/(2ε)} sin(πx_2/H)H²) sin(πx_1/L)L²
+ 2π²δεe^{t/(2ε)} sin(πx_2/H)H² sin(πx_1/L)],
and the same initial conditions as in Example 2, chosen so that the optimal solution triples (y_1, μ_1, g_1), (y_2, μ_2, g_2) of the above problem are given by
y_1(t, x_1, x_2) = e^{−εt} sin(πx_1/L) sin(πx_2/H),
y_2(t, x_1, x_2) = e^{−t/(2ε)} sin(πx_1/L) sin(πx_2/H),
μ_1(t, x_1, x_2) = (e^{εT} − e^{εt})e^{−εT−εt} sin(πx_2/H) sin(πx_1/L),
μ_2(t, x_1, x_2) = (e^{T/(2ε)} − e^{t/(2ε)})e^{−T/(2ε)−t/(2ε)} sin(πx_2/H) sin(πx_1/L),
g_1(t, x_1, x_2) = π²e^{−εt} sin(πx_2/H) sin(πx_1/L)/L²,
g_2(t, x_1, x_2) = π²δe^{−t/(2ε)} sin(πx_2/H) sin(πx_1/L)/L².
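A finite-difference check analogous to the one for Example 2, again with moderate stand-in parameter values, confirms that the transcribed f_2 of Example 4 is consistent with the second state equation and the stated g_2:

```python
import math

eps, delta, a1, L, H = 0.1, 4.0, 2.0, 1.0, 1.0   # moderate stand-in values
pi, sin, exp = math.pi, math.sin, math.exp

def y1(t, x, y): return exp(-eps*t) * sin(pi*x/L) * sin(pi*y/H)
def y2(t, x, y): return exp(-t/(2*eps)) * sin(pi*x/L) * sin(pi*y/H)
def g2(t, x, y): return pi**2 * delta * exp(-t/(2*eps)) * sin(pi*y/H) * sin(pi*x/L) / L**2

def f2(t, x, y):  # the forcing term f2 as transcribed for Example 4
    s = sin(pi*y/H) * sin(pi*x/L)
    return (pi**2 * delta * exp(-t/(2*eps)) * s / H**2
            - eps * exp(-eps*t) * s
            + a1 * eps * exp(-t/(2*eps)) * s
            - exp(-t/(2*eps)) * s / (2*eps))

# Residual of y2_t - delta*Lap(y2) + eps*a1*y2 = eps*y1 + g2 + f2.
t0, x0, z0, d = 0.3, 0.37, 0.61, 1e-5
v = y2(t0, x0, z0)
y2_t = (y2(t0 + d, x0, z0) - y2(t0 - d, x0, z0)) / (2*d)
lap = ((y2(t0, x0 + d, z0) - 2*v + y2(t0, x0 - d, z0))
       + (y2(t0, x0, z0 + d) - 2*v + y2(t0, x0, z0 - d))) / d**2
residual = (y2_t - delta*lap + eps*a1*v) - (eps*y1(t0, x0, z0) + g2(t0, x0, z0) + f2(t0, x0, z0))
```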
For this choice of data, the corresponding errors for the state and control variables on different meshes are shown in Tables 7.7 and 7.8.
Table 7.7: Rates of convergence for the 2-d solution with k = 0, l = 1 (τ = O(h), h = τ).

h          ‖e_y1‖_{L²[0,T;H¹(Ω)]}  ‖e_y2‖_{L²[0,T;H¹₀(Ω)]}  ‖e_μ1‖_{L²[0,T;H¹(Ω)]}  ‖e_μ2‖_{L²[0,T;H¹(Ω)]}  ‖g − g_h‖_{L²[0,T;L²(Ω)]}
0.0023570  0.07174                 0.0457565                1.64568e-006            0.017634                9.29315
0.0013888  0.02924                 0.0192318                6.74248e-007            0.007384                1.96774
0.0005892  0.01438                 0.0096866                3.32127e-007            0.003687                0.50788
0.0002946  0.00723                 0.0048361                1.62793e-007            0.001809                0.14044
0.0001473  0.00362                 0.0024077                8.07215e-008            0.000890                0.04936
Rate       1.07636                 1.0620475                1.086875                1.07702                 —
Table 7.8: Rates of convergence for the 2-d solution with k = 0, l = 1 (τ = O(h²), h = τ^{1/2}/2.2).

h          ‖e_y1‖_{L²[0,T;H¹(Ω)]}  ‖e_y2‖_{L²[0,T;H¹₀(Ω)]}  ‖e_μ1‖_{L²[0,T;H¹(Ω)]}  ‖e_μ2‖_{L²[0,T;H¹(Ω)]}  ‖g − g_h‖_{L²[0,T;L²(Ω)]}
0.0023570  0.071655                0.044594                 1.97478e-006            0.0225159               9.087240
0.0013888  0.029221                0.019430                 6.69304e-007            0.0074020               1.985270
0.0005892  0.014532                0.009640                 3.21201e-007            0.0035714               0.506377
0.0002946  0.007271                0.004792                 1.58142e-007            0.0017506               0.139421
0.0001473  0.003634                0.002391                 7.92880e-008            0.0008706               0.049084
Rate       1.075285                1.0552                   1.15961                 1.173165                —
Remark 7.1.1. It should be noted that in all examples of this chapter the values of h are smaller than those in the examples of the previous chapters. This is because the experiment takes place at a more microscopic level, namely on a square with edge length 0.01. The time-step values τ are smaller too, since we perform experiments with the choices τ = O(h) and τ = O(h²). This does not affect the number of space-time degrees of freedom in each grid, which is similar to that of the previous chapters, nor the size of the tables that need to be stored in computer memory.
Nevertheless, the expected convergence rates for the errors observed in L²[0, T; H¹(Ω)] are the same as those in the semilinear optimal control problem of Chapter 6. That is because this problem is also a system of equations with a semilinear term; see also the same rates in the work [24] (FitzHugh–Nagumo system without control). However, in the last example, using more extreme target values for the control and carrying out a more detailed study of each variable, we observe much larger errors for the control; it is noteworthy, though, that we again obtain the expected convergence rates, as shown in Tables 7.7 and 7.8.
Remark 7.1.2. Finally, note that, as expected, when comparing the problems with control constraints to the corresponding unconstrained control problems, we obtain similar convergence rates for the state and conjugate variables, but larger values for the control errors as well as for the minimization functional (see similar phenomena and examples for evolutionary Stokes problems with constrained control in Chapter 7).
List of Tables
4.1 Rates of convergence for the two-dimensional solution with k = 0, τ = h²/2, smooth initial data and y_d = 0.5 . . . 215
4.2 Convergence rates for the 2-d solution with k = 0, τ = h²/2, smooth initial data and y_d = 0 . . . 215
4.3 Convergence rates for the 2-d solution with k = 0, τ = h²/2, smooth initial data and y_d = 0.5 cos(πx_1) cos(πx_2) . . . 216
4.4 Convergence rates for the 2-d solution with k = 0, τ = h²/2 and nonsmooth initial data . . . 219
4.5 Convergence rates for the 2-d solution with k = 1, l = 1, τ = O(h^{3/4}), smooth initial data and y_d = 0 . . . 220
4.6 Convergence rates for the 2-d solution with k = 1, l = 1, τ = O(h^{1/2}), smooth initial data and y_d = 0 . . . 220
5.1 Convergence rates for the 2-d solution in the case of k = 0, l = 1 (h = τ) . . . 227
5.2 Convergence rates for the 2-d solution in the case of k = 0, l = 1 (h² = τ) . . . 228
5.3 Convergence rate for the 2-d problem with k = 0, l = 1 (h² = τ) . . . 228
6.1 Convergence rates with k = 0 and τ = h²/8 . . . 231
6.2 Convergence rates with k = 1 and τ = h/16 . . . 232
6.3 Convergence rates with k = 1 and τ = h^{3/2}/10 . . . 232
6.4 Convergence rate with k = 1 and τ = h²/8 . . . 232
6.5 Convergence rates with k = 0 and τ = h²/8, with discontinuity in the initial data and in the target function . . . 234
6.6 Convergence rates with k = 0 and τ = h²/8, with discontinuity in the initial data and in the target function, and weak control constraints . . . 235
6.7 Convergence rate with k = 0 and τ = h²/8, with discontinuity in the initial data and strict control constraints . . . 235
7.1 Convergence rates for the 2-d solution with control constraints in the case of k = 0, l = 1 (τ = O(h)) for the control, state and conjugate variables . . . 245
7.2 Convergence rates for the 2-d solution with control constraints in the case of k = 0, l = 1 (τ = O(h²)) for the control and conjugate variable . . . 245
7.3 Functional values and convergence rates for the 2-d solution without control constraints in the case of k = 0, l = 1 (τ = O(h)) for the control and state variable . . . 246
7.4 Functional values and convergence rates for the 2-d solution without control constraints in the case of k = 0, l = 1 (τ = O(h²)) for the control and state variable . . . 246
7.5 Rates of convergence for the 2-d solution with k = 0, l = 1 (τ = O(h)) . . . 247
7.6 Rates of convergence for the 2-d solution with k = 0, l = 1 (τ = O(h²)) . . . 247
7.7 Rates of convergence for the 2-d solution with k = 0, l = 1 (τ = O(h)) . . . 248
7.8 Rates of convergence for the 2-d solution with k = 0, l = 1 (τ = O(h²)) . . . 248
List of Figures
4.1 Errors for the state and control variables for τ = h²/2 . . . 216
4.2 Norm of the control function ‖g(t)‖_{L²(Ω)} . . . 216
4.3 Distance from the target ‖y(t) − y_d(t)‖_{L²(Ω)}: a) smooth data, b) nonsmooth data - discontinuity . . . 217
4.4 Effect on the control ‖g(t)‖_{L²(Ω)} as the regularization parameter α varies, with fixed mesh 48×48 . . . 217
4.5 Effect on the distance ‖y(t) − y_d(t)‖_{L²(Ω)} between numerical solution and target function as α varies . . . 218
4.6 Instance of the state variable . . . 222
4.7 Instance of the conjugate variable . . . 222
6.1 State variable snapshot on mesh 12×12 and smooth initial data . . . 236
6.2 State variable snapshot on mesh 24×24 and smooth initial data . . . 236
6.3 State variable snapshot for rough initial data as the algorithm starts . . . 237
6.4 State variable snapshot for rough initial data as the algorithm finishes . . . 237
6.5 Conjugate variable snapshot for rough initial data . . . 238
A Appendix
Contents
Appendix 1: Projection results
Appendix 2: Exponential interpolant
Appendix 3: Discrete characteristic