An Inverse Source Problem for a One-dimensional
Wave Equation: An Observer-Based Approach
Thesis by
Sharefa Mohammad Asiri
In Partial Fulfillment of the Requirements
For the Degree of
Masters of Science
King Abdullah University of Science and Technology, Thuwal,
Kingdom of Saudi Arabia
May, 2013
The thesis of Sharefa Mohammad Asiri is approved by the examination committee
Thus, we get exactly the same presentation form as in (3.8).
It appears from the previous study that both continuous and discrete systems have the same structure for the matrices A, B, and C, but with different coefficients ai. If the coefficients are functions of time, the system (continuous or discrete) is called a time-varying system; otherwise, it is called a time-invariant system.
One of the most important properties of a system is stability. Stability concerns the behavior of the state vector relative to an equilibrium state. There are many definitions of stability, such as uniform stability, asymptotic stability, exponential stability, and bounded-input bounded-output stability [44]. However, if the system is time-invariant and written in state-space representation, then stability can be studied easily through the eigenvalues of the state matrix. A continuous time-invariant system is stable if the eigenvalues of A lie in the open left half-plane, while a discrete time-invariant system is stable if the eigenvalues of A lie inside the unit circle (see Figure 3.1). If the system is not stable, it should be stabilized; otherwise, human losses, financial losses, and other losses may occur. The ability to stabilize a system depends on some conditions; the most prominent of these conditions are controllability and its dual notion, observability.
Figure 3.1: The stability region of continuous linear time-invariant systems is on the left, and the stability region of discrete linear time-invariant systems is on the right.
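These two eigenvalue tests can be sketched numerically. The following helpers (Python/NumPy, illustrative rather than thesis code) check each stability region:

```python
import numpy as np

# Illustrative eigenvalue tests for the two stability regions in Figure 3.1.
def is_stable_continuous(A):
    # continuous-time LTI: all eigenvalues in the open left half-plane
    return bool(np.all(np.linalg.eigvals(A).real < 0))

def is_stable_discrete(A):
    # discrete-time LTI: all eigenvalues strictly inside the unit circle
    return bool(np.all(np.abs(np.linalg.eigvals(A)) < 1))
```

For example, a matrix with eigenvalues {−1, −2} passes the continuous test, while one with an eigenvalue at 1.1 fails the discrete test.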
In general, only a few measurements (system outputs) are available, yet we often need to know the states, for control purposes for example. The hidden states therefore have to be estimated, under some conditions, using for instance an observer. The next section presents the observability of a system.
3.2 Observability
Observability is a structural property of a system; it refers to the ability to reconstruct the state vector from the system outputs. In other words, it is the possibility of determining the behavior of the state from some measurements. The next definition defines observability for linear systems [44].
Definition 4. A linear system is observable at t0 ∈ T if it is possible to determine ξ(t0) from the output z[t0,t1], where t1 is a finite time in T. If this condition is satisfied for all t0 and ξ(t0), then the system is completely observable.
For linear time-invariant systems, we have the following theorem [44].
Theorem 2. A linear time-invariant state-space system is completely observable if and only if the observability matrix W has full rank, i.e., rank(W) = n, where

W = [C; CA; CA²; … ; CA^{n−1}],    (3.13)
and n is the dimension of the state matrix.
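Theorem 2 translates directly into a numerical rank test. A minimal sketch (Python/NumPy; the function names are illustrative):

```python
import numpy as np

# Sketch of the rank test in Theorem 2.
def observability_matrix(A, C):
    n = A.shape[0]
    blocks = [C]
    for _ in range(n - 1):
        blocks.append(blocks[-1] @ A)   # C, CA, CA^2, ..., CA^(n-1)
    return np.vstack(blocks)

def is_completely_observable(A, C):
    return np.linalg.matrix_rank(observability_matrix(A, C)) == A.shape[0]
```

For instance, a discrete double integrator measured through its first state is completely observable, while the same system measured through its second state alone is not.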
3.3 Observer
An observer is a dynamical system used to estimate the state, or part of the state, of an observable dynamical system using the available input and output measurements (see Figure 3.2). The concept was defined by Luenberger many decades ago [45], [15]. The Luenberger observer is well known for state estimation in linear dynamical systems.
Many other kinds of observers have been proposed to deal with specific and more realistic situations. They can be classified into adaptive observers for the joint estimation of states and parameters [46], [47], [48]; robust observers against perturbations, such as sliding-mode observers [49], [50], [51]; and optimal observers, such as the Kalman filter [52], [53].
Figure 3.2: Observer principle
This thesis focuses on discrete linear time-invariant (LTI) systems of the form

ξ(k + 1) = Aξ(k) + Bν(k),
z(k) = Cξ(k),    (3.14)

where the matrix D is set to zero, as is the case in many physical systems. To explain the basic idea behind the observer, we propose, as an example, the following observer for (3.14):

ξ̂(k + 1) = Aξ̂(k) + Bν(k) + L(z(k) − ẑ(k)),
ẑ(k) = Cξ̂(k),    (3.15)

where L is the observer gain matrix, which will be determined to ensure the convergence of the estimation error to zero.
If the observer error is defined as e(k) = ξ(k) − ξ̂(k), then the error dynamics of (3.15) can be written as

e(k + 1) = (A − LC)e(k).    (3.16)

To ensure the convergence of the error to zero, the matrix (A − LC) must be Schur stable, which means that its eigenvalues must lie inside the unit circle. Therefore, the observer gain matrix L should be chosen appropriately to obtain a stable error system. In other words, L is chosen such that the dynamics of the observer are much faster than those of the system itself; in this case the error converges exponentially to zero.
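The error dynamics (3.16) can be illustrated with a toy simulation (Python/NumPy; the matrices here are invented for illustration, not taken from the thesis):

```python
import numpy as np

# Toy simulation of the observer (3.15) on an invented 2-state system.
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
L = np.array([[0.5], [0.1]])  # gain chosen so eig(A - L C) lie inside the unit circle

assert np.all(np.abs(np.linalg.eigvals(A - L @ C)) < 1)

xi = np.array([[1.0], [-1.0]])   # true (hidden) state
xi_hat = np.zeros((2, 1))        # observer state, started from a wrong guess
for k in range(300):
    nu = np.array([[np.sin(0.1 * k)]])        # some known input
    z = C @ xi                                # measured output
    xi_hat = A @ xi_hat + B @ nu + L @ (z - C @ xi_hat)
    xi = A @ xi + B @ nu
print(np.linalg.norm(xi - xi_hat))  # estimation error, near zero
```

Because both systems receive the same input, the error obeys e(k+1) = (A − LC)e(k) exactly and decays at the rate of the slowest eigenvalue of A − LC.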
The observer gain matrix can be obtained by pole placement [44]. This method consists in choosing the matrix L such that the system remains stable, i.e., the eigenvalues of the matrix (A − LC) have magnitudes strictly less than one for this discrete system. To obtain this L, we first fix the desired eigenvalues of (A − LC), say {λ1, λ2, · · · , λn}; then we solve the problem of determining the coefficients of the
Figure 4.4: The regularization parameter selected through the L-curve, GCV, and NCP.
Figure 4.5: The exact source f and the estimated source f̂ after Tikhonov regularization (left), where α was chosen using the discrepancy principle of Morozov; the relative error is on the right.
Figure 4.6: The exact source f and the estimated source f̂ after Tikhonov regularization (left), where α was chosen using the L-curve; the relative error is on the right.
Figure 4.7: The exact source f and the estimated source f̂ after Tikhonov regularization (left), where α was chosen using GCV; the relative error is on the right.
Figure 4.8: The exact source f and the estimated source f̂ after Tikhonov regularization (left), where α was chosen using NCP; the relative error is on the right.
where f at these points is zero. Table 4.1 presents the absolute errors and the mean squared error (MSE).
The previous figures and table illustrate that the different approaches for selecting the regularization parameter α give different regularized solutions. One can notice the ability of the Tikhonov method to regularize the solution when α is chosen appropriately.
4.4 Chapter Summary
In this chapter we focused on solving the inverse source problem for a one-dimensional wave equation using the Tikhonov regularization method. The operator of this inverse problem was obtained through the solution of the direct problem. Tikhonov regularization was applied with different regularization parameters, obtained using the discrepancy principle, the L-curve, GCV, and NCP. In the next chapter, the same inverse problem is solved using an observer-based approach.
Chapter 5
An Observer to Solve an Inverse Source Problem for the Wave Equation
In this chapter, we propose to apply an observer to a one-dimensional wave equation in order to estimate the source. For this purpose, we first write the one-dimensional wave equation (4.1) in a state-space representation. Then, the system is discretized in both space and time. Finally, the observer design is presented. Using this method, both the state and the source are estimated. We then compare the results obtained with the observer to those of an original Tikhonov approach.
5.1 Problem Statement
5.1.1 A State-Space Representation for the Wave Equation
Consider the IBVP of the one-dimensional wave equation as in (4.1):

utt(x, t) − c²uxx(x, t) = f(x),
u(0, t) = 0, u(l, t) = 0,
u(x, 0) = r1(x), ut(x, 0) = r2(x),    (5.1)
Our aim is to solve the inverse source problem of (5.1) using an adaptive observer with partial measurements of the solution u available. We first propose to rewrite (5.1) in an appropriate form by introducing two auxiliary variables v(x, t) = u(x, t) and w(x, t) = ut(x, t), and letting

ξ(x, t) = [v(x, t), w(x, t)]ᵀ.    (5.2)
Therefore, (5.1) can be written as follows:

∂ξ(x, t)/∂t = Aξ(x, t) + F,
v(0, t) = 0, v(l, t) = 0,
v(x, 0) = r1(x), vt(x, 0) = r2(x),
z = Hξ(x, t),    (5.3)
where the operator A is given by

A = [0, I; c² ∂²/∂x², 0],  F = [0, f]ᵀ,

z is the output, and H is the observation operator such that H = [H 0], where H is a restriction operator on the measured domain.
5.1.2 Discretization
There are three well-known discretization methods: the finite difference method (FDM), the finite element method (FEM), and the finite volume method (FVM). For simplicity and validation, we propose to apply the FDM to discretize system (5.3).
Discretizing system (5.3) using the implicit Euler scheme in time and central finite differences in space gives:

(v_i^{j+1} − v_i^j)/∆t = w_i^{j+1},
(w_i^{j+1} − w_i^j)/∆t = (c²/(∆x)²)(v_{i−1}^j − 2v_i^j + v_{i+1}^j) + f_i^j,
v_1^j = 0, v_{Nx}^j = 0,
v_i^1 = r1(x_i), v_i^2 = ∆t r2(x_i) + v_i^1,
i = 1, 2, · · · , Nx, j = 1, 2, · · · , Nt,    (5.4)
where ∆x refers to the space step, ∆t refers to the time step, Nx is the space grid
size, and Nt is the time grid size. Simplifying the first two parts in (5.4) leads to:
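One time step of the scheme (5.4) can be sketched as follows (Python/NumPy; a sketch under the assumption that w is advanced first and v then uses the new w, with all names illustrative):

```python
import numpy as np

# One time step of the scheme (5.4): advance w using the current v, then
# advance v with the new w; homogeneous Dirichlet boundaries are held at zero.
def step(v, w, f, c, dx, dt):
    w_new = w.copy()
    # interior nodes only
    w_new[1:-1] = w[1:-1] + dt * (
        c**2 / dx**2 * (v[:-2] - 2 * v[1:-1] + v[2:]) + f[1:-1]
    )
    v_new = v + dt * w_new
    v_new[0] = 0.0
    v_new[-1] = 0.0
    return v_new, w_new
```

Stepping this repeatedly from the initial data r1, r2 gives a direct solver of the same flavor as the one described above.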
where L is the observer gain matrix of dimension 2Nx × m, ξ̂^j and f̂^j are the state and source estimates respectively, Υ^j is a matrix sequence obtained by linearly filtering B, and Σ is a bounded diagonal positive definite matrix satisfying Assumption 1, as in [47]:
Assumption 1. The diagonal positive definite matrix Σ satisfies:
1. ‖HΥ^j Σ^{1/2}‖₂ ≤ 1.
2. (1/κ) ∑_{i=j}^{j+κ−1} ΣΥ^{iT}H^T HΥ^i ≥ βI for some constant β > 0, some integer κ > 0, and all j.
Proposition 3. Observer (5.8) is a global exponential adaptive observer for discrete finite-dimensional systems, i.e., the state estimation error ξ^j − ξ̂^j and the source estimation error f^j − f̂^j converge to zero exponentially fast as j tends to infinity [47].
Proof. Let e_ξ^j = ξ^j − ξ̂^j be the state error and e_f^j = f^j − f̂^j be the source error; thus

e_ξ^{j+1} = Gξ^j + Bf^j + b − Gξ̂^j − Bf̂^j − b − L(Hξ^j − Hξ̂^j) − Υ^{j+1}(f̂^{j+1} − f̂^j)
          = (G − LH)e_ξ^j + Be_f^j + Υ^{j+1}(e_f^{j+1} − e_f^j),   since f^{j+1} = f^j from (5.6c).

Therefore,

e_ξ^{j+1} = (G − LH)e_ξ^j + Be_f^j + Υ^{j+1}(e_f^{j+1} − e_f^j).    (5.9)
The key step of the proof is to define a linear combined error sequence; let

η^j = e_ξ^j − Υ^j e_f^j.    (5.10)

Now compute the dynamics of η^j:

η^{j+1} = e_ξ^{j+1} − Υ^{j+1} e_f^{j+1}
        = (G − LH)e_ξ^j + Be_f^j + Υ^{j+1}(e_f^{j+1} − e_f^j) − Υ^{j+1}e_f^{j+1}
        = (G − LH)e_ξ^j − (G − LH)Υ^j e_f^j    by using (5.8b)
        = (G − LH)(e_ξ^j − Υ^j e_f^j).

Thus,

η^{j+1} = (G − LH)η^j.    (5.11)
Since the eigenvalues of G − LH are inside the unit circle, the sequence η^j tends to zero exponentially fast. Now, the error dynamics of the source are:

e_f^{j+1} = f^{j+1} − f̂^{j+1}
          = f^{j+1} − f̂^j − ΣΥ^{jT}H^T(Hξ^j − Hξ̂^j).

But f^{j+1} = f^j from (5.6c); thus,

e_f^{j+1} = e_f^j − ΣΥ^{jT}H^T H e_ξ^j.

By substituting from (5.10),

e_f^{j+1} = [I − ΣΥ^{jT}H^T HΥ^j]e_f^j − ΣΥ^{jT}H^T H η^j.    (5.12)
First, let us study the first term, i.e.,

e_f^{j+1} = [I − Σ(HΥ^j)^T HΥ^j]e_f^j.    (5.13)

Because the two conditions in Assumption 1 are satisfied, and by using Lemma 2 in Appendix A, (5.13) is exponentially stable. In addition, because Σ, H, and Υ^j are bounded and the sequence η^j tends to zero exponentially fast, the second term in (5.12), ΣΥ^{jT}H^T H η^j, goes to zero exponentially fast; therefore, (5.12) goes to zero exponentially fast (by using Lemma 1 in Appendix A). Ultimately, the state error e_ξ^j = η^j + Υ^j e_f^j also converges to zero exponentially fast.
To achieve the convergence and the stability of this observer, some parameters such as ∆x, ∆t, and c should be tuned precisely [54].
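The adaptive observer described above can be sketched on a small toy LTI system (Python/NumPy; all matrices below are illustrative choices for a sketch, not the wave-equation discretization from the thesis):

```python
import numpy as np

# Minimal sketch of the adaptive observer (5.8) on a toy discrete system
# xi(j+1) = G xi(j) + B f + b with a constant unknown source f.
rng = np.random.default_rng(0)
n, m = 3, 2                          # state / output dimensions
G = 0.5 * np.eye(n)                  # toy state matrix
B = np.ones((n, 1))
b = np.zeros((n, 1))
H = np.eye(m, n)                     # measure the first two components
L = 0.3 * np.eye(n, m)               # gain: eig(G - L H) inside the unit circle
Sigma = np.array([[0.1]])            # small positive definite step matrix

assert np.all(np.abs(np.linalg.eigvals(G - L @ H)) < 1)

f_true = np.array([[2.0]])           # constant unknown source
xi = rng.standard_normal((n, 1))     # true state
xi_hat = np.zeros((n, 1))            # state estimate
f_hat = np.zeros((1, 1))             # source estimate
Ups = np.zeros((n, 1))               # the filtered sequence Upsilon

for j in range(400):
    z, z_hat = H @ xi, H @ xi_hat
    Ups_next = (G - L @ H) @ Ups + B                       # Upsilon filter update
    f_next = f_hat + Sigma @ Ups.T @ H.T @ (z - z_hat)     # source update
    xi_hat = (G @ xi_hat + B @ f_hat + b + L @ (z - z_hat)
              + Ups_next @ (f_next - f_hat))               # state update
    xi = G @ xi + B @ f_true + b                           # true system
    f_hat, Ups = f_next, Ups_next

print(f_hat.item())  # close to the true source value 2.0
```

With these toy values, Σ satisfies both conditions of Assumption 1 at the steady value of Υ, and both the state and the source estimates converge exponentially, as Proposition 3 states.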
5.3 Numerical Simulations
5.3.1 Preliminary
First of all, we explain the conditions that appeared during the numerical simulation work, starting from the discretization. Our system is based on a discrete version of the wave equation. During the simulation work, we found that the results are related to five connected conditions: the Courant–Friedrichs–Lewy (CFL) condition, the number of measurements, the observability condition, the condition number of the observability matrix, and the observer gain matrix. We now give a short description of each condition.
1. CFL condition:
The CFL condition is one of the necessary conditions to guarantee the stability of the chosen scheme, especially for hyperbolic PDEs (e.g., the wave equation). In this condition, the time and space steps are chosen appropriately such that

c∆t/∆x ≤ S,

where c is the wave speed and S is a scheme-dependent constant.
2. Observability condition:
Only the observable states can be estimated with observers, so the observability condition should be satisfied. This condition is affected by the discretization scheme; moreover, it is affected by ∆t, ∆x, and the wave speed c.
3. Number of measurements:
Obviously, increasing the number of measurements means increasing the information about the state, thus ensuring the observability condition for all the states. However, for some applications, only a few measurements may be available, and the idea is to study the effect of this number on the convergence of the observer in order to find the minimum number of measurements that ensures the reconstruction of the source.
4. Condition number of the observability matrix:
While the rank of the observability matrix indicates whether the system is theoretically observable or not, its condition number measures the degree of observability. In other words, it gives information on the potential difficulty of tuning the observer parameters. Indeed, a high condition number means that the system is nearly unobservable, which makes it difficult to choose an adequate observer gain [55]. Moreover, as is the case for matrices in general, the condition number of the observability matrix is affected by the size of this matrix and its kind [56]. For a distributed system such as ours, the condition number is generally high. This condition is affected by the discretization scheme and the chosen parameters ∆t, ∆x, and c. In our work, we put 10⁴ as an upper bound on the condition number of the observability matrix. In other words, if the condition number exceeds 10⁴, we tune ∆t, ∆x, c, or the number of measurements until we achieve an acceptable result.
5. Observer gain matrix:
Once the full-rank condition of the observability matrix is satisfied and its condition number is acceptable, we need to find the observer gain matrix L such that the system (5.6) remains stable. Finding this L is not an easy task for such a distributed system. In general, the pole placement method is used to find the observer gain matrix (see Section 3.3). However, in our case, pole placement often fails due to the size of the state matrix and the restricted number of measurements. Thus, we construct L such that it has a structure similar to that of the state matrix G, which is sparse, while the eigenvalues of (G − LH) are inside the unit circle. In this way, the degrees of freedom in choosing the coefficients of L are reduced.
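Conditions 1 and 4 above can be checked numerically before designing L. A sketch (Python/NumPy; the toy system, `S`, and the thresholds mirror the text but are otherwise illustrative assumptions):

```python
import numpy as np

# Illustrative checks for the CFL restriction and the observability
# matrix condition number on a toy system.
def cfl_ok(c, dt, dx, S=1.0):
    # CFL-type restriction linking the wave speed and the grid steps
    return c * dt / dx <= S

def observability_report(G, H):
    # rank and condition number of the observability matrix of (G, H)
    n = G.shape[0]
    blocks = [H]
    for _ in range(n - 1):
        blocks.append(blocks[-1] @ G)
    W = np.vstack(blocks)
    return np.linalg.matrix_rank(W), np.linalg.cond(W)

rank, cond = observability_report(np.diag([0.5, 0.9]), np.array([[1.0, 1.0]]))
# full rank and cond below the 1e4 bound -> proceed to design L
```

In the workflow described above, one would retune ∆t, ∆x, c, or the number of measurements whenever the reported condition number exceeds the chosen bound.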
For the numerical simulations, we used Matlab (2012), and the parameters were set as follows. For the space, ∆x = 0.01 and l = 2. For the time, ∆t = 0.01 and T = 100. Thus, Nx = 201 and Nt = 10001. The velocity is chosen such that c² = 0.9, and the source is f(x) = 3 sin(5x). The matrix Σ is chosen to improve the estimation accuracy while satisfying the two conditions in Assumption 1. The initial guess for the estimated source is f̂(x) = 0, and the initial state estimate is chosen to be ξ̂ = 0.
The exact source f(x) = 3 sin(5x) and the state ξ of (5.6) are presented in Figure 5.1 and Figure 5.2.
Figure 5.1: The exact source f(x) = 3 sin(5x).
Figure 5.2: The state ξ of the one-dimensional wave equation with c² = 0.9, f(x) = 3 sin(5x), and zero boundary and initial conditions.
We discuss two cases in this section: first, the noise-free case; second, the case where there is some noise in the state and the measurements. The next part gives the simulation results in the noise-free case.
5.3.2 Noise-Free Case
Full measurements
In the case of full measurements, the observation operator has the form H = [I 0]. Figure 5.3.a shows the efficiency of this observer in estimating the source, and Figure 5.3.b displays the relative error; the maximum relative error is 6.14 × 10⁻¹¹%, and the mean squared error (MSE) is 1.8168 × 10⁻¹⁴.
Remark: It can be seen from Figure 5.3.b that a few points have larger relative errors than the others; this happens because of the division by f in the relative error at points where f is zero.
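The remark above can be reproduced with a small sketch (Python/NumPy; the masking tolerance and the synthetic estimate are illustrative assumptions): the pointwise relative error is undefined wherever f(x) = 0, so such points are masked out, while the MSE remains well defined everywhere.

```python
import numpy as np

# Relative error blows up near zeros of f; mask them out or use the MSE.
x = np.linspace(0, 2, 201)
f = 3 * np.sin(5 * x)
f_hat = f + 1e-6                        # stand-in for an estimated source
rel = np.full_like(f, np.nan)
mask = np.abs(f) > 1e-3                 # keep points away from zeros of f
rel[mask] = (f_hat[mask] - f[mask]) / f[mask] * 100
mse = np.mean((f_hat - f) ** 2)         # MSE is well defined everywhere
```
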
Figure 5.3: (a): the exact source f (blue) and the estimated source f̂ (black) using full measurements. (b): the relative error of the source estimation in %.
Moreover, the efficiency of this observer also appears in the state estimation; it estimated the state with a relative error of 0.044%; see Figure 5.4.
Figure 5.4: State error in the noise-free case with full measurements; (a): the state error ξ − ξ̂. (b): the state relative error in %. (c): the state error, in %, after removing the initial phase. (d): the state relative error after removing the outliers, most of which are concentrated in the initial phase.
Figure 5.5 displays the estimated source starting from the initial guess f̂^{j=1}, then f̂^{j=20}, and finally the final estimated source f̂^{j=Nt}.
Figure 5.5: The estimated source at different time steps, starting from the initial guess.
Partial measurements
In the partial measurements case, Hm = H[m0,mf], where [m0,mf] is the observation interval (see Section 5.1.2). Figure 5.6 and Figure 5.8 display the estimated source and the state error, respectively, where only 50% of the state components are taken from the middle. The MSE of the source estimation is 0.3354.
Figure 5.6: (a): the exact source f (blue) and the estimated source f̂ (black) using partial measurements (50% of the state components taken from the middle). (b): the relative error of the source estimation in %.
Figure 5.7: Zoom-in of the relative error in Figure 5.6.b.
As appears in Figure 5.6.b, some points have huge errors. This again happens because of the division by f in the computation of the relative errors at points where f is zero. The actual error can be seen in Figure 5.7.
Figure 5.8: State error in the noise-free case with partial measurements (50% of the state components taken from the middle); (a): the state error ξ − ξ̂. (b): the state relative error in %. (c): the state error, in %, after removing the initial phase. (d): the state relative error after removing the outliers, most of which are concentrated in the initial phase.
In practice, however, the measurements are more accessible on the boundary. For that purpose, we took some measurements from the end, 75% of the state components. Figure 5.9 and Figure 5.11 display the estimated source and the state error, respectively. In addition, the MSE of the source estimation is 0.2096. Figure 5.10 presents the relative error excluding the points that have huge relative errors due to division by zero. In both cases, partial measurements in the middle or at the end, the observer displays good performance.
Figure 5.9: (a): the exact source f (blue) and the estimated source f̂ (black) using the observer with partial measurements (50% of the state components taken from the end). (b): the relative error of the source estimation in %.
Figure 5.10: Zoom-in of the relative error in Figure 5.9.b.
Figure 5.11: State error in the noise-free case with partial measurements (75% of the state components taken from the end); (a): the state error ξ − ξ̂. (b): the state relative error in %. (c): the state error, in %, after removing the initial phase. (d): the state relative error after removing the outliers, most of which are concentrated in the initial phase.
5.3.3 Noise-Corrupted Case
White Gaussian random noise with zero mean was added to the states and to the measurements with standard deviations σξ = 0.007816 (SNRξ = 30 dB) and σz = 0.01044 (SNRz = 20 dB), respectively. The relation between σ and SNR can be seen in (4.23). The effect of the noise on the state and the measurements can be seen in Figure 5.12.
Figure 5.12: (a): the state ξ after adding white noise with standard deviation σξ = 0.0078. (b): the output z after adding white noise with standard deviation σz = 0.0104.
Full measurements
The estimated source in the noisy case using full measurements can be seen in Figure 5.13.a, its corresponding relative error is in Figure 5.13.b, and the MSE is 0.28655. The state error in this case is shown in Figure 5.15.
Figure 5.13: (a): the exact source f (blue) and the estimated source f̂ (black) using the observer with full measurements. (b): the relative error of the source estimation in %.
Figure 5.14: Zoom-in of the relative error in Figure 5.13.b.
Figure 5.15: State error in the noisy case with full measurements; (a): the state error ξ − ξ̂. (b): the state relative error in %. (c): the state error, in %, after removing the initial phase. (d): the state relative error after removing the outliers, most of which are concentrated in the initial phase.
Partial measurements
In the noisy case with partial measurements, the estimated source, its corresponding relative error, and the error in the state are displayed in Figure 5.16.a, Figure 5.16.b, and Figure 5.18, respectively, where the measurements are taken from the middle. The MSE of the source estimation is 0.4014.
Figure 5.16: (a): the exact source f (blue) and the estimated source f̂ (black) using partial measurements (50% of the state components taken from the middle). (b): the relative error of the source estimation in %.
Figure 5.17: Zoom-in of the relative error in Figure 5.16.b.
Figure 5.18: State error in the noisy case with partial measurements (50% of the state components taken from the middle); (a): the state error ξ − ξ̂. (b): the state relative error in %. (c): the state error, in %, after removing the initial phase. (d): the state relative error after removing the outliers, most of which are concentrated in the initial phase.
For the practical case where the measurements are taken from the end, Figure 5.19.a and Figure 5.19.b display the estimated source and its corresponding relative error, respectively; the MSE is 0.3213, while the error in the state is shown in Figure 5.20.
Figure 5.19: (a): the exact source f (blue) and the estimated source f̂ (black) using partial measurements (50% of the state components taken from the end). (b): the relative error of the source estimation in %.
Figure 5.20: State error in the noisy case with partial measurements (75% of the state components taken from the end); (a): the state error ξ − ξ̂. (b): the state relative error in %. (c): the state error, in %, after removing the initial phase. (d): the state relative error after removing the outliers, most of which are concentrated in the initial phase.
Figure 5.21: Zoom-in of the relative error in Figure 5.19.b.
5.4 Comparison Between Observer and Tikhonov
To assess the observer's performance in the source estimation, we need to compare it with a standard method such as Tikhonov regularization. However, Tikhonov regularization does not give a recursive (sequential) estimate of the source f. Therefore, we propose a new Tikhonov approach based on a Hankel matrix in order to estimate the unknown recursively. The general idea is to derive the state and output at time k + p from the state at time k and the input sequence, using the state-space matrices, where p ≥ 0 is a constant; we thus get a new state-space representation in which the transmission matrix is a Hankel matrix [57].
As was presented before, a linear time-invariant discrete system can be written as:

ξ(k + 1) = Aξ(k) + Bu(k),
z(k) = Cξ(k) + Du(k).    (5.14)
Thus, by repeated substitution, one can get, for some p ≥ 0,

ξ(k + p) = A^p ξ(k) + C_p u_p(k),
z_p(k) = O ξ(k) + τ u_p(k),    (5.15)
where

u_p(k) = [u(k); u(k + 1); … ; u(k + p − 1)],    (5.16)
z_p(k) = [z(k); z(k + 1); … ; z(k + p − 1)],    (5.17)
C_p = [A^{p−1}B · · · AB B],    (5.18)
O = [C; CA; … ; CA^{p−1}],    (5.19)

and τ is the block lower-triangular Toeplitz matrix

τ =
[ D          0      0    · · ·   0
  CB         D      0    · · ·   0
  CAB        CB     D    · · ·   0
  ⋮                        ⋱    ⋮
  CA^{p−2}B  · · ·  CAB   CB     D ].    (5.20)
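The lifted matrices (5.16)–(5.20) can be assembled and sanity-checked against a direct simulation on a toy system (Python/NumPy; the system matrices below are illustrative):

```python
import numpy as np

# Build O, C_p and tau for (A, B, C, D) and verify
# z_p(k) = O xi(k) + tau u_p(k) by simulating the system directly.
def lift(A, B, C, D, p):
    q, r = C.shape[0], B.shape[1]
    O = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(p)])
    Cp = np.hstack([np.linalg.matrix_power(A, p - 1 - i) @ B for i in range(p)])
    tau = np.zeros((p * q, p * r))
    for i in range(p):
        tau[i*q:(i+1)*q, i*r:(i+1)*r] = D          # diagonal blocks
        for j in range(i):
            blk = C @ np.linalg.matrix_power(A, i - j - 1) @ B
            tau[i*q:(i+1)*q, j*r:(j+1)*r] = blk    # Markov parameters
    return O, Cp, tau

A = np.array([[0.5, 0.1], [0.0, 0.3]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])
p = 3
O, Cp, tau = lift(A, B, C, D, p)

xi0 = np.array([[1.0], [-1.0]])
u = [np.array([[1.0]]), np.array([[2.0]]), np.array([[3.0]])]
xi, zs = xi0, []
for k in range(p):
    zs.append(C @ xi + D @ u[k])
    xi = A @ xi + B @ u[k]
zp, up = np.vstack(zs), np.vstack(u)
assert np.allclose(zp, O @ xi0 + tau @ up)
assert np.allclose(xi, np.linalg.matrix_power(A, p) @ xi0 + Cp @ up)
```
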
In light of that, the system (5.6) can be written as:

ξ^{j+p} = G^p ξ^j + C_p f_p^j + b̄,
z_p^j = O ξ^j + τ f_p^j,    (5.21)

where b̄ = 1_p ⊗ b and ⊗ is the Kronecker product. Thus, from the second equations in (5.6) and (5.21), a new measurement can be defined as

z̄^j = τ f^j,    (5.22)

where z̄^j = z^j − Oξ^{j=1}.
The aim is to estimate the source f by minimizing the following cost function, in which a Tikhonov regularization term is used:

J_α(f) = (1/2)‖τf^j − z̄^j‖₂² + (α/2)‖f^j‖₂².    (5.23)

Differentiating (5.23) and setting the derivative to zero, one gets

(τ*τ + αI)f^j = τ*z̄^j.    (5.24)

Thus,

f^j = R_α z̄^j,    (5.25)

where the Tikhonov operator R_α = (τ*τ + αI)⁻¹τ*.
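For a real τ, the regularized solve (5.24)–(5.25) amounts to one symmetric linear system per choice of α. A sketch (Python/NumPy, illustrative):

```python
import numpy as np

# Tikhonov-regularized solve of tau f = zbar via the normal equations.
def tikhonov_solve(tau, zbar, alpha):
    n = tau.shape[1]
    return np.linalg.solve(tau.T @ tau + alpha * np.eye(n), tau.T @ zbar)
```

With α = 0 this reduces to the least-squares solution; increasing α shrinks the estimate toward zero, which is the trade-off the parameter-selection rules (L-curve, GCV, NCP) try to balance.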
5.4.1 Numerical Simulations
If the space step ∆x, the time step ∆t, and the final time T are used as in Section 5.3, the size of the Hankel matrix τ becomes prohibitively large and cannot be handled in Matlab. Therefore, we set the space step, time step, and final time to ∆x = 0.1, ∆t = 0.05, and T = 2, respectively. In this section, we discuss the two cases, noise-free and noise-corrupted, as studied in Section 5.3. In addition, in each case we discuss full measurements, partial measurements in the middle, and partial measurements at the end. For both partial-measurement cases, 50% of the state components are taken. In the simulations of the original Tikhonov regularization, we applied the L-curve, GCV, and NCP approaches for selecting the regularization parameter α, and we present the best result.
The Noise-Free Case
Full measurements
Figure 5.22 and Figure 5.23 present the estimated source using the observer and Tikhonov, respectively. From Figure 5.24, it appears that both the observer and Tikhonov approaches give a good estimate of the source, whatever the approach used for selecting the regularization parameter α. The relative errors for this case can be seen in Table 5.1.
Figure 5.22: (a): the exact source f (blue) and the estimated source f̂ (black) using the observer with full measurements. (b): the relative error of the source estimation in %.
Table 5.1: Relative errors for the noise-free case (full measurements)

    method                   ‖f − f̂‖/‖f‖ × 100        max(|f − f̂|/|f| × 100)
    Observer                 0.000577662%              0.0131826%
    Tikhonov with L-curve    4.68842705436554e−11%     1.32058070134541e−10%
    Tikhonov with GCV        4.68842705436554e−11%     1.32058070134541e−10%
    Tikhonov with NCP        9.37414665016344e−11%     2.50190556173606e−11%
Figure 5.23: (a): the exact source f (blue) and the estimated source f̂ (black) using Tikhonov with full measurements. (b): the corresponding relative error of the source estimation in %.
Figure 5.24: Comparison between the observer and Tikhonov in the noise-free case with full measurements.
Partial measurements taken from the middle
Figure 5.25 and Figure 5.26 present the estimated source using the observer and Tikhonov, respectively, where 50% of the state components are taken from the middle. From the two figures and Figure 5.27, it appears that the observer approach gives a better estimate; see also Table 5.2.
Figure 5.25: (a): the exact source f (blue) and the estimated source f̂ (black) using the observer with partial measurements in the middle. (b): the relative error of the source estimation in %.
Figure 5.26: (a): the exact source f (blue) and the estimated source f̂ (black) using Tikhonov with partial measurements in the middle. (b): the corresponding relative error of the source estimation in %.
Table 5.2: Relative errors for the noise-free case (partial measurements from the middle)

    method                   ‖f − f̂‖/‖f‖ × 100
    Observer                 28.3223%
    Tikhonov with L-curve    75.8694%
    Tikhonov with GCV        75.8694%
    Tikhonov with NCP        75.8694%
Figure 5.27: Comparison between the observer and Tikhonov in the noise-free case with partial measurements taken from the middle.
Partial measurements taken from the end
As mentioned before, the measurements are more accessible on the boundary. Thus, we took partial measurements from the end of the domain, 50% of the state components. The results for this case can be seen in Figure 5.28, Figure 5.29, and Figure 5.30. It is clear that Tikhonov is completely unable to recover the interval where no measurements are available, while the observer achieves a reasonably good estimate in this interval. Thus, also in the case of partial measurements from the end, the source estimated using the observer is better than the estimate using Tikhonov.
Table 5.3: Relative errors for the noise-free case (partial measurements from the end)

    method                   ‖f − f̂‖/‖f‖ × 100
    Observer                 23.8103%
    Tikhonov with L-curve    73.5045%
    Tikhonov with GCV        73.5045%
    Tikhonov with NCP        73.5045%
Figure 5.28: (a): the exact source f (blue) and the estimated source f̂ (black) using the observer with partial measurements at the end. (b): the relative error of the source estimation in %.
Figure 5.29: (a): the exact source f (blue) and the estimated source f̂ (black) using Tikhonov (L-curve) with partial measurements at the end. (b): the corresponding relative error of the source estimation in %.
0 0.5 1 1.5 2−3
−2
−1
0
1
2
3
x
f
f T i k .
f o b s
Figure 5.30: Comparison between observer and Tikhonov in the noise-free case with partial measurements taken from the end.
The Noise-Corrupted Case
In the simulations of this noisy case, we found that the observer needs to be
more robust; therefore, we added minor modifications to the observer (5.8) to
obtain an adaptive observer with a sliding mode-like term as follows:
ẑ_j = H ξ̂_j,
Υ_{j+1} = (G − LH) Υ_j + B,
f̂_{j+1} = f̂_j + Σ Υ_j^T H^T [ (z_j − ẑ_j) + γ1 tanh(γ2 (z_j − ẑ_j)) ],
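A minimal numerical sketch of one step of this update is given below. The state update ξ̂ is not shown in the equations above, so the Luenberger-type correction used here is an assumption; all variable names and matrix shapes are illustrative, not the thesis code.

```python
import numpy as np

def adaptive_observer_step(xi_hat, f_hat, Upsilon, z, G, B, H, L, Sigma, gamma1, gamma2):
    """One step of the adaptive observer with the sliding mode-like term.

    Implements the source and auxiliary-filter updates from the equations
    above; the state update is an assumed Luenberger-type correction.
    """
    z_hat = H @ xi_hat                       # predicted measurement
    innov = z - z_hat                        # output estimation error
    # source update: innovation plus smooth sliding mode-like tanh term
    f_next = f_hat + Sigma @ Upsilon.T @ H.T @ (innov + gamma1 * np.tanh(gamma2 * innov))
    # auxiliary filter propagating the source sensitivity
    Upsilon_next = (G - L @ H) @ Upsilon + B
    # assumed Luenberger-type state update driven by the innovation
    xi_next = G @ xi_hat + B @ f_next + L @ innov
    return xi_next, f_next, Upsilon_next
```

The tanh term acts as a smooth approximation of the sign function used in classical sliding-mode observers, which avoids chattering while still adding robustness to noise.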
In addition, in the noisy case, the L-curve, GCV, and NCP criteria sometimes
fail to identify a good regularization parameter. To address this, we also
selected the regularization parameter manually.
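For a manually chosen α, the Tikhonov estimate can be computed directly from the normal equations. The sketch below is illustrative only: `K` stands for an assumed discretized forward operator mapping the source to the measurements, and the α² weighting is one common convention for the penalty term.

```python
import numpy as np

def tikhonov_solve(K, z, alpha):
    """Tikhonov-regularized least squares:
    argmin_f ||K f - z||^2 + alpha^2 ||f||^2, via the normal equations.

    K is an assumed discretized forward operator; alpha is the
    regularization parameter (here chosen manually).
    """
    n = K.shape[1]
    return np.linalg.solve(K.T @ K + alpha**2 * np.eye(n), K.T @ z)
```

Manual selection then amounts to scanning a grid of α values (e.g. `np.logspace(-4, 1, 20)`) and inspecting the resulting estimates.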
Full measurements
Figure 5.31 shows the source estimated using the observer, which remains a good
estimate in this noisy case. The source estimated using Tikhonov with the
manually selected α is shown in Figure 5.32. Figure 5.33 compares the estimated
sources from the two approaches and illustrates that the observer gives a
better estimate than Tikhonov; see also Table 5.4.
Figure 5.31: (a): the exact source f (blue) and the estimated source f̂ (black) using observer with full measurements. (b): the relative error of the source estimation in %.
Figure 5.32: (a): the exact source f (blue) and the estimated source f̂ (black) using Tikhonov with full measurements in the noisy case, where α was selected manually. (b): the corresponding relative error of the source estimation in %.
Table 5.4: Relative errors for the noisy case (full measurements)

Method                       ‖f − f̂‖/‖f‖ × 100
Observer                     9.74384%
Tikhonov with L-curve        4464.34418%
Tikhonov with GCV            41.11148%
Tikhonov with NCP            59.47617%
Tikhonov with selected α     11.14185%
Figure 5.33: Comparison between observer and Tikhonov in the noise-corrupted case with full measurements.
Partial measurements in the middle
The sources estimated using the observer and Tikhonov can be seen in Figure
5.34 and Figure 5.35, respectively, and the relative errors are listed in Table
5.5. The comparison between the two methods is displayed in Figure 5.36. These
figures illustrate that the observer and Tikhonov give close results, with the
observer estimate being slightly better.
Figure 5.34: (a): the exact source f (blue) and the estimated source f̂ (black) using observer with partial measurements taken from the middle. (b): the relative error of the source estimation in %.
Figure 5.35: (a): the exact source f (blue) and the estimated source f̂ (black) using Tikhonov with partial measurements from the middle. (b): the corresponding relative error of the source estimation in %.
Figure 5.36: Comparison between observer and Tikhonov in the noise-corrupted case with partial measurements taken from the middle.
Table 5.5: Relative errors for the noisy case (partial measurements from the middle)

Method                       ‖f − f̂‖/‖f‖ × 100
Observer                     42.92428%
Tikhonov with L-curve        100%
Tikhonov with GCV            32.58996%
Tikhonov with NCP            32.34354%
Tikhonov with selected α     49.04133%
Partial measurements at the end
From Figure 5.37, Figure 5.38, Figure 5.39, and Table 5.6, one can conclude
that the observer gave a better estimate than Tikhonov. Table 5.7 presents the
MSE in the noise-corrupted case with partial measurements.
Figure 5.37: (a): the exact source f (blue) and the estimated source f̂ (black) using observer with partial measurements taken from the end. (b): the corresponding relative error of the source estimation in %.
Table 5.6: Relative errors for the noisy case (partial measurements from the end)

Method                       ‖f − f̂‖/‖f‖ × 100
Observer                     32.18883%
Tikhonov with L-curve        100%
Tikhonov with GCV            50.34026%
Tikhonov with NCP            48.83239%
Tikhonov with selected α     50.29478%
Figure 5.38: (a): the exact source f (blue) and the estimated source f̂ (black) using Tikhonov with partial measurements taken from the end. (b): the corresponding relative error of the source estimation in %.
Figure 5.39: Comparison between observer and Tikhonov in the noise-corrupted case with partial measurements taken from the end.
The numerical simulations in this section show that, in general, the observer
approach gives a better estimate than the Tikhonov regularization method, and
in the worst cases the observer matches Tikhonov's performance.
Table 5.7: MSE in the noisy case (partial measurements)

Method                       Middle               End
Observer                     0.907404289806715    0.680460568436796
Tikhonov with L-curve        2.11396522810324     2.11396522810324
Tikhonov with GCV            0.796284023328078    1.58249288930699
Tikhonov with NCP            0.777861070191947    1.54960272106339
Tikhonov with selected α     1.03663795787186     1.54659139143788
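The two error measures reported in the tables of this section can be computed directly; a minimal sketch follows (function names are illustrative, not the thesis code):

```python
import numpy as np

def relative_error_pct(f_exact, f_est):
    """Relative error in %: ||f - f_hat|| / ||f|| x 100, as in the tables above."""
    return 100.0 * np.linalg.norm(f_exact - f_est) / np.linalg.norm(f_exact)

def mse(f_exact, f_est):
    """Mean squared error between the exact and estimated sources, as in Table 5.7."""
    return float(np.mean((f_exact - f_est) ** 2))
```

Note that the relative error is scale-invariant while the MSE is not, which is why the two measures can rank the methods slightly differently.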
5.5 Chapter Summary
In this chapter, we presented how observers can be applied to PDEs through the
one-dimensional wave equation. Then, numerical simulations for estimating the
source and the state of our one-dimensional wave equation system were
presented, in both the absence and the presence of noise. They demonstrated the
efficiency of the observer-based approach for estimating unknowns in a
distributed system. To emphasize this efficiency, a comparison between the
observer method and the minimization of a cost function with Tikhonov
regularization was carried out. This comparison also considered the two main
cases: the noise-free case and the noisy case. In both cases, the
full-measurement and partial-measurement settings were studied. The results
showed that, in general, the observer gives better results than the Tikhonov
approach. Moreover, the Tikhonov regularization that we introduced is largely
ineffective at estimating the source in the interval where no measurements are
available.
Chapter 6
Conclusion
Observers are being actively used to solve problems governed by partial
differential equations. In this thesis, we have demonstrated the performance of
an observer for estimating the states and the unknown source of a
one-dimensional wave equation. The study covered both the noise-free and the
noise-corrupted cases. We showed that the state and source estimation errors
tend to zero exponentially fast. The effectiveness of the observer-based
approach was further confirmed through a comparison between the observer and
Tikhonov regularization approaches.
In Chapter 2, the general idea of inverse problems was introduced with some
examples. We highlighted the fact that inverse problems are generally ill-posed
due to the lack of continuity between the data and the unknown, which leads to
instability. We saw that inverse problems are usually solved using optimization
techniques; however, to overcome the ill-posedness of the problem, some
regularization is usually required. We focused on Tikhonov regularization,
which is widely used, and presented four different approaches for selecting the
regularization parameter.
In Chapter 3, we recalled some basic definitions on observer theory starting from
state-space representation, passing by the observability condition, and ending with
the concept of observers.
In Chapter 4, we stated our problem; then the direct problem was solved to find
an operator relating the unknown to the measurements. After that, the
ill-posedness of the problem was proved. Then we proposed to solve the problem
using a minimization approach with the Tikhonov regularization method.
Moreover, the regularization parameter was chosen using four different methods:
the Discrepancy Principle of Morozov, L-curve, GCV, and NCP.
In Chapter 5, first, we rewrote the one dimensional wave equation in a state-space
representation. Then we wrote down the problem in a discretized version. After that,
an adaptive observer for the joint estimation of the source and the states was designed.
Numerical simulations for the source and states estimation using observer were pre-
sented, and they have proven the capability of observer to estimate both the source
and the states. Finally, to asses the observer in the source estimation, we compared
its performance with a standard approach which is Tikhonov method considering dif-
ferent number of measurements full and partial and considering the noise-free and
noisy cases. The estimation results confirmed the superiority of observer to estimate
the unknown source.
Finally, we point out that future work can address the following research topics:

• Study the discretization effect on the convergence of the adaptive observer
used in this thesis.

• Analyze some numerical issues encountered during simulations with partial
measurements, such as the ill-conditioning of the observability matrix.

• Study the inverse source problem for a two-dimensional wave equation using an
observer-based approach.

• Study the inverse problem for the wave equation where the wave speed is
unknown.
REFERENCES
[1] R. Haberman, Applied partial differential equations : with Fourier series and