FUNCTIONAL ANALYSIS
HILBERT SPACES
Diego AVERNA*
* Dipartimento di Matematica e Informatica
Facoltà di Scienze MM.FF.NN.
Via Archirafi 34, 90123 Palermo (Italy)
http://math.unipa.it/averna/
• English translation of the lecture notes "Analisi Funzionale - Spazi di Hilbert" ([1]). The translation was prepared by Dott.ssa Mariantonietta Gugliuzza, my student during the academic year 2007/08. My thanks, also on behalf of those who cannot speak Italian.
First Edition 26/12/2007. Last Edition 23/11/2011.
This document can be printed if taken from
http://www.unipa.it/averna/did/Analisi Funzionale/index.html or
http://math.unipa.it/averna/did/Analisi Funzionale/index.html.
Typeset by AMS-LaTeX.
David Hilbert (1862-1943)
Remember that when we talk about scientific problems you are completely free to tell me that I am wrong, because before science we are all equal.
(Mauro Picone (1885-1977), professor of mathematical analysis in Rome, in an answer to his pupil Ennio De Giorgi (1928-1996), who also became a great mathematician.)
Contents
List of Figures 5
Chapter 1. HILBERT SPACES 7
1. Prehilbertian spaces 7
2. Normed linear spaces 14
3. The Hilbert space l2 20
4. The Hilbert space L2 24
Chapter 2. GEOMETRY OF THE HILBERT SPACES 27
1. Subspaces 27
2. Orthogonal subspaces 30
3. Bases 37
4. Isomorphisms 40
Chapter 3. BOUNDED AND LINEAR OPERATORS 43
1. Bounded and linear applications (operators) 43
2. Linear operators 47
3. Bilinear forms 53
4. Adjoint operators 56
5. Projection operators 61
Bibliography 65
Index 67
List of Figures
1 Parallelogram law 11
2 The uniform norm isn’t induced by any inner product 14
3 C(I) of the example 1.4 isn’t complete 18
1 f − P_M f is orthogonal to M 34
CHAPTER 1
HILBERT SPACES
1. Prehilbertian spaces
K will denote either the field of complex numbers C or the field of real numbers IR. The elements of K are called scalars and will be denoted by Greek letters. \overline{α} will denote the CONJUGATE of the scalar α. Hence \overline{α} = α ⇐⇒ α ∈ IR.
Def. 1.1. Let L be a vector space on K. An INNER PRODUCT ON L is a function from L × L → K, denoted by (f, g) 7→ 〈f, g〉, which satisfies the following properties:
p1) 〈f_1 + f_2, g〉 = 〈f_1, g〉 + 〈f_2, g〉
p2) 〈αf, g〉 = α 〈f, g〉
p3) 〈g, f〉 = \overline{〈f, g〉}
p4) 〈f, f〉 ≥ 0 (note that 〈f, f〉 ∈ IR from p3))
p5) 〈f, f〉 = 0 ⇐⇒ f = 0
Def. 1.2. If L is a vector space on K supplied with an inner product, then L is called a PREHILBERTIAN space (on K).
From now on L will denote a fixed prehilbertian space on K.
Theorem 1.1. If 〈., .〉 is an inner product on a K-linear space L, then:
p'1) 〈f, g_1 + g_2〉 = 〈f, g_1〉 + 〈f, g_2〉, ∀f, g_1, g_2 ∈ L
p'2) 〈f, αg〉 = \overline{α} 〈f, g〉, ∀f, g ∈ L, ∀α ∈ K
p'3) 〈0, g〉 = 0 = 〈f, 0〉, ∀f, g ∈ L
Moreover:
〈 Σ_{k=1}^{n} α_k f_k , Σ_{h=1}^{m} β_h g_h 〉 = Σ_{k=1}^{n} Σ_{h=1}^{m} α_k \overline{β_h} 〈f_k, g_h〉.
Proof. p'1) 〈f, g_1 + g_2〉 = \overline{〈g_1 + g_2, f〉} = \overline{〈g_1, f〉 + 〈g_2, f〉} = \overline{〈g_1, f〉} + \overline{〈g_2, f〉} = 〈f, g_1〉 + 〈f, g_2〉.
p'2) 〈f, αg〉 = \overline{〈αg, f〉} = \overline{α 〈g, f〉} = \overline{α} · \overline{〈g, f〉} = \overline{α} 〈f, g〉.
p'3) 0 + 〈0, g〉 = 〈0, g〉 = 〈0 + 0, g〉 = 〈0, g〉 + 〈0, g〉 =⇒ 〈0, g〉 = 0. Thus 〈f, 0〉 = \overline{〈0, f〉} = 0.
The formula for finite linear combinations is proved by induction. □
Example 1.1. In the n-dimensional Euclidean space IR^n = {a = (α_1, . . . , α_n) : α_k ∈ IR} we define:
a + b = (α_1 + β_1, . . . , α_n + β_n)
λa = (λα_1, . . . , λα_n)
〈a, b〉 = Σ_{k=1}^{n} α_k β_k
Verify that the axioms p1), p2), p3), p4), p5) hold (they are immediate).
Example 1.2. Let n ≥ 1. We consider the n-dimensional unitary space C^n = {a = (α_1, . . . , α_n) : α_k ∈ C, k = 1, . . . , n} and define:
a + b = (α_1 + β_1, . . . , α_n + β_n)
λa = (λα_1, . . . , λα_n)
〈a, b〉 = Σ_{k=1}^{n} α_k \overline{β_k}
Verify that all the conditions of Definition 1.2 are satisfied.
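The verification requested above is also easy to carry out numerically. The following sketch (not part of the original notes; the helper `inner` and the sample vectors are illustrative) checks axioms p1)-p5) for the inner product of C^n on concrete vectors:

```python
# Illustrative sketch: checking the inner-product axioms p1)-p5) of Def. 1.1
# for C^n with <a, b> = sum_k alpha_k * conj(beta_k).

def inner(a, b):
    """Inner product on C^n: sum of a_k times the conjugate of b_k."""
    return sum(x * y.conjugate() for x, y in zip(a, b))

a = [1 + 2j, -1j, 3 + 0j]
b = [2 - 1j, 4 + 0j, 1j]
c = [0.5j, 1 + 1j, -2 + 0j]
alpha = 2 - 3j

# p1) additivity in the first argument
assert abs(inner([x + y for x, y in zip(a, b)], c) - (inner(a, c) + inner(b, c))) < 1e-12
# p2) homogeneity in the first argument
assert abs(inner([alpha * x for x in a], b) - alpha * inner(a, b)) < 1e-12
# p3) conjugate symmetry: <b, a> = conj(<a, b>)
assert abs(inner(b, a) - inner(a, b).conjugate()) < 1e-12
# p4), p5) <a, a> is real and strictly positive for a != 0
assert abs(inner(a, a).imag) < 1e-12 and inner(a, a).real > 0
```

Running the same checks over many random vectors gives a quick sanity test of any candidate inner product.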
Example 1.3 (Finite sequences). Let L = {a = (α_k)_{k=1}^{∞} : α_k ∈ C for k ∈ IN, α_k = 0 for k > n(a)} and define:
a + b = (α_k + β_k)_{k=1}^{∞}
λa = (λα_k)_{k=1}^{∞}
〈a, b〉 = Σ_{k=1}^{∞} α_k \overline{β_k}
Note: the series reduces to a finite sum. Verify that L is a prehilbertian space.
Example 1.4. Let a, b ∈ IR with a < b and let I = [a, b]. We denote by C(I) the class of all continuous functions f : I → C. We define (pointwise on I):
(f + g)(x) = f(x) + g(x), ∀x ∈ I
(λf)(x) = λf(x), ∀x ∈ I
Moreover, let:
〈f, g〉 = ∫_a^b f(x) \overline{g(x)} dx
Prove that C(I) is a space supplied with an inner product.
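To get a concrete feel for this inner product, one can approximate the integral numerically. The sketch below (the midpoint Riemann sum is a choice of this illustration, not part of the notes) computes 〈f, g〉 for continuous functions:

```python
# Illustrative sketch (midpoint Riemann sum): approximating the Example 1.4
# inner product <f, g> = integral_a^b f(x) * conj(g(x)) dx on C([a, b]).

def inner_C(f, g, a, b, n=20000):
    """Midpoint-rule approximation of the inner product of Example 1.4."""
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) * g(a + (k + 0.5) * h).conjugate()
               for k in range(n)) * h

# <x, x> on C([0, 1]) should be integral_0^1 x^2 dx = 1/3
approx = inner_C(lambda x: complex(x), lambda x: complex(x), 0.0, 1.0)
assert abs(approx - 1 / 3) < 1e-6
```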
The inner product leads in a natural way to the following definition:
Def. 1.3. For every vector f ∈ L, let ‖f‖ denote the nonnegative real number +√〈f, f〉. Then ‖f‖ is called the NORM OF f (INDUCED BY THE INNER PRODUCT 〈., .〉 ON L).
A vector with norm 1 is called a UNIT VECTOR.
Theorem 1.2 (Cauchy-Schwarz-Bunyakovsky inequality).
|〈f, g〉| ≤ ‖f‖ ‖g‖
Moreover: the equality holds ⇐⇒ f and g are linearly dependent.
Proof. If g = 0 the conclusion is trivial. Therefore assume g ≠ 0, so that 〈g, g〉 =: ‖g‖^2 > 0, and put α := 〈f, g〉/‖g‖^2. One has:
(1) 0 ≤ 〈f − αg, f − αg〉 = ‖f‖^2 − \overline{α} 〈f, g〉 − α 〈g, f〉 + |α|^2 ‖g‖^2 = ‖f‖^2 − |〈f, g〉|^2/‖g‖^2
namely |〈f, g〉|^2 ≤ ‖f‖^2 ‖g‖^2; extracting the square root, |〈f, g〉| ≤ ‖f‖ ‖g‖.
Moreover, if f and g are linearly dependent, say f = αg, one has:
|〈f, g〉| = |〈αg, g〉| = |α| ‖g‖^2 = |α| ‖g‖ · ‖g‖ = ‖f‖ ‖g‖.
Conversely, if |〈f, g〉| = ‖f‖ ‖g‖, then squaring gives |〈f, g〉|^2 = ‖f‖^2 ‖g‖^2, and from (1) the above α satisfies 〈f − αg, f − αg〉 = 0, whence f − αg = 0. □
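The inequality just proved can be exercised numerically. The following sketch (illustrative, real case only) checks it on random vectors and verifies the equality case for linearly dependent vectors:

```python
import math
import random

# Illustrative sketch: the Cauchy-Schwarz-Bunyakovsky inequality
# |<f, g>| <= ||f|| ||g|| on random real vectors, with equality when g = 2f.

random.seed(0)

def inner(f, g):
    return sum(x * y for x, y in zip(f, g))

def norm(f):
    return math.sqrt(inner(f, f))

for _ in range(100):
    f = [random.uniform(-1, 1) for _ in range(5)]
    g = [random.uniform(-1, 1) for _ in range(5)]
    assert abs(inner(f, g)) <= norm(f) * norm(g) + 1e-12

# equality for linearly dependent vectors
f = [1.0, 2.0, -3.0]
g = [2.0 * x for x in f]
assert abs(abs(inner(f, g)) - norm(f) * norm(g)) < 1e-12
```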
Theorem 1.3. If (L, 〈., .〉) ≠ {0}, then for every f ∈ L, ‖f‖ = sup_{‖g‖=1} |〈f, g〉|.
Proof. If f = 0 the assertion is trivial because ‖0‖ = 0 = sup_{‖g‖=1} |〈0, g〉|.
If f ≠ 0, then ‖f‖ ≠ 0 and
‖f‖ = 〈f, f/‖f‖〉 ≤ sup_{‖g‖=1} |〈f, g〉| ≤ sup_{‖g‖=1} ‖f‖ ‖g‖ = ‖f‖. □
Theorem 1.4. The norm ‖.‖ induced by the inner product 〈., .〉 on a prehilbertian K-space L has the following properties:
N1) ‖f‖ ≥ 0, ∀f ∈ L
N2) ‖f‖ = 0 ⇐⇒ f = 0
N3) ‖αf‖ = |α| ‖f‖, ∀f ∈ L, ∀α ∈ K
N4) ‖f + g‖ ≤ ‖f‖ + ‖g‖, ∀f, g ∈ L
Moreover: the equality in N4) holds ⇐⇒ g = 0, or f = λg with λ ≥ 0 and g ≠ 0.
Proof. Only N4) needs a proof.
‖f + g‖^2 = ‖f‖^2 + 2 Re 〈f, g〉 + ‖g‖^2 ≤ ‖f‖^2 + 2 |〈f, g〉| + ‖g‖^2 ≤ ‖f‖^2 + 2 ‖f‖ ‖g‖ + ‖g‖^2 = (‖f‖ + ‖g‖)^2.
We prove that ‖f + g‖ = ‖f‖ + ‖g‖ if and only if f = λg with λ ≥ 0 (the case g = 0 being trivial).
Suppose f = λg with λ ≥ 0. Then:
‖f + g‖ = ‖λg + g‖ = ‖(λ + 1)g‖ = (λ + 1) ‖g‖ = λ ‖g‖ + ‖g‖ = ‖λg‖ + ‖g‖ = ‖f‖ + ‖g‖.
Conversely, suppose ‖f + g‖ = ‖f‖ + ‖g‖. Since
‖f + g‖^2 ≤ ‖f‖^2 + 2 |〈f, g〉| + ‖g‖^2 ≤ ‖f‖^2 + 2 ‖f‖ ‖g‖ + ‖g‖^2,
the hypothesis forces
|〈f, g〉| = ‖f‖ ‖g‖,
so by Theorem 1.2 there exists λ such that f = λg. It remains to prove that λ ≥ 0. Indeed
‖f + g‖ = ‖λg + g‖ = ‖(λ + 1)g‖ = |λ + 1| ‖g‖
and moreover
‖f + g‖ = ‖f‖ + ‖g‖ = ‖λg‖ + ‖g‖ = (|λ| + 1) ‖g‖,
so it follows that
|λ + 1| = |λ| + 1,
which implies^1 |λ| = λ, that is λ ≥ 0. □
Theorem 1.5 (Parallelogram law). The norm ‖.‖ induced by the inner product 〈., .〉 on a prehilbertian K-space L satisfies:
‖f + g‖^2 + ‖f − g‖^2 = 2‖f‖^2 + 2‖g‖^2, ∀f, g ∈ L.
^1 Let |λ + 1| = |λ| + 1. If λ is complex, squaring both sides of the equality shows that it holds if and only if λ is a nonnegative real number.
Proof.
‖f + g‖^2 = 〈f + g, f + g〉 = ‖f‖^2 + 〈f, g〉 + 〈g, f〉 + ‖g‖^2
‖f − g‖^2 = 〈f − g, f − g〉 = ‖f‖^2 − 〈f, g〉 − 〈g, f〉 + ‖g‖^2
Adding the two identities side by side gives the thesis. □
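The identity just proved is easy to confirm numerically. The sketch below (illustrative; sample vectors of C^3 chosen for this check) evaluates both sides under the norm induced by the standard inner product:

```python
# Illustrative sketch: the parallelogram law
# ||f+g||^2 + ||f-g||^2 = 2||f||^2 + 2||g||^2, checked in C^3.

def norm2(f):
    """Squared norm induced by <f, g> = sum_k f_k * conj(g_k)."""
    return sum((x * x.conjugate()).real for x in f)

f = [1 + 1j, 2 + 0j, -1j]
g = [0.5 + 0j, -1 + 2j, 3 + 0j]
lhs = norm2([x + y for x, y in zip(f, g)]) + norm2([x - y for x, y in zip(f, g)])
rhs = 2 * norm2(f) + 2 * norm2(g)
assert abs(lhs - rhs) < 1e-12
```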
Remark 1.1. In IR^2 Theorem 1.5 is easily visualized: the sum of the squares of the diagonals of a parallelogram equals the sum of the squares of its sides.
Figure 1. Parallelogram law
Def. 1.4. Two VECTORS f, g ∈ L are said to be mutually ORTHOGONAL (or PERPENDICULAR), written f ⊥ g, if 〈f, g〉 = 0.
A FAMILY F = {f_σ}_{σ∈Σ} ⊂ L is said ORTHOGONAL if f_{σ_1} ⊥ f_{σ_2} for all σ_1, σ_2 ∈ Σ with σ_1 ≠ σ_2.
A FAMILY F = {f_σ}_{σ∈Σ} ⊂ L is called ORTHONORMAL if F is orthogonal and ‖f_σ‖ = 1 for all σ ∈ Σ.
Def. 1.5. A family F = {f_1, . . . , f_n} ⊂ L is said LINEARLY INDEPENDENT if the relation
α_1 f_1 + . . . + α_n f_n = 0, with α_i ∈ K,
implies α_i = 0, i = 1, . . . , n. Otherwise the family is said LINEARLY DEPENDENT.
Analogously, an infinite family F = {f_σ}_{σ∈Σ} ⊂ L will be said LINEARLY INDEPENDENT if any n of its distinct elements, for every n ∈ IN, are linearly independent.
Example 1.5. We consider the space C(I) as in Example 1.4. For k ∈ ZZ we define:
e_k(x) = (1/√(b − a)) e^{2πik(x−a)/(b−a)}, for every x ∈ [a, b].
Then e_k ∈ C(I), and moreover {e_k : k ∈ ZZ} is an orthonormal family. In fact:
〈e_k, e_n〉 = ∫_a^b e_k(x) \overline{e_n(x)} dx = (1/(b − a)) ∫_a^b e^{2πi(k−n)(x−a)/(b−a)} dx = δ_{n,k} ^2, that is, 1 if k = n and 0 if k ≠ n
(remember that e^{2πi} = cos 2π + i sin 2π = 1).
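The orthonormality relations above can be checked by quadrature. The sketch below assumes, as an illustration, [a, b] = [0, 1] (so e_k(x) = e^{2πikx}) and a midpoint rule:

```python
import cmath
import math

# Illustrative sketch (assuming [a, b] = [0, 1] and midpoint quadrature):
# <e_k, e_m> = delta_{k,m} for the family of Example 1.5.

def e(k):
    """e_k(x) = exp(2*pi*i*k*x) on [0, 1] (so 1/sqrt(b-a) = 1)."""
    return lambda x: cmath.exp(2j * math.pi * k * x)

def inner(f, g, n=4000):
    """Midpoint-rule approximation of integral_0^1 f(x) * conj(g(x)) dx."""
    h = 1.0 / n
    return sum(f((j + 0.5) * h) * g((j + 0.5) * h).conjugate() for j in range(n)) * h

for k in range(-2, 3):
    for m in range(-2, 3):
        expected = 1.0 if k == m else 0.0
        assert abs(inner(e(k), e(m)) - expected) < 1e-9
```

For these periodic exponentials the midpoint sum is a geometric sum of roots of unity, so the check is exact up to rounding.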
Theorem 1.6. An orthogonal family F ⊂ (L, 〈., .〉) of nonzero vectors is linearly independent.
Proof. Let F = {f_n} be the orthogonal family of nonzero vectors and suppose that it is linearly dependent:
Σ_{i=1}^{n} α_i f_i = 0 with α_i ≠ 0 for at least one i;
we may suppose α_n ≠ 0. We take the inner product with f_n, after moving the term containing f_n to the second member:
〈 Σ_{i=1}^{n−1} α_i f_i , f_n 〉 = 〈−α_n f_n, f_n〉
Since f_i ⊥ f_n for i = 1, . . . , n − 1 and f_n ≠ 0, this gives 0 = −α_n ‖f_n‖^2, hence α_n = 0, a contradiction. □
Theorem 1.7 (Pythagorean theorem). If f, g ∈ (L, 〈., .〉) and f ⊥ g, then ‖f ± g‖^2 = ‖f‖^2 + ‖g‖^2.
Proof. ‖f ± g‖^2 = 〈f ± g, f ± g〉 = ‖f‖^2 ± 〈f, g〉 ± 〈g, f〉 + ‖g‖^2 = ‖f‖^2 + ‖g‖^2. □
Corollary 1.1. If f, g ∈ (L, 〈., .〉), f ⊥ g and ‖f‖ = ‖g‖ = 1, then ‖f − g‖ = ‖f + g‖ = √2.
^2 In mathematics the Kronecker delta (named after the German mathematician Leopold Kronecker (1823-1891)) is a function of two variables (in particular, of two integer variables) which equals 1 if the two values coincide and 0 otherwise.
Corollary 1.2. If {f_k}_{k=1}^{n} ⊂ (L, 〈., .〉) is an orthogonal family of vectors, then ‖Σ_{k=1}^{n} f_k‖^2 = Σ_{k=1}^{n} ‖f_k‖^2.
Theorem 1.8 (Bessel inequality). Let {e_k}_{k=1}^{n} be an orthonormal family of vectors. Then for every f ∈ L:
‖f‖^2 ≥ Σ_{k=1}^{n} |〈f, e_k〉|^2
Proof. Let g = f − Σ_{k=1}^{n} 〈f, e_k〉 e_k. Then for every h = 1, . . . , n:
〈g, e_h〉 = 〈f, e_h〉 − Σ_{k=1}^{n} 〈f, e_k〉 〈e_k, e_h〉 = 〈f, e_h〉 − 〈f, e_h〉 = 0
Then g ⊥ e_h for h = 1, . . . , n, and the vectors g, 〈f, e_1〉 e_1, . . . , 〈f, e_n〉 e_n form an orthogonal family.
Hence from Corollary 1.2 we have:
‖f‖^2 = ‖g + Σ_{k=1}^{n} 〈f, e_k〉 e_k‖^2 = ‖g‖^2 + Σ_{k=1}^{n} |〈f, e_k〉|^2 ≥ Σ_{k=1}^{n} |〈f, e_k〉|^2. □
Exercise 1.1. If {e_k}_{k=1}^{n} is an orthonormal family of vectors, then ‖f‖^2 = Σ_{k=1}^{n} |〈f, e_k〉|^2 ⇐⇒ f = Σ_{k=1}^{n} 〈f, e_k〉 e_k.
Theorem 1.9. If {e_k}_{k=1}^{n} is an orthonormal family of vectors, then for every f ∈ L, ‖f − Σ_{k=1}^{n} α_k e_k‖ is minimized when α_k = 〈f, e_k〉 for k = 1, . . . , n.
Proof. ‖f − Σ_{k=1}^{n} α_k e_k‖^2 = 〈f − Σ_{k=1}^{n} α_k e_k, f − Σ_{k=1}^{n} α_k e_k〉 = ‖f‖^2 − Σ_{k=1}^{n} \overline{α_k} 〈f, e_k〉 − Σ_{k=1}^{n} α_k 〈e_k, f〉 + 〈Σ_{k=1}^{n} α_k e_k, Σ_{k=1}^{n} α_k e_k〉.
The last term on the right is Σ_{k=1}^{n} |α_k|^2 because the e_k are orthonormal.
Moreover −\overline{α_k} 〈f, e_k〉 − α_k 〈e_k, f〉 + |α_k|^2 = −\overline{α_k} 〈f, e_k〉 − α_k \overline{〈f, e_k〉} + |α_k|^2 =^3 −2 Re(α_k \overline{〈f, e_k〉}) + |α_k|^2 = −|〈f, e_k〉|^2 + |α_k − 〈f, e_k〉|^2.
So:
0 ≤ ‖f − Σ_{k=1}^{n} α_k e_k‖^2 = ‖f‖^2 − Σ_{k=1}^{n} |〈f, e_k〉|^2 + Σ_{k=1}^{n} |α_k − 〈f, e_k〉|^2. □
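Both Bessel's inequality and the minimizing property can be observed numerically. The sketch below (illustrative; the vectors in R^4 are chosen for this check) compares the optimal coefficients 〈f, e_k〉 with a few other choices:

```python
# Illustrative sketch: Bessel's inequality (Theorem 1.8) and the minimizing
# property of the coefficients <f, e_k> (Theorem 1.9), checked in R^4.

def inner(f, g):
    return sum(x * y for x, y in zip(f, g))

e1 = [1.0, 0.0, 0.0, 0.0]
e2 = [0.0, 1.0, 0.0, 0.0]
f = [3.0, -2.0, 5.0, 1.0]
coeffs = [inner(f, e1), inner(f, e2)]

# Bessel: ||f||^2 >= sum_k |<f, e_k>|^2
assert inner(f, f) >= sum(c * c for c in coeffs)

def dist2(alphas):
    """||f - alpha_1 e_1 - alpha_2 e_2||^2."""
    g = [x - alphas[0] * u - alphas[1] * v for x, u, v in zip(f, e1, e2)]
    return inner(g, g)

# Theorem 1.9: alpha_k = <f, e_k> minimizes the distance
best = dist2(coeffs)
for trial in ([0.0, 0.0], [1.0, 1.0], [4.0, -3.0], [2.9, -2.1]):
    assert best <= dist2(trial) + 1e-12
```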
Remark 1.2. The prehilbertian spaces are a natural extension of the spaces IRn
since many geometric properties of IRn continue to be valid.
^3 Remember that: |α_k − 〈f, e_k〉|^2 = |α_k|^2 + |〈f, e_k〉|^2 − 2 Re(α_k \overline{〈f, e_k〉}).
2. Normed linear spaces
K will again denote C or IR.
Def. 2.1. A vector space L on K is said to be a NORMED LINEAR SPACE (on K) if there exists a function ‖.‖ from L to IR, called a NORM, satisfying the following conditions:
N1) ‖f‖ ≥ 0
N2) ‖f‖ = 0 ⇐⇒ f = 0
N3) ‖αf‖ = |α| ‖f‖
N4) ‖f + g‖ ≤ ‖f‖ + ‖g‖
By Theorem 1.4, every prehilbertian space is a normed space. But there are normed spaces which aren't prehilbertian, as the following example shows.
Example 2.1. Let L = C([0, 2π]). We define:
‖f‖ = sup{|f(x)| : x ∈ [0, 2π]}
Verify that this is a norm (also called the UNIFORM NORM).
However, this norm isn't induced by any inner product on L. In fact, if it were, the parallelogram law would have to hold. Choosing, for example:
f(x) = max{sin x, 0} and g(x) = max{− sin x, 0}
Figure 2. f(x) = max{sin x, 0} and g(x) = max{− sin x, 0}
we have:
‖f + g‖ = ‖f − g‖ = ‖f‖ = ‖g‖ = 1
and so:
2 = ‖f + g‖^2 + ‖f − g‖^2 ≠ 2‖f‖^2 + 2‖g‖^2 = 4
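The failure of the parallelogram law for the uniform norm can be seen numerically as well. The sketch below (illustrative; the sup is approximated on a fine grid) reproduces the computation 2 ≠ 4:

```python
import math

# Illustrative sketch: on a fine grid, the uniform norm of Example 2.1
# violates the parallelogram law for f = max(sin x, 0), g = max(-sin x, 0).

N = 10000
xs = [2 * math.pi * k / N for k in range(N + 1)]

def sup_norm(h):
    """Grid approximation of sup{|h(x)| : x in [0, 2*pi]}."""
    return max(abs(h(x)) for x in xs)

f = lambda x: max(math.sin(x), 0.0)
g = lambda x: max(-math.sin(x), 0.0)

lhs = sup_norm(lambda x: f(x) + g(x)) ** 2 + sup_norm(lambda x: f(x) - g(x)) ** 2
rhs = 2 * sup_norm(f) ** 2 + 2 * sup_norm(g) ** 2
assert abs(lhs - 2.0) < 1e-6   # ||f+g||^2 + ||f-g||^2 = 2
assert abs(rhs - 4.0) < 1e-6   # 2||f||^2 + 2||g||^2 = 4
```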
Theorem 2.1. If (L, ‖.‖) is a normed K-space whose norm satisfies the parallelogram law, then L becomes a prehilbertian K-space on defining the inner product 〈., .〉 by:
〈f, g〉 = (1/4)[‖f + g‖^2 − ‖f − g‖^2], if K = IR
〈f, g〉 = (1/4)[‖f + g‖^2 − ‖f − g‖^2 + i‖f + ig‖^2 − i‖f − ig‖^2], if K = C.
Moreover: the inner product 〈., .〉 induces^4 the norm ‖.‖.
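The complex polarization formula of Theorem 2.1 can be tested directly. The sketch below (illustrative; sample vectors of C^3) recovers the standard inner product from its induced norm:

```python
# Illustrative sketch: the polarization formula of Theorem 2.1 (complex case)
# recovers <a, b> from the norm alone, checked on sample vectors of C^3.

def inner(a, b):
    return sum(x * y.conjugate() for x, y in zip(a, b))

def norm2(a):
    """||a||^2 = <a, a>."""
    return inner(a, a).real

def polarize(a, b):
    """(1/4)[||a+b||^2 - ||a-b||^2 + i ||a+ib||^2 - i ||a-ib||^2]."""
    s = lambda c: [x + c * y for x, y in zip(a, b)]
    return 0.25 * (norm2(s(1)) - norm2(s(-1)) + 1j * norm2(s(1j)) - 1j * norm2(s(-1j)))

a = [1 + 2j, -1 + 0j, 3j]
b = [2 + 0j, 1 - 1j, 0.5j]
assert abs(polarize(a, b) - inner(a, b)) < 1e-12
```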
N.S.C. 2.1 (necessary and sufficient condition). A norm is induced by an inner product if and only if the norm satisfies the parallelogram law.
From now on we will denote by L a normed linear space on K.
Theorem 2.2. | ‖f‖ − ‖g‖ | ≤ ‖f − g‖.
Proof.
‖f‖ = ‖(f − g) + g‖ ≤ ‖f − g‖ + ‖g‖
‖g‖ = ‖(g − f) + f‖ ≤ ‖f − g‖ + ‖f‖
whence the assertion. □
A normed linear space is a topological vector space with the topology induced by the metric d(f, g) = ‖f − g‖. Prove that d is a distance on L.
Def. 2.2. The space L is said SEPARABLE if it contains a countable subset A which is EVERYWHERE DENSE IN L (that is, \overline{A} = L).
Example 2.2. C^n is separable for every n ≥ 1. In fact the set of vectors with complex rational coordinates, that is α_k = β_k + iγ_k with β_k, γ_k ∈ IQ, is countable and everywhere dense in C^n.
Def. 2.3. A sequence (f_n)_{n=1}^{∞} ⊂ L CONVERGES TO a VECTOR f, called its LIMIT, lim_{n→∞} f_n = f, if:
∀ ε > 0 ∃ n̄ = n̄(ε) : ∀ n ≥ n̄ =⇒ ‖f − f_n‖ < ε.
A series Σ_{k=1}^{∞} g_k CONVERGES TO A VECTOR g, called its SUM, Σ_{k=1}^{∞} g_k = g, if lim_{n→∞} Σ_{k=1}^{n} g_k = g.
A sequence or series that doesn't converge is said DIVERGENT.
Remark 2.1. The following formulations of lim_{n→∞} f_n = f are equivalent:
a) ∀ ε > 0, the open ε-sphere around f contains all the vectors f_n for sufficiently big n.
b) lim_{n→∞} ‖f_n − f‖ = 0.
The following formulations of Σ_{k=1}^{∞} g_k = g are equivalent:
c) The sequence of the partial sums (Σ_{k=1}^{n} g_k)_n converges to g.
d) lim_{n→∞} ‖g − Σ_{k=1}^{n} g_k‖ = 0.
Exercise 2.1. Prove that a convergent sequence determines its limit uniquely.
^4 The norm is said to be induced if ‖f‖^2 = 〈f, f〉, ∀f ∈ L (Def. 1.3).
Theorem 2.3. Let f ∈ L and U ⊂ L. Then f ∈ \overline{U} ⇐⇒ there exists a sequence (f_n)_{n=1}^{∞} ⊂ U convergent to f.
Proof. If f ∈ \overline{U}, then for every n ≥ 1 the open 1/n-sphere around f contains at least one vector f_n ∈ U. From ‖f − f_n‖ < 1/n it follows that lim_{n→∞} f_n = f.
Conversely, suppose there exists (f_n)_{n=1}^{∞} ⊂ U such that f = lim_{n→∞} f_n. If f ∈ U there is nothing to prove. If f ∉ U, then for every ε > 0, from ‖f − f_n‖ < ε for every n > n̄(ε), some f_n ≠ f is contained in the ε-sphere around f; that is, f is an accumulation point of U and hence f ∈ \overline{U}. □
Theorem 2.4. If lim_{n→∞} f_n = f, lim_{n→∞} α_n = α, lim_{n→∞} g_n = g, lim_{n→∞} β_n = β, then the following properties hold:
a) lim_{n→∞}(α_n f_n + β_n g_n) = αf + βg
b) lim_{n→∞} ‖f_n‖ = ‖f‖
c) If L is prehilbertian and the norm is induced by the inner product 〈., .〉, then lim_{n→∞} 〈f_n, g_n〉 = 〈f, g〉.
Proof. a): ‖(αf + βg) − (α_n f_n + β_n g_n)‖ ≤ ‖αf − α_n f_n‖ + ‖βg − β_n g_n‖ ≤ |α − α_n| ‖f‖ + |α_n| ‖f − f_n‖ + |β − β_n| ‖g‖ + |β_n| ‖g − g_n‖.
Since the sequences (α_n)_{n=1}^{∞} and (β_n)_{n=1}^{∞} are bounded, a) follows.
b): | ‖f‖ − ‖f_n‖ | ≤ ‖f − f_n‖ by Theorem 2.2, hence b).
c): |〈f, g〉 − 〈f_n, g_n〉| ≤ |〈f, g〉 − 〈f, g_n〉| + |〈f, g_n〉 − 〈f_n, g_n〉| = |〈f, g − g_n〉| + |〈f − f_n, g_n〉| ≤ ‖f‖ ‖g − g_n‖ + ‖f − f_n‖ ‖g_n‖.
Since lim_{n→∞} ‖g_n‖ = ‖g‖, the sequence (‖g_n‖)_{n=1}^{∞} is bounded. Therefore c) holds. □
Def. 2.4. A sequence (f_n)_{n=1}^{∞} ⊂ L is said to be a CAUCHY (or FUNDAMENTAL) sequence if
∀ ε > 0 ∃ n̄ = n̄(ε) : ∀ m, n ≥ n̄ =⇒ ‖f_m − f_n‖ < ε.
Notation: lim_{m,n→∞} ‖f_m − f_n‖ = 0.
Exercise 2.2. Every Cauchy sequence is bounded.
Theorem 2.5. Every convergent sequence in L is a Cauchy sequence.
Proof. Left as an exercise. □
Def. 2.5. A normed linear space L is said COMPLETE if every Cauchy sequence in L converges to some vector of L.
A complete normed linear space on K is called a BANACH SPACE ON K.
Remark 2.2. From the previous definition it follows that in a Banach space a sequence (f_n)_{n=1}^{∞} converges if and only if it is a Cauchy sequence; thus a series Σ_{k=1}^{∞} g_k converges in a Banach space if and only if the sequence of its partial sums is a Cauchy sequence, that is, if and only if lim_{m,n→∞} ‖Σ_{k=m}^{n} g_k‖ = 0.
Example 2.3. Let L = C^n. Prove that L is complete in the norm induced by the inner product given in Example 1.2.
Example 2.4. Let L be the prehilbertian space of Example 1.3. The space L of all finite sequences of complex numbers isn't complete in the norm induced by the inner product.
In fact, let a_n = (1, 1/2, . . . , 1/n, 0, . . .) for n ≥ 1. Then for every n > m we have
‖a_m − a_n‖^2 = Σ_{k=m+1}^{n} 1/k^2.
Since the series Σ_{k=1}^{∞} 1/k^2 converges, it follows that lim_{m,n→∞} ‖a_m − a_n‖ = 0.
But on the other hand, the sequence (a_n)_{n=1}^{∞} can't converge to an element
α = (α_1, . . . , α_j, 0, . . .) ∈ L.
In fact for every n > j we would have ‖α − a_n‖^2 ≥ 1/(j + 1)^2 > 0.
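The Cauchy property of this sequence can be made tangible by computing the distances ‖a_m − a_n‖^2 directly. A brief illustrative sketch (the concrete indices are choices of this illustration):

```python
# Illustrative sketch for Example 2.4: the distances ||a_m - a_n||^2 of the
# sequence a_n = (1, 1/2, ..., 1/n, 0, ...) are tails of the convergent
# series sum 1/k^2, so (a_n) is a Cauchy sequence.

def dist2(m, n):
    """||a_m - a_n||^2 = sum_{k = m+1}^{n} 1/k^2, for n > m."""
    return sum(1.0 / k ** 2 for k in range(m + 1, n + 1))

# the tail bound sum_{k > m} 1/k^2 < 1/m makes the distances arbitrarily small
assert dist2(100, 10000) < 1.0 / 100
assert dist2(1000, 100000) < 1.0 / 1000
```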
Example 2.5. Let C(I) be the prehilbertian space of Example 1.4. C(I) isn't complete. For simplicity let I = [−1, 1].
For every n ≥ 1 we define:
f_n(x) = 0 for −1 ≤ x ≤ 0; f_n(x) = nx for 0 < x < 1/n; f_n(x) = 1 for 1/n ≤ x ≤ 1.
Figure 3. f_1(x), f_2(x), . . . , f_8(x), . . .
Then we have:
|f_m(x) − f_n(x)| = 0 for −1 ≤ x ≤ 0
|f_m(x) − f_n(x)| ≤ 1 for 0 ≤ x ≤ max{1/m, 1/n}
|f_m(x) − f_n(x)| = 0 for max{1/m, 1/n} ≤ x ≤ 1
In conclusion:
‖f_m − f_n‖^2 = ∫_{−1}^{1} |f_m(x) − f_n(x)|^2 dx ≤ max{1/m, 1/n}
hence lim_{m,n→∞} ‖f_m − f_n‖ = 0.
If f ∈ C[−1, 1] were such that:
lim_{n→∞} ‖f − f_n‖^2 = lim_{n→∞} ∫_{−1}^{1} |f(x) − f_n(x)|^2 dx = 0
then necessarily:
f(x) = 0 for −1 ≤ x ≤ 0, and f(x) = 1 for 0 < x ≤ 1.
In fact ∫_a^b |f_n(x) − f(x)|^2 dx ≤ ∫_{−1}^{1} |f_n(x) − f(x)|^2 dx for every interval [a, b] ⊂ [−1, 1], hence ∫_a^b |f_n(x) − f(x)|^2 dx → 0. In particular
∫_{−1}^{0} |f_n(x) − f(x)|^2 dx = ∫_{−1}^{0} |f(x)|^2 dx → 0
in other words:
∫_{−1}^{0} |f(x)|^2 dx = 0.
Since f is continuous it follows that f(x) = 0 ∀ x ∈ [−1, 0].
Let 0 < ε < 1. We have:
∫_{ε}^{1} |f_n(x) − f(x)|^2 dx → 0.
But for n > 1/ε we have f_n(x) = 1 for x ∈ [ε, 1], so
∫_{ε}^{1} |f_n(x) − f(x)|^2 dx = ∫_{ε}^{1} |1 − f(x)|^2 dx for n > 1/ε.
For n → ∞ it follows that ∫_{ε}^{1} |1 − f(x)|^2 dx = 0, so f(x) = 1 on [ε, 1]. By the arbitrariness of ε > 0 it follows that f(x) = 1 for x ∈ ]0, 1]. And this is impossible by the continuity of f.
Example 2.6. Let L = C(I) as in Example 2.1. We observe that a sequence (f_n)_{n=1}^{∞} converges to f in C(I) ⇐⇒ it converges uniformly to f on I. In fact the relation ‖f − f_n‖ < ε is equivalent to |f(x) − f_n(x)| < ε, ∀ x ∈ I.
C(I) is complete. In fact, let (f_n)_{n=1}^{∞} be a Cauchy sequence in C(I). For every x ∈ I we have |f_n(x) − f_m(x)| ≤ ‖f_m − f_n‖. Then for every x ∈ I the sequence of numbers (f_n(x))_{n=1}^{∞} converges to a number that we will denote by f(x).
The function f : x → f(x) defined on I is continuous, and (f_n)_{n=1}^{∞} converges to f in the space C(I). In fact for every ε > 0 there exists an index n̄ = n̄(ε) such that ‖f_n − f_m‖ < ε for n, m > n̄, and so |f_n(x) − f_m(x)| < ε for every x ∈ I and n, m > n̄. Letting m → ∞, it follows that:
(2) |f(x) − f_n(x)| ≤ ε for every x ∈ I and n > n̄
that is, the sequence (f_n)_{n=1}^{∞} of continuous functions converges uniformly to f, so f is continuous. From (2) we have ‖f − f_n‖ ≤ ε for n > n̄, and the proof is complete.
Theorem 2.6. Let L be a normed linear space on K. Then there exists a linear space L1 on K with a norm ‖.‖1, called the COMPLETION of L, with the following properties:
a) L ⊂ L1
b) ‖f‖1 = ‖f‖ for every f ∈ L
c) L is everywhere dense in L1
d) L1 is complete.
If the norm ‖.‖ is induced by an inner product 〈., .〉, then the norm ‖.‖1 is induced by an inner product 〈., .〉1 on L1, and 〈f, g〉 = 〈f, g〉1 for every f, g ∈ L.
Remark to Theorem 2.6. The space L1 with the properties a) b) c) d) is uniquely determined by L. This justifies the name "the completion of L". The basic idea of the proof is the same as the one used in the construction of the field of real numbers starting from the field of rational numbers.
Speaking intuitively, to every Cauchy sequence in L that doesn't converge to a point of L we associate a new point (its limit point) which we add to L; its norm is uniquely determined by the continuity condition (Theorem 2.4 b)), and the same holds for the linear operations (Theorem 2.4 a)).
Def. 2.6. A prehilbertian space on C (or IR) which is complete with respect to the norm induced by the inner product is called a complex (or real) HILBERT SPACE.
3. The Hilbert space l2
Now we will exhibit a Hilbert space which is the completion of the prehilbertian space of all finite sequences of complex numbers (cf. Example 1.3).
We consider the set l2 of all square-summable sequences of complex numbers:
l2 = {a = (α_k)_{k=1}^{∞} : α_k ∈ C for every k, Σ_{k=1}^{∞} |α_k|^2 < ∞}
Theorem 3.1. For every a = (α_k)_{k=1}^{∞} ∈ l2 and b = (β_k)_{k=1}^{∞} ∈ l2 we define:
a + b = (α_k + β_k)_{k=1}^{∞}
λa = (λα_k)_{k=1}^{∞}
Then l2 is a complex linear space.
Proof. Using the Cauchy inequality in C^2 we have:
|〈(α_k, β_k), (1, 1)〉|^2 = |α_k · 1 + β_k · 1|^2 ≤ (|α_k|^2 + |β_k|^2)(1 + 1) = 2(|α_k|^2 + |β_k|^2)
so:
Σ_{k=1}^{∞} |α_k + β_k|^2 ≤ 2(Σ_{k=1}^{∞} |α_k|^2 + Σ_{k=1}^{∞} |β_k|^2) < ∞.
So a + b ∈ l2. Analogously, from Σ_{k=1}^{∞} |λα_k|^2 = |λ|^2 Σ_{k=1}^{∞} |α_k|^2 < ∞ we conclude that λa ∈ l2.
All the conditions required in the definition of a linear space on C are obviously verified. □
Theorem 3.2. For every a = (α_k)_{k=1}^{∞} ∈ l2 and b = (β_k)_{k=1}^{∞} ∈ l2 the series Σ_{k=1}^{∞} α_k \overline{β_k} converges. With the inner product defined by:
〈a, b〉 = Σ_{k=1}^{∞} α_k \overline{β_k}
l2 is a prehilbertian space on C.
Proof. From 0 ≤ (|α_k| − |β_k|)^2 we conclude:
2 |α_k| |β_k| ≤ |α_k|^2 + |β_k|^2
Σ_{k=1}^{∞} |α_k| |β_k| ≤ (1/2)(Σ_{k=1}^{∞} |α_k|^2 + Σ_{k=1}^{∞} |β_k|^2) < ∞.
Hence the series Σ_{k=1}^{∞} α_k \overline{β_k} converges (absolutely). It is easy to prove that l2 is prehilbertian. □
Theorem 3.3. l2 is complete with respect to the norm induced by the inner product.
Proof. Let a_n = (α_{k,n})_{k=1}^{∞} and let (a_n)_{n=1}^{∞} be a Cauchy sequence in l2. From
|α_{k,m} − α_{k,n}|^2 ≤ Σ_{k=1}^{∞} |α_{k,m} − α_{k,n}|^2 = ‖a_m − a_n‖^2
we see that for every k ≥ 1 the sequence (α_{k,n})_{n=1}^{∞} is a Cauchy sequence in C, and so it converges to a number α_k ∈ C.
Let a = (α_k)_{k=1}^{∞}. For every ε > 0 we have:
‖a_m − a_n‖^2 = Σ_{k=1}^{∞} |α_{k,m} − α_{k,n}|^2 < ε^2 for every m, n > n̄(ε)
and so for every fixed index h ≥ 1 we have:
Σ_{k=1}^{h} |α_{k,m} − α_{k,n}|^2 < ε^2 for every m, n > n̄(ε).
For m → +∞ we obtain:
Σ_{k=1}^{h} |α_k − α_{k,n}|^2 ≤ ε^2 for every n > n̄(ε) and h ≥ 1
and for h → ∞ we have:
(3) Σ_{k=1}^{∞} |α_k − α_{k,n}|^2 ≤ ε^2 for every n > n̄(ε)
The sequence a − a_n = (α_k − α_{k,n})_{k=1}^{∞} is thus in l2, and since l2 is a linear space the sequence a = (a − a_n) + a_n is also in l2. Hence from (3) we conclude that
‖a − a_n‖ ≤ ε for every n > n̄(ε), that is, lim_{n→∞} a_n = a.
Thus every Cauchy sequence in l2 converges to an element of l2. □
Theorem 3.4. l2 contains a countable orthonormal family.
Proof. Let e_n = (δ_{n,k})_{k=1}^{∞} for n ≥ 1. Then 〈e_m, e_n〉 = δ_{m,n} for every m, n ≥ 1. □
Theorem 3.5. l2 is separable.
Proof. Let l′ be the set of all finite sequences of complex rational numbers, that is:
l′ = {a′ = (α′_k)_{k=1}^{∞} : α′_k ∈ C, Re α′_k, Im α′_k ∈ IQ for k ≥ 1, α′_k = 0 for k > n(a′)}
Obviously l′ ⊂ l2. Moreover l′ is countable. In fact the set of all complex rational numbers has the same cardinality as the set of all pairs of rational numbers, and so it's countable. For every n ≥ 1, the subset of l′ constituted by those elements a′ = (α′_k)_{k=1}^{∞} ∈ l′ such that α′_k = 0 for every k ≥ n is countable. The set l′ is the countable union of all these countable subsets for n ≥ 1, and so it's countable.
The set l′ is everywhere dense in l2. In fact let a = (α_k)_{k=1}^{∞} ∈ l2 and let ε > 0. We choose n ≥ 1 such that Σ_{k=n+1}^{∞} |α_k|^2 < ε^2/2, and a′ = (α′_k)_{k=1}^{∞} ∈ l′ such that α′_k = 0 for k ≥ n + 1 and |α_k − α′_k|^2 < ε^2/(2n) for 1 ≤ k ≤ n (this is possible because the complex rational numbers are everywhere dense in C).
Then we have:
‖a − a′‖^2 = Σ_{k=1}^{n} |α_k − α′_k|^2 + Σ_{k=n+1}^{∞} |α_k − α′_k|^2 < n · ε^2/(2n) + ε^2/2 = ε^2. □
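The two-part ε^2/2 splitting used in this proof can be illustrated on a concrete element of l2. The sketch below (illustrative; the element a = (1/k), the cutoff n = 300 and ε = 0.1 are choices of this illustration) shows both error contributions falling below ε^2/2:

```python
from fractions import Fraction

# Illustrative sketch of the approximation step in Theorem 3.5 for the
# element a = (1/k) of l^2: truncate at n and use rational entries.

def tail2(n, upto=10**6):
    """Numerically approximates sum_{k > n} 1/k^2, the squared truncation error."""
    return sum(1.0 / k ** 2 for k in range(n + 1, upto))

eps = 0.1
n = 300
# sum_{k > n} 1/k^2 < 1/n, so the tail is below eps^2 / 2
assert tail2(n) < eps ** 2 / 2

# a' in l': finitely many rational entries (here 1/k is already rational)
a_prime = [Fraction(1, k) for k in range(1, n + 1)]
rounding_err2 = sum((1.0 / k - float(q)) ** 2 for k, q in zip(range(1, n + 1), a_prime))
assert rounding_err2 < eps ** 2 / 2
# hence ||a - a'||^2 < eps^2, as in the proof
```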
Remark 3.1. Moreover, the proof of Theorem 3.5 shows that the prehilbertian space of all finite sequences of complex numbers (cf. Example 1.3) is everywhere dense in l2. By virtue of Theorem 2.6 it follows that l2 is its completion.
Remark 3.2. By modifying l2 suitably it is possible to give an example of a nonseparable Hilbert space. Instead of sequences (α_k)_{k=1}^{∞} of complex numbers we consider families {α_x}_{x∈IR} of complex numbers. Such a family a = {α_x}_{x∈IR} can be visualized as a function on IR if we define a(x) = α_x.
Let L be the set of all functions a on IR such that:
a) The function a is zero on IR except on a countable set of points (indexes), which may depend on a.
b) The sum of the squares of the absolute values of a at these points is finite. That is:
Σ_{x∈IR} |a(x)|^2 = Σ_{x∈IR} |α_x|^2 < ∞.
We define pointwise:
(a + b)(x) = a(x) + b(x) = {α_x + β_x}_{x∈IR}
(λa)(x) = λa(x) = λα_x.
Then, reasoning as in the proofs of Theorems 3.1, 3.2, 3.3, prove that L is a complex linear space, that
〈a, b〉 = Σ_{x∈IR} a(x) \overline{b(x)} = Σ_{x∈IR} α_x \overline{β_x}
properly defines an inner product on L, and that L is a Hilbert space with respect to this inner product.
We denote by e_y, y ∈ IR, the function defined on IR by e_y(x) = δ_{y,x}. The family {e_y : y ∈ IR} ⊂ L, which has the cardinality of the continuum, is orthonormal. By virtue of Corollary 1.1, ‖e_y − e_z‖ = √2 for y ≠ z. Let B_y be the open sphere with centre e_y and radius (1/2)√2. For z ≠ y the spheres B_z and B_y are disjoint. In fact if a ∈ B_z ∩ B_y we would have ‖e_z − e_y‖ ≤ ‖e_z − a‖ + ‖a − e_y‖ < (1/2)√2 + (1/2)√2 = √2, which is false.
Now, let L′ be any everywhere dense subset of L. Then every sphere B_y, y ∈ IR, must contain at least one element of L′. Since the spheres B_y are pairwise disjoint, L′ must contain at least as many elements as there are points y in IR. So every everywhere dense subset of L has at least the cardinality of the continuum, and so L is certainly nonseparable.
Exercise 3.1. Let U = {a = (α_k)_{k=1}^{∞} : α_k ∈ C, |α_k| < 1/k, k ≥ 1}.
Prove that:
a) U ⊂ l2
b) Every sequence (a_n)_{n=1}^{∞} ⊂ U contains a convergent subsequence
c) For n ≥ 1 let e_n = (δ_{n,k})_{k=1}^{∞}; then no subsequence of the sequence (e_n)_{n=1}^{∞} converges.
4. The Hilbert space L2
Now we will exhibit a Hilbert space that is the completion of the prehilbertian space of the continuous functions C[a, b] (cf. Example 1.4).
We denote by M[a, b] the set of all complex-valued measurable functions on [a, b], and by L1[a, b] the subset of all Lebesgue integrable functions on [a, b]. Let L2[a, b] be the subset of all complex-valued square-summable functions on [a, b], that is:
L2[a, b] = {f ∈ M[a, b] : ∫_a^b |f(x)|^2 dx < ∞}.
Note: two functions which coincide a.e. will be identified.
Theorem 4.1. The set L2[a, b] is a complex linear space under addition and multiplication by scalars.
Proof. If f ∈ L2, g ∈ L2 and λ ∈ C, the functions f + g, λf ∈ M[a, b]. Moreover:
|f(x) + g(x)|^2 ≤ 2(|f(x)|^2 + |g(x)|^2) ^5
∫_a^b |f(x) + g(x)|^2 dx ≤ 2 { ∫_a^b |f(x)|^2 dx + ∫_a^b |g(x)|^2 dx } < ∞
∫_a^b |λf(x)|^2 dx = |λ|^2 ∫_a^b |f(x)|^2 dx < ∞
In conclusion f + g, λf ∈ L2[a, b]. Moreover L2[a, b] verifies the conditions required in the definition of a complex linear space. □
Theorem 4.2. If f, g ∈ L2[a, b], the function f \overline{g} is integrable on [a, b], and with the inner product defined by:
〈f, g〉 = ∫_a^b f(x) \overline{g(x)} dx,
L2[a, b] is a complex prehilbertian space.
Proof. The function f \overline{g} is Lebesgue measurable on [a, b]. Moreover:
2 |f(x)| |g(x)| ≤ |f(x)|^2 + |g(x)|^2
∫_a^b |f(x)| |g(x)| dx ≤ (1/2) { ∫_a^b |f(x)|^2 dx + ∫_a^b |g(x)|^2 dx } < ∞.
So f \overline{g} ∈ L1[a, b]. All the properties of the inner product are easily verified. □
Corollary 4.1. L2[a, b] ⊂ L1[a, b].
Proof. Let f ∈ L2[a, b]. From the Hölder inequality it follows:
∫_a^b |f(x)| · 1 dx ≤ ( ∫_a^b |f(x)|^2 dx )^{1/2} (b − a)^{1/2} < ∞.
So L2[a, b] ⊂ L1[a, b]. □
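The bound in this proof is easy to see on a concrete function. The sketch below (illustrative; midpoint quadrature and the sample function f(x) = x on [0, 1] are choices of this illustration) compares the two sides of the inequality:

```python
import math

# Illustrative sketch of Corollary 4.1:
# integral |f| <= (integral |f|^2)^(1/2) * (b - a)^(1/2), via quadrature.

def midpoint(h, a, b, n=10000):
    """Midpoint-rule approximation of the integral of h over [a, b]."""
    step = (b - a) / n
    return sum(h(a + (k + 0.5) * step) for k in range(n)) * step

a, b = 0.0, 1.0
f = lambda x: x
lhs = midpoint(lambda x: abs(f(x)), a, b)                                     # ~ 1/2
rhs = math.sqrt(midpoint(lambda x: abs(f(x)) ** 2, a, b)) * math.sqrt(b - a)  # ~ 1/sqrt(3)
assert lhs <= rhs + 1e-9
```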
Theorem 4.3. L2[a, b] is complete with respect to the norm induced by the inner product.
Theorem 4.4. L2[a, b] contains a countable orthonormal family.
^5 0 ≤ (|f(x)| − |g(x)|)^2 = |f(x)|^2 + |g(x)|^2 − 2|f(x)||g(x)|, so |f(x) + g(x)|^2 ≤ (|f(x)| + |g(x)|)^2 = |f(x)|^2 + |g(x)|^2 + 2|f(x)||g(x)| ≤ 2(|f(x)|^2 + |g(x)|^2).
Proof. The family {e_k}_{k=−∞}^{+∞} ⊂ C[a, b] defined by:
e_k(x) = (1/√(b − a)) e^{2πik(x−a)/(b−a)}, x ∈ [a, b]
has the required properties. □
Theorem 4.5. L2[a, b] is separable.
Proof (sketch). A countable, everywhere dense set in L2[a, b] is the set L′ of all linear combinations of the functions e_k with rational coefficients. □
Theorem 4.6. L2[a, b] is the completion of the prehilbertian space of the continuous functions C[a, b] (cf. Example 1.4).
Remark 4.1. If we consider only real functions and real scalars we obtain the real Hilbert space Lr2[a, b]. With the appropriate adjustments, the theorems already proved for L2[a, b] can be proved for Lr2[a, b] as well.
Remark 4.2. The spaces L2[a, +∞[, L2]−∞, b], L2]−∞, +∞[ are also Hilbert spaces. With the appropriate adjustments, the theorems already proved for L2[a, b] can be proved for them too, except Corollary 4.1, which is valid only for finite a, b.
Analogously for Lr2[a, +∞[, Lr2]−∞, b], Lr2]−∞, +∞[.
CHAPTER 2
GEOMETRY OF THE HILBERT SPACES
1. Subspaces
In the whole chapter we will denote by H a fixed Hilbert space.
Def. 1.1. A nonempty subset M of H is said to be a LINEAR MANIFOLD if:
f + g ∈ M, for every f, g ∈ M
λf ∈ M, for every f ∈ M and λ ∈ K
A closed linear manifold is said to be a SUBSPACE. A SUBSPACE is said PROPER if it doesn't coincide with H.
It's known that in IR^n and C^n every linear manifold is closed. In this case the notions of linear manifold and of subspace coincide (this holds for finite-dimensional spaces too, cf. Theorem 1.4, Banach spaces). In general this isn't true in all Hilbert spaces.
Example 1.1. Let H = l2 and let M be the subset of all finite sequences of l2. Obviously M is a linear manifold, but since M is everywhere dense in l2 we have \overline{M} = l2 ≠ M (see Remark 3.1), so M isn't closed.
Example 1.2. Let H = l2 and let M be the subset of all a = (α_k)_{k=1}^{∞} ∈ l2 with α_1 = 0. Obviously M is a linear manifold. It is easy to prove that M is a subspace.
In fact if b = (β_k)_{k=1}^{∞} ∈ l2 is an accumulation point of M, then for every ε > 0 there exists an element a ∈ M such that ‖b − a‖ < ε. Since |β_1| = |β_1 − α_1| ≤ ‖b − a‖ < ε, it follows that β_1 = 0 and so b ∈ M.
Example 1.3. Let H = L2[a, b] and let M = C[a, b] ⊂ L2[a, b] as in Example 1.4. M is again a linear manifold, but it isn't a subspace: in fact \overline{M} = H ≠ M (cf. Theorem 4.6).
Example 1.4. Let H = L2[a, b] (−∞ ≤ a < b ≤ +∞) and let Y be a Lebesgue measurable subset of the interval ]a, b[.
We define M = {f ∈ L2[a, b] : f(x) = 0 for almost every x ∈ Y} (remember that two functions which are a.e. equal in [a, b] are identified). Obviously M is a linear manifold. We prove that M is a subspace.
If Y is a set of Lebesgue measure zero we have M = L2[a, b] and there is nothing to prove. Suppose then that Y has positive Lebesgue measure. If g ∈ \overline{M}, then for every ε > 0 there exists f ∈ M such that:
‖g − f‖^2 = ∫_a^b |g(x) − f(x)|^2 dx < ε.
This implies:
∫_Y |g(x)|^2 dx = ∫_Y |g(x) − f(x)|^2 dx ≤ ∫_a^b |g(x) − f(x)|^2 dx < ε.
So ∫_Y |g(x)|^2 dx = 0 by the arbitrariness of ε, whence g(x) = 0 a.e. in Y, that is, g ∈ M, and so M is closed.
Remark 1.1. The whole space H and the subset {0} consisting of the single element 0 are obviously subspaces. They are called TRIVIAL.
All the other subspaces are called NONTRIVIAL SUBSPACES.
Remark 1.2. Everything we will say in this paragraph applies to any Banach space L.
The notion of subspace is important for various reasons. One is the fact that a subspace M of a Hilbert space is complete. In fact if (f_n)_{n=1}^{∞} ⊂ M is a Cauchy sequence, then it converges to some element f ∈ H which, since M is closed, must be an element of M.
Moreover, as we will prove, the closure of a linear manifold is a subspace. Since a linear manifold M can be contained in different subspaces, we can say that \overline{M} is the smallest subspace containing M.
More generally, for every given subset A of H there exists a unique smallest subspace containing A.
Theorem 1.1. Let M be a linear manifold of H. Then \overline{M} is a subspace.
Proof. We must prove that if f, g ∈ \overline{M} and λ ∈ K, then:
f + g ∈ \overline{M}
λf ∈ \overline{M}.
Let ε > 0. We choose f_1, g_1 ∈ M such that:
‖f − f_1‖ < ε/2 and ‖g − g_1‖ < ε/2.
Then f_1 + g_1 ∈ M and ‖(f + g) − (f_1 + g_1)‖ ≤ ‖f − f_1‖ + ‖g − g_1‖ < ε. Thus if f + g ∉ M then f + g is an accumulation point of M, and so f + g ∈ \overline{M}.
Analogously one proves that λf ∈ \overline{M}. □
Theorem 1.2. Let {M_σ}_{σ∈Σ} be a nonempty family of linear manifolds. Then the set M = ∩_{σ∈Σ} M_σ is a linear manifold.
If {M_σ}_{σ∈Σ} is a family of subspaces, then M is a subspace.
Proof. From the definition of intersection it follows that if f, g ∈ M = ∩_{σ∈Σ} M_σ and λ ∈ K, then f + g ∈ M and λf ∈ M.
If the M_σ are subspaces, the intersection is nonempty (0 belongs to every subspace) and, being an intersection of closed sets, M is closed. □
Theorem 1.3. Let A be a subset of H. Then there exists a unique subspace M with the following properties:
a) A ⊂ M
b) If N ⊃ A is a subspace then N ⊇ M.
Proof. We consider the family {M_σ}_{σ∈Σ} of all subspaces containing A. This family is nonempty because H belongs to it. Let M = ∩_{σ∈Σ} M_σ. Then by Theorem 1.2, M is a subspace that contains A, and M ⊂ M_σ for every σ ∈ Σ. □
Def. 1.2. If A is a subset of H, then the subspace M associated to A through Theorem 1.3 is called the SUBSPACE GENERATED BY A (or EXTENSION OF A), and we will use the notation:
M = ∨A.
Theorem 1.4. If A is a subset of H, then ∨A is the closure of the set
{ Σ_{k=1}^n αk fk : fk ∈ A, αk ∈ K for 1 ≤ k ≤ n, n ≥ 1 }.
Proof. The set of all finite linear combinations of elements of A is obviously a linear manifold contained in ∨A. Its closure is a subspace, by Theorem 1.1, contained in ∨A. But by property b) of Theorem 1.3 it must coincide with ∨A. �
If M1, M2, . . . are subspaces, we denote by M1 ∨ M2 and ∨_{k=1}^∞ Mk the SUBSPACES GENERATED BY M1 ∪ M2 and ∪_{k=1}^∞ Mk.
Def. 1.3. If M1 and M2 are linear manifolds, then the set:
M1 + M2 = {f1 + f2 : f1 ∈ M1, f2 ∈ M2}
is called the (VECTOR) SUM OF M1 AND M2.
If (Mk)_{k=1}^∞ is a sequence of linear manifolds, then the set:
Σ_{k=1}^∞ Mk = {f ∈ H : f = Σ_{k=1}^∞ fk, fk ∈ Mk ∀ k}
is called the (VECTOR) SUM OF THE LINEAR MANIFOLDS Mk.
In other words, Σ_{k=1}^∞ Mk is the set of the sums of all convergent series Σ_{k=1}^∞ fk with fk ∈ Mk for every k. Obviously this set is a linear manifold, as is M1 + M2. The vector sum of two linear manifolds can be regarded as a particular case of the vector sum of a sequence of linear manifolds, letting Mk = 0 for k > 2. Analogously we can write:
Σ_{k=1}^n Mk = M1 + M2 + . . . + Mn
letting Mk = 0 for k > n.
There is a simple relation between the vector sum and the generated subspace:
Theorem 1.5. If (Mk)_{k=1}^∞ is a sequence of linear manifolds, then:
M1 ∨ M2 = the closure of M1 + M2,
∨_{k=1}^∞ Mk = the closure of Σ_{k=1}^∞ Mk.
Exercise 1.1. Let H = l2. We define:
M1 = {a = (αk)_{k=1}^∞ ∈ l2 : α_{2k} = 0, k = 1, 2, . . .}
M2 = {b = (βk)_{k=1}^∞ ∈ l2 : β_{2k−1} = δk cos(1/k), β_{2k} = δk sin(1/k), k = 1, 2, . . .}
and let c = (γk)_{k=1}^∞ where γ_{2k−1} = 0, γ_{2k} = sin(1/k), for k = 1, 2, . . ..
Prove the following affirmations:
a) M1 and M2 are two subspaces.
b) M1 ∨ M2 = l2.
c) c ∈ l2.
d) c ∉ M1 + M2.
2. Orthogonal subspaces
One way to obtain a subspace, developed in §1, is to start from an arbitrary subset A of H and to consider the subspace ∨A generated by A.
Another way, as we will now see, is to consider the set of all vectors orthogonal to every vector of A.
Def. 2.1. A VECTOR g is said to be ORTHOGONAL TO A SUBSET A ⊂ H, written g ⊥ A, if g ⊥ f for every f ∈ A.
Two SUBSETS A and B of H are said to be MUTUALLY ORTHOGONAL, written A ⊥ B, if f ⊥ g for every f ∈ A and g ∈ B.
The set A⊥ = {g ∈ H : g ⊥ A} is called the ORTHOGONAL COMPLEMENT OF A (IN H).
If N and M are subspaces and M ⊂ N, the set
N − M =¹ {g ∈ N : g ⊥ M} = N ∩ M⊥
is called the ORTHOGONAL COMPLEMENT OF M IN N.
Remark 2.1. If M is a subspace, we can write M⊥ = H − M.²
Remark 2.2. If A and B are two subsets of H, then A ⊂ B =⇒ A⊥ ⊃ B⊥.
Theorem 2.1. If A is a subset of H, then A⊥ is a subspace, and A ∩ A⊥ is either the subspace 0 = {0} or empty (if 0 ∉ A).
Proof. Let g1, g2 ∈ A⊥. Since 〈f, g1〉 = 〈f, g2〉 = 0 for every f ∈ A, it follows that:
〈f, α1g1 + α2g2〉 = ᾱ1 〈f, g1〉 + ᾱ2 〈f, g2〉 = 0, ∀ f ∈ A,
and so α1g1 + α2g2 ∈ A⊥. Hence A⊥ is a linear manifold.
Now let g = lim_{n→∞} gn with gn ∈ A⊥ for n ≥ 1. Then for every f ∈ A:
〈f, g〉 = lim_{n→∞} 〈f, gn〉 = 0, and so g ∈ A⊥.
Thus A⊥ is closed, hence a subspace. Finally, if A ∩ A⊥ ≠ ∅ and f ∈ A ∩ A⊥, then f ⊥ f, that is 〈f, f〉 = 0, and so f = 0. �
Theorem 2.2. If M and N are two orthogonal subspaces, then:
M ∨ N = M + N.
Proof. By virtue of Theorem 1.5 it is sufficient to prove that M + N is closed. Suppose that f lies in the closure of M + N, f = lim_{n→∞} fn, where fn = gn + hn with gn ∈ M and hn ∈ N for n ≥ 1.
Since (gm − gn) ⊥ (hm − hn), the Pythagorean theorem gives:
‖fm − fn‖² = ‖gm − gn‖² + ‖hm − hn‖².
From the convergence of fn to f it follows that:
¹ See Remark 2.3.  ² See Theorem 2.8.
lim_{m,n→∞} ‖fm − fn‖ = 0, and so lim_{m,n→∞} ‖gm − gn‖ = 0 and lim_{m,n→∞} ‖hm − hn‖ = 0.
The sequences (gn)_{n=1}^∞ and (hn)_{n=1}^∞ are Cauchy sequences, so they converge in H, to g and h respectively. Since M and N are closed, it must be that g ∈ M and h ∈ N. Therefore
f = lim_{n→∞} fn = lim_{n→∞} (gn + hn) = g + h ∈ M + N. �
Theorem 2.3. Let {gk}_{k=1}^∞ be an orthogonal family of vectors.
a) The series Σ_{k=1}^∞ gk converges ⇐⇒ Σ_{k=1}^∞ ‖gk‖² < ∞.
b) If the series converges and Σ_{k=1}^∞ gk = f, then Σ_{k=1}^∞ ‖gk‖² = ‖f‖².
Proof. Suppose Σ_{k=1}^∞ ‖gk‖² < ∞. Then:
lim_{m,n→∞} ‖Σ_{k=m}^n gk‖² = lim_{m,n→∞} Σ_{k=m}^n ‖gk‖² = 0,
and from the completeness of H the series Σ_{k=1}^∞ gk converges.
Conversely, if Σ_{k=1}^∞ gk = f, then:
‖f‖² = 〈f, f〉 = 〈 lim_{n→∞} Σ_{k=1}^n gk, lim_{m→∞} Σ_{h=1}^m gh 〉 = lim_{n→∞} Σ_{k=1}^n 〈gk, gk〉 = Σ_{k=1}^∞ ‖gk‖². �
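The identity in b) is the Pythagorean theorem for orthogonal families, and it can be checked numerically in a finite-dimensional space, where an orthogonal family is easy to produce. Below is a minimal sketch in Python with numpy; the particular vectors are an illustrative assumption, not taken from the text.

```python
import numpy as np

# A mutually orthogonal family in IR^5: scaled standard basis vectors g_k = c_k e_k.
coeffs = np.array([3.0, -1.0, 2.0, 0.5, -4.0])
gs = [c * e for c, e in zip(coeffs, np.eye(5))]

f = sum(gs)                               # f = sum of the g_k
lhs = np.dot(f, f)                        # ||f||^2
rhs = sum(np.dot(g, g) for g in gs)       # sum of the ||g_k||^2

# Theorem 2.3 b): the two quantities coincide for orthogonal families.
assert np.isclose(lhs, rhs)
```

In infinite dimension part a) adds the converse direction: square-summability of the norms is exactly what makes the partial sums a Cauchy sequence.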
Theorem 2.4. If (Mk)_{k=1}^∞ is a sequence of mutually orthogonal subspaces, then:
∨_{k=1}^∞ Mk = Σ_{k=1}^∞ Mk.
Proof. It is sufficient to prove that Σ_{k=1}^∞ Mk is closed. Suppose then that f lies in the closure of Σ_{k=1}^∞ Mk, f = lim_{n→∞} fn, and fn = Σ_{k=1}^∞ g_{n,k}, where g_{n,k} ∈ Mk for every n ≥ 1 and k ≥ 1.
Reasoning as in the proof of Theorem 2.2, we can write:
(4) ‖fm − fn‖² = Σ_{k=1}^∞ ‖g_{m,k} − g_{n,k}‖² < ε² for m ≥ n(ε) and n ≥ n(ε).
So:
lim_{m,n→∞} ‖g_{m,k} − g_{n,k}‖² = 0 for k ≥ 1.
Therefore the limits
gk = lim_{n→∞} g_{n,k} ∈ Mk, k ≥ 1,
exist. From (4) we conclude that:
Σ_{k=1}^h ‖g_{m,k} − g_{n,k}‖² < ε² for m ≥ n(ε), n ≥ n(ε), for every h ≥ 1,
and letting m → ∞:
(5) Σ_{k=1}^∞ ‖gk − g_{n,k}‖² ≤ ε² for n ≥ n(ε).
Hence from Theorem 2.3 a) the series Σ_{k=1}^∞ (gk − g_{n,k}) converges, and therefore the following series also converges:
Σ_{k=1}^∞ (gk − g_{n,k}) + Σ_{k=1}^∞ g_{n,k} = Σ_{k=1}^∞ gk.
Moreover, (5) proves that the sum of the last series is the limit of the sequence (fn)_{n=1}^∞:
f = lim_{n→∞} fn = Σ_{k=1}^∞ gk, with gk ∈ Mk for every k. �
Theorem 2.5. Let (Mk)_{k=1}^∞ be a sequence of mutually orthogonal subspaces. Then for every f ∈ ∨_{k=1}^∞ Mk there exists a unique vector fk ∈ Mk for every k ≥ 1 such that f = Σ_{k=1}^∞ fk.
Proof. Suppose f = Σ_{k=1}^∞ f′k = Σ_{k=1}^∞ f″k with f′k, f″k ∈ Mk for every k ≥ 1. Then Σ_{k=1}^∞ (f′k − f″k) = 0, hence from Theorem 2.3 b) it follows that Σ_{k=1}^∞ ‖f′k − f″k‖² = 0; therefore f′k = f″k for every k ≥ 1. �
Corollary 2.1. If M1 and M2 are two orthogonal subspaces, then for every vector f ∈ M1 ∨ M2 = M1 + M2 the decomposition f = f1 + f2 with f1 ∈ M1, f2 ∈ M2 is unique.
Theorem 2.6. Let M be a subspace and let f ∈ H. If δ = inf{‖f − g‖ : g ∈ M}, then there exists a unique vector PMf ∈ M, CALLED THE PROJECTION OF f ON M, such that:
‖f − PMf‖ = δ.
Proof. Let (gn)_{n=1}^∞ ⊂ M be a sequence of vectors of M such that lim_{n→∞} ‖f − gn‖ = δ. Applying the parallelogram law to the vectors (f − gm) and (f − gn) we obtain:
‖2f − (gm + gn)‖² + ‖gm − gn‖² = 2‖f − gm‖² + 2‖f − gn‖²,
that is:
‖gm − gn‖² = 2‖f − gm‖² + 2‖f − gn‖² − 4‖f − (1/2)(gm + gn)‖².
Since (1/2)(gm + gn) ∈ M, it necessarily follows that ‖f − (1/2)(gm + gn)‖ ≥ δ and so:
‖gm − gn‖² ≤ 2‖f − gm‖² + 2‖f − gn‖² − 4δ².
As m, n → ∞ the right-hand side tends to 0, so (gn) is a Cauchy sequence. Therefore the limit lim_{n→∞} gn = PMf exists in M, and ‖f − PMf‖ = lim_{n→∞} ‖f − gn‖ = δ.
Finally, suppose f1, f2 ∈ M are such that:
‖f − f1‖ = ‖f − f2‖ = δ.
Applying the parallelogram law to (f − f1) and (f − f2):
‖2f − (f1 + f2)‖² + ‖f1 − f2‖² = 2‖f − f1‖² + 2‖f − f2‖²,
that is:
‖f1 − f2‖² = 4δ² − 4‖f − (1/2)(f1 + f2)‖² ≤ 0,
since (1/2)(f1 + f2) ∈ M and so ‖f − (1/2)(f1 + f2)‖ ≥ δ. Hence f1 = f2. �
Theorem 2.7. Let M be a subspace and let f ∈ H. Denote by PMf ∈ M the projection of f on M. Then (f − PMf) ⊥ M.
[Figure: in H = IR³ with M = IR², the vector f, its projection PMf ∈ M, and f − PMf, with ‖f − PMf‖ = δ.]
Figure 1. f − PMf is orthogonal to M
Proof. Let f0 = f − PMf; from Theorem 2.6 we have ‖f0‖ = δ. For every g ∈ M and for every α ∈ K we have PMf + αg ∈ M, and so
δ² = ‖f0‖² ≤ ‖f − PMf − αg‖² = ‖f0 − αg‖² = ‖f0‖² − α 〈g, f0〉 − ᾱ 〈f0, g〉 + |α|² ‖g‖²,
hence
0 ≤ −α 〈g, f0〉 − ᾱ 〈f0, g〉 + |α|² ‖g‖².
Suppose there exists g ∈ M such that 〈f0, g〉 ≠ 0 (which implies g ≠ 0). Choosing α = 〈f0, g〉/‖g‖² we obtain
0 ≤ −2 |〈f0, g〉|²/‖g‖² + |〈f0, g〉|²/‖g‖² = −|〈f0, g〉|²/‖g‖²,
which is a contradiction. �
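In a finite-dimensional space the projection of Theorems 2.6 and 2.7 can be computed by least squares, and the orthogonality of f − PMf to M checked directly. A minimal sketch in Python with numpy follows; the subspace and the vector f are random illustrative choices, not data from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.normal(size=(6, 3))          # columns of B span a subspace M of IR^6
f = rng.normal(size=6)

# PMf is the element of M closest to f: solve the least-squares problem B x ~ f.
x, *_ = np.linalg.lstsq(B, f, rcond=None)
PMf = B @ x

# Theorem 2.7: f - PMf is orthogonal to M (i.e. to every column of B).
assert np.allclose(B.T @ (f - PMf), 0)

# Theorem 2.6: PMf realizes the minimal distance; any other g in M is farther from f.
g = B @ (x + rng.normal(size=3))
assert np.linalg.norm(f - PMf) <= np.linalg.norm(f - g)
```

The normal equations behind `lstsq` are precisely the orthogonality condition of Theorem 2.7 written in coordinates.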
Corollary 2.2. Let M be a linear manifold contained in a subspace N. Then M̄ ≠ N ⇐⇒ there exists a vector f ∈ N different from 0 and orthogonal to M.
Proof. If M̄ ≠ N, take any vector f of N not belonging to M̄ (such an f is necessarily different from zero). Then the vector f0 = f − P_{M̄}f has all the required properties, by Theorem 2.7.
Conversely, if M̄ = N, let f be a vector of N orthogonal to M. We prove that f must necessarily be 0. In fact f = lim_{n→∞} fn with fn ∈ M for n ≥ 1, therefore
〈f, f〉 = lim_{n→∞} 〈f, fn〉 = 0. �
Corollary 2.3. Let M be a linear manifold. Then:
M̄ = H ⇐⇒ M⊥ = 0.
Theorem 2.8 (Projection theorem). If M is a subspace, then H = M + M⊥.
Proof. For every f ∈ H we can write f = PMf + (f − PMf), where PMf ∈ M by Theorem 2.6 and (f − PMf) ∈ M⊥ by Theorem 2.7. �
From the corollary of Theorem 2.5 we can restate the projection theorem as follows:
Theorem 2.9. If M is a subspace, then every vector f ∈ H can be decomposed as the sum f = f1 + f2, where f1(= PMf) ∈ M and f2(= PM⊥f) ∈ M⊥.
Remark 2.3. If in Theorem 2.8 we consider, in place of H, a subspace N ⊃ M, then the set M⊥ = {f ∈ H : f ⊥ M} must be replaced by N ∩ M⊥ = {f ∈ N : f ⊥ M}. The following corollary justifies the notation N ∩ M⊥ = N − M.
Corollary 2.4. If N and M are two subspaces with M ⊂ N , then:
N = M + (N ∩M⊥) = M + (N −M).
Example 2.1. Let H = L2[a, b] (−∞ ≤ a < b ≤ +∞) and let Y be a Lebesgue-measurable subset of ]a, b[. For every f ∈ H we define the functions f1, f2:
f1(x) = 0 if x ∈ Y, f1(x) = f(x) if x ∈ ]a, b[ \ Y;
f2(x) = f(x) if x ∈ Y, f2(x) = 0 if x ∈ ]a, b[ \ Y.
From ∫_a^b |fk(x)|² dx ≤ ∫_a^b |f(x)|² dx < ∞ for k = 1, 2 we conclude that fk ∈ L2[a, b], k = 1, 2. Moreover:
f1 ∈ M1 = {g ∈ L2[a, b] : g(x) = 0 a.e. in Y}
f2 ∈ M2 = {g ∈ L2[a, b] : g(x) = 0 a.e. in ]a, b[ \ Y}
f = f1 + f2
〈f1, f2〉 = ∫_a^b f1(x) f2(x)‾ dx = 0.
Since, by Theorem 2.9, the decomposition of any vector f as the sum of a vector of Mk⊥ and a vector of Mk is unique, we conclude that:
fk = P_{Mk} f for k = 1, 2, and M2 = M1⊥.
Later on we will write A⊥⊥ in place of (A⊥)⊥.
Theorem 2.10. Let A be a subset of H. Then ∨A = A⊥⊥ and consequently A⊥ = A⊥⊥⊥. In particular: A is a subspace ⇐⇒ A = A⊥⊥.
Proof. Let f ∈ ∨A be the limit of a convergent sequence (fn)n where each fn is a finite linear combination of vectors of A (cf. Theorem 1.4). Then for every g ∈ A⊥ we have 〈f, g〉 = lim_{n→∞} 〈fn, g〉 = 0, and so f ⊥ g. This implies that ∨A ⊥ A⊥ and ∨A ⊂ A⊥⊥. If ∨A ≠ A⊥⊥ then, by Corollary 2.2, there would exist a nonzero vector f ∈ A⊥⊥ orthogonal to ∨A, in particular f ∈ A⊥⊥ ∩ A⊥. This, however, is impossible by Theorem 2.1.
The second part of the theorem is obtained by substituting A⊥ for A. �
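In finite dimension every linear manifold is already closed, so Theorem 2.10 reads span(A) = A⊥⊥ exactly. The sketch below (Python with numpy; the spanning vectors are an arbitrary illustrative choice) computes orthogonal complements via the SVD and checks that taking the complement twice returns the original span.

```python
import numpy as np

def orth_complement(V):
    # Orthonormal basis of the orthogonal complement (in IR^n) of the column span of V.
    U, s, _ = np.linalg.svd(V, full_matrices=True)
    rank = int(np.sum(s > 1e-10))
    return U[:, rank:]

A = np.array([[1.0, 0.0],
              [2.0, 1.0],
              [0.0, 3.0],
              [1.0, 1.0]])             # two independent vectors spanning a plane in IR^4

perp = orth_complement(A)              # a basis of A-perp
perp2 = orth_complement(perp)          # a basis of A-perp-perp

P = perp2 @ perp2.T                    # orthogonal projector onto A-perp-perp
assert np.allclose(P @ A, A)           # span(A) is contained in A-perp-perp
assert perp2.shape[1] == 2             # same dimension, so the two spans coincide
```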
Exercise 2.1. Let H = l2. For every n ≥ 1 let en = (δ_{n,k})_{k=1}^∞ ∈ l2 and let A = {e_{2n−1} + e_{2n}}_{n=1}^∞.
a) Identify ∨A and A⊥ in l2.
b) Prove that if a = (αk)_{k=1}^∞ ∈ l2, then P_{∨A} a = (βk)_{k=1}^∞, where β_{2n−1} = β_{2n} = (1/2)(α_{2n−1} + α_{2n}) for n ≥ 1, and P_{A⊥} a = (γk)_{k=1}^∞, with γ_{2n−1} = −γ_{2n} = (1/2)(α_{2n−1} − α_{2n}) for n ≥ 1.
3. Bases
Def. 3.1. Let M be a subspace of a Hilbert space H. An ORTHONORMAL FAMILY {eσ}σ∈Σ ⊂ M is said to be MAXIMAL in M if the only vector of M orthogonal to every eσ, σ ∈ Σ, is the vector 0.
A maximal orthonormal family in M is called a BASE of M.
We recall that an orthonormal family is linearly independent (cf. Theorem 1.6).
Example 3.1. Let H = Cn. Then every n-tuple of orthonormal vectors is a base.
Example 3.2. Let H = l2. For every integer k ≥ 1 let ek = (0, . . . , 0, 1, 0, . . .), where the k-th component is 1 and the others are 0. Then {ek}_{k=1}^∞ is a base, called the standard base of l2. In fact, if 〈f, ek〉 = 0 for every k ≥ 1, then all the components of f must be 0.
Example 3.3. Let H = L2[0, 1]. For every integer k let ek = e^{2πikx}. Then {ek}_{k=−∞}^{+∞} is an orthonormal family (cf. Example 1.5). Suppose that for some f ∈ H we have 〈f, ek〉 = 0 for every integer k. Recall that the set L′ of all finite linear combinations Σ_{k=−m}^m α′k ek with complex rational coefficients α′k is everywhere dense (cf. Theorem 4.5). Obviously 〈f, g〉 = 0 for every vector g ∈ L′. If f ≠ 0 we could choose a vector g ∈ L′ such that:
‖f − g‖ < ‖f‖.
This would imply:
‖f‖² = 〈f, f〉 = 〈f, f〉 − 〈f, g〉 = 〈f, f − g〉 ≤ ‖f‖ ‖f − g‖ < ‖f‖²,
which is a contradiction. So {ek}_{k=−∞}^{+∞} is a base.
Example 3.4. Let H = L2[0, 1] and let e0 = 1, fk(x) = √2 cos 2πkx, gk(x) = √2 sin 2πkx for k = 1, 2, . . .. Prove that the family F = {e0} ∪ {fk}_{k=1}^∞ ∪ {gk}_{k=1}^∞ is orthonormal.
This follows from the fact that fk = (ek + e−k)/√2 and gk = (ek − e−k)/(i√2) for k ≥ 1.
If f is a vector orthogonal to e0 and to fk and gk for every k ≥ 1, then f ⊥ ek for every integer k. Since the family {ek}_{k=−∞}^{+∞} is a base, we conclude that f = 0. Consequently F is a base.
Does every Hilbert space admit a base? We will see that for every separable Hilbert space the answer is yes.
Theorem 3.1 (Gram–Schmidt orthogonalization). Let F = {fk}_{k=1}^χ be a countable family of linearly independent vectors. Then there exists an orthogonal family G = {gk}_{k=1}^χ (with the same cardinality as F) such that gk ≠ 0 and gk is a linear combination of f1, . . . , fk for every k.
Proof. We build the family G by induction. Let g1 = f1 (by hypothesis f1 ≠ 0). Suppose that g1, . . . , g_{k−1} are mutually orthogonal nonzero vectors verifying the properties of the theorem. We define:
(6) gk = fk − Σ_{h=1}^{k−1} (〈fk, gh〉/‖gh‖²) gh.
It results, for 1 ≤ n ≤ k − 1:
〈gk, gn〉 = 〈fk, gn〉 − Σ_{h=1}^{k−1} (〈fk, gh〉/‖gh‖²) 〈gh, gn〉 = 〈fk, gn〉 − (〈fk, gn〉/‖gn‖²) 〈gn, gn〉 = 0.
Moreover gk is a linear combination of f1, . . . , fk. Since f1, . . . , fk are linearly independent, (6) implies gk ≠ 0. �
Corollary 3.1. Let F = {fk}_{k=1}^χ and G = {gk}_{k=1}^χ be as in Theorem 3.1. Then the following affirmations hold:
a) fk is a linear combination of g1, . . . , gk for 1 ≤ k ≤ χ;
b) ∨{fk}_{k=1}^χ = ∨{gk}_{k=1}^χ;
c) the family {ek = gk/‖gk‖}_{k=1}^χ is an orthonormal family verifying Theorem 3.1;
d) if {hk}_{k=1}^χ is another orthogonal family of nonzero vectors verifying Theorem 3.1, then hk = αk gk with αk ≠ 0 for 1 ≤ k ≤ χ.
Theorem 3.2. A Hilbert space H is separable ⇐⇒ it has a countable (finite orinfinite) base.
Theorem 3.3. Let {ek}_{k=1}^χ be an orthonormal family of H. The following affirmations are equivalent:
a) {ek}_{k=1}^χ is a base;
b) f ⊥ ek for every k ≥ 1 =⇒ f = 0;
c) H = ∨{ek}_{k=1}^χ;
d) f = Σ_{k=1}^χ 〈f, ek〉 ek for every f ∈ H (Fourier series);
e) 〈f, g〉 = Σ_{k=1}^χ 〈f, ek〉 〈g, ek〉‾ for every f, g ∈ H (Parseval identity);
f) ‖f‖² = Σ_{k=1}^χ |〈f, ek〉|² for every f ∈ H (Parseval identity).
Remark 3.1. In d) the vector f is represented by a Fourier series with respect to the base {ek}_{k=1}^χ. In this representation 〈f, ek〉 is called the Fourier coefficient corresponding to ek. Though affirmation f) is called the Parseval identity, this name is sometimes given to e) as well. Indeed, e) is only in appearance more general than f): by Theorem 3.3 the two affirmations are equivalent.
Remark 3.2. All the affirmations of Theorem 3.3 remain true if the index k runs over a different index set, for instance from −∞ to +∞ as in Example 3.3.
Example 3.5. Let H = L2[0, 1] and consider the base of Example 3.4. For every real function f ∈ L2[0, 1] we define:
α0 = ∫_0^1 f(x) dx,  αk = ∫_0^1 f(x) cos 2kπx dx,  βk = ∫_0^1 f(x) sin 2kπx dx,
for k = 1, 2, . . ..
In the notations of Example 3.4 we have:
α0 = 〈f, e0〉,  αk = (1/√2) 〈f, fk〉,  βk = (1/√2) 〈f, gk〉,  k = 1, 2, . . . .
From the Parseval identity we obtain:
∫_0^1 f²(x) dx = α0² + 2 Σ_{k=1}^∞ (αk² + βk²).
The following two corollaries of Theorem 3.3 are two of the several versions of the so-called Riesz–Fischer theorem.
Corollary 3.2. If H is a separable complex (real) Hilbert space and if {ek}_{k=1}^χ is a base, then:
H = { Σ_{k=1}^χ αk ek : αk ∈ C (resp. αk ∈ IR) for k ≥ 1, Σ_{k=1}^χ |αk|² < ∞ }.
Proof. If Σ_{k=1}^χ |αk|² < ∞ then from Theorem 2.3 a) the series (or the finite sum) Σ_{k=1}^χ αk ek converges. Conversely, every vector of H admits a Fourier representation having properties d) and f) of Theorem 3.3. �
Corollary 3.3. Let {ek}_{k=1}^χ be a base of a separable complex (or real) Hilbert space and let {αk}_{k=1}^χ be a sequence in C (or in IR) such that Σ_{k=1}^χ |αk|² < +∞. Then there exists a unique vector f ∈ H such that 〈f, ek〉 = αk for every k ≥ 1.
Proof. The vector f = Σ_{k=1}^χ αk ek satisfies the requirements and, by Theorem 3.3 d), is unique. �
Theorem 3.4. Any two bases of a separable Hilbert space H have the same cardinal number.
Def. 3.2. The cardinal number of a base of a Hilbert space H is called the DIMENSION OF H.
IRn and Cn have dimension n; l2 and L2[a, b] have dimension ℵ0.
4. Isomorphisms
We will prove that every infinite-dimensional separable Hilbert space is a copy of l2.
Theorem 4.1. If two separable Hilbert spaces H and H′ have the same (finite or infinite) dimension, then there exists a bijective application:
U : H → H′, f 7→ Uf
such that for every f, g ∈ H and ∀ λ ∈ C we have:
a) U(f + g) = Uf + Ug;
b) U(λf) = λ Uf;
c) 〈Uf, Ug〉 = 〈f, g〉 (note: =⇒ ‖Uf‖ = ‖f‖).
Proof. Let {ek}_{k=1}^χ and {e′k}_{k=1}^χ be bases for H and H′ respectively (χ ≤ ∞). For every f ∈ H we define:
f′ = Uf = Σ_{k=1}^χ 〈f, ek〉 e′k.
This definition is meaningful since, from the Parseval identity,
Σ_{k=1}^χ |〈f, ek〉|² = ‖f‖² < ∞,
and by virtue of Theorem 2.3 the series Σ_{k=1}^χ 〈f, ek〉 e′k converges in H′. Moreover, Uek = e′k, ∀ k ≥ 1.
Prove that U verifies a), b) and c), and that it is a bijection. �
Intuitively, Theorem 4.1 says that we can identify, through U, the elements of H and H′, so that each of these spaces appears (algebraically and topologically) as a perfect copy of the other.
Def. 4.1. An APPLICATION A : D → H′, where D is a linear manifold of H and H, H′ are two Hilbert K-spaces, is said to be LINEAR if:
a) A(f + g) = Af + Ag, ∀ f, g ∈ D;
b) A(λf) = λ Af, ∀ f ∈ D, ∀ λ ∈ K.
Def. 4.2. An APPLICATION A : D → H′, where D is a linear manifold of H and H, H′ are two Hilbert K-spaces, is said to be an ISOMETRY if:
c) 〈Af, Ag〉 = 〈f, g〉, ∀ f, g ∈ D.
Def. 4.3. A linear, isometric and surjective application A : H → H′, where H and H′ are Hilbert K-spaces, is called an ISOMORPHISM OF H ONTO H′.
An isomorphism of H onto itself is called an AUTOMORPHISM.
Remark 4.1. A linear isometry is automatically injective³, so an isomorphism is bijective.
If there exists an isomorphism of H onto H′, then there exists an isomorphism of H′ onto H.
Def. 4.4. Two Hilbert spaces H and H′ are said to be isomorphic if there exists an isomorphism of one space onto the other.
Corollary 4.1. Two separable Hilbert spaces are isomorphic ⇐⇒ they have the same dimension.
Proof. Sufficiency follows from Theorem 4.1.
Conversely, let U : H → H′ be an isomorphism and let {ek}_{k=1}^χ be a base for H. Then {Uek}_{k=1}^χ is an orthonormal family of H′, and hence the dimension of H′ is at least χ. Repeating the reasoning in the other direction, we conclude that H and H′ must have the same dimension. �
Corollary 4.2. Every infinite-dimensional separable Hilbert space is isomorphic to l2.
³ f, g ∈ H, f ≠ g =⇒ 〈A(f − g), A(f − g)〉 = 〈f − g, f − g〉 ≠ 0 =⇒ A(f − g) ≠ 0, that is, Af ≠ Ag.
CHAPTER 3
BOUNDED AND LINEAR OPERATORS
1. Bounded and linear applications (operators)
We will denote by H and H′ two Hilbert spaces and by D ⊂ H, D ≠ {0}, a linear manifold.
Def. 1.1. An APPLICATION (OPERATOR) A : D → H′ is said to be LINEAR if:
a) A(f + g) = Af + Ag, ∀ f, g ∈ D;
b) A(λf) = λ Af, ∀ f ∈ D, ∀ λ ∈ K.
Def. 1.2. An APPLICATION (OPERATOR) A : D → H′ is said to be BOUNDED if there exists a real number k ≥ 0 such that:
‖Af‖ ≤ k ‖f‖, ∀ f ∈ D.
Def. 1.3. We define the NORM of a bounded application A : D → H′ as the nonnegative real number:
‖A‖ = sup_{f∈D, f≠0} ‖Af‖/‖f‖.
Remark 1.1. The existence of the norm of a bounded application A : D → H′ is assured by the fact that every nonempty subset of real numbers which is bounded above admits a least upper bound in IR.
Remark 1.2. Strictly speaking, we could define the norm of an application A : D → H′ even if A is not bounded. In this case we would have:
+∞ = sup_{f∈D, f≠0} ‖Af‖/‖f‖ =¹ sup_{f∈D, ‖f‖=1} ‖Af‖,
but we are not interested in this case.
Theorem 1.1. If A : D → H′ is a bounded linear application, we have:
‖Af‖ ≤ ‖A‖ ‖f‖, ∀ f ∈ D
¹ In these passages we make use of the linearity.
and
‖A‖ = inf{k ∈ IR+0 : ‖Af‖ ≤ k‖f‖, ∀ f ∈ D}.
Remark 1.3. The norm of a bounded application A : D → H ′ is zero ⇐⇒Af = 0′ ∈ H ′, ∀ f ∈ D.
Theorem 1.2. If A : D → H′ is a bounded and linear application, we have:
‖A‖ = sup_{f∈D, f≠0} ‖Af‖/‖f‖ = sup_{f∈D, f≠0} ‖A(f/‖f‖)‖ =¹ sup_{f∈D, ‖f‖=1} ‖Af‖ ≤ sup_{f∈D, ‖f‖≤1} ‖Af‖ ≤¹ sup_{f∈D, 0<‖f‖≤1} ‖Af‖/‖f‖ ≤ ‖A‖,
so all these quantities are equal.
Example 1.1. Let U : H → H′ be an isomorphism. Since ‖Uf‖ = ‖f‖, ∀ f ∈ H, it follows that the linear application U is bounded and ‖U‖ = 1.
Example 1.2. Let M be a subspace of H and let PM : H → M be the projection on M. PM is a bounded and linear application, and:
‖PM‖ = 0 if M = {0}; ‖PM‖ = 1 if M ≠ {0}.
In fact (cf. Theorem 2.9):
(7) f = PMf + PM⊥f, PMf ∈ M, PM⊥f ∈ M⊥, for every f ∈ H.
From the equations:
f + g = (PMf + PMg) + (PM⊥f + PM⊥g), PMf + PMg ∈ M, PM⊥f + PM⊥g ∈ M⊥,
λf = λPMf + λPM⊥f, λPMf ∈ M, λPM⊥f ∈ M⊥,
and from the uniqueness of this decomposition (cf. Corollary 2.1), we conclude that PM is linear. From (7) it results:
‖f‖² = ‖PMf‖² + ‖PM⊥f‖² ≥ ‖PMf‖².
So PM is bounded and ‖PM‖ ≤ 1. For M = {0}, obviously ‖PM‖ = 0; otherwise, for every f ∈ M \ {0} we have PMf = f, and so ‖PM‖ = 1.
¹ In these passages we make use of the linearity.
Example 1.3. Let D be the linear manifold of all finite sequences of l2 (sequences with only finitely many nonzero terms) and let B : D → l2 be defined by:
B(αk)_{k=1}^∞ = (k αk)_{k=1}^∞.
Obviously B is linear. Denoting by {ek}_{k=1}^∞ the standard base of l2, we have:
‖Bek‖ = k ‖ek‖ = k and lim_{k→∞} ‖Bek‖ = ∞.
So B is unbounded.
Def. 1.4. A LINEAR APPLICATION A : D → H′ is CONTINUOUS if for every f0 ∈ D and for every ε > 0 there exists δ > 0 such that:
‖Af0 − Af‖ < ε for every f ∈ D with ‖f − f0‖ < δ.
Theorem 1.3. A (linear) application A : D → H′ is continuous ⇐⇒ for every sequence (fn)_{n=1}^∞ ⊂ D converging to a limit f0 ∈ D, we have Af0 = A(lim_{n→∞} fn) = lim_{n→∞} Afn.
Proof. =⇒: Fix f0 ∈ D and ε > 0. Take δ > 0 as in Def. 1.4 and suppose:
‖f0 − fn‖ < δ for every n ≥ n(δ).
Then ‖Af0 − Afn‖ < ε for every n ≥ n(δ). Therefore Af0 = lim_{n→∞} Afn.
⇐=: If A is not continuous, then for some f0 ∈ D and some ε > 0 there exists a sequence (fn)_{n=1}^∞ such that ‖f0 − fn‖ < 1/n and ‖Af0 − Afn‖ ≥ ε for every n ≥ 1. Therefore lim_{n→∞} fn = f0, but (Afn)_{n=1}^∞ does not converge to Af0, which is absurd. �
We note that neither in Def. 1.4 nor in Theorem 1.3 is the linearity of A used. We work with linear applications because we are interested in the properties of these applications.
Theorem 1.4. If a linear application A : D → H′ is continuous at a point f0 ∈ D, then it is continuous on D.
Proof. Suppose the application is continuous at f0. Then for every ε > 0 there exists δ > 0 such that:
‖Af0 − Af‖ < ε for every f ∈ D with ‖f − f0‖ < δ.
Let g be another point of D. If ‖f − g‖ < δ we have ‖(f + f0 − g) − f0‖ < δ, so ‖Af0 − A(f + f0 − g)‖ < ε, that is, by the linearity of A, ‖Ag − Af‖ < ε. �
Theorem 1.5. A linear application A : D → H ′ is bounded ⇐⇒ it is continuous.
Proof. =⇒: If ‖A‖ = 0 the assertion is trivial. Suppose then ‖A‖ > 0. Fix f0 ∈ D and ε > 0. Then for every f ∈ D with ‖f0 − f‖ < ε/‖A‖ we have:
‖Af0 − Af‖ ≤ ‖A‖ ‖f0 − f‖ < ε.
⇐=: If A is unbounded, there exists a sequence (fn)_{n=1}^∞ ⊂ D of unit vectors (cf. Remark 1.2) such that ‖Afn‖ ≥ n² for every n ≥ 1. Then the sequence (fn/n)_{n=1}^∞ converges to 0, but ‖A(fn/n)‖ ≥ n for every n ≥ 1. By Theorem 1.3 the application A cannot be continuous, which is absurd. �
Theorem 1.6. A bounded and linear application A : D → H′ can be extended uniquely to a bounded and linear application:
Ā : D̄ → H′, with ‖Ā‖ = ‖A‖.
Proof. D̄ is a subspace. First we show that if such an extension Ā exists, it must be unique.
In fact, if f ∈ D̄ is assigned, there exists a sequence (fn)_{n=1}^∞ ⊂ D such that f = lim_{n→∞} fn. Since Ā must be continuous (cf. Theorem 1.5), this implies:
(8) Āf = lim_{n→∞} Āfn = lim_{n→∞} Afn.
So we will use (8) as the definition of Ā : D̄ → H′.
Prove that the definition is well posed, that is, that it does not depend on the particular sequence converging to f ∈ D̄; that Ā is bounded and linear; and that ‖Ā‖ = ‖A‖. �
From now on H will be a Hilbert space over C, and we will take as norm on C the absolute value.
Def. 1.5. A linear application φ : H → C is called a LINEAR FUNCTIONAL ON H.
Example 1.4. Let h be a vector of H. For every f ∈ H we define φf = 〈f, h〉 ∈ C. Prove that φ is a bounded linear functional on H and that ‖φ‖ = ‖h‖.
Theorem 1.7 (Riesz representation theorem). If φ is a bounded linear functional on H, then there exists a unique vector h ∈ H such that φf = 〈f, h〉 for every f ∈ H; moreover, ‖φ‖ = ‖h‖.
Proof. Let M = {g ∈ H : φg = 0}. Prove that M is a subspace of H. If M = H, then the vector 0 ∈ H verifies the thesis of the theorem.
If M ≠ H, then there must exist a unit vector e ⊥ M (cf. Corollary 2.2). Therefore α = φe ≠ 0 (since e ∉ M); let h = ᾱe, so that h ⊥ M. For every vector f ∈ H:
f = ( f − (φf/|α|²) h ) + (φf/|α|²) h.
From:
φ( f − (φf/|α|²) h ) = φf − (φf/|α|²) ᾱ α = 0
we have:
f1 = f − (φf/|α|²) h = PMf ∈ M,  f2 = (φf/|α|²) h = PM⊥f ∈ M⊥,
and
〈f, h〉 = 〈f1, h〉 + 〈f2, h〉 = 0 + φf.
Prove that ‖φ‖ = ‖h‖ and that the vector h is unique. �
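In Cn the theorem can be made concrete: the representing vector h is recovered from the values of φ on an orthonormal base, since φ(ek) = 〈ek, h〉 is the conjugate of the k-th component of h. A sketch in Python with numpy follows; the hidden vector h and the functional are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
h = rng.normal(size=n) + 1j * rng.normal(size=n)   # the (hidden) representing vector

def phi(f):
    # A bounded linear functional: phi(f) = <f, h> = sum_k f_k conj(h_k).
    return np.vdot(h, f)     # np.vdot conjugates its FIRST argument

# Recover h from phi alone: h_k = conj(phi(e_k)).
e = np.eye(n)
h_recovered = np.conj(np.array([phi(e[k]) for k in range(n)]))
assert np.allclose(h_recovered, h)

# ||phi|| = ||h||: the Cauchy inequality, with equality attained at f = h.
f = rng.normal(size=n) + 1j * rng.normal(size=n)
assert abs(phi(f)) <= np.linalg.norm(h) * np.linalg.norm(f) + 1e-12
assert np.isclose(abs(phi(h)), np.linalg.norm(h) ** 2)
```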
2. Linear operators
Def. 2.1. An application A : D → H, D ⊂ H, is called an OPERATOR IN H.
An application A : H → H is called an OPERATOR ON H.
We will be interested in linear operators. Examples of bounded operators are given by the automorphisms and by the projections on subspaces. A special automorphism is the identity operator I, defined by If = f, ∀ f ∈ H. A special projection is the zero operator, 0.
Example 2.1. Let A : Cn → Cn be a linear operator and let {ek}_{k=1}^n be a base of Cn. The action of A on the vectors ek determines the behaviour of A on the whole of Cn. In fact, suppose
Aeh = Σ_{k=1}^n α_{k,h} ek, 1 ≤ h ≤ n (α_{k,h} = 〈Aeh, ek〉).
Then for every vector Σ_{h=1}^n βh eh ∈ H we obtain:
(9) A(Σ_{h=1}^n βh eh) = Σ_{h=1}^n βh Aeh = Σ_{h=1}^n βh (Σ_{k=1}^n α_{k,h} ek) = Σ_{k=1}^n (Σ_{h=1}^n α_{k,h} βh) ek.
The behaviour of A on Cn is thus described by the matrix:
(10) A′ =
α_{1,1} . . . α_{1,n}
. . . . . . . . . . . .
α_{n,1} . . . α_{n,n}
Conversely, if a matrix A′ is assigned as in (10), then the corresponding operator A defined on Cn by (9) is bounded and linear. In fact, from:
‖A(Σ_{h=1}^n βh eh)‖² = Σ_{k=1}^n |Σ_{h=1}^n α_{k,h} βh|² ≤ Σ_{k=1}^n ( Σ_{h=1}^n |α_{k,h}|² · Σ_{h=1}^n |βh|² ) = ( Σ_{k=1}^n Σ_{h=1}^n |α_{k,h}|² ) ‖Σ_{h=1}^n βh eh‖²
it follows that:
(11) ‖A‖ ≤ ( Σ_{k=1}^n Σ_{h=1}^n |α_{k,h}|² )^{1/2}.
Prove that A is linear.
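The bound (11) says that the operator norm is dominated by the square root of Σ|α_{k,h}|², i.e. by the Frobenius norm of the matrix A′. This can be checked numerically — a sketch in Python with numpy, on a random real matrix chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
Aprime = rng.normal(size=(n, n))            # the matrix (10) of an operator on IR^n

op_norm = np.linalg.norm(Aprime, 2)         # ||A||, the largest singular value
frob = np.sqrt((Aprime ** 2).sum())         # the right-hand side of (11)
assert op_norm <= frob + 1e-12              # inequality (11)

# Def. 1.3: ||A|| dominates ||Af|| on unit vectors; sampled vectors never exceed it.
for _ in range(100):
    f = rng.normal(size=n)
    f /= np.linalg.norm(f)
    assert np.linalg.norm(Aprime @ f) <= op_norm + 1e-12
```

The inequality in (11) is generally strict: the Frobenius norm counts all singular values, the operator norm only the largest.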
Theorem 2.1. Let A and B be two linear operators in H with respective domains DA and DB (linear manifolds). Then the applications A + B, λA and AB defined by:
(A + B)f = Af + Bf, ∀ f ∈ DA ∩ DB
(λA)f = λ(Af), ∀ f ∈ DA
(AB)f = A(Bf), ∀ f ∈ DAB = {f ∈ DB : Bf ∈ DA}
are linear operators in H on the corresponding linear manifolds.
Proof. For exercise. �
Def. 2.2. Two OPERATORS A and B in H with respective domains DA and DB are EQUAL if DA = DB and Af = Bf, ∀ f ∈ DA = DB.
Remark 2.1. Thus, as Example 2.1 suggests, in general AB ≠ BA; indeed, DAB can even be different from DBA.
Corollary 2.1. With the operations defined in Theorem 2.1, the set of all linear operators on H is a linear space verifying the following properties:
a) (AB)C = A(BC);
b) A(B + C) = AB + AC, (A + B)C = AC + BC;
c) (αA)B = A(αB) = α(AB);
d) IA = AI = A;
e) 0A = A0 = 0.
Proof. For exercise. �
Theorem 2.2. Let A and B be two bounded and linear operators on H. Then A + B, λA and AB verify the following:
(12) ‖A + B‖ ≤ ‖A‖ + ‖B‖
‖λA‖ = |λ| ‖A‖
(13) ‖AB‖ ≤ ‖A‖ ‖B‖
Proof. ‖A + B‖ = sup_{‖f‖=1} ‖Af + Bf‖ ≤ sup_{‖f‖=1} (‖Af‖ + ‖Bf‖) ≤ sup_{‖f‖=1} ‖Af‖ + sup_{‖f‖=1} ‖Bf‖ = ‖A‖ + ‖B‖.
‖λA‖ = sup_{‖f‖=1} ‖λAf‖ = sup_{‖f‖=1} |λ| ‖Af‖ = |λ| sup_{‖f‖=1} ‖Af‖ = |λ| ‖A‖.
‖(AB)f‖ = ‖A(Bf)‖ ≤ ‖A‖ ‖Bf‖ ≤ ‖A‖ ‖B‖ ‖f‖, whence ‖AB‖ ≤ ‖A‖ ‖B‖. �
The corollary of Theorem 2.1 and Theorem 2.2 prove that the set B of all bounded and linear operators on H is a normed linear space.
Algebraically, the multiplication in B, with the structure described by the corollary of Theorem 2.1, makes B an algebra with unity.
Topologically, the inequality (13) implies that this multiplication is continuous, making B a normed algebra.
Moreover, from the completeness of H it follows that B is complete, that is, a Banach algebra.
Def. 2.3. A (real or complex) linear space U is called a (real or complex) ALGEBRA if for every (a, b) ∈ U × U there exists a unique element ab ∈ U, called the product of a and b, verifying the following conditions:
a) associative rule: (ab)c = a(bc);
b) distributive rule: a(b + c) = ab + ac, (a + b)c = ac + bc;
c) (αa)b = a(αb) = α(ab).
The element e ∈ U is called a unity if:
d) ea = ae = a.
The algebra U is said to be commutative if:
e) commutative rule: ab = ba.
Remark 2.2. If a unity exists in U, by d) it is uniquely determined.
In an algebra U we have 0a = a0 = 0. In fact 0a = (a − a)a = a² − a² = 0 and a0 = a(a − a) = a² − a² = 0.
Def. 2.4. A normed linear space U which is also an algebra is called a NORMED ALGEBRA if for all a, b ∈ U:
‖ab‖ ≤ ‖a‖ ‖b‖.
A complete normed algebra is called a BANACH ALGEBRA.
Theorem 2.3. Under the operations defined in Theorem 2.1, the set B of all bounded and linear operators on H is a unitary Banach algebra.
Proof. From the corollary of Theorem 2.1 and from Theorem 2.2, it follows that B is a unitary normed algebra. It remains to prove that B is complete.
Let (An)_{n=1}^∞ be a Cauchy sequence in B, that is:
lim_{m,n→∞} ‖Am − An‖ = 0.
As a first consequence we have:
lim_{m,n→∞} | ‖Am‖ − ‖An‖ | ≤ lim_{m,n→∞} ‖Am − An‖ = 0,
and so the sequence (‖An‖)_{n=1}^∞ converges. From:
‖Amf − Anf‖ ≤ ‖Am − An‖ ‖f‖
it follows that for every f ∈ H the sequence (Anf)_{n=1}^∞ is a Cauchy sequence, and so it converges. We define an operator A on H:
Af = lim_{n→∞} Anf, ∀ f ∈ H.
Prove that A is bounded and linear.
We now prove that A is the limit of the sequence (An)_{n=1}^∞ in the sense of the operator norm. In fact, for every ε > 0 we can choose n(ε) > 0 such that:
‖Am − An‖ < ε, ∀ m ≥ n(ε), ∀ n ≥ n(ε).
Thus, for every f ∈ H we have:
‖Amf − Anf‖ ≤ ‖Am − An‖ ‖f‖ < ε ‖f‖, ∀ m ≥ n(ε), ∀ n ≥ n(ε),
and, taking the limit as m → ∞:
‖Af − Anf‖ ≤ ε ‖f‖, ∀ n ≥ n(ε),
that is:
‖A − An‖ ≤ ε, ∀ n ≥ n(ε).
Hence A = lim_{n→∞} An and in particular ‖A‖ = lim_{n→∞} ‖An‖. �
Theorem 2.4. If (An)_{n=1}^∞ and (Bn)_{n=1}^∞ are two convergent sequences in B and A = lim_{n→∞} An, B = lim_{n→∞} Bn, then:
lim_{n→∞} AnBn = AB (= lim_{n→∞} An · lim_{n→∞} Bn).
Proof. ‖AnBn − AB‖ = ‖AnBn − AnB + AnB − AB‖ ≤ ‖An(Bn − B)‖ + ‖(An − A)B‖ ≤ ‖An‖ ‖Bn − B‖ + ‖An − A‖ ‖B‖ → 0, since the convergent sequence (‖An‖)_{n=1}^∞ is bounded. �
We note that the assertion of Theorem 2.4 remains true if B is replaced by any normed algebra.
Def. 2.5. A BOUNDED AND LINEAR OPERATOR A : H → H is said to be INVERTIBLE if there exists a bounded and linear operator A−1 : H → H, called the inverse of A, such that:
AA−1 = A−1A = I.
Remark 2.3. An automorphism of H is invertible. But the converse is false: an invertible operator need not preserve the inner product, and so it need not be an automorphism.
Theorem 2.5. An invertible bounded and linear operator is a bijection. Theinverse is unique.
Proof. For exercise. �
Suppose that H is a separable Hilbert space, that {ek}_{k=1}^∞ is a base for H, and that A : H → H is a bounded and linear operator on H. Then, as in the finite-dimensional case, the action of A on H is completely determined by the action of A on the elements of the assigned base.
In fact, from Aeh = Σ_{k=1}^∞ α_{k,h} ek, h ≥ 1, and from the continuity of A, for every vector f = Σ_{h=1}^∞ βh eh ∈ H we have:
〈Af, ek〉 = 〈 Σ_{h=1}^∞ βh Aeh, ek 〉 = Σ_{h=1}^∞ 〈βh Aeh, ek〉 = Σ_{h=1}^∞ α_{k,h} βh,
Af = Σ_{k=1}^∞ 〈Af, ek〉 ek = Σ_{k=1}^∞ ( Σ_{h=1}^∞ α_{k,h} βh ) ek.
The action of A on H can be described by the infinite matrix:
(14) A′ =
α_{1,1} α_{1,2} α_{1,3} . . .
α_{2,1} α_{2,2} α_{2,3} . . .
α_{3,1} α_{3,2} α_{3,3} . . .
. . . . . . . . . . . . . . .
Note that α_{k,h} = 〈Aeh, ek〉, k ≥ 1, h ≥ 1, and Σ_{k=1}^∞ |α_{k,h}|² = ‖Aeh‖² ≤ ‖A‖² < ∞ for h ≥ 1.
The following examples of bounded and linear operators are very important for spectral analysis.
Example 2.2. The linear operator A : l2 → l2 defined by:

Aa = A((αk)∞k=1) = (αk−1)∞k=1, where α0 = 0,

that is A(α1, α2, . . .) = (0, α1, α2, . . .), is called the right translation (shift) operator on l2. Obviously ⟨Aa, Ab⟩ = ∑∞k=1 αk−1 β̄k−1 = ⟨a, b⟩.

Thus A is an isometry of l2 into itself. However, A maps l2 onto a proper subspace, namely the set M of all square-summable sequences whose first term is zero. So A is certainly not invertible.

Describing the behaviour of A in terms of the standard base {ek}∞k=1 of l2 we obtain:

Aeh = eh+1, h ≥ 1.

Thus the matrix A′ associated to the operator A has the form:

A′ =
0 0 0 . . .
1 0 0 . . .
0 1 0 . . .
. . . . . . . .
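A hands-on sketch of the right translation operator (not from the text): a plain Python list models the initial segment of a sequence in l2, with implicit trailing zeros.

```python
def right_shift(a):
    # A(a1, a2, ...) = (0, a1, a2, ...)
    return [0] + list(a)

a = [3, 4]
Aa = right_shift(a)

# A is isometric: the (squared) l2 norm is unchanged
assert sum(x * x for x in Aa) == sum(x * x for x in a)
# but A is not surjective: every image starts with 0
assert Aa[0] == 0
```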
Example 2.3. Let H be a separable Hilbert space with base {ek}∞k=1 and let A′ be an infinite matrix of the form (14) with the additional property that:

α² = ∑∞k=1 ∑∞h=1 |αk,h|² < ∞, α ≥ 0.

(In the example 2.2 this condition is not satisfied by the matrix A′.)

For every vector f = ∑∞h=1 βh eh ∈ H we have, from the Cauchy inequality in l2:

(15) ∑∞k=1 |∑∞h=1 αk,h βh|² ≤ ∑∞k=1 (∑∞h=1 |αk,h|² ∑∞h=1 |βh|²) = α²‖f‖².

The series ∑∞k=1 (∑∞h=1 αk,h βh) ek therefore converges in H. It is thus possible to define an operator A : H → H by:

A(∑∞h=1 βh eh) = ∑∞k=1 (∑∞h=1 αk,h βh) ek, for f = ∑∞h=1 βh eh ∈ H.

The operator A is obviously linear.
From (15) it follows that A is bounded and moreover that:
‖A‖ ≤ α = (∑∞k=1 ∑∞h=1 |αk,h|²)^{1/2}.
3. Bilinear forms
Def. 3.1. A function ϕ : H × H → K is called a BILINEAR FORM (or BILINEAR FUNCTIONAL) on H if for all f, f1, f2, g, g1, g2 ∈ H and all α ∈ K we have:

ϕ(f1 + f2, g) = ϕ(f1, g) + ϕ(f2, g)
ϕ(αf, g) = αϕ(f, g)
ϕ(f, g1 + g2) = ϕ(f, g1) + ϕ(f, g2)
ϕ(f, αg) = ᾱϕ(f, g)

The bilinear form ϕ is said SYMMETRIC if:

ϕ(g, f) = \overline{ϕ(f, g)}, ∀ (f, g) ∈ H × H.

The bilinear form ϕ is said BOUNDED if there exists a real number k ≥ 0 such that:

|ϕ(f, g)| ≤ k‖f‖‖g‖, ∀ (f, g) ∈ H × H.

If ϕ is bounded, then the nonnegative number:

‖ϕ‖ = sup_{f≠0, g≠0} |ϕ(f, g)|/(‖f‖‖g‖)

is called the NORM of ϕ.

Def. 3.2. If ϕ is a bilinear form on H, then the function ϕ̃ on H defined by:

ϕ̃(f) = ϕ(f, f)

is called the QUADRATIC FORM ASSOCIATED to ϕ.

The quadratic form ϕ̃ is said REAL if ϕ̃(f) ∈ IR, ∀ f ∈ H.

The quadratic form ϕ̃ is said BOUNDED if there exists a real M ≥ 0 such that:

|ϕ̃(f)| ≤ M‖f‖², ∀ f ∈ H.

If ϕ̃ is bounded, then the nonnegative number:

‖ϕ̃‖ = sup_{f≠0} |ϕ̃(f)|/‖f‖²

is called the NORM of ϕ̃.
Remark 3.1. For a bilinear form ϕ and for the associated quadratic form ϕ̃ on H we have respectively:

‖ϕ‖ = sup_{f≠0, g≠0} |ϕ(f, g)|/(‖f‖‖g‖) = sup_{f≠0, g≠0} |ϕ(f/‖f‖, g/‖g‖)| = sup_{‖f‖=‖g‖=1} |ϕ(f, g)|

‖ϕ̃‖ = sup_{f≠0} |ϕ̃(f)|/‖f‖² = sup_{f≠0} |ϕ̃(f/‖f‖)| = sup_{‖f‖=1} |ϕ̃(f)|

So:
ϕ is bounded ⇐⇒ the terms of the first chain of equalities are finite;
ϕ̃ is bounded ⇐⇒ the terms of the second chain of equalities are finite.
Every bilinear form ϕ determines the quadratic form ϕ̃ associated to it. But not only is ϕ̃ determined by ϕ: it is also possible to recover from ϕ̃ the original bilinear form ϕ.
Theorem 3.1 (Polar identity).

ϕ(f, g) = (1/4)[ϕ̃(f + g) − ϕ̃(f − g) + iϕ̃(f + ig) − iϕ̃(f − ig)]

Proof. In fact:

ϕ̃(f + g) = ϕ(f + g, f + g) = ϕ(f, f) + ϕ(f, g) + ϕ(g, f) + ϕ(g, g)
ϕ̃(f − g) = ϕ(f − g, f − g) = ϕ(f, f) − ϕ(f, g) − ϕ(g, f) + ϕ(g, g).

Subtracting the second equality from the first, we have:

(16) ϕ̃(f + g) − ϕ̃(f − g) = 2ϕ(f, g) + 2ϕ(g, f).

Substituting g with ig we obtain:

(17) ϕ̃(f + ig) − ϕ̃(f − ig) = −2iϕ(f, g) + 2iϕ(g, f).

Multiplying (17) by i and adding to (16):

ϕ̃(f + g) − ϕ̃(f − g) + iϕ̃(f + ig) − iϕ̃(f − ig) = 4ϕ(f, g). �
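The polar identity can be verified numerically (an illustration not in the text), assuming a form ϕ(f, g) = ⟨f, Mg⟩ that is linear in f and conjugate-linear in g, as in Def. 3.1:

```python
import numpy as np

def inner(x, y):
    # <x, y> = sum_k x_k * conj(y_k)
    return np.sum(x * np.conj(y))

rng = np.random.default_rng(2)
n = 6
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

phi = lambda f, g: inner(f, M @ g)   # a bounded bilinear form
q = lambda f: phi(f, f)              # the associated quadratic form

f = rng.standard_normal(n) + 1j * rng.standard_normal(n)
g = rng.standard_normal(n) + 1j * rng.standard_normal(n)

lhs = phi(f, g)
rhs = 0.25 * (q(f + g) - q(f - g) + 1j * q(f + 1j * g) - 1j * q(f - 1j * g))
assert np.allclose(lhs, rhs)  # the polar identity holds
```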
Corollary 3.1. If ϕ and φ are bilinear forms on H and if ϕ̃ = φ̃, then ϕ = φ.
Corollary 3.2. A linear operator A on H is isometric ⇐⇒ ‖Af‖ = ‖f‖,∀ f ∈ H.
Theorem 3.2. ϕ is symmetric ⇐⇒ ϕ̃ is real.

Proof. If ϕ is symmetric, then \overline{ϕ̃(f)} = \overline{ϕ(f, f)} = ϕ(f, f) = ϕ̃(f).

Conversely, suppose that ϕ̃ is real. From the polar identity and from the equalities:

ϕ̃(f) = ϕ̃(−f) = ϕ̃(if)

we deduce:

ϕ(g, f) = (1/4)[ϕ̃(g + f) − ϕ̃(g − f) + iϕ̃(g + if) − iϕ̃(g − if)] =
= (1/4)[ϕ̃(f + g) − ϕ̃(f − g) + iϕ̃(f − ig) − iϕ̃(f + ig)] = \overline{ϕ(f, g)}.¹ �
Theorem 3.3. ϕ is bounded ⇐⇒ ϕ̃ is bounded. If ϕ is bounded, then:

‖ϕ̃‖ ≤ ‖ϕ‖ ≤ 2‖ϕ̃‖.

Proof. Suppose ϕ bounded. Then:

sup_{‖f‖=1} |ϕ̃(f)| = sup_{‖f‖=1} |ϕ(f, f)| ≤ sup_{‖f‖=‖g‖=1} |ϕ(f, g)| = ‖ϕ‖.

Therefore ‖ϕ̃‖ ≤ ‖ϕ‖.

Conversely, suppose ϕ̃ bounded. From the polar identity it follows, for every f, g ∈ H with ‖f‖ = ‖g‖ = 1:

|ϕ(f, g)| ≤ (1/4)‖ϕ̃‖(‖f + g‖² + ‖f − g‖² + ‖f + ig‖² + ‖f − ig‖²) =
= (1/4)‖ϕ̃‖ · 2(‖f‖² + ‖g‖² + ‖f‖² + ‖g‖²) = ‖ϕ̃‖(‖f‖² + ‖g‖²) = 2‖ϕ̃‖,

using the parallelogram law. Therefore ‖ϕ‖ ≤ 2‖ϕ̃‖. �
Theorem 3.4. If ϕ is bounded and symmetric, then ‖ϕ‖ = ‖ϕ̃‖.

Proof. From the Theorem 3.2, ϕ̃ is real, and from the Theorem 3.3 we have ‖ϕ̃‖ ≤ ‖ϕ‖. It remains to prove that |ϕ(f, g)| ≤ ‖ϕ̃‖ for every f, g ∈ H with ‖f‖ = ‖g‖ = 1.

Suppose that ϕ(f, g) = ρe^{iα}. Denote by f′ the unit vector e^{−iα}f; then

|ϕ(f, g)| = ρ = ϕ(e^{−iα}f, g) = ϕ(f′, g) = (1/4)[ϕ̃(f′ + g) − ϕ̃(f′ − g)]
¹ Here we use the reality of ϕ̃.
since ϕ(f′, g) is real, the purely imaginary terms in the polar identity must vanish. Therefore:

|ϕ(f, g)| ≤ (1/4)‖ϕ̃‖(‖f′ + g‖² + ‖f′ − g‖²) = (1/2)‖ϕ̃‖(‖f′‖² + ‖g‖²) = ‖ϕ̃‖.
�
Theorem 3.5. a) Let A : H → H be a bounded and linear operator. Then the function ϕ : H × H → K defined by:

ϕ(f, g) = ⟨f, Ag⟩

is a bounded bilinear form on H and ‖ϕ‖ = ‖A‖.

b) Conversely, let ϕ : H × H → K be a bounded bilinear form. Then there exists one unique bounded and linear operator A : H → H such that:

ϕ(f, g) = ⟨f, Ag⟩, ∀ (f, g) ∈ H × H.
Corollary 3.3. a) Let A : H → H be a bounded and linear operator. Then the function φ : H × H → K defined by:

φ(f, g) = ⟨Af, g⟩

is a bounded bilinear form on H and ‖φ‖ = ‖A‖.

b) Conversely, let φ : H × H → K be a bounded bilinear form. Then there exists one unique bounded and linear operator A : H → H such that:

φ(f, g) = ⟨Af, g⟩, ∀ (f, g) ∈ H × H.
Corollary 3.4. If A : H → H is a bounded and linear operator, then:

‖A‖ = sup_{‖f‖=‖g‖=1} |⟨f, Ag⟩| = sup_{‖f‖=‖g‖=1} |⟨Af, g⟩|.
4. Added operators

Not all bounded and linear operators A : H → H have an inverse, but each of them always has an operator A∗ : H → H, called its added (adjoint) operator, such that ⟨Af, g⟩ = ⟨f, A∗g⟩. In fact:

Theorem 4.1. Let A : H → H be a bounded and linear operator. Then there exists one unique bounded and linear operator A∗ : H → H, called the ADDED (adjoint) of A, such that:

⟨Af, g⟩ = ⟨f, A∗g⟩, ∀ f ∈ H, ∀ g ∈ H.
Moreover: ‖A∗‖ = ‖A‖.
Proof. From the Corollary 3.3, the equation ϕ(f, g) = ⟨Af, g⟩ defines a bounded bilinear form ϕ. From the Theorem 3.5 b) there exists one unique bounded and linear operator A∗ : H → H such that:

ϕ(f, g) = ⟨f, A∗g⟩, ∀ f ∈ H, ∀ g ∈ H.

Moreover, from the same theorems:

‖A‖ = ‖ϕ‖ = ‖A∗‖. �
Def. 4.1. The ADDED OPERATOR (adjoint) of a bounded and linear operator A : H → H is the bounded and linear operator A∗ : H → H such that:

(18) ⟨Af, g⟩ = ⟨f, A∗g⟩, ∀ f ∈ H, ∀ g ∈ H.
Example 4.1. Let H = Cn and let {ek}nk=1 be a base of Cn. Let A : Cn → Cn be a linear operator given by the matrix A′ = (αk,h)nk,h=1.

If b = ∑nk=1 βk ek, c = ∑nk=1 γk ek ∈ Cn, then:

⟨Ab, c⟩ = ⟨∑nk=1 (∑nh=1 αk,h βh) ek, ∑nk=1 γk ek⟩ = ∑nk=1 ∑nh=1 γ̄k αk,h βh.

We can imagine having obtained this number by multiplying the matrix A′ on the right by the column vector of components βh and on the left by the row vector of components γ̄k. Let A∗′ = (α∗k,h)nk,h=1 be the matrix with elements α∗k,h = \overline{αh,k} (A∗′ is the conjugate transpose of A′ and is called the added (adjoint) matrix of A′). Then:

⟨b, A∗c⟩ = ⟨∑nk=1 βk ek, ∑nk=1 (∑nh=1 α∗k,h γh) ek⟩ = ∑nk=1 ∑nh=1 βk \overline{α∗k,h} γ̄h =
= ∑nk=1 ∑nh=1 γ̄h αh,k βk = ⟨Ab, c⟩.
The operator A∗ corresponding to the matrix A∗′
is so the added of A and theadded operators correspond to the added matrixes.
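The computation of the example can be replayed numerically (an illustration not in the text): the added matrix is the conjugate transpose, and ⟨Ab, c⟩ = ⟨b, A∗c⟩ holds for random vectors.

```python
import numpy as np

def inner(x, y):
    # <x, y> = sum_k x_k * conj(y_k)
    return np.sum(x * np.conj(y))

rng = np.random.default_rng(3)
n = 5
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A_star = A.conj().T  # the added (adjoint) matrix: conjugate transpose

b = rng.standard_normal(n) + 1j * rng.standard_normal(n)
c = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# (18): <Ab, c> = <b, A* c>
assert np.allclose(inner(A @ b, c), inner(b, A_star @ c))
```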
Example 4.2. Let A : l2 → l2 be the right translation operator defined by

A(α1, α2, α3, . . .) = (0, α1, α2, . . .).

Its added (adjoint) operator A∗ : l2 → l2 is the left translation operator of l2, given by: A∗(β1, β2, β3, . . .) = (β2, β3, β4, . . .).

In fact, for a = (αk)∞k=1 and b = (βk)∞k=1 in l2, we have:
⟨Aa, b⟩ = ∑∞k=1 αk β̄k+1 = ⟨a, A∗b⟩.
With respect to the standard base {ek}∞k=1 of l2, the action of A∗ is described by:
A∗ek = ek−1, k ≥ 2; A∗e1 = 0.

The corresponding matrix A∗′ is:

A∗′ =
0 1 0 . . .
0 0 1 . . .
0 0 0 . . .
. . . . . . . .
We remark that:

(19) A∗A = I ≠ AA∗,

since AA∗ annihilates the first term of every sequence of l2. While A is isometric, A∗ certainly is not, since A∗e1 = 0.

Moreover: ‖A‖ = ‖A∗‖ = 1.
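The relation (19) can be observed directly in a finite-prefix model (not from the text; lists stand for initial segments of sequences, with implicit trailing zeros):

```python
def right_shift(a):
    # A(a1, a2, a3, ...) = (0, a1, a2, ...)
    return [0] + list(a)

def left_shift(b):
    # A*(b1, b2, b3, ...) = (b2, b3, b4, ...)
    return list(b[1:])

a = [1, 2, 3]
# A*A = I : shifting right then left restores a
assert left_shift(right_shift(a)) == a
# AA* != I : it annihilates the first term
assert right_shift(left_shift(a)) == [0, 2, 3]
```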
Theorem 4.2. Let A, B : H → H be two bounded and linear operators. Then:

a) A∗∗ = A
b) (λA)∗ = λ̄A∗
c) (AB)∗ = B∗A∗
d) (A + B)∗ = A∗ + B∗
e) If A is invertible, A∗ is also invertible and (A∗)−1 = (A−1)∗.
Proof. For exercise. �
Theorem 4.3. If A : H → H is bounded and linear, then:

‖A∗A‖ = ‖A‖² = ‖AA∗‖.

Proof. From ‖Af‖² = ⟨Af, Af⟩ = ⟨f, A∗Af⟩ ≤ ‖A∗A‖‖f‖², we have ‖Af‖ ≤ ‖A∗A‖^{1/2}‖f‖, thus ‖A‖² ≤ ‖A∗A‖. Moreover ‖A∗A‖ ≤ ‖A∗‖‖A‖ = ‖A‖². Applying this to A∗ in place of A gives ‖AA∗‖ = ‖A∗‖² = ‖A‖². �
Def. 4.2. A bounded and linear operator A : H → H is said:

AUTOADDED (self-adjoint, or HERMITIAN) if A∗ = A
UNITARY if A∗ = A−1
NORMAL if AA∗ = A∗A.

Remark 4.1. Obviously the autoadded operators, as well as the unitary ones, are normal.
Remark 4.2. The right translation operator A : l2 → l2 (cfr. example 4.2) is not normal, by (19). For the same reason the left translation operator is not normal either.
Remark 4.3. The operator 2iI on any Hilbert space is normal, but it is neither autoadded nor unitary. In fact we have: (2iI)∗ = −2iI and (2iI)(2iI)∗ = (2iI)∗(2iI) = 4I.
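The three classes of Def. 4.2 can be told apart numerically on this same example (an illustration not in the text):

```python
import numpy as np

I = np.eye(3, dtype=complex)
A = 2j * I
A_star = A.conj().T  # = -2i I

# normal: A A* = A* A (= 4I)
assert np.allclose(A @ A_star, A_star @ A)
assert np.allclose(A @ A_star, 4 * I)
# not autoadded: A* != A
assert not np.allclose(A_star, A)
# not unitary: A* A != I
assert not np.allclose(A_star @ A, I)
```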
Remark 4.4. From what we have seen in the example 4.1 it follows that a linear operator A : Cn → Cn is autoadded if and only if, with respect to a given base, the matrix A′ is autoadded (or hermitian) in the usual sense, that is, αk,h = \overline{αh,k}. Moreover, A is unitary if and only if the corresponding matrix A′ is unitary in the traditional sense, that is, if and only if the added matrix coincides with the inverse matrix of A′ (a real unitary matrix is usually said orthogonal).
Now we give some characterizations of the bounded and linear operators which are autoadded, unitary and normal.

First, we remark that from the definition 4.2 it follows that a bounded and linear operator A : H → H is:

autoadded ⇐⇒ ⟨Af, g⟩ = ⟨f, Ag⟩, ∀ f, g ∈ H
unitary ⇐⇒ A is invertible and: (*) ⟨Af, g⟩ = ⟨f, A−1g⟩, ∀ f, g ∈ H.
Theorem 4.4. A bounded and linear operator on H is unitary ⇐⇒ it is an automorphism.

Proof. =⇒: If A is unitary, then A is invertible and ⟨Af, Ag⟩ = ⟨f, A∗Ag⟩ = ⟨f, A−1Ag⟩ = ⟨f, g⟩, so A is an automorphism.
⇐=: For exercise. �
Remark 4.5. If A is an injective linear operator verifying (*), then A is an automorphism and so it is bounded.
Theorem 4.5. Let A : H → H be a bounded and linear operator. The following affirmations are equivalent:

a) A is autoadded
b) the bilinear form ϕ defined by ϕ(f, g) = ⟨Af, g⟩ is symmetric
c) the quadratic form ϕ̃ defined by ϕ̃(f) = ⟨Af, f⟩ is real.

Proof. For exercise. �

Corollary 4.1. If A : H → H is an autoadded, bounded and linear operator, then:

‖A‖ = sup_{‖f‖=1} |⟨Af, f⟩|.
Proof. Defining ϕ(f, g) = ⟨Af, g⟩ as in the proof of the Theorem 4.1, we obtain from the Corollary 3.3 a) and from the Theorem 3.4:

‖A‖ = ‖ϕ‖ = ‖ϕ̃‖ = sup_{‖f‖=1} |ϕ̃(f)| = sup_{‖f‖=1} |⟨Af, f⟩|.
�
Theorem 4.6. A bounded and linear operator A : H → H is normal ⇐⇒ ‖Af‖ = ‖A∗f‖, ∀ f ∈ H.

Proof. We define the bilinear forms ϕ and φ on H by:

ϕ(f, g) = ⟨A∗Af, g⟩, φ(f, g) = ⟨AA∗f, g⟩.

We obtain:

ϕ̃(f) = ⟨A∗Af, f⟩ = ⟨Af, Af⟩ = ‖Af‖²,
φ̃(f) = ⟨AA∗f, f⟩ = ⟨A∗f, A∗f⟩ = ‖A∗f‖².

From the Corollary 3.1 we have ϕ̃ = φ̃ ⇐⇒ ϕ = φ, that is ⇐⇒ ⟨A∗Af, g⟩ = ⟨AA∗f, g⟩, ∀ f, g ∈ H, and this ⇐⇒ A∗A = AA∗. �
Theorem 4.7. Let A : H → H be a bounded and linear operator. Then there exist two autoadded, bounded and linear operators B, C : H → H such that:

A = B + iC.

B and C are uniquely determined. The operator A is normal ⇐⇒ BC = CB.

Proof. Suppose that there exist two autoadded, bounded and linear operators B and C on H as in the theorem. Then:

(20) A = B + iC, A∗ = B∗ − iC∗ = B − iC, hence A + A∗ = 2B, A − A∗ = 2iC.

So we necessarily have the formulas:

(21) B = (1/2)(A + A∗), C = (1/2i)(A − A∗).

Conversely, for every bounded and linear operator A on H, the operators defined by (21) are autoadded, bounded and linear, and they verify (20).

If A is normal, then we have:

BC = (1/4i)(A² + A∗A − AA∗ − A∗²) = (1/4i)(A² − A∗²) = CB.

Conversely, from BC = CB it follows:

AA∗ = (B + iC)(B − iC) = (B − iC)(B + iC) = A∗A.
�
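The decomposition (21) is easy to check numerically for matrices (an illustration not in the text):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 4
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

B = (A + A.conj().T) / 2       # (21): B = (A + A*)/2
C = (A - A.conj().T) / (2j)    # (21): C = (A - A*)/(2i)

# B and C are autoadded and A = B + iC
assert np.allclose(B, B.conj().T)
assert np.allclose(C, C.conj().T)
assert np.allclose(A, B + 1j * C)
```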
5. Projection operators
Def. 5.1. To every subspace M ⊂ H there corresponds a projection operator P = PM : H → M, which we will briefly call the PROJECTION ONTO M (cfr. Theorem 2.6, Chapter 2).
Theorem 5.1. If P is a projection onto a subspace M , then:
M = {f : Pf = f} = {f : ‖Pf‖ = ‖f‖} = {Pg : g ∈ H}.
Proof. It is already known that Pg ∈ M, ∀ g ∈ H, and Pf = f, ∀ f ∈ M. Hence M ⊂ {f : Pf = f} ⊂ {Pg : g ∈ H} ⊂ M, where the last inclusion holds because, by definition, {Pg : g ∈ H} consists of elements of M; so these three sets coincide. Moreover {f : Pf = f} ⊂ {f : ‖Pf‖ = ‖f‖}. On the other hand, for f ∉ M we have f = Pf + P⊥f (cfr. Theorem 2.9, Chapter 2, where P⊥ denotes the projection onto M⊥) with P⊥f ≠ 0. So:

‖f‖² = ‖Pf‖² + ‖P⊥f‖² > ‖Pf‖²

and therefore M ⊃ {f : ‖Pf‖ = ‖f‖}. �
Remark 5.1. It is reasonable to denote the set {Pg : g ∈ H} by PH. In general, if A is a linear operator in H with domain DA, then for every U ⊂ DA we will write AU = {Af : f ∈ U}. In particular, the set ADA is the range of A. In accordance with the Theorem 5.1, the range of a projection coincides with the corresponding subspace.
Example 5.1. Let ]c, d[ be a subinterval of ]a, b[ ⊂ IR and let

M = {f ∈ L2[a, b] : f(x) = 0 for almost every x ∈ ]a, b[ \ ]c, d[}.

Then M is a subspace of L2[a, b] that can be identified with L2[c, d] (cfr. example 1.4 of §1 and example 2.1 of §2 of the Chapter 2). If P is the projection onto M, then:

Pf(x) = f(x) for x ∈ ]c, d[,  Pf(x) = 0 for x ∈ ]a, b[ \ ]c, d[.

The passage from f ∈ L2[a, b] to Pf can be realized by multiplying the function f by the characteristic function χ]c,d[ of ]c, d[: Pf = χ]c,d[ f, ∀ f ∈ L2[a, b].

The assertion of the Theorem 5.1 can be formulated in the following way:

M = {f ∈ L2[a, b] : χ]c,d[ f = f} = {f ∈ L2[a, b] : ∫cd |f(x)|² dx = ∫ab |f(x)|² dx} = {χ]c,d[ g : g ∈ L2[a, b]}.
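A discretized sketch of this projection (not from the text; grid samples stand in for functions, with illustrative endpoints [a, b] = [0, 1], ]c, d[ = ]0.25, 0.75[):

```python
import numpy as np

x = np.linspace(0.0, 1.0, 101)                  # grid on [a, b] = [0, 1]
chi = ((x > 0.25) & (x < 0.75)).astype(float)   # characteristic function of ]c, d[

f = np.sin(7 * x)
Pf = chi * f   # Pf = chi_{]c,d[} f

# P is idempotent: P(Pf) = Pf
assert np.allclose(chi * Pf, Pf)
# Pf vanishes outside ]c, d[
assert np.allclose(Pf[chi == 0], 0.0)
```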
Theorem 5.2. A bounded and linear operator P on H is a projection ⇐⇒ P = P∗ = P².

Proof. Suppose that P is the projection onto M and P⊥ the one onto M⊥. Then, since Pf ∈ M, we have P²f = P(Pf) = Pf, and since

⟨Pf, P⊥g⟩ = ⟨P⊥f, Pg⟩ = 0

we have

⟨Pf, g⟩ = ⟨Pf, Pg + P⊥g⟩ = ⟨Pf, Pg⟩ = ⟨Pf + P⊥f, Pg⟩ = ⟨f, Pg⟩,¹

and so P² = P and P∗ = P.

Conversely, let P = P∗ = P². The set M = {f : Pf = f} is obviously a linear manifold, and it is closed since P is continuous; hence M is a subspace. Every vector g ∈ H can be written as g = Pg + (g − Pg), where Pg ∈ M since P(Pg) = P²g = Pg. To prove that P is the projection onto M we only must verify that g − Pg ∈ M⊥. In fact, if h ∈ M we have:

⟨g − Pg, h⟩ = ⟨g − Pg, Ph⟩ = ⟨Pg − P²g, h⟩ = ⟨Pg − Pg, h⟩ = 0.
�
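The characterization P = P∗ = P² can be observed numerically (an illustration not in the text), building a projection from an orthonormal base of a subspace:

```python
import numpy as np

rng = np.random.default_rng(5)
# orthonormal base of a 2-dimensional subspace M of C^5
Q, _ = np.linalg.qr(rng.standard_normal((5, 2)) + 1j * rng.standard_normal((5, 2)))
P = Q @ Q.conj().T   # the projection onto M

# Theorem 5.2: P = P* = P^2
assert np.allclose(P, P.conj().T)
assert np.allclose(P, P @ P)
```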
Remark 5.2. A linear operator A : H → H is said IDEMPOTENT if A² = A. From the Theorem 5.2 it follows that the projection operators are characterized by being autoadded and idempotent.

In particular, the operators 0 and I are projections.

Although B is a Banach algebra, the subset of all projections has no analogous algebraic and topological structure. For example, if P is a projection different from 0, then λP is a projection ⇐⇒ λ = 1 or λ = 0 (if λ = 1 we obtain P, while if λ = 0 we obtain 0; if λ ∉ {0, 1} and P ≠ 0, then λP is not a projection, because (λP)² = λ²P² = λ²P ≠ λP).
Theorem 5.3. If P is the projection onto M and P⊥ is the projection onto M⊥,then P⊥ = I − P and M⊥ = {f : Pf = 0}.
Proof. From the Theorem 2.9, Chapter 2, we have P⊥g = g − Pg = (I − P )g.From the Theorem 5.1 we conclude that M⊥ = {f : (I − P )f = f} = {f : Pf = 0}.�
Theorem 5.4. Let P and Q be two projections onto the subspaces M and N respectively. The following affirmations are equivalent:
¹ cfr. Theorem 2.9, Chapter 2.
a) PQ = QP
b) PQ is a projection
c) QP is a projection.
Proof. a) ⇒ b): From the Theorem 5.2, PQ is a projection if and only if PQ = (PQ)∗ = (PQ)². Now (PQ)∗ = Q∗P∗ = QP = PQ, and (PQ)² = (PQ)(PQ) = (PQ)(QP) = P(QQ)P = PQ²P = P(QP) = P(PQ) = P²Q = PQ.

b) ⇒ a): PQ = (PQ)∗ = Q∗P∗ = QP.

Exchanging the roles of P and Q we obtain the equivalence with c). �
Corollary 5.1. If PQ is a projection, then it is the projection onto M ∩ N.

Proof. From the Theorem 5.1 we must prove that M ∩ N = PQH. Since PQ = QP, from PQH ⊂ PH ⊂ M and PQH = QPH ⊂ QH ⊂ N it follows: PQH ⊂ M ∩ N.

On the other hand, for f ∈ M ∩ N we have f = Qf = PQf and so M ∩ N ⊂ PQH. �
Theorem 5.5. Let P and Q be two projections onto the subspaces M and N, respectively. The following affirmations are equivalent:

a) M ⊥ N
b) PN = 0
c) QM = 0
d) PQ = 0
e) QP = 0
f) P + Q is a projection.

Proof. a) ⇒ b): If M ⊥ N, then N ⊂ M⊥ and, from the Theorem 5.3, PN = 0.
a) ⇒ c): If M ⊥ N, then M ⊂ N⊥ and, from the Theorem 5.3, QM = 0.
b) ⇒ d): (PQ)g = P(Qg) ∈ PN = {0}.
c) ⇒ e): (QP)g = Q(Pg) ∈ QM = {0}.
e) ⇒ a): ∀ f ∈ M, ∀ g ∈ N we have ⟨f, g⟩ = ⟨Pf, Qg⟩ = ⟨QPf, g⟩ = ⟨0, g⟩ = 0.
d) ⇒ a): ∀ f ∈ M, ∀ g ∈ N we have ⟨g, f⟩ = ⟨Qg, Pf⟩ = ⟨PQg, f⟩ = ⟨0, f⟩ = 0.
d), e) ⇒ f): (P + Q)(P + Q) = P² + PQ + QP + Q² = P + Q, and (P + Q)∗ = P∗ + Q∗ = P + Q.
f) ⇒ d), e): (P + Q)² = P² + PQ + QP + Q² = P + PQ + QP + Q = P + Q, from which it follows PQ + QP = 0. For every f ∈ H we have (PQ + QP)f = PQf + QPf = 0, so PQf = −QPf. But PQf ∈ M and QPf ∈ N; since PQf = −QPf, both belong to M ∩ N. Now we prove that M ∩ N = {0}: for f ∈ M ∩ N we have 0 = (PQ + QP)f = P(Qf) + Q(Pf) = Pf + Qf = f + f = 2f, so f = 0. This implies PQ = 0 and QP = 0. �
Corollary 5.2. If P + Q is a projection, then it is the projection onto M + N.

Proof. M + N is a subspace since M ⊥ N. For f ∈ M + N we have f = Pf + Qf = (P + Q)f, and conversely this equation implies f ∈ M + N. So M + N = {f : (P + Q)f = f}. �
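A small numerical instance (not from the text), with M and N spanned by two standard basis vectors of a 4-dimensional real space:

```python
import numpy as np

e = np.eye(4)
P = np.outer(e[0], e[0])   # projection onto M = span{e1}
Q = np.outer(e[1], e[1])   # projection onto N = span{e2}

# M ⊥ N, hence PQ = QP = 0 and P + Q is a projection
assert np.allclose(P @ Q, 0) and np.allclose(Q @ P, 0)
S = P + Q
assert np.allclose(S, S.T) and np.allclose(S, S @ S)
# S projects onto M + N = span{e1, e2}
f = np.array([3.0, -1.0, 2.0, 5.0])
assert np.allclose(S @ f, [3.0, -1.0, 0.0, 0.0])
```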
Remark 5.3. By virtue of the equivalence between a), d) and e) of the previous theorem, the projections P and Q are said orthogonal, P ⊥ Q, if PQ = 0 or, equivalently, QP = 0.

If (Pk)nk=1 is a sequence of mutually orthogonal projections onto the subspaces Mk respectively, then one proves by induction that ∑nk=1 Pk is the projection onto ∑nk=1 Mk.
Theorem 5.6. Let P and Q be two projections onto the subspaces M and N respectively. The following affirmations are equivalent:

a) M ⊂ N
b) QP = P
c) PQ = P
d) Q − P is a projection
e) ⟨(Q − P)f, f⟩ ≥ 0, ∀ f ∈ H
f) ‖Pf‖ ≤ ‖Qf‖, ∀ f ∈ H.

Proof. For exercise. �
Corollary 5.3. If Q − P is a projection, then it is the projection onto N − M = N ∩ M⊥.

Proof. Q − P = Q(I − P) is the projection onto N ∩ M⊥, from the Corollary 5.1 and the Theorem 5.3. �
Bibliography

[1] D. Averna, Analisi Funzionale - Spazi di Hilbert. Course notes (2007).
Index

C(I), 8
B
  is a normed algebra, 49
  is an algebra with unity, 48
  is a unitary Banach algebra, 50
n-dimensional euclidean space, IRn, 8
  IRn has dimension n, 40
n-dimensional unitary space, Cn, 8
  Cn has dimension n, 40
ALGEBRA, 49
AUTOMORPHISM, 40
BANACH ALGEBRA, 50
BANACH SPACE, 17
BASE, 37
Bessel inequality, 13
BILINEAR FORM, 53
  BOUNDED, 53
  NORM, 53
  SYMMETRIC, 53
BOUNDED AND LINEAR OPERATOR
  ADDED OPERATOR, 57
  AUTOADDED OPERATOR (or HERMITIAN), 58
  INVERTIBLE, 51
  NORMAL OPERATOR, 58
  UNITARY OPERATOR, 58
BOUNDED APPLICATION (OPERATOR), 43
CAUCHY SEQUENCE, 16
Cauchy-Schwarz-Bunjakovskij inequality, 9
COMPLETE, 17
completion, 19
CONJUGATE, 7
CONVERGENT SEQUENCE, 15
CONVERGENT SERIES, 15
DIMENSION, 39
EVERYWHERE DENSE, 15
Finite sequences, 8
Fourier series, 38
GENERATED SUBSPACE, 29
Gram-Schmidt orthogonalization, 38
HILBERT SPACE, 20
IDEMPOTENT, 62
ISOMETRY APPLICATION, 40
ISOMORPHISM, 40
Kronecker delta, 12
LINEAR APPLICATION, 40
LINEAR APPLICATION (OPERATOR), 43
  bounded ⇐⇒ continuous, 45
  CONTINUOUS, 45
  NORM, 43
LINEAR FUNCTIONAL, 46
LINEAR MANIFOLD, 27
LINEARLY INDEPENDENT, 11
MAXIMAL FAMILY, 37
MUTUALLY ORTHOGONAL, 31
NORM, 43
NORM INDUCED BY THE INNER PRODUCT, 9
NORMED ALGEBRA, 49
NORMED LINEAR SPACE, 14
OPERATOR, 47
ORTHOGONAL, 11
ORTHOGONAL COMPLEMENT, 31
ORTHOGONAL TO A SUBSET, 31
ORTHONORMAL, 11
Parallelogram law, 10
Parseval identity, 38
Polar identity, 54
PREHILBERTIAN, 7
PROJECTION, 33, 61
Projection theorem, 35
PROPER SUBSPACE, 27
Pythagorean theorem, 12
QUADRATIC FORM, 53
  BOUNDED, 53
  NORM, 53
  REAL, 53
Riesz representation theorem, 46
SEPARABLE, 15
SUBSPACE, 27
SUM OF THE LINEAR MANIFOLD, 29
The Hilbert space L2, 24
  L2 has dimension ℵ0, 40
  L2[a, b] ⊂ L1[a, b], 25
  is a complex linear space, 24
  is a complex prehilbertian space, 24
  is complete with respect to the norm induced by the inner product, 25
  is separable, 25
  is the completion of the continuous functions, 25
The Hilbert space l2, 20
  l2 has dimension ℵ0, 40
  a prehilbertian space on C, 21
  is a complex linear space, 20
  is complete with respect to the norm induced by the inner product, 21
  is separable, 22
  is the completion of all the finite sequences, 22
UNIFORM NORM, 14
UNITY VECTOR, 9