KARL SVOZIL
MATHEMATICAL METHODS OF THEORETICAL PHYSICS
EDITION FUNZL
arXiv:1203.4558v4 [math-ph] 25 Mar 2014
Copyright © 2014 Karl Svozil
P U B L I S H E D B Y E D I T I O N F U N Z L
First Edition, October 2011
Second Edition, March 2014
Contents
Introduction 15
Part I: Metamathematics and Metaphysics 19
1 Unreasonable effectiveness of mathematics in the natural sciences 21
2 Methodology and proof methods 25
3 Numbers and sets of numbers 31
Part II: Linear vector spaces 33
4 Finite-dimensional vector spaces 35
4.1 Basic definitions 35
4.1.1 Fields of real and complex numbers, 35.—4.1.2 Vectors and vector space, 36.
4.2 Linear independence 37
4.3 Subspace 37
4.3.1 Scalar or inner product, 38.—4.3.2 Hilbert space, 39.
4.4 Basis 40
4.5 Dimension 41
4.6 Coordinates 41
4.7 Finding orthogonal bases from nonorthogonal ones 44
4.8 Dual space 46
4.8.1 Dual basis, 47.—4.8.2 Dual coordinates, 49.—4.8.3 Representation of a functional by inner product, 50.—4.8.4 Double dual space, 51.
4.9 Tensor product 52
4.9.1 Definition, 52.—4.9.2 Representation, 52.
4.10 Linear transformation 53
4.10.1 Definition, 53.—4.10.2 Operations, 53.—4.10.3 Linear transformations as matrices, 54.
4.11 Direct sum 55
4.12 Projector or Projection 55
4.12.1 Definition, 55.—4.12.2 Construction of projectors from unit vectors, 56.
4.13 Change of basis 58
4.14 Mutually unbiased bases 61
4.15 Rank 63
4.16 Determinant 64
4.16.1 Definition, 64.—4.16.2 Properties, 65.
4.17 Trace 65
4.17.1 Definition, 65.—4.17.2 Properties, 66.
4.18 Adjoint 66
4.18.1 Definition, 66.—4.18.2 Properties, 66.—4.18.3 Matrix notation, 67.
4.19 Self-adjoint transformation 67
4.20 Positive transformation 68
4.21 Permutation 68
4.22 Orthonormal (orthogonal) transformations 69
4.23 Unitary transformations and isometries 69
4.23.1 Definition, 69.—4.23.2 Characterization of change of orthonormal basis, 70.—4.23.3 Characterization in terms of orthonormal basis, 70.
4.24 Perpendicular projectors 71
4.25 Proper value or eigenvalue 72
4.25.1 Definition, 72.—4.25.2 Determination, 73.
4.26 Normal transformation 76
4.27 Spectrum 77
4.27.1 Spectral theorem, 77.—4.27.2 Composition of the spectral form, 77.
4.28 Functions of normal transformations 79
4.29 Decomposition of operators 80
4.29.1 Standard decomposition, 80.—4.29.2 Polar representation, 80.—4.29.3 Decomposition of isometries, 81.—4.29.4 Singular value decomposition, 81.—4.29.5 Schmidt decomposition of the tensor product of two vectors, 81.
4.30 Commutativity 83
4.31 Measures on closed subspaces 85
4.31.1 Gleason’s theorem, 86.—4.31.2 Kochen-Specker theorem, 86.
5 Tensors 89
5.1 Notation 89
5.2 Multilinear form 90
5.3 Covariant tensors 90
5.3.1 Basis transformations, 91.—5.3.2 Transformation of tensor components, 92.
5.4 Contravariant tensors 93
5.4.1 Definition of contravariant basis, 93.—5.4.2 Connection between the transformation of covariant and contravariant entities, 94.
5.5 Orthonormal bases 94
5.6 Invariant tensors and physical motivation 95
5.7 Metric tensor 95
5.7.1 Definition metric, 95.—5.7.2 Construction of a metric from a scalar product by metric tensor, 95.—5.7.3 What can the metric tensor do for us?, 96.—5.7.4 Transformation of the metric tensor, 96.—5.7.5 Examples, 97.
5.8 General tensor 100
5.9 Decomposition of tensors 101
5.10 Form invariance of tensors 101
5.11 The Kronecker symbol δ 107
5.12 The Levi-Civita symbol ε 107
5.13 The nabla, Laplace, and D’Alembert operators 108
5.14 Some tricks and examples 109
5.15 Some common misconceptions 115
5.15.1 Confusion between component representation and “the real thing”, 115.—5.15.2 A matrix is a tensor, 115.
6 Projective and incidence geometry 117
6.1 Notation 117
6.2 Affine transformations 117
6.2.1 One-dimensional case, 118.
6.3 Similarity transformations 118
6.4 Fundamental theorem of affine geometry 118
6.5 Alexandrov’s theorem 118
7 Group theory 119
7.1 Definition 119
7.2 Lie theory 120
7.2.1 Generators, 120.—7.2.2 Exponential map, 121.—7.2.3 Lie algebra, 121.
7.3 Some important groups 121
7.3.1 General linear group GL(n,C), 121.—7.3.2 Orthogonal group O(n), 121.—7.3.3 Rotation group SO(n), 122.—7.3.4 Unitary group U(n), 122.—7.3.5 Special unitary group SU(n), 122.—7.3.6 Symmetric group S(n), 122.—7.3.7 Poincaré group, 122.
7.4 Cayley’s representation theorem 123
Part III: Functional analysis 125
8 Brief review of complex analysis 127
8.1 Differentiable, holomorphic (analytic) function 129
8.2 Cauchy-Riemann equations 129
8.3 Definition analytical function 130
8.4 Cauchy’s integral theorem 131
8.5 Cauchy’s integral formula 131
8.6 Series representation of complex differentiable functions 133
8.7 Laurent series 133
8.8 Residue theorem 135
8.9 Multi-valued relationships, branch points, and branch cuts 139
8.10 Riemann surface 139
8.11 Some special functional classes 140
8.11.1 Entire function, 140.—8.11.2 Liouville’s theorem for bounded entire function, 140.—8.11.3 Picard’s theorem, 141.—8.11.4 Meromorphic function, 141.
8.12 Fundamental theorem of algebra 141
9 Brief review of Fourier transforms 143
9.0.1 Functional spaces, 143.—9.0.2 Fourier series, 144.—9.0.3 Exponential Fourier series, 146.—9.0.4 Fourier transformation, 146.
10 Distributions as generalized functions 149
10.1 Heuristically coping with discontinuities 149
10.2 General distribution 151
10.2.1 Duality, 151.—10.2.2 Linearity, 152.—10.2.3 Continuity, 152.
10.3 Test functions 152
10.3.1 Desiderata on test functions, 152.—10.3.2 Test function class I, 153.—10.3.3 Test function class II, 154.—10.3.4 Test function class III: Tempered distributions and Fourier transforms, 154.—10.3.5 Test function class IV: C∞, 157.
10.4 Derivative of distributions 157
10.5 Fourier transform of distributions 158
10.6 Dirac delta function 158
10.6.1 Delta sequence, 158.—10.6.2 δ[ϕ] distribution, 159.—10.6.3 Useful formulæ involving δ, 161.—10.6.4 Fourier transform of δ, 164.—10.6.5 Eigenfunction expansion of δ, 164.—10.6.6 Delta function expansion, 165.
10.7 Cauchy principal value 165
10.7.1 Definition, 165.—10.7.2 Principal value and pole function 1/x distribution, 166.
10.8 Absolute value distribution 167
10.9 Logarithm distribution 168
10.9.1 Definition, 168.—10.9.2 Connection with pole function, 168.
10.10 Pole function 1/x^n distribution 169
10.11 Pole function 1/(x ± iα) distribution 169
10.12 Heaviside step function 171
10.12.1 Ambiguities in definition, 171.—10.12.2 Useful formulæ involving H, 172.—10.12.3 H[ϕ] distribution, 173.—10.12.4 Regularized Heaviside function, 173.—10.12.5 Fourier transform of Heaviside (unit step) function, 173.
10.13 The sign function 175
10.13.1 Definition, 175.—10.13.2 Connection to the Heaviside function, 175.—10.13.3 Sign sequence, 175.
10.14 Absolute value function (or modulus) 175
10.14.1 Definition, 175.—10.14.2 Connection of absolute value with sign and Heaviside functions, 176.—10.14.3 Fourier transform of sgn, 176.
10.15 Some examples 176
11 Green’s function 183
11.1 Elegant way to solve linear differential equations 183
11.2 Finding Green’s functions by spectral decompositions 185
11.3 Finding Green’s functions by Fourier analysis 188
Part IV: Differential equations 193
12 Sturm-Liouville theory 195
12.1 Sturm-Liouville form 195
12.2 Sturm-Liouville eigenvalue problem 196
12.3 Adjoint and self-adjoint operators 197
12.4 Sturm-Liouville transformation into Liouville normal form 199
12.5 Varieties of Sturm-Liouville differential equations 201
13 Separation of variables 203
14 Special functions of mathematical physics 207
14.1 Gamma function 207
14.2 Beta function 210
14.3 Fuchsian differential equations 210
14.3.1 Regular, regular singular, and irregular singular point, 211.—14.3.2 Functional form of the coefficients in Fuchsian differential equations, 211.—14.3.3 Frobenius method by power series, 212.—14.3.4 d’Alembert reduction of order, 216.—14.3.5 Computation of the characteristic exponent, 217.—14.3.6 Behavior at infinity, 219.—14.3.7 Examples, 220.
14.4 Hypergeometric function 226
14.4.1 Definition, 226.—14.4.2 Properties, 229.—14.4.3 Plasticity, 229.—14.4.4 Four forms, 232.
14.5 Orthogonal polynomials 233
14.6 Legendre polynomials 233
14.6.1 Rodrigues formula, 234.—14.6.2 Generating function, 235.—14.6.3 The three term and other recursion formulae, 235.—14.6.4 Expansion in Legendre polynomials, 237.
14.7 Associated Legendre polynomial 239
14.8 Spherical harmonics 239
14.9 Solution of the Schrödinger equation for a hydrogen atom 240
14.9.1 Separation of variables Ansatz, 241.—14.9.2 Separation of the radial part from the angular one, 241.—14.9.3 Separation of the polar angle θ from the azimuthal angle ϕ, 242.—14.9.4 Solution of the equation for the azimuthal angle factor Φ(ϕ), 242.—14.9.5 Solution of the equation for the polar angle factor Θ(θ), 243.—14.9.6 Solution of the equation for the radial factor R(r), 245.—14.9.7 Composition of the general solution of the Schrödinger equation, 247.
15 Divergent series 249
15.1 Convergence and divergence 249
15.2 Euler differential equation 250
15.2.1 Borel’s resummation method – “The Master forbids it”, 255.
Appendix 257
A Hilbert space quantum mechanics and quantum logic 259
A.1 Quantum mechanics 259
A.2 Quantum logic 262
A.3 Diagrammatical representation, blocks, complementarity 264
A.4 Realizations of two-dimensional beam splitters 265
A.5 Two particle correlations 268
Bibliography 275
Index 291
List of Figures
4.1 Coordinatization of vectors: (a) some primitive vector; (b) some primitive vectors, laid out in some space, denoted by dotted lines; (c) vector coordinates x1 and x2 of the vector x = (x1, x2) = x1e1 + x2e2 in a standard basis; (d) vector coordinates x′1 and x′2 of the vector x = (x′1, x′2) = x′1e′1 + x′2e′2 in some nonorthogonal basis. 42
4.2 Gram-Schmidt construction for two nonorthogonal vectors x1 and x2, yielding two orthogonal vectors y1 and y2. 44
4.3 Basis change by rotation of ϕ = π/4 around the origin. 60
4.4 More general basis change by rotation. 61
9.1 Integration path to compute the Fourier transform of the Gaussian. 147
10.1 Plot of a test function ϕ(x). 153
10.2 Dirac’s δ-function as a “needle shaped” generalized function. 159
10.3 Delta sequence approximating Dirac’s δ-function as a more and more “needle shaped” generalized function. 159
10.4 Plot of the Heaviside step function H(x). 171
10.5 Plot of the sign function sgn(x). 175
10.6 Plot of the absolute value |x|. 175
10.7 Composition of f(x). 181
11.1 Plot of the two paths required for solving the Fourier integral. 190
11.2 Plot of the path required for solving the Fourier integral. 192
A.1 A universal quantum interference device operating on a qubit can be realized by a 4-port interferometer with two input ports 0,1 and two output ports 0′,1′; a) realization by a single beam splitter S(T) with variable transmission T and three phase shifters P1,P2,P3; b) realization by two 50:50 beam splitters S1 and S2 and four phase shifters P1,P2,P3,P4. 266
A.2 Coordinate system for measurements of particles travelling along 0Z. 269
A.3 Planar geometric demonstration of the classical two two-state particles correlation. 270
A.4 Simultaneous spin state measurement of the two-partite state represented in Eq. (A.27). Boxes indicate spin state analyzers such as Stern-Gerlach apparatus oriented along the directions θ1,ϕ1 and θ2,ϕ2; their two output ports are occupied with detectors associated with the outcomes “+” and “−”, respectively. 272
List of Tables
12.1 Some varieties of differential equations expressible as Sturm-Liouville differential equations 202
A.1 Comparison of the identifications of lattice relations and operations for the lattices of subsets of a set, for experimental propositional calculi, for Hilbert lattices, and for lattices of commuting projection operators. 262
Introduction
“It is not enough to have no concept, one must also be capable of expressing it.” From the German original in Karl Kraus, Die Fackel 697, 60 (1925): “Es genügt nicht, keinen Gedanken zu haben: man muss ihn auch ausdrücken können.”

THIS IS A FIRST ATTEMPT to provide some written material of a course in mathematical methods of theoretical physics. I have presented this course to an undergraduate audience at the Vienna University of Technology. Only God knows (see Ref. 1, part one, question 14, article 13; and also Ref. 2, p. 243) if I have succeeded in teaching them the subject! I kindly ask the perplexed to please be patient, not to panic under any circumstances, and not to allow themselves to be too upset with mistakes, omissions & other problems of this text. At the end of the day, everything will be fine, and in the long run we will be dead anyway.

1 Thomas Aquinas. Summa Theologica. Translated by Fathers of the English Dominican Province. Christian Classics Ethereal Library, Grand Rapids, MI, 1981. URL http://www.ccel.org/ccel/aquinas/summa.html
2 Ernst Specker. Die Logik nicht gleichzeitig entscheidbarer Aussagen. Dialectica, 14(2-3):239–246, 1960. DOI: 10.1111/j.1746-8361.1960.tb00422.x. URL http://dx.doi.org/10.1111/j.1746-8361.1960.tb00422.x
I AM RELEASING THIS text to the public domain because it is my conviction and experience that content can no longer be held back, and access to it restricted, as its creators see fit. On the contrary, we experience a push toward so much content that we can hardly bear this information flood, so we have to be selective and restrictive rather than acquisitive. I hope that there are some readers out there who actually enjoy and profit from the text, in whatever form and way they find appropriate.
SUCH UNIVERSITY TEXTS AS THIS ONE – and even recorded video transcripts of lectures – present a transitory, almost outdated form of teaching. Future generations of students will most likely enjoy massive open online courses (MOOCs) that might integrate interactive elements and will allow a more individualized – and at the same time automated – form of learning. What is most important from the viewpoint of university administrations is that (i) MOOCs are cost-effective (that is, cheaper than standard tuition) and (ii) the know-how of university teachers and researchers gets transferred to the university administration and management. In both these ways, MOOCs are the implementation of assembly line methods (first introduced by Henry Ford for the production of affordable cars) in the university setting. They will transform universities and schools as much as the Ford Motor Company (NYSE:F) has transformed the car industry.
TO NEWCOMERS in the area of theoretical physics (and beyond) I strongly recommend to consider and acquire two related proficiencies:

• to learn to speak and publish in LaTeX and BibTeX. LaTeX’s various dialects and formats, such as REVTeX, provide a kind of template for structured scientific texts, thereby assisting you in writing and publishing consistently and with methodological rigour;

• to subscribe to and browse through preprints published at the website arXiv.org, which provides open access to more than three quarters of a million scientific texts, most of them written in and compiled by LaTeX. Over time, this database has emerged as a de facto standard from the initiative of an individual researcher working at the Los Alamos National Laboratory (the site at which the first nuclear bomb was developed and assembled). Presently it happens to be administered by Cornell University. I suspect (this is a personal subjective opinion) that (the successors of) arXiv.org will eventually bypass if not supersede most scientific journals of today.

If you excuse a maybe utterly displaced comparison, this might be tantamount only to studying the Austrian family code (“Ehegesetz”) from §49 onward, available through http://www.ris.bka.gv.at/Bundesrecht/, before getting married.

It may come as no surprise that this very text is written in LaTeX and published by arXiv.org under eprint number arXiv:1203.4558, accessible freely via http://arxiv.org/abs/1203.4558.
MY OWN ENCOUNTER with many researchers of different fields and different degrees of formalization has convinced me that there is no single way of formally comprehending a subject 3. With regard to formal rigour, there appears to be a rather questionable chain of contempt – all too often theoretical physicists look upon the experimentalists suspiciously, mathematical physicists look upon the theoreticians skeptically, and mathematicians look upon the mathematical physicists dubiously. I have even experienced the distrust formal logicians expressed about their colleagues in mathematics! As anecdotal evidence, take the claim of a prominent member of the mathematical physics community, who once dryly remarked in front of a fully packed audience, “what other people call ‘proof’ I call ‘conjecture’!”

3 Philip W. Anderson. More is different. Science, 177(4047):393–396, August 1972. DOI: 10.1126/science.177.4047.393. URL http://dx.doi.org/10.1126/science.177.4047.393

SO PLEASE BE AWARE that not all I present here will be acceptable to everybody, for various reasons. Some people will claim that I am too confusing and utterly formalistic, others will claim my arguments are in desperate need of rigour. Many formally fascinated readers will demand to go deeper into the meaning of the subjects; others may want some easy-to-identify pragmatic, syntactic rules of deriving results. I apologise to both groups from the outset. This is the best I can do; from certain different perspectives, others, maybe even some tutors or students, might perform much better.
I AM CALLING for more tolerance and a greater unity in physics; as well as for a greater esteem on “both sides of the same effort;” I am also opting for more pragmatism; one that acknowledges the mutual benefits and oneness of theoretical and empirical physical world perceptions. Schrödinger 4 cites Democritus with arguing against a too great separation of the intellect (διανoια, dianoia) and the senses (αισθησεις, aisthēseis). In fragment D 125 from Galen 5, p. 408, footnote 125, the intellect claims: “ostensibly there is color, ostensibly sweetness, ostensibly bitterness, actually only atoms and the void;” to which the senses retort: “Poor intellect, do you hope to defeat us while from us you borrow your evidence? Your victory is your defeat.”

4 Erwin Schrödinger. Nature and the Greeks. Cambridge University Press, Cambridge, 1954
5 Hermann Diels. Die Fragmente der Vorsokratiker, griechisch und deutsch. Weidmannsche Buchhandlung, Berlin, 1906. URL http://www.archive.org/details/diefragmentederv01dieluoft

German original: “Nachdem D. [Demokritos] sein Mißtrauen gegen die Sinneswahrnehmungen in dem Satze ausgesprochen: ‘Scheinbar (d. i. konventionell) ist Farbe, scheinbar Süßigkeit, scheinbar Bitterkeit: wirklich nur Atome und Leeres’, läßt er die Sinne gegen den Verstand reden: ‘Du armer Verstand, von uns nimmst du deine Beweisstücke und willst uns damit besiegen? Dein Sieg ist dein Fall!’”
In his 1987 Abschiedsvorlesung professor Ernst Specker at the Eidgenössische Technische Hochschule Zürich remarked that the many books authored by David Hilbert carry his name first, and the name(s) of his co-author(s) second, although the subsequent author(s) had actually written these books; the only exception to this rule being Courant and Hilbert’s 1924 book Methoden der mathematischen Physik, comprising around 1000 densely packed pages, which allegedly none of these authors had really written. It appears to be some sort of collective effort of scholars from the University of Göttingen.
So, in sharp distinction from these activities, I most humbly present my own version of what is important for standard courses of contemporary physics. Thereby, I am quite aware that, not dissimilar to some attempts of that sort undertaken so far, I might fail miserably. Because even if I manage to induce some interest, affection, passion and understanding in the audience – as Danny Greenberger put it, inevitably four hundred years from now, all our present physical theories of today will appear transient 6, if not laughable. And thus in the long run, my efforts will be forgotten; and some other brave, courageous guy will continue attempting to (re)present the most important mathematical methods in theoretical physics.

6 Imre Lakatos. Philosophical Papers. 1. The Methodology of Scientific Research Programmes. Cambridge University Press, Cambridge, 1978
HAVING IN MIND this saddening piece of historic evidence, and for as long as we are here on Earth, let us carry on and start doing what we are supposed to be doing well; just as Krishna in Chapter XI:32,33 of the Bhagavad Gita is quoted as insisting that Arjuna fight, telling him to “stand up, obtain glory! Conquer your enemies, acquire fame and enjoy a prosperous kingdom. All these warriors have already been destroyed by me. You are only an instrument.”
Part I:
Metamathematics and Metaphysics
1
Unreasonable effectiveness of mathematics in the natural sciences
All things considered, it is mind-boggling why formalized thinking and numbers utilize our comprehension of nature. Even today eminent researchers muse about the “unreasonable effectiveness of mathematics in the natural sciences” 1.

1 Eugene P. Wigner. The unreasonable effectiveness of mathematics in the natural sciences. Richard Courant Lecture delivered at New York University, May 11, 1959. Communications on Pure and Applied Mathematics, 13:1–14, 1960. DOI: 10.1002/cpa.3160130102. URL http://dx.doi.org/10.1002/cpa.3160130102
Zeno of Elea and Parmenides, for instance, wondered how there can be motion if our universe is either infinitely divisible or discrete. Because, in the dense case (between any two points there is another point), the slightest finite move would require an infinity of actions. Likewise in the discrete case, how can there be motion if everything is not moving at all times 2? A related burlesque question is about the physical limit state of a hypothetical lamp with ever decreasing switching cycles discussed by Thomson 3.

2 H. D. P. Lee. Zeno of Elea. Cambridge University Press, Cambridge, 1936; Paul Benacerraf. Tasks and supertasks, and the modern Eleatics. Journal of Philosophy, LIX(24):765–784, 1962. URL http://www.jstor.org/stable/2023500; A. Grünbaum. Modern Science and Zeno’s paradoxes. Allen and Unwin, London, second edition, 1968; and Richard Mark Sainsbury. Paradoxes. Cambridge University Press, Cambridge, United Kingdom, third edition, 2009. ISBN 0521720796
3 James F. Thomson. Tasks and supertasks. Analysis, 15:1–13, October 1954
For the sake of perplexion, take Niels Henrik Abel’s verdict denouncing that (letter to Holmboe, January 16, 1826 4) “divergent series are the invention of the devil, and it is shameful to base on them any demonstration whatsoever.” This, of course, did neither prevent Abel nor too many other discussants from investigating these devilish inventions. If one encodes the physical states of the Thomson lamp by “0” and “1,” associated with the lamp “on” and “off,” respectively, and the switching process with the concatenation of “+1” and “−1” performed so far, then the divergent infinite series associated with the Thomson lamp is the Leibniz series

s = \sum_{n=0}^{\infty} (-1)^n = 1 - 1 + 1 - 1 + 1 - \cdots \stackrel{A}{=} \frac{1}{1 - (-1)} = \frac{1}{2}     (1.1)

which is just a particular instance of a geometric series (see below) with the common ratio “−1.” Here, “A” indicates the Abel sum 5 obtained from a “continuation” of the geometric series, or alternatively, by s = 1 − s.

As this shows, formal sums of the Leibniz type (1.1) require specifications which could make them unique. But has this “specification by continuation” any kind of physical meaning?

4 Godfrey Harold Hardy. Divergent Series. Oxford University Press, 1949
5 Godfrey Harold Hardy. Divergent Series. Oxford University Press, 1949
In modern days, similar arguments have been translated into the proposal for infinity machines by Blake 6, p. 651, and Weyl 7, pp. 41–42, which could solve many very difficult problems by searching through unbounded recursively enumerable cases. To achieve this physically, ultrarelativistic methods suggest putting observers in “fast orbits” or throwing them toward black holes 8.

6 R. M. Blake. The paradox of temporal process. Journal of Philosophy, 23(24):645–654, 1926. URL http://www.jstor.org/stable/2013813
7 Hermann Weyl. Philosophy of Mathematics and Natural Science. Princeton University Press, Princeton, NJ, 1949
8 Itamar Pitowsky. The physical Church-Turing thesis and physical computational complexity. Iyyun, 39:81–99, 1990
The Pythagoreans are often cited to have believed that the universe is natural numbers or simple fractions thereof, and thus physics is just a part of mathematics; or that there is no difference between these realms. They took their conception of numbers and world-as-numbers so seriously that the existence of irrational numbers which cannot be written as some ratio of integers shocked them; so much so that they allegedly drowned the poor guy who had discovered this fact. That appears to be a saddening case of a state of mind in which a subjective metaphysical belief in and wishful thinking about one’s own constructions of the world overwhelms critical thinking; and what should be wisely taken as an epistemic finding is taken to be ontological truth. It might thus be prudent to adopt a contemplative strategy of evenly-suspended attention outlined by Freud 9, who admonishes analysts to be aware of the dangers caused by “temptations to project, what [the analyst] in dull self-perception recognizes as the peculiarities of his own personality, as generally valid theory into science.” Nature is thereby treated as a client-patient, and whatever findings come up are accepted as is without any immediate emphasis or judgment. This also alleviates the dangers of becoming embittered with the reactions of “the peers,” a problem sometimes encountered when “surfing on the edge” of contemporary knowledge; such as, for example, Everett’s case 10.

9 Sigmund Freud. Ratschläge für den Arzt bei der psychoanalytischen Behandlung. In Anna Freud, E. Bibring, W. Hoffer, E. Kris, and O. Isakower, editors, Gesammelte Werke. Chronologisch geordnet. Achter Band. Werke aus den Jahren 1909–1913, pages 376–387, Frankfurt am Main, 1999. Fischer
10 Hugh Everett III. The Everett interpretation of quantum mechanics: Collected works 1955–1980 with commentary. Princeton University Press, Princeton, NJ, 2012. ISBN 9780691145075. URL http://press.princeton.edu/titles/9770.html
The relationship between physics and formalism has been debated by Bridgman 11, Feynman 12, and Landauer 13, among many others. It has many twists, anecdotes and opinions. Take, for instance, Heaviside’s not uncontroversial stance 14 on it:

I suppose all workers in mathematical physics have noticed how the mathematics seems made for the physics, the latter suggesting the former, and that practical ways of working arise naturally. . . . But then the rigorous logic of the matter is not plain! Well, what of that? Shall I refuse my dinner because I do not fully understand the process of digestion? No, not if I am satisfied with the result. Now a physicist may in like manner employ unrigorous processes with satisfaction and usefulness if he, by the application of tests, satisfies himself of the accuracy of his results. At the same time he may be fully aware of his want of infallibility, and that his investigations are largely of an experimental character, and may be repellent to unsympathetically constituted mathematicians accustomed to a different kind of work. [§225]

11 Percy W. Bridgman. A physicist’s second reaction to Mengenlehre. Scripta Mathematica, 2:101–117, 224–234, 1934
12 Richard Phillips Feynman. The Feynman lectures on computation. Addison-Wesley Publishing Company, Reading, MA, 1996. Edited by A. J. G. Hey and R. W. Allen
13 Rolf Landauer. Information is physical. Physics Today, 44(5):23–29, May 1991. DOI: 10.1063/1.881299. URL http://dx.doi.org/10.1063/1.881299
14 Oliver Heaviside. Electromagnetic theory. “The Electrician” Printing and Publishing Corporation, London, 1894–1912. URL http://archive.org/details/electromagnetict02heavrich
And here is an opinion from “the other end of the spectrum” spanned by mathematical formalism on the one hand, and application technology on the other end: Dietrich Küchemann, the ingenious German-British aerodynamicist and one of the main contributors to the wing design of the Concorde supersonic civil aircraft, tells us 15:

[Again,] the most drastic simplifying assumptions must be made before we can even think about the flow of gases and arrive at equations which are amenable to treatment. Our whole science lives on highly-idealised concepts and ingenious abstractions and approximations. We should remember this in all modesty at all times, especially when somebody claims to have obtained “the right answer” or “the exact solution”. At the same time, we must acknowledge and admire the intuitive art of those scientists to whom we owe the many useful concepts and approximations with which we work [page 23].

15 Dietrich Küchemann. The Aerodynamic Design of Aircraft. Pergamon Press, Oxford, 1978
Note that one of the most successful physical theories in terms of predictive powers, perturbative quantum electrodynamics, deals with divergent series 16 which contribute to physical quantities such as mass and charge which have to be “regularized” by subtracting infinities by hand (for an alternative approach, see 17).

16 Freeman J. Dyson. Divergence of perturbation theory in quantum electrodynamics. Phys. Rev., 85(4):631–632, Feb 1952. DOI: 10.1103/PhysRev.85.631. URL http://dx.doi.org/10.1103/PhysRev.85.631
17 Günter Scharf. Finite Quantum Electrodynamics: The Causal Approach. Springer, Berlin, Heidelberg, second edition, 1989, 1995
The question, for instance, is imminent whether we should take the formalism very seriously and literally, using it as a guide to new territories, which might even appear absurd, inconsistent and mind-boggling; just like Alice’s Adventures in Wonderland. Should we expect that all the wild things formally imaginable have a physical realization?
Note that the formalist Hilbert 18, p. 170, is often quoted as claiming that nobody shall ever expel mathematicians from the paradise created by Cantor’s set theory. In Cantor’s “naive set theory” definition, “a set is a collection into a whole of definite distinct objects of our intuition or of our thought. The objects are called the elements (members) of the set.” If one allows substitution and self-reference 19, this definition turns out to be inconsistent, that is, self-contradictory – for instance, Russell’s paradoxical “set of all sets that are not members of themselves” qualifies as a set in the Cantorian approach. In praising the set theoretical paradise, Hilbert must have been well aware of the inconsistencies and problems that plagued Cantorian style set theory, but he fully dissented and refused to abandon its stimulus.

18 David Hilbert. Über das Unendliche. Mathematische Annalen, 95(1):161–190, 1926. DOI: 10.1007/BF01206605. URL http://dx.doi.org/10.1007/BF01206605; and Georg Cantor. Beiträge zur Begründung der transfiniten Mengenlehre. Mathematische Annalen, 46(4):481–512, November 1895. DOI: 10.1007/BF02124929. URL http://dx.doi.org/10.1007/BF02124929
19 Raymond M. Smullyan. What is the Name of This Book? Prentice-Hall, Inc., Englewood Cliffs, NJ, 1992a; and Raymond M. Smullyan. Gödel’s Incompleteness Theorems. Oxford University Press, New York, New York, 1992b

Hilbert’s German original: “Aus dem Paradies, das Cantor uns geschaffen, soll uns niemand vertreiben können.”
Cantor’s German original: “Unter einer ‘Menge’ verstehen wir jede Zusammenfassung M von bestimmten wohlunterschiedenen Objekten m unsrer Anschauung oder unseres Denkens (welche die ‘Elemente’ von M genannt werden) zu einem Ganzen.”
Is there a similar pathos also in theoretical physics?
Maybe our physical capacities are limited by our mathematical fantasy
alone? Who knows?
For instance, could we make use of the Banach-Tarski paradox 20 as a
sort of ideal production line? The Banach-Tarski paradox makes use of the
fact that in the continuum “it is (nonconstructively) possible” to transform
any given volume of three-dimensional space into any other desired shape,
form and volume – in particular, also to double the original volume – by
decomposing the original volume into finitely many subsets and transforming
these through isometries, that is, distance-preserving mappings such as
translations and rotations. This, of course, could also be perceived as a
merely abstract paradox of infinity, somewhat similar to Hilbert’s hotel.
20 Robert French. The Banach-Tarski theorem. The Mathematical Intelligencer, 10:21–28, 1988. ISSN 0343-6993. DOI: 10.1007/BF03023740. URL http://dx.doi.org/10.1007/BF03023740; and Stan Wagon. The Banach-Tarski Paradox. Cambridge University Press, Cambridge, 1986
MATHEMATICAL METHODS OF THEORETICAL PHYSICS
By the way, Hilbert’s hotel 21 has a countable infinity of hotel rooms. It
can always accommodate a newcomer by shifting all other guests residing
in any given room to the room with the next room number. Maybe we will
never be able to build an analogue of Hilbert’s hotel, but maybe we will be
able to do so one faraway day.
21 Rudy Rucker. Infinity and the Mind. Birkhäuser, Boston, 1982
Anton Zeilinger has quoted Tony Klein as saying that “every system is a perfect simulacrum of itself.”
After all, science finally succeeded in doing what the alchemists sought for
so long: we are capable of producing gold from mercury 22.
22 R. Sherr, K. T. Bainbridge, and H. H. Anderson. Transmutation of mercury by fast neutrons. Physical Review, 60(7):473–479, Oct 1941. DOI: 10.1103/PhysRev.60.473. URL http://dx.doi.org/10.1103/PhysRev.60.473
2
Methodology and proof methods
FOR MANY THEOREMS there exist many proofs. Consider this: the 4th
edition of Proofs from THE BOOK 1 lists six proofs of the infinity of primes
(Chapter 1). Chapter 19 refers to nearly a hundred proofs of the fundamental
theorem of algebra: that every nonconstant polynomial with complex
coefficients has at least one root in the field of complex numbers.
1 Martin Aigner and Günter M. Ziegler. Proofs from THE BOOK. Springer, Heidelberg, fourth edition, 1998-2010. ISBN 978-3-642-00855-9. URL http://www.springerlink.com/content/978-3-642-00856-6
WHICH PROOFS, if there exist many, one chooses or prefers is often
a question of taste and elegance, and thus a subjective decision. Some
proofs are constructive 2 and computable 3 in the sense that a construction
method is presented. Tractability is not an entirely different issue 4 –
note that even “higher” polynomial growth of the temporal requirements, or
of the space and memory resources, of a computation with some parameter
characteristic of the problem may result in a solution which is unattainable
“for all practical purposes” (fapp) 5.
2 Douglas Bridges and F. Richman. Varieties of Constructive Mathematics. Cambridge University Press, Cambridge, 1987; and E. Bishop and Douglas S. Bridges. Constructive Analysis. Springer, Berlin, 1985
3 Oliver Aberth. Computable Analysis. McGraw-Hill, New York, 1980; Klaus Weihrauch. Computable Analysis. An Introduction. Springer, Berlin, Heidelberg, 2000; and Vasco Brattka, Peter Hertling, and Klaus Weihrauch. A tutorial on computable analysis. In S. Barry Cooper, Benedikt Löwe, and Andrea Sorbi, editors, New Computational Paradigms: Changing Conceptions of What is Computable, pages 425–491. Springer, New York, 2008
4 Georg Kreisel. A notion of mechanistic theory. Synthese, 29:11–26, 1974. DOI: 10.1007/BF00484949. URL http://dx.doi.org/10.1007/BF00484949; Robin O. Gandy. Church’s thesis and principles for mechanics. In J. Barwise, H. J. Kreisler, and K. Kunen, editors, The Kleene Symposium. Vol. 101 of Studies in Logic and Foundations of Mathematics, pages 123–148. North Holland, Amsterdam, 1980; and Itamar Pitowsky. The physical Church-Turing thesis and physical computational complexity. Iyyun, 39:81–99, 1990
5 John S. Bell. Against ‘measurement’. Physics World, 3:33–41, 1990. URL http://physicsworldarchive.iop.org/summary/pwa-xml/3/8/phwv3i8a26
FOR THOSE OF US with a rather limited amount of storage and memory,
and with a lot of troubles and problems, it is quite consoling that it is not
(always) necessary to be able to memorize all the proofs that are necessary
for the deduction of a particular corollary or theorem which turns out to
be useful for the physical task at hand. In some cases, though, it may be
necessary to keep in mind the assumptions and derivation methods that
such results are based upon. For example, how many readers would be able
to immediately derive the simple power rule for the derivative of polynomials
– that is, for any real coefficient a, that the derivative of x^a is given by
(x^a)′ = a x^(a−1)? While I suspect that not too many may be able to derive
this formula without consulting additional help (a hint: one could use the
binomial theorem), many of us would nevertheless acknowledge being aware
of, and be able and happy to apply, this rule.
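The hinted-at binomial-theorem route is short; a sketch for a natural exponent a = n ∈ N:

```latex
\frac{\mathrm{d}}{\mathrm{d}x}\, x^{n}
  = \lim_{h\to 0}\frac{(x+h)^{n}-x^{n}}{h}
  = \lim_{h\to 0}\frac{n x^{n-1}h+\binom{n}{2}x^{n-2}h^{2}+\cdots+h^{n}}{h}
  = n x^{n-1}.
```

For a general real exponent a one may, for instance, write x^a = e^(a log x) and apply the chain rule.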
LET US JUST MENTION some concrete examples of the perplexing varieties
of proof methods used today.
For the sake of mentioning a mathematical proof method which does
not have any “constructive” or algorithmic flavour, consider a proof of the
following theorem: “There exist irrational numbers x, y ∈ R−Q with x^y ∈ Q.”
Consider the following proof:
case 1: √2^√2 ∈ Q;
case 2: √2^√2 ∉ Q; then
(√2^√2)^√2 = (√2)^(√2·√2) = (√2)^2 = 2 ∈ Q.
The proof assumes the law of the excluded middle, which excludes all
other cases but the two just listed. The question of which one of the two
cases is correct – that is, which number is rational – remains unsolved in the
context of the proof. Actually, a proof that case 2 is correct and that √2^√2 is
transcendental was only found by Gelfond and Schneider in 1934!
The Gelfond-Schneider theorem states that, if n and m are algebraic numbers – that is, if n and m are roots of a non-zero polynomial in one variable with rational or, equivalently, integer coefficients – with n ≠ 0, 1, and if m is not a rational number, then any value of n^m = e^(m log n) is a transcendental number.
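A quick numerical companion to case 2 (a floating-point illustration, not a proof; the names s, x, y are ad hoc):

```python
import math

# Floating-point illustration: regardless of which case holds,
# (sqrt(2)^sqrt(2))^sqrt(2) evaluates to 2.
s = math.sqrt(2)
x = s ** s      # sqrt(2)^sqrt(2); transcendental by Gelfond-Schneider
y = x ** s      # (sqrt(2)^sqrt(2))^sqrt(2) = sqrt(2)^2 = 2
assert abs(y - 2.0) < 1e-9
```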
A TYPICAL PROOF BY CONTRADICTION is about the irrationality of √2.
Suppose that √2 is rational (false); that is, √2 = n/m for some n, m ∈ N.
Suppose further that n and m are coprime; that is, that they have no common
positive (integer) divisor other than 1 or, equivalently, suppose that
their greatest common (integer) divisor is 1. Squaring the (wrong) assumption
√2 = n/m yields 2 = n²/m² and thus n² = 2m². We have two different cases:
either n is odd, or n is even.
case 1: suppose that n is odd; that is, n = 2k+1 for some k ∈ N; and thus
n² = 4k² + 4k + 1 is again odd (the square of an odd number is odd again);
but that cannot be, since n² equals 2m² and thus should be even; hence we
arrive at a contradiction.
case 2: suppose that n is even; that is, n = 2k for some k ∈ N; and thus
4k² = 2m², or 2k² = m². Now observe that, by assumption, m cannot be
even (remember n and m are coprime, and n is assumed to be even), so
m must be odd. By the same argument as in case 1 (for odd n), we arrive
at a contradiction. By combining these two exhaustive cases 1 & 2, we
arrive at a complete contradiction; the only consistent alternative being the
irrationality of √2.
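The contradiction can be accompanied by a brute-force search (an illustration only; the bound N is arbitrary): no pair of natural numbers below the bound satisfies n² = 2m².

```python
# Brute-force companion to the proof: below an (arbitrary) bound N,
# no natural numbers n, m satisfy n^2 = 2 m^2, i.e. (n/m)^2 = 2.
N = 200
solutions = [(n, m) for n in range(1, N) for m in range(1, N)
             if n * n == 2 * m * m]
assert solutions == []
```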
S T I L L A N OT H E R I S S U E is whether it is better to have a proof of a “true”
mathematical statement rather than none. And what is truth – can it
be some revelation, a rare gift, such as seemingly in Srinivasa Aiyangar
Ramanujan’s case?
THERE EXIST ANCIENT and yet rather intuitive – but sometimes distracting
and erroneous – informal notions of proof. An example 6 is the Babylonian
notion to “prove” arithmetical statements by considering “large
number” cases of algebraic formulae such as (Chapter V of Ref. 7), for
n ≥ 1,
∑_{i=1}^{n} i² = (1/3)(1+2n) ∑_{i=1}^{n} i. (2.1)
6 M. Baaz. Über den allgemeinen Gehalt von Beweisen. In Contributions to General Algebra, volume 6, pages 21–29, Vienna, 1988. Hölder-Pichler-Tempsky
7 Otto Neugebauer. Vorlesungen über die Geschichte der antiken mathematischen Wissenschaften. 1. Band: Vorgriechische Mathematik. Springer, Berlin, 1934, page 172
METHODOLOGY AND PROOF METHODS
The Babylonians convinced themselves that it is correct, maybe by first
cautiously inserting small numbers; say, n = 1:
∑_{i=1}^{1} i² = 1² = 1, and (1/3)(1+2) ∑_{i=1}^{1} i = (3/3)·1 = 1;
and n = 3:
∑_{i=1}^{3} i² = 1+4+9 = 14, and (1/3)(1+6) ∑_{i=1}^{3} i = (7/3)(1+2+3) = (7×6)/3 = 14;
and then by taking the bold step to test this identity with something bigger,
say n = 100, or something “real big,” such as the Bell number prime
1298074214633706835075030044377087 (check it out yourself); or something
which “looks random” – although randomness is a very elusive quality,
as Ramsey theory shows – thus coming close to what is a probabilistic
proof.
Ramsey theory can be interpreted as expressing that there is no true randomness, irrespective of the method used to produce it; or, stated differently by Motzkin, “complete disorder is an impossibility. Any structure will necessarily contain an orderly substructure.”
Alexander Soifer. Ramsey theory before Ramsey, prehistory and early history: An essay in 13 parts. In Alexander Soifer, editor, Ramsey Theory, volume 285 of Progress in Mathematics, pages 1–26. Birkhäuser Boston, 2011. ISBN 978-0-8176-8091-6. DOI: 10.1007/978-0-8176-8092-3_1. URL http://dx.doi.org/10.1007/978-0-8176-8092-3_1
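The Babylonian procedure is easily mechanized; a sketch in Python (the helper names lhs and rhs are ad hoc) that spot-checks identity (2.1) in exact integer arithmetic:

```python
# Spot checks of identity (2.1); necessary but not sufficient evidence
# for its general validity.
def lhs(n):
    # sum_{i=1}^{n} i^2
    return sum(i * i for i in range(1, n + 1))

def rhs(n):
    # (1/3)(1+2n) sum_{i=1}^{n} i; the product is always divisible by 3
    return (1 + 2 * n) * sum(range(1, n + 1)) // 3

for n in (1, 3, 100, 12345):
    assert lhs(n) == rhs(n)
```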
As naive and silly as this Babylonian “proof” method may appear at first
glance, it can – for various subjective reasons (e.g., you may have some
suspicions with regard to particular deductive proofs and their results; or you
simply want to check the correctness of the deductive proof) – be used to
“convince” students and ourselves that a result which has been derived deductively
is indeed applicable and viable. We shall make heavy use of this
kind of intuitive example. As long as one always keeps in mind that this
inductive, merely anecdotal, method is necessary but not sufficient (sufficiency
is, for instance, guaranteed by mathematical induction), it is quite all
right to go ahead with it.
MATHEMATICAL INDUCTION presents a way to ascertain certain identities,
or relations, or estimations, in a two-step rendition that represents a
potential infinity of steps: (i) by directly verifying some formula “babylonically,”
that is, by direct insertion for some “small number” called the basis
of induction, and then (ii) by verifying the inductive step. For any finite
number m, we can then inductively verify the expression by starting with
the basis of induction, which we have explicitly checked, and then taking
successive values of n until m is reached.
For a demonstration of induction, consider again the Babylonian example
(2.1) mentioned earlier. In the first step (i), a basis is easily verified
by taking n = 1. In the second step (ii), we substitute n + 1 for n in (2.1),
thereby obtaining
∑_{i=1}^{n+1} i² = (1/3)[1+2(n+1)] ∑_{i=1}^{n+1} i,
∑_{i=1}^{n} i² + (n+1)² = (1/3)(1+2n+2) [∑_{i=1}^{n} i + (n+1)],
∑_{i=1}^{n} i² + (n+1)² = (1/3)(1+2n) ∑_{i=1}^{n} i + (2/3) ∑_{i=1}^{n} i + (1/3)(3+2n)(n+1)
    [with ∑_{i=1}^{n} i = n(n+1)/2],
∑_{i=1}^{n} i² + (n+1)² = (1/3)(1+2n) ∑_{i=1}^{n} i + n(n+1)/3 + (2n²+3n+2n+3)/3,
∑_{i=1}^{n} i² + (n+1)² = (1/3)(1+2n) ∑_{i=1}^{n} i + (n²+n)/3 + (2n²+5n+3)/3,
∑_{i=1}^{n} i² + (n+1)² = (1/3)(1+2n) ∑_{i=1}^{n} i + (3n²+6n+3)/3,
∑_{i=1}^{n} i² + (n+1)² = (1/3)(1+2n) ∑_{i=1}^{n} i + (n+1)²,
∑_{i=1}^{n} i² = (1/3)(1+2n) ∑_{i=1}^{n} i. (2.2)
In that way, we can think of validating any finite case n = m by inductively
verifying successive values of n from n = 1 onwards, until m is reached.
ANOTHER ALTOGETHER DIFFERENT ISSUE is knowledge acquired by
revelation or by some authority. Oracles occur in modern computer science,
but only as idealized concepts whose physical realization is highly
questionable, if not forbidden.
LET US BRIEFLY ENUMERATE some proof methods, among others:
1. (indirect) proof by contradiction;
2. proof by mathematical induction;
3. direct proof;
4. proof by construction;
5. nonconstructive proof.
THE CONTEMPORARY notion of proof is formalized and algorithmic.
Around 1930 mathematicians could still hope for a “mathematical theory
of everything” consisting of a finite number of axioms and algorithmic
derivation rules by which all true mathematical statements could
formally be derived. In particular, as expressed in Hilbert’s 2nd problem
(Hilbert, 1902), it should be possible to prove the consistency of the axioms
of arithmetic. Hence, Hilbert and other formalists dreamed, any such formal
system (in German “Kalkül”), consisting of axioms and derivation rules,
might represent “the essence of all mathematical truth.” This approach, as
courageous as it appears, was doomed.
GÖDEL 8, Tarski 9, and Turing 10 put an end to the formalist program.
They coded and formalized the concepts of proof and computation in
general, equating them with algorithmic entities. Today, in times when
universal computers are everywhere, this may seem no big deal; but in
those days even coding was challenging – in his proof of the undecidability
of (Peano) arithmetic, Gödel used the uniqueness of prime decompositions
to explicitly code mathematical formulæ!
8 Kurt Gödel. Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme. Monatshefte für Mathematik und Physik, 38(1):173–198, 1931. DOI: 10.1007/s00605-006-0423-7. URL http://dx.doi.org/10.1007/s00605-006-0423-7
9 Alfred Tarski. Der Wahrheitsbegriff in den Sprachen der deduktiven Disziplinen. Akademie der Wissenschaften in Wien. Mathematisch-naturwissenschaftliche Klasse, Akademischer Anzeiger, 69:9–12, 1932
10 A. M. Turing. On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, Series 2, 42, 43:230–265, 544–546, 1936-7 and 1937. DOI: 10.1112/plms/s2-42.1.230, 10.1112/plms/s2-43.6.544. URL http://dx.doi.org/10.1112/plms/s2-42.1.230, http://dx.doi.org/10.1112/plms/s2-43.6.544
FOR THE SAKE of exploring (algorithmically) these ideas, let us consider the
sketch of Turing’s proof by contradiction of the unsolvability of the halting
problem. The halting problem is about whether or not a computer will
eventually halt on a given input; that is, will evolve into a state indicating
the completion of a computation task, or will stop altogether. Stated differently,
a solution of the halting problem will be an algorithm that decides
whether another arbitrary algorithm on arbitrary input will finish running
or will run forever.
The scheme of the proof by contradiction is as follows: the existence of a
hypothetical halting algorithm capable of solving the halting problem will
be assumed. This could, for instance, be a subprogram of some suspicious
supermacro library that takes the code of an arbitrary program as input
and outputs 1 or 0, depending on whether or not the program halts. One
may also think of it as a sort of oracle or black box analyzing an arbitrary
program in terms of its symbolic code and outputting one of two symbolic
states, say, 1 or 0, referring to termination or nontermination of the input
program, respectively.
On the basis of this hypothetical halting algorithm one constructs another
diagonalization program as follows: on receiving some arbitrary
input program code as input, the diagonalization program consults the
hypothetical halting algorithm to find out whether or not this input program
halts; on receiving the answer, it does the opposite: If the hypothetical
halting algorithm decides that the input program halts, the diagonalization
program does not halt (it may do so easily by entering an infinite loop).
Alternatively, if the hypothetical halting algorithm decides that the input
program does not halt, the diagonalization program will halt immediately.
The diagonalization program can be forced to execute a paradoxical task
by receiving its own program code as input. This is so because, by considering
the diagonalization program, the hypothetical halting algorithm steers
the diagonalization program into halting if it discovers that it does not halt;
conversely, the hypothetical halting algorithm steers the diagonalization
program into not halting if it discovers that it halts.
The complete contradiction obtained in applying the diagonalization
program to its own code proves that this program and, in particular, the
hypothetical halting algorithm cannot exist.
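The diagonalization construction can be sketched in code (hypothetical, of course: a genuine `halts` oracle cannot exist, which is precisely the point):

```python
# Hypothetical code: `halts(program, data)` is an assumed oracle deciding
# whether `program` run on `data` terminates; the construction below
# shows why no such oracle can exist.
def make_diagonal(halts):
    def diagonal(program):
        if halts(program, program):
            while True:          # oracle says "halts" -> do the opposite
                pass
        return "halted"          # oracle says "loops" -> halt immediately
    return diagonal

# Any answer a purported oracle gives about diagonal(diagonal) is
# refuted. Here: an "oracle" that always answers False ("loops") is
# contradicted, since diagonal then promptly halts on its own code.
diagonal = make_diagonal(lambda program, data: False)
assert diagonal(diagonal) == "halted"
```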
A universal computer can in principle be embedded into, or realized
by, certain physical systems designed to universally compute. Assuming
unbounded space and time, it follows by reduction that there exist physical
observables, in particular, forecasts about whether or not an embedded
computer will ever halt in the sense sketched earlier, that are provably
undecidable.
3
Numbers and sets of numbers
THE CONCEPT OF NUMBERING THE UNIVERSE is far from trivial. In particular,
it is far from trivial which number schemes are appropriate. In the
Pythagorean tradition the natural numbers appear to be most natural. Actually,
Leibniz (among others, like Bacon before him) argued that just two
numbers, say, “0” and “1,” are enough to create all other numbers, and
thus all of the Universe 1.
1 Karl Svozil. Computational universes. Chaos, Solitons & Fractals, 25(4):845–859, 2006a. DOI: 10.1016/j.chaos.2004.11.055. URL http://dx.doi.org/10.1016/j.chaos.2004.11.055
EVERY PRIMARY EMPIRICAL EVIDENCE seems to be based on some click
in a detector: either there is some click or there is none. Thus every empirical
physical evidence is composed of such elementary events.
Thus binary number codes are in good, albeit somewhat accidental, accord
with the intuition of most experimentalists today. I call it “accidental”
because quantum mechanics does not favour any base; the only criterion
is the number of mutually exclusive measurement outcomes, which determines
the dimension of the linear vector space used for the quantum
description model – two mutually exclusive outcomes would result in a
Hilbert space of dimension two, three mutually exclusive outcomes would
result in a Hilbert space of dimension three, and so on.
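Leibniz’s two symbols indeed suffice for the natural numbers; a minimal sketch of binary encoding and decoding (helper names are ad hoc):

```python
# Two symbols suffice: every natural number has a unique binary
# expansion, and a base is a matter of convention, not of principle.
def to_binary(n):
    digits = []
    while n:
        digits.append(n % 2)
        n //= 2
    return digits[::-1] or [0]

def from_binary(digits):
    value = 0
    for d in digits:
        value = 2 * value + d
    return value

assert to_binary(13) == [1, 1, 0, 1]
assert from_binary(to_binary(2014)) == 2014
```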
THERE ARE, of course, many other sets of numbers imagined so far; all
of which can be considered to be encodable by binary digits. One of the
most challenging number schemes is that of the real numbers 2. It is totally
different from the natural numbers insofar as there are undenumerably
many reals; that is, it is impossible to find a one-to-one function – a sort of
“translation” – from the natural numbers to the reals.
2 S. Drobot. Real Numbers. Prentice-Hall, Englewood Cliffs, New Jersey, 1964
Cantor appears to be the first to have realized this. In order to prove it,
he invented what is today often called Cantor’s diagonalization technique,
or just diagonalization. It is a proof by contradiction; that is, what shall
be disproved is assumed; and on the basis of this assumption a complete
contradiction is derived.
For the sake of contradiction, assume for the moment that the set of
reals is denumerable. (This assumption will yield a contradiction.) That
is, the enumeration is a one-to-one function f : N → R (wrong); i.e., to
any k ∈ N there exists some rk ∈ R and vice versa. No algorithmic restriction is
imposed upon the enumeration; i.e., the enumeration may or may not be
effectively computable. For instance, one may think of an enumeration
obtained via the enumeration of computable algorithms and by assuming
that rk is the output of the k’th algorithm. Let 0.dk1 dk2 ··· be the successive
digits in the decimal expansion of rk. Consider now the diagonal of the
array formed by successive enumeration of the reals,
r1 = 0.d11 d12 d13 ···
r2 = 0.d21 d22 d23 ···
r3 = 0.d31 d32 d33 ···
⋮ (3.1)
yielding a new real number rd = 0.d11 d22 d33 ···. Now, for the sake of contradiction,
construct a new real r′d by changing each one of these digits
of rd, avoiding zero and nine in the decimal expansion. This is necessary
because reals with different digit sequences are equal to each other if one
of them ends with an infinite sequence of nines and the other with zeros,
for example 0.0999... = 0.1. The result is a real r′d = 0.d′1 d′2 d′3 ··· with
d′n ≠ dnn, which differs from each one of the original numbers in at least
one (i.e., in the “diagonal”) position. Therefore, there exists at least one
real which is not contained in the original enumeration, contradicting the
assumption that all reals have been taken into account. Hence, R is not
denumerable.
Bridgman has argued 3 that, from a physical point of view, such an
argument is operationally unfeasible, because it is physically impossible
to process an infinite enumeration and, subsequently, on top of that, a
digit switch. However, it is possible to recast the argument such that r′d
is finitely created up to arbitrary operational length, as the enumeration
progresses.
3 Percy W. Bridgman. A physicist’s second reaction to Mengenlehre. Scripta Mathematica, 2:101–117, 224–234, 1934
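The digit-switching construction, in the finite, operational spirit just mentioned: a sketch that produces the first digits of r′d as the enumeration progresses, using only the digits 1 and 2 (thereby avoiding 0 and 9); the sample digit rows are made up for illustration.

```python
# Operational diagonalization: given the first k digit rows of an
# enumeration, produce k digits of r'_d, each differing from the
# corresponding diagonal digit and drawn from {1, 2}.
def diagonal_digits(rows):
    return [1 if row[k] != 1 else 2 for k, row in enumerate(rows)]

enumeration = [           # made-up digit rows, for illustration only
    [1, 4, 1, 5],         # r1 = 0.1415...
    [7, 1, 8, 2],         # r2 = 0.7182...
    [1, 6, 1, 8],         # r3 = 0.1618...
    [5, 7, 7, 2],         # r4 = 0.5772...
]
d = diagonal_digits(enumeration)
assert all(d[k] != enumeration[k][k] for k in range(len(d)))
assert all(digit in (1, 2) for digit in d)
```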
Part II:
Linear vector spaces
4
Finite-dimensional vector spaces
VECTOR SPACES are prevalent in physics; they are essential for an understanding
of mechanics, relativity theory, quantum mechanics, and
statistical physics.
“I would have written a shorter letter, but I did not have the time.” (Literally: “I made this [letter] very long, because I did not have the leisure to make it shorter.”) Blaise Pascal, Provincial Letters: Letter XVI (English Translation)
4.1 Basic definitions
In what follows, excerpts from Halmos’ beautiful treatment “Finite-Dimensional
Vector Spaces” will be reviewed 1. Of course, there exist zillions of other
very nice presentations, among them Greub’s “Linear Algebra” and Strang’s
“Introduction to Linear Algebra,” among many others, even freely downloadable
ones 2, competing for your attention.
1 Paul R. Halmos. Finite-dimensional Vector Spaces. Springer, New York, Heidelberg, Berlin, 1974
2 Werner Greub. Linear Algebra, volume 23 of Graduate Texts in Mathematics. Springer, New York, Heidelberg, fourth edition, 1975; Gilbert Strang. Introduction to linear algebra. Wellesley-Cambridge Press, Wellesley, MA, USA, fourth edition, 2009. ISBN 0-9802327-1-6. URL http://math.mit.edu/linearalgebra/; Howard Homes and Chris Rorres. Elementary Linear Algebra: Applications Version. Wiley, New York, tenth edition, 2010; Seymour Lipschutz and Marc Lipson. Linear algebra. Schaum’s outline series. McGraw-Hill, fourth edition, 2009; and Jim Hefferon. Linear algebra. 320-375, 2011. URL http://joshua.smcvt.edu/linalg.html/book.pdf
The more physically oriented notation of Mermin’s book on quantum
information theory 3 is adopted. Vectors are typed in bold face, or
in Dirac’s “bra-ket” notation. Thereby, the vector x is identified with the
“ket vector” |x⟩. The vector x∗ from the dual space (see Section 4.8 on
page 46) is identified with the “bra vector” ⟨x|. Dot (scalar or inner) products
between two vectors x and y in Euclidean space are then denoted in
“⟨bra|(c)|ket⟩” form; that is, by ⟨x|y⟩.
3 David N. Mermin. Lecture notes on quantum computation. 2002-2008. URL http://people.ccmr.cornell.edu/~mermin/qcomp/CS483.html; and David N. Mermin. Quantum Computer Science. Cambridge University Press, Cambridge, 2007. ISBN 9780521876582. URL http://people.ccmr.cornell.edu/~mermin/qcomp/CS483.html
The overline sign stands for complex conjugation; that is, if a = ℜa + iℑa
is a complex number, then ā = ℜa − iℑa.
Unless stated differently, only finite-dimensional vector spaces are
considered.
4.1.1 Fields of real and complex numbers
In physics, scalars occur either as real or complex numbers. Thus we shall
restrict our attention to these cases.
A field ⟨F, +, ·, −, ⁻¹, 0, 1⟩ is a set together with two operations, usually
called addition and multiplication, denoted by “+” and “·” (often “a · b”
is identified with the expression “ab” without the center dot), respectively,
such that the following conditions (or, stated differently, axioms) hold:
(i) closure of F with respect to addition and multiplication: for all a, b ∈ F,
both a + b and ab are in F;
(ii) associativity of addition and multiplication: for all a, b, and c in F, the
following equalities hold: a + (b + c) = (a +b)+ c, and a(bc) = (ab)c;
(iii) commutativity of addition and multiplication: for all a and b in F, the
following equalities hold: a +b = b +a and ab = ba;
(iv) additive and multiplicative identity: there exists an element of F,
called the additive identity element and denoted by 0, such that for all
a in F, a +0 = a. Likewise, there is an element, called the multiplicative
identity element and denoted by 1, such that for all a in F, 1 · a = a.
(To exclude the trivial ring, the additive identity and the multiplicative
identity are required to be distinct.)
(v) additive and multiplicative inverses: for every a in F, there exists an
element −a in F, such that a + (−a) = 0. Similarly, for any a in F other
than 0, there exists an element a⁻¹ in F, such that a · a⁻¹ = 1. (The elements
+(−a) and a⁻¹ are also denoted −a and 1/a, respectively.) Stated
differently: subtraction and division operations exist.
(vi) Distributivity of multiplication over addition: For all a, b and c in F,
the following equality holds: a(b + c) = (ab)+ (ac).
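A numerical spot check (in the Babylonian spirit of Chapter 2, and no substitute for a proof) of several of these axioms for complex numbers; the sample values are chosen arbitrarily.

```python
# Spot check of associativity, commutativity, distributivity, identities
# and inverses for a few sample complex numbers; floating-point, hence
# approximate equality.
samples = [1 + 2j, -0.5 + 0.25j, 3j, 2.0 + 0j]

def close(u, v, eps=1e-12):
    return abs(u - v) < eps

for a in samples:
    for b in samples:
        for c in samples:
            assert close(a + (b + c), (a + b) + c)       # associativity
            assert close(a * b, b * a)                   # commutativity
            assert close(a * (b + c), a * b + a * c)     # distributivity
    assert close(a + 0, a) and close(1 * a, a)           # identities
    assert close(a + (-a), 0) and close(a * (1 / a), 1)  # inverses
```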
4.1.2 Vectors and vector space
For proofs and additional information see §2 in Paul R. Halmos. Finite-dimensional Vector Spaces. Springer, New York, Heidelberg, Berlin, 1974
Vector spaces are merely structures allowing the sum (addition) of objects
called “vectors,” and the multiplication of these objects by scalars – thereby
remaining in this structure. That is, for instance, the “coherent superposition”
a+b ≡ |a+b⟩ of two vectors a ≡ |a⟩ and b ≡ |b⟩ can be guaranteed to be
a vector. At this stage, little can be said about the length or relative direction
or orientation of these “vectors.” Algebraically, “vectors” are elements
of vector spaces. Geometrically a vector may be interpreted as “a quantity
which is usefully represented by an arrow” 4.
4 Gabriel Weinreich. Geometrical Vectors (Chicago Lectures in Physics). The University of Chicago Press, Chicago, IL, 1998
In order to define length, we have to engage an additional structure, namely the norm ‖a‖ of a vector a. And in order to define relative direction and orientation, and, in particular, orthogonality and collinearity, we have to define the scalar product ⟨a|b⟩ of two vectors a and b.
A linear vector space ⟨V, +, ·, −, 0, 1⟩ is a set V of elements called vectors,
here denoted by bold face symbols such as a, x, v, w, ..., or, equivalently,
denoted by |a⟩, |x⟩, |v⟩, |w⟩, ..., satisfying certain conditions (or, stated
differently, axioms); among them, with respect to addition of vectors:
(i) commutativity,
(ii) associativity,
(iii) the uniqueness of the origin or null vector 0, as well as
(iv) the uniqueness of the negative vector;
with respect to multiplication of vectors with scalars associativity:
FINITE-DIMENSIONAL VECTOR SPACES
(v) the existence of a unit factor 1; and
(vi) distributivity with respect to scalar and vector additions; that is,
(α+β)x = αx+βx,  α(x+y) = αx+αy,  (4.1)
with x,y ∈V and scalars α,β ∈ F, respectively.
Examples of vector spaces are:
(i) The set C of complex numbers: C can be interpreted as a complex
vector space by interpreting vector addition and scalar multiplication
as the usual addition and multiplication of complex numbers, and with
0 as the null vector;
(ii) The set Cn, n ∈ N, of n-tuples of complex numbers: Let x = (x1, ..., xn)
and y = (y1, ..., yn). Cn can be interpreted as a complex vector space by
interpreting the ordinary addition x+y = (x1 + y1, ..., xn + yn) and the
multiplication αx = (αx1, ..., αxn) by a complex number α as vector
addition and scalar multiplication, respectively; the null tuple 0 = (0, ..., 0)
is the neutral element of vector addition;
(iii) The set P of all polynomials with complex coefficients in a variable t:
P can be interpreted as a complex vector space by interpreting the ordinary
addition of polynomials and the multiplication of a polynomial by
a complex number as vector addition and scalar multiplication, respectively;
the null polynomial is the neutral element of vector addition.
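Example (ii) made concrete; a minimal sketch of vector addition and scalar multiplication on triples of complex numbers (the helper names add and scale are ad hoc):

```python
# Vector addition and scalar multiplication on n-tuples of complex
# numbers, as in example (ii).
def add(x, y):
    return tuple(xi + yi for xi, yi in zip(x, y))

def scale(alpha, x):
    return tuple(alpha * xi for xi in x)

x = (1 + 1j, 2, 0)
y = (0, -1j, 3)
zero = (0, 0, 0)

assert add(x, y) == (1 + 1j, 2 - 1j, 3)
assert add(x, zero) == x               # the null tuple is neutral
assert scale(2j, x) == (-2 + 2j, 4j, 0)
```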
4.2 Linear independence
A set S = {x1, x2, ..., xk} ⊂ V of vectors xi in a linear vector space is linearly
independent if xi ≠ 0 for all 1 ≤ i ≤ k, and additionally, if either k = 1, or if no
vector in S can be written as a linear combination of other vectors in this
set S; that is, if there are no scalars αj satisfying xi = ∑_{1≤j≤k, j≠i} αj xj.
Equivalently, if ∑_{i=1}^{k} αi xi = 0 implies αi = 0 for each i, then the set
S = {x1, x2, ..., xk} is linearly independent.
Note that the vectors of a basis are linearly independent and “maximal”
insofar as any inclusion of an additional vector results in a linearly dependent
set; that is, this additional vector can be expressed in terms of a linear
combination of the existing basis vectors; see also Section 4.4 on page 40.
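Linear independence can be decided by Gaussian elimination: the vectors are independent exactly when the matrix they form has full row rank. A sketch in exact rational arithmetic (helper names are ad hoc):

```python
from fractions import Fraction

# Rank by Gaussian elimination over the rationals; the given vectors are
# linearly independent iff their matrix has full row rank.
def rank(rows):
    m = [[Fraction(v) for v in row] for row in rows]
    r = 0
    for col in range(len(m[0]) if m else 0):
        pivot = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                f = m[i][col] / m[r][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def independent(vectors):
    return rank(vectors) == len(vectors)

assert independent([[1, 0, 0], [0, 1, 0], [0, 0, 1]])
assert not independent([[1, 2, 3], [2, 4, 6]])    # second = 2 * first
```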
4.3 Subspace
For proofs and additional information see §10 in Paul R. Halmos. Finite-dimensional Vector Spaces. Springer, New York, Heidelberg, Berlin, 1974
A nonempty subset M of a vector space is a subspace or, used synonymously,
a linear manifold, if, along with every pair of vectors x and y
contained in M, every linear combination αx+βy is also contained in M.
If U and V are two subspaces of a vector space, then U+V is the subspace
spanned by U and V; that is, it contains all vectors z = x+y, with x ∈ U
and y ∈ V.
M is the linear span
M = span(U, V) = span(x, y) = {αx+βy | α, β ∈ F, x ∈ U, y ∈ V}. (4.2)
A generalization to more than two vectors and more than two subspaces
is straightforward.
For every vector space V, the vector space containing only the null
vector, and the vector space V itself are subspaces of V.
4.3.1 Scalar or inner product
For proofs and additional information see §61 in Paul R. Halmos. Finite-dimensional Vector Spaces. Springer, New York, Heidelberg, Berlin, 1974
A scalar or inner product presents some form of measure of “distance” or
“apartness” of two vectors in a linear vector space. It should not be confused
with the bilinear functionals (introduced on page 46) that connect a
vector space with its dual vector space, although for real Euclidean vector
spaces these may coincide, and although the scalar product is also bilinear
in its arguments. It should also not be confused with the tensor product
introduced on page 52.
An inner product space is a vector space V, together with an inner
product; that is, with a map ⟨· | ·⟩ : V×V−→ F (usually F= C or F= R) that
satisfies the following three conditions (or, stated differently, axioms) for all
vectors and all scalars:
(i) Conjugate symmetry: ⟨x | y⟩ is the complex conjugate of ⟨y | x⟩; that is, ⟨x | y⟩ = ⟨y | x⟩∗. For real Euclidean vector spaces, this function is symmetric; that is, ⟨x | y⟩ = ⟨y | x⟩.

(ii) Linearity in the second argument:

⟨x | αy + βz⟩ = α⟨x | y⟩ + β⟨x | z⟩.
(iii) Positive-definiteness: ⟨x | x⟩ ≥ 0; with equality if and only if x = 0.
Note that from the first two properties, it follows that the inner product
is antilinear, or synonymously, conjugate-linear, in its first argument:
⟨αx+βy | z⟩ =α⟨x | z⟩+β⟨y | z⟩.
The norm of a vector x is defined by

‖x‖ = √⟨x | x⟩. (4.3)
One example is the dot product

⟨x | y⟩ = ∑_{i=1}^{n} x̄i yi, (4.4)

where x̄i denotes the complex conjugate of xi, of two vectors x = (x1, . . . , xn) and y = (y1, . . . , yn) in Cn, which, for real
Euclidean space, reduces to the well-known dot product ⟨x | y⟩ = x1 y1 + ··· + xn yn = ‖x‖‖y‖ cos∠(x, y).
FINITE-DIMENSIONAL VECTOR SPACES 39
It is mentioned without proof that the most general form of an inner
product in Cn is ⟨x|y⟩ = yAx†, where the symbol “†” stands for the conjugate
transpose (also denoted as Hermitian conjugate or Hermitian adjoint), and
A is a positive definite Hermitian matrix (all of its eigenvalues are positive).
Two nonzero vectors x, y ∈ V, x, y ≠ 0, are orthogonal, denoted by “x ⊥ y,”
if their scalar product vanishes; that is, if
⟨x|y⟩ = 0. (4.5)
Let E be any set of vectors in an inner product space V. The symbol
E⊥ = {x ∈ V | ⟨x | y⟩ = 0 for all y ∈ E} (4.6)
denotes the set of all vectors in V that are orthogonal to every vector in E.
Note that, regardless of whether or not E is a subspace, E⊥ is a subspace. (See page 37 for a definition of subspace.)
Furthermore, E is contained in (E⊥)⊥ = E⊥⊥. In case E is a subspace, we call E⊥ the orthogonal complement of E.
The following projection theorem is mentioned without proof: If M is
any subspace of a finite-dimensional inner product space V, then V is the
direct sum of M and M⊥; moreover, M⊥⊥ = M.
For the sake of an example, suppose V = R2, and take E to be the set
of all vectors spanned by the vector (1,0); then E⊥ is the set of all vectors
spanned by (0,1).
4.3.2 Hilbert space
A (quantum mechanical) Hilbert space is a linear vector space V over the
field C of complex numbers equipped with vector addition, scalar multi-
plication, and some scalar product. Furthermore, closure is an additional
requirement, but nobody has made operational sense of that so far: If
xn ∈ V, n = 1, 2, . . ., and if lim_{n,m→∞} (xn − xm, xn − xm) = 0, then there exists
an x ∈ V with lim_{n→∞} (xn − x, xn − x) = 0.
Infinite dimensional vector spaces and continuous spectra are non-
trivial extensions of the finite dimensional Hilbert space treatment. As a
heuristic rule – which is not always correct – it might be stated that the
sums become integrals, and the Kronecker delta function δi j defined by
δij = 0 for i ≠ j, and δij = 1 for i = j, (4.7)
becomes the Dirac delta function δ(x − y), which is a generalized function
in the continuous variables x, y . In the Dirac bra-ket notation, unity is
given by 1 = ∫_{−∞}^{+∞} |x⟩⟨x| dx. For a careful treatment, see, for instance, the books by Reed and Simon:
Michael Reed and Barry Simon. Methods of Mathematical Physics I: Functional Analysis. Academic Press, New York, 1972; and
Michael Reed and Barry Simon. Methods of Mathematical Physics II: Fourier Analysis, Self-Adjointness. Academic Press, New York, 1975
4.4 Basis
For proofs and additional information see §7 in
Paul R. Halmos. Finite-dimensional Vector Spaces. Springer, New York, Heidelberg, Berlin, 1974
We shall use bases of vector spaces to formally represent vectors (elements)
therein.
A (linear) basis [or a coordinate system, or a frame (of reference)] is a set
B of linearly independent vectors such that every vector in V is a linear
combination of the vectors in the basis; hence B spans V.
What particular basis should one choose? A priori no basis is privileged
over any other. Yet, in view of certain (mutual) properties of elements of
some bases (such as orthogonality or orthonormality) we shall prefer
some over others.
Note that a vector is some directed entity with a particular length, oriented
in some (vector) “space.” It is “laid out there” in front of our eyes, as
it is: some directed entity. A priori, this space, in its most primitive form,
is not equipped with a basis or, synonymously, frame of reference, or reference
frame. In this sense it is not yet coordinatized. In order to formalize the
notion of a vector, we have to code this vector by “coordinates” or “components”
which are the coefficients with respect to a (de)composition into
basis elements. Therefore, just as for numbers (e.g., by different numeral
bases, or by prime decomposition), there exist many “competing” ways to
code a vector.
Some of these ways appear to be rather straightforward, such as, in
particular, the Cartesian basis, also synonymously called the standard
basis. It is, however, not in any way a priori “evident” or “necessary” what
should be specified to be “the Cartesian basis.” Actually, specification of a
“Cartesian basis” seems to be mainly motivated by physical inertial motion
– and thus identified with some inertial frame of reference – “without any
friction and forces,” resulting in a “straight line motion at constant speed.”
(This sentence is cyclic, because heuristically any such absence of “friction
and force” can only be operationalized by testing if the motion is a “straight
line motion at constant speed.”) If we grant that in this way straight lines
can be defined, then Cartesian bases in Euclidean vector spaces can be
characterized by orthogonal (orthogonality is defined via vanishing scalar
products between nonzero vectors) straight lines spanning the entire
space. In this way, we arrive, say for a planar situation, at the coordinates
characterized by some basis {(0,1), (1,0)}, where, for instance, the basis
vector “(1,0)” literally and physically means “a unit arrow pointing in some
particular, specified direction.”
Alas, if we would prefer, say, cyclic motion in the plane, we might want
to call a frame based on the polar coordinates r and θ “Cartesian,” resulting
in some “Cartesian basis” {(0,1), (1,0)}; but this “Cartesian basis” would
be very different from the Cartesian basis mentioned earlier, as “(1,0)”
would refer to some specific unit radius, and “(0,1)” would refer to some
specific unit angle (with respect to a specific zero angle). In terms of the
“straight” coordinates (with respect to “the usual Cartesian basis”) x, y ,
the polar coordinates are r = √(x² + y²) and θ = tan⁻¹(y/x). We obtain the
original “straight” coordinates (with respect to “the usual Cartesian basis”)
back if we take x = r cosθ and y = r sinθ.
Other bases than the “Cartesian” one may be less suggestive at first; alas
it may be “economical” or pragmatical to use them; mostly to cope with,
and adapt to, the symmetry of a physical configuration: if the physical sit-
uation at hand is, for instance, rotationally invariant, we might want to use
rotationally invariant bases – such as, for instance, polar coordinates in two
dimensions, or spherical coordinates in three dimensions – to represent a
vector, or, more generally, to code any given representation of a physical
entity (e.g., tensors, operators) by such bases.
4.5 Dimension

For proofs and additional information see §8 in
Paul R. Halmos. Finite-dimensional Vector Spaces. Springer, New York, Heidelberg, Berlin, 1974
The dimension of V is the number of elements in B.
All bases B of V contain the same number of elements.
A vector space is finite dimensional if its bases are finite; that is, its bases
contain a finite number of elements.
In quantum physics, the dimension of a quantized system is associated
with the number of mutually exclusive measurement outcomes. For a spin
state measurement of an electron along a particular direction, as well as
for a measurement of the linear polarization of a photon in a particular
direction, the dimension is two, since both measurements may yield two
distinct outcomes which we can interpret as vectors in two-dimensional
Hilbert space, which, in Dirac’s bra-ket notation, can be written as |↑⟩ and |↓⟩, or |+⟩ and |−⟩, or |H⟩ and |V⟩, or |0⟩ and |1⟩,
respectively.
Paul A. M. Dirac. The Principles of Quantum Mechanics. Oxford University Press, Oxford, 1930
4.6 Coordinates

For proofs and additional information see §46 in
Paul R. Halmos. Finite-dimensional Vector Spaces. Springer, New York, Heidelberg, Berlin, 1974
The coordinates of a vector with respect to some basis represent the coding
of that vector in that particular basis. It is important to realize that, as
bases change, so do coordinates. Indeed, the changes in coordinates have
to “compensate” for the bases change, because the same coordinates in
a different basis would render an altogether different vector. Figure 4.1
presents some geometrical demonstration of these thoughts, for your
contemplation.
Elementary high school tutorials often condition students into believing
that the components of the vector “is” the vector, rather than emphasizing
that these components represent or encode the vector with respect to some
(mostly implicitly assumed) basis. A similar situation occurs in many
introductions to quantum theory, where the span (i.e., the one-dimensional
linear subspace spanned by that vector) {y | y = αx, α ∈ C}, or, equivalently,
Figure 4.1: Coordinatization of vectors: (a) some primitive vector; (b) some primitive vectors, laid out in some space, denoted by dotted lines; (c) vector coordinates x1 and x2 of the vector x = (x1, x2) = x1e1 + x2e2 in a standard basis; (d) vector coordinates x′1 and x′2 of the vector x = (x′1, x′2) = x′1e′1 + x′2e′2 in some nonorthogonal basis.
for orthogonal projections, the projector (i.e., the projection operator; see
also page 55) Ex = xT ⊗x corresponding to a unit (of length 1) vector x often
is identified with that vector. In many instances, this is a great help and,
if administered properly, is consistent and fine (at least for all practical
purposes).
The standard (Cartesian) basis in n-dimensional complex space Cn
is the set of (usually “straight”) vectors xi , i = 1, . . . ,n, represented by n-
tuples, defined by the condition that the i ’th coordinate of the j ’th basis
vector e j is given by δi j , where δi j is the Kronecker delta function
δij = 0 for i ≠ j, and δij = 1 for i = j. (4.8)
Thus,
e1 = (1,0, . . . ,0),
e2 = (0,1, . . . ,0),
...
en = (0,0, . . . ,1).
(4.9)
In terms of these standard base vectors, every vector x can be written as
a linear combination
x = ∑_{i=1}^{n} xi ei = (x1, x2, . . . , xn), (4.10)
or, in “dot product notation,” that is, “column times row” and “row times
column;” the dot is usually omitted (the superscript “T” stands for transposition),
x = (e1, e2, . . . , en) · (x1, x2, . . . , xn)T, (4.11)
of the product of the coordinates xi with respect to that standard basis.
Here the equality sign “=” really means “coded with respect to that stan-
dard basis.”
In what follows, we shall often identify the column vector (x1, x2, . . . , xn)T
containing the coordinates of the vector x with the vector x, but we always
need to keep in mind that the tuples of coordinates are defined only with
respect to a particular basis {e1, e2, . . . , en}; otherwise these numbers lack
any meaning whatsoever.
Indeed, with respect to some arbitrary basis B = {f1, . . . , fn} of some n-dimensional
vector space V with the base vectors fi, 1 ≤ i ≤ n, every vector
x in V can be written as a unique linear combination

x = ∑_{i=1}^{n} xi fi = (x1, x2, . . . , xn) (4.12)
of the product of the coordinates xi with respect to the basis B.
The uniqueness of the coordinates is proven indirectly by reductio
ad absurdum: Suppose there is another decomposition x = ∑_{i=1}^{n} yi fi =
(y1, y2, . . . , yn); then by subtraction, 0 = ∑_{i=1}^{n} (xi − yi) fi = (0, 0, . . . , 0). Since
the basis vectors fi are linearly independent, this can only be valid if all
coefficients in the summation vanish; thus xi −yi = 0 for all 1 ≤ i ≤ n; hence
finally xi = yi for all 1 ≤ i ≤ n. This is in contradiction with our assumption
that the coordinates xi and yi (or at least some of them) are different.
Hence the only consistent alternative is the assumption that, with respect
to a given basis, the coordinates are uniquely determined.
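Given a basis, the unique coordinates of a vector are obtained by solving a linear system. A small Python/NumPy sketch (the basis {f1, f2} and the vector x are hypothetical choices made for illustration):

```python
import numpy as np

# Basis vectors f1 = (1,0) and f2 = (1,1), stored as the columns of F.
F = np.array([[1.0, 1.0],
              [0.0, 1.0]])
x = np.array([3.0, 2.0])

# Coordinates c satisfy F c = x; they are unique because the columns of F
# (the basis vectors) are linearly independent, i.e. F is invertible.
c = np.linalg.solve(F, x)
print(c)  # prints [1. 2.], the coordinates of x with respect to {f1, f2}

# Reconstruction: x = c1 f1 + c2 f2
assert np.allclose(F @ c, x)
```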
A set B = {a1, . . . , an} of vectors of the inner product space V is orthonormal
if, for all ai ∈ B and aj ∈ B, it follows that

⟨ai | aj⟩ = δij. (4.13)

Any such set is called complete if it is not a subset of any larger orthonormal
set of vectors of V. Any complete set is a basis. If, instead of Eq. (4.13),
⟨ai | aj⟩ = αi δij with nonzero factors αi, the set is called orthogonal.
4.7 Finding orthogonal bases from nonorthogonal ones
A Gram-Schmidt process is a systematic method for orthonormalising a
set of vectors in a space equipped with a scalar product or, by a synonym
preferred in mathematics, inner product. The Gram-Schmidt process
takes a finite, linearly independent set of base vectors and generates an
orthonormal basis that spans the same (sub)space as the original set.

The scalar or inner product ⟨x | y⟩ of two vectors x and y is defined on page 38. In Euclidean space such as Rn, one often identifies the “dot product” x · y = x1 y1 + ··· + xn yn of two vectors x and y with their scalar or inner product.
The general method is to start out with the original basis, say,
{x1, x2, x3, . . . , xn}, and generate a new orthogonal basis
{y1, y2, y3, . . . , yn} by

y1 = x1,
y2 = x2 − P_{y1}(x2),
y3 = x3 − P_{y1}(x3) − P_{y2}(x3),
...
yn = xn − ∑_{i=1}^{n−1} P_{yi}(xn), (4.14)
where
P_y(x) = (⟨x | y⟩ / ⟨y | y⟩) y, and P⊥_y(x) = x − (⟨x | y⟩ / ⟨y | y⟩) y (4.15)
are the orthogonal projections of x onto y and y⊥, respectively (the latter
is mentioned for the sake of completeness and is not required here). Note
that these orthogonal projections are idempotent and mutually orthogo-
nal; that is,
P²_y(x) = P_y(P_y(x)) = (⟨x | y⟩ ⟨y | y⟩ / ⟨y | y⟩²) y = P_y(x),

(P⊥_y)²(x) = P⊥_y(P⊥_y(x)) = x − (⟨x | y⟩ / ⟨y | y⟩) y − (⟨x | y⟩ / ⟨y | y⟩ − ⟨x | y⟩ ⟨y | y⟩ / ⟨y | y⟩²) y = P⊥_y(x),

P_y(P⊥_y(x)) = P⊥_y(P_y(x)) = (⟨x | y⟩ / ⟨y | y⟩) y − (⟨x | y⟩ ⟨y | y⟩ / ⟨y | y⟩²) y = 0. (4.16)
For a more general discussion of projectors, see also page 55.
Subsequently, in order to obtain an orthonormal basis, one can divide
every basis vector by its length.
The idea of the proof is as follows (see also Werner Greub, Linear Algebra, Graduate Texts in Mathematics Vol. 23, Springer, New York, Heidelberg, fourth edition, 1975, Section 7.9). In
order to generate an orthogonal basis from a nonorthogonal one, the
first vector of the old basis is identified with the first vector of the new
basis; that is y1 = x1. Then, as depicted in Fig. 4.2, the second vector of
the new basis is obtained by taking the second vector of the old basis and
subtracting its projection on the first vector of the new basis.
Figure 4.2: Gram-Schmidt construction for two nonorthogonal vectors x1 and x2, yielding two orthogonal vectors y1 and y2.
More precisely, take the Ansatz
y2 = x2 +λy1, (4.17)
thereby determining the arbitrary scalar λ such that y1 and y2 are orthogo-
nal; that is, ⟨y1|y2⟩ = 0. This yields
⟨y2|y1⟩ = ⟨x2|y1⟩+λ⟨y1|y1⟩ = 0, (4.18)
and thus, since y1 6= 0,
λ = −⟨x2 | y1⟩ / ⟨y1 | y1⟩. (4.19)
To obtain the third vector y3 of the new basis, take the Ansatz
y3 = x3 +µy1 +νy2, (4.20)
and require that it is orthogonal to the two previous orthogonal basis
vectors y1 and y2; that is ⟨y1|y3⟩ = ⟨y2|y3⟩ = 0. We already know that
⟨y1|y2⟩ = 0. Consider the scalar products of y1 and y2 with the Ansatz for
y3 in Eq. (4.20); that is,
⟨y3 | y1⟩ = ⟨x3 | y1⟩ + µ⟨y1 | y1⟩ + ν⟨y2 | y1⟩ = ⟨x3 | y1⟩ + µ⟨y1 | y1⟩ + ν · 0 = 0, (4.21)

and

⟨y3 | y2⟩ = ⟨x3 | y2⟩ + µ⟨y1 | y2⟩ + ν⟨y2 | y2⟩ = ⟨x3 | y2⟩ + µ · 0 + ν⟨y2 | y2⟩ = 0. (4.22)

As a result,

µ = −⟨x3 | y1⟩ / ⟨y1 | y1⟩, ν = −⟨x3 | y2⟩ / ⟨y2 | y2⟩. (4.23)
A generalization of this construction to the remaining new base vectors
y4, . . . , yn, and thus a proof by complete induction, proceeds analogously.
Consider, as an example, the standard Euclidean scalar product denoted
by “·” and the basis {(0,1), (1,1)}. Then two orthogonal bases are obtained
by taking

(i) either the basis vector (0,1) together with

(1,1) − [((1,1) · (0,1)) / ((0,1) · (0,1))] (0,1) = (1,0),

(ii) or the basis vector (1,1) together with

(0,1) − [((0,1) · (1,1)) / ((1,1) · (1,1))] (1,1) = ½ (−1,1).
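The construction (4.14) translates directly into code. A minimal Python/NumPy sketch (the function name and the reuse of the example basis {(0,1), (1,1)} are this editor's choices, not the text's):

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthogonalize linearly independent vectors following Eq. (4.14):
    y_n = x_n - sum_i P_{y_i}(x_n), with P_y(x) = (<x|y>/<y|y>) y."""
    ys = []
    for x in vectors:
        x = np.asarray(x, dtype=float)
        # subtract the projections of x onto all previously constructed y_i
        y = x - sum((x @ yi) / (yi @ yi) * yi for yi in ys)
        ys.append(y)
    return ys

# The example above: basis {(0,1), (1,1)} yields the orthogonal pair {(0,1), (1,0)}.
y1, y2 = gram_schmidt([(0, 1), (1, 1)])
print(y1, y2)  # [0. 1.] [1. 0.]
```

Dividing each yi by its length ‖yi‖ would then give an orthonormal basis, as noted in the text.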
4.8 Dual space

For proofs and additional information see §13–15 in
Paul R. Halmos. Finite-dimensional Vector Spaces. Springer, New York, Heidelberg, Berlin, 1974
Every vector space V has a corresponding dual vector space (or just dual
space) consisting of all linear functionals on V.
A linear functional on a vector space V is a scalar-valued linear function
y defined for every vector x ∈V, with the linear property that
y(α1x1 +α2x2) =α1y(x1)+α2y(x2). (4.24)
For example, let x = (x1, . . . , xn), and take y(x) = x1.
For another example, let again x = (x1, . . . , xn), and let α1, . . . ,αn ∈ C be
scalars; and take y(x) =α1x1 +·· ·+αn xn .
If we adopt a square bracket notation “[·, ·]” for the functional, we may write

y(x) = [x, y]. (4.25)
Note that the usual arithmetic operations of addition and multiplica-
tion, that is,
(ay+bz)(x) = ay(x)+bz(x), (4.26)
together with the “zero functional” (mapping every argument to zero)
induce a kind of linear vector space, the “vectors” being identified with the
linear functionals. This vector space will be called dual space.
As a result, this “bracket” functional is bilinear in its two arguments;
that is,
[α1x1 +α2x2,y] =α1[x1,y]+α2[x2,y], (4.27)
and
[x,α1y1 +α2y2] =α1[x,y1]+α2[x,y2]. (4.28)
The square bracket can be identified with the scalar dot product [x, y] = ⟨x | y⟩ only for Euclidean vector spaces Rn, since for complex spaces this would no longer be positive definite. That is, for Euclidean vector spaces Rn the inner or scalar product is bilinear.
If V is an n-dimensional vector space, and if B = {f1, . . . , fn} is a basis
of V, and if {α1, . . . , αn} is any set of n scalars, then there is a unique linear
functional y on V such that [fi, y] = αi for all 1 ≤ i ≤ n.
A constructive proof of this theorem can be given as follows: Since every
x ∈ V can be written as a linear combination x = x1f1 + ·· · + xn fn of the
base vectors in B in a unique way; and since y is a (bi)linear functional, we
obtain
[x,y] = x1[f1,y]+·· ·+xn[fn ,y], (4.29)
and uniqueness follows. With [fi, y] = αi for all 1 ≤ i ≤ n, the value of [x, y] is
determined by [x, y] = x1α1 + ··· + xnαn.
If we introduce a dual basis by requiring that [fi , f∗j ] = δi j (cf. Eq. 4.30
below), then the coefficients [fi ,y] = αi , 1 ≤ i ≤ n, can be interpreted as
the coordinates of the linear functional y with respect to the dual basis B∗,
such that y = (α1,α2, . . . ,αn)T .
4.8.1 Dual basis
We now can define a dual basis, or, used synonymously, a reciprocal basis.
If V is an n-dimensional vector space, and if B = {f1, . . . , fn} is a basis of V,
then there is a unique dual basis B∗ = {f∗1, . . . , f∗n} in the dual vector space
V∗ with the property that
[fi , f∗j ] = δi j , (4.30)
where δi j is the Kronecker delta function. More generally, if g is the metric
tensor, the dual basis is defined by
g (fi , f∗j ) = δi j . (4.31)
or, in a different notation in which f∗j = f^j (with a superscript index),

g(fi, f^j) = δi^j. (4.32)
In terms of the inner product, the representation of the metric g (as outlined
and characterized on page 95) with respect to a particular basis
B = {f1, . . . , fn} may be defined by gij = g(fi, fj) = ⟨fi | fj⟩. Note, however,
that the coordinates gi j of the metric g need not necessarily be positive
definite. For example, special relativity uses the “pseudo-Euclidean” metric
g = diag(+1,+1,+1,−1) (or just g = diag(+,+,+,−)), where “diag” stands for
the diagonal matrix with the arguments in the diagonal.

The metric tensor gij represents a bilinear functional g(x, y) = xi yj gij that is symmetric, that is, g(x, y) = g(y, x), and nondegenerate, that is, for any nonzero vector x ∈ V, x ≠ 0, there is some vector y ∈ V so that g(x, y) ≠ 0. g also satisfies the triangle inequality ||x − z|| ≤ ||x − y|| + ||y − z||.
The dual space V∗ is n-dimensional.
In a real Euclidean vector space Rn with the dot product as the scalar
product, the dual basis of an orthogonal basis is also orthogonal, and contains
vectors with the same directions, although with reciprocal length
(thereby explaining the wording “reciprocal basis”). Moreover, for an orthonormal
basis, the basis vectors are uniquely identifiable by ei −→ e∗i = eiT.
This identification can only be made for orthonormal bases; it is not
true for non-orthonormal bases.
For the sake of a proof by reductio ad absurdum, suppose there exists a
vector e∗i in the dual basis B∗ which is not in the “original” orthogonal
basis B; that is, [ei∗,ej] = δi j for all ej ∈B. But since B is supposed to span
the corresponding vector space V, ei∗ has to be contained in B∗.
Moreover, because for a real Euclidean vector space Rn the dot product
is identified with the scalar product, the two products [·, ·] = ⟨· | ·⟩ coincide;
hence every ei associated with an orthogonal basis B has to be collinear with
(for normalized basis vectors even identical to) exactly one element of B∗.
For nonorthogonal bases, take the counterexample explicitly mentioned
on page 49.
How can one determine the dual basis from a given, not necessarily
orthogonal, basis? Suppose for the rest of this section that the metric is
identical to the usual “dot product.” The tuples of row vectors of the basis
B = {f1, . . . , fn} can be arranged into a matrix

B =
[ f1 ]   [ f1,1 · · · f1,n ]
[ f2 ] = [ f2,1 · · · f2,n ]
[ ⋮  ]   [ ⋮           ⋮  ]
[ fn ]   [ fn,1 · · · fn,n ] . (4.33)
Then take the inverse matrix B−1, and interpret the column vectors of

B∗ = B−1 = (f∗1, · · · , f∗n) =
[ f∗1,1 · · · f∗n,1 ]
[ f∗1,2 · · · f∗n,2 ]
[ ⋮             ⋮  ]
[ f∗1,n · · · f∗n,n ] (4.34)

as the tuples of elements of the dual basis B∗.
For orthogonal but not orthonormal bases, the term reciprocal basis can
be easily explained from the fact that the norm (or length) of each vector in
the reciprocal basis is just the inverse of the length of the original vector.
For a direct proof, consider B ·B−1 = In .
(i) For example, if

B = {e1, e2, . . . , en} = {(1, 0, . . . , 0), (0, 1, . . . , 0), . . . , (0, 0, . . . , 1)}

is the standard basis in n-dimensional vector space containing unit
vectors of norm (or length) one, then (the superscript “T” indicates
transposition)

B∗ = {e∗1, e∗2, . . . , e∗n} = {(1, 0, . . . , 0)T, (0, 1, . . . , 0)T, . . . , (0, 0, . . . , 1)T} (4.35)

has elements with identical components, but those tuples are the transposed
tuples.
(ii) If

X = {α1e1, α2e2, . . . , αnen} = {(α1, 0, . . . , 0), (0, α2, . . . , 0), . . . , (0, 0, . . . , αn)},

α1, α2, . . . , αn ∈ R, is a “dilated” basis in n-dimensional vector space
containing vectors of norm (or length) αi, then

X∗ = {(1/α1)e∗1, (1/α2)e∗2, . . . , (1/αn)e∗n} = {(1/α1, 0, . . . , 0)T, (0, 1/α2, . . . , 0)T, . . . , (0, 0, . . . , 1/αn)T} (4.36)

has elements with identical components of inverse length 1/αi, and again
those tuples are the transposed tuples.
(iii) Consider the nonorthogonal basis B = {(1, 2), (3, 4)}. The associated
row matrix is

B =
[ 1 2 ]
[ 3 4 ] .

The inverse matrix is

B−1 =
[ −2     1  ]
[ 3/2  −1/2 ] ,

and the associated dual basis is obtained from the columns of B−1 by

B∗ = { (−2, 3/2)T, (1, −1/2)T } = { ½ (−4, 3)T, ½ (2, −1)T }. (4.37)
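The recipe of Eqs. (4.33) and (4.34) (basis vectors as the rows of B, dual basis vectors as the columns of B−1) can be checked for example (iii) with a few lines of Python/NumPy (a sketch, not from the text):

```python
import numpy as np

# Rows of B are the nonorthogonal basis vectors f1 = (1,2), f2 = (3,4).
B = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# Columns of B^{-1} are the dual basis vectors f*_1, f*_2 (Eq. 4.34).
B_inv = np.linalg.inv(B)
f_star = [B_inv[:, j] for j in range(2)]
print(f_star[0], f_star[1])  # (-2, 3/2) and (1, -1/2), as in Eq. (4.37)

# Duality condition [f_i, f*_j] = delta_ij:
gram = np.array([[B[i] @ f_star[j] for j in range(2)] for i in range(2)])
assert np.allclose(gram, np.eye(2))
```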
4.8.2 Dual coordinates
With respect to a given basis, the components of a vector are often written
as tuples of ordered (“xi is written before xi+1” – not “xi < xi+1”) scalars as
column vectors
x = (x1, x2, . . . , xn)T, (4.38)
whereas the components of vectors in dual spaces are often written in
terms of tuples of ordered scalars as row vectors
x∗ = (x∗1, x∗2, . . . , x∗n). (4.39)
The coordinates (x1, x2, . . . , xn)T of the column vector are called contravariant, whereas the
coordinates (x∗1, x∗2, . . . , x∗n) of the row vector are called covariant. Alternatively, one can
denote covariant coordinates by subscripts, and contravariant coordinates
by superscripts; that is (see also Havlicek 8, Section 11.4), 8 Hans Havlicek. Lineare Algebra fürTechnische Mathematiker. HeldermannVerlag, Lemgo, second edition, 2008
x^i = (x1, x2, . . . , xn)T and x_i = (x∗1, x∗2, . . . , x∗n). (4.40)
Note again that the covariant and contravariant components xi and xi are
not absolute, but always defined with respect to a particular (dual) basis.
The Einstein summation convention requires that, when an index variable
appears twice in a single term, one has to sum over all of the possible
index values. This saves us from drawing the sum sign “∑i” for the index i;
for instance, xi y^i = ∑i xi y^i.
In the particular context of covariant and contravariant components –
made necessary by nonorthogonal bases whose associated dual bases are
not identical – the summation always is between some superscript and
some subscript; e.g., xi y^i.

Note again that for an orthonormal basis, x^i = xi.
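For a nonorthogonal basis the distinction matters: the Einstein-summed pairing of covariant with contravariant components reproduces the basis-independent scalar product. A Python/NumPy sketch (the basis and the vectors are hypothetical choices for illustration):

```python
import numpy as np

# Nonorthogonal basis f1, f2 stored as the rows of F (a hypothetical choice).
F = np.array([[1.0, 2.0],
              [3.0, 4.0]])
v = np.array([5.0, 6.0])
w = np.array([7.0, 8.0])

# Contravariant components y^i solve w = sum_i y^i f_i, i.e. F^T y = w.
y_contra = np.linalg.solve(F.T, w)
# Covariant components x_i = <v | f_i>.
x_co = F @ v

# The pairing x_i y^i equals the invariant dot product <v | w>.
print(x_co @ y_contra, v @ w)  # equal values
assert np.isclose(x_co @ y_contra, v @ w)
```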
4.8.3 Representation of a functional by inner product

For proofs and additional information see §67 in
Paul R. Halmos. Finite-dimensional Vector Spaces. Springer, New York, Heidelberg, Berlin, 1974
The following representation theorem, often called Riesz representation
theorem, is about the connection between any functional in a vector space
and its inner product; it is stated without proof: To any linear functional z
on a finite-dimensional inner product space V there corresponds a unique
vector y ∈V, such that
z(x) = [x,z] = ⟨x | y⟩ (4.41)
for all x ∈V.
Note that in real or complex vector space Rn or Cn , and with the dot
product, y† = z.
In quantum mechanics, this representation of a functional by the inner
product suggests a “natural” duality between propositions and states –
that is, between (i) dichotomic (yes/no, or 1/0) observables represented
by projectors Ex = |x⟩⟨x| and their associated linear subspaces spanned
by unit vectors |x⟩ on the one hand, and (ii) pure states, which are also
represented by projectors ρψ = |ψ⟩⟨ψ| and their associated subspaces
spanned by unit vectors |ψ⟩ on the other hand – via the scalar product
“⟨·|·⟩” (see Jan Hamhalter, Quantum Measure Theory, Fundamental Theories of Physics, Vol. 134, Kluwer Academic Publishers, Dordrecht, Boston, London, 2003, ISBN 1-4020-1714-6). In particular,
ψ(x) ≡ [x,ψ] = ⟨x |ψ⟩ (4.42)
represents the probability amplitude. By the Born rule for pure states, the
absolute square |⟨x |ψ⟩|2 of this probability amplitude is identified with the
probability of the occurrence of the proposition Ex, given the state |ψ⟩.

More generally, due to linearity and the spectral theorem (cf. Section
4.27.1 on page 77), the statistical expectation for a Hermitian (normal)
operator A = ∑_{i=0}^{k} λi Ei and a quantized system prepared in pure state |ψ⟩
is given by the Born rule

⟨A⟩ψ = Tr(ρψ A)
= Tr(∑_{i=0}^{k} λi ρψ Ei)
= Tr(∑_{i=0}^{k} λi (|ψ⟩⟨ψ|)(|xi⟩⟨xi|))
= Tr(∑_{i=0}^{k} λi |ψ⟩⟨ψ|xi⟩⟨xi|)
= ∑_{j=0}^{k} ⟨xj| (∑_{i=0}^{k} λi |ψ⟩⟨ψ|xi⟩⟨xi|) |xj⟩
= ∑_{j=0}^{k} ∑_{i=0}^{k} λi ⟨xj|ψ⟩⟨ψ|xi⟩ ⟨xi|xj⟩ (with ⟨xi|xj⟩ = δij)
= ∑_{i=0}^{k} λi ⟨xi|ψ⟩⟨ψ|xi⟩
= ∑_{i=0}^{k} λi |⟨xi|ψ⟩|², (4.43)

where Tr stands for the trace (cf. Section 4.17 on page 65).
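The first and last lines of Eq. (4.43) are easy to compare numerically. The following Python/NumPy sketch uses a hypothetical two-dimensional example (the observable and the state are this editor's choices, not the text's):

```python
import numpy as np

# A Hermitian observable with spectral decomposition A = sum_i lambda_i E_i;
# here the Pauli-Z matrix, with eigenvalues +1 and -1.
A = np.array([[1.0, 0.0],
              [0.0, -1.0]])
eigvals, eigvecs = np.linalg.eigh(A)

# A normalized pure state |psi> and its density operator rho_psi = |psi><psi|.
psi = np.array([0.6, 0.8])
rho = np.outer(psi, psi.conj())

# First line of Eq. (4.43): <A>_psi = Tr(rho_psi A).
lhs = np.trace(rho @ A)
# Last line: sum_i lambda_i |<x_i|psi>|^2.
rhs = sum(lam * abs(vec.conj() @ psi) ** 2
          for lam, vec in zip(eigvals, eigvecs.T))

print(lhs, rhs)  # both approximately -0.28
assert np.isclose(lhs, rhs)
```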
4.8.4 Double dual space
In the following, we strictly limit the discussion to finite dimensional vector
spaces.
Because to every vector space V there exists a dual vector space V∗
“spanned” by all linear functionals on V, there exists also a dual vector
space (V∗)∗ = V∗∗ to the dual vector space V∗ “spanned” by all linear
functionals on V∗. We state without proof (cf. https://www.dpmms.cam.ac.uk/~wtg10/meta.doubledual.html) that V∗∗ is closely related to,
and can be canonically identified with, V via the canonical bijection

V → V∗∗ : x ↦ ⟨·|x⟩, with ⟨·|x⟩ : V∗ → R or C : a∗ ↦ ⟨a∗|x⟩; (4.44)
indeed, more generally,
V≡V∗∗,
V∗ ≡V∗∗∗,
V∗∗ ≡V∗∗∗∗ ≡V,
V∗∗∗ ≡V∗∗∗∗∗ ≡V∗,
...
(4.45)
4.9 Tensor product

For proofs and additional information see §24 in
Paul R. Halmos. Finite-dimensional Vector Spaces. Springer, New York, Heidelberg, Berlin, 1974
4.9.1 Definition
For the moment, suffice it to say that the tensor product V⊗U of two linear
vector spaces V and U should be such that, to every x ∈V and every y ∈ U
there corresponds a tensor product z = x⊗ y ∈V⊗U which is bilinear in
both factors.
If A = {f1, . . . , fn} and B = {g1, . . . , gm} are bases of n- and m-dimensional
vector spaces V and U, respectively, then the set Z of vectors zij = fi ⊗ gj
with i = 1, . . . , n and j = 1, . . . , m is a basis of the tensor product V ⊗ U.

A generalization to more than two factors is straightforward.
4.9.2 Representation
The tensor product z = x⊗y has three equivalent notations or representa-
tions:
(i) as the scalar coordinates xi y j with respect to the basis in which the
vectors x and y have been defined and coded;
(ii) as the quasi-matrix zi j = xi y j , whose components zi j are defined with
respect to the basis in which the vectors x and y have been defined and
coded;
(iii) as a quasi-vector or “flattened matrix” defined by the Kronecker
product z = (x1y, x2y, . . . , xn y) = (x1 y1, x1 y2, . . . , xn yn). Again, the scalar
coordinates xi y j are defined with respect to the basis in which the
vectors x and y have been defined and coded.
In all three cases, the pairs xi y j are properly represented by distinct mathe-
matical entities.
Note, however, that this kind of quasi-matrix or quasi-vector representation
can be misleading insofar as it (wrongly) suggests that all vectors are
accessible (representable) as quasi-vectors. In quantum mechanics this amounts to the fact that not all pure two-particle states can be written in terms of (tensor) products of single-particle states; see also Section 1.5 of David N. Mermin, Quantum Computer Science, Cambridge University Press, Cambridge, 2007, ISBN 9780521876582, URL http://people.ccmr.cornell.edu/~mermin/qcomp/CS483.html.
For instance, take the arbitrary
form of a (quasi-)vector in C4, which can be parameterized by
(α1, α2, α3, α4), with α1, α2, α3, α4 ∈ C, (4.46)
and compare (4.46) with the general form of a tensor product of two quasi-
vectors in C2
(a1, a2)⊗ (b1,b2) ≡ (a1b1, a1b2, a2b1, a2b2), with a1, a2,b1,b2 ∈C. (4.47)
A comparison of the coordinates in (4.46) and (4.47) yields
α1 = a1b1,
α2 = a1b2,
α3 = a2b1,
α4 = a2b2.
(4.48)
By taking the quotient of the two top and the two bottom equations and
equating these quotients, one obtains
α1/α2 = b1/b2 = α3/α4, and thus α1α4 = α2α3, (4.49)
which amounts to a condition for the four coordinates α1,α2,α3,α4 in
order for this four-dimensional vector to be decomposable into a tensor
product of two two-dimensional quasi-vectors. In quantum mechanics,
pure states which are not decomposable into a single tensor product are
called entangled.
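The decomposability condition α1α4 = α2α3 of Eq. (4.49) yields a simple numerical test for whether a vector in C4 is a tensor product. A Python/NumPy sketch (the function name and the example states are this editor's choices):

```python
import numpy as np

def is_product_state(alpha, tol=1e-12):
    """Test the decomposability condition alpha1*alpha4 == alpha2*alpha3
    of Eq. (4.49) for a four-component vector."""
    a1, a2, a3, a4 = alpha
    return abs(a1 * a4 - a2 * a3) < tol

# A vector built as a tensor (Kronecker) product passes the test ...
product = np.kron([1.0, 2.0], [3.0, 5.0])       # = (3, 5, 6, 10)
# ... whereas the entangled "Bell" state (1, 0, 0, 1)/sqrt(2) fails it.
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)

print(is_product_state(product), is_product_state(bell))  # True False
```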
4.10 Linear transformation

For proofs and additional information see §32–34 in
Paul R. Halmos. Finite-dimensional Vector Spaces. Springer, New York, Heidelberg, Berlin, 1974
4.10.1 Definition
A linear transformation, or, used synonymously, a linear operator, A on a
vector space V is a correspondence that assigns to every vector x ∈ V a vector
Ax ∈ V, in a linear way; such that
A(αx+βy) =αA(x)+βA(y) =αAx+βAy, (4.50)
identically for all vectors x,y ∈V and all scalars α,β.
4.10.2 Operations
The sum S = A+B of two linear transformations A and B is defined by
Sx =Ax+Bx for every x ∈V.
The product P = AB of two linear transformations A and B is defined by
Px =A(Bx) for every x ∈V.
The notation A^n A^m = A^{n+m} and (A^n)^m = A^{nm}, with A^1 = A and A^0 = 1, turns out to be useful.
With the exception of commutativity, all formal algebraic properties of numerical addition and multiplication are valid for transformations; that is, A0 = 0A = 0, A1 = 1A = A, A(B+C) = AB+AC, (A+B)C = AC+BC, and A(BC) = (AB)C. In matrix notation, the unit operator 1 corresponds to the identity matrix, and the entries of 0 are 0 everywhere.
The inverse operator A^{-1} of A is defined by AA^{-1} = A^{-1}A = I.
The commutator of two matrices A and B is defined by
[A,B] = AB − BA. (4.51)
The commutator should not be confused with the bilinear functional introduced for dual spaces.
Polynomials can be directly adopted from ordinary arithmetic; that is, any finite polynomial p of degree n of an operator (transformation) A can be written as
p(A) = α_0 1 + α_1 A^1 + α_2 A^2 + ··· + α_n A^n = ∑_{i=0}^{n} α_i A^i. (4.52)
The Baker-Hausdorff formula
e^{iA} B e^{−iA} = B + i[A,B] + (i^2/2!)[A,[A,B]] + ··· (4.53)
for two arbitrary noncommutative linear operators A and B is mentioned without proof (cf. Messiah, Quantum Mechanics, Vol. I 10). 10 A. Messiah. Quantum Mechanics, volume I. North-Holland, Amsterdam, 1962
If [A,B] commutes with A and B, then
e^A e^B = e^{A+B+[A,B]/2}. (4.54)
If A commutes with B, then
e^A e^B = e^{A+B}. (4.55)
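Identity (4.54) can be verified numerically whenever the commutator [A,B] is central; a sketch using two nilpotent matrices from the three-dimensional Heisenberg algebra (the particular matrices are an illustrative choice, not from the text):

```python
import numpy as np
from scipy.linalg import expm

# Nilpotent matrices A, B whose commutator C = [A, B] commutes with both A and B
A = np.array([[0, 1, 0], [0, 0, 0], [0, 0, 0]], dtype=float)
B = np.array([[0, 0, 0], [0, 0, 1], [0, 0, 0]], dtype=float)
C = A @ B - B @ A

# C is central: it commutes with A and with B
assert np.allclose(A @ C, C @ A) and np.allclose(B @ C, C @ B)

# e^A e^B = e^{A + B + [A,B]/2}, Eq. (4.54)
lhs = expm(A) @ expm(B)
rhs = expm(A + B + C / 2)
print(np.allclose(lhs, rhs))  # True
```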
4.10.3 Linear transformations as matrices
Due to linearity, there is a close connection between a matrix defined by an
n-by-n square array
A ≡ ⟨i|A|j⟩ = a_{ij} ≡ ⟨f_i|A|f_j⟩ ≡ ⟨f_i|Af_j⟩ ≡
( α_{11} α_{12} ··· α_{1n}
  α_{21} α_{22} ··· α_{2n}
  ⋮      ⋮      ⋱   ⋮
  α_{n1} α_{n2} ··· α_{nn} ) (4.56)
containing n2 entries, also called matrix coefficients or matrix coordinates
αi j , and a linear transformation A, encoded with respect to a particular ba-
sis B = f1, f2, . . . , fn. This can be well understood in terms of transforma-
tions of the basis elements, as every vector is a unique linear combination
of these basis elements; more explicitly, see the Ansatz yi =Axi in Eq. (4.71)
below.
Let V be an n-dimensional vector space; let B = f1, f2, . . . , fn be any
basis of V, and let A be a linear transformation on V. Because every vector
is a linear combination of the basis vectors fi , it is possible to define some
matrix coefficients or coordinates αi j such that
Af_j = ∑_{i} α_{ij} f_i (4.57)
for all j = 1, . . . ,n. Again, note that this definition of a transformation
matrix is “tied to” a basis.
In terms of this matrix notation, it is quite easy to present an example
for which the commutator [A,B] does not vanish; that is A and B do not
commute.
Take, for the sake of an example, the Pauli spin matrices, which are proportional to the angular momentum operators along the x, y, z-axes 11: 11 Leonard I. Schiff. Quantum Mechanics. McGraw-Hill, New York, 1955
σ1 = σx = ( 0 1
            1 0 ),
σ2 = σy = ( 0 −i
            i 0 ),
σ3 = σz = ( 1 0
            0 −1 ). (4.58)
Together with unity, i.e., I2 = diag(1,1), they form a complete basis of all (2×2) matrices. Now take, for instance, the commutator
[σ1,σ3] = σ1σ3 − σ3σ1
= ( 0 1
    1 0 ) ( 1 0
            0 −1 ) − ( 1 0
                       0 −1 ) ( 0 1
                                1 0 )
= 2 ( 0 −1
      1 0 ) ≠ ( 0 0
                0 0 ). (4.59)
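The commutator computation (4.59) is easily replayed numerically; a minimal sketch (variable names are mine):

```python
import numpy as np

sigma1 = np.array([[0, 1], [1, 0]])   # Pauli sigma_x
sigma3 = np.array([[1, 0], [0, -1]])  # Pauli sigma_z

# The commutator [sigma1, sigma3] of Eq. (4.59) does not vanish
commutator = sigma1 @ sigma3 - sigma3 @ sigma1
print(commutator.tolist())  # [[0, -2], [2, 0]]
```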
4.11 Direct sum
For proofs and additional information see §18 in Paul R. Halmos. Finite-Dimensional Vector Spaces. Springer, New York, Heidelberg, Berlin, 1974.
Let U and V be vector spaces (over the same field, say C). Their direct sum W = U⊕V consists of all ordered pairs (x,y), with x ∈ U and y ∈ V, and with the linear operations defined by
(αx1 + βx2, αy1 + βy2) = α(x1,y1) + β(x2,y2). (4.60)
We state without proof that, if U and V are subspaces of a vector space
W, then the following three conditions are equivalent:
(i) W=U⊕V;
(ii) U∩V = 0 and U+V = W (i.e., U and V are complements of each
other);
(iii) every vector z ∈W can be written as z = x+y, with x ∈U and y ∈V, in
one and only one way.
4.12 Projector or Projection
For proofs and additional information see §41 in Paul R. Halmos. Finite-Dimensional Vector Spaces. Springer, New York, Heidelberg, Berlin, 1974.
4.12.1 Definition
If V is the direct sum of some subspaces M and N so that every z ∈ V can be uniquely written in the form z = x+y, with x ∈ M and with y ∈ N, then the projector, or, synonymously, projection on M along N is the transformation E defined by Ez = x. Conversely, Fz = y is the projector on N along M.
A (nonzero) linear transformation E is a projector if and only if it is idempotent; that is, EE = E ≠ 0.
For a proof note that, if E is the projector on M along N, and if z = x+y, with x ∈ M and with y ∈ N, then the decomposition of x yields x+0, so that E²z = EEz = Ex = x = Ez. The converse – idempotence “EE = E” implies that E is a projector – is more difficult to prove. For this proof we refer to the literature; e.g., Halmos 12. 12 Paul R. Halmos. Finite-Dimensional Vector Spaces. Springer, New York, Heidelberg, Berlin, 1974
We also mention without proof that a linear transformation E is a projector if and only if 1−E is a projector. Note that (1−E)² = 1−E−E+E² = 1−E; furthermore, E(1−E) = (1−E)E = E−E² = 0.
Furthermore, if E is the projector on M along N, then 1−E is the projec-
tor on N along M.
4.12.2 Construction of projectors from unit vectors
How can we construct projectors from unit vectors, or systems of orthogonal projectors from the vectors of an orthonormal basis with the standard dot product?
Let x be the coordinates of a unit vector; that is, ‖x‖ = 1. Transposition is indicated by the superscript “T” in real vector space. In complex vector space the transposition has to be replaced by the conjugate transpose (also denoted as Hermitian conjugate or Hermitian adjoint), “†,” standing for transposition and complex conjugation of the coordinates. More explicitly,
(x1, . . . , xn)^T = ( x1
                      ⋮
                      xn ), (4.61)
( x1
  ⋮
  xn )^T = (x1, . . . , xn), (4.62)
and
(x1, . . . , xn)† = ( x̄1
                     ⋮
                     x̄n ), (4.63)
( x1
  ⋮
  xn )† = (x̄1, . . . , x̄n). (4.64)
Note that, just as
(x^T)^T = x, (4.65)
so is
(x†)† = x. (4.66)
In real vector space, the dyadic or tensor product (also in Dirac’s bra and ket notation),
E_x = x ⊗ x^T = |x⟩⟨x|
= ( x1
    x2
    ⋮
    xn ) (x1, x2, . . . , xn)
= ( x1(x1, x2, . . . , xn)
    x2(x1, x2, . . . , xn)
    ⋮
    xn(x1, x2, . . . , xn) )
= ( x1x1 x1x2 ··· x1xn
    x2x1 x2x2 ··· x2xn
    ⋮    ⋮    ⋱   ⋮
    xnx1 xnx2 ··· xnxn ) (4.67)
is the projector associated with x.
If the vector x is not normalized, then the associated projector is
E_x = (x ⊗ x^T)/⟨x|x⟩ = |x⟩⟨x| / ⟨x|x⟩. (4.68)
This construction is related to P_x on page 44 by P_x(y) = E_x y.
For a proof, consider only normalized vectors x, and let E_x = x ⊗ x^T; then
E_x E_x = (|x⟩⟨x|)(|x⟩⟨x|) = |x⟩⟨x|x⟩⟨x| = |x⟩ · 1 · ⟨x| = E_x.
More explicitly, by writing out the coordinate tuples, the equivalent proof is
E_x E_x = (x ⊗ x^T) · (x ⊗ x^T)
= ( x1
    ⋮
    xn ) [ (x1, . . . , xn) ( x1
                             ⋮
                             xn ) ] (x1, . . . , xn)
= ( x1
    ⋮
    xn ) · 1 · (x1, . . . , xn) = x ⊗ x^T = E_x. (4.69)
For two examples, let x = (1,0)^T and y = (1,−1)^T; then
E_x = ( 1
        0 ) (1,0) = ( 1(1,0)
                      0(1,0) ) = ( 1 0
                                   0 0 ),
and
E_y = (1/2) ( 1
              −1 ) (1,−1) = (1/2) ( 1(1,−1)
                                    −1(1,−1) ) = (1/2) ( 1 −1
                                                         −1 1 ).
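The construction (4.67)-(4.68) and the two examples above can be reproduced with outer products; a sketch (the helper function `projector` is my naming, not from the text):

```python
import numpy as np

def projector(x):
    """Projector |x><x| / <x|x> from Eq. (4.68); x need not be normalized."""
    x = np.asarray(x, dtype=complex)
    return np.outer(x, x.conj()) / np.vdot(x, x)

Ex = projector([1, 0])
Ey = projector([1, -1])

print(np.allclose(Ex, [[1, 0], [0, 0]]))            # True
print(np.allclose(Ey, [[0.5, -0.5], [-0.5, 0.5]]))  # True
# Idempotence E E = E, the defining property of a projector:
print(np.allclose(Ey @ Ey, Ey))                     # True
```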
4.13 Change of basis
For proofs and additional information see §46 in Paul R. Halmos. Finite-Dimensional Vector Spaces. Springer, New York, Heidelberg, Berlin, 1974.
Let V be an n-dimensional vector space and let X = e1, . . . , en and Y = f1, . . . , fn be two bases of V.
Take an arbitrary vector z ∈V. In terms of the two bases X and Y, z can
be written as
z = ∑_{i=1}^{n} x_i e_i = ∑_{i=1}^{n} y_i f_i, (4.70)
where xi and y i stand for the coordinates of the vector z with respect to the
bases X and Y, respectively.
The following questions arise:
(i) What is the relation between the “corresponding” basis vectors ei and
f j ?
(ii) What is the relation between the coordinates xi (with respect to the
basis X) and y j (with respect to the basis Y) of the vector z in Eq. (4.70)?
(iii) Suppose one fixes the coordinates, say, v1, . . . , vn; what is the relation between the vectors v = ∑_{i=1}^{n} v_i e_i and w = ∑_{i=1}^{n} v_i f_i?
As an Ansatz for answering question (i), recall that, just like any other
vector in V, the new basis vectors fi contained in the new basis Y can be
(uniquely) written as a linear combination (in quantum physics called lin-
ear superposition) of the basis vectors ei contained in the old basis X. This
can be defined via a linear transformation A between the corresponding
vectors of the bases X and Y by
fi = (Ae)i , (4.71)
for all i = 1, . . . ,n. More specifically, let ai j be the matrix of the linear trans-
formation A in the basis X= e1, . . . ,en, and let us rewrite (4.71) as a matrix
equation
f_i = ∑_{j=1}^{n} a_{ji} e_j. (4.72)
If A stands for the matrix whose components (with respect to X) are a_{ij}, and A^T stands for the transpose of A whose components (with respect to X) are a_{ji}, then
( f1
  f2
  ⋮
  fn ) = A^T ( e1
               e2
               ⋮
               en ). (4.73)
That is, very explicitly,
f1 = (Ae)1 = a_{11} e1 + a_{21} e2 + ··· + a_{n1} en = ∑_{i=1}^{n} a_{i1} e_i,
f2 = (Ae)2 = a_{12} e1 + a_{22} e2 + ··· + a_{n2} en = ∑_{i=1}^{n} a_{i2} e_i,
...
fn = (Ae)n = a_{1n} e1 + a_{2n} e2 + ··· + a_{nn} en = ∑_{i=1}^{n} a_{in} e_i. (4.74)
This implies
∑_{i=1}^{n} v_i f_i = ∑_{i=1}^{n} v_i (Ae)_i = A^T ( ∑_{i=1}^{n} v_i e_i ). (4.75)
• Note that the n equalities (4.74) really represent n2 linear equations for
the n2 unknowns ai j , 1 ≤ i , j ≤ n, since every pair of basis vectors fi ,ei ,
1 ≤ i ≤ n has n components or coefficients.
• If one knows how the basis vectors e1, . . . , en of X transform, then one knows (by linearity) how all other vectors v = ∑_{i=1}^{n} v_i e_i (represented in this basis) transform; namely A(v) = ∑_{i=1}^{n} v_i (Ae)_i.
• Finally note that, if X is an orthonormal basis, then the basis transformation has a diagonal form
A = ∑_{i=1}^{n} f_i† e_i = ∑_{i=1}^{n} |f_i⟩⟨e_i| (4.76)
because all the off-diagonal components a_{ij}, i ≠ j of A explicitly written down in Eqs. (4.74) vanish. This can be easily checked by applying A to the elements e_i of the basis X. See also Section 4.23.3 on page 70 for a representation of unitary transformations in terms of basis changes. In quantum mechanics, the temporal evolution is represented by nothing but a change of orthonormal bases in Hilbert space.
Having settled question (i) by the Ansatz (4.71), we turn to question (ii)
next. Since
z = ∑_{j=1}^{n} y_j f_j = ∑_{j=1}^{n} y_j (Ae)_j = ∑_{j=1}^{n} y_j ∑_{i=1}^{n} a_{ij} e_i = ∑_{i=1}^{n} ( ∑_{j=1}^{n} a_{ij} y_j ) e_i;
we obtain by comparison of the coefficients in Eq. (4.70),
x_i = ∑_{j=1}^{n} a_{ij} y_j. (4.77)
That is, in terms of the “old” coordinates x_i, the “new” coordinates are
∑_{i=1}^{n} (a^{-1})_{j′i} x_i = ∑_{i=1}^{n} (a^{-1})_{j′i} ∑_{j=1}^{n} a_{ij} y_j = ∑_{i=1}^{n} ∑_{j=1}^{n} (a^{-1})_{j′i} a_{ij} y_j = ∑_{j=1}^{n} δ_{j′j} y_j = y_{j′}. (4.78)
If we prefer to represent the vector coordinates of x and y as n-tuples,
then Eqs. (4.77) and (4.78) have an interpretation as matrix multiplication;
that is,
x = Ay, and y = A^{-1}x. (4.79)
Finally, let us answer question (iii) by substituting the Ansatz fi = Aei
defined in Eq. (4.71), while considering
w = ∑_{i=1}^{n} v_i f_i = ∑_{i=1}^{n} v_i Ae_i = A ( ∑_{i=1}^{n} v_i e_i ) = Av. (4.80)
For the sake of an example,
Figure 4.3: Basis change by rotation of ϕ = π/4 around the origin; the basis x1 = (1,0)^T, x2 = (0,1)^T is rotated into y1 = (1/√2)(1,1)^T, y2 = (1/√2)(−1,1)^T.
1. Consider a change of basis in the plane R2 by rotation of an angle ϕ = π/4 around the origin, depicted in Fig. 4.3. According to Eq. (4.71), we have
f1 = a_{11} e1 + a_{21} e2,
f2 = a_{12} e1 + a_{22} e2, (4.81)
which amounts to four linear equations in the four unknowns a11, a12, a21, a22.
By inserting the basis vectors x1, x2, y1, y2 one obtains for the rotation matrix with respect to the basis X
(1/√2) ( 1 1
         −1 1 ) = ( a_{11} a_{21}
                    a_{12} a_{22} ) ( 1 0
                                      0 1 ), (4.82)
the first pair of equations yielding a_{11} = a_{21} = 1/√2, the second pair of equations yielding a_{12} = −1/√2 and a_{22} = 1/√2. Thus,
A = ( a_{11} a_{12}
      a_{21} a_{22} ) = (1/√2) ( 1 −1
                                 1 1 ). (4.83)
As both coordinate systems X = e1, e2 and Y = f1, f2 are orthogonal, we might have just computed the diagonal form (4.76)
A = (1/√2) [ ( 1
               1 ) (1,0) + ( −1
                             1 ) (0,1) ]
= (1/√2) [ ( 1(1,0)
             1(1,0) ) + ( −1(0,1)
                          1(0,1) ) ]
= (1/√2) [ ( 1 0
             1 0 ) + ( 0 −1
                       0 1 ) ] = (1/√2) ( 1 −1
                                          1 1 ). (4.84)
Likewise, the rotation matrix with respect to the basis Y is
A′ = (1/√2) [ ( 1
                0 ) (1,1) + ( 0
                              1 ) (−1,1) ] = (1/√2) ( 1 1
                                                      −1 1 ). (4.85)
2. By a similar calculation, taking into account the definition of the sine and cosine functions, one obtains the transformation matrix A(ϕ) associated with an arbitrary angle ϕ,
A = ( cos ϕ −sin ϕ
      sin ϕ cos ϕ ). (4.86)
The coordinates transform as
A^{-1} = ( cos ϕ sin ϕ
           −sin ϕ cos ϕ ). (4.87)
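As a numerical cross-check of (4.86)-(4.87) together with Eq. (4.79): rotating the basis by ϕ and transforming the coordinates with the inverse matrix recovers the original tuple (a sketch; the angle and test vector are arbitrary choices):

```python
import numpy as np

phi = np.pi / 4
A = np.array([[np.cos(phi), -np.sin(phi)],
              [np.sin(phi),  np.cos(phi)]])   # rotation matrix, Eq. (4.86)
A_inv = A.T                                   # Eq. (4.87): rotations are orthogonal

x = np.array([1.0, 2.0])      # coordinates with respect to the old basis X
y = A_inv @ x                 # coordinates with respect to the new basis Y
print(np.allclose(A @ y, x))  # True: x = A y, Eq. (4.79)
```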
3. Consider the more general rotation depicted in Fig. 4.4. Again, by inserting the basis vectors e1, e2, f1, and f2, one obtains
(1/2) ( √3 1
        1 √3 ) = ( a_{11} a_{21}
                   a_{12} a_{22} ) ( 1 0
                                     0 1 ), (4.88)
the first pair of equations yielding a_{11} = a_{22} = √3/2, the second pair of equations yielding a_{12} = a_{21} = 1/2.
Figure 4.4: More general basis change by rotation of ϕ = π/6; the basis x1 = (1,0)^T, x2 = (0,1)^T is rotated into y1 = (1/2)(√3,1)^T, y2 = (1/2)(1,√3)^T.
Thus,
A = ( a b
      b a ) = (1/2) ( √3 1
                      1 √3 ). (4.89)
The coordinates transform according to the inverse transformation, which in this case can be represented by
A^{-1} = (1/(a² − b²)) ( a −b
                         −b a ) = ( √3 −1
                                    −1 √3 ). (4.90)
4.14 Mutually unbiased bases
Two orthonormal bases B = e1, . . . , en and B′ = f1, . . . , fn are said to be mutually unbiased if their scalar or inner products are
|⟨e_i|f_j⟩|² = 1/n (4.91)
for all 1 ≤ i, j ≤ n. Note without proof – that is, you do not have to be concerned that you need to understand this from what has been said so far – that “the elements of two or more mutually unbiased bases are mutually maximally apart.”
In physics, one seeks maximal sets of orthogonal bases that are maximally apart 13. Such maximal sets of bases are used in quantum information theory to assure maximal performance of certain protocols used in quantum cryptography, or for the production of quantum random sequences by beam splitters. They are essential for the practical exploitation of quantum complementary properties and resources. 13 W. K. Wootters and B. D. Fields. Optimal state-determination by mutually unbiased measurements. Annals of Physics, 191:363-381, 1989. DOI: 10.1016/0003-4916(89)90322-9. URL http://dx.doi.org/10.1016/0003-4916(89)90322-9; and Thomas Durt, Berthold-Georg Englert, Ingemar Bengtsson, and Karol Zyczkowski. On mutually unbiased bases. International Journal of Quantum Information, 8:535-640, 2010. DOI: 10.1142/S0219749910006502. URL http://dx.doi.org/10.1142/S0219749910006502
Schwinger presented an algorithm (see 14 for a proof) to construct a new mutually unbiased basis B′ from an existing orthogonal one. The proof idea is to create a new basis “in between” the old basis vectors by the following construction steps: 14 J. Schwinger. Unitary operator bases. In Proceedings of the National Academy of Sciences (PNAS), volume 46, pages 570-579, 1960. DOI: 10.1073/pnas.46.4.570. URL http://dx.doi.org/10.1073/pnas.46.4.570
(i) take the existing orthogonal basis and permute all of its elements by
“shift-permuting” its elements; that is, by changing the basis vectors
according to their enumeration i → i +1 for i = 1, . . . ,n −1, and n → 1; or
any other nontrivial (i.e., do not consider identity for any basis element)
permutation;
(ii) consider the (unitary) transformation (cf. Sections 4.13 and 4.23.3)
corresponding to the basis change from the old basis to the new, “per-
mutated” basis;
(iii) finally consider the (orthonormal) eigenvectors of this (unitary) transformation associated with the basis change; they form the vectors of a new basis B′. Together with B these two bases are mutually unbiased.
Consider, for example, the real plane R2, and the basis
B = e1, e2 ≡ (1,0), (0,1).
For a Mathematica(R) program, see http://tph.tuwien.ac.at/~svozil/publ/2012-schwinger.m
The shift-permutation [step (i)] brings B to a new, “shift-permuted” basis S; that is,
e1, e2 ↦ S = f1 = e2, f2 = e1 ≡ (0,1), (1,0).
The (unitary) basis transformation [step (ii)] between B and S can be constructed by a diagonal sum
U = f1† e1 + f2† e2 = e2† e1 + e1† e2
= |f1⟩⟨e1| + |f2⟩⟨e2| = |e2⟩⟨e1| + |e1⟩⟨e2|
≡ ( 0
    1 ) (1,0) + ( 1
                  0 ) (0,1)
≡ ( 0(1,0)
    1(1,0) ) + ( 1(0,1)
                 0(0,1) )
≡ ( 0 0
    1 0 ) + ( 0 1
              0 0 ) = ( 0 1
                        1 0 ). (4.92)
The set of eigenvectors [step (iii)] of this (unitary) basis transformation U
forms a new basis
B′ = (1/√2)(f1 − e1), (1/√2)(f2 + e2)
= (1/√2)(|f1⟩ − |e1⟩), (1/√2)(|f2⟩ + |e2⟩)
= (1/√2)(|e2⟩ − |e1⟩), (1/√2)(|e1⟩ + |e2⟩)
≡ (1/√2)(−1,1), (1/√2)(1,1). (4.93)
For a proof of mutual unbiasedness, just form the four inner products of one vector in B times one vector in B′, respectively.
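Schwinger's three steps can be replayed numerically for this two-dimensional example — build the shift permutation, take its orthonormal eigenvectors, and check Eq. (4.91) (a sketch; using `eigh` assumes the real symmetric case at hand):

```python
import numpy as np

n = 2
B = np.eye(n)                       # old basis e1, e2 as rows
U = np.roll(np.eye(n), 1, axis=0)   # steps (i)+(ii): shift permutation; here sigma_x
_, vecs = np.linalg.eigh(U)         # step (iii): orthonormal eigenvectors as columns

# Mutual unbiasedness, Eq. (4.91): |<e_i | f_j>|^2 = 1/n for all i, j
overlaps = np.abs(B @ vecs) ** 2
print(np.allclose(overlaps, 1 / n))  # True
```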
In three-dimensional complex vector space C3, a similar construction from the Cartesian standard basis B = e1, e2, e3 ≡ (1,0,0), (0,1,0), (0,0,1) yields
B′ ≡ (1/√3) { (1,1,1), ( (1/2)[√3 i − 1], (1/2)[−√3 i − 1], 1 ), ( (1/2)[−√3 i − 1], (1/2)[√3 i − 1], 1 ) }. (4.94)
Nobody knows how to systematically derive and construct a complete or
maximal set of mutually unbiased bases; nor is it clear in general, that is,
for arbitrary dimensions, how many bases there are in such sets.
4.15 Rank
The (column or row) rank, ρ(A), or rk(A), of a linear transformation A in an n-dimensional vector space V is the maximum number of linearly independent (column or, equivalently, row) vectors of the associated n-by-n square matrix A, represented by its entries a_{ij}.
This definition can be generalized to arbitrary m-by-n matrices A,
represented by its entries ai j . Then, the row and column ranks of A are
identical; that is,
row rk(A) = column rk(A) = rk(A). (4.95)
For a proof, consider Mackiw’s argument 15. 15 George Mackiw. A note on the equality of the column and row rank of a matrix. Mathematics Magazine, 68(4):285-286, 1995. ISSN 0025570X. URL http://www.jstor.org/stable/2690576
First we show that row rk(A) ≤ column rk(A) for any real (a generalization to complex vector space requires some adjustments) m-by-n matrix A. Let the vectors e1, e2, . . . , er with e_i ∈ R^n, 1 ≤ i ≤ r, be a basis spanning the row space of A; that is, all vectors that can be obtained by a linear combination of the m row vectors
(a11, a12, . . . , a1n),
(a21, a22, . . . , a2n),
...
(am1, am2, . . . , amn)
of A can also be obtained as a linear combination of e1,e2, . . . ,er . Note that
r ≤ m.
Now form the column vectors Ae_i^T for 1 ≤ i ≤ r; that is, Ae_1^T, Ae_2^T, . . . , Ae_r^T, via the usual rules of matrix multiplication. Let us prove that these resulting column vectors Ae_i^T are linearly independent.
Suppose they were not (proof by contradiction). Then, for some scalars c1, c2, . . . , cr ∈ R,
c1 Ae_1^T + c2 Ae_2^T + . . . + cr Ae_r^T = A ( c1 e_1^T + c2 e_2^T + . . . + cr e_r^T ) = 0
without all c_i’s vanishing.
That is, v = c1 e_1^T + c2 e_2^T + . . . + cr e_r^T must be in the null space of A defined by all vectors x with Ax = 0, since A(v) = 0. (In this case the inner (Euclidean) product of v with all the rows of A must vanish.) But since the e_i’s also form a basis of the row space, v^T is also some vector in the row space of A. The linear independence of the basis elements e1, e2, . . . , er of the row space of A guarantees that all the coefficients c_i have to vanish; that is, c1 = c2 = ··· = cr = 0.
At the same time, as for every vector x ∈ R^n, Ax is a linear combination of the column vectors
( a11
  a21
  ⋮
  am1 ), ( a12
           a22
           ⋮
           am2 ), ··· , ( a1n
                          a2n
                          ⋮
                          amn ),
the r linearly independent vectors Ae_1^T, Ae_2^T, . . . , Ae_r^T are all linear combinations of the column vectors of A. Thus, they are in the column space of A. Hence, r ≤ column rk(A). And, as r = row rk(A), we obtain row rk(A) ≤ column rk(A).
By considering the transposed matrix A^T, and by an analogous argument, we obtain that row rk(A^T) ≤ column rk(A^T). But row rk(A^T) = column rk(A) and column rk(A^T) = row rk(A), and thus row rk(A^T) = column rk(A) ≤ column rk(A^T) = row rk(A). Finally, by considering both estimates row rk(A) ≤ column rk(A) as well as column rk(A) ≤ row rk(A), we obtain that row rk(A) = column rk(A).
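The equality of row and column rank can be spot-checked numerically via rk(A) = rk(A^T); a sketch with a random rectangular matrix of known rank (the seed and sizes are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
# A 5-by-7 matrix factored through R^3, so its rank is (generically) 3
A = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 7))

row_rank = np.linalg.matrix_rank(A.T)  # rank of the row space
col_rank = np.linalg.matrix_rank(A)    # rank of the column space
print(row_rank == col_rank)            # True
```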
4.16 Determinant
4.16.1 Definition
Suppose A = ai j is the n-by-n square matrix representation of a linear
transformation A in an n-dimensional vector space V. We shall define its
determinant recursively.
First, a minor Mi j of an n-by-n square matrix A is defined to be the
determinant of the (n −1)× (n −1) submatrix that remains after the entire
i th row and j th column have been deleted from A.
A cofactor A_{ij} of an n-by-n square matrix A is defined in terms of its associated minor by
A_{ij} = (−1)^{i+j} M_{ij}. (4.96)
The determinant of a square matrix A, denoted by detA or |A|, is a scalar recursively defined by
detA = ∑_{j=1}^{n} a_{ij} A_{ij} = ∑_{i=1}^{n} a_{ij} A_{ij} (4.97)
for any i (row expansion) or j (column expansion), with i, j = 1, . . . , n. For 1×1 matrices (i.e., scalars), detA = a11.
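The recursive definition (4.96)-(4.97) translates directly into code; a sketch expanding along the first row (function names are mine; a library routine would be preferred in practice):

```python
import numpy as np

def minor(A, i, j):
    """Submatrix of A with row i and column j deleted, cf. the definition of M_ij."""
    return np.delete(np.delete(A, i, axis=0), j, axis=1)

def det(A):
    """Determinant by cofactor expansion along the first row, Eq. (4.97)."""
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    # cofactor (-1)^{i+j} M_ij with i = 0
    return sum((-1) ** j * A[0, j] * det(minor(A, 0, j)) for j in range(n))

A = np.array([[1.0, 2.0], [3.0, 4.0]])
print(det(A))                                # -2.0
print(np.isclose(det(A), np.linalg.det(A)))  # True
```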
4.16.2 Properties
The following properties of determinants are mentioned without proof:
(i) If A and B are square matrices of the same order, then detAB = (detA)(detB).
(ii) If either two rows or two columns are exchanged, then the determinant
is multiplied by a factor “−1.”
(iii) det(AT ) = detA.
(iv) The determinant detA of a matrix A is non-zero if and only if A is
invertible. In particular, if A is not invertible, detA = 0. If A has an
inverse matrix A−1, then det(A−1) = (detA)−1.
(v) Multiplication of any row or column with a factor α results in a deter-
minant which is α times the original determinant.
4.17 Trace
4.17.1 Definition
The German word for trace is Spur.
The trace of an n-by-n square matrix A = ai j , denoted by TrA, is a scalar
defined to be the sum of the elements on the main diagonal (the diagonal
from the upper left to the lower right) of A; that is (also in Dirac’s bra and
ket notation),
Tr A = a11 + a22 + ··· + ann = ∑_{i=1}^{n} a_{ii} = ∑_{i=1}^{n} ⟨i|A|i⟩. (4.98)
In quantum mechanics, traces can be realized via an orthonormal basis B = e1, . . . , en by “sandwiching” an operator A between all basis elements – thereby effectively taking the diagonal components of A with respect to the basis B – and summing over all these scalar components; that is,
Tr A = ∑_{i=1}^{n} ⟨e_i|A|e_i⟩. (4.99)
4.17.2 Properties
The following properties of traces are mentioned without proof:
(i) Tr(A+B) = TrA+TrB ;
(ii) Tr(αA) =αTrA, with α ∈C;
(iii) Tr(AB) = Tr(B A), hence the trace of the commutator vanishes; that is,
Tr([A,B ]) = 0;
(iv) TrA = TrAT ;
(v) Tr(A⊗B) = (TrA)(TrB);
(vi) the trace is the sum of the eigenvalues of a normal operator;
(vii) det(e A) = eTrA ;
(viii) the trace is the derivative of the determinant at the identity;
(ix) the complex conjugate of the trace of an operator is equal to the trace of its adjoint; that is, (TrA)∗ = Tr(A†);
(x) the trace is invariant under rotations of the basis and (because of commutativity of scalar addition) under cyclic permutations.
A trace class operator is a compact operator for which a trace is finite
and independent of the choice of basis.
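Two of the listed properties — the cyclicity (iii) and det(e^A) = e^{Tr A} (vii) — can be verified numerically; a sketch with random matrices (an illustrative choice, not from the text):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

# (iii) cyclicity: Tr(AB) = Tr(BA), hence Tr([A, B]) = 0
print(np.isclose(np.trace(A @ B), np.trace(B @ A)))             # True
# (vii) det(e^A) = e^{Tr A}
print(np.isclose(np.linalg.det(expm(A)), np.exp(np.trace(A))))  # True
```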
4.18 Adjoint
4.18.1 Definition
Let V be a vector space and let y be any element of its dual space V∗. For any linear transformation A, consider the bilinear functional y′(x) = [x, y′] = [Ax, y]. Let the adjoint transformation A∗ be defined by
[x, A∗y] = [Ax, y]. (4.100)
Here [·, ·] is the bilinear functional, not the commutator.
In real inner product spaces,
[x,AT y] = [Ax,y]. (4.101)
In complex inner product spaces,
[x,A†y] = [Ax,y]. (4.102)
4.18.2 Properties
We mention without proof that the adjoint operator is a linear operator.
Furthermore, 0† = 0, 1† = 1, (A+B)† = A† +B†, (αA)† = αA†, (AB)† = B†A†,
and (A−1)† = (A†)−1; as well as (in finite dimensional spaces)
A†† =A. (4.103)
4.18.3 Matrix notation
In matrix notation and in complex vector space with the dot product, note that there is a correspondence with the inner product (cf. page 50) so that, for all z ∈ V and for all x ∈ V, there exists a unique y ∈ V with
[Ax, z] = ⟨Ax | y⟩ = ⟨y | Ax⟩ = y_i A_{ij} x_j = y_i (A^T)_{ji} x_j = x A^T y,
[x, A†z] = ⟨x | A†y⟩ = x_i (A†)_{ij} y_j = x A† y, (4.104)
and hence
A† = (Ā)^T, or (A†)_{ij} = Ā_{ji}. (4.105)
In words: in matrix notation, the adjoint transformation is just the transpose of the complex conjugate of the original matrix.
4.19 Self-adjoint transformation
The following definition yields some analogy to real numbers as compared to complex numbers (“a complex number z is real if z = z̄”), expressed in terms of operators on a complex vector space.
An operator A on a linear vector space V is called self-adjoint, if
A∗ = A. (4.106)
In real inner product spaces, self-adjoint operators are called symmetric, since they are symmetric with respect to transposition; that is,
A∗ = A^T = A. (4.107)
In complex inner product spaces, self-adjoint operators are called Hermitian, since they are identical with respect to Hermitian conjugation (transposition of the matrix and complex conjugation of its entries); that is,
A∗ = A† = A. (4.108)
In what follows, we shall consider only the latter case and identify self-adjoint operators with Hermitian ones. In terms of matrices, a matrix A corresponding to an operator A in some fixed basis is self-adjoint if
A† ≡ (Ā_{ij})^T = Ā_{ji} = A_{ij} ≡ A. (4.109)
That is, suppose A_{ij} is the matrix representation corresponding to a linear transformation A in some basis B, then the Hermitian matrix A∗ = A† to the dual basis B∗ is (Ā_{ij})^T.
For the sake of an example, consider again the Pauli spin matrices
σ1 = σx = ( 0 1
            1 0 ),
σ2 = σy = ( 0 −i
            i 0 ),
σ3 = σz = ( 1 0
            0 −1 ), (4.110)
which, together with unity, i.e., I2 = diag(1,1), are all self-adjoint.
The following operators are not self-adjoint:
( 0 1
  0 0 ), ( 1 1
           0 0 ), ( 1 0
                    i 0 ), ( 0 i
                             i 0 ). (4.111)
4.20 Positive transformation
A linear transformation A on an inner product space V is positive, that
is in symbols A ≥ 0, if it is self-adjoint, and if ⟨Ax | x⟩ ≥ 0 for all x ∈ V. If
⟨Ax | x⟩ = 0 implies x = 0, A is called strictly positive.
4.21 Permutation
Permutation (matrices) are the “classical analogues” 16 of unitary transformations (matrices), which will be introduced next. The permutation matrices are defined by the requirement that they only contain a single nonvanishing entry “1” per row and column; all the other row and column entries vanish (“0”). 16 David N. Mermin. Lecture notes on quantum computation. 2002-2008. URL http://people.ccmr.cornell.edu/~mermin/qcomp/CS483.html; and David N. Mermin. Quantum Computer Science. Cambridge University Press, Cambridge, 2007. ISBN 9780521876582. URL http://people.ccmr.cornell.edu/~mermin/qcomp/CS483.html
For example, the matrices In = diag(1, . . . , 1) (n times), or
σ1 = ( 0 1
       1 0 ), ( 0 1 0
                1 0 0
                0 0 1 )
are permutation matrices.
Note that from the definition and from matrix multiplication it follows that, if P is a permutation matrix, then P P^T = P^T P = I_n. That is, P^T represents the inverse element of P.
Note further that any permutation matrix can be interpreted in terms of row and column vectors. The set of all these row and column vectors constitutes the Cartesian standard basis of n-dimensional vector space, with permuted elements.
Note also that, if P and Q are permutation matrices, so are PQ and QP. The set of all n! permutation (n×n)-matrices corresponding to permutations of n elements of 1, 2, . . . , n forms the symmetric group S_n, with I_n being the identity element.
4.22 Orthonormal (orthogonal) transformations
An orthonormal or orthogonal transformation R is a linear transformation whose corresponding square matrix R has real-valued entries and mutually orthogonal, normalized row (or, equivalently, column) vectors. As a consequence,
R R^T = R^T R = I, or R^{-1} = R^T. (4.112)
If detR = 1, R corresponds to a rotation. If detR = −1, R corresponds to a
rotation and a reflection. A reflection is an isometry (a distance preserving
map) with a hyperplane as set of fixed points.
Orthonormal transformations R are “real valued cases” of the more general unitary transformations discussed next. They preserve a symmetric inner product; that is, ⟨Rx | Ry⟩ = ⟨x | y⟩ for all x, y ∈ V.
As a two-dimensional example for rotations in the plane R2, take the rotation matrix in Eq. (4.86) representing a rotation of the basis by an angle ϕ.
Permutation matrices represent orthonormal transformations.
4.23 Unitary transformations and isometries
For proofs and additional information see §73 in Paul R. Halmos. Finite-Dimensional Vector Spaces. Springer, New York, Heidelberg, Berlin, 1974.
4.23.1 Definition
Note that a complex number z has absolute value one if z = 1/z̄, or z z̄ = 1. In analogy to this “modulus one” behavior, consider unitary transformations, or, synonymously, (one-to-one) isometries U, for which
U∗ = U† = U^{-1}, or U U† = U† U = I. (4.113)
Alternatively, we mention without proof that the following conditions are
equivalent:
(i) ⟨Ux |Uy⟩ = ⟨x | y⟩ for all x,y ∈V;
(ii) ‖Ux‖ = ‖x‖ for all x ∈V;
Unitary transformations can also be defined via permutations preserving the scalar product. That is, functions such as f : x ↦ x′ = αx with α ≠ e^{iϕ}, ϕ ∈ R, do not correspond to a unitary transformation in a one-dimensional Hilbert space, as the scalar product f : ⟨x|y⟩ ↦ ⟨x′|y′⟩ = |α|²⟨x|y⟩ is not preserved; whereas if α has modulus one, that is, with α = e^{iϕ}, ϕ ∈ R, |α|² = 1, the scalar product is preserved. Thus, u : x ↦ x′ = e^{iϕ}x, ϕ ∈ R, represents a unitary transformation.
4.23.2 Characterization of change of orthonormal basis
Let B = f1, f2, . . . , fn be an orthonormal basis of an n-dimensional inner
product space V. If U is an isometry, then UB= Uf1,Uf2, . . . ,Ufn is also an
orthonormal basis of V. (The converse is also true.)
4.23.3 Characterization in terms of orthonormal basis
A complex matrix U is unitary if and only if its row (or column) vectors
form an orthonormal basis.
This can be readily verified 17 by writing U in terms of two orthonormal bases B = e1, e2, . . . , en and B′ = f1, f2, . . . , fn as
U_{ef} = ∑_{i=1}^{n} e_i† f_i = ∑_{i=1}^{n} |e_i⟩⟨f_i|. (4.114)
17 J. Schwinger. Unitary operator bases. In Proceedings of the National Academy of Sciences (PNAS), volume 46, pages 570-579, 1960. DOI: 10.1073/pnas.46.4.570. URL http://dx.doi.org/10.1073/pnas.46.4.570
Together with U_{fe} = ∑_{i=1}^{n} f_i† e_i = ∑_{i=1}^{n} |f_i⟩⟨e_i| we form
e_k U_{ef} = e_k ∑_{i=1}^{n} e_i† f_i = ∑_{i=1}^{n} (e_k e_i†) f_i = ∑_{i=1}^{n} δ_{ki} f_i = f_k. (4.115)
In a similar way we find that
U_{ef} f_k† = e_k†,
f_k U_{fe} = e_k,
U_{fe} e_k† = f_k†. (4.116)
Moreover,
U_{ef} U_{fe} = ∑_{i=1}^{n} ∑_{j=1}^{n} (|e_i⟩⟨f_i|)(|f_j⟩⟨e_j|) = ∑_{i=1}^{n} ∑_{j=1}^{n} |e_i⟩ δ_{ij} ⟨e_j| = ∑_{i=1}^{n} |e_i⟩⟨e_i| = I. (4.117)
In a similar way we obtain U_{fe} U_{ef} = I. Since
U_{ef}† = ∑_{i=1}^{n} f_i† (e_i†)† = ∑_{i=1}^{n} f_i† e_i = U_{fe}, (4.118)
we obtain that U_{ef}† = (U_{ef})^{-1} and U_{fe}† = (U_{fe})^{-1}.
Note also that the composition holds; that is, Ue f U f g =Ueg .
If we identify one of the bases B and B′ with the Cartesian standard basis, it becomes clear that, for instance, every unitary operator U can be written in terms of an orthonormal basis B = {f1, f2, . . . , fn} by “stacking” the vectors of that orthonormal basis “on top of each other”; that is,

U = ( f1 )
    ( f2 )
    ( ⋮  )
    ( fn ) .   (4.119)

For proofs and additional information see §5.11.3, Theorem 5.1.5 and the subsequent Corollary in Satish D. Joglekar. Mathematical Physics: The Basics. CRC Press, Boca Raton, Florida, 2007.
Thereby the vectors of the orthonormal basis B serve as the rows of U.
Also, every unitary operator U can be written in terms of an orthonor-
mal basis B = f1, f2, . . . , fn by “pasting” the (transposed) vectors of that
orthonormal basis “one after another;” that is
U = ( f1ᵀ, f2ᵀ, · · · , fnᵀ ).   (4.120)
Thereby the (transposed) vectors of the orthonormal basis B serve as the
columns of U.
Note also that any permutation of vectors in B would also yield unitary
matrices.
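The “stacking” construction can be checked numerically; a minimal sketch using numpy, with an illustrative orthonormal basis of R³ (the particular basis is an arbitrary choice, not taken from the text):

```python
import numpy as np

# Stacking the vectors of an orthonormal basis as the rows of U
# (as in Eq. 4.119) yields a unitary matrix.
f1 = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)
f2 = np.array([-1.0, 1.0, 0.0]) / np.sqrt(2)
f3 = np.array([0.0, 0.0, 1.0])

U = np.vstack([f1, f2, f3])      # rows of U are the basis vectors

assert np.allclose(U @ U.conj().T, np.eye(3))   # U U^dagger = I
assert np.allclose(U.conj().T @ U, np.eye(3))   # U^dagger U = I
```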
4.24 Perpendicular projectors

For proofs and additional information see §42, §75 & §76 in Paul R. Halmos. Finite-dimensional Vector Spaces. Springer, New York, Heidelberg, Berlin, 1974.
Perpendicular projections are associated with a direct sum decomposition of
the vector space V; that is,
M⊕M⊥ =V. (4.121)
Let E = PM denote the projector on M along M⊥. The following proposi-
tions are stated without proof.
A linear transformation E is a perpendicular projector if and only if E = E² = E∗.

Perpendicular projectors are positive linear transformations, with ‖Ex‖ ≤ ‖x‖ for all x ∈ V. Conversely, if a linear transformation E is idempotent, that is, E² = E, and ‖Ex‖ ≤ ‖x‖ for all x ∈ V, then E is self-adjoint; that is, E = E∗.
Recall that for real inner product spaces, the self-adjoint operator can be identified with a symmetric operator E = Eᵀ, whereas for complex inner product spaces, the self-adjoint operator can be identified with a Hermitian operator E = E†.

If E1, E2, . . . , En are (perpendicular) projectors, then a necessary and sufficient condition that E = E1 + E2 + · · · + En be a (perpendicular) projector is that EiEj = δij Ei = δij Ej; and, in particular, EiEj = 0 whenever i ≠ j; that is, that all Ei are pairwise orthogonal.
For a start, consider just two projectors E1 and E2. Then we can assert
that E1 +E2 is a projector if and only if E1E2 =E2E1 = 0.
Because, for E1 +E2 to be a projector, it must be idempotent; that is,
(E1 + E2)² = (E1 + E2)(E1 + E2) = E1² + E1E2 + E2E1 + E2² = E1 + E2.   (4.122)
As a consequence, the cross-product terms in (4.122) must vanish; that is,
E1E2 +E2E1 = 0. (4.123)
Multiplication of (4.123) with E1 from the left and from the right yields
E1E1E2 + E1E2E1 = 0,
E1E2 + E1E2E1 = 0; and
E1E2E1 + E2E1E1 = 0,
E1E2E1 + E2E1 = 0.   (4.124)
Subtraction of the resulting pair of equations yields
E1E2 −E2E1 = [E1,E2] = 0, (4.125)
or
E1E2 =E2E1. (4.126)
Hence, in order for the cross-product terms in Eqs. (4.122) and (4.123) to vanish, we must have
E1E2 =E2E1 = 0. (4.127)
Proving the reverse statement is straightforward, since (4.127) implies
(4.122).
A generalisation by induction to more than two projectors is straight-
forward, since, for instance, (E1 +E2)E3 = 0 implies E1E3 +E2E3 = 0.
Multiplication with E1 from the left yields E1E1E3 +E1E2E3 =E1E3 = 0.
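The two-projector case can be illustrated numerically; the following minimal sketch uses projectors onto two orthogonal unit vectors (an arbitrary illustrative choice):

```python
import numpy as np

# For projectors onto mutually orthogonal one-dimensional subspaces,
# E1 E2 = E2 E1 = 0, and the sum E1 + E2 is again a projector.
x1 = np.array([1.0, 0.0, -1.0]) / np.sqrt(2)
x2 = np.array([0.0, 1.0, 0.0])

E1 = np.outer(x1, x1)
E2 = np.outer(x2, x2)
S = E1 + E2

assert np.allclose(E1 @ E2, 0)   # pairwise orthogonality
assert np.allclose(S @ S, S)     # idempotence of the sum
```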
4.25 Proper value or eigenvalue

For proofs and additional information see §54 in Paul R. Halmos. Finite-dimensional Vector Spaces. Springer, New York, Heidelberg, Berlin, 1974.
4.25.1 Definition
A scalar λ is a proper value or eigenvalue, and a non-zero vector x is a
proper vector or eigenvector of a linear transformation A if
Ax =λx =λIx. (4.128)
In an n-dimensional vector space V, the set of eigenvalues and the set of associated eigenvectors {λ1, . . . , λk}, {x1, . . . , xn} of a linear transformation A form an eigensystem of A.
4.25.2 Determination
Since the eigenvalues and eigenvectors are those scalars λ and vectors x for which Ax = λx, this equation can be rewritten with a zero vector on the right side; that is (I = diag(1, . . . , 1) stands for the identity matrix),
(A−λI)x = 0. (4.129)
Suppose that A − λI is invertible. Then we could formally write x = (A − λI)⁻¹ 0; hence x must be the zero vector.
We are not interested in this trivial solution of Eq. (4.129). Therefore,
suppose that, contrary to the previous assumption, A−λI is not invert-
ible. We have mentioned earlier (without proof) that this implies that its
determinant vanishes; that is,
det(A−λI) = |A−λI| = 0. (4.130)
This is called the secular determinant; and the corresponding equation after expansion of the determinant is called the secular equation or characteristic equation. Once the eigenvalues, that is, the roots (i.e., the solutions) of this equation, are determined, the eigenvectors can be obtained one by one by inserting these eigenvalues into Eq. (4.129).
For the sake of an example, consider the matrix

A = ( 1 0 1 )
    ( 0 1 0 )
    ( 1 0 1 ) .   (4.131)
The secular determinant is

| 1−λ   0    1  |
|  0   1−λ   0  | = 0,
|  1    0   1−λ |

yielding the characteristic equation (1−λ)³ − (1−λ) = (1−λ)[(1−λ)² − 1] = (1−λ)[λ² − 2λ] = −λ(1−λ)(2−λ) = 0, and therefore three eigenvalues, λ1 = 0, λ2 = 1, and λ3 = 2, which are the roots of λ(1−λ)(2−λ) = 0.
Let us now determine the eigenvectors of A, based on the eigenvalues.
Insertion of λ1 = 0 into Eq. (4.129) leaves A unchanged, A − 0·I = A, and yields

( 1 0 1 ) ( x1 )   ( 0 )
( 0 1 0 ) ( x2 ) = ( 0 ) ;   (4.132)
( 1 0 1 ) ( x3 )   ( 0 )

therefore x1 + x3 = 0 and x2 = 0. We are free to choose any (nonzero) x1 = −x3, but if we are interested in normalized eigenvectors, we obtain x1 = (1/√2)(1, 0, −1)ᵀ.
Insertion of λ2 = 1 into Eq. (4.129) yields (A − I)x = 0; that is,

( 0 0 1 ) ( x1 )   ( 0 )
( 0 0 0 ) ( x2 ) = ( 0 ) ;   (4.133)
( 1 0 0 ) ( x3 )   ( 0 )

therefore x1 = x3 = 0 and x2 is arbitrary. We are again free to choose any (nonzero) x2, but if we are interested in normalized eigenvectors, we obtain x2 = (0, 1, 0)ᵀ.
Insertion of λ3 = 2 into Eq. (4.129) yields (A − 2I)x = 0; that is,

( −1  0  1 ) ( x1 )   ( 0 )
(  0 −1  0 ) ( x2 ) = ( 0 ) ;   (4.134)
(  1  0 −1 ) ( x3 )   ( 0 )

therefore −x1 + x3 = 0 and x2 = 0. We are free to choose any (nonzero) x1 = x3, but if we are once more interested in normalized eigenvectors, we obtain x3 = (1/√2)(1, 0, 1)ᵀ.
Note that the eigenvectors are mutually orthogonal. We can construct
the corresponding orthogonal projectors by the dyadic product of the
eigenvectors; that is,
E1 = x1 ⊗ x1ᵀ = (1/2)(1, 0, −1)ᵀ(1, 0, −1) = (1/2) (  1 0 −1 )
                                                   (  0 0  0 )
                                                   ( −1 0  1 ) ,

E2 = x2 ⊗ x2ᵀ = (0, 1, 0)ᵀ(0, 1, 0) = ( 0 0 0 )
                                      ( 0 1 0 )
                                      ( 0 0 0 ) ,

E3 = x3 ⊗ x3ᵀ = (1/2)(1, 0, 1)ᵀ(1, 0, 1) = (1/2) ( 1 0 1 )
                                                 ( 0 0 0 )
                                                 ( 1 0 1 ) .   (4.135)
Note also that A can be written as the sum of the products of the eigenvalues with the associated projectors; that is (here, E stands for the corresponding matrix), A = 0E1 + 1E2 + 2E3. Also, the projectors are mutually orthogonal – that is, E1E2 = E1E3 = E2E3 = 0 – and add up to unity; that is, E1 + E2 + E3 = I.

If the eigenvalues obtained are not distinct and thus some eigenvalues are degenerate, the associated eigenvectors traditionally – that is, by convention and not necessity – are chosen to be mutually orthogonal. A more formal motivation will come from the spectral theorem below.
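The worked example can be cross-checked numerically; this sketch assumes numpy's eigh, which returns the eigenvalues of a symmetric matrix in ascending order:

```python
import numpy as np

# Eigensystem of the example matrix A and its spectral form
# A = sum_i lambda_i E_i with mutually orthogonal projectors E_i.
A = np.array([[1.0, 0, 1], [0, 1, 0], [1, 0, 1]])
eigvals, eigvecs = np.linalg.eigh(A)     # ascending: 0, 1, 2

# projectors from the dyadic products of the normalized eigenvectors
projectors = [np.outer(v, v) for v in eigvecs.T]
spectral_sum = sum(l * E for l, E in zip(eigvals, projectors))

assert np.allclose(eigvals, [0.0, 1.0, 2.0])
assert np.allclose(spectral_sum, A)              # A = 0 E1 + 1 E2 + 2 E3
assert np.allclose(sum(projectors), np.eye(3))   # resolution of unity
```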
For the sake of an example, consider the matrix

B = ( 1 0 1 )
    ( 0 2 0 )
    ( 1 0 1 ) .   (4.136)
The secular determinant is

| 1−λ   0    1  |
|  0   2−λ   0  | = 0,
|  1    0   1−λ |

which yields the characteristic equation (2−λ)(1−λ)² − (2−λ) = (2−λ)[(1−λ)² − 1] = −λ(2−λ)² = 0, and therefore just two eigenvalues, λ1 = 0 and λ2 = 2, which are the roots of λ(2−λ)² = 0.
Let us now determine the eigenvectors of B , based on the eigenvalues.
Insertion of λ1 = 0 into Eq. (4.129) yields Bx = 0; that is,

( 1 0 1 ) ( x1 )   ( 0 )
( 0 2 0 ) ( x2 ) = ( 0 ) ;   (4.137)
( 1 0 1 ) ( x3 )   ( 0 )

therefore x1 + x3 = 0 and x2 = 0. Again we are free to choose any (nonzero) x1 = −x3, but if we are interested in normalized eigenvectors, we obtain x1 = (1/√2)(1, 0, −1)ᵀ.
Insertion of λ2 = 2 into Eq. (4.129) yields (B − 2I)x = 0; that is,

( −1 0  1 ) ( x1 )   ( 0 )
(  0 0  0 ) ( x2 ) = ( 0 ) ;   (4.138)
(  1 0 −1 ) ( x3 )   ( 0 )

therefore x1 = x3, while x2 is arbitrary. We are again free to choose any values of x1 = x3 and any x2. Take, for the sake of choice, the mutually orthogonal normalized eigenvectors x2,1 = (0, 1, 0)ᵀ and x2,2 = (1/√2)(1, 0, 1)ᵀ, which are also orthogonal to x1 = (1/√2)(1, 0, −1)ᵀ.
Note again that we can find the corresponding orthogonal projectors by
the dyadic product of the eigenvectors; that is, by
E1 = x1 ⊗ x1ᵀ = (1/2)(1, 0, −1)ᵀ(1, 0, −1) = (1/2) (  1 0 −1 )
                                                   (  0 0  0 )
                                                   ( −1 0  1 ) ,

E2,1 = x2,1 ⊗ x2,1ᵀ = (0, 1, 0)ᵀ(0, 1, 0) = ( 0 0 0 )
                                            ( 0 1 0 )
                                            ( 0 0 0 ) ,

E2,2 = x2,2 ⊗ x2,2ᵀ = (1/2)(1, 0, 1)ᵀ(1, 0, 1) = (1/2) ( 1 0 1 )
                                                       ( 0 0 0 )
                                                       ( 1 0 1 ) .   (4.139)
Note also that B can be written as the sum of the products of the eigenvalues with the associated projectors; that is (here, E stands for the corresponding matrix), B = 0E1 + 2(E2,1 + E2,2). Again, the projectors are mutually orthogonal – that is, E1E2,1 = E1E2,2 = E2,1E2,2 = 0 – and add up to unity; that is, E1 + E2,1 + E2,2 = I. This leads us to the much more general spectral theorem.
Another, extreme, example would be the unit matrix in n dimensions; that is, In = diag(1, . . . , 1) (n ones), which has an n-fold degenerate eigenvalue 1 corresponding to a solution of (1−λ)ⁿ = 0. The corresponding projection operator is In. [Note that (In)² = In, and thus In is a projector.] If one (somehow arbitrarily but conveniently) chooses a decomposition of unity In into projectors corresponding to the standard basis (any other orthonormal basis would do as well), then

In = diag(1, 0, 0, . . . , 0) + diag(0, 1, 0, . . . , 0) + · · · + diag(0, 0, 0, . . . , 1),   (4.140)

where all the matrices in the sum, each carrying a single nonvanishing entry “1” in its diagonal, are projectors. Note that

ei = |ei⟩ ≡ (0, . . . , 0, 1, 0, . . . , 0) (with i−1 zeros before the “1” and n−i zeros after it)
   ≡ diag(0, . . . , 0, 1, 0, . . . , 0)
   ≡ Ei.   (4.141)
The following theorems are enumerated without proofs.

If A is a self-adjoint transformation on an inner product space, then every proper value (eigenvalue) of A is real. If A is positive, or strictly positive, then every proper value of A is positive, or strictly positive, respectively.

Due to their idempotence EE = E, projectors have eigenvalues 0 or 1.

Every eigenvalue of an isometry has absolute value one.

If A is either a self-adjoint transformation or an isometry, then proper vectors of A belonging to distinct proper values are orthogonal.
4.26 Normal transformation
A transformation A is called normal if it commutes with its adjoint; that is,
[A,A∗] =AA∗−A∗A= 0.
It follows from their definition that Hermitian and unitary transformations are normal. Recall that on complex inner product spaces A∗ = A†. For Hermitian operators, A = A†, and thus [A, A†] = AA − AA = 0. For unitary operators, A† = A⁻¹, and thus [A, A†] = AA⁻¹ − A⁻¹A = I − I = 0.
We mention without proof that a normal transformation on a finite-
dimensional unitary space is (i) Hermitian, (ii) positive, (iii) strictly posi-
tive, (iv) unitary, (v) invertible, (vi) idempotent if and only if all its proper
values are (i) real, (ii) positive, (iii) strictly positive, (iv) of absolute value
one, (v) different from zero, (vi) equal to zero or one.
4.27 Spectrum

For proofs and additional information see §78 in Paul R. Halmos. Finite-dimensional Vector Spaces. Springer, New York, Heidelberg, Berlin, 1974.
4.27.1 Spectral theorem
Let V be an n-dimensional linear vector space. The spectral theorem states that, to every self-adjoint (more generally, normal) transformation A on an n-dimensional inner product space, there correspond real numbers, the spectrum λ1, λ2, . . . , λk of all the eigenvalues of A, and their associated orthogonal projectors E1, E2, . . . , Ek, where 0 < k ≤ n is a strictly positive integer, so that

(i) the λi are pairwise distinct,
(ii) the Ei are pairwise orthogonal and different from 0,
(iii) ∑_{i=1}^{k} Ei = In, and
(iv) A = ∑_{i=1}^{k} λi Ei is the spectral form of A.
4.27.2 Composition of the spectral form
If the spectrum of a Hermitian (or, more generally, normal) operator A is nondegenerate, that is, if k = n, then the i-th projector can be written as the dyadic or tensor product Ei = xi ⊗ xiᵀ of the i-th normalized eigenvector xi of A. In this case, the set of all normalized eigenvectors {x1, . . . , xn} is an orthonormal basis of the vector space V. If the spectrum of A is degenerate, then the projector can be chosen to be the orthogonal sum of projectors corresponding to orthogonal eigenvectors associated with the same eigenvalue.
Furthermore, for a Hermitian (or, more generally, normal) operator A, if 1 ≤ i ≤ k, then there exist polynomials with real coefficients, such as, for instance,

pi(t) = ∏_{1≤j≤k, j≠i} (t − λj)/(λi − λj),   (4.142)

so that pi(λj) = δij; moreover, for every such polynomial, pi(A) = Ei.
For a proof, it is not too difficult to show that pi(λi) = 1, since in this case in the product of fractions all numerators are equal to denominators; and that pi(λj) = 0 for j ≠ i, since some numerator in the product of fractions vanishes.
Now, substituting for t the spectral form A = ∑_{l=1}^{k} λl El of A, as well as decomposing unity in terms of the projectors El in the spectral form of A, that is, In = ∑_{l=1}^{k} El, yields

pi(A) = ∏_{1≤j≤k, j≠i} (A − λj In)/(λi − λj)
      = ∏_{1≤j≤k, j≠i} (∑_{l=1}^{k} λl El − λj ∑_{l=1}^{k} El)/(λi − λj)
      [because of the idempotence and pairwise orthogonality of the projectors El]
      = ∏_{1≤j≤k, j≠i} (∑_{l=1}^{k} El (λl − λj))/(λi − λj)
      = ∑_{l=1}^{k} El ∏_{1≤j≤k, j≠i} (λl − λj)/(λi − λj)
      = ∑_{l=1}^{k} El δli = Ei.   (4.143)
With the help of the polynomial pi(t) defined in Eq. (4.142), which requires knowledge of the eigenvalues, the spectral form of a Hermitian (or, more generally, normal) operator A can thus be rewritten as

A = ∑_{i=1}^{k} λi pi(A) = ∑_{i=1}^{k} λi ∏_{1≤j≤k, j≠i} (A − λj In)/(λi − λj).   (4.144)
That is, knowledge of all the eigenvalues entails construction of all the
projectors in the spectral decomposition of a normal transformation.
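A minimal numerical sketch of this construction, applied to the example matrix A of Eq. (4.131) (the helper function name is, of course, an arbitrary choice):

```python
import numpy as np

def projector_from_eigenvalues(A, eigenvalues, i):
    """Eq. (4.142): p_i(A) = prod_{j != i} (A - lambda_j I)/(lambda_i - lambda_j)."""
    n = A.shape[0]
    P = np.eye(n)
    for j, lj in enumerate(eigenvalues):
        if j != i:
            P = P @ (A - lj * np.eye(n)) / (eigenvalues[i] - lj)
    return P

A = np.array([[1.0, 0, 1], [0, 1, 0], [1, 0, 1]])
E1 = projector_from_eigenvalues(A, [0.0, 1.0, 2.0], 0)

# reproduces the projector onto the eigenspace of lambda_1 = 0
assert np.allclose(E1, 0.5 * np.array([[1, 0, -1], [0, 0, 0], [-1, 0, 1]]))
assert np.allclose(E1 @ E1, E1)   # idempotent, as a projector must be
```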
For the sake of an example, consider again the matrix

A = ( 1 0 1 )
    ( 0 1 0 )
    ( 1 0 1 )   (4.145)

and the associated eigensystem

{λ1, λ2, λ3}, {E1, E2, E3}
= {0, 1, 2}, { (1/2)(1 0 −1; 0 0 0; −1 0 1), (0 0 0; 0 1 0; 0 0 0), (1/2)(1 0 1; 0 0 0; 1 0 1) }.   (4.146)
The projectors associated with the eigenvalues, and in particular E1, can be obtained from the set of eigenvalues {0, 1, 2} by

p1(A) = ((A − λ2 I)/(λ1 − λ2)) ((A − λ3 I)/(λ1 − λ3))
      = ([(1 0 1; 0 1 0; 1 0 1) − 1·I]/(0 − 1)) ([(1 0 1; 0 1 0; 1 0 1) − 2·I]/(0 − 2))
      = (1/2) (0 0 1; 0 0 0; 1 0 0) (−1 0 1; 0 −1 0; 1 0 −1)
      = (1/2) (1 0 −1; 0 0 0; −1 0 1) = E1.
For the sake of another, degenerate, example consider again the matrix

B = ( 1 0 1 )
    ( 0 2 0 )
    ( 1 0 1 ) .   (4.148)
Again, the projectors E1 and E2 can be obtained from the set of eigenvalues {0, 2} by

p1(B) = (B − λ2 I)/(λ1 − λ2) = [(1 0 1; 0 2 0; 1 0 1) − 2·I]/(0 − 2) = (1/2)(1 0 −1; 0 0 0; −1 0 1) = E1,

p2(B) = (B − λ1 I)/(λ2 − λ1) = [(1 0 1; 0 2 0; 1 0 1) − 0·I]/(2 − 0) = (1/2)(1 0 1; 0 2 0; 1 0 1) = E2.   (4.149)
Note that, in accordance with the spectral theorem, E1E2 = 0, E1 + E2 = I, and 0·E1 + 2·E2 = B.
4.28 Functions of normal transformations
Suppose A = ∑_{i=1}^{k} λi Ei is a normal transformation in its spectral form. If f is an arbitrary complex-valued function defined at least at the eigenvalues of A, then a linear transformation f(A) can be defined by
f(A) = f(∑_{i=1}^{k} λi Ei) = ∑_{i=1}^{k} f(λi) Ei.   (4.150)

Note that, if f has a polynomial expansion, as analytic functions do, then the orthogonality and idempotence of the projectors Ei in the spectral form guarantee this kind of “linearization.”
For the definition of the “square root” of every positive operator A, consider

√A = ∑_{i=1}^{k} √λi Ei.   (4.151)

With this definition, (√A)² = √A √A = A.
4.29 Decomposition of operators

For proofs and additional information see §83 in Paul R. Halmos. Finite-dimensional Vector Spaces. Springer, New York, Heidelberg, Berlin, 1974.
4.29.1 Standard decomposition
In analogy to the decomposition of every complex number z = ℜz + iℑz with ℜz, ℑz ∈ R, every arbitrary transformation A on a finite-dimensional vector space can be decomposed into two Hermitian operators B and C such that

A = B + iC, with
B = (1/2)(A + A†),
C = (1/2i)(A − A†).   (4.152)
Proof by insertion; that is,

A = B + iC = (1/2)(A + A†) + i[(1/2i)(A − A†)],
B† = [(1/2)(A + A†)]† = (1/2)[A† + (A†)†] = (1/2)[A† + A] = B,
C† = [(1/2i)(A − A†)]† = −(1/2i)[A† − (A†)†] = −(1/2i)[A† − A] = C.   (4.153)
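A minimal numerical sketch of this decomposition for a random complex matrix (the random seed is an arbitrary choice):

```python
import numpy as np

# A = B + iC with B = (A + A^dagger)/2 and C = (A - A^dagger)/(2i),
# both of which come out Hermitian.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

B = (A + A.conj().T) / 2
C = (A - A.conj().T) / (2j)

assert np.allclose(B, B.conj().T)   # B Hermitian
assert np.allclose(C, C.conj().T)   # C Hermitian
assert np.allclose(B + 1j * C, A)   # reassembly
```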
4.29.2 Polar representation

In analogy to the polar representation of every complex number z = Re^{iϕ} with R, ϕ ∈ R, R > 0, 0 ≤ ϕ < 2π, every arbitrary transformation A on a finite-dimensional inner product space can be decomposed into a unique positive transformation P and an isometry U, such that A = UP. If A is invertible, then U is uniquely determined by A. A necessary and sufficient condition that A is normal is that UP = PU.
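The text does not construct the polar representation explicitly; a common way to obtain it is via the singular value decomposition, as sketched here (the SVD route is an assumption of this sketch, not the chapter's own construction):

```python
import numpy as np

# From A = W diag(s) Vh, set U = W Vh (an isometry) and
# P = Vh^dagger diag(s) Vh (positive semidefinite); then A = U P.
rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))

W, s, Vh = np.linalg.svd(A)
U = W @ Vh
P = Vh.conj().T @ np.diag(s) @ Vh

assert np.allclose(U @ P, A)
assert np.allclose(U @ U.conj().T, np.eye(3))      # U is an isometry
assert np.all(np.linalg.eigvalsh(P) >= -1e-12)     # P positive (up to round-off)
```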
4.29.3 Decomposition of isometries
Any unitary or orthogonal transformation in finite-dimensional inner
product space can be composed from a succession of two-parameter uni-
tary transformations in two-dimensional subspaces, and a multiplication
of a single diagonal matrix with elements of modulus one in an algorith-
mic, constructive and tractable manner. The method is similar to Gaussian
elimination and facilitates the parameterization of elements of the unitary
group in arbitrary dimensions (e.g., Ref. 18, Chapter 2). 18 F. D. Murnaghan. The Unitary and Rota-tion Groups. Spartan Books, Washington,D.C., 1962
It has been suggested to implement these group theoretic results by realizing interferometric analogues of any discrete unitary and Hermitian operator in a unified and experimentally feasible way by “generalized beam splitters” 19.

19 M. Reck, Anton Zeilinger, H. J. Bernstein, and P. Bertani. Experimental realization of any discrete unitary operator. Physical Review Letters, 73:58–61, 1994. DOI: 10.1103/PhysRevLett.73.58. URL http://dx.doi.org/10.1103/PhysRevLett.73.58; and M. Reck and Anton Zeilinger. Quantum phase tracing of correlated photons in optical multiports. In F. De Martini, G. Denardo, and Anton Zeilinger, editors, Quantum Interferometry, pages 170–177, Singapore, 1994. World Scientific.
4.29.4 Singular value decomposition
The singular value decomposition (SVD) of an (m ×n) matrix A is a factor-
ization of the form
A=UΣV, (4.154)
where U is a unitary (m ×m) matrix (i.e. an isometry), V is a unitary (n ×n)
matrix, and Σ is a unique (m ×n) diagonal matrix with nonnegative real
numbers on the diagonal; that is,
Σ = ( diag(σ1, . . . , σr)   0 )
    ( 0                      0 ) .   (4.155)

The entries σ1 ≥ σ2 ≥ · · · ≥ σr > 0 of Σ are called the singular values of A. No proof is presented here.
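In numerical practice the factorization is available directly; a minimal numpy sketch with an arbitrary illustrative matrix (numpy's returned factor Vh plays the role of the V in Eq. (4.154)):

```python
import numpy as np

A = np.array([[1.0, 0, 1], [0, 2, 0]])   # an illustrative (2 x 3) matrix
U, s, Vh = np.linalg.svd(A, full_matrices=False)

assert np.allclose(U @ np.diag(s) @ Vh, A)   # A = U Sigma V
assert np.all(s[:-1] >= s[1:])               # sigma_1 >= sigma_2 >= ...
assert np.all(s >= 0)                        # nonnegative singular values
```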
4.29.5 Schmidt decomposition of the tensor product of two vectors

Let U and V be two linear vector spaces of dimension n ≥ m and m, respectively. Then, for any vector z ∈ U ⊗ V in the tensor product space, there exist orthonormal basis sets of vectors {u1, . . . , un} ⊂ U and {v1, . . . , vm} ⊂ V such that z = ∑_{i=1}^{m} σi ui ⊗ vi, where the σi are nonnegative scalars and the set of scalars is uniquely determined by z.
Equivalently 20, suppose that |z⟩ is some tensor product contained in the set of all tensor products of vectors U ⊗ V of two linear vector spaces U and V. Then there exist orthonormal vectors |ui⟩ ∈ U and |vi⟩ ∈ V so that

|z⟩ = ∑_i σi |ui⟩|vi⟩,   (4.156)

where the σi are nonnegative scalars; if |z⟩ is normalized, then the σi satisfy ∑_i σi² = 1; they are called the Schmidt coefficients.

20 M. A. Nielsen and I. L. Chuang. Quantum Computation and Quantum Information. Cambridge University Press, Cambridge, 2000.
For a proof by reduction to the singular value decomposition, let |i⟩ and |j⟩ be any two fixed orthonormal bases of U and V, respectively. Then |z⟩ can be expanded as |z⟩ = ∑_{ij} a_ij |i⟩|j⟩, where the a_ij can be interpreted as the components of a matrix A. A can then be subjected to a singular value decomposition A = UΣV, or, written in index form [note that Σ = diag(σ1, . . . , σn) is a diagonal matrix], a_ij = ∑_l u_il σl v_lj; and hence |z⟩ = ∑_{ijl} u_il σl v_lj |i⟩|j⟩. Finally, by identifying |ul⟩ = ∑_i u_il |i⟩ as well as |vl⟩ = ∑_j v_lj |j⟩ one obtains the Schmidt decomposition (4.156). Since u_il and v_lj represent unitary matrices, and because |i⟩ as well as |j⟩ are orthonormal, the newly formed vectors |ul⟩ as well as |vl⟩ form orthonormal bases as well. The sum of squares of the σi is one if |z⟩ is a unit vector, because (note that the σi are real-valued) ⟨z|z⟩ = 1 = ∑_{lm} σl σm ⟨ul|um⟩⟨vl|vm⟩ = ∑_{lm} σl σm δ_lm = ∑_l σl².
Note that the Schmidt decomposition cannot, in general, be extended to more than two factors. Note also that the Schmidt decomposition need not be unique 21; in particular if some of the Schmidt coefficients σi are equal. For the sake of an example of nonuniqueness of the Schmidt decomposition, take, for instance, the representation of the Bell state with the two bases

{|e1⟩ ≡ (1, 0), |e2⟩ ≡ (0, 1)} and {|f1⟩ ≡ (1/√2)(1, 1), |f2⟩ ≡ (1/√2)(−1, 1)}   (4.157)

21 Artur Ekert and Peter L. Knight. Entangled quantum systems and the Schmidt decomposition. American Journal of Physics, 63(5):415–423, 1995. DOI: 10.1119/1.17904. URL http://dx.doi.org/10.1119/1.17904
as follows:
|Ψ−⟩ = (1/√2)(|e1⟩|e2⟩ − |e2⟩|e1⟩)
     ≡ (1/√2)[(1(0, 1), 0(0, 1)) − (0(1, 0), 1(1, 0))] = (1/√2)(0, 1, −1, 0);
|Ψ−⟩ = (1/√2)(|f1⟩|f2⟩ − |f2⟩|f1⟩)
     ≡ (1/(2√2))[(1(−1, 1), 1(−1, 1)) − (−1(1, 1), 1(1, 1))]
     ≡ (1/(2√2))[(−1, 1, −1, 1) − (−1, −1, 1, 1)] = (1/√2)(0, 1, −1, 0).   (4.158)
4.30 Commutativity

For proofs and additional information see §79 & §84 in Paul R. Halmos. Finite-dimensional Vector Spaces. Springer, New York, Heidelberg, Berlin, 1974.
If A = ∑_{i=1}^{k} λi Ei is the spectral form of a self-adjoint transformation A on a finite-dimensional inner product space, then a necessary and sufficient condition (“if and only if = iff”) that a linear transformation B commutes with A is that it commutes with each Ei, 1 ≤ i ≤ k.

Sufficiency is derived easily: whenever B commutes with all the projectors Ei, 1 ≤ i ≤ k, in the spectral decomposition of A, then, by linearity, it commutes with A.

Necessity follows from the fact that, if B commutes with A, then it also commutes with every polynomial of A; and hence also with pi(A) = Ei, as shown in (4.143).
If A = ∑_{i=1}^{k} λi Ei and B = ∑_{j=1}^{l} μj Fj are the spectral forms of self-adjoint transformations A and B on a finite-dimensional inner product space, then a necessary and sufficient condition (“if and only if = iff”) that A and B commute is that the projectors Ei, 1 ≤ i ≤ k, and Fj, 1 ≤ j ≤ l, commute with each other; i.e., [Ei, Fj] = Ei Fj − Fj Ei = 0.

Again, sufficiency is derived easily: if every Fj, 1 ≤ j ≤ l, occurring in the spectral decomposition of B commutes with all the projectors Ei, 1 ≤ i ≤ k, in the spectral decomposition of A, then, by linearity, B commutes with A.

Necessity follows from the fact that, if Fj, 1 ≤ j ≤ l, commutes with A, then it also commutes with every polynomial of A; and hence also with pi(A) = Ei, as shown in (4.143). Conversely, if Ei, 1 ≤ i ≤ k, commutes with B, then it also commutes with every polynomial of B; and hence also with the associated polynomial qj(B) = Fj, constructed as in (4.143).
If Ex = |x⟩⟨x| and Ey = |y⟩⟨y| are two commuting projectors (onto one-dimensional subspaces of V) corresponding to the normalized vectors x and y, respectively, that is, if [Ex, Ey] = Ex Ey − Ey Ex = 0, then they are either identical (the vectors are collinear) or orthogonal (the vector x is orthogonal to y).

For a proof, note that if Ex and Ey commute, then Ex Ey = Ey Ex; and hence |x⟩⟨x|y⟩⟨y| = |y⟩⟨y|x⟩⟨x|. Thus, (⟨x|y⟩)|x⟩⟨y| = (⟨y|x⟩)|y⟩⟨x|, which, applied to arbitrary vectors |v⟩ ∈ V, is only true if either x = ±y, or if x ⊥ y (and thus ⟨x|y⟩ = 0).
A set M = {A1, A2, . . . , Ak} of self-adjoint transformations on a finite-dimensional inner product space is mutually commuting if and only if there exists a self-adjoint transformation R and a set of real-valued functions F = {f1, f2, . . . , fk} of a real variable so that A1 = f1(R), A2 = f2(R), . . . , Ak = fk(R). If such a maximal operator R exists, then it can be written as a function of all transformations in the set M; that is, R = G(A1, A2, . . . , Ak), where G is a suitable real-valued function of k variables (cf. Ref. 22, Satz 8).

22 John von Neumann. Über Funktionen von Funktionaloperatoren. Annals of Mathematics, 32:191–226, 1931. URL http://www.jstor.org/stable/1968185
The maximal operator R can be interpreted as encoding or containing all the information of a collection of commuting operators at once; stated pointedly, rather than considering all the operators in M separately, the maximal operator R represents M; in a sense, the operators Ai ∈ M are all just incomplete aspects of, or individual “lossy” (i.e., one-to-many) functional views on, the maximal operator R.
Let us demonstrate the machinery developed so far by an example. Consider the normal matrices

A = ( 0 1 0 )     B = ( 2 3 0 )     C = ( 5 7 0 )
    ( 1 0 0 ) ,       ( 3 2 0 ) ,       ( 7 5 0 )
    ( 0 0 0 )         ( 0 0 0 )         ( 0 0 11 ) ,

which are mutually commutative; that is, [A, B] = AB − BA = [A, C] = AC − CA = [B, C] = BC − CB = 0.
The eigensystems – that is, the sets of eigenvalues and associated eigenvectors – of A, B and C are

{1, −1, 0}, {(1, 1, 0)ᵀ, (−1, 1, 0)ᵀ, (0, 0, 1)ᵀ},
{5, −1, 0}, {(1, 1, 0)ᵀ, (−1, 1, 0)ᵀ, (0, 0, 1)ᵀ},
{12, −2, 11}, {(1, 1, 0)ᵀ, (−1, 1, 0)ᵀ, (0, 0, 1)ᵀ}.   (4.159)
They share a common orthonormal set of eigenvectors

(1/√2)(1, 1, 0)ᵀ, (1/√2)(−1, 1, 0)ᵀ, (0, 0, 1)ᵀ,

which form an orthonormal basis of R³ or C³. The associated projectors are obtained by the dyadic or tensor products of these vectors; that is,

E1 = (1/2) ( 1 1 0 )
           ( 1 1 0 )
           ( 0 0 0 ) ,

E2 = (1/2) (  1 −1 0 )
           ( −1  1 0 )
           (  0  0 0 ) ,

E3 = ( 0 0 0 )
     ( 0 0 0 )
     ( 0 0 1 ) .   (4.160)
Thus the spectral decompositions of A, B and C are

A = E1 − E2 + 0E3,
B = 5E1 − E2 + 0E3,
C = 12E1 − 2E2 + 11E3,   (4.161)

respectively.

One way to define the maximal operator R for this problem would be

R = αE1 + βE2 + γE3,
with α, β, γ ∈ R∖{0} and α ≠ β ≠ γ ≠ α. The functional values fi(α), fi(β), and fi(γ), i ∈ {A, B, C}, of the three functions fA(R), fB(R), and fC(R) are chosen to match the projector coefficients obtained in Eq. (4.161); that is,

A = fA(R) = E1 − E2 + 0E3,
B = fB(R) = 5E1 − E2 + 0E3,
C = fC(R) = 12E1 − 2E2 + 11E3.   (4.162)

As a consequence, the functions fA, fB, fC need to satisfy the relations

fA(α) = 1,   fA(β) = −1,   fA(γ) = 0,
fB(α) = 5,   fB(β) = −1,   fB(γ) = 0,
fC(α) = 12,  fC(β) = −2,   fC(γ) = 11.   (4.163)
It is no coincidence that the projectors in the spectral forms of A, B and
C are identical. Indeed it can be shown that mutually commuting normal
operators always share the same eigenvectors; and thus also the same
projectors.
Let the set M = {A1, A2, . . . , Ak} consist of mutually commuting normal (or Hermitian, or self-adjoint) transformations on an n-dimensional inner product space. Then there exists an orthonormal basis B = {f1, . . . , fn} such that every fj ∈ B is an eigenvector of each of the Ai ∈ M. Equivalently, there exist n orthogonal projectors (let the vectors fj be represented by coordinates which are column vectors) Ej = fj ⊗ fjᵀ such that every Ej, 1 ≤ j ≤ n, occurs in the spectral form of each of the Ai ∈ M.
Informally speaking, a “generic” maximal operator R on an n-dimensional Hilbert space V can be interpreted in terms of some orthonormal basis {f1, f2, . . . , fn} of V – indeed, the n elements of that basis would have to correspond to the projectors occurring in the spectral decomposition of the self-adjoint operators generated by R.

Likewise, the “maximal knowledge” about a quantized physical system – in terms of empirical operational quantities – would correspond to such a single maximal operator, or to the orthonormal basis corresponding to its spectral decomposition. Thus it might not be unreasonable to speculate that a particular (pure) physical state is best characterized by a particular orthonormal basis.
4.31 Measures on closed subspaces
In what follows we shall assume that all (probability) measures or states
w behave quasi-classically on sets of mutually commuting self-adjoint
operators, and in particular on orthogonal projectors.
Suppose E = {E1, E2, . . . , En} is a set of mutually commuting orthogonal projectors on a finite-dimensional inner product space V. Then the probability measure w should be additive; that is,

w(E1 + E2 + · · · + En) = w(E1) + w(E2) + · · · + w(En).   (4.164)
Stated differently, we shall assume that, for any two orthogonal projec-
tors E,F so that EF = FE = 0, their sum G = E+F has expectation value
⟨G⟩ = ⟨E⟩+⟨F⟩. (4.165)
We shall consider only vector spaces of dimension three or greater, since
only in these cases two orthonormal bases can be interlinked by a common
vector – in two dimensions, distinct orthonormal bases contain distinct
basis vectors.
4.31.1 Gleason's theorem

For a Hilbert space of dimension three or greater, the only possible form of the expectation value of a self-adjoint operator A is 23

⟨A⟩ = Tr(ρA),   (4.166)

the trace of the operator product of the density matrix ρ (a positive operator of the trace class) for the system with the matrix representation of A.

In particular, if A is a projector E corresponding to an elementary yes-no proposition “the system has property Q,” then ⟨E⟩ = Tr(ρE) corresponds to the probability of that property Q if the system is in state ρ.

23 Andrew M. Gleason. Measures on the closed subspaces of a Hilbert space. Journal of Mathematics and Mechanics (now Indiana University Mathematics Journal), 6(4):885–893, 1957. ISSN 0022-2518. DOI: 10.1512/iumj.1957.6.56050. URL http://dx.doi.org/10.1512/iumj.1957.6.56050; Anatolij Dvurecenskij. Gleason's Theorem and Its Applications. Kluwer Academic Publishers, Dordrecht, 1993; Itamar Pitowsky. Infinite and finite Gleason's theorems and the logic of indeterminacy. Journal of Mathematical Physics, 39(1):218–228, 1998. DOI: 10.1063/1.532334. URL http://dx.doi.org/10.1063/1.532334; Fred Richman and Douglas Bridges. A constructive proof of Gleason's theorem. Journal of Functional Analysis, 162:287–312, 1999. DOI: 10.1006/jfan.1998.3372. URL http://dx.doi.org/10.1006/jfan.1998.3372; Asher Peres. Quantum Theory: Concepts and Methods. Kluwer Academic Publishers, Dordrecht, 1993; and Jan Hamhalter. Quantum Measure Theory. Fundamental Theories of Physics, Vol. 134. Kluwer Academic Publishers, Dordrecht, Boston, London, 2003. ISBN 1-4020-1714-6.
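A minimal numerical sketch of this probability rule for a pure state ρ = |ψ⟩⟨ψ| (the particular state and projector are arbitrary illustrative choices):

```python
import numpy as np

# For a pure state rho = |psi><psi| and a projector E = |x><x|,
# Tr(rho E) = |<x|psi>|^2, the usual quantum mechanical probability.
psi = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)
x = np.array([1.0, 0.0, 0.0])

rho = np.outer(psi, psi.conj())
E = np.outer(x, x.conj())

prob = np.trace(rho @ E).real

assert np.isclose(prob, abs(np.dot(x.conj(), psi))**2)
assert np.isclose(prob, 0.5)
```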
4.31.2 Kochen-Specker theorem
For a Hilbert space of dimension three or greater, there does not exist any two-valued probability measure interpretable as a consistent overall truth assignment.²⁴ As a result of the nonexistence of two-valued states, the classical strategy of constructing probabilities as a convex combination of all two-valued states fails entirely.

²⁴ Ernst Specker. Die Logik nicht gleichzeitig entscheidbarer Aussagen. Dialectica, 14(2-3):239–246, 1960. DOI: 10.1111/j.1746-8361.1960.tb00422.x; and Simon Kochen and Ernst P. Specker. The problem of hidden variables in quantum mechanics. Journal of Mathematics and Mechanics (now Indiana University Mathematics Journal), 17(1):59–87, 1967. DOI: 10.1512/iumj.1968.17.17004.
In a Greechie diagram²⁵, points represent basis vectors. If they belong to the same basis, they are connected by smooth curves.

²⁵ J. R. Greechie. Orthomodular lattices admitting no states. Journal of Combinatorial Theory, 10:119–132, 1971. DOI: 10.1016/0097-3165(71)90015-X.
FINITE-DIMENSIONAL VECTOR SPACES 87
[Figure: Greechie diagram of the nine contexts a, b, c, d, e, f, g, h, i interconnecting the 18 atoms A through R, as used in the Cabello proof below.]
The most compact way of deriving the Kochen-Specker theorem in four dimensions has been given by Cabello.²⁶ For the sake of demonstration, consider a Greechie (orthogonality) diagram of a finite subset of the continuum of blocks or contexts embeddable in four-dimensional real Hilbert space without a two-valued probability measure. The proof of the Kochen-Specker theorem uses nine tightly interconnected contexts

a = {A, B, C, D}, b = {D, E, F, G}, c = {G, H, I, J}, d = {J, K, L, M}, e = {M, N, O, P}, f = {P, Q, R, A}, g = {B, I, K, R}, h = {C, E, L, N}, i = {F, H, O, Q}

consisting of the 18 projectors associated with the one-dimensional subspaces spanned by the vectors from the origin (0,0,0,0) to A = (0,0,1,−1), B = (1,−1,0,0), C = (1,1,−1,−1), D = (1,1,1,1), E = (1,−1,1,−1), F = (1,0,−1,0), G = (0,1,0,−1), H = (1,0,1,0), I = (1,1,−1,1), J = (−1,1,1,1), K = (1,1,1,−1), L = (1,0,0,1), M = (0,1,−1,0), N = (0,1,1,0), O = (0,0,0,1), P = (1,0,0,0), Q = (0,1,0,0), R = (0,0,1,1), respectively. Greechie diagrams represent atoms by points, and contexts by maximal smooth, unbroken curves.

²⁶ Adán Cabello, José M. Estebaranz, and G. García-Alcaine. Bell-Kochen-Specker theorem: A proof with 18 vectors. Physics Letters A, 212(4):183–187, 1996. DOI: 10.1016/0375-9601(96)00134-X; and Adán Cabello. Kochen-Specker theorem and experimental test on hidden variables. International Journal of Modern Physics A, 15(18):2813–2820, 2000. DOI: 10.1142/S0217751X00002020.
In a proof by contradiction, note that, on the one hand, every observable proposition occurs in exactly two contexts. Thus, in an enumeration of the four observable propositions of each of the nine contexts, there appears an even number of true propositions, provided that the value of an observable does not depend on the context (i.e., the assignment is noncontextual). Yet, on the other hand, as there is an odd number (actually nine) of contexts, there should be an odd number (actually nine) of true propositions.
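The argument can also be checked by brute force; the following sketch enumerates all 2^18 candidate two-valued assignments over the 18 atoms of the nine Cabello contexts and confirms that none assigns exactly one true proposition to every context:

```python
from itertools import product

# The nine contexts of the Cabello proof; each letter labels one of the
# 18 rays A..R, and each context is an orthogonal quadruple of rays.
contexts = ["ABCD", "DEFG", "GHIJ", "JKLM", "MNOP",
            "PQRA", "BIKR", "CELN", "FHOQ"]
atoms = sorted(set("".join(contexts)))  # the 18 distinct atoms A..R

# A noncontextual two-valued state assigns 0 or 1 to every atom such that
# exactly one atom is true in every context. Brute force shows none exists:
# each atom lies in exactly two contexts, so the context-wise sums add up
# to an even number, while nine contexts would require the odd total nine.
found = False
for bits in product((0, 1), repeat=len(atoms)):
    value = dict(zip(atoms, bits))
    if all(sum(value[x] for x in c) == 1 for c in contexts):
        found = True
        break
print(found)  # False: no admissible truth assignment exists
```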
5 Tensors
What follows is a "corollary," or rather an expansion and extension, of what has been presented in the previous chapter; in particular, with regard to dual vector spaces (page 46) and the tensor product (page 52).
5.1 Notation
Let us consider the vector space R^n of dimension n; a basis B = {e_1, e_2, …, e_n} consisting of n basis vectors e_i; and k arbitrary vectors x_1, x_2, …, x_k ∈ R^n, the vector x_i having the vector components X_i^1, X_i^2, …, X_i^n ∈ R.
Please note again that, just like any tensor (field), the tensor product z = x ⊗ y has three equivalent representations:

(i) as the scalar coordinates X^i Y^j with respect to the basis in which the vectors x and y have been defined and coded; this form is often used in the theory of (general) relativity;

(ii) as the quasi-matrix z^{ij} = X^i Y^j, whose components z^{ij} are defined with respect to the basis in which the vectors x and y have been defined and coded; this form is often used in classical (as compared to quantum) mechanics and electrodynamics;

(iii) as a quasi-vector or "flattened matrix" defined by the Kronecker product z = (X^1 y, X^2 y, …, X^n y) = (X^1 Y^1, …, X^1 Y^n, …, X^n Y^1, …, X^n Y^n). Again, the scalar coordinates X^i Y^j are defined with respect to the basis in which the vectors x and y have been defined and coded. This latter form is often used in (few-partite) quantum mechanics.

In all three cases, the pairs X^i Y^j are properly represented by distinct mathematical entities.
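The three representations can be sketched with NumPy (the vectors are illustrative):

```python
import numpy as np

# Sketch of the equivalent encodings of z = x (tensor) y for x, y in R^3.
x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])

# (ii) the quasi-matrix z^{ij} = X^i Y^j
z_matrix = np.outer(x, y)

# (iii) the flattened quasi-vector via the Kronecker product
z_flat = np.kron(x, y)

# Both hold the same scalar coordinates X^i Y^j, merely arranged differently.
assert np.array_equal(z_matrix.reshape(-1), z_flat)
```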
Tensor fields define a tensor at every point of R^n separately. In general, with respect to a particular basis, the components of a tensor field depend on the coordinates.

We adopt Einstein's summation convention to sum over equal indices (a pair with a superscript and a subscript). Sometimes, sums are written out explicitly.

90 MATHEMATICAL METHODS OF THEORETICAL PHYSICS
In what follows, the notations "x · y", "(x, y)" and "⟨x | y⟩" will be used synonymously for the scalar or inner product. Note, however, that the dot notation x · y may be misleading: for example, in the case of the "pseudo-Euclidean" metric represented by the matrix diag(+,+,+,⋯,+,−), it no longer denotes the standard Euclidean dot product associated with diag(+,+,+,⋯,+,+).

For a more systematic treatment, see for instance Klingbeil's or Dirschmid's introductions.¹

¹ Ebergard Klingbeil. Tensorrechnung für Ingenieure. Bibliographisches Institut, Mannheim, 1966; and Hans Jörg Dirschmid. Tensoren und Felder. Springer, Vienna, 1996.

5.2 Multilinear form
A multilinear form

α : V^k → R or C (5.1)

is a map from (multiple) arguments x_i which are elements of some vector space V into some scalars in R or C, satisfying

α(x_1, x_2, …, Ay + Bz, …, x_k) = A α(x_1, x_2, …, y, …, x_k) + B α(x_1, x_2, …, z, …, x_k) (5.2)

for every one of its (multi-)arguments.

In what follows we shall concentrate on real-valued multilinear forms which map k vectors in R^n into R.
5.3 Covariant tensors
Let x_i = ∑_{j_i=1}^n X_i^{j_i} e_{j_i} = X_i^{j_i} e_{j_i} be some vector in (i.e., some element of) an n-dimensional vector space V labelled by an index i. A tensor of rank k

α : V^k → R (5.3)

is a multilinear form

α(x_1, x_2, …, x_k) = ∑_{i_1=1}^n ∑_{i_2=1}^n ⋯ ∑_{i_k=1}^n X_1^{i_1} X_2^{i_2} ⋯ X_k^{i_k} α(e_{i_1}, e_{i_2}, …, e_{i_k}). (5.4)

The

A_{i_1 i_2 ⋯ i_k} := α(e_{i_1}, e_{i_2}, …, e_{i_k}) (5.5)

are the components or coordinates of the tensor α with respect to the basis B.

Note that a tensor of type (or rank) k in an n-dimensional vector space has n^k coordinates.
TENSORS 91
To prove that tensors are multilinear forms, insert

α(x_1, x_2, …, A x_j^1 + B x_j^2, …, x_k)
= ∑_{i_1=1}^n ∑_{i_2=1}^n ⋯ ∑_{i_k=1}^n X_1^{i_1} X_2^{i_2} ⋯ [A (X^1)_j^{i_j} + B (X^2)_j^{i_j}] ⋯ X_k^{i_k} α(e_{i_1}, e_{i_2}, …, e_{i_j}, …, e_{i_k})
= A ∑_{i_1=1}^n ∑_{i_2=1}^n ⋯ ∑_{i_k=1}^n X_1^{i_1} X_2^{i_2} ⋯ (X^1)_j^{i_j} ⋯ X_k^{i_k} α(e_{i_1}, e_{i_2}, …, e_{i_j}, …, e_{i_k})
+ B ∑_{i_1=1}^n ∑_{i_2=1}^n ⋯ ∑_{i_k=1}^n X_1^{i_1} X_2^{i_2} ⋯ (X^2)_j^{i_j} ⋯ X_k^{i_k} α(e_{i_1}, e_{i_2}, …, e_{i_j}, …, e_{i_k})
= A α(x_1, x_2, …, x_j^1, …, x_k) + B α(x_1, x_2, …, x_j^2, …, x_k).
5.3.1 Basis transformations
Let B and B′ be two arbitrary bases of R^n. Then every vector e′_i of B′ can be represented as a linear combination of basis vectors from B:

e′_i = ∑_{j=1}^n a_i^j e_j, i = 1, …, n. (5.6)

Consider an arbitrary vector x ∈ R^n with components X^i with respect to the basis B and X′^i with respect to the basis B′:

x = ∑_{i=1}^n X^i e_i = ∑_{i=1}^n X′^i e′_i. (5.7)
Insertion of (5.6) into (5.7) yields

x = ∑_{i=1}^n X^i e_i = ∑_{i=1}^n X′^i e′_i = ∑_{i=1}^n X′^i ∑_{j=1}^n a_i^j e_j
= ∑_{i=1}^n [∑_{j=1}^n a_i^j X′^i] e_j = ∑_{j=1}^n [∑_{i=1}^n a_i^j X′^i] e_j = ∑_{i=1}^n [∑_{j=1}^n a_j^i X′^j] e_i. (5.8)

A comparison of coefficients (and a renaming of the indices i ↔ j) yields the transformation law of vector components

X^j = ∑_{i=1}^n a_i^j X′^i. (5.9)

The matrix a ≡ a_i^j is called the transformation matrix. In terms of the coordinates X^j, it can be expressed as

a_i^j = X^j / X′^i, (5.10)

assuming that the coordinate transformations are linear. If the basis transformations involve nonlinear coordinate changes – such as from the Cartesian to the polar or spherical coordinates discussed later – we have to employ

dX^j = ∑_{i=1}^n a_i^j dX′^i, (5.11)

as well as

a_i^j = ∂X^j / ∂X′^i. (5.12)
A similar argument using

e_i = ∑_{j=1}^n a′_i^j e′_j, i = 1, …, n, (5.13)

yields the inverse transformation laws

X′^j = ∑_{i=1}^n a′_i^j X^i. (5.14)

Thereby,

e_i = ∑_{j=1}^n a′_i^j e′_j = ∑_{j=1}^n a′_i^j ∑_{k=1}^n a_j^k e_k = ∑_{j=1}^n ∑_{k=1}^n [a′_i^j a_j^k] e_k, (5.15)

which, due to the linear independence of the basis vectors e_i of B, is only satisfied if

a′_i^j a_j^k = δ_i^k or a′a = I. (5.16)

That is, a′ is the inverse matrix of a. In terms of the coordinates X^j, it can be expressed as [see also the Jacobian matrix J_{ij} defined in Eq. (5.49)]

a′_i^j = X′^j / X^i (5.17)

for linear coordinate transformations and

dX′^j = ∑_{i=1}^n a′_i^j dX^i, (5.18)

as well as

a′_i^j = ∂X′^j / ∂X^i (5.19)

otherwise.
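The mutual inversion (5.16) and the component transformation law (5.9) can be illustrated numerically; all matrices and values below are made up for the sketch:

```python
import numpy as np

# Sketch of the basis-change relations (5.6), (5.9), (5.16): the rows of
# the matrix a hold the expansion e'_i = sum_j a_i^j e_j of the primed
# basis in terms of the unprimed one.
rng = np.random.default_rng(0)
a = rng.normal(size=(3, 3))          # invertible transformation matrix
e = np.eye(3)                        # unprimed basis vectors as rows
e_prime = a @ e                      # primed basis vectors as rows

a_inv = np.linalg.inv(a)             # the matrix a' of Eq. (5.16)
assert np.allclose(a_inv @ a, np.eye(3))          # a'a = I

# Components transform inversely ("contragradiently") to basis vectors:
X = np.array([1.0, 2.0, 3.0])        # components X^j w.r.t. basis B
X_prime = X @ a_inv                  # components X'^i w.r.t. basis B'
assert np.allclose(X @ e, X_prime @ e_prime)      # the same vector x
```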
5.3.2 Transformation of tensor components
Because of multilinearity and by insertion into (5.6),

α(e′_{j_1}, e′_{j_2}, …, e′_{j_k}) = α(∑_{i_1=1}^n a_{j_1}^{i_1} e_{i_1}, ∑_{i_2=1}^n a_{j_2}^{i_2} e_{i_2}, …, ∑_{i_k=1}^n a_{j_k}^{i_k} e_{i_k})
= ∑_{i_1=1}^n ∑_{i_2=1}^n ⋯ ∑_{i_k=1}^n a_{j_1}^{i_1} a_{j_2}^{i_2} ⋯ a_{j_k}^{i_k} α(e_{i_1}, e_{i_2}, …, e_{i_k}) (5.20)

or

A′_{j_1 j_2 ⋯ j_k} = ∑_{i_1=1}^n ∑_{i_2=1}^n ⋯ ∑_{i_k=1}^n a_{j_1}^{i_1} a_{j_2}^{i_2} ⋯ a_{j_k}^{i_k} A_{i_1 i_2 … i_k}. (5.21)
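For a rank-two tensor (k = 2), the transformation law (5.21) can be sketched numerically; the matrices below are illustrative:

```python
import numpy as np

# Sketch of Eq. (5.21) for k = 2: each covariant index picks up one
# factor of the transformation matrix, A'_{j1 j2} = a_{j1}^{i1} a_{j2}^{i2} A_{i1 i2}.
rng = np.random.default_rng(1)
a = rng.normal(size=(3, 3))               # transformation matrix, a[j, i] = a_j^i
A = rng.normal(size=(3, 3))               # components A_{i1 i2} in basis B

A_prime = np.einsum("ji,km,im->jk", a, a, A)   # sum over both old indices
# For k = 2 this is just a matrix sandwich:
assert np.allclose(A_prime, a @ A @ a.T)
```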
5.4 Contravariant tensors
5.4.1 Definition of contravariant basis
Consider again a covariant basis B = {e_1, e_2, …, e_n} consisting of n basis vectors e_i. Just as on page 47 earlier, we shall define a contravariant basis B* = {e^1, e^2, …, e^n} consisting of n basis vectors e^i by the requirement that the scalar product obeys

δ_i^j = e_i · e^j ≡ (e_i, e^j) ≡ ⟨e_i | e^j⟩ = 1 if i = j, and 0 if i ≠ j. (5.22)
To distinguish elements of the two bases, the covariant vectors are denoted by subscripts, whereas the contravariant vectors are denoted by superscripts. The last terms e_i · e^j ≡ (e_i, e^j) ≡ ⟨e_i | e^j⟩ recall different notations of the scalar product.

Again, note that (the coordinates of) the dual basis vectors of an orthonormal basis can be coded identically as (the coordinates of) the original basis vectors; that is, in this case, (the coordinates of) the dual basis vectors are just rearranged as the transposed form of the original basis vectors.
The entire tensor formalism developed so far can be transferred and applied to define contravariant tensors as multilinear forms

β : V*^k → R (5.23)

by

β(x_1, x_2, …, x_k) = ∑_{i_1=1}^n ∑_{i_2=1}^n ⋯ ∑_{i_k=1}^n Ξ_{i_1}^1 Ξ_{i_2}^2 ⋯ Ξ_{i_k}^k β(e^{i_1}, e^{i_2}, …, e^{i_k}). (5.24)

The

B^{i_1 i_2 ⋯ i_k} = β(e^{i_1}, e^{i_2}, …, e^{i_k}) (5.25)

are the components of the contravariant tensor β with respect to the basis B*.

More generally, suppose V is an n-dimensional vector space, and B = {f_1, …, f_n} is a basis of V; if g_{ij} is the metric tensor, the dual basis is defined by

g(f*_i, f_j) = g(f^i, f_j) = δ_j^i, (5.26)

where again δ_j^i is the Kronecker delta function, which is defined by

δ_{ij} = 0 for i ≠ j, and δ_{ij} = 1 for i = j, (5.27)

regardless of the order of indices, and regardless of whether these indices represent covariance and contravariance.
5.4.2 Connection between the transformation of covariant and contravariant entities
Because of linearity, we can make the formal Ansatz

e′^j = ∑_i b_i^j e^i, (5.28)

where [b_i^j] = b is the transformation matrix associated with the contravariant basis. How is b related to a, the transformation matrix associated with the covariant basis?

By exploiting (5.22) one can find the connection between the transformation of covariant and contravariant basis elements and thus tensor components; that is,

δ_i^j = e′_i · e′^j = (a_i^k e_k) · (b_l^j e^l) = a_i^k b_l^j e_k · e^l = a_i^k b_l^j δ_k^l = a_i^k b_k^j, (5.29)

and thus

b = a^{−1} = a′, and e′^j = ∑_i (a^{−1})_i^j e^i = ∑_i a′_i^j e^i. (5.30)
The argument concerning transformations of covariant tensors and components can be carried through to the contravariant case. Hence, the contravariant components transform as

β(e′^{j_1}, e′^{j_2}, …, e′^{j_k}) = β(∑_{i_1=1}^n a′_{i_1}^{j_1} e^{i_1}, ∑_{i_2=1}^n a′_{i_2}^{j_2} e^{i_2}, …, ∑_{i_k=1}^n a′_{i_k}^{j_k} e^{i_k})
= ∑_{i_1=1}^n ∑_{i_2=1}^n ⋯ ∑_{i_k=1}^n a′_{i_1}^{j_1} a′_{i_2}^{j_2} ⋯ a′_{i_k}^{j_k} β(e^{i_1}, e^{i_2}, …, e^{i_k}) (5.31)

or

B′^{j_1 j_2 ⋯ j_k} = ∑_{i_1=1}^n ∑_{i_2=1}^n ⋯ ∑_{i_k=1}^n a′_{i_1}^{j_1} a′_{i_2}^{j_2} ⋯ a′_{i_k}^{j_k} B^{i_1 i_2 … i_k}. (5.32)
5.5 Orthonormal bases
For orthonormal bases of an n-dimensional Hilbert space,

δ_i^j = e_i · e_j if and only if e^i = e_i for all 1 ≤ i, j ≤ n. (5.33)
Therefore, the vector space and its dual vector space are “identical” in
the sense that the coordinate tuples representing their bases are identical
(though relatively transposed). That is, besides transposition, the two bases
are identical
B≡B∗ (5.34)
and formally any distinction between covariant and contravariant vectors
becomes irrelevant. Conceptually, such a distinction persists, though. In
this sense, we might “forget about the difference between covariant and
contravariant orders.”
5.6 Invariant tensors and physical motivation
5.7 Metric tensor
Metric tensors are defined in metric vector spaces. A metric vector space (sometimes also referred to as a "vector space with metric" or "geometry") is a vector space with some inner or scalar product. This includes (pseudo-)Euclidean spaces with indefinite metric. (That is, the distance need not be positive or zero.)
5.7.1 Definition of a metric

A metric g is a functional R^n × R^n → R with the following properties:

• g is symmetric; that is, g(x, y) = g(y, x);

• g is bilinear; that is, g(αx + βy, z) = α g(x, z) + β g(y, z) (due to symmetry g is also bilinear in the second argument);

• g is nondegenerate; that is, for every x ∈ V, x ≠ 0, there exists a y ∈ V such that g(x, y) ≠ 0.
5.7.2 Construction of a metric from a scalar product by metric tensor
In particular cases, the metric tensor may be defined via the scalar product

g_{ij} = e_i · e_j ≡ (e_i, e_j) ≡ ⟨e_i | e_j⟩ (5.35)

and

g^{ij} = e^i · e^j ≡ (e^i, e^j) ≡ ⟨e^i | e^j⟩. (5.36)

By definition of the (dual) basis in Eq. (4.32) on page 47,

g^i_j = e^i · e_j = g^{il} e_l · e_j = g^{il} g_{lj} = δ^i_j, (5.37)

which is a reflection of the covariant and contravariant metric tensors being inverses of one another, since the basis and the associated dual basis are mutually inverse (and vice versa). Note that it is possible to change a covariant tensor into a contravariant one and vice versa by the application of a metric tensor. This can be seen as follows. Because of linearity, any contravariant basis vector e^i can be written as a linear sum of covariant (transposed, but we do not mark transposition here) basis vectors:

e^i = A^{ij} e_j. (5.38)

Then,

g^{ik} = e^i · e^k = (A^{ij} e_j) · e^k = A^{ij} (e_j · e^k) = A^{ij} δ_j^k = A^{ik} (5.39)

and thus

e^i = g^{ij} e_j (5.40)

and

e_i = g_{ij} e^j. (5.41)
For orthonormal bases, the metric tensor can be represented as a Kronecker delta function, and thus remains form invariant. Moreover, its covariant and contravariant components are identical; that is, δ_{ij} = δ_i^j = δ^i_j = δ^{ij}.
5.7.3 What can the metric tensor do for us?
Most often it is used to raise or lower the indices; that is, to change from
contravariant to covariant and conversely from covariant to contravariant.
For example,

x = X^i e_i = X^i g_{ij} e^j = X_j e^j, (5.42)

and hence X_j = X^i g_{ij}.
In the previous section, the metric tensor has been derived from the scalar product. The converse is true as well. In Euclidean space with the dot (scalar, inner) product the metric tensor represents the scalar product between vectors: let x = X^i e_i ∈ R^n and y = Y^j e_j ∈ R^n be two vectors. Then ("T" stands for the transpose),

x · y ≡ (x, y) ≡ ⟨x | y⟩ = X^i e_i · Y^j e_j = X^i Y^j e_i · e_j = X^i Y^j g_{ij} = X^T g Y. (5.43)

It also characterizes the length of a vector: in the above equation, set y = x. Then,

x · x ≡ (x, x) ≡ ⟨x | x⟩ = X^i X^j g_{ij} ≡ X^T g X, (5.44)

and thus

||x|| = √(X^i X^j g_{ij}) = √(X^T g X). (5.45)

The square of an infinitesimal vector ds = dx^i is

(ds)^2 = g_{ij} dx^i dx^j = dx^T g dx. (5.46)

Question: Prove that ||x|| mediated by g is indeed a metric; that is, that g represents a bilinear functional g(x, y) = x^i y^j g_{ij} that is symmetric, that is, g(x, y) = g(y, x), and nondegenerate, that is, for any nonzero vector x ∈ V, x ≠ 0, there is some vector y ∈ V so that g(x, y) ≠ 0.
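Eqs. (5.43)-(5.45) can be sketched numerically; the metric and vectors below are illustrative, with a Minkowski-type metric chosen to show an indefinite case:

```python
import numpy as np

# Sketch of x·y = X^T g Y and ||x||^2 = X^T g X, Eqs. (5.43)-(5.45),
# with the indefinite metric diag(1,1,1,-1) and illustrative vectors.
g = np.diag([1.0, 1.0, 1.0, -1.0])     # covariant metric tensor g_ij
X = np.array([0.0, 0.0, 1.0, 1.0])     # a lightlike vector
Y = np.array([1.0, 0.0, 0.0, 2.0])

dot = X @ g @ Y                        # X^i Y^j g_ij
norm_sq = X @ g @ X                    # squared "length" X^T g X
print(dot, norm_sq)                    # -2.0 0.0  (zero: X is lightlike)
```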
5.7.4 Transformation of the metric tensor
Insertion into the definitions and coordinate transformations (5.13) as well as (5.17) yields

g_{ij} = e_i · e_j = a′_i^l e′_l · a′_j^m e′_m = a′_i^l a′_j^m e′_l · e′_m = a′_i^l a′_j^m g′_{lm} = (∂X′^l/∂X^i)(∂X′^m/∂X^j) g′_{lm}. (5.47)

Conversely, (5.6) as well as (5.10) yields

g′_{ij} = e′_i · e′_j = a_i^l e_l · a_j^m e_m = a_i^l a_j^m e_l · e_m = a_i^l a_j^m g_{lm} = (∂X^l/∂X′^i)(∂X^m/∂X′^j) g_{lm}. (5.48)

If the geometry (i.e., the basis) is locally orthonormal, g_{lm} = δ_{lm}, then g′_{ij} = (∂X^l/∂X′^i)(∂X_l/∂X′^j).

In terms of the Jacobian matrix

J ≡ J_{ij} = ∂X′^i/∂X^j ≡
( ∂X′^1/∂X^1 ⋯ ∂X′^1/∂X^n )
( ⋮ ⋱ ⋮ )
( ∂X′^n/∂X^1 ⋯ ∂X′^n/∂X^n ), (5.49)

the metric tensor in Eq. (5.47) can be rewritten as

g = J^T g′ J ≡ g_{ij} = J_{li} J_{mj} g′_{lm}. (5.50)

If the manifold is embedded into a Euclidean space, then g′_{lm} = δ_{lm} and g = J^T J.

The metric tensor and the Jacobian (determinant) are thus related by

det g = (det J^T)(det g′)(det J). (5.51)
5.7.5 Examples
In what follows a few metrics are enumerated and briefly commented. For a more systematic treatment, see, for instance, Snapper and Troyer's Metric Affine Geometry.²

² Ernst Snapper and Robert J. Troyer. Metric Affine Geometry. Academic Press, New York, 1971.
n-dimensional Euclidean space

g ≡ g_{ij} = diag(1, 1, …, 1) (n times) (5.52)
One application in physics is quantum mechanics, where n stands for the dimension of a complex Hilbert space. Some definitions can be easily adopted to accommodate the complex numbers. E.g., axiom 5 of the scalar product becomes (x, y) = (y, x)‾, where the overline stands for complex conjugation of (y, x). Axiom 4 of the scalar product becomes (x, αy) = α(x, y).
Lorentz plane

g ≡ g_{ij} = diag(1, −1) (5.53)

Minkowski space of dimension n

In this case the metric tensor is called the Minkowski metric and is often denoted by "η":

η ≡ η_{ij} = diag(1, 1, …, 1, −1) (n−1 times 1) (5.54)
One application in physics is the theory of special relativity, where D = 4. Alexandrov's theorem states that the mere requirement of the preservation of zero distance (i.e., of lightcones), combined with bijectivity (one-to-oneness) of the transformation law, yields the Lorentz transformations.³

³ A. D. Alexandrov. On Lorentz transformations. Uspehi Mat. Nauk., 5(3):187, 1950; A. D. Alexandrov. A contribution to chronogeometry. Canadian Journal of Math., 19:1119–1128, 1967; A. D. Alexandrov. Mappings of spaces with families of cones and space-time transformations. Annali di Matematica Pura ed Applicata, 103:229–257, 1975. DOI: 10.1007/BF02414157; A. D. Alexandrov. On the principles of relativity theory. In Classics of Soviet Mathematics. Volume 4. A. D. Alexandrov. Selected Works, pages 289–318. 1996; H. J. Borchers and G. C. Hegerfeldt. The structure of space-time transformations. Communications in Mathematical Physics, 28(3):259–266, 1972; Walter Benz. Geometrische Transformationen. BI Wissenschaftsverlag, Mannheim, 1992; June A. Lester. Distance preserving transformations. In Francis Buekenhout, editor, Handbook of Incidence Geometry, pages 921–944. Elsevier, Amsterdam, 1995; and Karl Svozil. Conventions in relativity theory and quantum mechanics. Foundations of Physics, 32:479–502, 2002. DOI: 10.1023/A:1015017831247.
Negative Euclidean space of dimension n

g ≡ g_{ij} = diag(−1, −1, …, −1) (n times) (5.55)

Artinian four-space

g ≡ g_{ij} = diag(+1, +1, −1, −1) (5.56)
General relativity

In general relativity, the metric tensor g is linked to the energy-mass distribution. There, it appears as the primary concept when compared to the scalar product. In the case of zero gravity, g is just the Minkowski metric (often denoted by "η") diag(1,1,1,−1) corresponding to "flat" space-time.
The best known non-flat metric is the Schwarzschild metric

g ≡ diag( (1 − 2m/r)^{−1}, r^2, r^2 sin^2 θ, −(1 − 2m/r) ) (5.57)

with respect to the spherical space-time coordinates r, θ, φ, t.
Computation of the metric tensor of the ball
Consider the transformation from the standard orthonormal three-dimensional "Cartesian" coordinates X_1 = x, X_2 = y, X_3 = z into spherical coordinates (for a definition of spherical coordinates, see also page 269) X′_1 = r, X′_2 = θ, X′_3 = φ. In terms of r, θ, φ, the Cartesian coordinates can be written as

X_1 = r sin θ cos φ ≡ X′_1 sin X′_2 cos X′_3,
X_2 = r sin θ sin φ ≡ X′_1 sin X′_2 sin X′_3,
X_3 = r cos θ ≡ X′_1 cos X′_2. (5.58)

Furthermore, since we are dealing with the Cartesian orthonormal basis, g_{ij} = δ_{ij}; hence finally

g′_{ij} = (∂X^l/∂X′^i)(∂X_l/∂X′^j) ≡ diag(1, r^2, r^2 sin^2 θ), (5.59)

and

(ds)^2 = (dr)^2 + r^2 (dθ)^2 + r^2 sin^2 θ (dφ)^2. (5.60)

The expression (ds)^2 = (dr)^2 + r^2 (dφ)^2 for polar coordinates in two dimensions (i.e., n = 2) is obtained by setting θ = π/2 and dθ = 0.
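The computation leading to Eq. (5.59) can be reproduced symbolically; here is a sketch using SymPy (the use of SymPy is an assumption of the example, not of the text):

```python
import sympy as sp

# Sketch of Eq. (5.59): obtain the spherical metric as g' = J^T J, where
# J is the Jacobian of the Cartesian coordinates w.r.t. (r, theta, phi)
# and the upstairs metric is g_ij = delta_ij.
r, th, ph = sp.symbols("r theta phi", positive=True)
X = sp.Matrix([r*sp.sin(th)*sp.cos(ph),
               r*sp.sin(th)*sp.sin(ph),
               r*sp.cos(th)])                 # Eq. (5.58)
J = X.jacobian([r, th, ph])
g_prime = sp.simplify(J.T * J)                # diag(1, r**2, r**2*sin(theta)**2)
```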
Computation of the metric tensor of the Moebius strip
The parameter representation of the Moebius strip is

Φ(u, v) = ( (1 + v cos(u/2)) sin u, (1 + v cos(u/2)) cos u, v sin(u/2) )^T, (5.61)

where u ∈ [0, 2π] represents the position of the point on the circle, and where 2a > 0 is the "width" of the Moebius strip, and where v ∈ [−a, a].
Φ_v = ∂Φ/∂v = ( cos(u/2) sin u, cos(u/2) cos u, sin(u/2) )^T,

Φ_u = ∂Φ/∂u = ( −(v/2) sin(u/2) sin u + (1 + v cos(u/2)) cos u, −(v/2) sin(u/2) cos u − (1 + v cos(u/2)) sin u, (v/2) cos(u/2) )^T. (5.62)

(∂Φ/∂v)^T (∂Φ/∂u) = −(1/2) v cos(u/2) sin(u/2) sin^2 u − (1/2) v cos(u/2) sin(u/2) cos^2 u + (1/2) v sin(u/2) cos(u/2) = 0. (5.63)

(∂Φ/∂v)^T (∂Φ/∂v) = cos^2(u/2) sin^2 u + cos^2(u/2) cos^2 u + sin^2(u/2) = 1. (5.64)

(∂Φ/∂u)^T (∂Φ/∂u)
= (1/4) v^2 sin^2(u/2) sin^2 u + cos^2 u + 2v cos^2 u cos(u/2) + v^2 cos^2 u cos^2(u/2)
+ (1/4) v^2 sin^2(u/2) cos^2 u + sin^2 u + 2v sin^2 u cos(u/2) + v^2 sin^2 u cos^2(u/2)
+ (1/4) v^2 cos^2(u/2)
= (1/4) v^2 + v^2 cos^2(u/2) + 1 + 2v cos(u/2)
= (1 + v cos(u/2))^2 + (1/4) v^2. (5.65)
Thus the metric tensor is given by

g′_{ij} = (∂X^s/∂X′^i)(∂X^t/∂X′^j) g_{st} = (∂X^s/∂X′^i)(∂X^t/∂X′^j) δ_{st}
≡ ( Φ_u · Φ_u   Φ_v · Φ_u )
  ( Φ_v · Φ_u   Φ_v · Φ_v )
= diag( (1 + v cos(u/2))^2 + (1/4) v^2, 1 ). (5.66)
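Eqs. (5.62)-(5.66) can likewise be verified symbolically; again a SymPy sketch:

```python
import sympy as sp

# Sketch verifying Eqs. (5.62)-(5.66): the first fundamental form of the
# Moebius strip parametrization, computed symbolically from Eq. (5.61).
u, v = sp.symbols("u v", real=True)
Phi = sp.Matrix([(1 + v*sp.cos(u/2))*sp.sin(u),
                 (1 + v*sp.cos(u/2))*sp.cos(u),
                 v*sp.sin(u/2)])              # Eq. (5.61)
Phi_u, Phi_v = Phi.diff(u), Phi.diff(v)

g_uu = sp.simplify(Phi_u.dot(Phi_u))  # equals (1 + v*cos(u/2))**2 + v**2/4
g_uv = sp.simplify(Phi_u.dot(Phi_v))  # 0, Eq. (5.63)
g_vv = sp.simplify(Phi_v.dot(Phi_v))  # 1, Eq. (5.64)
```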
5.8 General tensor
A (general) tensor T can be defined as a multilinear form on the r-fold product of a vector space V, times the s-fold product of the dual vector space V*; that is,

T : (V)^r × (V*)^s = V × ⋯ × V (r copies) × V* × ⋯ × V* (s copies) → F, (5.67)

where, most commonly, the scalar field F will be identified with the set R of reals, or with the set C of complex numbers. Thereby, r is called the covariant order, and s is called the contravariant order of T. A tensor of covariant order r and contravariant order s is then called a tensor of type (or rank) (r, s). By convention, covariant indices are denoted by subscripts, whereas the contravariant indices are denoted by superscripts.

With the standard, "inherited" addition and scalar multiplication, the set T_s^r of all tensors of type (r, s) forms a linear vector space.

Note that a tensor of type (1,0) is called a covariant vector, or just a vector. A tensor of type (0,1) is called a contravariant vector.
Tensors can change their type by the invocation of the metric tensor.
That is, a covariant tensor (index) i can be made into a contravariant tensor (index) j by summing over the index i in a product involving the tensor and g^{ij}. Likewise, a contravariant tensor (index) i can be made into a covariant tensor (index) j by summing over the index i in a product involving the tensor and g_{ij}.
Under basis or other linear transformations, covariant tensors with index i transform by summing over this index with (the transformation matrix) a_i^j. Contravariant tensors with index i transform by summing over this index with the inverse (transformation matrix) (a^{−1})_i^j.
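Index raising and lowering with the metric tensor can be sketched numerically (the metric and components below are illustrative):

```python
import numpy as np

# Sketch of index raising/lowering: X_i = g_ij X^j and back again with
# the contravariant metric g^ij, the matrix inverse of g_ij.
g = np.diag([1.0, 1.0, 1.0, -1.0])         # covariant metric g_ij
g_inv = np.linalg.inv(g)                   # contravariant metric g^ij

X_contra = np.array([1.0, 2.0, 3.0, 4.0])  # components X^i
X_co = np.einsum("ij,j->i", g, X_contra)   # lowered components X_i

# Raising the lowered index recovers the original components:
assert np.allclose(np.einsum("ij,j->i", g_inv, X_co), X_contra)
```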
5.9 Decomposition of tensors
Although a tensor of type (or rank) n transforms like the tensor product of
n tensors of type 1, not all type-n tensors can be decomposed into a single
tensor product of n tensors of type (or rank) 1.
Nevertheless, by a generalized Schmidt decomposition (cf. page 81), any
type-2 tensor can be decomposed into the sum of tensor products of two
tensors of type 1.
5.10 Form invariance of tensors
A tensor (field) is form invariant with respect to some basis change if its
representation in the new basis has the same form as in the old basis.
For instance, if the "12122-component" T_{12122}(x) of the tensor T with respect to the old basis and old coordinates x equals some function f(x) (say, f(x) = x^2), then a necessary condition for T to be form invariant is that, in terms of the new basis, that component T′_{12122}(x′) equals the same function f(x′) as before, but in the new coordinates x′. A sufficient condition for form invariance of T is that all coordinates or components of T are form invariant in that way.
Although form invariance is a gratifying feature for the reasons explained shortly, a tensor (field) need not be form invariant with respect to all, or even any, (symmetry) transformation(s).

A physical motivation for the use of form invariant tensors can be given as follows. What makes some tuples (or matrix, or tensor components in general) of numbers or scalar functions a tensor? It is the interpretation of the scalars as tensor components with respect to a particular basis. In another basis, if we were talking about the same tensor, the tensor components, that is, the numbers or scalar functions, would be different. Pointedly stated, the tensor coordinates represent some encoding of a multilinear function with respect to a particular basis.
Formally, the tensor coordinates are numbers, that is, scalars, which are grouped together in vector tuples or matrices or whatever form we consider useful. As the tensor coordinates are scalars, they can be treated as scalars: for instance, due to commutativity and associativity, one can exchange their order. (Notice, though, that this is generally not the case for differential operators such as ∂_i = ∂/∂x_i.)
A form invariant tensor with respect to certain transformations is a tensor which retains the same functional form if the transformations are performed; that is, if the basis changes accordingly. In this case, the functional form of mapping numbers or coordinates or other entities remains unchanged, regardless of the coordinate change. Functions remain the same but take the new parameter components as arguments; for instance, 4 ↦ 4 and f(X_1, X_2, X_3) ↦ f(X′_1, X′_2, X′_3).
Furthermore, if a tensor is invariant with respect to one transformation,
it need not be invariant with respect to another transformation, or with
respect to changes of the scalar product; that is, the metric.
Nevertheless, totally symmetric (antisymmetric) tensors remain totally
symmetric (antisymmetric) in all cases:
$$A_{i_1 i_2 \ldots i_s i_t \ldots i_k} = A_{i_1 i_2 \ldots i_t i_s \ldots i_k}$$
implies
$$\begin{aligned}
A'_{j_1 j_2 \ldots j_s j_t \ldots j_k}
&= a_{j_1 i_1} a_{j_2 i_2} \cdots a_{j_s i_s} a_{j_t i_t} \cdots a_{j_k i_k} A_{i_1 i_2 \ldots i_s i_t \ldots i_k} \\
&= a_{j_1 i_1} a_{j_2 i_2} \cdots a_{j_s i_s} a_{j_t i_t} \cdots a_{j_k i_k} A_{i_1 i_2 \ldots i_t i_s \ldots i_k} \\
&= a_{j_1 i_1} a_{j_2 i_2} \cdots a_{j_t i_t} a_{j_s i_s} \cdots a_{j_k i_k} A_{i_1 i_2 \ldots i_t i_s \ldots i_k} \\
&= A'_{j_1 j_2 \ldots j_t j_s \ldots j_k}.
\end{aligned}$$
Likewise,
$$A_{i_1 i_2 \ldots i_s i_t \ldots i_k} = -A_{i_1 i_2 \ldots i_t i_s \ldots i_k}$$
implies
$$\begin{aligned}
A'_{j_1 j_2 \ldots j_s j_t \ldots j_k}
&= a_{j_1 i_1} a_{j_2 i_2} \cdots a_{j_s i_s} a_{j_t i_t} \cdots a_{j_k i_k} A_{i_1 i_2 \ldots i_s i_t \ldots i_k} \\
&= -a_{j_1 i_1} a_{j_2 i_2} \cdots a_{j_s i_s} a_{j_t i_t} \cdots a_{j_k i_k} A_{i_1 i_2 \ldots i_t i_s \ldots i_k} \\
&= -a_{j_1 i_1} a_{j_2 i_2} \cdots a_{j_t i_t} a_{j_s i_s} \cdots a_{j_k i_k} A_{i_1 i_2 \ldots i_t i_s \ldots i_k} \\
&= -A'_{j_1 j_2 \ldots j_t j_s \ldots j_k}.
\end{aligned}$$
In physics, it would be nice if the natural laws could be written in a
form which does not depend on the particular reference frame or basis
used. Form invariance thus is a gratifying physical feature, reflecting the
symmetry with respect to changes of coordinates and bases.
After all, physicists want the formalization of their fundamental laws
not to depend artificially on, say, spatial directions, or on some particular
basis, if there is no physical reason why this should be so. Therefore,
physicists strive to write down everything in a form invariant manner.
One strategy to accomplish form invariance is to start out with form
invariant tensors and compose everything from them by tensor products
and index reduction. This method guarantees form invariance.
T E N S O R S 103
Indeed, for the sake of demonstration, consider the following two factorizable tensor fields: while
$$S(\mathbf{x}) = \begin{pmatrix} x_2 \\ -x_1 \end{pmatrix} \otimes \begin{pmatrix} x_2 \\ -x_1 \end{pmatrix}^T = (x_2, -x_1)^T \otimes (x_2, -x_1) \equiv \begin{pmatrix} x_2^2 & -x_1 x_2 \\ -x_1 x_2 & x_1^2 \end{pmatrix} \qquad (5.68)$$
is a form invariant tensor field with respect to the basis $\{(0,1), (1,0)\}$ and
orthogonal transformations (rotations around the origin)
$$\begin{pmatrix} \cos\varphi & \sin\varphi \\ -\sin\varphi & \cos\varphi \end{pmatrix}, \qquad (5.69)$$
$$T(\mathbf{x}) = \begin{pmatrix} x_2 \\ x_1 \end{pmatrix} \otimes \begin{pmatrix} x_2 \\ x_1 \end{pmatrix}^T = (x_2, x_1)^T \otimes (x_2, x_1) \equiv \begin{pmatrix} x_2^2 & x_1 x_2 \\ x_1 x_2 & x_1^2 \end{pmatrix} \qquad (5.70)$$
is not.
This can be proven by considering the single factors from which $S$ and
$T$ are composed. Eqs. (5.20)-(5.21) and (5.31)-(5.32) show that the form
invariance of the factors implies the form invariance of the tensor products.
For instance, in our example, the factors $(x_2, -x_1)^T$ of $S$ are invariant, as
they transform as
$$\begin{pmatrix} \cos\varphi & \sin\varphi \\ -\sin\varphi & \cos\varphi \end{pmatrix} \begin{pmatrix} x_2 \\ -x_1 \end{pmatrix} = \begin{pmatrix} x_2\cos\varphi - x_1\sin\varphi \\ -x_2\sin\varphi - x_1\cos\varphi \end{pmatrix} = \begin{pmatrix} x_2' \\ -x_1' \end{pmatrix},$$
where the transformation of the coordinates
$$\begin{pmatrix} x_1' \\ x_2' \end{pmatrix} = \begin{pmatrix} \cos\varphi & \sin\varphi \\ -\sin\varphi & \cos\varphi \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} x_1\cos\varphi + x_2\sin\varphi \\ -x_1\sin\varphi + x_2\cos\varphi \end{pmatrix}$$
has been used.
Note that the notation identifying tensors of type (or rank) two with
matrices creates an "artifact" insofar as the transformation of the "second
index" must then be represented by the exchanged multiplication order,
together with the transposed transformation matrix; that is,
$$a_{ik} a_{jl} A_{kl} = a_{ik} A_{kl} a_{jl} = a_{ik} A_{kl} (a^T)_{lj} \equiv a \cdot A \cdot a^T.$$
Thus for a transformation of the transposed tuple $(x_2, -x_1)$ we must
consider the transposed transformation matrix arranged after the factor;
that is,
$$(x_2, -x_1) \begin{pmatrix} \cos\varphi & -\sin\varphi \\ \sin\varphi & \cos\varphi \end{pmatrix} = \left( x_2\cos\varphi - x_1\sin\varphi, \; -x_2\sin\varphi - x_1\cos\varphi \right) = \left( x_2', -x_1' \right).$$
In contrast, a similar calculation shows that the factors (x2, x1)T of T do
not transform invariantly. However, noninvariance with respect to certain
transformations does not imply that T is not a valid, “respectable” tensor
field; it is just not form invariant under rotations.
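Both claims can be corroborated numerically. A minimal pure-Python sketch (the sample point, the rotation angle, and all helper names are ad hoc illustrations, not part of the text): the "outer" transform $a \cdot S \cdot a^T$ of $S$ reproduces $S$ evaluated at the rotated coordinates, while the analogous test fails for $T$.

```python
import math

def rot(phi):
    # rotation matrix a_ij = ((cos, sin), (-sin, cos)) as in Eq. (5.69)
    c, s = math.cos(phi), math.sin(phi)
    return [[c, s], [-s, c]]

def outer(u, v):
    return [[ui * vj for vj in v] for ui in u]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def transpose(A):
    return [[A[j][i] for j in range(2)] for i in range(2)]

def transformed(M, a):
    # "outer" transformation M'_ij = a_ik a_jl M_kl = a . M . a^T
    return mat_mul(mat_mul(a, M), transpose(a))

S = lambda u, v: outer([v, -u], [v, -u])   # S(x) = (x2, -x1)^T (x2, -x1), Eq. (5.68)
T = lambda u, v: outer([v, u], [v, u])     # T(x) = (x2, x1)^T (x2, x1), Eq. (5.70)

phi = 0.7
x1, x2 = 1.3, -0.4
a = rot(phi)
# "inner" transformation of the coordinates, x'_i = a_ij x_j
x1p = x1 * math.cos(phi) + x2 * math.sin(phi)
x2p = -x1 * math.sin(phi) + x2 * math.cos(phi)

# S is form invariant: the transformed tensor equals S at the new coordinates
Sp = transformed(S(x1, x2), a)
assert all(abs(Sp[i][j] - S(x1p, x2p)[i][j]) < 1e-12 for i in range(2) for j in range(2))

# T is a perfectly good tensor, but the same test fails: not form invariant
Tp = transformed(T(x1, x2), a)
assert any(abs(Tp[i][j] - T(x1p, x2p)[i][j]) > 1e-6 for i in range(2) for j in range(2))
```

The same helper functions work for any other rank-two tensor field one wishes to test.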
Nevertheless, note that, while the tensor product of form invariant
tensors is again a form invariant tensor, not every form invariant tensor
can be decomposed into products of form invariant tensors.
Let $|+\rangle \equiv (0,1)$ and $|-\rangle \equiv (1,0)$. For a nondecomposable tensor, consider
the sum of two-partite tensor products (associated with two "entangled"
particles), the Bell state (cf. Eq. (A.28) on page 273) in the standard basis,
$$|\Psi^-\rangle = \frac{1}{\sqrt{2}} \left( |+-\rangle - |-+\rangle \right) \equiv \left( 0, \frac{1}{\sqrt{2}}, -\frac{1}{\sqrt{2}}, 0 \right) \equiv \frac{1}{2} \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 1 & -1 & 0 \\ 0 & -1 & 1 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}. \qquad (5.71)$$
$|\Psi^-\rangle$, together with the other three Bell states $|\Psi^+\rangle = \frac{1}{\sqrt{2}}(|+-\rangle + |-+\rangle)$, $|\Phi^+\rangle = \frac{1}{\sqrt{2}}(|--\rangle + |++\rangle)$, and $|\Phi^-\rangle = \frac{1}{\sqrt{2}}(|--\rangle - |++\rangle)$, forms an orthonormal basis of $\mathbb{C}^4$.
Why is $|\Psi^-\rangle$ not decomposable? In order to be able to answer this question
(see also Section 4.9.2 on page 52), consider the most general two-partite state
$$|\psi\rangle = \psi_{--}|--\rangle + \psi_{-+}|-+\rangle + \psi_{+-}|+-\rangle + \psi_{++}|++\rangle, \qquad (5.72)$$
with $\psi_{ij} \in \mathbb{C}$, and compare it to the most general state obtainable through
products of single-partite states $|\phi_1\rangle = \alpha_-|-\rangle + \alpha_+|+\rangle$ and $|\phi_2\rangle = \beta_-|-\rangle + \beta_+|+\rangle$ with $\alpha_i, \beta_i \in \mathbb{C}$; that is,
$$|\phi\rangle = |\phi_1\rangle |\phi_2\rangle = (\alpha_-|-\rangle + \alpha_+|+\rangle)(\beta_-|-\rangle + \beta_+|+\rangle) = \alpha_-\beta_-|--\rangle + \alpha_-\beta_+|-+\rangle + \alpha_+\beta_-|+-\rangle + \alpha_+\beta_+|++\rangle. \qquad (5.73)$$
Since the two-partite basis states
$$|--\rangle \equiv (1,0,0,0), \quad |-+\rangle \equiv (0,1,0,0), \quad |+-\rangle \equiv (0,0,1,0), \quad |++\rangle \equiv (0,0,0,1) \qquad (5.74)$$
are linearly independent (indeed, orthonormal), a comparison of $|\psi\rangle$ with
$|\phi\rangle$ yields
$$\psi_{--} = \alpha_-\beta_-, \quad \psi_{-+} = \alpha_-\beta_+, \quad \psi_{+-} = \alpha_+\beta_-, \quad \psi_{++} = \alpha_+\beta_+. \qquad (5.75)$$
Hence, $\psi_{--}/\psi_{-+} = \beta_-/\beta_+ = \psi_{+-}/\psi_{++}$, and thus a necessary and sufficient
condition for a two-partite quantum state to be decomposable into a
product of single-particle quantum states is that its amplitudes obey
$$\psi_{--}\psi_{++} = \psi_{-+}\psi_{+-}. \qquad (5.76)$$
This is not satisfied for the Bell state $|\Psi^-\rangle$ in Eq. (5.71), because in this case
$\psi_{--} = \psi_{++} = 0$ and $\psi_{-+} = -\psi_{+-} = 1/\sqrt{2}$. Such nondecomposability is in
physics referred to as entanglement.⁴

⁴ Erwin Schrödinger. Discussion of probability relations between separated systems. Mathematical Proceedings of the Cambridge Philosophical Society, 31(04):555–563, 1935a. DOI: 10.1017/S0305004100013554; Erwin Schrödinger. Probability relations between separated systems. Mathematical Proceedings of the Cambridge Philosophical Society, 32(03):446–452, 1936. DOI: 10.1017/S0305004100019137; and Erwin Schrödinger. Die gegenwärtige Situation in der Quantenmechanik. Naturwissenschaften, 23:807–812, 823–828, 844–849, 1935b. DOI: 10.1007/BF01491891, 10.1007/BF01491914, 10.1007/BF01491987.
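The decomposability criterion (5.76) is easy to test in code. A small pure-Python sketch (the amplitude ordering, the helper name, and the sample product state are ad hoc):

```python
import math

def decomposable(psi, tol=1e-12):
    # psi = (psi_mm, psi_mp, psi_pm, psi_pp) in the basis |-->, |-+>, |+->, |++>
    # necessary and sufficient condition (5.76): psi-- psi++ == psi-+ psi+-
    mm, mp, pm, pp = psi
    return abs(mm * pp - mp * pm) < tol

# the Bell state |Psi^-> of Eq. (5.71) fails the condition: it is entangled
bell = (0.0, 1 / math.sqrt(2), -1 / math.sqrt(2), 0.0)
assert not decomposable(bell)

# any product state (alpha_- beta_-, alpha_- beta_+, alpha_+ beta_-, alpha_+ beta_+)
# passes the test by construction, cf. Eq. (5.75)
alpha, beta = (0.6, 0.8), (0.28, 0.96)
product = (alpha[0] * beta[0], alpha[0] * beta[1], alpha[1] * beta[0], alpha[1] * beta[1])
assert decomposable(product)
```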
Note also that $|\Psi^-\rangle$ is a singlet state, as it is form invariant under the
following generalized rotations in two-dimensional complex Hilbert subspace;
that is (if you do not believe this, please check yourself),
$$|+\rangle = e^{i\varphi/2} \left( \cos\frac{\theta}{2} |+'\rangle - \sin\frac{\theta}{2} |-'\rangle \right), \quad |-\rangle = e^{-i\varphi/2} \left( \sin\frac{\theta}{2} |+'\rangle + \cos\frac{\theta}{2} |-'\rangle \right) \qquad (5.77)$$
in the spherical coordinates θ,ϕ defined on page 269, but it cannot be
composed or written as a product of a single (let alone form invariant)
two-partite tensor product.
In order to prove form invariance of a constant tensor, one has to trans-
form the tensor according to the standard transformation laws (5.21) and
(5.25), and compare the result with the input; that is, with the untrans-
formed, original, tensor. This is sometimes referred to as the “outer trans-
formation.”
In order to prove form invariance of a tensor field, one has to addition-
ally transform the spatial coordinates on which the field depends; that
is, the arguments of that field; and then compare. This is sometimes re-
ferred to as the “inner transformation.” This will become clearer with the
following example.
Consider again the tensor field defined earlier in Eq. (5.68), but let us
not choose the "elegant" way of proving form invariance by factoring;
rather let us explicitly consider the transformation of all the components
$$S_{ij}(x_1, x_2) = \begin{pmatrix} -x_1 x_2 & -x_2^2 \\ x_1^2 & x_1 x_2 \end{pmatrix}$$
with respect to the standard basis $\{(1,0), (0,1)\}$.
Is $S$ form invariant with respect to rotations around the origin? That is, $S$
should be form invariant with respect to transformations $x_i' = a_{ij} x_j$ with
$$a_{ij} = \begin{pmatrix} \cos\varphi & \sin\varphi \\ -\sin\varphi & \cos\varphi \end{pmatrix}.$$
Consider the "outer" transformation first. As has been pointed out
earlier, the term on the right hand side in $S'_{ij} = a_{ik} a_{jl} S_{kl}$ can be rewritten
as a product of three matrices; that is,
$$a_{ik} a_{jl} S_{kl}(x_n) = a_{ik} S_{kl} a_{jl} = a_{ik} S_{kl} (a^T)_{lj} \equiv a \cdot S \cdot a^T.$$
Here $a^T$ stands for the transposed matrix; that is, $(a^T)_{ij} = a_{ji}$. Hence,
$$\begin{pmatrix} \cos\varphi & \sin\varphi \\ -\sin\varphi & \cos\varphi \end{pmatrix} \begin{pmatrix} -x_1 x_2 & -x_2^2 \\ x_1^2 & x_1 x_2 \end{pmatrix} \begin{pmatrix} \cos\varphi & -\sin\varphi \\ \sin\varphi & \cos\varphi \end{pmatrix} =$$
$$= \begin{pmatrix} -x_1 x_2\cos\varphi + x_1^2\sin\varphi & -x_2^2\cos\varphi + x_1 x_2\sin\varphi \\ x_1 x_2\sin\varphi + x_1^2\cos\varphi & x_2^2\sin\varphi + x_1 x_2\cos\varphi \end{pmatrix} \begin{pmatrix} \cos\varphi & -\sin\varphi \\ \sin\varphi & \cos\varphi \end{pmatrix} =$$
$$= \begin{pmatrix} x_1 x_2 \left( \sin^2\varphi - \cos^2\varphi \right) + \left( x_1^2 - x_2^2 \right)\sin\varphi\cos\varphi & 2x_1 x_2\sin\varphi\cos\varphi - x_1^2\sin^2\varphi - x_2^2\cos^2\varphi \\ 2x_1 x_2\sin\varphi\cos\varphi + x_1^2\cos^2\varphi + x_2^2\sin^2\varphi & -x_1 x_2 \left( \sin^2\varphi - \cos^2\varphi \right) - \left( x_1^2 - x_2^2 \right)\sin\varphi\cos\varphi \end{pmatrix}.$$
Let us now perform the "inner" transformation
$$x_i' = a_{ij} x_j \Longrightarrow \begin{aligned} x_1' &= x_1\cos\varphi + x_2\sin\varphi, \\ x_2' &= -x_1\sin\varphi + x_2\cos\varphi. \end{aligned}$$
Thereby we assume (to be corroborated) that the functional form in the
new coordinates is identical to the functional form in the old coordinates.
A comparison yields
$$\begin{aligned}
-x_1' x_2' &= -\left( x_1\cos\varphi + x_2\sin\varphi \right)\left( -x_1\sin\varphi + x_2\cos\varphi \right) \\
&= -\left( -x_1^2\sin\varphi\cos\varphi + x_2^2\sin\varphi\cos\varphi - x_1 x_2\sin^2\varphi + x_1 x_2\cos^2\varphi \right) \\
&= x_1 x_2 \left( \sin^2\varphi - \cos^2\varphi \right) + \left( x_1^2 - x_2^2 \right)\sin\varphi\cos\varphi,
\end{aligned}$$
$$(x_1')^2 = \left( x_1\cos\varphi + x_2\sin\varphi \right)^2 = x_1^2\cos^2\varphi + x_2^2\sin^2\varphi + 2x_1 x_2\sin\varphi\cos\varphi,$$
$$(x_2')^2 = \left( -x_1\sin\varphi + x_2\cos\varphi \right)^2 = x_1^2\sin^2\varphi + x_2^2\cos^2\varphi - 2x_1 x_2\sin\varphi\cos\varphi,$$
and hence
$$S'(x_1', x_2') = \begin{pmatrix} -x_1' x_2' & -(x_2')^2 \\ (x_1')^2 & x_1' x_2' \end{pmatrix};$$
that is, $S$ is invariant with respect to the basis rotations
$(\cos\varphi, -\sin\varphi)$, $(\sin\varphi, \cos\varphi)$.
Incidentally, as has been stated earlier, $S(\mathbf{x})$ can be written as the product
of two invariant tensors $b_i(\mathbf{x})$ and $c_j(\mathbf{x})$:
$$S_{ij}(\mathbf{x}) = b_i(\mathbf{x}) c_j(\mathbf{x}),$$
with $b(x_1, x_2) = (-x_2, x_1)$ and $c(x_1, x_2) = (x_1, x_2)$. This can easily be checked
by comparing the components:
$$b_1 c_1 = -x_1 x_2 = S_{11}, \quad b_1 c_2 = -x_2^2 = S_{12}, \quad b_2 c_1 = x_1^2 = S_{21}, \quad b_2 c_2 = x_1 x_2 = S_{22}.$$
Under rotations, $b$ and $c$ transform into
$$a_{ij} b_j = \begin{pmatrix} \cos\varphi & \sin\varphi \\ -\sin\varphi & \cos\varphi \end{pmatrix} \begin{pmatrix} -x_2 \\ x_1 \end{pmatrix} = \begin{pmatrix} -x_2\cos\varphi + x_1\sin\varphi \\ x_2\sin\varphi + x_1\cos\varphi \end{pmatrix} = \begin{pmatrix} -x_2' \\ x_1' \end{pmatrix},$$
$$a_{ij} c_j = \begin{pmatrix} \cos\varphi & \sin\varphi \\ -\sin\varphi & \cos\varphi \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} x_1\cos\varphi + x_2\sin\varphi \\ -x_1\sin\varphi + x_2\cos\varphi \end{pmatrix} = \begin{pmatrix} x_1' \\ x_2' \end{pmatrix}.$$
This factorization of $S$ is nonunique, since Eq. (5.68) uses a different
factorization; moreover, $S$ is decomposable into, for example (for $x_2 \neq 0$),
$$S(x_1, x_2) = \begin{pmatrix} -x_1 x_2 & -x_2^2 \\ x_1^2 & x_1 x_2 \end{pmatrix} = \begin{pmatrix} -x_2^2 \\ x_1 x_2 \end{pmatrix} \otimes \left( \frac{x_1}{x_2}, 1 \right).$$
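Both factorizations can be spot-checked at a sample point. A pure-Python sketch (the sample values are arbitrary; the second factorization presumes $x_2 \neq 0$):

```python
# numerical check of the two factorizations of S at a sample point (x1, x2)
x1, x2 = 1.7, -2.3

S = [[-x1 * x2, -x2**2],
     [x1**2, x1 * x2]]

# factorization S_ij = b_i c_j with b = (-x2, x1), c = (x1, x2)
b, c = (-x2, x1), (x1, x2)
assert all(abs(S[i][j] - b[i] * c[j]) < 1e-12 for i in range(2) for j in range(2))

# alternative factorization (valid for x2 != 0): u = (-x2^2, x1 x2), v = (x1/x2, 1)
u, v = (-x2**2, x1 * x2), (x1 / x2, 1.0)
assert all(abs(S[i][j] - u[i] * v[j]) < 1e-12 for i in range(2) for j in range(2))
```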
5.11 The Kronecker symbol δ
For vector spaces of dimension $n$, the totally symmetric Kronecker symbol
$\delta$, sometimes referred to as the $\delta$-tensor, can be defined by
$$\delta_{i_1 i_2 \cdots i_k} = \begin{cases} +1 & \text{if } i_1 = i_2 = \cdots = i_k, \\ 0 & \text{otherwise (that is, if some indices are not identical).} \end{cases} \qquad (5.78)$$
5.12 The Levi-Civita symbol ε
For vector spaces of dimension $n$, the totally antisymmetric Levi-Civita
symbol $\varepsilon$, sometimes referred to as the $\varepsilon$-tensor, can be
defined by the number of permutations of its indices; that is,
$$\varepsilon_{i_1 i_2 \cdots i_k} = \begin{cases} +1 & \text{if } (i_1 i_2 \ldots i_k) \text{ is an even permutation of } (1, 2, \ldots, k), \\ -1 & \text{if } (i_1 i_2 \ldots i_k) \text{ is an odd permutation of } (1, 2, \ldots, k), \\ 0 & \text{otherwise (that is, if some indices are identical).} \end{cases} \qquad (5.79)$$
Hence, $\varepsilon_{i_1 i_2 \cdots i_k}$ stands for the sign of the permutation in the case of a
permutation, and zero otherwise.
In two dimensions,
$$\varepsilon_{ij} \equiv \begin{pmatrix} \varepsilon_{11} & \varepsilon_{12} \\ \varepsilon_{21} & \varepsilon_{22} \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}.$$
In three-dimensional Euclidean space, the cross product, or vector
product, of two vectors $\mathbf{a} \equiv a_i$ and $\mathbf{b} \equiv b_i$ can be written componentwise as $(\mathbf{a}\times\mathbf{b})_i \equiv \varepsilon_{ijk} a_j b_k$.
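The index formula for the cross product can be exercised directly in code; a pure-Python sketch (the closed-form expression for $\varepsilon$ and the helper names are ad hoc, and indices are 0-based):

```python
def eps(i, j, k):
    # totally antisymmetric Levi-Civita symbol in three dimensions (0-based indices);
    # the product formula gives +1/-1 for even/odd permutations and 0 otherwise
    return (i - j) * (j - k) * (k - i) // 2

def cross(a, b):
    # (a x b)_i = eps_ijk a_j b_k
    return [sum(eps(i, j, k) * a[j] * b[k] for j in range(3) for k in range(3))
            for i in range(3)]

a, b = [1, 2, 3], [4, 5, 6]
# agrees with the familiar componentwise cross product
assert cross(a, b) == [2*6 - 3*5, 3*4 - 1*6, 1*5 - 2*4]
```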
5.13 The nabla, Laplace, and D’Alembert operators
The nabla operator
$$\nabla = \left( \frac{\partial}{\partial x_1}, \frac{\partial}{\partial x_2}, \ldots, \frac{\partial}{\partial x_n} \right) \qquad (5.80)$$
is a vector differential operator in an $n$-dimensional vector space $\mathfrak{V}$. In
index notation, $\nabla_i = \partial_i = \partial/\partial x_i$.
In three dimensions and in the standard Cartesian basis,
$$\nabla = \left( \frac{\partial}{\partial x_1}, \frac{\partial}{\partial x_2}, \frac{\partial}{\partial x_3} \right) = \mathbf{e}_1 \frac{\partial}{\partial x_1} + \mathbf{e}_2 \frac{\partial}{\partial x_2} + \mathbf{e}_3 \frac{\partial}{\partial x_3}. \qquad (5.81)$$
It is often used to define basic differential operations; in particular,
(i) the gradient of a scalar field $f(x_1, x_2, x_3)$ (rendering a vector
field with respect to a particular basis), (ii) the divergence of a vector field
$\mathbf{v}(x_1, x_2, x_3)$ (rendering a scalar field with respect to a particular basis), and
(iii) the curl (rotation) of a vector field $\mathbf{v}(x_1, x_2, x_3)$ (rendering a vector field
with respect to a particular basis) as follows:
$$\operatorname{grad} f = \nabla f = \left( \frac{\partial f}{\partial x_1}, \frac{\partial f}{\partial x_2}, \frac{\partial f}{\partial x_3} \right), \qquad (5.82)$$
$$\operatorname{div} \mathbf{v} = \nabla \cdot \mathbf{v} = \frac{\partial v_1}{\partial x_1} + \frac{\partial v_2}{\partial x_2} + \frac{\partial v_3}{\partial x_3}, \qquad (5.83)$$
$$\operatorname{rot} \mathbf{v} = \nabla \times \mathbf{v} = \left( \frac{\partial v_3}{\partial x_2} - \frac{\partial v_2}{\partial x_3}, \frac{\partial v_1}{\partial x_3} - \frac{\partial v_3}{\partial x_1}, \frac{\partial v_2}{\partial x_1} - \frac{\partial v_1}{\partial x_2} \right) \qquad (5.84)$$
$$\equiv \varepsilon_{ijk} \partial_j v_k. \qquad (5.85)$$
The Laplace operator is defined by
$$\Delta = \nabla^2 = \nabla \cdot \nabla = \frac{\partial^2}{\partial x_1^2} + \frac{\partial^2}{\partial x_2^2} + \frac{\partial^2}{\partial x_3^2}. \qquad (5.86)$$
In special relativity and electrodynamics, as well as in wave theory
and quantized field theory, with the Minkowski space-time of dimension
four (referring to the metric tensor with the signature "$\pm, \pm, \pm, \mp$"), the
D'Alembert operator is defined with the Minkowski metric $\eta = \operatorname{diag}(1, 1, 1, -1)$ by
$$\Box = \partial_i \partial^i = \eta_{ij} \partial^i \partial^j = \nabla^2 - \frac{\partial^2}{\partial t^2} = \nabla \cdot \nabla - \frac{\partial^2}{\partial t^2} = \frac{\partial^2}{\partial x_1^2} + \frac{\partial^2}{\partial x_2^2} + \frac{\partial^2}{\partial x_3^2} - \frac{\partial^2}{\partial t^2}. \qquad (5.87)$$
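For a field depending only on $x_1$ and $t$ (and with unit wave speed), Eq. (5.87) reduces to $\partial^2/\partial x_1^2 - \partial^2/\partial t^2$, which annihilates any traveling wave $f(x_1 - t)$. A finite-difference spot check in pure Python (the wave profile, sample point, and step size are arbitrary choices):

```python
import math

# f(x, t) = sin(x - t): a wave traveling at unit speed, so Box f = 0
f = lambda x, t: math.sin(x - t)

def second_partial(g, x, t, which, h=1e-4):
    # central finite-difference second derivative in x or t
    if which == "x":
        return (g(x + h, t) - 2 * g(x, t) + g(x - h, t)) / h**2
    return (g(x, t + h) - 2 * g(x, t) + g(x, t - h)) / h**2

x, t = 0.8, 0.3
box_f = second_partial(f, x, t, "x") - second_partial(f, x, t, "t")
assert abs(box_f) < 1e-6
```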
5.14 Some tricks and examples
There are some tricks which are commonly used. Here, some of them are
enumerated:
(i) Indices which appear as internal sums can be renamed arbitrarily
(provided their name is not already taken by some other index). That is,
$a_i b_i = a_j b_j$ for arbitrary $a, b, i, j$.

(ii) With the Euclidean metric, $\delta_{ii} = n$.

(iii) $\frac{\partial X^i}{\partial X^j} = \delta^i_j$.

(iv) With the Euclidean metric, $\frac{\partial X^i}{\partial X^i} = n$.

(v) For three-dimensional vector spaces ($n = 3$) and the Euclidean metric,
the Grassmann identity holds:
$$\varepsilon_{ijk}\varepsilon_{klm} = \delta_{il}\delta_{jm} - \delta_{im}\delta_{jl}. \qquad (5.88)$$

(vi) For three-dimensional vector spaces ($n = 3$) and the Euclidean metric,
$$|\mathbf{a}\times\mathbf{b}| = \sqrt{\varepsilon_{ijk}\varepsilon_{ist} a_j a_s b_k b_t} = \sqrt{|\mathbf{a}|^2 |\mathbf{b}|^2 - (\mathbf{a}\cdot\mathbf{b})^2} = \sqrt{\det \begin{pmatrix} \mathbf{a}\cdot\mathbf{a} & \mathbf{a}\cdot\mathbf{b} \\ \mathbf{a}\cdot\mathbf{b} & \mathbf{b}\cdot\mathbf{b} \end{pmatrix}} = |\mathbf{a}||\mathbf{b}| \sin\theta_{ab}.$$

(vii) Let $u, v \equiv X'_1, X'_2$ be two parameters associated with an orthonormal
Cartesian basis $\{(0,1), (1,0)\}$, and let $\Phi: (u, v) \mapsto \mathbb{R}^3$ be a mapping from
some area of $\mathbb{R}^2$ into a two-dimensional surface of $\mathbb{R}^3$. Then the metric
tensor is given by
$$g_{ij} = \frac{\partial \Phi^k}{\partial X'^i} \frac{\partial \Phi^m}{\partial X'^j} \delta_{km}.$$
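The Grassmann identity (5.88) can be verified by brute force over all index values; a pure-Python sketch (the closed-form expression for $\varepsilon$ and the helper names are ad hoc, and indices are 0-based):

```python
def eps(i, j, k):
    # three-dimensional Levi-Civita symbol (0-based indices)
    return (i - j) * (j - k) * (k - i) // 2

def delta(i, j):
    # Kronecker symbol
    return 1 if i == j else 0

# Grassmann identity (5.88): eps_ijk eps_klm = delta_il delta_jm - delta_im delta_jl
for i in range(3):
    for j in range(3):
        for l in range(3):
            for m in range(3):
                lhs = sum(eps(i, j, k) * eps(k, l, m) for k in range(3))
                rhs = delta(i, l) * delta(j, m) - delta(i, m) * delta(j, l)
                assert lhs == rhs
```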
Consider the following examples in three-dimensional vector space. Let
$r^2 = \sum_{i=1}^{3} x_i^2$.
1.
$$\partial_j r = \partial_j \sqrt{\sum_i x_i^2} = \frac{1}{2} \frac{1}{\sqrt{\sum_i x_i^2}} \, 2x_j = \frac{x_j}{r}. \qquad (5.89)$$
By using the chain rule one obtains
$$\partial_j r^\alpha = \alpha r^{\alpha-1} \left( \partial_j r \right) = \alpha r^{\alpha-1} \frac{x_j}{r} = \alpha r^{\alpha-2} x_j, \qquad (5.90)$$
and thus $\nabla r^\alpha = \alpha r^{\alpha-2} \mathbf{x}$.
2.
$$\partial_j \log r = \frac{1}{r} \left( \partial_j r \right). \qquad (5.91)$$
With $\partial_j r = \frac{x_j}{r}$ derived earlier in Eq. (5.89) one obtains $\partial_j \log r = \frac{1}{r}\frac{x_j}{r} = \frac{x_j}{r^2}$, and thus $\nabla \log r = \frac{\mathbf{x}}{r^2}$.
3.
$$\begin{aligned}
\partial_j &\left[ \left( \sum_i (x_i - a_i)^2 \right)^{-\frac{1}{2}} + \left( \sum_i (x_i + a_i)^2 \right)^{-\frac{1}{2}} \right] = \\
&= -\frac{1}{2} \frac{1}{\left( \sum_i (x_i - a_i)^2 \right)^{\frac{3}{2}}} \, 2 \left( x_j - a_j \right) - \frac{1}{2} \frac{1}{\left( \sum_i (x_i + a_i)^2 \right)^{\frac{3}{2}}} \, 2 \left( x_j + a_j \right) \\
&= -\left( \sum_i (x_i - a_i)^2 \right)^{-\frac{3}{2}} \left( x_j - a_j \right) - \left( \sum_i (x_i + a_i)^2 \right)^{-\frac{3}{2}} \left( x_j + a_j \right). \qquad (5.92)
\end{aligned}$$
4.
$$\nabla \cdot \left( \frac{\mathbf{r}}{r^3} \right) \equiv \partial_i \left( \frac{r_i}{r^3} \right) = \frac{1}{r^3} \underbrace{\partial_i r_i}_{=3} + r_i \left( -3\frac{1}{r^4} \right) \left( \frac{1}{2r} \right) 2r_i = 3\frac{1}{r^3} - 3\frac{1}{r^3} = 0. \qquad (5.93)$$
5. With the earlier solution (5.93) one obtains, for $r \neq 0$,
$$\Delta \left( \frac{1}{r} \right) \equiv \partial_i \partial_i \frac{1}{r} = \partial_i \left( -\frac{1}{r^2} \right) \left( \frac{1}{2r} \right) 2r_i = -\partial_i \frac{r_i}{r^3} = 0. \qquad (5.94)$$
6. With the earlier solution (5.93) one obtains
$$\begin{aligned}
\Delta \left( \frac{\mathbf{r}\mathbf{p}}{r^3} \right) &\equiv \partial_i \partial_i \frac{r_j p_j}{r^3} = \partial_i \left[ \frac{p_i}{r^3} + r_j p_j \left( -3\frac{1}{r^5} \right) r_i \right] \\
&= p_i \left( -3\frac{1}{r^5} \right) r_i + p_i \left( -3\frac{1}{r^5} \right) r_i + r_j p_j \left( 15\frac{1}{r^6} \right) \left( \frac{1}{2r} \right) 2r_i \, r_i + r_j p_j \left( -3\frac{1}{r^5} \right) \underbrace{\partial_i r_i}_{=3} \\
&= r_i p_i \frac{1}{r^5} \left( -3 - 3 + 15 - 9 \right) = 0. \qquad (5.95)
\end{aligned}$$
7. With $r \neq 0$ and constant $\mathbf{p}$ one obtains (note that, in three dimensions, the
Grassmann identity (5.88) $\varepsilon_{ijk}\varepsilon_{klm} = \delta_{il}\delta_{jm} - \delta_{im}\delta_{jl}$ holds)
$$\begin{aligned}
\nabla \times \left( \frac{\mathbf{p}\times\mathbf{r}}{r^3} \right) &\equiv \varepsilon_{ijk} \partial_j \varepsilon_{klm} p_l \frac{r_m}{r^3} = p_l \varepsilon_{ijk}\varepsilon_{klm} \left[ \frac{1}{r^3} \partial_j r_m + r_m \left( -3\frac{1}{r^4} \right) \left( \frac{1}{2r} \right) 2r_j \right] \\
&= p_l \varepsilon_{ijk}\varepsilon_{klm} \left[ \frac{1}{r^3} \delta_{jm} - 3\frac{r_j r_m}{r^5} \right] = p_l \left( \delta_{il}\delta_{jm} - \delta_{im}\delta_{jl} \right) \left[ \frac{1}{r^3} \delta_{jm} - 3\frac{r_j r_m}{r^5} \right] \\
&= \underbrace{p_i \left( 3\frac{1}{r^3} - 3\frac{1}{r^3} \right)}_{=0} - p_j \left( \frac{1}{r^3} \underbrace{\partial_j r_i}_{=\delta_{ij}} - 3\frac{r_j r_i}{r^5} \right) = -\frac{\mathbf{p}}{r^3} + 3\frac{\left( \mathbf{r}\mathbf{p} \right)\mathbf{r}}{r^5}. \qquad (5.96)
\end{aligned}$$
8.
$$\nabla \times \left( \nabla \Phi \right) \equiv \varepsilon_{ijk} \partial_j \partial_k \Phi = \varepsilon_{ikj} \partial_k \partial_j \Phi = \varepsilon_{ikj} \partial_j \partial_k \Phi = -\varepsilon_{ijk} \partial_j \partial_k \Phi = 0. \qquad (5.97)$$
This is due to the fact that $\partial_j \partial_k$ is symmetric, whereas $\varepsilon_{ijk}$ is totally
antisymmetric.
9. For a proof that $(\mathbf{x}\times\mathbf{y})\times\mathbf{z} \neq \mathbf{x}\times(\mathbf{y}\times\mathbf{z})$, consider
$$\left[ (\mathbf{x}\times\mathbf{y})\times\mathbf{z} \right]_i \equiv \varepsilon_{ijl}\varepsilon_{jkm} x_k y_m z_l = -\varepsilon_{ilj}\varepsilon_{jkm} x_k y_m z_l = -\left( \delta_{ik}\delta_{lm} - \delta_{im}\delta_{lk} \right) x_k y_m z_l = -x_i \, \mathbf{y}\cdot\mathbf{z} + y_i \, \mathbf{x}\cdot\mathbf{z} \qquad (5.98)$$
versus
$$\left[ \mathbf{x}\times(\mathbf{y}\times\mathbf{z}) \right]_i \equiv \varepsilon_{ilj}\varepsilon_{jkm} x_l y_k z_m = \left( \delta_{ik}\delta_{lm} - \delta_{im}\delta_{lk} \right) x_l y_k z_m = y_i \, \mathbf{x}\cdot\mathbf{z} - z_i \, \mathbf{x}\cdot\mathbf{y}. \qquad (5.99)$$
10. Let $\mathbf{w} = \frac{\mathbf{p}}{r}$ with $p_i = p_i\left( t - \frac{r}{c} \right)$, whereby $t$ and $c$ are constants. Then,
$$\begin{aligned}
\operatorname{div} \mathbf{w} = \nabla \cdot \mathbf{w} \equiv \partial_i w_i &= \partial_i \left[ \frac{1}{r} p_i \left( t - \frac{r}{c} \right) \right] \\
&= \left( -\frac{1}{r^2} \right) \left( \frac{1}{2r} \right) 2r_i \, p_i + \frac{1}{r} p_i' \left( -\frac{1}{c} \right) \left( \frac{1}{2r} \right) 2r_i = -\frac{r_i p_i}{r^3} - \frac{1}{cr^2} p_i' r_i.
\end{aligned}$$
Hence, $\operatorname{div} \mathbf{w} = \nabla \cdot \mathbf{w} = -\left( \frac{\mathbf{r}\mathbf{p}}{r^3} + \frac{\mathbf{r}\mathbf{p}'}{cr^2} \right)$. Likewise,
$$\begin{aligned}
\left[ \operatorname{rot} \mathbf{w} \right]_i = \left[ \nabla \times \mathbf{w} \right]_i \equiv \varepsilon_{ijk} \partial_j w_k &= \varepsilon_{ijk} \left[ \left( -\frac{1}{r^2} \right) \left( \frac{1}{2r} \right) 2r_j \, p_k + \frac{1}{r} p_k' \left( -\frac{1}{c} \right) \left( \frac{1}{2r} \right) 2r_j \right] \\
&= -\frac{1}{r^3} \varepsilon_{ijk} r_j p_k - \frac{1}{cr^2} \varepsilon_{ijk} r_j p_k' \equiv -\frac{1}{r^3} \left( \mathbf{r}\times\mathbf{p} \right)_i - \frac{1}{cr^2} \left( \mathbf{r}\times\mathbf{p}' \right)_i.
\end{aligned}$$
11. Let us verify some specific examples of Gauss' (divergence) theorem,
stating that the outward flux of a vector field through a closed surface is
equal to the volume integral of the divergence over the region inside the
surface. That is, the sum of all sources minus the sum of all sinks
represents the net flow out of a region or volume of three-dimensional space:
$$\int_V \nabla \cdot \mathbf{w} \, dv = \int_F \mathbf{w} \cdot d\mathbf{f}. \qquad (5.100)$$
Consider the vector field $\mathbf{w} = (4x, -2y^2, z^2)$ and the (cylindric) volume
bounded by the planes $z = 0$ and $z = 3$, as well as by the surface $x^2 + y^2 = 4$.
Let us first look at the left hand side $\int_V \nabla \cdot \mathbf{w} \, dv$ of Eq. (5.100):
$$\nabla \cdot \mathbf{w} = \operatorname{div} \mathbf{w} = 4 - 4y + 2z$$
$$\Longrightarrow \int_V \operatorname{div} \mathbf{w} \, dv = \int_{z=0}^{3} dz \int_{x=-2}^{2} dx \int_{y=-\sqrt{4-x^2}}^{\sqrt{4-x^2}} dy \, \left( 4 - 4y + 2z \right).$$
In cylindric coordinates $x = r\cos\varphi$, $y = r\sin\varphi$, $z = z$, this becomes
$$\begin{aligned}
&= \int_{z=0}^{3} dz \int_{0}^{2} r\,dr \int_{0}^{2\pi} d\varphi \, \left( 4 - 4r\sin\varphi + 2z \right) \\
&= \int_{z=0}^{3} dz \int_{0}^{2} r\,dr \, \left( 4\varphi + 4r\cos\varphi + 2\varphi z \right) \Big|_{\varphi=0}^{2\pi} \\
&= \int_{z=0}^{3} dz \int_{0}^{2} r\,dr \, \left( 8\pi + 4r + 4\pi z - 4r \right) = \int_{z=0}^{3} dz \int_{0}^{2} r\,dr \, \left( 8\pi + 4\pi z \right) \\
&= 2 \left( 8\pi z + 4\pi \frac{z^2}{2} \right) \Big|_{z=0}^{3} = 2(24 + 18)\pi = 84\pi.
\end{aligned}$$
Now consider the right hand side $\int_F \mathbf{w} \cdot d\mathbf{f}$ of Eq. (5.100). The surface
consists of three parts: the lower plane $F_1$ of the cylinder is characterized
by $z = 0$; the upper plane $F_2$ of the cylinder is characterized by $z = 3$;
the surface on the side of the cylinder $F_3$ is characterized by $x^2 + y^2 = 4$.
$d\mathbf{f}$ must be normal to these surfaces, pointing outwards; hence
$$F_1: \int_{F_1} \mathbf{w} \cdot d\mathbf{f}_1 = \int_{F_1} \begin{pmatrix} 4x \\ -2y^2 \\ z^2 = 0 \end{pmatrix} \cdot \begin{pmatrix} 0 \\ 0 \\ -1 \end{pmatrix} dx\,dy = 0,$$
$$F_2: \int_{F_2} \mathbf{w} \cdot d\mathbf{f}_2 = \int_{F_2} \begin{pmatrix} 4x \\ -2y^2 \\ z^2 = 9 \end{pmatrix} \cdot \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} dx\,dy = 9 \int_{K_{r=2}} df = 9 \cdot 4\pi = 36\pi,$$
$$F_3: \int_{F_3} \mathbf{w} \cdot d\mathbf{f}_3 = \int_{F_3} \begin{pmatrix} 4x \\ -2y^2 \\ z^2 \end{pmatrix} \cdot \left( \frac{\partial \mathbf{x}}{\partial \varphi} \times \frac{\partial \mathbf{x}}{\partial z} \right) d\varphi\,dz \qquad (r = 2 = \text{const.})$$
with
$$\frac{\partial \mathbf{x}}{\partial \varphi} = \begin{pmatrix} -r\sin\varphi \\ r\cos\varphi \\ 0 \end{pmatrix} = \begin{pmatrix} -2\sin\varphi \\ 2\cos\varphi \\ 0 \end{pmatrix}; \quad \frac{\partial \mathbf{x}}{\partial z} = \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} \Longrightarrow \frac{\partial \mathbf{x}}{\partial \varphi} \times \frac{\partial \mathbf{x}}{\partial z} = \begin{pmatrix} 2\cos\varphi \\ 2\sin\varphi \\ 0 \end{pmatrix},$$
so that
$$\begin{aligned}
\int_{F_3} \mathbf{w} \cdot d\mathbf{f}_3 &= \int_{\varphi=0}^{2\pi} d\varphi \int_{z=0}^{3} dz \begin{pmatrix} 4 \cdot 2\cos\varphi \\ -2(2\sin\varphi)^2 \\ z^2 \end{pmatrix} \cdot \begin{pmatrix} 2\cos\varphi \\ 2\sin\varphi \\ 0 \end{pmatrix} \\
&= \int_{\varphi=0}^{2\pi} d\varphi \int_{z=0}^{3} dz \left( 16\cos^2\varphi - 16\sin^3\varphi \right) = 3 \cdot 16 \int_{\varphi=0}^{2\pi} d\varphi \left( \cos^2\varphi - \sin^3\varphi \right).
\end{aligned}$$
With $\int \cos^2\varphi \, d\varphi = \frac{\varphi}{2} + \frac{1}{4}\sin 2\varphi$ and $\int \sin^3\varphi \, d\varphi = -\cos\varphi + \frac{1}{3}\cos^3\varphi$, the latter contributing nothing over a full period, one obtains
$$\int_{F_3} \mathbf{w} \cdot d\mathbf{f}_3 = 3 \cdot 16 \left[ \frac{2\pi}{2} - \underbrace{\left( \left( -1 + \frac{1}{3} \right) - \left( -1 + \frac{1}{3} \right) \right)}_{=0} \right] = 48\pi.$$
For the flux through the surfaces one thus obtains
$$\oint_F \mathbf{w} \cdot d\mathbf{f} = F_1 + F_2 + F_3 = 0 + 36\pi + 48\pi = 84\pi.$$
12. Let us verify a specific example of Stokes' theorem in three dimensions,
stating that
$$\int_F \operatorname{rot} \mathbf{b} \cdot d\mathbf{f} = \oint_{C_F} \mathbf{b} \cdot d\mathbf{s}. \qquad (5.101)$$
Consider the vector field $\mathbf{b} = (yz, -xz, 0)$ and the spherical cap of a sphere
of radius $a$ centered around the origin, cut off by the plane at $z = a/\sqrt{2}$.
Let us first look at the left hand side $\int_F \operatorname{rot} \mathbf{b} \cdot d\mathbf{f}$ of Eq. (5.101):
$$\mathbf{b} = \begin{pmatrix} yz \\ -xz \\ 0 \end{pmatrix} \Longrightarrow \operatorname{rot} \mathbf{b} = \nabla \times \mathbf{b} = \begin{pmatrix} x \\ y \\ -2z \end{pmatrix}.$$
Let us transform this into spherical coordinates:
$$\mathbf{x} = \begin{pmatrix} r\sin\theta\cos\varphi \\ r\sin\theta\sin\varphi \\ r\cos\theta \end{pmatrix} \Longrightarrow \frac{\partial \mathbf{x}}{\partial \theta} = r \begin{pmatrix} \cos\theta\cos\varphi \\ \cos\theta\sin\varphi \\ -\sin\theta \end{pmatrix}; \quad \frac{\partial \mathbf{x}}{\partial \varphi} = r \begin{pmatrix} -\sin\theta\sin\varphi \\ \sin\theta\cos\varphi \\ 0 \end{pmatrix},$$
$$d\mathbf{f} = \left( \frac{\partial \mathbf{x}}{\partial \theta} \times \frac{\partial \mathbf{x}}{\partial \varphi} \right) d\theta\,d\varphi = r^2 \begin{pmatrix} \sin^2\theta\cos\varphi \\ \sin^2\theta\sin\varphi \\ \sin\theta\cos\theta \end{pmatrix} d\theta\,d\varphi,$$
$$\nabla \times \mathbf{b} = r \begin{pmatrix} \sin\theta\cos\varphi \\ \sin\theta\sin\varphi \\ -2\cos\theta \end{pmatrix}.$$
Hence, on the sphere ($r = a$),
$$\begin{aligned}
\int_F \operatorname{rot} \mathbf{b} \cdot d\mathbf{f} &= \int_{\theta=0}^{\pi/4} d\theta \int_{\varphi=0}^{2\pi} d\varphi \, a^3 \begin{pmatrix} \sin\theta\cos\varphi \\ \sin\theta\sin\varphi \\ -2\cos\theta \end{pmatrix} \cdot \begin{pmatrix} \sin^2\theta\cos\varphi \\ \sin^2\theta\sin\varphi \\ \sin\theta\cos\theta \end{pmatrix} \\
&= a^3 \int_{\theta=0}^{\pi/4} d\theta \int_{\varphi=0}^{2\pi} d\varphi \left[ \sin^3\theta \underbrace{\left( \cos^2\varphi + \sin^2\varphi \right)}_{=1} - 2\sin\theta\cos^2\theta \right] \\
&= 2\pi a^3 \int_{\theta=0}^{\pi/4} d\theta \left[ \sin\theta \left( 1 - \cos^2\theta \right) - 2\sin\theta\cos^2\theta \right] = 2\pi a^3 \int_{\theta=0}^{\pi/4} d\theta \, \sin\theta \left( 1 - 3\cos^2\theta \right).
\end{aligned}$$
With the transformation of variables $\cos\theta = u \Rightarrow du = -\sin\theta\,d\theta$, one obtains
$$\int_F \operatorname{rot} \mathbf{b} \cdot d\mathbf{f} = 2\pi a^3 \int_{\theta=0}^{\pi/4} (-du) \left( 1 - 3u^2 \right) = 2\pi a^3 \left( u^3 - u \right) \Big|_{\theta=0}^{\pi/4} = 2\pi a^3 \left( \cos^3\theta - \cos\theta \right) \Big|_{\theta=0}^{\pi/4} = 2\pi a^3 \left( \frac{2\sqrt{2}}{8} - \frac{\sqrt{2}}{2} \right) = \frac{2\pi a^3}{8} \left( -2\sqrt{2} \right) = -\frac{\pi a^3 \sqrt{2}}{2}.$$
Now consider the right hand side $\oint_{C_F} \mathbf{b} \cdot d\mathbf{s}$ of Eq. (5.101). The radius $r'$
of the circle $\{(x, y, z) \mid x, y \in \mathbb{R}, z = a/\sqrt{2}\}$ bounded by the sphere
with radius $a$ is determined by $a^2 = (r')^2 + (a/\sqrt{2})^2$; hence, $r' = a/\sqrt{2}$. The
curve of integration $C_F$ can be parameterized by
$$\left\{ (x, y, z) \;\Big|\; x = \frac{a}{\sqrt{2}}\cos\varphi, \; y = \frac{a}{\sqrt{2}}\sin\varphi, \; z = \frac{a}{\sqrt{2}} \right\}.$$
Therefore,
$$\mathbf{x} = a \begin{pmatrix} \frac{1}{\sqrt{2}}\cos\varphi \\ \frac{1}{\sqrt{2}}\sin\varphi \\ \frac{1}{\sqrt{2}} \end{pmatrix} = \frac{a}{\sqrt{2}} \begin{pmatrix} \cos\varphi \\ \sin\varphi \\ 1 \end{pmatrix} \in C_F.$$
Let us transform this into polar coordinates:
$$d\mathbf{s} = \frac{d\mathbf{x}}{d\varphi} d\varphi = \frac{a}{\sqrt{2}} \begin{pmatrix} -\sin\varphi \\ \cos\varphi \\ 0 \end{pmatrix} d\varphi,$$
$$\mathbf{b} = \begin{pmatrix} \frac{a}{\sqrt{2}}\sin\varphi \cdot \frac{a}{\sqrt{2}} \\ -\frac{a}{\sqrt{2}}\cos\varphi \cdot \frac{a}{\sqrt{2}} \\ 0 \end{pmatrix} = \frac{a^2}{2} \begin{pmatrix} \sin\varphi \\ -\cos\varphi \\ 0 \end{pmatrix}.$$
Hence the circular integral is given by
$$\oint_{C_F} \mathbf{b} \cdot d\mathbf{s} = \frac{a^2}{2} \frac{a}{\sqrt{2}} \int_{\varphi=0}^{2\pi} \underbrace{\left( -\sin^2\varphi - \cos^2\varphi \right)}_{=-1} d\varphi = -\frac{a^3}{2\sqrt{2}} 2\pi = -\frac{\pi a^3 \sqrt{2}}{2},$$
in agreement with the left hand side.
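Some of the identities above, e.g. Eqs. (5.90) and (5.94), can be spot-checked with finite differences; a pure-Python sketch (step sizes, sample point, and tolerances are ad hoc choices):

```python
import math

def norm(x):
    return math.sqrt(sum(xi * xi for xi in x))

def grad(g, x, h=1e-6):
    # central-difference gradient (partial g / partial x_i)
    out = []
    for i in range(3):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        out.append((g(xp) - g(xm)) / (2 * h))
    return out

def laplacian(g, x, h=1e-3):
    # central-difference Laplacian, sum_i of second partials
    total = 0.0
    for i in range(3):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        total += (g(xp) - 2 * g(x) + g(xm)) / h**2
    return total

x = [0.9, -0.4, 1.1]          # any sample point with r != 0
r = norm(x)

# Eq. (5.90): grad r^alpha = alpha r^(alpha-2) x
alpha = 2.5
g = grad(lambda y: norm(y) ** alpha, x)
assert all(abs(g[i] - alpha * r ** (alpha - 2) * x[i]) < 1e-6 for i in range(3))

# Eq. (5.94): the Laplacian of 1/r vanishes away from the origin
assert abs(laplacian(lambda y: 1 / norm(y), x)) < 1e-5
```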
5.15 Some common misconceptions
5.15.1 Confusion between component representation and “the real thing”
Given a particular basis, a tensor is uniquely characterized by its components.
However, without reference to a particular basis, any components
are just blurbs.
Example (wrong!): a rank-1 tensor (i.e., a vector) is given by (1, 2).
Correct: with respect to the basis $\{(0,1), (1,0)\}$, a rank-1 tensor (i.e., a
vector) is given by (1, 2).
5.15.2 A matrix is a tensor
See the above section. Example (wrong!): a matrix is a tensor of type (or
rank) 2. Correct: with respect to the basis $\{(0,1), (1,0)\}$, a matrix represents
a rank-2 tensor. The matrix components are the tensor components.
Also, for non-orthogonal bases, covariant, contravariant, and mixed
tensors correspond to different matrices.
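The point of Section 5.15.1 can be made concrete in a few lines of Python: the components (1, 2) encode different vectors depending on the basis they refer to (the particular numbers are, of course, just an illustration):

```python
# With respect to the basis b1 = (0,1), b2 = (1,0), the components (1, 2)
# encode the vector v = 1*b1 + 2*b2:
b = [(0.0, 1.0), (1.0, 0.0)]
comp = (1.0, 2.0)
v = tuple(comp[0] * b[0][i] + comp[1] * b[1][i] for i in range(2))
assert v == (2.0, 1.0)

# With respect to the standard basis (1,0), (0,1), the same vector has
# components (2, 1) -- different numbers, same tensor:
assert v != comp
```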
6
Projective and incidence geometry
P RO J E C T I V E G E O M E T RY is about the geometric properties that are invari-
ant under projective transformations. Incidence geometry is about which
points lie on which line.
6.1 Notation
In what follows, for the sake of being able to formally represent geometric
transformations as "quasi-linear" transformations and matrices, the
coordinates of $n$-dimensional Euclidean space will be augmented with one
additional coordinate which is set to one. For instance, in the plane $\mathbb{R}^2$, we
define new "three-component" coordinates by
$$\mathbf{x} = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} \equiv \begin{pmatrix} x_1 \\ x_2 \\ 1 \end{pmatrix} = \mathbf{X}. \qquad (6.1)$$
In order to differentiate these new coordinates X from the usual ones x,
they will be written in capital letters.
6.2 Affine transformations
Affine transformations
$$f(\mathbf{x}) = A\mathbf{x} + \mathbf{t} \qquad (6.2)$$
with the translation $\mathbf{t}$, encoded by a tuple $(t_1, t_2)^T$, and an arbitrary
linear transformation $A$ encoding rotations, as well as dilatation and skewing
transformations, represented by an arbitrary matrix $A$, can be
"wrapped together" to form the new transformation matrix ("$0^T$" indicates
a row vector with zero entries)
$$\mathbf{f} = \begin{pmatrix} A & \mathbf{t} \\ 0^T & 1 \end{pmatrix} \equiv \begin{pmatrix} a_{11} & a_{12} & t_1 \\ a_{21} & a_{22} & t_2 \\ 0 & 0 & 1 \end{pmatrix}. \qquad (6.3)$$
As a result, the affine transformation $f$ can be represented in the
"quasi-linear" form
$$\mathbf{f}(\mathbf{X}) = \mathbf{f}\mathbf{X} = \begin{pmatrix} A & \mathbf{t} \\ 0^T & 1 \end{pmatrix} \mathbf{X}. \qquad (6.4)$$
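The "quasi-linear" representation (6.4) is straightforward to exercise in code; a pure-Python sketch (the rotation angle, translation, and helper names are ad hoc):

```python
import math

def affine_matrix(A, t):
    # the 3x3 "quasi-linear" matrix of Eq. (6.3): ((A, t), (0^T, 1))
    return [[A[0][0], A[0][1], t[0]],
            [A[1][0], A[1][1], t[1]],
            [0.0, 0.0, 1.0]]

def apply(f, X):
    # matrix-vector product on augmented coordinates
    return [sum(f[i][j] * X[j] for j in range(3)) for i in range(3)]

phi = math.pi / 2
A = [[math.cos(phi), -math.sin(phi)],
     [math.sin(phi), math.cos(phi)]]
t = [3.0, 1.0]

X = [1.0, 0.0, 1.0]               # augmented coordinates of x = (1, 0)
Y = apply(affine_matrix(A, t), X)
# rotate (1, 0) by 90 degrees to (0, 1), then translate by (3, 1);
# the augmenting coordinate stays equal to one
assert all(abs(Y[i] - e) < 1e-12 for i, e in enumerate([3.0, 2.0, 1.0]))
```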
6.2.1 One-dimensional case
In one dimension, that is, for $z \in \mathbb{C}$, among the five basic operations
(i) scaling: $f(z) = rz$ for $r \in \mathbb{R}$,
(ii) translation: $f(z) = z + w$ for $w \in \mathbb{C}$,
(iii) rotation: $f(z) = e^{i\varphi} z$ for $\varphi \in \mathbb{R}$,
(iv) complex conjugation: $f(z) = \overline{z}$,
(v) inversion: $f(z) = z^{-1}$,
there are three types of affine transformations, (i)–(iii), which can be combined.
6.3 Similarity transformations
Similarity transformations involve translations $\mathbf{t}$, rotations $R$, and a dilatation
$r$, and can be represented by the matrix
$$\begin{pmatrix} rR & \mathbf{t} \\ 0^T & 1 \end{pmatrix} \equiv \begin{pmatrix} m\cos\varphi & -m\sin\varphi & t_1 \\ m\sin\varphi & m\cos\varphi & t_2 \\ 0 & 0 & 1 \end{pmatrix}. \qquad (6.5)$$
6.4 Fundamental theorem of affine geometry

Any bijection from $\mathbb{R}^n$, $n \geq 2$, onto itself which maps all lines onto lines is
an affine transformation.

For a proof and further references, see June A. Lester. Distance preserving transformations. In Francis Buekenhout, editor, Handbook of Incidence Geometry, pages 921–944. Elsevier, Amsterdam, 1995.
6.5 Alexandrov's theorem

Consider the Minkowski space-time $\mathbb{M}^n$; that is, $\mathbb{R}^n$, $n \geq 3$, endowed with the Minkowski
metric [cf. (5.54) on page 97] $\eta \equiv \eta_{ij} = \operatorname{diag}(\underbrace{1, 1, \ldots, 1}_{n-1 \text{ times}}, -1)$. Consider further
bijections $f$ from $\mathbb{M}^n$ onto itself preserving light cones; that is, for all
$\mathbf{x}, \mathbf{y} \in \mathbb{M}^n$,
$$\eta_{ij}(x^i - y^i)(x^j - y^j) = 0 \text{ if and only if } \eta_{ij}(f^i(\mathbf{x}) - f^i(\mathbf{y}))(f^j(\mathbf{x}) - f^j(\mathbf{y})) = 0.$$
Then $f(\mathbf{x})$ is the product of a Lorentz transformation and a positive scale
factor.

For a proof and further references, see June A. Lester. Distance preserving transformations. In Francis Buekenhout, editor, Handbook of Incidence Geometry, pages 921–944. Elsevier, Amsterdam, 1995.
7
Group theory
G RO U P T H E O RY is about transformations and symmetries.
7.1 Definition
A group is a set of objects $G$ which satisfy the following conditions (or,
stated differently, axioms):

(i) closedness: there exists a composition rule "$\circ$" such that $G$ is closed
under any composition of elements; that is, the combination of any two
elements $a, b \in G$ results in an element of the group $G$;

(ii) associativity: for all $a$, $b$, and $c$ in $G$, the following equality holds:
$a \circ (b \circ c) = (a \circ b) \circ c$;

(iii) identity (element): there exists an element of $G$, called the identity
(element) and denoted by $I$, such that for all $a$ in $G$, $a \circ I = a$;

(iv) inverse (element): for every $a$ in $G$, there exists an element $a^{-1}$ in $G$
such that $a^{-1} \circ a = I$;

(v) (optional) commutativity: if, for all $a$ and $b$ in $G$, $a \circ b = b \circ a$,
then the group $G$ is called Abelian; otherwise it is called non-Abelian.
A subgroup of a group is a subset which also satisfies the above axioms.
The order of a group is the number of distinct elements of that group.
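The axioms can be checked by brute force for a small finite group, say the integers modulo $n$ under addition; a pure-Python sketch ($n$ and the helper names are arbitrary):

```python
# the integers modulo n under addition form an Abelian group of order n
n = 6
G = list(range(n))
op = lambda a, b: (a + b) % n

# (i) closedness
assert all(op(a, b) in G for a in G for b in G)
# (ii) associativity
assert all(op(a, op(b, c)) == op(op(a, b), c) for a in G for b in G for c in G)
# (iii) identity
I = 0
assert all(op(a, I) == a for a in G)
# (iv) inverse
assert all(any(op(b, a) == I for b in G) for a in G)
# (v) commutativity: this group is Abelian
assert all(op(a, b) == op(b, a) for a in G for b in G)
```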
In discussing groups one should keep in mind that there are two ab-
stract spaces involved:
(i) Representation space is the space of elements on which the group
elements – that is, the group transformations – act.
(ii) Group space is the space of elements of the group transformations.
Its dimension is the number of independent transformations of which
the group is composed. These independent elements, also called
the generators of the group, form a basis for all group elements. The
coordinates in this space are defined relative to (in terms of) the basis
elements, that is, the generators. A continuous group can geometrically
be imagined as a linear space (e.g., a linear vector or matrix space)
in which every point is an element of the group.
Suppose we can find a structure- and distinction-preserving mapping
$U$ (that is, an injective mapping preserving the group operation) between
elements of a group $G$ and the group of general real or complex
non-singular matrices GL($n$,$\mathbb{R}$) or GL($n$,$\mathbb{C}$), respectively. Then this
mapping is called a representation of the group $G$. In particular, for this
$U: G \mapsto$ GL($n$,$\mathbb{R}$) or $U: G \mapsto$ GL($n$,$\mathbb{C}$),
$$U(a \circ b) = U(a) \cdot U(b) \qquad (7.1)$$
for all $a, b, a \circ b \in G$.
Consider, for the sake of an example, the Pauli spin matrices, which are
proportional to the angular momentum operators along the $x$, $y$, $z$-axes¹:
$$\sigma_1 = \sigma_x = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad \sigma_2 = \sigma_y = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \quad \sigma_3 = \sigma_z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}. \qquad (7.2)$$

¹ Leonard I. Schiff. Quantum Mechanics. McGraw-Hill, New York, 1955.
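Basic algebraic properties of the Pauli matrices (7.2), such as $\sigma_i^2 = I$ and $[\sigma_1, \sigma_2] = 2i\sigma_3$, can be verified directly; a pure-Python sketch (helper names are ad hoc):

```python
# 2x2 complex matrix helpers
def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def sub(A, B):
    return [[A[i][j] - B[i][j] for j in range(2)] for i in range(2)]

s1 = [[0, 1], [1, 0]]
s2 = [[0, -1j], [1j, 0]]
s3 = [[1, 0], [0, -1]]
I2 = [[1, 0], [0, 1]]

# each Pauli matrix squares to the identity
for s in (s1, s2, s3):
    assert mul(s, s) == I2

# commutation relation [sigma_1, sigma_2] = 2i sigma_3
comm = sub(mul(s1, s2), mul(s2, s1))
assert comm == [[2j * s3[i][j] for j in range(2)] for i in range(2)]
```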
Suppose these matrices $\sigma_1, \sigma_2, \sigma_3$ serve as generators of a group. With
respect to this basis system of matrices $\sigma_1, \sigma_2, \sigma_3$, a general point in group
space might be labelled by a three-dimensional vector with the
coordinates $(x_1, x_2, x_3)$ (relative to the basis $\sigma_1, \sigma_2, \sigma_3$); that is,
$$\mathbf{x} = x_1\sigma_1 + x_2\sigma_2 + x_3\sigma_3. \qquad (7.3)$$
If we form the exponential $A(\mathbf{x}) = e^{\frac{i}{2}\mathbf{x}}$, we can show (no proof is given
here) that $A(\mathbf{x})$ is a two-dimensional matrix representation of the group
SU(2), the special unitary group of degree 2 of $2 \times 2$ unitary matrices with
determinant 1.
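That $A(\mathbf{x})$ is unitary with determinant 1 can be spot-checked without a matrix-exponential library: for $\mathbf{x} = \theta\mathbf{n}$ with $|\mathbf{n}| = 1$, $e^{\frac{i}{2}\mathbf{x}} = \cos\frac{\theta}{2} I + i \sin\frac{\theta}{2}\, \mathbf{n}\cdot\boldsymbol{\sigma}$, a standard closed form for the exponential of a Pauli combination, used here without proof (the sample coordinates and helper names are ad hoc):

```python
import math

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def dagger(A):
    return [[complex(A[j][i]).conjugate() for j in range(2)] for i in range(2)]

def det(A):
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

# the Pauli matrices (7.2)
sigma = ([[0, 1], [1, 0]], [[0, -1j], [1j, 0]], [[1, 0], [0, -1]])

def A_of(x):
    # A(x) = exp((i/2) x.sigma); for x = theta * n with |n| = 1 this equals
    # cos(theta/2) I + i sin(theta/2) n.sigma (standard identity, assumed here)
    theta = math.sqrt(sum(xi * xi for xi in x))
    n = [xi / theta for xi in x]
    ns = [[sum(n[k] * sigma[k][i][j] for k in range(3)) for j in range(2)] for i in range(2)]
    c, s = math.cos(theta / 2), math.sin(theta / 2)
    return [[c * (1 if i == j else 0) + 1j * s * ns[i][j] for j in range(2)] for i in range(2)]

A = A_of([0.3, -1.2, 0.5])
# unitarity: A A^dagger = I
AAd = mul(A, dagger(A))
assert all(abs(AAd[i][j] - (1 if i == j else 0)) < 1e-12 for i in range(2) for j in range(2))
# unit determinant
assert abs(det(A) - 1) < 1e-12
```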
7.2 Lie theory
7.2.1 Generators
We can generalize this example by defining the generators of a continuous
group as the first coefficients of a Taylor expansion around unity; that is, if
the dimension of the group is $n$, and the Taylor expansion is
$$G(\mathbf{X}) = \sum_{i=1}^{n} X_i T_i + \ldots, \qquad (7.4)$$
then the matrix generator $T_i$ is defined by
$$T_i = \left. \frac{\partial G(\mathbf{X})}{\partial X_i} \right|_{\mathbf{X}=0}. \qquad (7.5)$$
7.2.2 Exponential map
There is an exponential connection exp : X 7→ G between a matrix Lie
group and the Lie algebra X generated by the generators Ti .
7.2.3 Lie algebra
A Lie algebra is a vector space $\mathfrak{X}$, together with a binary Lie bracket
operation $[\cdot,\cdot]: \mathfrak{X} \times \mathfrak{X} \mapsto \mathfrak{X}$ satisfying

(i) bilinearity;

(ii) antisymmetry: $[X, Y] = -[Y, X]$, in particular $[X, X] = 0$;

(iii) the Jacobi identity: $[X, [Y, Z]] + [Z, [X, Y]] + [Y, [Z, X]] = 0$

for all $X, Y, Z \in \mathfrak{X}$.
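For matrices with the commutator $[X, Y] = XY - YX$ as Lie bracket, antisymmetry and the Jacobi identity hold identically; a brute-force pure-Python spot check (the sample matrices are arbitrary):

```python
def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def add(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def comm(A, B):
    # Lie bracket [A, B] = AB - BA
    return [[mul(A, B)[i][j] - mul(B, A)[i][j] for j in range(2)] for i in range(2)]

X = [[1, 2], [3, 4]]
Y = [[0, 1], [-1, 0]]
Z = [[2, 0], [1, -1]]

# Jacobi identity: [X,[Y,Z]] + [Z,[X,Y]] + [Y,[Z,X]] = 0
jacobi = add(add(comm(X, comm(Y, Z)), comm(Z, comm(X, Y))), comm(Y, comm(Z, X)))
assert jacobi == [[0, 0], [0, 0]]

# antisymmetry: [X, Y] = -[Y, X]
assert comm(X, Y) == [[-comm(Y, X)[i][j] for j in range(2)] for i in range(2)]
```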
7.3 Some important groups
7.3.1 General linear group GL(n,C)
The general linear group GL($n$,$\mathbb{C}$) contains all non-singular (i.e., invertible;
an inverse exists) $n \times n$ matrices with complex entries. The composition
rule "$\circ$" is identified with matrix multiplication (which is associative);
the neutral element is the unit matrix $I_n = \operatorname{diag}(\underbrace{1, \ldots, 1}_{n \text{ times}})$.
7.3.2 Orthogonal group O(n)
The orthogonal group O($n$)² contains all orthogonal [i.e., $A^{-1} = A^T$]
$n \times n$ matrices. The composition rule "$\circ$" is identified with matrix
multiplication (which is associative); the neutral element is the unit matrix
$I_n = \operatorname{diag}(\underbrace{1, \ldots, 1}_{n \text{ times}})$.

² F. D. Murnaghan. The Unitary and Rotation Groups. Spartan Books, Washington, D.C., 1962.

The orthogonality condition $A^T A = I_n$ imposes $n(n+1)/2$ independent
constraints on the $n^2$ real entries; that leaves us with the liberty of
dimension $n(n-1)/2$.
7.3.3 Rotation group SO(n)

The special orthogonal group or, by another name, the rotation group
SO($n$) contains all orthogonal $n \times n$ matrices with unit determinant. SO($n$)
is a subgroup of O($n$).

The rotation group in two-dimensional configuration space, SO(2),
corresponds to planar rotations around the origin. It has dimension 1,
corresponding to one parameter $\theta$. Its elements can be written as
$$R(\theta) = \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix}. \qquad (7.6)$$
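The elements (7.6) indeed form a one-parameter Abelian group with unit determinant; a pure-Python spot check (angles and helper names are ad hoc):

```python
import math

def R(theta):
    # rotation matrix of Eq. (7.6)
    c, s = math.cos(theta), math.sin(theta)
    return [[c, s], [-s, c]]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

a, b = 0.4, 1.1
AB = mul(R(a), R(b))
# one-parameter (Abelian) group law: R(a) R(b) = R(a + b)
assert all(abs(AB[i][j] - R(a + b)[i][j]) < 1e-12 for i in range(2) for j in range(2))

# unit determinant, as required for SO(2)
A = R(a)
assert abs(A[0][0] * A[1][1] - A[0][1] * A[1][0] - 1) < 1e-12
```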
7.3.4 Unitary group U(n)
The unitary group U($n$)³ contains all unitary [i.e., $A^{-1} = A^\dagger = (\overline{A})^T$]
$n \times n$ matrices. The composition rule "$\circ$" is identified with matrix
multiplication (which is associative); the neutral element is the unit matrix
$I_n = \operatorname{diag}(\underbrace{1, \ldots, 1}_{n \text{ times}})$.

³ F. D. Murnaghan. The Unitary and Rotation Groups. Spartan Books, Washington, D.C., 1962.

Because of unitarity, the counting refers to the Hermitian generator $H$ of
$A = e^{iH}$: only half of its off-diagonal entries are independent of one another,
and its diagonal elements must be real; that leaves us
with the liberty of dimension $n^2$: $(n^2 - n)/2$ complex numbers from the
off-diagonal elements, plus $n$ reals from the diagonal, yield $n^2$ real parameters.
Note that, for instance, U(1) is the set of complex numbers $z = e^{i\theta}$ of unit
modulus, $|z|^2 = 1$. It forms an Abelian group.
7.3.5 Special unitary group SU(n)
The special unitary group SU(n) contains all unitary n ×n matrices with
unit determinant. SU(n) is a subgroup of U(n).
7.3.6 Symmetric group S(n)
The symmetric group S($n$) on a finite set of $n$ elements (or symbols) is the
group whose elements are all the permutations of the $n$ elements, and
whose group operation is the composition of such permutations. (The
symmetric group should not be confused with a symmetry group.) The
identity is the identity permutation. The permutations are bijective functions
from the set of elements onto itself. The order (number of elements) of
S($n$) is $n!$. Generalizing these groups to an infinite number of elements, $S_\infty$,
is straightforward.
7.3.7 Poincaré group
The Poincaré group is the group of isometries (that is, bijective maps
preserving distances) in space-time modelled by $\mathbb{R}^4$ endowed with a scalar
product, and thus with a norm, induced by the Minkowski metric $\eta \equiv \eta_{ij} = \operatorname{diag}(1, 1, 1, -1)$ introduced in (5.54).
It has dimension ten ($4 + 3 + 3 = 10$), associated with the ten fundamental
(distance preserving) operations from which general isometries can be
composed: (i) translations through time and any of the three dimensions
of space ($1 + 3 = 4$), (ii) rotations (by a fixed angle) around any of the three
spatial axes (3), and (iii) (Lorentz) boosts, increasing the velocity in any of the
three spatial directions of two uniformly moving bodies (3).
The rotations and Lorentz boosts form the Lorentz group.
7.4 Cayley’s representation theorem
Cayley’s theorem states that every group G can be imbedded as – equiva-
lently, is isomorphic to – a subgroup of the symmetric group; that is, it is a
imorphic with some permutation group. In particular, every finite group
G of order n can be imbedded as – equivalently, is isomorphic to – a sub-
group of the symmetric group S(n).
Stated pointedly: permutations exhaust the possible structures of (finite)
groups. The study of subgroups of the symmetric groups is no less
general than the study of all groups. No proof is given here. For a proof, see
Joseph J. Rotman. An Introduction to the Theory of Groups, volume 148 of Graduate Texts in Mathematics. Springer, New York, fourth edition, 1995. ISBN 0387942858
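As an illustration of Cayley's theorem (a sketch with our own naming, not taken from the book), the cyclic group $\mathbb{Z}_4$ under addition mod 4 is embedded into S(4) by letting each group element act on the group by left translation:

```python
n = 4  # the cyclic group Z_4 = {0,1,2,3} under addition mod 4

def to_perm(g):
    """Cayley embedding: g is represented by the permutation h -> g + h (mod n)."""
    return tuple((g + h) % n for h in range(n))

def compose(p, q):
    """Composition of permutations: first apply q, then p."""
    return tuple(p[q[i]] for i in range(n))

perms = [to_perm(g) for g in range(n)]
# The embedding is injective and respects the group operation:
injective = len(set(perms)) == n
homomorphism = all(
    to_perm((g1 + g2) % n) == compose(to_perm(g1), to_perm(g2))
    for g1 in range(n) for g2 in range(n)
)
```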
Part III:
Functional analysis
8
Brief review of complex analysis
Recall a passage of Musil's “Verwirrungen des Zöglings Törleß” 1, in which
1 Edmund Hlawka. Zum Zahlbegriff. Philosophia Naturalis, 19:413–470, 1982
German original (http://www.gutenberg.org/ebooks/34717): “In solch einer Rechnung sind am Anfang ganz solide Zahlen, die Meter oder Gewichte, oder irgend etwas anderes Greifbares darstellen können und wenigstens wirkliche Zahlen sind. Am Ende der Rechnung stehen ebensolche. Aber diese beiden hängen miteinander durch etwas zusammen, das es gar nicht gibt. Ist das nicht wie eine Brücke, von der nur Anfangs- und Endpfeiler vorhanden sind und die man dennoch so sicher überschreitet, als ob sie ganz dastünde? Für mich hat so eine Rechnung etwas Schwindliges; als ob es ein Stück des Weges weiß Gott wohin ginge. Das eigentlich Unheimliche ist mir aber die Kraft, die in solch einer Rechnung steckt und einen so festhält, daß man doch wieder richtig landet.”
the author (a mathematician educated in Vienna) states that, at the beginning
of any computation involving imaginary numbers there are “solid”
numbers which could represent something measurable, like lengths or
weights, or something else tangible, or which are at least real numbers. At the end
of the computation there are also such “solid” numbers. But the beginning
and the end of the computation are connected by something seemingly
nonexistent. Does this not appear, Musil's Zögling Törleß wonders, like a
bridge crossing an abyss with only a pier at the very beginning and
one at the very end, which could nevertheless be crossed as certainly
and securely as if the bridge existed in its entirety?
In what follows, a very brief review of complex analysis or, by another
term, function theory, will be presented. For much more detailed introductions
to complex analysis, including proofs, take, for instance, the
“classical” books 2, among a zillion of other very good ones 3. We shall
2 Eberhard Freitag and Rolf Busam. Funktionentheorie 1. Springer, Berlin, Heidelberg, fourth edition, 1993, 1995, 2000, 2006. English translation in ; E. T. Whittaker and G. N. Watson. A Course of Modern Analysis. Cambridge University Press, Cambridge, 4th edition, 1927. URL http://archive.org/details/ACourseOfModernAnalysis. Reprinted in 1996. Table errata: Math. Comp. v. 36 (1981), no. 153, p. 319; Robert E. Greene and Stephen G. Krantz. Function theory of one complex variable, volume 40 of Graduate Studies in Mathematics. American Mathematical Society, Providence, Rhode Island, third edition, 2006; Einar Hille. Analytic Function Theory. Ginn, New York, 1962. 2 Volumes; and Lars V. Ahlfors. Complex Analysis: An Introduction of the Theory of Analytic Functions of One Complex Variable. McGraw-Hill Book Co., New York, third edition, 1978
3 Klaus Jänich. Funktionentheorie. Eine Einführung. Springer, Berlin, Heidelberg, sixth edition, 2008. DOI: 10.1007/978-3-540-35015-6; and Dietmar A. Salamon. Funktionentheorie. Birkhäuser, Basel, 2012. DOI: 10.1007/978-3-0348-0169-0. URL http://dx.doi.org/10.1007/978-3-0348-0169-0. See also URL http://www.math.ethz.ch/ salamon/PREPRINTS/cxana.pdf
study complex analysis not only for its beauty, but also because it yields
very important analytical methods and tools; for instance, for the solution
of (differential) equations and the computation of definite integrals.
These methods will then be required for the computation of distributions
and Green's functions, as well as for the solution of differential equations of
mathematical physics – such as the Schrödinger equation.
One motivation for introducing imaginary numbers is the (if you perceive
it that way) “malady” that not every polynomial such as $P(x) = x^2 + 1$
has a root x – and thus not every (polynomial) equation $P(x) = x^2 + 1 = 0$
has a solution x – which is a real number. Indeed, you need the imaginary
unit, $i^2 = -1$, for a factorization $P(x) = (x + i)(x - i)$, yielding the two roots
$\pm i$, to achieve this. In that way, the introduction of imaginary numbers is
a further step towards omni-solvability. No wonder that the fundamental
theorem of algebra, stating that every non-constant polynomial with
complex coefficients has at least one complex root – and thus the total
factorizability of polynomials into linear factors – follows!
If not mentioned otherwise, it is assumed that the Riemann surface,
128 MATHEMATICAL METHODS OF THEORETICAL PHYSICS
representing a “deformed version” of the complex plane for functional
purposes, is simply connected. Simple connectedness means that the
Riemann surface is path-connected, so that every path between two
points can be continuously transformed, staying within the domain, into
any other path while preserving the two endpoints. In particular, there
are no “holes” in the Riemann surface; it is not “punctured.”
Furthermore, let i be the imaginary unit, with the property that $i^2 = -1$;
that is, i is a solution of the equation $x^2 + 1 = 0$. With the introduction of
imaginary numbers we can guarantee that all quadratic equations have
two roots (i.e., solutions).
By combining imaginary and real numbers, any complex number can be
defined to be some linear combination of the real unit number “1” with the
imaginary unit number i; that is, $z = 1 \times (\Re z) + i \times (\Im z)$, with the real-valued
factors $(\Re z)$ and $(\Im z)$, respectively. By this definition, a complex number z
can be decomposed into real numbers x, y, r and ϕ such that

$$z \stackrel{\text{def}}{=} \Re z + i \Im z = x + i y = r e^{i\varphi}, \qquad (8.1)$$
with $x = r \cos\varphi$ and $y = r \sin\varphi$, where Euler's formula

$$e^{i\varphi} = \cos\varphi + i \sin\varphi \qquad (8.2)$$

has been used. If $z = \Re z$ we call z a real number. If $z = i \Im z$ we call z a
purely imaginary number.
The modulus or absolute value of a complex number z is defined by

$$|z| \stackrel{\text{def}}{=} +\sqrt{(\Re z)^2 + (\Im z)^2}. \qquad (8.3)$$
Many rules of classical arithmetic can be carried over to complex
arithmetic 4. Note, however, that, for instance, $\sqrt{a}\sqrt{b} = \sqrt{ab}$ is only valid if at
least one factor a or b is positive; hence $-1 = i^2 = \sqrt{-1}\sqrt{-1} \neq \sqrt{(-1)^2} = 1$.
More generally, for two arbitrary numbers u and v, $\sqrt{u}\sqrt{v}$ is
not always equal to $\sqrt{uv}$. Nevertheless, $\sqrt{|u|}\,\sqrt{|v|} = \sqrt{|uv|}$.
4 Tom M. Apostol. Mathematical Analysis: A Modern Approach to Advanced Calculus. Addison-Wesley Series in Mathematics. Addison-Wesley, Reading, MA, second edition, 1974. ISBN 0-201-00288-4; and Eberhard Freitag and Rolf Busam. Funktionentheorie 1. Springer, Berlin, Heidelberg, fourth edition, 1993, 1995, 2000, 2006. English translation in
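This caveat about the principal square root is easy to observe numerically; the following small check is ours, not part of the book's exposition.

```python
import cmath

# Principal branch: cmath.sqrt(-1) = i (root with nonnegative real part).
lhs = cmath.sqrt(-1) * cmath.sqrt(-1)    # i^2 = -1
rhs = cmath.sqrt((-1) * (-1))            # sqrt(1) = 1, so lhs != rhs
# The rule with absolute values does hold:
abs_ok = abs(cmath.sqrt(abs(-4)) * cmath.sqrt(abs(-9))
             - cmath.sqrt(abs((-4) * (-9)))) < 1e-12
```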
For many mathematicians Euler's identity

$$e^{i\pi} = -1, \quad \text{or} \quad e^{i\pi} + 1 = 0, \qquad (8.4)$$

is the “most beautiful” theorem 5.
5 David Wells. Which is the most beautiful? The Mathematical Intelligencer, 10:30–31, 1988. ISSN 0343-6993. DOI: 10.1007/BF03023741. URL http://dx.doi.org/10.1007/BF03023741
Euler's formula (8.2) can be used to derive de Moivre's formula for
integer n (for non-integer n the formula is multi-valued for different
arguments ϕ):

$$e^{in\varphi} = (\cos\varphi + i \sin\varphi)^n = \cos(n\varphi) + i \sin(n\varphi). \qquad (8.5)$$
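De Moivre's formula is easily verified numerically for an arbitrarily chosen angle and integer exponent (our check, not the book's):

```python
import cmath

phi, n = 0.73, 5   # arbitrary angle and integer exponent (our choice)
lhs = (cmath.cos(phi) + 1j * cmath.sin(phi)) ** n
rhs = cmath.cos(n * phi) + 1j * cmath.sin(n * phi)
deviation = abs(lhs - rhs)  # should vanish up to floating-point rounding
```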
It is quite suggestive to consider the complex numbers z, which are linear
combinations of the real and the imaginary unit, in the complex plane
$\mathbb{C} = \mathbb{R} \times \mathbb{R}$ as a geometric representation of complex numbers. Thereby,
BRIEF REVIEW OF COMPLEX ANALYSIS 129
the real and the imaginary unit are identified with the (orthonormal) basis
vectors of the standard (Cartesian) basis; that is, with the tuples

$$1 \equiv (1,0), \qquad i \equiv (0,1). \qquad (8.6)$$

The addition and multiplication of two complex numbers represented by
(x, y) and (u, v) with $x, y, u, v \in \mathbb{R}$ are then defined by

$$(x,y) + (u,v) = (x+u,\, y+v), \qquad (x,y)\cdot(u,v) = (xu - yv,\, xv + yu), \qquad (8.7)$$

and the neutral elements for addition and multiplication are (0,0) and
(1,0), respectively.
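The pair arithmetic of Eq. (8.7) can be written out directly; this is a minimal sketch of ours (the names cadd and cmul are not from the text):

```python
def cadd(p, q):
    """(x,y) + (u,v) = (x+u, y+v), the addition rule of Eq. (8.7)."""
    (x, y), (u, v) = p, q
    return (x + u, y + v)

def cmul(p, q):
    """(x,y)*(u,v) = (xu - yv, xv + yu), the multiplication rule of Eq. (8.7)."""
    (x, y), (u, v) = p, q
    return (x * u - y * v, x * v + y * u)

ONE, I = (1.0, 0.0), (0.0, 1.0)   # the neutral element and the imaginary unit
i_squared = cmul(I, I)            # represents i^2 = -1, i.e. the tuple (-1.0, 0.0)
```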
We shall also consider the extended plane $\bar{\mathbb{C}} = \mathbb{C} \cup \{\infty\}$ consisting of the
entire complex plane $\mathbb{C}$ together with the point “∞” representing infinity.
Thereby, ∞ is introduced as an ideal element, completing the one-to-one
(bijective) mapping $w = \frac{1}{z}$, which otherwise would have no image at z = 0,
and no pre-image (argument) at w = 0.
8.1 Differentiable, holomorphic (analytic) function
Consider the function f (z) on the domain G ⊂ Domain(f ).
f is called differentiable at the point $z_0$ if the differential quotient

$$\left.\frac{df}{dz}\right|_{z_0} = \left. f'(z)\right|_{z_0} = \left.\frac{\partial f}{\partial x}\right|_{z_0} = \left.\frac{1}{i}\frac{\partial f}{\partial y}\right|_{z_0} \qquad (8.8)$$

exists.
If f is differentiable in the domain G it is called holomorphic or, used
synonymously, analytic in the domain G.
8.2 Cauchy-Riemann equations
The function $f(z) = u(z) + i v(z)$ (where u and v are real-valued functions)
is analytic or holomorphic if and only if (with subscripts denoting partial
derivatives, $a_b = \partial a/\partial b$)

$$u_x = v_y, \qquad u_y = -v_x. \qquad (8.9)$$

For a proof, differentiate along the real, and then along the imaginary,
axis, taking

$$f'(z) = \lim_{x\to 0}\frac{f(z+x)-f(z)}{x} = \frac{\partial f}{\partial x} = \frac{\partial u}{\partial x} + i\frac{\partial v}{\partial x},$$
$$\text{and}\quad f'(z) = \lim_{y\to 0}\frac{f(z+iy)-f(z)}{iy} = \frac{\partial f}{\partial (iy)} = -i\frac{\partial f}{\partial y} = -i\frac{\partial u}{\partial y} + \frac{\partial v}{\partial y}. \qquad (8.10)$$

For f to be analytic, both partial derivatives have to be identical, and thus
$\frac{\partial f}{\partial x} = \frac{\partial f}{\partial (iy)}$, or

$$\frac{\partial u}{\partial x} + i\frac{\partial v}{\partial x} = -i\frac{\partial u}{\partial y} + \frac{\partial v}{\partial y}. \qquad (8.11)$$
By comparing the real and imaginary parts of this equation, one obtains
the two real Cauchy-Riemann equations

$$\frac{\partial u}{\partial x} = \frac{\partial v}{\partial y}, \qquad \frac{\partial v}{\partial x} = -\frac{\partial u}{\partial y}. \qquad (8.12)$$
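The Cauchy-Riemann equations can be checked numerically for a concrete analytic function, say $f(z) = z^2$ with $u = x^2 - y^2$ and $v = 2xy$; this finite-difference sketch is our illustration, not part of the original text.

```python
h = 1e-6            # finite-difference step
x0, y0 = 0.8, -0.3  # an arbitrary test point (our choice)

def u(x, y):
    return x * x - y * y   # real part of z^2

def v(x, y):
    return 2 * x * y       # imaginary part of z^2

# Central-difference approximations of the partial derivatives:
ux = (u(x0 + h, y0) - u(x0 - h, y0)) / (2 * h)
uy = (u(x0, y0 + h) - u(x0, y0 - h)) / (2 * h)
vx = (v(x0 + h, y0) - v(x0 - h, y0)) / (2 * h)
vy = (v(x0, y0 + h) - v(x0, y0 - h)) / (2 * h)
# Eq. (8.12) requires ux = vy and uy = -vx.
```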
8.3 Definition of an analytical function
If f is analytic in G, all derivatives of f exist, and all mixed derivatives are
independent of the order of differentiation. Then the Cauchy-Riemann
equations imply that

$$\frac{\partial}{\partial x}\left(\frac{\partial u}{\partial x}\right) = \frac{\partial}{\partial x}\left(\frac{\partial v}{\partial y}\right) = \frac{\partial}{\partial y}\left(\frac{\partial v}{\partial x}\right) = -\frac{\partial}{\partial y}\left(\frac{\partial u}{\partial y}\right),$$
$$\text{and}\quad \frac{\partial}{\partial y}\left(\frac{\partial v}{\partial y}\right) = \frac{\partial}{\partial y}\left(\frac{\partial u}{\partial x}\right) = \frac{\partial}{\partial x}\left(\frac{\partial u}{\partial y}\right) = -\frac{\partial}{\partial x}\left(\frac{\partial v}{\partial x}\right), \qquad (8.13)$$

and thus

$$\left(\frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2}\right) u = 0, \quad \text{and} \quad \left(\frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2}\right) v = 0. \qquad (8.14)$$
If $f = u + iv$ is analytic in G, then the lines of constant u and v are
orthogonal.

The tangential vectors of the lines of constant u and v in the two-dimensional
complex plane are defined by the two-dimensional nabla
operator $\nabla u(x,y)$ and $\nabla v(x,y)$. Since, by the Cauchy-Riemann equations,
$u_x = v_y$ and $u_y = -v_x$,

$$\nabla u(x,y)\cdot\nabla v(x,y) = \begin{pmatrix} u_x \\ u_y \end{pmatrix}\cdot\begin{pmatrix} v_x \\ v_y \end{pmatrix} = u_x v_x + u_y v_y = u_x v_x + (-v_x) u_x = 0, \qquad (8.15)$$

these tangential vectors are mutually orthogonal (normal).
f is angle (shape) preserving, that is, conformal, if and only if it is holomorphic
and its derivative is everywhere non-zero.
Consider an analytic function f and an arbitrary path C in the complex
plane of the arguments, parameterized by z(t), $t \in \mathbb{R}$. The image of C
associated with f is $f(C) = C' : f(z(t))$, $t \in \mathbb{R}$.

The tangent vector of C′ at t = 0 and $z_0 = z(0)$ is

$$\left.\frac{d}{dt} f(z(t))\right|_{t=0} = \left.\frac{d}{dz} f(z)\right|_{z_0} \left.\frac{d}{dt} z(t)\right|_{t=0} = \lambda_0 e^{i\varphi_0} \left.\frac{d}{dt} z(t)\right|_{t=0}. \qquad (8.16)$$

Note that the first term $\left.\frac{d}{dz} f(z)\right|_{z_0}$ is independent of the curve C and only
depends on $z_0$. Therefore, it can be written as the product of a squeeze
(stretch) $\lambda_0$ and a rotation $e^{i\varphi_0}$. This is independent of the curve; hence
two curves $C_1$ and $C_2$ passing through $z_0$ yield the same transformation of
the image, $\lambda_0 e^{i\varphi_0}$.
8.4 Cauchy’s integral theorem
If f is analytic on G and on its borders ∂G, then any closed line integral of f
vanishes:

$$\oint_{\partial G} f(z)\, dz = 0. \qquad (8.17)$$

No proof is given here.

In particular, $\oint_{C \subset \partial G} f(z)\, dz$ is independent of the particular curve and
only depends on the initial and end points.

For a proof, subtract two line integrals which follow arbitrary paths
$C_1$ and $C_2$ to a common initial and end point, and which have the same
integral kernel. Then reverse the integration direction of one of the line
integrals. According to Cauchy's integral theorem the resulting integral
over the closed loop has to vanish.

Often it is useful to parameterize a contour integral by some form of

$$\int_C f(z)\, dz = \int_a^b f(z(t))\,\frac{dz(t)}{dt}\, dt. \qquad (8.18)$$
Let $f(z) = 1/z$ and $C : z(\varphi) = R e^{i\varphi}$, with R > 0 and $-\pi < \varphi \le \pi$. Then

$$\oint_{|z|=R} f(z)\, dz = \int_{-\pi}^{\pi} f(z(\varphi))\,\frac{dz(\varphi)}{d\varphi}\, d\varphi = \int_{-\pi}^{\pi} \frac{1}{R e^{i\varphi}}\, R\, i e^{i\varphi}\, d\varphi = \int_{-\pi}^{\pi} i\, d\varphi = 2\pi i \qquad (8.19)$$

is independent of R.
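Equation (8.19) can be reproduced by a simple Riemann sum over the parameterization; this numerical sketch is ours, and the radius is an arbitrary choice (the result is independent of it):

```python
import cmath
import math

R, N = 2.5, 10000   # radius (arbitrary) and number of sample points
total = 0j
for k in range(N):
    phi = -math.pi + 2 * math.pi * k / N
    z = R * cmath.exp(1j * phi)
    dz = 1j * z * (2 * math.pi / N)   # dz = i R e^{i phi} dphi
    total += dz / z                   # integrand 1/z; each term equals i*2*pi/N
# total approximates the contour integral, namely 2*pi*i.
```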
8.5 Cauchy’s integral formula
If f is analytic on G and on its borders ∂G, then

$$f(z_0) = \frac{1}{2\pi i} \oint_{\partial G} \frac{f(z)}{z - z_0}\, dz. \qquad (8.20)$$
No proof is given here.
Note that, because of Cauchy's integral formula, analytic functions have
an integral representation. This might appear not very exciting; alas, it
has far-reaching consequences: because analytic functions have an integral
representation, their higher derivatives also have integral representations.
As a result, if a function has one complex derivative, then it has
infinitely many complex derivatives. This statement is made formally
precise by the generalized Cauchy integral formula or, by another term,
Cauchy's differentiation formula, which states that if f is analytic on G and
on its borders ∂G, then

$$f^{(n)}(z_0) = \frac{n!}{2\pi i} \oint_{\partial G} \frac{f(z)}{(z - z_0)^{n+1}}\, dz. \qquad (8.21)$$
No proof is given here.
Cauchy’s integral formula presents a powerful method to compute
integrals. Consider the following examples.
(i) First, let us calculate

$$\oint_{|z|=3} \frac{3z+2}{z(z+1)^3}\, dz.$$

The kernel has two poles, at z = 0 and z = −1, which are both inside the
domain of the contour defined by |z| = 3. By using Cauchy's integral
formula we obtain, for “small” ε,

$$\begin{aligned}
\oint_{|z|=3} \frac{3z+2}{z(z+1)^3}\, dz
&= \oint_{|z|=\varepsilon} \frac{3z+2}{z(z+1)^3}\, dz + \oint_{|z+1|=\varepsilon} \frac{3z+2}{z(z+1)^3}\, dz \\
&= \oint_{|z|=\varepsilon} \frac{3z+2}{(z+1)^3}\,\frac{1}{z}\, dz + \oint_{|z+1|=\varepsilon} \frac{3z+2}{z}\,\frac{1}{(z+1)^3}\, dz \\
&= \frac{2\pi i}{0!}\left.\frac{d^0}{dz^0}\,\frac{3z+2}{(z+1)^3}\right|_{z=0} + \frac{2\pi i}{2!}\left.\frac{d^2}{dz^2}\,\frac{3z+2}{z}\right|_{z=-1} \\
&= 4\pi i - 4\pi i = 0. \qquad (8.22)
\end{aligned}$$
(ii) Consider

$$\oint_{|z|=3} \frac{e^{2z}}{(z+1)^4}\, dz = \frac{2\pi i}{3!}\,\frac{3!}{2\pi i}\oint_{|z|=3} \frac{e^{2z}}{(z-(-1))^{3+1}}\, dz = \frac{2\pi i}{3!}\left.\frac{d^3}{dz^3}\, e^{2z}\right|_{z=-1} = \frac{2\pi i}{3!}\, 2^3\, \left. e^{2z}\right|_{z=-1} = \frac{8\pi i\, e^{-2}}{3}. \qquad (8.23)$$
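Both contour integrals, (8.22) and (8.23), can be checked by direct numerical integration over the circle |z| = 3; this is a sketch of ours (the trapezoidal rule on a periodic analytic integrand converges extremely fast):

```python
import cmath
import math

def contour_integral(f, radius, n=4096):
    """Riemann-sum approximation of the counterclockwise contour integral
    of f over the circle |z| = radius centered at the origin."""
    total = 0j
    for k in range(n):
        phi = 2 * math.pi * k / n
        z = radius * cmath.exp(1j * phi)
        dz = 1j * z * (2 * math.pi / n)
        total += f(z) * dz
    return total

I1 = contour_integral(lambda z: (3 * z + 2) / (z * (z + 1) ** 3), 3.0)  # Eq. (8.22): 0
I2 = contour_integral(lambda z: cmath.exp(2 * z) / (z + 1) ** 4, 3.0)   # Eq. (8.23)
expected_I2 = 8j * math.pi * math.exp(-2) / 3
```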
Suppose g(z) is a function with a pole of order n at the point $z_0$; that is,

$$g(z) = \frac{f(z)}{(z - z_0)^n}, \qquad (8.24)$$

where f (z) is an analytic function. Then,

$$\oint_{\partial G} g(z)\, dz = \frac{2\pi i}{(n-1)!}\, f^{(n-1)}(z_0). \qquad (8.25)$$
8.6 Series representation of complex differentiable functions
As a consequence of Cauchy's (generalized) integral formula, analytic
functions have power series representations.

For the sake of a proof, we shall recast the denominator $z - z_0$ in
Cauchy's integral formula (8.20) as a geometric series as follows (we shall
assume that $|z_0 - a| < |z - a|$):

$$\frac{1}{z - z_0} = \frac{1}{(z-a) - (z_0-a)} = \frac{1}{z-a}\left[\frac{1}{1 - \frac{z_0-a}{z-a}}\right] = \frac{1}{z-a}\left[\sum_{n=0}^{\infty} \frac{(z_0-a)^n}{(z-a)^n}\right] = \sum_{n=0}^{\infty} \frac{(z_0-a)^n}{(z-a)^{n+1}}. \qquad (8.26)$$

Substituting this into Cauchy's integral formula (8.20) and using Cauchy's
generalized integral formula (8.21) yields an expansion of the analytical
function f around $z_0$ by a power series:

$$f(z_0) = \frac{1}{2\pi i}\oint_{\partial G} \frac{f(z)}{z-z_0}\, dz = \frac{1}{2\pi i}\oint_{\partial G} f(z) \sum_{n=0}^{\infty} \frac{(z_0-a)^n}{(z-a)^{n+1}}\, dz = \sum_{n=0}^{\infty} (z_0-a)^n\, \frac{1}{2\pi i}\oint_{\partial G} \frac{f(z)}{(z-a)^{n+1}}\, dz = \sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!}\,(z_0-a)^n. \qquad (8.27)$$
8.7 Laurent series
Every function f which is analytic in a concentric region $R_1 < |z - z_0| < R_2$
can in this region be uniquely written as a Laurent series

$$f(z) = \sum_{k=-\infty}^{\infty} a_k (z - z_0)^k. \qquad (8.28)$$

The coefficients $a_k$ are (the closed contour C must lie in the concentric
region)

$$a_k = \frac{1}{2\pi i}\oint_C (\chi - z_0)^{-k-1} f(\chi)\, d\chi. \qquad (8.29)$$

The coefficient

$$\operatorname{Res}(f(z_0)) = a_{-1} = \frac{1}{2\pi i}\oint_C f(\chi)\, d\chi \qquad (8.30)$$

is called the residue, denoted by “Res.”
For a proof, as in Eq. (8.26) we shall recast $(a - b)^{-1}$ for |a| > |b| as a
geometric series

$$\frac{1}{a-b} = \frac{1}{a}\left(\frac{1}{1-\frac{b}{a}}\right) = \frac{1}{a}\left(\sum_{n=0}^{\infty}\frac{b^n}{a^n}\right) = \sum_{n=0}^{\infty}\frac{b^n}{a^{n+1}} \;\big[\text{substitution } n+1 \to -k,\; n \to -k-1\big]\; = \sum_{k=-1}^{-\infty}\frac{a^k}{b^{k+1}}, \qquad (8.31)$$

and, for |a| < |b|,

$$\frac{1}{a-b} = -\frac{1}{b-a} = -\sum_{n=0}^{\infty}\frac{a^n}{b^{n+1}} \;\big[\text{substitution } n+1 \to -k,\; n \to -k-1\big]\; = -\sum_{k=-1}^{-\infty}\frac{b^k}{a^{k+1}}. \qquad (8.32)$$

Furthermore, since $a + b = a - (-b)$, we obtain, for |a| > |b|,

$$\frac{1}{a+b} = \sum_{n=0}^{\infty}(-1)^n\frac{b^n}{a^{n+1}} = \sum_{k=-1}^{-\infty}(-1)^{-k-1}\frac{a^k}{b^{k+1}} = -\sum_{k=-1}^{-\infty}(-1)^{k}\frac{a^k}{b^{k+1}}, \qquad (8.33)$$

and, for |a| < |b|,

$$\frac{1}{a+b} = -\sum_{n=0}^{\infty}(-1)^{n+1}\frac{a^n}{b^{n+1}} = \sum_{n=0}^{\infty}(-1)^{n}\frac{a^n}{b^{n+1}} = \sum_{k=-1}^{-\infty}(-1)^{-k-1}\frac{b^k}{a^{k+1}} = -\sum_{k=-1}^{-\infty}(-1)^{k}\frac{b^k}{a^{k+1}}. \qquad (8.34)$$
Suppose that some function f (z) is analytic in an annulus bounded by
the radii $r_1$ and $r_2 > r_1$. By substituting this into Cauchy's integral formula
(8.20) for the annulus (note that the orientations of the two boundary
circles with respect to the annulus are opposite, rendering a relative factor
“−1”) and using Cauchy's generalized integral formula (8.21), one obtains
an expansion of the analytical function f around $z_0$ by the Laurent series,
for a point a in the annulus; that is, for the path around the circle with
radius $r_1$, $|z - a| < |z_0 - a|$; likewise, for the path around the circle with
radius $r_2 > r_1$, $|z - a| > |z_0 - a|$:

$$\begin{aligned}
f(z_0) &= \frac{1}{2\pi i}\oint_{r_1} \frac{f(z)}{z-z_0}\, dz - \frac{1}{2\pi i}\oint_{r_2} \frac{f(z)}{z-z_0}\, dz \\
&= \frac{1}{2\pi i}\left[\oint_{r_1} f(z)\sum_{n=0}^{\infty}\frac{(z_0-a)^n}{(z-a)^{n+1}}\, dz + \oint_{r_2} f(z)\sum_{n=-1}^{-\infty}\frac{(z_0-a)^n}{(z-a)^{n+1}}\, dz\right] \\
&= \frac{1}{2\pi i}\left[\sum_{n=0}^{\infty}(z_0-a)^n \oint_{r_1} \frac{f(z)}{(z-a)^{n+1}}\, dz + \sum_{n=-1}^{-\infty}(z_0-a)^n \oint_{r_2} \frac{f(z)}{(z-a)^{n+1}}\, dz\right] \\
&= \sum_{n=-\infty}^{\infty}(z_0-a)^n \left[\frac{1}{2\pi i}\oint_{r_1 \le r \le r_2} \frac{f(z)}{(z-a)^{n+1}}\, dz\right]. \qquad (8.35)
\end{aligned}$$
Suppose that g(z) is a function with a pole of order n at the point $z_0$;
that is, $g(z) = h(z)/(z - z_0)^n$, where h(z) is an analytic function. Then the
terms $k \le -(n+1)$ vanish in the Laurent series. This follows from Cauchy's
integral formula

$$a_k = \frac{1}{2\pi i}\oint_C (\chi - z_0)^{-k-n-1}\, h(\chi)\, d\chi = 0 \qquad (8.36)$$

for $-k - n - 1 \ge 0$.

Note that, if f has a simple pole (pole of order 1) at $z_0$, then it can be
rewritten as $f(z) = g(z)/(z - z_0)$ for some analytic function $g(z) = (z - z_0) f(z)$
that remains after the singularity has been “split” from f. By Cauchy's
integral formula (8.20), the residue can be rewritten as

$$a_{-1} = \frac{1}{2\pi i}\oint_{\partial G} \frac{g(z)}{z - z_0}\, dz = g(z_0). \qquad (8.37)$$

For poles of higher order, the generalized Cauchy integral formula (8.21)
can be used.
8.8 Residue theorem
Suppose f is analytic on a simply connected open subset G with the
exception of finitely many (or denumerably many) points $z_i$. Then,

$$\oint_{\partial G} f(z)\, dz = 2\pi i \sum_{z_i} \operatorname{Res} f(z_i). \qquad (8.38)$$
No proof is given here.
The residue theorem presents a powerful tool for calculating integrals,
both real and complex. Let us first mention a rather general case of a
situation often encountered. Suppose we are interested in the integral

$$I = \int_{-\infty}^{\infty} R(x)\, dx$$

with rational kernel R; that is, $R(x) = P(x)/Q(x)$, where P(x) and Q(x)
are polynomials (or can at least be bounded by a polynomial) with no
common root (and therefore no common factor). Suppose further that the
degrees of the polynomials satisfy

$$\deg P(x) \le \deg Q(x) - 2.$$

This condition is needed to ensure that the additional upper or lower path
we want to add when completing the contour does not contribute; that is,
it vanishes.

Now first let us analytically continue R(x) to the complex plane R(z);
that is,

$$I = \int_{-\infty}^{\infty} R(x)\, dx = \int_{-\infty}^{\infty} R(z)\, dz.$$

Next let us close the contour by adding a (vanishing) path integral

$$\int_{\curvearrowleft} R(z)\, dz = 0$$

in the upper (lower) complex half-plane:

$$I = \int_{-\infty}^{\infty} R(z)\, dz + \int_{\curvearrowleft} R(z)\, dz = \oint R(z)\, dz.$$

The added integral vanishes because it can be approximated by

$$\left|\int_{\curvearrowleft} R(z)\, dz\right| \le \lim_{r\to\infty}\left(\frac{\text{const.}}{r^2}\,\pi r\right) = 0.$$

With the contour closed, the residue theorem can be applied for an
evaluation of I; that is,

$$I = 2\pi i \sum_{z_i} \operatorname{Res} R(z_i)$$

for all singularities $z_i$ in the region enclosed by the closed contour (the
real axis together with the added semicircle).
Let us consider some examples.
(i) Consider

$$I = \int_{-\infty}^{\infty} \frac{dx}{x^2+1}.$$

The analytic continuation of the kernel and the addition of a vanishing
semicircle “far away,” closing the integration path in the upper
complex half-plane of z, yields

$$\begin{aligned}
I &= \int_{-\infty}^{\infty} \frac{dx}{x^2+1} = \int_{-\infty}^{\infty} \frac{dz}{z^2+1} + \int_{\curvearrowleft} \frac{dz}{z^2+1} \\
&= \int_{-\infty}^{\infty} \frac{dz}{(z+i)(z-i)} + \int_{\curvearrowleft} \frac{dz}{(z+i)(z-i)} \\
&= \oint \frac{1}{z-i}\, f(z)\, dz \quad \text{with } f(z) = \frac{1}{z+i} \\
&= 2\pi i \left.\operatorname{Res}\left(\frac{1}{(z+i)(z-i)}\right)\right|_{z=+i} = 2\pi i\, f(+i) = 2\pi i\,\frac{1}{2i} = \pi. \qquad (8.39)
\end{aligned}$$
Here, Eq. (8.37) has been used. Closing the integration path in the lower
complex half-plane of z yields (note that in this case the contour integral
is negative because of the path orientation)

$$\begin{aligned}
I &= \int_{-\infty}^{\infty} \frac{dx}{x^2+1} = \int_{-\infty}^{\infty} \frac{dz}{z^2+1} + \int_{\text{lower path}} \frac{dz}{z^2+1} \\
&= \int_{-\infty}^{\infty} \frac{dz}{(z+i)(z-i)} + \int_{\text{lower path}} \frac{dz}{(z+i)(z-i)} \\
&= \oint \frac{1}{z+i}\, f(z)\, dz \quad \text{with } f(z) = \frac{1}{z-i} \\
&= -2\pi i \left.\operatorname{Res}\left(\frac{1}{(z+i)(z-i)}\right)\right|_{z=-i} = -2\pi i\, f(-i) = -2\pi i\,\frac{1}{-2i} = \pi. \qquad (8.40)
\end{aligned}$$
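The residue result I = π can be cross-checked against a brute-force real integration; this midpoint-rule sketch is ours, and the finite cutoff T introduces a small tail error of order 2/T:

```python
import math

T, N = 1000.0, 200000      # integration cutoff and number of midpoint samples
h = 2 * T / N
# Midpoint rule for the integral of 1/(x^2+1) over [-T, T]:
I_num = sum(h / ((-T + (k + 0.5) * h) ** 2 + 1) for k in range(N))
# I_num should approximate pi, up to the neglected tails (~ 2/T = 0.002).
```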
(ii) Consider

$$F(p) = \int_{-\infty}^{\infty} \frac{e^{ipx}}{x^2+a^2}\, dx$$

with a ≠ 0.

The analytic continuation of the kernel yields

$$F(p) = \int_{-\infty}^{\infty} \frac{e^{ipz}}{z^2+a^2}\, dz = \int_{-\infty}^{\infty} \frac{e^{ipz}}{(z-ia)(z+ia)}\, dz.$$

Suppose first that p > 0. Then, if $z = x + iy$, $e^{ipz} = e^{ipx} e^{-py} \to 0$ for
$z \to \infty$ in the upper half-plane. Hence, we can close the contour in the
upper half-plane and obtain F(p) with the help of the residue theorem.
If a > 0, only the pole at z = +ia is enclosed in the contour; thus we
obtain

$$F(p) = 2\pi i \left.\operatorname{Res}\frac{e^{ipz}}{z+ia}\right|_{z=+ia} = 2\pi i\,\frac{e^{i^2 pa}}{2ia} = \frac{\pi}{a}\, e^{-pa}. \qquad (8.41)$$

If a < 0, only the pole at z = −ia is enclosed in the contour; thus we
obtain

$$F(p) = 2\pi i \left.\operatorname{Res}\frac{e^{ipz}}{z-ia}\right|_{z=-ia} = 2\pi i\,\frac{e^{-i^2 pa}}{-2ia} = \frac{\pi}{-a}\, e^{pa}. \qquad (8.42)$$

Hence, for a ≠ 0,

$$F(p) = \frac{\pi}{|a|}\, e^{-|pa|}. \qquad (8.43)$$

For p < 0 a very similar consideration, taking the lower path for
continuation – and thus acquiring a minus sign because of the “clockwise”
orientation of the path as compared to its interior – yields

$$F(p) = \frac{\pi}{|a|}\, e^{-|pa|}. \qquad (8.44)$$
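A direct numerical check of Eq. (8.43) is straightforward (our sketch, not the book's): since the sine part of $e^{ipx}$ is odd against the even kernel, only the cosine part contributes.

```python
import math

def F(p, a, T=200.0, N=200000):
    """Midpoint-rule approximation of the integral of e^{ipx}/(x^2+a^2) over
    [-T, T]; the imaginary (sine) part cancels by symmetry, so only the
    real (cosine) part is summed."""
    h = 2 * T / N
    s = 0.0
    for k in range(N):
        x = -T + (k + 0.5) * h
        s += math.cos(p * x) / (x * x + a * a) * h
    return s

val = F(1.0, 2.0)                        # p = 1, a = 2 (arbitrary test values)
exact = math.pi / 2.0 * math.exp(-2.0)   # (pi/|a|) e^{-|pa|}
```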
(iii) Not all singularities are “nice” poles. Consider

$$\oint_{|z|=1} e^{\frac{1}{z}}\, dz;$$

that is, let $f(z) = e^{\frac{1}{z}}$ and $C : z(\varphi) = R e^{i\varphi}$, with R = 1 and $-\pi < \varphi \le \pi$.
This function is singular only at the origin z = 0, but this is an essential
singularity, near which the function exhibits extreme behavior. It can
be expanded into a Laurent series

$$f(z) = e^{\frac{1}{z}} = \sum_{l=0}^{\infty}\frac{1}{l!}\left(\frac{1}{z}\right)^{l}$$

around this singularity. In such a case the residue can be found only by
using the Laurent series of f (z); that is, by reading off the coefficient of
its 1/z term. Hence, $\left.\operatorname{Res}\left(e^{\frac{1}{z}}\right)\right|_{z=0}$ is the coefficient 1 of the 1/z term. The
residue is not, with $z = e^{i\varphi}$,

$$\begin{aligned}
a_{-1} = \left.\operatorname{Res}\left(e^{\frac{1}{z}}\right)\right|_{z=0} \neq\;& \frac{1}{2\pi i}\oint_C e^{\frac{1}{z}}\, dz
= \frac{1}{2\pi i}\int_{-\pi}^{\pi} e^{\frac{1}{e^{i\varphi}}}\,\frac{dz(\varphi)}{d\varphi}\, d\varphi
= \frac{1}{2\pi i}\int_{-\pi}^{\pi} e^{\frac{1}{e^{i\varphi}}}\, i e^{i\varphi}\, d\varphi \\
=\;& \frac{1}{2\pi}\int_{-\pi}^{\pi} e^{-e^{i\varphi}}\, e^{i\varphi}\, d\varphi
= \frac{1}{2\pi}\int_{-\pi}^{\pi} e^{-e^{i\varphi}+i\varphi}\, d\varphi
= \frac{1}{2\pi}\int_{-\pi}^{\pi} i\,\frac{d}{d\varphi}\, e^{-e^{i\varphi}}\, d\varphi
= \left.\frac{i}{2\pi}\, e^{-e^{i\varphi}}\right|_{-\pi}^{\pi} = 0. \qquad (8.45)
\end{aligned}$$
Thus, by the residue theorem,

$$\oint_{|z|=1} e^{\frac{1}{z}}\, dz = 2\pi i \left.\operatorname{Res}\left(e^{\frac{1}{z}}\right)\right|_{z=0} = 2\pi i. \qquad (8.46)$$

For $f(z) = e^{-\frac{1}{z}}$, the same argument yields $\left.\operatorname{Res}\left(e^{-\frac{1}{z}}\right)\right|_{z=0} = -1$ and thus
$\oint_{|z|=1} e^{-\frac{1}{z}}\, dz = -2\pi i$.
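The value $\oint_{|z|=1} e^{1/z}\, dz = 2\pi i$ from (8.46) is confirmed by direct numerical integration around the unit circle (a sketch of ours):

```python
import cmath
import math

N = 2000
total = 0j
for k in range(N):
    phi = -math.pi + 2 * math.pi * k / N
    z = cmath.exp(1j * phi)              # the contour |z| = 1
    dz = 1j * z * (2 * math.pi / N)
    total += cmath.exp(1 / z) * dz       # essential singularity sits at z = 0
# total approximates 2*pi*i, matching the residue a_{-1} = 1.
```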
8.9 Multi-valued relationships, branch points and branch cuts

Suppose that the Riemann surface of a function is not simply connected.
Suppose further that f is a multi-valued function (or multifunction).
An argument z of the function f is called a branch point if there is a closed
curve $C_z$ around z whose image $f(C_z)$ is an open curve. That is, the
multifunction f is discontinuous in z. Intuitively speaking, branch points
are the points where the various sheets of a multifunction come together.
A branch cut is a curve (with ends possibly open, closed, or half-open)
in the complex plane across which an analytic multifunction is
discontinuous. Branch cuts are often taken as lines.
8.10 Riemann surface
Suppose f (z) is a multifunction. Then the various z-surfaces on which f (z)
is uniquely defined, together with their connections through branch points
and branch cuts, constitute the Riemann surface of f. The required leaves
are called Riemann sheets.
A point z of the function f (z) is called a branch point of order n if
through it and through the associated cut(s) n + 1 Riemann sheets are
connected.
8.11 Some special functional classes
The requirement that a function is holomorphic (analytic, differentiable)
puts some stringent conditions on its type, form, and behaviour. For
instance, let $z_0 \in G$ be the limit of a sequence $z_n \in G$, $z_n \neq z_0$. Then it can
be shown that, if two analytic functions f and g on the domain G coincide
in the points $z_n$, then they coincide on the entire domain G.
8.11.1 Entire function
A function is said to be an entire function if it is defined and differentiable
(holomorphic, analytic) in the entire finite complex plane $\mathbb{C}$.

An entire function may be either a rational function $f(z) = P(z)/Q(z)$,
which can be written as the ratio of two polynomial functions P(z) and
Q(z), or it may be a transcendental function such as $e^z$ or sin z.

The Weierstrass factorization theorem states that an entire function can
be represented by a (possibly infinite 6) product involving its zeroes [i.e.,
the points $z_k$ at which the function vanishes, $f(z_k) = 0$]. For example (for a
proof, see Eq. (6.2) of 7),

$$\sin z = z \prod_{k=1}^{\infty}\left[1 - \left(\frac{z}{\pi k}\right)^2\right]. \qquad (8.47)$$

6 Theodore W. Gamelin. Complex Analysis. Springer, New York, NY, 2001
7 J. B. Conway. Functions of Complex Variables. Volume I. Springer, New York, NY, 1973
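The product formula (8.47) can be sampled numerically; this sketch is ours, and note that the convergence of the partial products is slow (the truncation error is of order $z^2/(\pi^2 K)$):

```python
import math

z, K = 1.234, 100000   # an arbitrary real sample point and truncation index
partial = z
for k in range(1, K + 1):
    # k-th factor of the Weierstrass product for sin z:
    partial *= 1.0 - (z / (math.pi * k)) ** 2
# partial approximates sin(z); the relative truncation error is ~ z^2/(pi^2 K).
```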
8.11.2 Liouville’s theorem for bounded entire function
Liouville's theorem states that a bounded entire function – that is, one
with $|f(z)| < C$ for some fixed C and all $z \in \mathbb{C}$ – which is defined at infinity
is a constant. Conversely, a nonconstant entire function cannot be
bounded. It may (wrongly) appear that sin z is nonconstant and bounded.
However, it is only bounded on the real axis; indeed, $|\sin iy| = (1/2)(e^y - e^{-y}) \to \infty$
for $y \to \infty$.

For a proof, consider the integral representation of the derivative $f'(z)$
of some bounded entire function $|f(z)| < C$ (suppose the bound is C),
obtained through Cauchy's integral formula (8.21), taken along a circular
path with arbitrarily large radius r of length 2πr, in the limit of infinite
radius; that is,

$$\left|f'(z_0)\right| = \left|\frac{1}{2\pi i}\oint_{\partial G}\frac{f(z)}{(z-z_0)^2}\, dz\right| \le \frac{1}{2\pi}\oint_{\partial G}\frac{\left|f(z)\right|}{\left|z-z_0\right|^2}\, |dz| \le \frac{1}{2\pi}\, 2\pi r\,\frac{C}{r^2} = \frac{C}{r} \xrightarrow{\;r\to\infty\;} 0. \qquad (8.48)$$

As a result, $f'(z_0) = 0$ for every $z_0$, and thus $f = A \in \mathbb{C}$ is constant.
8.11.3 Picard’s theorem
Picard's theorem states that any entire function that misses two or more
points, $f : \mathbb{C} \to \mathbb{C} \setminus \{z_1, z_2, \ldots\}$, is constant. Conversely, any nonconstant
entire function covers the entire complex plane $\mathbb{C}$ with the possible
exception of a single point. An example of a nonconstant entire function
is $e^z$, which never reaches the point 0.
8.11.4 Meromorphic function
If f has no singularities other than poles in the domain G it is called mero-
morphic in the domain G .
We state without proof (e.g., Theorem 8.5.1 of 8) that a function f
which is meromorphic in the extended plane is a rational function $f(z) = P(z)/Q(z)$,
which can be written as the ratio of two polynomial functions
P(z) and Q(z).
8 Einar Hille. Analytic Function Theory. Ginn, New York, 1962. 2 Volumes
8.12 Fundamental theorem of algebra
The factor theorem states that a polynomial f (z) in z of degree k has a
factor $z - z_0$ if and only if $f(z_0) = 0$, and can thus be written as $f(z) = (z - z_0) g(z)$,
where g(z) is a polynomial in z of degree k − 1. Hence, by
iteration,

$$f(z) = \alpha \prod_{i=1}^{k}(z - z_i), \qquad (8.49)$$

where $\alpha \in \mathbb{C}$.
No proof is presented here.
The fundamental theorem of algebra states that every non-constant
polynomial (with arbitrary complex coefficients) has a root [i.e. a solution
of f (z) = 0] in the complex plane. Therefore, by the factor theorem, the
number of roots of a polynomial, up to multiplicity, equals its degree.
Again, no proof is presented here. https://www.dpmms.cam.ac.uk/ wtg10/ftalg.html
9
Brief review of Fourier transforms
9.0.1 Functional spaces
That complex continuous waveforms or functions are comprised of a
number of harmonics seems to be an idea at least as old as the Pythagoreans.
In physical terms, Fourier analysis 1 attempts to decompose a function into
1 T. W. Körner. Fourier Analysis. Cambridge University Press, Cambridge, UK, 1988; Kenneth B. Howell. Principles of Fourier analysis. Chapman & Hall/CRC, Boca Raton, London, New York, Washington, D.C., 2001; and Russell Herman. Introduction to Fourier and Complex Analysis with Applications to the Spectral Analysis of Signals. University of North Carolina Wilmington, Wilmington, NC, 2010. URL http://people.uncw.edu/hermanr/mat367/FCABook/Book2010/FTCA-book.pdf. Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License
its constituent frequencies, known as a frequency spectrum. Thereby the
goal is the expansion of periodic and aperiodic functions into sine and
cosine functions. Fourier's observation or conjecture is, informally speaking,
that any “suitable” function f (x) can be expressed as a possibly infinite
sum (i.e. linear combination) of sines and cosines of the form

$$f(x) = \sum_{k=-\infty}^{\infty}\left[A_k \cos(Ckx) + B_k \sin(Ckx)\right]. \qquad (9.1)$$

Moreover, it is conjectured that any “suitable” function f (x) can be
expressed as a possibly infinite sum (i.e. linear combination) of exponentials;
that is,

$$f(x) = \sum_{k=-\infty}^{\infty} D_k e^{ikx}. \qquad (9.2)$$

More generally, it is conjectured that any “suitable” function f (x) can
be expressed as a possibly infinite sum (i.e. linear combination) of other
(possibly orthonormal) functions $g_k(x)$; that is,

$$f(x) = \sum_{k=-\infty}^{\infty} \gamma_k\, g_k(x). \qquad (9.3)$$
The bigger picture can then be viewed in terms of functional (vector)
spaces: these are spanned by the elementary functions gk , which serve
as elements of a functional basis of a possibly infinite-dimensional vector
space. Suppose, in further analogy of the set of all such functions
$G = \bigcup_k g_k(x)$ to the (Cartesian) standard basis, we can consider these
elementary functions $g_k$ to be orthonormal in the sense of a generalized
functional scalar product [cf. also Section 14.5 on page 233; in particular
Eq. (14.85)]

$$\langle g_k \mid g_l \rangle = \int_a^b g_k(x)\, g_l(x)\, dx = \delta_{kl}. \qquad (9.4)$$
One could arrange the coefficients $\gamma_k$ into a tuple (an ordered list of
elements) $(\gamma_1, \gamma_2, \ldots)$ and consider them as components or coordinates of a
vector with respect to the linear orthonormal functional basis G.
9.0.2 Fourier series
Suppose that a function f (x) is periodic in the interval $[-\frac{L}{2}, \frac{L}{2}]$ with period
L. (Alternatively, the function may be only defined in this interval.) A
function f (x) is periodic if there exists a period $L \in \mathbb{R}$ such that, for all x in
the domain of f,

$$f(L + x) = f(x). \qquad (9.5)$$
Then, under certain “mild” conditions – that is, f must be piecewise
continuous and have only a finite number of maxima and minima – f can
be decomposed into a Fourier series

$$f(x) = \frac{a_0}{2} + \sum_{k=1}^{\infty}\left[a_k \cos\left(\tfrac{2\pi}{L}kx\right) + b_k \sin\left(\tfrac{2\pi}{L}kx\right)\right], \quad \text{with}$$
$$a_k = \frac{2}{L}\int_{-L/2}^{L/2} f(x)\cos\left(\tfrac{2\pi}{L}kx\right) dx \quad \text{for } k \ge 0,$$
$$b_k = \frac{2}{L}\int_{-L/2}^{L/2} f(x)\sin\left(\tfrac{2\pi}{L}kx\right) dx \quad \text{for } k > 0. \qquad (9.6)$$

For proofs and additional information see §8.1 in Kenneth B. Howell. Principles of Fourier analysis. Chapman & Hall/CRC, Boca Raton, London, New York, Washington, D.C., 2001
For a (heuristic) proof, consider the Fourier conjecture (9.1), and
compute the coefficients $A_k$, $B_k$, and C.

First, observe that we have assumed that f is periodic in the interval
$[-\frac{L}{2}, \frac{L}{2}]$ with period L. This should be reflected in the sine and cosine terms
of (9.1), which themselves are periodic functions in the interval $[-\pi, \pi]$
with period 2π. Thus in order to map the functional period of f into the
sines and cosines, we can “stretch/shrink” L into 2π; that is, C in Eq. (9.1) is
identified with

$$C = \frac{2\pi}{L}. \qquad (9.7)$$

Thus we obtain

$$f(x) = \sum_{k=-\infty}^{\infty}\left[A_k \cos\left(\tfrac{2\pi}{L}kx\right) + B_k \sin\left(\tfrac{2\pi}{L}kx\right)\right]. \qquad (9.8)$$

Now use the following properties: (i) for k = 0, cos(0) = 1 and sin(0) = 0.
Thus, by comparing the coefficient $a_0$ in (9.6) with $A_0$ in (9.1), we obtain
$A_0 = \frac{a_0}{2}$.

(ii) Since cos(x) = cos(−x) is an even function of x, we can rearrange the
summation by combining identical functions $\cos(-\tfrac{2\pi}{L}kx) = \cos(\tfrac{2\pi}{L}kx)$,
thus obtaining $a_k = A_{-k} + A_k$ for k > 0.

(iii) Since sin(x) = −sin(−x) is an odd function of x, we can rearrange the
summation by combining identical functions $\sin(-\tfrac{2\pi}{L}kx) = -\sin(\tfrac{2\pi}{L}kx)$,
thus obtaining $b_k = -B_{-k} + B_k$ for k > 0.
BRIEF REVIEW OF FOURIER TRANSFORMS

Having obtained the same form of the Fourier series of $f(x)$ as exposed in (9.6), we now turn to the derivation of the coefficients $a_k$ and $b_k$. $a_0$ can be derived by just considering the functional scalar product, exposed in Eq. (9.4), of $f(x)$ with the constant identity function $g(x) = 1$; that is,
\[
\langle g \mid f \rangle = \int_{-L/2}^{L/2} f(x)\,dx
= \int_{-L/2}^{L/2}\left\{\frac{a_0}{2} + \sum_{n=1}^{\infty}\left[ a_n \cos\left(\frac{2\pi}{L}nx\right) + b_n \sin\left(\frac{2\pi}{L}nx\right)\right]\right\} dx
= \frac{a_0 L}{2}, \tag{9.9}
\]
and hence
\[
a_0 = \frac{2}{L}\int_{-L/2}^{L/2} f(x)\,dx. \tag{9.10}
\]
In a very similar manner, the other coefficients can be computed by considering $\left\langle \cos\left(\frac{2\pi}{L}kx\right) \,\middle|\, f(x)\right\rangle$ and $\left\langle \sin\left(\frac{2\pi}{L}kx\right) \,\middle|\, f(x)\right\rangle$ and exploiting the orthogonality relations for sines and cosines
\[
\int_{-L/2}^{L/2} \sin\left(\frac{2\pi}{L}kx\right)\cos\left(\frac{2\pi}{L}lx\right)dx = 0,
\]
\[
\int_{-L/2}^{L/2} \cos\left(\frac{2\pi}{L}kx\right)\cos\left(\frac{2\pi}{L}lx\right)dx
= \int_{-L/2}^{L/2} \sin\left(\frac{2\pi}{L}kx\right)\sin\left(\frac{2\pi}{L}lx\right)dx
= \frac{L}{2}\delta_{kl}. \tag{9.11}
\]
For the sake of an example, let us compute the Fourier series of
\[
f(x) = |x| = \begin{cases} -x & \text{for } -\pi \le x < 0, \\ +x & \text{for } 0 \le x \le \pi. \end{cases}
\]
First observe that $L = 2\pi$, and that $f(x) = f(-x)$; that is, $f$ is an even function of $x$; hence $b_n = 0$, and the coefficients $a_n$ can be obtained by considering only the integration between $0$ and $\pi$.
For $n = 0$,
\[
a_0 = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\,dx = \frac{2}{\pi}\int_0^{\pi} x\,dx = \pi.
\]
For $n > 0$,
\[
a_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\cos(nx)\,dx = \frac{2}{\pi}\int_0^{\pi} x\cos(nx)\,dx
= \frac{2}{\pi}\left[\left.\frac{x\sin(nx)}{n}\right|_0^{\pi} - \int_0^{\pi}\frac{\sin(nx)}{n}\,dx\right]
= \frac{2}{\pi}\left.\frac{\cos(nx)}{n^2}\right|_0^{\pi}
= \frac{2}{\pi}\,\frac{\cos(n\pi)-1}{n^2}
= -\frac{4}{\pi n^2}\sin^2\frac{n\pi}{2}
= \begin{cases} 0 & \text{for even } n, \\ -\dfrac{4}{\pi n^2} & \text{for odd } n. \end{cases}
\]
Thus,
\[
f(x) = \frac{\pi}{2} - \frac{4}{\pi}\left(\cos x + \frac{\cos 3x}{9} + \frac{\cos 5x}{25} + \cdots\right)
= \frac{\pi}{2} - \frac{4}{\pi}\sum_{n=0}^{\infty}\frac{\cos[(2n+1)x]}{(2n+1)^2}.
\]
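The partial sums of this series can be checked numerically. A minimal sketch in Python with numpy (the library choice, grid, truncation order $N$, and tolerance are illustrative assumptions, not from the text): it sums the first $N$ odd-harmonic terms and compares against $|x|$ on $[-\pi,\pi]$.

```python
import numpy as np

def fourier_abs(x, N=2000):
    """Partial sum of the Fourier series of f(x) = |x| on [-pi, pi]."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    n = np.arange(N)
    odd = 2*n + 1                                    # odd harmonics 1, 3, 5, ...
    terms = np.cos(np.outer(odd, x)) / (odd[:, None]**2)
    return np.pi/2 - (4/np.pi) * terms.sum(axis=0)

xs = np.linspace(-np.pi, np.pi, 201)
err = np.max(np.abs(fourier_abs(xs) - np.abs(xs)))   # uniform error of the partial sum
```

Since the coefficients decay like $1/n^2$, the series converges uniformly and the residual of the partial sum shrinks like $1/N$.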
One could arrange the coefficients $(a_0, a_1, b_1, a_2, b_2, \ldots)$ into a tuple (an ordered list of elements) and consider them as components or coordinates of a vector spanned by the linearly independent sine and cosine functions which serve as a basis of an infinite-dimensional vector space.
9.0.3 Exponential Fourier series

Suppose again that a function is periodic in the interval $[-\frac{L}{2},\frac{L}{2}]$ with period $L$. Then, under certain “mild” conditions – that is, $f$ must be piecewise continuous and have only a finite number of maxima and minima – $f$ can be decomposed into an exponential Fourier series
\[
f(x) = \sum_{k=-\infty}^{\infty} c_k e^{ikx}, \text{ with }
c_k = \frac{1}{L}\int_{-L/2}^{L/2} f(x')e^{-ikx'}dx'. \tag{9.12}
\]
The exponential form of the Fourier series can be derived from the Fourier series (9.6) by Euler’s formula (8.2), in particular, $e^{ik\varphi} = \cos(k\varphi) + i\sin(k\varphi)$, and thus
\[
\cos(k\varphi) = \frac{1}{2}\left(e^{ik\varphi} + e^{-ik\varphi}\right), \text{ as well as } \sin(k\varphi) = \frac{1}{2i}\left(e^{ik\varphi} - e^{-ik\varphi}\right).
\]
By comparing the coefficients of (9.6) with the coefficients of (9.12), we obtain
\[
a_k = c_k + c_{-k} \text{ for } k \ge 0, \qquad
b_k = i(c_k - c_{-k}) \text{ for } k > 0, \tag{9.13}
\]
or
\[
c_k = \begin{cases}
\frac{1}{2}(a_k - ib_k) & \text{for } k > 0, \\
\frac{a_0}{2} & \text{for } k = 0, \\
\frac{1}{2}(a_{-k} + ib_{-k}) & \text{for } k < 0.
\end{cases} \tag{9.14}
\]
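These coefficient relations are easy to spot-check numerically for the example $f(x) = |x|$ on $[-\pi,\pi]$ from above. A sketch in Python with numpy (the library, the plain Riemann-sum quadrature, and the tolerances are assumptions for illustration, not from the text):

```python
import numpy as np

# f(x) = |x| on [-pi, pi], L = 2*pi; plain Riemann sums on a fine grid
L = 2*np.pi
x = np.linspace(-np.pi, np.pi, 200001)
dx = x[1] - x[0]
f = np.abs(x)

def a(k):
    return (2/L) * (f * np.cos(k*x)).sum() * dx

def b(k):
    return (2/L) * (f * np.sin(k*x)).sum() * dx

def c(k):
    return (1/L) * (f * np.exp(-1j*k*x)).sum() * dx

# c_1 should equal (a_1 - i b_1)/2 = -2/pi, and c_0 should equal a_0/2 = pi/2
```

Since $f$ is even, $b_k = 0$ and the $c_k$ come out real, in agreement with (9.14).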
Eqs. (9.12) can be combined into
\[
f(x) = \frac{1}{L}\sum_{k=-\infty}^{\infty}\int_{-L/2}^{L/2} f(x')e^{-ik(x'-x)}dx'. \tag{9.15}
\]
9.0.4 Fourier transformation

Suppose we define $\Delta k = 2\pi/L$, or $1/L = \Delta k/2\pi$. Then Eq. (9.15) can be rewritten as
\[
f(x) = \frac{1}{2\pi}\sum_{k=-\infty}^{\infty}\int_{-L/2}^{L/2} f(x')e^{-ik(x'-x)}dx'\,\Delta k. \tag{9.16}
\]
Now, in the “aperiodic” limit $L \to \infty$ we obtain the Fourier transformation and the Fourier inversion $\mathcal{F}^{-1}[\mathcal{F}[f(x)]] = \mathcal{F}[\mathcal{F}^{-1}[f(x)]] = f(x)$ by
\[
f(x) = \frac{1}{2\pi}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f(x')e^{-ik(x'-x)}dx'\,dk, \text{ whereby}
\]
\[
\mathcal{F}^{-1}[\tilde{f}(k)] = f(x) = \alpha\int_{-\infty}^{\infty}\tilde{f}(k)e^{\pm ikx}dk, \text{ and }
\mathcal{F}[f(x)] = \tilde{f}(k) = \beta\int_{-\infty}^{\infty} f(x')e^{\mp ikx'}dx'. \tag{9.17}
\]
$\mathcal{F}[f(x)] = \tilde{f}(k)$ is called the Fourier transform of $f(x)$. Per convention, either one of the two sign pairs $+-$ or $-+$ must be chosen. The factors $\alpha$ and $\beta$ must be chosen such that
\[
\alpha\beta = \frac{1}{2\pi}; \tag{9.18}
\]
that is, the factorization can be “spread evenly among $\alpha$ and $\beta$,” such that $\alpha = \beta = 1/\sqrt{2\pi}$, or “unevenly,” such as, for instance, $\alpha = 1$ and $\beta = 1/2\pi$, or $\alpha = 1/2\pi$ and $\beta = 1$.
Most generally, the Fourier transformations can be rewritten (change of integration constant), with arbitrary $A, B \in \mathbb{R}$, as
\[
\mathcal{F}^{-1}[\tilde{f}(k)](x) = f(x) = B\int_{-\infty}^{\infty}\tilde{f}(k)e^{iAkx}dk, \text{ and }
\mathcal{F}[f(x)](k) = \tilde{f}(k) = \frac{A}{2\pi B}\int_{-\infty}^{\infty} f(x')e^{-iAkx'}dx'. \tag{9.19}
\]
The choice $A = 2\pi$ and $B = 1$ renders a very symmetric form of (9.19); more precisely,
\[
\mathcal{F}^{-1}[\tilde{f}(k)](x) = f(x) = \int_{-\infty}^{\infty}\tilde{f}(k)e^{2\pi ikx}dk, \text{ and }
\mathcal{F}[f(x)](k) = \tilde{f}(k) = \int_{-\infty}^{\infty} f(x')e^{-2\pi ikx'}dx'. \tag{9.20}
\]
For the sake of an example, assume $A = 2\pi$ and $B = 1$ in Eq. (9.19), therefore starting with (9.20), and consider the Fourier transform of the Gaussian function
\[
\varphi(x) = e^{-\pi x^2}. \tag{9.21}
\]
As a hint, notice that $e^{-t^2}$ is analytic in the region $0 \le \operatorname{Im} t \le \sqrt{\pi}k$; also, as will be shown in Eqs. (10.18), the Gaussian integral is
\[
\int_{-\infty}^{\infty} e^{-t^2}dt = \sqrt{\pi}. \tag{9.22}
\]
With $A = 2\pi$ and $B = 1$ in Eq. (9.19), the Fourier transform of the Gaussian function is
\[
\mathcal{F}[\varphi(x)](k) = \tilde{\varphi}(k) = \int_{-\infty}^{\infty} e^{-\pi x^2}e^{-2\pi ikx}dx
\quad\text{[completing the exponent]}\quad
= \int_{-\infty}^{\infty} e^{-\pi k^2}e^{-\pi(x+ik)^2}dx. \tag{9.23}
\]
The variable transformation $t = \sqrt{\pi}(x + ik)$ yields $dt/dx = \sqrt{\pi}$; thus $dx = dt/\sqrt{\pi}$, and
\[
\mathcal{F}[\varphi(x)](k) = \tilde{\varphi}(k) = \frac{e^{-\pi k^2}}{\sqrt{\pi}}\int_{-\infty+i\sqrt{\pi}k}^{+\infty+i\sqrt{\pi}k} e^{-t^2}dt. \tag{9.24}
\]
[Figure 9.1: Integration path to compute the Fourier transform of the Gaussian.]
Let us rewrite the integration (9.24) into the Gaussian integral by considering the closed path $C$ whose “left and right pieces vanish;” moreover,
\[
\oint_C e^{-t^2}dt = \int_{+\infty}^{-\infty} e^{-t^2}dt + \int_{-\infty+i\sqrt{\pi}k}^{+\infty+i\sqrt{\pi}k} e^{-t^2}dt = 0, \tag{9.25}
\]
because $e^{-t^2}$ is analytic in the region $0 \le \operatorname{Im} t \le \sqrt{\pi}k$. Thus, by substituting
\[
\int_{-\infty+i\sqrt{\pi}k}^{+\infty+i\sqrt{\pi}k} e^{-t^2}dt = \int_{-\infty}^{+\infty} e^{-t^2}dt \tag{9.26}
\]
in (9.24) and by insertion of the value $\sqrt{\pi}$ for the Gaussian integral, as shown in Eq. (10.18), we finally obtain
\[
\mathcal{F}[\varphi(x)](k) = \tilde{\varphi}(k) = \frac{e^{-\pi k^2}}{\sqrt{\pi}}\underbrace{\int_{-\infty}^{+\infty} e^{-t^2}dt}_{\sqrt{\pi}} = e^{-\pi k^2}. \tag{9.27}
\]
A very similar calculation yields
\[
\mathcal{F}^{-1}[\tilde{\varphi}(k)](x) = \varphi(x) = e^{-\pi x^2}. \tag{9.28}
\]
Eqs. (9.27) and (9.28) establish the fact that the Gaussian function $\varphi(x) = e^{-\pi x^2}$ defined in (9.21) is an eigenfunction of the Fourier transformations $\mathcal{F}$ and $\mathcal{F}^{-1}$ with associated eigenvalue 1. (See Sect. 6.3 in Robert Strichartz. A Guide to Distribution Theory and Fourier Transforms. CRC Press, Boca Raton, Florida, USA, 1994. ISBN 0849382734.)
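This eigenvalue-1 property can be verified numerically. A sketch in Python with numpy (the library, the truncated grid, and the tolerance are illustrative assumptions, not from the text), using the $A = 2\pi$, $B = 1$ convention of (9.20):

```python
import numpy as np

x = np.linspace(-10, 10, 20001)
dx = x[1] - x[0]
phi = np.exp(-np.pi * x**2)

ks = np.linspace(-2, 2, 9)
# F[phi](k) with A = 2*pi, B = 1, i.e. kernel exp(-2*pi*i*k*x)
ft = np.array([(phi * np.exp(-2j*np.pi*k*x)).sum() * dx for k in ks])
err = np.max(np.abs(ft - np.exp(-np.pi * ks**2)))
```

Because the integrand is smooth and decays rapidly, the equispaced quadrature is extremely accurate here, and the transform reproduces $e^{-\pi k^2}$ to high precision.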
With a slightly different definition the Gaussian function $f(x) = e^{-x^2/2}$ is also an eigenfunction of the operator
\[
H = -\frac{d^2}{dx^2} + x^2 \tag{9.29}
\]
corresponding to a harmonic oscillator. The resulting eigenvalue equation is
\[
H f(x) = \left[-\frac{d^2}{dx^2} + x^2\right]f(x) = -\frac{d}{dx}\left(-x f(x)\right) + x^2 f(x) = f(x), \tag{9.30}
\]
with eigenvalue 1.
Instead of going too much into the details here, it may suffice to say that the Hermite functions
\[
h_n(x) = \pi^{-1/4}(2^n n!)^{-1/2}\left(\frac{d}{dx} - x\right)^n e^{-x^2/2} = \pi^{-1/4}(2^n n!)^{-1/2}H_n(x)e^{-x^2/2} \tag{9.31}
\]
are all eigenfunctions of the Fourier transform with the eigenvalue $i^n\sqrt{2\pi}$. The polynomial $H_n(x)$ of degree $n$ is called a Hermite polynomial. Hermite functions form a complete system, so that any function $g$ (with $\int |g(x)|^2 dx < \infty$) has a Hermite expansion
\[
g(x) = \sum_{n=0}^{\infty}\langle g, h_n\rangle h_n(x). \tag{9.32}
\]
This is an example of an eigenfunction expansion.
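The eigenvalue claim can be spot-checked numerically for the lowest Hermite functions. A sketch in Python with numpy (an illustration, not from the text): it assumes the “$+$” sign with $\beta = 1$ in (9.17), i.e. $\mathcal{F}[f](k) = \int f(x)e^{+ikx}dx$, and unnormalized $h_0, h_1$ built from $(d/dx - x)^n e^{-x^2/2}$; with other sign or $(\alpha,\beta)$ choices the eigenvalue acquires different phases and factors.

```python
import numpy as np

x = np.linspace(-12, 12, 24001)
dx = x[1] - x[0]
k = np.linspace(-3, 3, 121)

def ft(f):
    """F[f](k) = integral f(x) exp(+i k x) dx  ('+' sign, beta = 1 in (9.17))."""
    return (f[None, :] * np.exp(1j * np.outer(k, x))).sum(axis=1) * dx

h0 = np.exp(-x**2 / 2)               # n = 0 (normalization irrelevant here)
h1 = -2.0 * x * np.exp(-x**2 / 2)    # n = 1: (d/dx - x) e^{-x^2/2}
h0_k = np.exp(-k**2 / 2)
h1_k = -2.0 * k * np.exp(-k**2 / 2)

err0 = np.max(np.abs(ft(h0) - (1j**0) * np.sqrt(2*np.pi) * h0_k))
err1 = np.max(np.abs(ft(h1) - (1j**1) * np.sqrt(2*np.pi) * h1_k))
```

Both residuals should be at quadrature-noise level, exhibiting the eigenvalues $i^0\sqrt{2\pi}$ and $i^1\sqrt{2\pi}$.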
10
Distributions as generalized functions
10.1 Heuristically coping with discontinuities
What follows are “recipes” and a “cooking course” for some “dishes” Heav-
iside, Dirac and others have enjoyed “eating,” alas without being able to
“explain their digestion” (cf. the citation by Heaviside on page 22).
Insofar as theoretical physics is natural philosophy, the question arises whether physical entities need to be smooth and continuous – in particular, whether physical functions need to be smooth (i.e., in $C^\infty$), having derivatives of all orders¹ (such as polynomials, trigonometric and exponential functions) – as “nature abhors sudden discontinuities,” or whether we are willing to allow and conceptualize singularities of different sorts. Other, entirely different, scenarios are discrete² computer-generated universes³. This little course is no place for preferences and judgments regarding these matters. Let me just point out that contemporary mathematical physics is not only leaning toward, but appears to be deeply committed to, discontinuities; both in classical and quantized field theories dealing with “point charges,” as well as in general relativity, the (nonquantized field theoretical) geometrodynamics of gravitation, dealing with singularities such as “black holes” or “initial singularities” of various sorts.

¹ William F. Trench. Introduction to Real Analysis. Free Hyperlinked Edition 2.01, 2012. URL http://ramanujan.math.trinity.edu/wtrench/texts/TRENCH_REAL_ANALYSIS.PDF
² Konrad Zuse. Discrete mathematics and Rechnender Raum. 1994. URL http://www.zib.de/PaperWeb/abstracts/TR-94-10/; and Konrad Zuse. Rechnender Raum. Friedrich Vieweg & Sohn, Braunschweig, 1969.
³ Edward Fredkin. Digital mechanics. An informational process based on reversible universal cellular automata. Physica, D45:254–270, 1990. DOI: 10.1016/0167-2789(90)90186-S. URL http://dx.doi.org/10.1016/0167-2789(90)90186-S; T. Toffoli. The role of the observer in uniform systems. In George J. Klir, editor, Applied General Systems Research, Recent Developments and Trends, pages 395–400. Plenum Press, New York, London, 1978; and Karl Svozil. Computational universes. Chaos, Solitons & Fractals, 25(4):845–859, 2006a. DOI: 10.1016/j.chaos.2004.11.055. URL http://dx.doi.org/10.1016/j.chaos.2004.11.055.
Discontinuities were introduced quite naturally as electromagnetic pulses, which can, for instance, be described with the Heaviside function $H(t)$ representing vanishing, zero field strength until time $t = 0$, when suddenly a constant electrical field is “switched on eternally.” It is quite natural to ask what the derivative of the (infinite pulse) function $H(t)$ might be. At this point the reader is kindly asked to stop reading for a moment and contemplate what kind of function that might be.

Heuristically, if we call this derivative the (Dirac) delta function $\delta$ defined by $\delta(t) = \frac{dH(t)}{dt}$, we can assure ourselves of two of its properties: (i) “$\delta(t) = 0$ for $t \neq 0$,” as well as, since $H$ is the antiderivative of $\delta$, (ii) “$\int_{-\infty}^{\infty}\delta(t)\,dt = \int_{-\infty}^{\infty}\frac{dH(t)}{dt}\,dt = H(\infty) - H(-\infty) = 1 - 0 = 1$.”

(This heuristic definition of the Dirac delta function $\delta_y(x) = \delta(x,y) = \delta(x-y)$ with a discontinuity at $y$ is not unlike the discrete Kronecker symbol $\delta_{ij}$, as (i) $\delta_{ij} = 0$ for $i \neq j$, as well as (ii) “$\sum_{i=-\infty}^{\infty}\delta_{ij} = \sum_{j=-\infty}^{\infty}\delta_{ij} = 1$.” We may even define the Kronecker symbol $\delta_{ij}$ as the difference quotient of some “discrete Heaviside function” $H_{ij} = 1$ for $i \ge j$, and $H_{ij} = 0$ else: $\delta_{ij} = H_{ij} - H_{(i-1)j} = 1$ only for $i = j$; else it vanishes.)
Indeed, we could follow a pattern of “growing discontinuity,” reachable by ever higher derivatives of the absolute value (or modulus); that is, we shall pursue the path sketched by
\[
|x| \xrightarrow{\;d/dx\;} \operatorname{sgn}(x),\, H(x) \xrightarrow{\;d/dx\;} \delta(x) \xrightarrow{\;d^n/dx^n\;} \delta^{(n)}(x).
\]
Objects like $|x|$, $H(t)$ or $\delta(t)$ may be heuristically understandable as “functions” not unlike the regular analytic functions; alas their $n$th derivatives cannot be straightforwardly defined. In order to cope with a formally precise definition and derivation of (infinite) pulse functions, a theory of generalized functions, or, used synonymously, distributions has been developed. In what follows we shall develop the theory of distributions, always keeping in mind the assumptions regarding (dis)continuities that make this part of calculus necessary.
Thereby, we shall “pair” these generalized functions $F$ with suitable “good” test functions $\varphi$; integrate over these pairs, and thereby obtain a linear continuous functional $F[\varphi]$, also denoted by $\langle F,\varphi\rangle$. A further strategy then is to “transfer” or “shift” operations on and transformations of $F$ – such as differentiations or Fourier transformations, but also multiplications with polynomials or other smooth functions – to the test function $\varphi$ according to adjoint identities (see Sect. 2.3 in Robert Strichartz. A Guide to Distribution Theory and Fourier Transforms. CRC Press, Boca Raton, Florida, USA, 1994. ISBN 0849382734)
\[
\langle \mathbf{T}F, \varphi\rangle = \langle F, \mathbf{S}\varphi\rangle. \tag{10.1}
\]
For example, for $n$-fold differentiation,
\[
\mathbf{S} = (-1)^n\mathbf{T} = (-1)^n\frac{d^{(n)}}{dx^{(n)}}, \tag{10.2}
\]
and for the Fourier transformation,
\[
\mathbf{S} = \mathbf{T} = \mathcal{F}. \tag{10.3}
\]
For some (smooth) functional multiplier $g(x) \in C^\infty$,
\[
\mathbf{S} = \mathbf{T} = g(x). \tag{10.4}
\]
One more issue is the problem of the meaning and existence of weak solutions (also called generalized solutions) of differential equations for which, if interpreted in terms of regular functions, the derivatives may not all exist.

Take, for example, the wave equation in one spatial dimension, $\frac{\partial^2}{\partial t^2}u(x,t) = c^2\frac{\partial^2}{\partial x^2}u(x,t)$. It has a solution of the form⁴ $u(x,t) = f(x - ct) + g(x + ct)$, where $f$ and $g$ characterize a travelling “shape” of inert, unchanged form. There is no obvious physical reason why the pulse shape function $f$ or $g$ should be differentiable, alas if it is not, then $u$ is not differentiable either. What if we, for instance, set $g = 0$, and identify $f(x - ct)$ with the Heaviside infinite pulse function $H(x - ct)$?

⁴ Asim O. Barut. $E = \hbar\omega$. Physics Letters A, 143(8):349–352, 1990. ISSN 0375-9601. DOI: 10.1016/0375-9601(90)90369-Y. URL http://dx.doi.org/10.1016/0375-9601(90)90369-Y.
10.2 General distribution

(A nice video on “Setting Up the Fourier Transform of a Distribution” by Professor Dr. Brad G. Osgood, Stanford, is available via URL http://www.academicearth.org/lectures/setting-up-fourier-transform-of-distribution.)
Suppose we have some “function” $F(x)$; that is, $F(x)$ could be either a regular analytical function, such as $F(x) = x$, or some other, “weirder function,” such as the Dirac delta function, or the derivative of the Heaviside (unit step) function, which might be “highly discontinuous.” As an Ansatz, we may associate with this “function” $F(x)$ a distribution, or, used synonymously, a generalized function $F[\varphi]$ or $\langle F,\varphi\rangle$ which in the “weak sense” is defined as a continuous linear functional by integrating $F(x)$ together with some “good” test function $\varphi$ as follows⁵:
\[
F(x) \longleftrightarrow \langle F,\varphi\rangle \equiv F[\varphi] = \int_{-\infty}^{\infty} F(x)\varphi(x)\,dx. \tag{10.5}
\]

⁵ Laurent Schwartz. Introduction to the Theory of Distributions. University of Toronto Press, Toronto, 1952. Collected and written by Israel Halperin.
We say that F [ϕ] or ⟨F ,ϕ⟩ is the distribution associated with or induced by
F (x).
One interpretation of F [ϕ] ≡ ⟨F ,ϕ⟩ is that F stands for a sort of “mea-
surement device,” and ϕ represents some “system to be measured;” then
F [ϕ] ≡ ⟨F ,ϕ⟩ is the “outcome” or “measurement result.”
Thereby, it completely suffices to say what F “does to” some test func-
tion ϕ; there is nothing more to it.
For example, the Dirac delta function $\delta(x)$ is completely characterised by
\[
\delta(x) \longleftrightarrow \delta[\varphi] \equiv \langle\delta,\varphi\rangle = \varphi(0);
\]
likewise, the shifted Dirac delta function $\delta_y(x) \equiv \delta(x-y)$ is completely characterised by
\[
\delta_y(x) \equiv \delta(x-y) \longleftrightarrow \delta_y[\varphi] \equiv \langle\delta_y,\varphi\rangle = \varphi(y).
\]
Many other (regular) functions which are usually not integrable in the interval $(-\infty,+\infty)$ will, through the pairing with a “suitable” or “good” test function $\varphi$, induce a distribution.
For example, take
\[
1 \longleftrightarrow 1[\varphi] \equiv \langle 1,\varphi\rangle = \int_{-\infty}^{\infty}\varphi(x)\,dx,
\]
or
\[
x \longleftrightarrow x[\varphi] \equiv \langle x,\varphi\rangle = \int_{-\infty}^{\infty}x\varphi(x)\,dx,
\]
or
\[
e^{2\pi iax} \longleftrightarrow e^{2\pi iax}[\varphi] \equiv \langle e^{2\pi iax},\varphi\rangle = \int_{-\infty}^{\infty}e^{2\pi iax}\varphi(x)\,dx.
\]
10.2.1 Duality

Sometimes, $F[\varphi] \equiv \langle F,\varphi\rangle$ is also written in a scalar product notation; that is, $F[\varphi] = \langle F \mid \varphi\rangle$. This emphasizes the pairing aspect of $F[\varphi] \equiv \langle F,\varphi\rangle$. It can also be shown that the set of all distributions $F$ is the dual space of the set of test functions $\varphi$.
10.2.2 Linearity

Recall that a linear functional is some mathematical entity which maps a function or another mathematical object into scalars in a linear manner; that is, as the integral is linear, we obtain
\[
F[c_1\varphi_1 + c_2\varphi_2] = c_1F[\varphi_1] + c_2F[\varphi_2]; \tag{10.6}
\]
or, in the bracket notation,
\[
\langle F, c_1\varphi_1 + c_2\varphi_2\rangle = c_1\langle F,\varphi_1\rangle + c_2\langle F,\varphi_2\rangle. \tag{10.7}
\]
This linearity is guaranteed by integration.

10.2.3 Continuity

One way of expressing continuity is the following:
\[
\text{if } \varphi_n \xrightarrow{n\to\infty} \varphi, \text{ then } F[\varphi_n] \xrightarrow{n\to\infty} F[\varphi], \tag{10.8}
\]
or, in the bracket notation,
\[
\text{if } \varphi_n \xrightarrow{n\to\infty} \varphi, \text{ then } \langle F,\varphi_n\rangle \xrightarrow{n\to\infty} \langle F,\varphi\rangle. \tag{10.9}
\]
10.3 Test functions
10.3.1 Desiderata on test functions
By invoking test functions, we would like to be able to differentiate distri-
butions very much like ordinary functions. We would also like to transfer
differentiations to the functional context. How can this be implemented in
terms of possible “good” properties we require from the behaviour of test
functions, in accord with our wishes?
Consider the partial integration obtained from $(uv)' = u'v + uv'$; thus $\int(uv)' = \int u'v + \int uv'$, and finally $\int u'v = \int(uv)' - \int uv'$, thereby effectively allowing us to “shift” or “transfer” the differentiation of the original function to the test function. By identifying $u$ with the generalized function $g$ (such as, for instance, $\delta$), and $v$ with the test function $\varphi$, respectively, we obtain
\[
\langle g',\varphi\rangle \equiv g'[\varphi] = \int_{-\infty}^{\infty} g'(x)\varphi(x)\,dx
= g(x)\varphi(x)\Big|_{-\infty}^{\infty} - \int_{-\infty}^{\infty} g(x)\varphi'(x)\,dx
= \underbrace{g(\infty)\varphi(\infty)}_{\text{should vanish}} - \underbrace{g(-\infty)\varphi(-\infty)}_{\text{should vanish}} - \int_{-\infty}^{\infty} g(x)\varphi'(x)\,dx
= -g[\varphi'] \equiv -\langle g,\varphi'\rangle. \tag{10.10}
\]
We can justify the two main requirements for “good” test functions, at least for a wide variety of purposes:

1. that they “sufficiently” vanish at infinity – which can, for instance, be achieved by requiring that their support (the set of arguments $x$ where $g(x) \neq 0$) is finite; and

2. that they are continuously differentiable – indeed, by induction, that they are arbitrarily often differentiable.
In what follows we shall enumerate three types of suitable test functions satisfying these desiderata. One should, however, bear in mind that the class of “good” test functions depends on the distribution. Take, for example, the Dirac delta function $\delta(x)$. It is so “concentrated” that any (infinitely often) differentiable – even constant – function $f(x)$ defined “around $x = 0$” can serve as a “good” test function (with respect to $\delta$), as $f(x)$ is only evaluated at $x = 0$; that is, $\delta[f] = f(0)$. This is again an indication of the duality between distributions on the one hand, and their test functions on the other hand.
10.3.2 Test function class I

Recall that we require⁶ our test functions $\varphi$ to be infinitely often differentiable. Furthermore, in order to get rid of terms at infinity “in a straightforward, simple way,” suppose that their support is compact. Compact support means that $\varphi(x)$ is nonvanishing only on a finite, bounded region of $x$. Such a “good” test function is, for instance,
\[
\varphi_{\sigma,a}(x) = \begin{cases} e^{-\frac{1}{1-((x-a)/\sigma)^2}} & \text{for } \left|\frac{x-a}{\sigma}\right| < 1, \\ 0 & \text{else.} \end{cases} \tag{10.11}
\]
In order to show that $\varphi_{\sigma,a}$ is a suitable test function, we have to prove its infinite differentiability, as well as the compactness of its support $M_{\varphi_{\sigma,a}}$. Let
\[
\varphi_{\sigma,a}(x) := \varphi\left(\frac{x-a}{\sigma}\right)
\quad\text{and thus}\quad
\varphi(x) = \begin{cases} e^{-\frac{1}{1-x^2}} & \text{for } |x| < 1, \\ 0 & \text{for } |x| \ge 1. \end{cases}
\]
This function is drawn in Fig. 10.1.

[Figure 10.1: Plot of the test function $\varphi(x)$, which peaks at $\varphi(0) = e^{-1} \approx 0.37$ and vanishes outside $(-1,1)$.]

⁶ Laurent Schwartz. Introduction to the Theory of Distributions. University of Toronto Press, Toronto, 1952. Collected and written by Israel Halperin.
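The defining properties of this bump function are easy to confirm numerically. A sketch in Python with numpy (the library and the sampled points are illustrative assumptions, not from the text), taking $\sigma = 1$ and $a = 0$:

```python
import numpy as np

def bump(x):
    """phi(x) = exp(-1/(1 - x^2)) for |x| < 1, else 0 (Eq. (10.11) with sigma = 1, a = 0)."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    out = np.zeros_like(x)
    inside = np.abs(x) < 1.0
    out[inside] = np.exp(-1.0 / (1.0 - x[inside]**2))
    return out

peak = bump(0.0)[0]                        # e^{-1} ~ 0.37, as in Fig. 10.1
edge = bump(np.array([-1.0, 1.0, 2.5]))    # identically zero outside (-1, 1)
near = bump(np.array([0.999]))[0]          # flattens out extremely fast toward the edge
```

The value near the boundary is astronomically small, which is the numerical shadow of all derivatives vanishing at $x = \pm 1$.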
First, note, by definition, the support $M_\varphi = (-1,1)$, because $\varphi(x)$ vanishes outside $(-1,1)$.

Second, consider the differentiability of $\varphi(x)$; that is, is $\varphi \in C^\infty(\mathbb{R})$? Note that $\varphi^{(0)} = \varphi$ is continuous, and that $\varphi^{(n)}$ is of the form
\[
\varphi^{(n)}(x) = \begin{cases} \dfrac{P_n(x)}{(x^2-1)^{2n}}\,e^{\frac{1}{x^2-1}} & \text{for } |x| < 1, \\ 0 & \text{for } |x| \ge 1, \end{cases}
\]
where $P_n(x)$ is a finite polynomial in $x$ ($\varphi(u) = e^u \Longrightarrow \varphi'(x) = \frac{d\varphi}{du}\frac{du}{dx^2}\frac{dx^2}{dx} = \varphi(u)\left(-\frac{1}{(x^2-1)^2}\right)2x$, etc.), and $[x = 1-\varepsilon] \Longrightarrow x^2 = 1 - 2\varepsilon + \varepsilon^2 \Longrightarrow x^2 - 1 = \varepsilon(\varepsilon-2)$, so that
\[
\lim_{x\uparrow 1}\varphi^{(n)}(x) = \lim_{\varepsilon\downarrow 0}\frac{P_n(1-\varepsilon)}{\varepsilon^{2n}(\varepsilon-2)^{2n}}\,e^{\frac{1}{\varepsilon(\varepsilon-2)}}
= \lim_{\varepsilon\downarrow 0}\frac{P_n(1)}{\varepsilon^{2n}2^{2n}}\,e^{-\frac{1}{2\varepsilon}}
= \left[\varepsilon = \frac{1}{R}\right]
= \lim_{R\to\infty}\frac{P_n(1)R^{2n}}{2^{2n}}\,e^{-\frac{R}{2}} = 0,
\]
because the exponential $e^{-R}$ decreases faster than any polynomial in $R$ grows.
because the power e−x of e decreases stronger than any polynomial xn .
Note that the complex continuation ϕ(z) is not an analytic function
and cannot be expanded as a Taylor series on the entire complex plane C
although it is infinitely often differentiable on the real axis; that is, although
ϕ ∈ C∞(R). This can be seen from a uniqueness theorem of complex anal-
ysis. Let B ⊆ C be a domain, and let z0 ∈ B the limit of a sequence zn ∈ B ,
zn 6= z0. Then it can be shown that, if two analytic functions f und g on B
coincide in the points zn , then they coincide on the entire domain B .
Now, take B = R and the vanishing analytic function f ; that is, f (x) = 0.
f (x) coincides with ϕ(x) only in R− Mϕ. As a result, ϕ cannot be ana-
lytic. Indeed, ϕσ,~a(x) diverges at x = a ±σ. Hence ϕ(x) cannot be Taylor
expanded, and
C∞(Rk )6=⇒⇐=analytic function
10.3.3 Test function class II

Other “good” test functions are⁷
\[
\left[\phi_{c,d}(x)\right]^{\frac{1}{n}} \tag{10.12}
\]
obtained by choosing $n \in \mathbb{N} - 0$ and $-\infty \le c < d \le \infty$ and by defining
\[
\phi_{c,d}(x) = \begin{cases} e^{-\left(\frac{1}{x-c} + \frac{1}{d-x}\right)} & \text{for } c < x < d, \\ 0 & \text{else.} \end{cases} \tag{10.13}
\]

⁷ Laurent Schwartz. Introduction to the Theory of Distributions. University of Toronto Press, Toronto, 1952. Collected and written by Israel Halperin.

If $\varphi(x)$ is a “good” test function, then
\[
x^\alpha P_n(x)\varphi(x) \tag{10.14}
\]
with any polynomial $P_n(x)$, and in particular $x^n\varphi(x)$, also is a “good” test function.
10.3.4 Test function class III: Tempered distributions and Fourier trans-
forms
A particular class of “good” test functions – having the property that they
vanish “sufficiently fast” for large arguments, but are nonzero at any finite
argument – are capable of rendering Fourier transforms of generalized
functions. Such generalized functions are called tempered distributions.
One example of a test function yielding a tempered distribution is the Gaussian function
\[
\varphi(x) = e^{-\pi x^2}. \tag{10.15}
\]
We can multiply the Gaussian function with polynomials (or take its
derivatives) and thereby obtain a particular class of test functions inducing
tempered distributions.
The Gaussian function is normalized such that
\[
\int_{-\infty}^{\infty}\varphi(x)\,dx = \int_{-\infty}^{\infty}e^{-\pi x^2}dx
\quad\left[\text{variable substitution } x = \frac{t}{\sqrt{\pi}},\; dx = \frac{dt}{\sqrt{\pi}}\right]
= \int_{-\infty}^{\infty}e^{-\pi\left(\frac{t}{\sqrt{\pi}}\right)^2}d\left(\frac{t}{\sqrt{\pi}}\right)
= \frac{1}{\sqrt{\pi}}\int_{-\infty}^{\infty}e^{-t^2}dt
= \frac{1}{\sqrt{\pi}}\sqrt{\pi} = 1. \tag{10.16}
\]
In this evaluation, we have used the Gaussian integral
\[
I = \int_{-\infty}^{\infty}e^{-x^2}dx = \sqrt{\pi}, \tag{10.17}
\]
which can be obtained by considering its square and transforming into polar coordinates $r,\theta$; that is,
\[
I^2 = \left(\int_{-\infty}^{\infty}e^{-x^2}dx\right)\left(\int_{-\infty}^{\infty}e^{-y^2}dy\right)
= \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}e^{-(x^2+y^2)}dx\,dy
= \int_0^{2\pi}\int_0^{\infty}e^{-r^2}r\,d\theta\,dr
= \int_0^{2\pi}d\theta\int_0^{\infty}e^{-r^2}r\,dr
\]
\[
= 2\pi\int_0^{\infty}e^{-r^2}r\,dr
\quad\left[u = r^2,\;\frac{du}{dr} = 2r,\; dr = \frac{du}{2r}\right]
= \pi\int_0^{\infty}e^{-u}du
= \pi\left(-e^{-u}\Big|_0^{\infty}\right) = \pi\left(-e^{-\infty} + e^0\right) = \pi. \tag{10.18}
\]
The Gaussian test function (10.15) has the advantage that, as has been shown in (9.27), with a certain kind of definition for the Fourier transform, namely $A = 2\pi$ and $B = 1$ in Eq. (9.19), its functional form does not change under Fourier transforms. More explicitly, as derived in Eqs. (9.27) and (9.28),
\[
\mathcal{F}[\varphi(x)](k) = \tilde{\varphi}(k) = \int_{-\infty}^{\infty}e^{-\pi x^2}e^{-2\pi ikx}dx = e^{-\pi k^2}. \tag{10.19}
\]
Just as for differentiation, discussed later, it is possible to “shift” or “transfer” the Fourier transformation from the distribution to the test function as follows. Suppose we are interested in the Fourier transform $\mathcal{F}[F]$ of some distribution $F$. Then, with the convention $A = 2\pi$ and $B = 1$ adopted in Eq. (9.19), we must consider
\[
\langle\mathcal{F}[F],\varphi\rangle \equiv \mathcal{F}[F][\varphi] = \int_{-\infty}^{\infty}\mathcal{F}[F](x)\varphi(x)\,dx
= \int_{-\infty}^{\infty}\left[\int_{-\infty}^{\infty}F(y)e^{-2\pi ixy}dy\right]\varphi(x)\,dx
\]
\[
= \int_{-\infty}^{\infty}F(y)\left[\int_{-\infty}^{\infty}\varphi(x)e^{-2\pi ixy}dx\right]dy
= \int_{-\infty}^{\infty}F(y)\mathcal{F}[\varphi](y)\,dy
= \langle F,\mathcal{F}[\varphi]\rangle \equiv F[\mathcal{F}[\varphi]]. \tag{10.20}
\]
In the same way we obtain the Fourier inversion for distributions,
\[
\langle\mathcal{F}^{-1}[\mathcal{F}[F]],\varphi\rangle = \langle\mathcal{F}[\mathcal{F}^{-1}[F]],\varphi\rangle = \langle F,\varphi\rangle. \tag{10.21}
\]
Note that, in the case of test functions with compact support – say, $\varphi(x) = 0$ for $|x| > a > 0$ and finite $a$ – if the order of integrals is exchanged, the “new test function”
\[
\mathcal{F}[\varphi](y) = \int_{-\infty}^{\infty}\varphi(x)e^{-2\pi ixy}dx = \int_{-a}^{a}\varphi(x)e^{-2\pi ixy}dx \tag{10.22}
\]
obtained through a Fourier transform of $\varphi(x)$, does not necessarily inherit a compact support from $\varphi(x)$; in particular, $\mathcal{F}[\varphi](y)$ may not necessarily vanish [i.e. $\mathcal{F}[\varphi](y) = 0$] for $|y| > a > 0$.
Let us, with these conventions, compute the Fourier transform of the tempered Dirac delta distribution. Note that, by the very definition of the Dirac delta distribution,
\[
\langle\mathcal{F}[\delta],\varphi\rangle = \langle\delta,\mathcal{F}[\varphi]\rangle = \mathcal{F}[\varphi](0)
= \int_{-\infty}^{\infty}e^{-2\pi ix\cdot 0}\varphi(x)\,dx
= \int_{-\infty}^{\infty}1\,\varphi(x)\,dx = \langle 1,\varphi\rangle. \tag{10.23}
\]
Thus we may identify $\mathcal{F}[\delta]$ with 1; that is,
\[
\mathcal{F}[\delta] = 1. \tag{10.24}
\]
This is an extreme example of an infinitely concentrated object whose Fourier transform is infinitely spread out.

A very similar calculation renders the tempered distribution associated with the Fourier transform of the shifted Dirac delta distribution,
\[
\mathcal{F}[\delta_y] = e^{-2\pi ixy}. \tag{10.25}
\]
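The identification $\mathcal{F}[\delta] = 1$ can be made plausible numerically by transforming a narrow member of the Gaussian delta sequence (10.33). A sketch in Python with numpy (the library, the grid, the choice $n = 50$, and the tolerance are illustrative assumptions, not from the text):

```python
import numpy as np

n = 50.0
x = np.linspace(-2, 2, 400001)
dx = x[1] - x[0]
delta_n = (n / np.sqrt(np.pi)) * np.exp(-(n*x)**2)   # Gaussian sequence (10.33)

ks = np.linspace(-3, 3, 25)
# A = 2*pi, B = 1 convention: kernel exp(-2*pi*i*k*x)
ft = np.array([(delta_n * np.exp(-2j*np.pi*k*x)).sum() * dx for k in ks])
dev = np.max(np.abs(ft - 1.0))   # exact: exp(-pi^2 k^2 / n^2), close to 1
```

As $n$ grows, the transform flattens toward the constant 1 on any fixed range of $k$, illustrating the “infinitely concentrated / infinitely spread out” duality.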
Alas we shall pursue a different, more conventional, approach, sketched
in Section 10.5.
10.3.5 Test function class IV: $C^\infty$

If the generalized functions are “sufficiently concentrated” so that they themselves guarantee that the terms $g(\infty)\varphi(\infty)$ as well as $g(-\infty)\varphi(-\infty)$ in Eq. (10.10) vanish, we may just require the test functions to be infinitely differentiable – and thus in $C^\infty$ – for the sake of making possible a transfer of differentiation. (Indeed, if we are willing to sacrifice even infinite differentiability, we can widen this class of test functions even more.) We may, for instance, employ constant functions such as $\varphi(x) = 1$ as test functions, thus giving meaning to, for instance, $\langle\delta,1\rangle = \int_{-\infty}^{\infty}\delta(x)\,dx$, or $\langle f(x)\delta,1\rangle = \langle f(0)\delta,1\rangle = f(0)\int_{-\infty}^{\infty}\delta(x)\,dx$.
10.4 Derivative of distributions

Equipped with “good” test functions which have a finite support and are infinitely often (or at least sufficiently often) differentiable, we can now give meaning to the transferal of differential quotients from the objects entering the integral towards the test function by partial integration. First note again that $(uv)' = u'v + uv'$ and thus $\int(uv)' = \int u'v + \int uv'$ and finally $\int u'v = \int(uv)' - \int uv'$. Hence, by identifying $u$ with the distribution $F$, and $v$ with the test function $\varphi$, we obtain
\[
\langle F',\varphi\rangle \equiv F'[\varphi] = \int_{-\infty}^{\infty}\left(\frac{d}{dx}F(x)\right)\varphi(x)\,dx
= F(x)\varphi(x)\Big|_{x=-\infty}^{\infty} - \int_{-\infty}^{\infty}F(x)\left(\frac{d}{dx}\varphi(x)\right)dx
= -\int_{-\infty}^{\infty}F(x)\left(\frac{d}{dx}\varphi(x)\right)dx
= -F[\varphi'] \equiv -\langle F,\varphi'\rangle. \tag{10.26}
\]
By induction we obtain
\[
\left\langle\frac{d^n}{dx^n}F,\varphi\right\rangle \equiv \langle F^{(n)},\varphi\rangle \equiv F^{(n)}[\varphi] = (-1)^nF[\varphi^{(n)}] \equiv (-1)^n\langle F,\varphi^{(n)}\rangle. \tag{10.27}
\]
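The transfer rule can be made concrete for $F = H$, the Heaviside function, whose distributional derivative is $\delta$: then $-\langle H,\varphi'\rangle = -\int_0^\infty\varphi'(x)\,dx = \varphi(0)$. A minimal numerical sketch in Python with numpy (the library, the grid, and the particular test function are illustrative assumptions, not from the text):

```python
import numpy as np

x = np.linspace(0.0, 30.0, 300001)      # region where H(x) = 1; H = 0 for x < 0
dx = x[1] - x[0]
phi = np.exp(-(x - 2.0)**2)             # smooth, rapidly decaying test function
dphi = -2.0 * (x - 2.0) * phi           # phi'(x)

lhs = -(dphi.sum() * dx)                # -<H, phi'> = -int_0^infty phi'(x) dx
rhs = np.exp(-4.0)                      # phi(0), i.e. <H', phi> = <delta, phi>
```

The quadrature reproduces $\varphi(0)$ without ever differentiating $H$ itself, which is exactly the point of (10.26).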
For the sake of a further example using adjoint identities, swapping products and differentiations forth and back in the $F$–$\varphi$ pairing, let us compute $g(x)\delta'(x)$ where $g \in C^\infty$; that is,
\[
\langle g\delta',\varphi\rangle = \langle\delta', g\varphi\rangle = -\langle\delta,(g\varphi)'\rangle
= -\langle\delta, g\varphi' + g'\varphi\rangle = -g(0)\varphi'(0) - g'(0)\varphi(0)
= \langle g(0)\delta' - g'(0)\delta,\varphi\rangle. \tag{10.28}
\]
Therefore,
\[
g(x)\delta'(x) = g(0)\delta'(x) - g'(0)\delta(x). \tag{10.29}
\]
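Identity (10.29) can be checked numerically by smearing $\delta'$ with the derivative of the Gaussian delta sequence (10.33). A sketch in Python with numpy (the library, the grid, $n = 40$, and the sample functions $g$, $\varphi$ are illustrative assumptions, not from the text):

```python
import numpy as np

n = 40.0
x = np.linspace(-1.0, 1.0, 200001)
dx = x[1] - x[0]
# derivative of the Gaussian delta sequence (10.33): delta_n'(x)
ddelta_n = (-2.0 * n**3 * x / np.sqrt(np.pi)) * np.exp(-(n*x)**2)

g = np.exp(x)                   # g(0) = 1, g'(0) = 1
phi = np.cos(x) + np.sin(2*x)   # phi(0) = 1, phi'(0) = 2
lhs = (g * ddelta_n * phi).sum() * dx
rhs = -1.0 * 2.0 - 1.0 * 1.0    # -g(0) phi'(0) - g'(0) phi(0) = -3
```

The smeared pairing lands on $\langle g(0)\delta' - g'(0)\delta,\varphi\rangle$ up to an $O(1/n^2)$ smearing error.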
10.5 Fourier transform of distributions

We mention without proof that, if $f_n(x)$ is a sequence of functions converging, for $n \to \infty$, toward a function $f$ in the functional sense (i.e. via integration of $f_n$ and $f$ with “good” test functions), then the Fourier transform $\tilde{f}$ of $f$ can be defined by⁸
\[
\mathcal{F}[f(x)] = \tilde{f}(k) = \lim_{n\to\infty}\int_{-\infty}^{\infty}f_n(x)e^{-ikx}dx. \tag{10.30}
\]

⁸ M. J. Lighthill. Introduction to Fourier Analysis and Generalized Functions. Cambridge University Press, Cambridge, 1958; Kenneth B. Howell. Principles of Fourier Analysis. Chapman & Hall/CRC, Boca Raton, London, New York, Washington, D.C., 2001; and B. L. Burrows and D. J. Colwell. The Fourier transform of the unit step function. International Journal of Mathematical Education in Science and Technology, 21(4):629–635, 1990. DOI: 10.1080/0020739900210418. URL http://dx.doi.org/10.1080/0020739900210418.
While this represents a method to calculate Fourier transforms of dis-
tributions, there are other, more direct ways of obtaining them. These
were mentioned earlier. In what follows, we shall enumerate the Fourier
transform of some species, mostly by complex analysis.
10.6 Dirac delta function

Historically, the Heaviside step function – which will be discussed later – was first used for the description of electromagnetic pulses. In the days when Dirac developed quantum mechanics there was a need to define “singular scalar products” such as “$\langle x \mid y\rangle = \delta(x-y)$,” with some generalization of the Kronecker delta function $\delta_{ij}$, which is zero whenever $x \neq y$ and “large enough” needle shaped (see Fig. 10.2) to yield unity when integrated over the entire reals; that is, “$\int_{-\infty}^{\infty}\langle x \mid y\rangle\,dy = \int_{-\infty}^{\infty}\delta(x-y)\,dy = 1$.”
10.6.1 Delta sequence

One of the first attempts to formalize these objects with “large discontinuities” was in terms of functional limits. Take, for instance, the delta sequence, which is a sequence of strongly peaked functions for which in some limit the sequences $\delta_n(x-y)$ with, for instance,
\[
\delta_n(x-y) = \begin{cases} n & \text{for } y - \frac{1}{2n} < x < y + \frac{1}{2n}, \\ 0 & \text{else,} \end{cases} \tag{10.31}
\]
become the delta function $\delta(x-y)$. That is, in the functional sense,
\[
\lim_{n\to\infty}\delta_n(x-y) = \delta(x-y). \tag{10.32}
\]
Note that the area of this particular $\delta_n(x-y)$ above the $x$-axis is independent of $n$, since its width is $1/n$ and its height is $n$.
In an ad hoc sense, other delta sequences are
\[
\delta_n(x) = \frac{n}{\sqrt{\pi}}e^{-n^2x^2}, \tag{10.33}
\]
\[
= \frac{1}{\pi}\frac{\sin(nx)}{x}, \tag{10.34}
\]
\[
= (1 \mp i)\left(\frac{n}{2\pi}\right)^{\frac{1}{2}}e^{\pm inx^2}, \tag{10.35}
\]
\[
= \frac{1}{\pi x}\frac{e^{inx} - e^{-inx}}{2i}, \tag{10.36}
\]
\[
= \frac{1}{\pi}\frac{ne^{-x^2}}{1 + n^2x^2}, \tag{10.37}
\]
\[
= \frac{1}{2\pi}\int_{-n}^{n}e^{ixt}dt = \frac{1}{2\pi ix}e^{ixt}\Big|_{-n}^{n}, \tag{10.38}
\]
\[
= \frac{1}{2\pi}\frac{\sin\left[\left(n + \frac{1}{2}\right)x\right]}{\sin\left(\frac{1}{2}x\right)}, \tag{10.39}
\]
\[
= \frac{1}{\pi}\frac{n}{1 + n^2x^2}, \tag{10.40}
\]
\[
= \frac{n}{\pi}\left(\frac{\sin(nx)}{nx}\right)^2. \tag{10.41}
\]
Other commonly used limit forms of the $\delta$-function are the Gaussian, Lorentzian, and Dirichlet forms
\[
\delta_\varepsilon(x) = \frac{1}{\sqrt{\pi}\varepsilon}e^{-\frac{x^2}{\varepsilon^2}}, \tag{10.42}
\]
\[
= \frac{1}{\pi}\frac{\varepsilon}{x^2 + \varepsilon^2} = \frac{1}{2\pi i}\left(\frac{1}{x - i\varepsilon} - \frac{1}{x + i\varepsilon}\right), \tag{10.43}
\]
\[
= \frac{1}{\pi x}\sin\frac{x}{\varepsilon}, \tag{10.44}
\]
respectively. Note that (10.42) corresponds to (10.33), (10.43) corresponds to (10.40), and (10.44) corresponds to (10.34), in each case with $\varepsilon = n^{-1}$. Again, the limit
\[
\delta(x) = \lim_{\varepsilon\to 0}\delta_\varepsilon(x) \tag{10.45}
\]
has to be understood in the functional sense (see below).
[Figure 10.2: Dirac's $\delta$-function as a “needle shaped” generalized function.]

[Figure 10.3: Delta sequence $\delta_1(x), \delta_2(x), \delta_3(x), \delta_4(x), \ldots$ approximating Dirac's $\delta$-function as a more and more “needle shaped” generalized function.]
Naturally, such “needle shaped functions” were viewed suspiciously by many mathematicians at first, but later they embraced these types of functions⁹ by developing a theory of functional analysis or distributions.

⁹ I. M. Gel'fand and G. E. Shilov. Generalized Functions. Vol. 1: Properties and Operations. Academic Press, New York, 1964. Translated from the Russian by Eugene Saletan.
10.6.2 $\delta[\varphi]$ distribution

In particular, the $\delta$ function maps to
\[
\int_{-\infty}^{\infty}\delta(x-y)\varphi(x)\,dx = \varphi(y). \tag{10.46}
\]
A common way of expressing this is by writing
\[
\delta(x-y) \longleftrightarrow \delta_y[\varphi] = \varphi(y). \tag{10.47}
\]
For $y = 0$, we just obtain
\[
\delta(x) \longleftrightarrow \delta[\varphi] \stackrel{\text{def}}{=} \delta_0[\varphi] = \varphi(0). \tag{10.48}
\]
Let us see if the sequence $\delta_n$ with
\[
\delta_n(x-y) = \begin{cases} n & \text{for } y - \frac{1}{2n} < x < y + \frac{1}{2n}, \\ 0 & \text{else,} \end{cases}
\]
defined in Eq. (10.31) and depicted in Fig. 10.3 is a delta sequence; that is, if, for large $n$, it converges to $\delta$ in a functional sense. In order to verify this claim, we have to integrate $\delta_n(x)$ with “good” test functions $\varphi(x)$ and take the limit $n \to \infty$; if the result is $\varphi(0)$, then we can identify $\delta_n(x)$ in this limit with $\delta(x)$ (in the functional sense). Since $\delta_n(x)$ is uniformly convergent, we can exchange the limit with the integration; thus
\[
\lim_{n\to\infty}\int_{-\infty}^{\infty}\delta_n(x-y)\varphi(x)\,dx
\quad\left[\text{variable transformation: } x' = x - y,\; x = x' + y,\; dx' = dx,\; -\infty \le x' \le \infty\right]
\]
\[
= \lim_{n\to\infty}\int_{-\infty}^{\infty}\delta_n(x')\varphi(x'+y)\,dx'
= \lim_{n\to\infty}\int_{-\frac{1}{2n}}^{\frac{1}{2n}} n\,\varphi(x'+y)\,dx'
\]
\[
\left[\text{variable transformation: } u = 2nx',\; x' = \frac{u}{2n},\; du = 2n\,dx',\; -1 \le u \le 1\right]
\]
\[
= \lim_{n\to\infty}\int_{-1}^{1} n\,\varphi\left(\frac{u}{2n}+y\right)\frac{du}{2n}
= \lim_{n\to\infty}\frac{1}{2}\int_{-1}^{1}\varphi\left(\frac{u}{2n}+y\right)du
= \frac{1}{2}\int_{-1}^{1}\lim_{n\to\infty}\varphi\left(\frac{u}{2n}+y\right)du
= \frac{1}{2}\varphi(y)\int_{-1}^{1}du
= \varphi(y). \tag{10.49}
\]
Hence, in the functional sense, this limit yields the $\delta$-function. Thus we obtain
\[
\lim_{n\to\infty}\delta_n[\varphi] = \delta[\varphi] = \varphi(0).
\]
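The same convergence can be watched numerically. A sketch in Python with numpy (the library, grid, shift $y$, and test function are illustrative assumptions, not from the text), pairing the box sequence (10.31) with a smooth $\varphi$ for growing $n$:

```python
import numpy as np

x = np.linspace(-5.0, 5.0, 1_000_001)
dx = x[1] - x[0]
phi = lambda t: np.exp(-t**2) * np.cos(t)   # a smooth test function
y = 0.5

def pair(n):
    """Riemann sum of  int delta_n(x - y) phi(x) dx  for the box sequence (10.31)."""
    dn = np.where(np.abs(x - y) < 1.0/(2*n), float(n), 0.0)
    return (dn * phi(x)).sum() * dx

vals = [pair(n) for n in (10, 100, 1000)]
target = float(np.exp(-y**2) * np.cos(y))   # phi(y)
```

All three pairings cluster around $\varphi(y)$, with the residual limited by the smearing width $1/n$ and the grid resolution.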
10.6.3 Useful formulæ involving $\delta$

The following formulæ are mostly enumerated without proofs.
\[
\delta(x) = \delta(-x) \tag{10.50}
\]
For a proof, note that $\varphi(x)\delta(-x) = \varphi(0)\delta(-x)$, and that, in particular, with the substitution $x \to -x$,
\[
\int_{-\infty}^{\infty}\delta(-x)\varphi(x)\,dx = \varphi(0)\int_{\infty}^{-\infty}\delta(-(-x))\,d(-x)
= -\varphi(0)\int_{\infty}^{-\infty}\delta(x)\,dx
= \varphi(0)\int_{-\infty}^{\infty}\delta(x)\,dx. \tag{10.51}
\]
δ(x) = limε→0
H(x +ε)−H(x)
ε= d
d xH(x) (10.52)
f (x)δ(x −x0) = f (x0)δ(x −x0) (10.53)
This results from a direct application of Eq. (10.4); that is,
f (x)δ[ϕ] = δ[f ϕ
]= f (0)ϕ(0) = f (0)δ[ϕ], (10.54)
and
f (x)δx0 [ϕ] = δx0
[f ϕ
]= f (x0)ϕ(x0) = f (x0)δx0 [ϕ]. (10.55)
For a more explicit direct proof, note that
∫_{-∞}^{∞} f(x)δ(x − x_0)ϕ(x) dx = ∫_{-∞}^{∞} δ(x − x_0)(f(x)ϕ(x)) dx = f(x_0)ϕ(x_0), (10.56)
and hence f δ_{x_0}[ϕ] = f(x_0)δ_{x_0}[ϕ].
xδ(x) = 0 (10.57)
For a ≠ 0,
δ(ax) = (1/|a|) δ(x), (10.58)
and, more generally,
δ(a(x − x_0)) = (1/|a|) δ(x − x_0). (10.59)
For the sake of a proof, consider the case a > 0 as well as x_0 = 0 first:
∫_{-∞}^{∞} δ(ax)ϕ(x) dx
  [variable substitution y = ax, x = y/a]
= (1/a) ∫_{-∞}^{∞} δ(y) ϕ(y/a) dy
= (1/a) ϕ(0); (10.60)
and, second, the case a < 0:
∫_{-∞}^{∞} δ(ax)ϕ(x) dx
  [variable substitution y = ax, x = y/a]
= (1/a) ∫_{∞}^{-∞} δ(y) ϕ(y/a) dy
= −(1/a) ∫_{-∞}^{∞} δ(y) ϕ(y/a) dy
= −(1/a) ϕ(0). (10.61)
In the case of x_0 ≠ 0 and a > 0, we obtain
∫_{-∞}^{∞} δ(a(x − x_0))ϕ(x) dx
  [variable substitution y = a(x − x_0), x = y/a + x_0]
= (1/a) ∫_{-∞}^{∞} δ(y) ϕ(y/a + x_0) dy
= (1/a) ϕ(x_0). (10.62)
If f(x) has a simple zero x_0 in the integration interval,
then
δ(f(x)) = (1/|f′(x_0)|) δ(x − x_0). (10.63)
More generally, if f has only simple roots and f ′ is nonzero there,
δ(f(x)) = Σ_{x_i} δ(x − x_i)/|f′(x_i)|, (10.64)
where the sum extends over all simple roots x_i in the integration interval.
In particular,
δ(x^2 − x_0^2) = (1/(2|x_0|)) [δ(x − x_0) + δ(x + x_0)]. (10.65)
For a proof, note that, since f has only simple roots, it can be expanded
around these roots by
f(x) ≈ (x − x_0) f′(x_0)
with nonzero f ′(x0) ∈ R. By identifying f ′(x0) with a in Eq. (10.58) we
obtain Eq. (10.64).
δ′(f(x)) = Σ_{i=0}^{N} [f″(x_i)/|f′(x_i)|^3] δ(x − x_i) + Σ_{i=0}^{N} [f′(x_i)/|f′(x_i)|^3] δ′(x − x_i) (10.66)
|x|δ(x2) = δ(x) (10.67)
−xδ′(x) = δ(x), (10.68)
which is a direct consequence of Eq. (10.29).
δ^(n)(−x) = (−1)^n δ^(n)(x), (10.69)
where the index (n) denotes n-fold differentiation, can be proven by
∫_{-∞}^{∞} δ^(n)(−x)ϕ(x) dx = ∫_{-∞}^{∞} δ(−x)ϕ^(n)(x) dx
= ∫_{-∞}^{∞} δ(x)ϕ^(n)(x) dx
= ϕ^(n)(0)
= (−1)^n ∫_{-∞}^{∞} δ^(n)(x)ϕ(x) dx. (10.70)
x^{m+1} δ^(m)(x) = 0, (10.71)
where the index (m) denotes m-fold differentiation;
x^2 δ′(x) = 0, (10.72)
which is a direct consequence of Eq. (10.29).
More generally,
x^n δ^(m)(x)[ϕ = 1] = ∫_{-∞}^{∞} x^n δ^(m)(x) dx = (−1)^n n! δ_{nm}, (10.73)
which can be demonstrated by considering
⟨x^n δ^(m) | 1⟩ = ⟨δ^(m) | x^n⟩
= (−1)^n ⟨δ | (d^m/dx^m) x^n⟩
= (−1)^n n! δ_{nm} ⟨δ | 1⟩
= (−1)^n n! δ_{nm}, (10.74)
since ⟨δ | 1⟩ = 1.
d^2/dx^2 [x H(x)] = (d/dx)[H(x) + x δ(x)] = (d/dx) H(x) = δ(x), (10.75)
since x δ(x) = 0.
If δ^3(r⃗) = δ(x)δ(y)δ(z) with r⃗ = (x, y, z), then
δ^3(r⃗) = δ(x)δ(y)δ(z) = −(1/4π) Δ (1/r), (10.76)
δ^3(r⃗) = −(1/4π) (Δ + k^2) (e^{ikr}/r), (10.77)
δ^3(r⃗) = −(1/4π) (Δ + k^2) (cos(kr)/r). (10.78)
In quantum field theory, phase space integrals of the form
1/(2E) = ∫ dp_0 H(p_0) δ(p^2 − m^2), (10.79)
with E = (p⃗^2 + m^2)^{1/2}, are exploited.
10.6.4 Fourier transform of δ
The Fourier transform of the δ-function can be obtained straightforwardly
by insertion into Eq. (9.19); that is, with A = B = 1,
F[δ(x)] = δ̃(k) = ∫_{-∞}^{∞} δ(x) e^{−ikx} dx
= e^{−i0k} ∫_{-∞}^{∞} δ(x) dx
= 1,
and thus
F^{−1}[δ̃(k)] = F^{−1}[1] = δ(x)
= (1/2π) ∫_{-∞}^{∞} e^{ikx} dk
= (1/2π) ∫_{-∞}^{∞} [cos(kx) + i sin(kx)] dk
= (1/π) ∫_{0}^{∞} cos(kx) dk + (i/2π) ∫_{-∞}^{∞} sin(kx) dk
= (1/π) ∫_{0}^{∞} cos(kx) dk. (10.80)
That is, the Fourier transform of the δ-function is just a constant: δ-
spiked signals carry all frequencies in them. Note also that F[δ(x − y)] = e^{−iky} F[δ(x)].
From Eq. (10.80) we can compute
F[1] = 1̃(k) = ∫_{-∞}^{∞} e^{−ikx} dx
  [variable substitution x → −x]
= ∫_{+∞}^{-∞} e^{ikx} d(−x)
= −∫_{+∞}^{-∞} e^{ikx} dx
= ∫_{-∞}^{+∞} e^{ikx} dx
= 2πδ(k). (10.81)
10.6.5 Eigenfunction expansion of δ
The δ-function can be expressed in terms of, or “decomposed” into, var-
ious eigenfunction expansions. We mention without proof (see Dean G. Duffy,
Green’s Functions with Applications, Chapman and Hall/CRC, Boca Raton, 2001)
that, for 0 < x, x_0 < L, two such expansions in terms of trigonometric functions are
δ(x − x_0) = (2/L) Σ_{k=1}^{∞} sin(πkx_0/L) sin(πkx/L)
= 1/L + (2/L) Σ_{k=1}^{∞} cos(πkx_0/L) cos(πkx/L). (10.82)
This “decomposition of unity” is analogous to the expansion of the
identity in terms of orthogonal projectors E_i (for one-dimensional projec-
tors, E_i = |i⟩⟨i|) encountered in the spectral theorem (Section 4.27.1).
Other decompositions are in terms of orthonormal (Legendre) polynomi-
als (cf. Sect. 14.6 on page 233), or other functions of mathematical physics
discussed later.
10.6.6 Delta function expansion
Just like “slowly varying” functions can be expanded into a Taylor series
in terms of the power functions x^n, highly localized functions can be ex-
panded in terms of derivatives of the δ-function in the form (see Ismo V. Lindell,
“Delta function expansions, complex delta functions and the steepest descent
method,” American Journal of Physics 61(5), 438–442, 1993, DOI: 10.1119/1.17238)
f(x) ∼ f_0 δ(x) + f_1 δ′(x) + f_2 δ″(x) + ··· + f_n δ^(n)(x) + ··· = Σ_{k=0}^{∞} f_k δ^(k)(x),
with f_k = ((−1)^k / k!) ∫_{-∞}^{∞} f(y) y^k dy. (10.83)
The sign “∼” denotes the functional character of this “equation” (10.83).
The delta expansion (10.83) can be proven by considering a smooth
function g(x), and integrating over its expansion; that is,
∫_{-∞}^{∞} g(x) f(x) dx
= ∫_{-∞}^{∞} g(x) [f_0 δ(x) + f_1 δ′(x) + f_2 δ″(x) + ··· + f_n δ^(n)(x) + ···] dx
= f_0 g(0) − f_1 g′(0) + f_2 g″(0) − ··· + (−1)^n f_n g^(n)(0) + ···, (10.84)
and comparing the coefficients in (10.84) with the coefficients of the Taylor
series expansion of g at x = 0:
∫_{-∞}^{∞} g(x) f(x) dx = ∫_{-∞}^{∞} [g(0) + x g′(0) + ··· + (x^n/n!) g^(n)(0) + ···] f(x) dx
= g(0) ∫_{-∞}^{∞} f(x) dx + g′(0) ∫_{-∞}^{∞} x f(x) dx + ··· + g^(n)(0) ∫_{-∞}^{∞} (x^n/n!) f(x) dx + ···. (10.85)
10.7 Cauchy principal value
10.7.1 Definition
The (Cauchy) principal value P (sometimes also denoted by p.v.) is a value
associated with a (divergent) integral as follows:
P ∫_a^b f(x) dx = lim_{ε→0⁺} [∫_a^{c−ε} f(x) dx + ∫_{c+ε}^b f(x) dx]
= lim_{ε→0⁺} ∫_{[a,c−ε]∪[c+ε,b]} f(x) dx, (10.86)
if c is the “location” of a singularity of f (x).
For example, the integral ∫_{-1}^{1} dx/x diverges, but
P ∫_{-1}^{1} dx/x = lim_{ε→0⁺} [∫_{-1}^{−ε} dx/x + ∫_{+ε}^{1} dx/x]
  [variable substitution x → −x in the first integral]
= lim_{ε→0⁺} [∫_{+1}^{+ε} dx/x + ∫_{+ε}^{1} dx/x]
= lim_{ε→0⁺} [log ε − log 1 + log 1 − log ε]
= 0. (10.87)
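The symmetric cancellation in (10.87) can be seen numerically (a sketch, not part of the text; the cutoff ε and grid size are arbitrary choices). With symmetric cutoffs the two halves cancel; shifting one cutoff changes the result, which is why the principal value prescription matters:

```python
import numpy as np

def trap(y, x):
    """Trapezoidal rule on a given grid."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

eps, m = 1e-3, 200_000
x_neg = np.linspace(-1.0, -eps, m)      # left of the singularity at 0
x_pos = np.linspace(eps, 1.0, m)        # right of it, with the SAME cutoff
pv = trap(1.0 / x_neg, x_neg) + trap(1.0 / x_pos, x_pos)
# pv ≈ 0: the divergences cancel for symmetric cutoffs

x_skew = np.linspace(2 * eps, 1.0, m)   # asymmetric cutoff +2*eps instead of +eps
skew = trap(1.0 / x_neg, x_neg) + trap(1.0 / x_skew, x_skew)
# skew ≈ -log(2): without symmetric excision the "value" changes
```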
10.7.2 Principal value and pole function 1/x distribution

The “standalone function” 1/x does not define a distribution since it is not
integrable in the vicinity of x = 0. This issue can be “alleviated” or “circum-
vented” by considering the principal value P(1/x). In this way the principal
value can be transferred to the context of distributions by defining a princi-
pal value distribution in a functional sense by
P(1/x)[ϕ] = lim_{ε→0⁺} ∫_{|x|>ε} (1/x) ϕ(x) dx
= lim_{ε→0⁺} [∫_{-∞}^{−ε} (1/x) ϕ(x) dx + ∫_{+ε}^{∞} (1/x) ϕ(x) dx]
  [variable substitution x → −x in the first integral]
= lim_{ε→0⁺} [∫_{+∞}^{+ε} (1/x) ϕ(−x) dx + ∫_{+ε}^{∞} (1/x) ϕ(x) dx]
= lim_{ε→0⁺} [−∫_{+ε}^{∞} (1/x) ϕ(−x) dx + ∫_{+ε}^{∞} (1/x) ϕ(x) dx]
= lim_{ε→0⁺} ∫_{ε}^{+∞} [ϕ(x) − ϕ(−x)]/x dx
= ∫_{0}^{+∞} [ϕ(x) − ϕ(−x)]/x dx. (10.88)
In the functional sense, (1/x)[ϕ] can be interpreted as a principal value.
That is,
(1/x)[ϕ] = ∫_{-∞}^{∞} (1/x) ϕ(x) dx
= ∫_{-∞}^{0} (1/x) ϕ(x) dx + ∫_{0}^{∞} (1/x) ϕ(x) dx
  [variable substitution x → −x, dx → −dx in the first integral]
= ∫_{+∞}^{0} (1/(−x)) ϕ(−x) d(−x) + ∫_{0}^{∞} (1/x) ϕ(x) dx
= ∫_{+∞}^{0} (1/x) ϕ(−x) dx + ∫_{0}^{∞} (1/x) ϕ(x) dx
= −∫_{0}^{∞} (1/x) ϕ(−x) dx + ∫_{0}^{∞} (1/x) ϕ(x) dx
= ∫_{0}^{∞} [ϕ(x) − ϕ(−x)]/x dx
= P(1/x)[ϕ], (10.89)
where in the last step the principal value distribution (10.88) has been
used.
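The regularized form in the last lines of (10.89) is directly computable. As a numerical sketch (not part of the text; the test function ϕ(x) = x·e^{−x²} is an arbitrary choice for which the odd part gives (ϕ(x) − ϕ(−x))/x = 2e^{−x²}, so the exact value is √π):

```python
import numpy as np

phi = lambda x: x * np.exp(-x**2)       # hypothetical test function

x = np.linspace(1e-8, 10.0, 1_000_001)
dx = x[1] - x[0]
integrand = (phi(x) - phi(-x)) / x      # the odd-part form of Eq. (10.88)
pv = float(np.sum(integrand[1:] + integrand[:-1]) * dx / 2.0)
# pv ≈ sqrt(pi) ≈ 1.7724539
```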
10.8 Absolute value distribution
The distribution associated with the absolute value |x| is defined by
|x|[ϕ] = ∫_{-∞}^{∞} |x| ϕ(x) dx. (10.90)
|x|[ϕ] can be evaluated and represented as follows:
|x|[ϕ] = ∫_{-∞}^{∞} |x| ϕ(x) dx
= ∫_{-∞}^{0} (−x) ϕ(x) dx + ∫_{0}^{∞} x ϕ(x) dx
= −∫_{-∞}^{0} x ϕ(x) dx + ∫_{0}^{∞} x ϕ(x) dx
  [variable substitution x → −x, dx → −dx in the first integral]
= −∫_{+∞}^{0} x ϕ(−x) dx + ∫_{0}^{∞} x ϕ(x) dx
= ∫_{0}^{∞} x ϕ(−x) dx + ∫_{0}^{∞} x ϕ(x) dx
= ∫_{0}^{∞} x [ϕ(x) + ϕ(−x)] dx. (10.91)
10.9 Logarithm distribution
10.9.1 Definition
Let, for x ≠ 0,
log|x|[ϕ] = ∫_{-∞}^{∞} log|x| ϕ(x) dx
= ∫_{-∞}^{0} log(−x) ϕ(x) dx + ∫_{0}^{∞} log x ϕ(x) dx
  [variable substitution x → −x, dx → −dx in the first integral]
= ∫_{+∞}^{0} log(−(−x)) ϕ(−x) d(−x) + ∫_{0}^{∞} log x ϕ(x) dx
= −∫_{+∞}^{0} log x ϕ(−x) dx + ∫_{0}^{∞} log x ϕ(x) dx
= ∫_{0}^{∞} log x ϕ(−x) dx + ∫_{0}^{∞} log x ϕ(x) dx
= ∫_{0}^{∞} log x [ϕ(x) + ϕ(−x)] dx. (10.92)
10.9.2 Connection with pole function
Note that
P(1/x)[ϕ] = (d/dx) log|x| [ϕ], (10.93)
and thus for the principal value of a pole of degree n,
P(1/x^n)[ϕ] = [(−1)^{n−1}/(n−1)!] (d^n/dx^n) log|x| [ϕ]. (10.94)
For a proof, consider the functional derivative log′|x|[ϕ] of log|x|[ϕ] by
insertion into Eq. (10.92); that is,
log′|x|[ϕ] = ∫_{-∞}^{0} [d log(−x)/dx] ϕ(x) dx + ∫_{0}^{∞} [d log x/dx] ϕ(x) dx
= ∫_{-∞}^{0} (−1/(−x)) ϕ(x) dx + ∫_{0}^{∞} (1/x) ϕ(x) dx
= ∫_{-∞}^{0} (1/x) ϕ(x) dx + ∫_{0}^{∞} (1/x) ϕ(x) dx
  [variable substitution x → −x, dx → −dx in the first integral]
= ∫_{+∞}^{0} (1/(−x)) ϕ(−x) d(−x) + ∫_{0}^{∞} (1/x) ϕ(x) dx
= ∫_{+∞}^{0} (1/x) ϕ(−x) dx + ∫_{0}^{∞} (1/x) ϕ(x) dx
= −∫_{0}^{∞} (1/x) ϕ(−x) dx + ∫_{0}^{∞} (1/x) ϕ(x) dx
= ∫_{0}^{∞} [ϕ(x) − ϕ(−x)]/x dx
= P(1/x)[ϕ]. (10.95)
10.10 Pole function 1/x^n distribution

For n ≥ 2, the integral over 1/x^n is undefined even if we take the principal
value. Hence the direct route to an evaluation is blocked, and we have to
take an indirect approach via derivatives of 1/x (see Thomas Sommer,
Verallgemeinerte Funktionen, unpublished manuscript, 2012). Thus, let
(1/x^2)[ϕ] = −(d/dx)(1/x)[ϕ] = (1/x)[ϕ′]
= ∫_{0}^{∞} (1/x) [ϕ′(x) − ϕ′(−x)] dx
= P(1/x)[ϕ′]. (10.96)
Also,
(1/x^3)[ϕ] = −(1/2)(d/dx)(1/x^2)[ϕ] = (1/2)(1/x^2)[ϕ′] = (1/2)(1/x)[ϕ″]
= (1/2) ∫_{0}^{∞} (1/x) [ϕ″(x) − ϕ″(−x)] dx
= (1/2) P(1/x)[ϕ″]. (10.97)
More generally, for n > 1, by induction, using (10.96) as induction basis,
(1/x^n)[ϕ] = −(1/(n−1))(d/dx)(1/x^{n−1})[ϕ] = (1/(n−1))(1/x^{n−1})[ϕ′]
= −(1/(n−1))(1/(n−2))(d/dx)(1/x^{n−2})[ϕ′] = [1/((n−1)(n−2))](1/x^{n−2})[ϕ″]
= ··· = (1/(n−1)!)(1/x)[ϕ^(n−1)]
= (1/(n−1)!) ∫_{0}^{∞} (1/x) [ϕ^(n−1)(x) − ϕ^(n−1)(−x)] dx
= (1/(n−1)!) P(1/x)[ϕ^(n−1)]. (10.98)
10.11 Pole function 1/(x ± iα) distribution

We are interested in the limit α → 0 of 1/(x + iα). Let α > 0. Then,
1/(x + iα)[ϕ] = ∫_{-∞}^{∞} [1/(x + iα)] ϕ(x) dx
= ∫_{-∞}^{∞} [(x − iα)/((x + iα)(x − iα))] ϕ(x) dx
= ∫_{-∞}^{∞} [(x − iα)/(x^2 + α^2)] ϕ(x) dx
= ∫_{-∞}^{∞} [x/(x^2 + α^2)] ϕ(x) dx − iα ∫_{-∞}^{∞} [1/(x^2 + α^2)] ϕ(x) dx. (10.99)
Let us treat the two summands of (10.99) separately. (i) Upon variable
substitution x = αy, dx = α dy in the second integral in (10.99) we obtain
α ∫_{-∞}^{∞} [1/(x^2 + α^2)] ϕ(x) dx = α ∫_{-∞}^{∞} [1/(α^2 y^2 + α^2)] ϕ(αy) α dy
= α^2 ∫_{-∞}^{∞} [1/(α^2 (y^2 + 1))] ϕ(αy) dy
= ∫_{-∞}^{∞} [1/(y^2 + 1)] ϕ(αy) dy. (10.100)
In the limit α → 0, this is
lim_{α→0} ∫_{-∞}^{∞} [1/(y^2 + 1)] ϕ(αy) dy = ϕ(0) ∫_{-∞}^{∞} [1/(y^2 + 1)] dy
= ϕ(0) (arctan y)|_{y=−∞}^{∞}
= π ϕ(0) = π δ[ϕ]. (10.101)
(ii) The first integral in (10.99) is
∫_{-∞}^{∞} [x/(x^2 + α^2)] ϕ(x) dx
= ∫_{-∞}^{0} [x/(x^2 + α^2)] ϕ(x) dx + ∫_{0}^{∞} [x/(x^2 + α^2)] ϕ(x) dx
= ∫_{+∞}^{0} [(−x)/((−x)^2 + α^2)] ϕ(−x) d(−x) + ∫_{0}^{∞} [x/(x^2 + α^2)] ϕ(x) dx
= −∫_{0}^{∞} [x/(x^2 + α^2)] ϕ(−x) dx + ∫_{0}^{∞} [x/(x^2 + α^2)] ϕ(x) dx
= ∫_{0}^{∞} [x/(x^2 + α^2)] [ϕ(x) − ϕ(−x)] dx. (10.102)
In the limit α → 0, this becomes
lim_{α→0} ∫_{0}^{∞} [x/(x^2 + α^2)] [ϕ(x) − ϕ(−x)] dx = ∫_{0}^{∞} [ϕ(x) − ϕ(−x)]/x dx
= P(1/x)[ϕ], (10.103)
where in the last step the principal value distribution (10.88) has been
used.
Putting all parts together, we obtain
1/(x + i0⁺)[ϕ] = lim_{α→0} 1/(x + iα)[ϕ] = P(1/x)[ϕ] − iπδ[ϕ] = [P(1/x) − iπδ][ϕ]. (10.104)
A very similar calculation yields
1/(x − i0⁺)[ϕ] = lim_{α→0} 1/(x − iα)[ϕ] = P(1/x)[ϕ] + iπδ[ϕ] = [P(1/x) + iπδ][ϕ]. (10.105)
These equations (10.104) and (10.105) are often called the Sokhotsky for-
mula, also known as the Plemelj formula, or the Plemelj-Sokhotsky for-
mula.
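Sokhotsky's formula (10.104) lends itself to a numerical sanity check (a sketch, not part of the text; the even test function ϕ(x) = e^{−x²} is an arbitrary choice for which P(1/x)[ϕ] vanishes by symmetry, so the integral should tend to −iπϕ(0) = −iπ):

```python
import numpy as np

phi = lambda x: np.exp(-x**2)

def smeared(alpha):
    # \int phi(x)/(x + i*alpha) dx on a fine symmetric grid
    x = np.linspace(-20.0, 20.0, 1_600_001)
    f = phi(x) / (x + 1j * alpha)
    return complex(np.sum((f[1:] + f[:-1]) * (x[1] - x[0])) / 2.0)

val = smeared(0.01)
# real part ~ 0 (the principal value part vanishes for even phi);
# imaginary part approaches -pi*phi(0) = -pi as alpha -> 0
```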
10.12 Heaviside step function
10.12.1 Ambiguities in definition
Let us now turn to some very common generalized functions; in particular
to Heaviside’s electromagnetic infinite pulse function. One of the possible
definitions of the Heaviside step function H(x), and maybe the most com-
mon one – they differ by the value(s) of H(0) at the origin x = 0, a difference
which is irrelevant measure theoretically for “good” functions since it
concerns only an isolated point – is
H(x − x_0) = { 1 for x ≥ x_0,
            { 0 for x < x_0. (10.106)
The function is plotted in Fig. 10.4.
[Figure 10.4: Plot of the Heaviside step function H(x).]
In the spirit of the above definition, it might have been more appropri-
ate to define H(0) = 1/2; that is,
H(x − x_0) = { 1 for x > x_0,
            { 1/2 for x = x_0,
            { 0 for x < x_0, (10.107)
and, since this affects only an isolated point at x = 0, we may happily do so
if we prefer.
It is also very common to define the Heaviside step function as the
antiderivative of the δ function; likewise the delta function is the derivative
of the Heaviside step function; that is,
H(x − x_0) = ∫_{-∞}^{x−x_0} δ(t) dt,
(d/dx) H(x − x_0) = δ(x − x_0). (10.108)
The latter equation can – in the functional sense – be proven through
⟨H′, ϕ⟩ = −⟨H, ϕ′⟩
= −∫_{-∞}^{∞} H(x) ϕ′(x) dx
= −∫_{0}^{∞} ϕ′(x) dx
= −ϕ(x)|_{x=0}^{x=∞}
= −ϕ(∞) + ϕ(0)
= ϕ(0) = ⟨δ, ϕ⟩, (10.109)
since ϕ(∞) = 0,
for all test functions ϕ(x). Hence we can – in the functional sense – identify
δ with H′. More explicitly, through integration by parts, we obtain
∫_{-∞}^{∞} [(d/dx) H(x − x_0)] ϕ(x) dx
= H(x − x_0) ϕ(x)|_{−∞}^{∞} − ∫_{-∞}^{∞} H(x − x_0) [(d/dx) ϕ(x)] dx
= H(∞)ϕ(∞) − H(−∞)ϕ(−∞) − ∫_{x_0}^{∞} [(d/dx) ϕ(x)] dx
  [with ϕ(∞) = 0 and H(−∞) = 0]
= −∫_{x_0}^{∞} [(d/dx) ϕ(x)] dx
= −ϕ(x)|_{x=x_0}^{x=∞}
= −[ϕ(∞) − ϕ(x_0)]
= ϕ(x_0). (10.110)
10.12.2 Useful formulæ involving H
Some other formulæ involving the Heaviside step function are
H(±x) = lim_{ε→0⁺} (∓i/2π) ∫_{-∞}^{+∞} [e^{ikx}/(k ∓ iε)] dk, (10.111)
and
H(x) = 1/2 + Σ_{l=0}^{∞} [(−1)^l (2l)! (4l+3) / (2^{2l+2} l! (l+1)!)] P_{2l+1}(x), (10.112)
where P_{2l+1}(x) is a Legendre polynomial. Furthermore,
δ(x) = lim_{ε→0} (1/ε) H(ε/2 − |x|). (10.113)
An integral representation of H(x) is
H(x) = lim_{ε→0⁺} ∓(1/2πi) ∫_{-∞}^{∞} [1/(t ± iε)] e^{∓ixt} dt. (10.114)
One commonly used limit form of the Heaviside step function is
H(x) = lim_{ε→0} H_ε(x) = lim_{ε→0} [1/2 + (1/π) tan^{−1}(x/ε)]. (10.115)
Another limit representation of the Heaviside function is in terms of
Dirichlet’s discontinuity factor
H(x) = lim_{t→∞} H_t(x)
= (2/π) lim_{t→∞} ∫_{0}^{t} [sin(kx)/k] dk
= (2/π) ∫_{0}^{∞} [sin(kx)/k] dk. (10.116)
A proof uses a variant of the sine integral function (see Eli Maor,
Trigonometric Delights, Princeton University Press, Princeton, 1998)
Si(x) = ∫_{0}^{x} (sin t)/t dt, (10.117)
which in the limit of large argument converges towards the Dirichlet inte-
gral (no proof is given here)
Si(∞) = ∫_{0}^{∞} (sin t)/t dt = π/2. (10.118)
In the Dirichlet integral (10.118), if we replace t with tx and substitute u for
tx (hence, by identifying u = tx and du/dt = x, thus dt = du/x), we arrive at
∫_{0}^{∞} [sin(xt)/t] dt = ∫_{0}^{±∞} [sin(u)/u] du. (10.119)
The ± sign depends on whether x is positive or negative. The Dirichlet in-
tegral can be restored in its original form (10.118) by a further substitution
u → −u for negative x. Since sin is an odd function, this yields −π/2 or +π/2
for negative or positive x, respectively. Dirichlet’s discontinuity factor
(10.116) is obtained by normalizing (10.119) to unity by multiplying it with
2/π.
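The Dirichlet integral (10.118) can be checked numerically (a sketch, not part of the text; the truncation point and grid are arbitrary choices, and the oscillatory tail makes the convergence only O(1/T)):

```python
import numpy as np

# Truncated Dirichlet integral Si(T) = \int_0^T sin(t)/t dt for large T.
T = 400 * np.pi                         # truncate at a zero of the sine
t = np.linspace(1e-12, T, 2_000_001)
f = np.sin(t) / t
si = float(np.sum((f[1:] + f[:-1]) * (t[1] - t[0])) / 2.0)
# si ≈ pi/2, with an O(1/T) truncation error
```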
10.12.3 H[ϕ] distribution

The distribution associated with the Heaviside function H(x) is defined by
H[ϕ] = ∫_{-∞}^{∞} H(x) ϕ(x) dx. (10.120)
H[ϕ] can be evaluated and represented as follows:
H[ϕ] = ∫_{-∞}^{∞} H(x) ϕ(x) dx = ∫_{0}^{∞} ϕ(x) dx. (10.121)
10.12.4 Regularized Heaviside function
In order to be able to define the distribution associated with the Heaviside
function (and its Fourier transform), we sometimes consider the distribu-
tion of the regularized Heaviside function
H_ε(x) = H(x) e^{−εx}, (10.122)
with ε > 0, such that lim_{ε→0⁺} H_ε(x) = H(x).
10.12.5 Fourier transform of Heaviside (unit step) function
The Fourier transform of the Heaviside (unit step) function cannot be
directly obtained by insertion into Eq. (9.19), because the associated inte-
grals do not exist. For a derivation of the Fourier transform of the Heaviside
(unit step) function we shall thus use the regularized Heaviside function
(10.122), and arrive at Sokhotsky’s formula (also known as the Plemelj’s
formula, or the Plemelj-Sokhotsky formula)
F[H(x)] = H̃(k) = ∫_{-∞}^{∞} H(x) e^{−ikx} dx
= πδ(k) − i P(1/k)
= −i [iπδ(k) + P(1/k)]
= lim_{ε→0⁺} [−i/(k − iε)]. (10.123)
We shall compute the Fourier transform of the regularized Heaviside
function H_ε(x) = H(x) e^{−εx}, with ε > 0, of Eq. (10.122) (following Thomas
Sommer, Verallgemeinerte Funktionen, unpublished manuscript, 2012); that is,
F[H_ε(x)] = F[H(x) e^{−εx}] = H̃_ε(k)
= ∫_{-∞}^{∞} H_ε(x) e^{−ikx} dx
= ∫_{-∞}^{∞} H(x) e^{−εx} e^{−ikx} dx
= ∫_{-∞}^{∞} H(x) e^{−i(k−iε)x} dx
= ∫_{0}^{∞} e^{−i(k−iε)x} dx
= [−e^{−i(k−iε)x} / (i(k − iε))]|_{x=0}^{∞}
= [−e^{−ikx} e^{−εx} / (i(k − iε))]|_{x=0}^{∞}
= 0 − [−1/(i(k − iε))]
= −i/(k − iε). (10.124)
By using Sokhotsky’s formula (10.105) we conclude that
F[H(x)] = F[H_{0⁺}(x)] = lim_{ε→0⁺} F[H_ε(x)] = πδ(k) − i P(1/k). (10.125)
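The finite-ε transform (10.124) is an ordinary convergent integral and can be verified directly (a numerical sketch, not part of the text; the particular values of ε and k are arbitrary choices):

```python
import numpy as np

# \int_0^inf e^{-eps*x} e^{-i*k*x} dx should equal -i/(k - i*eps) = 1/(eps + i*k).
eps, k = 0.5, 2.0
x = np.linspace(0.0, 40.0, 400_001)     # e^{-eps*40} ~ 2e-9, tail negligible
f = np.exp(-(eps + 1j * k) * x)
numeric = complex(np.sum((f[1:] + f[:-1]) * (x[1] - x[0])) / 2.0)
exact = -1j / (k - 1j * eps)            # equals 1/(eps + 1j*k)
```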
10.13 The sign function
10.13.1 Definition
The sign function is defined by
sgn(x − x_0) = { −1 for x < x_0,
             {  0 for x = x_0,
             { +1 for x > x_0. (10.126)
It is plotted in Fig. 10.5.
[Figure 10.5: Plot of the sign function sgn(x).]
10.13.2 Connection to the Heaviside function
In terms of the Heaviside step function, in particular, with H(0) = 1/2 as in
Eq. (10.107), the sign function can be written by “stretching” the former
(the Heaviside step function) by a factor of two, and shifting it by one
negative unit, as follows:
sgn(x − x_0) = 2H(x − x_0) − 1,
H(x − x_0) = (1/2)[sgn(x − x_0) + 1];
and also
sgn(x − x_0) = H(x − x_0) − H(x_0 − x). (10.127)
10.13.3 Sign sequence
The sequence of functions
sgn_n(x − x_0) = { −e^{(x−x_0)/n} for x < x_0,
                { +e^{−(x−x_0)/n} for x > x_0,
(10.128)
is a limiting sequence of sgn: for x ≠ x_0, sgn(x − x_0) = lim_{n→∞} sgn_n(x − x_0).
Note (without proof) that
sgn(x) = (4/π) Σ_{n=0}^{∞} sin[(2n+1)x] / (2n+1) (10.129)
= (4/π) Σ_{n=0}^{∞} (−1)^n cos[(2n+1)(x − π/2)] / (2n+1), −π < x < π. (10.130)
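Partial sums of the series (10.129) can be evaluated numerically at a few sample points in (−π, π) (a sketch, not part of the text; the number of terms and the sample points are arbitrary choices):

```python
import numpy as np

# Partial sum of Eq. (10.129): the Fourier series of the square wave sgn(x).
def sgn_series(x, terms=100_000):
    n = np.arange(terms)
    return float((4.0 / np.pi) * np.sum(np.sin((2 * n + 1) * x) / (2 * n + 1)))

approx = [sgn_series(x) for x in (-2.0, -0.5, 0.5, 2.0)]
# approx ≈ [-1, -1, 1, 1] away from the jump at x = 0
```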
10.14 Absolute value function (or modulus)
10.14.1 Definition
The absolute value (or modulus) of x is defined by
|x − x_0| = { x − x_0 for x > x_0,
           { 0 for x = x_0,
           { x_0 − x for x < x_0. (10.131)
It is plotted in Fig. 10.6.
[Figure 10.6: Plot of the absolute value |x|.]
10.14.2 Connection of absolute value with sign and Heaviside functions
Its relationship to the sign function is twofold: on the one hand, there is
|x| = x sgn(x), (10.132)
and thus, for x ≠ 0,
sgn(x) = |x|/x = x/|x|. (10.133)
On the other hand, the derivative of the absolute value function is the
sign function, at least up to a singular point at x = 0, and thus the absolute
value function can be interpreted as the integral of the sign function (in the
distributional sense); that is,
d|x|/dx = { 1 for x > 0,
         { undefined for x = 0,
         { −1 for x < 0,
which coincides with sgn(x) for x ≠ 0;
|x| = ∫ sgn(x) dx. (10.134)
10.14.3 Fourier transform of sgn
Since the Fourier transform is linear, we may use the connection between
the sign and the Heaviside functions sgn(x) = 2H(x) − 1, Eq. (10.127),
together with the Fourier transform of the Heaviside function F[H(x)] =
πδ(k) − i P(1/k), Eq. (10.125), and the Dirac delta function F[1] = 2πδ(k), Eq.
(10.81), to compose and compute the Fourier transform of sgn:
F[sgn(x)] = F[2H(x) − 1] = 2F[H(x)] − F[1]
= 2[πδ(k) − i P(1/k)] − 2πδ(k)
= −2i P(1/k). (10.135)
10.15 Some examples
Let us compute some concrete examples related to distributions.
1. For a start, let us prove that
lim_{ε→0} [ε sin^2(x/ε)]/(πx^2) = δ(x). (10.136)
As a hint, take ∫_{-∞}^{+∞} (sin^2 x)/x^2 dx = π.
Let us prove this conjecture by integrating over a good test function ϕ:
(1/π) lim_{ε→0} ∫_{-∞}^{+∞} [ε sin^2(x/ε)/x^2] ϕ(x) dx
  [variable substitution y = x/ε, dy/dx = 1/ε, dx = ε dy]
= (1/π) lim_{ε→0} ∫_{-∞}^{+∞} ϕ(εy) [ε^2 sin^2(y)/(ε^2 y^2)] dy
= (1/π) ϕ(0) ∫_{-∞}^{+∞} [sin^2(y)/y^2] dy
= ϕ(0). (10.137)
Hence we can identify
lim_{ε→0} [ε sin^2(x/ε)]/(πx^2) = δ(x). (10.138)
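This δ-sequence can also be probed numerically (a sketch, not part of the text; the Gaussian test function, the ε values, and the grid are arbitrary choices; the slowly decaying 1/x² tails make the convergence only O(ε)):

```python
import numpy as np

phi = lambda x: np.exp(-x**2)

def smeared(eps):
    # even number of points, so the grid never hits the 0/0 point x = 0 exactly
    x = np.linspace(-20.0, 20.0, 4_000_000)
    dx = x[1] - x[0]
    f = eps * np.sin(x / eps) ** 2 / (np.pi * x**2) * phi(x)
    return float(np.sum(f) * dx)

vals = [smeared(e) for e in (0.1, 0.01)]
# vals tends to phi(0) = 1 as eps -> 0
```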
2. In order to prove that (1/π) [n e^{−x^2}/(1 + n^2 x^2)] is a δ-sequence we proceed
again by integrating over a good test function ϕ, and with the hint that
∫_{-∞}^{+∞} dx/(1 + x^2) = π we obtain
lim_{n→∞} (1/π) ∫_{-∞}^{+∞} [n e^{−x^2}/(1 + n^2 x^2)] ϕ(x) dx
  [variable substitution y = xn, x = y/n, dy/dx = n, dx = dy/n]
= lim_{n→∞} (1/π) ∫_{-∞}^{+∞} [n e^{−(y/n)^2}/(1 + y^2)] ϕ(y/n) dy/n
= (1/π) ∫_{-∞}^{+∞} lim_{n→∞} [e^{−(y/n)^2} ϕ(y/n)] [1/(1 + y^2)] dy
= (1/π) ∫_{-∞}^{+∞} [e^0 ϕ(0)] [1/(1 + y^2)] dy
= [ϕ(0)/π] ∫_{-∞}^{+∞} [1/(1 + y^2)] dy
= [ϕ(0)/π] π
= ϕ(0). (10.139)
Hence we can identify
lim_{n→∞} (1/π) [n e^{−x^2}/(1 + n^2 x^2)] = δ(x). (10.140)
3. Let us prove that x^n δ^(n)(x) = C δ(x) and determine the constant C. We
proceed again by integrating over a good test function ϕ. First note that
if ϕ(x) is a good test function, then so is x^n ϕ(x).
∫ dx x^n δ^(n)(x) ϕ(x) = ∫ dx δ^(n)(x) [x^n ϕ(x)]
= (−1)^n ∫ dx δ(x) [x^n ϕ(x)]^(n)
= (−1)^n ∫ dx δ(x) [n x^{n−1} ϕ(x) + x^n ϕ′(x)]^(n−1) = ···
= (−1)^n ∫ dx δ(x) [Σ_{k=0}^{n} (n choose k) (x^n)^(k) ϕ^(n−k)(x)]
= (−1)^n ∫ dx δ(x) [n! ϕ(x) + n·n! x ϕ′(x) + ··· + x^n ϕ^(n)(x)]
= (−1)^n n! ∫ dx δ(x) ϕ(x);
hence, C = (−1)^n n!.
4. Let us simplify ∫_{-∞}^{∞} δ(x^2 − a^2) g(x) dx. First recall Eq. (10.64) stating that
δ(f(x)) = Σ_i δ(x − x_i)/|f′(x_i)|,
whenever the x_i are simple roots of f(x), and f′(x_i) ≠ 0. In our case, f(x) =
x^2 − a^2 = (x − a)(x + a), and the roots are x = ±a. Furthermore,
f′(x) = (x − a) + (x + a);
hence
|f′(a)| = 2|a|, |f′(−a)| = |−2a| = 2|a|.
As a result,
δ(x^2 − a^2) = δ((x − a)(x + a)) = (1/(2|a|)) [δ(x − a) + δ(x + a)].
Taking this into account we finally obtain
∫_{-∞}^{+∞} δ(x^2 − a^2) g(x) dx = ∫_{-∞}^{+∞} {[δ(x − a) + δ(x + a)]/(2|a|)} g(x) dx
= [g(a) + g(−a)]/(2|a|). (10.141)
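The composition rule behind (10.141) can be illustrated by replacing δ with a narrow Gaussian nascent delta (a numerical sketch, not part of the text; the width s, the value a = 2, and the sample function g(x) = e^x are arbitrary choices):

```python
import numpy as np

a = 2.0
g = lambda x: np.exp(x)                 # arbitrary sample test function

def nascent_delta(u, s):
    """Narrow Gaussian approximating delta(u)."""
    return np.exp(-u**2 / (2 * s**2)) / (s * np.sqrt(2 * np.pi))

x = np.linspace(-4.0, 4.0, 4_000_001)
dx = x[1] - x[0]
numeric = float(np.sum(nascent_delta(x**2 - a**2, 1e-3) * g(x)) * dx)
exact = (g(a) + g(-a)) / (2 * abs(a))   # Eq. (10.141)
```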
5. Let us evaluate
I = ∫_{-∞}^{∞} ∫_{-∞}^{∞} ∫_{-∞}^{∞} δ(x_1^2 + x_2^2 + x_3^2 − R^2) d^3x (10.142)
for R ∈ R, R > 0. We may, of course, remain in the standard Cartesian co-
ordinate system and evaluate the integral by “brute force.” Alternatively,
a more elegant way is to use the spherical symmetry of the problem and
use spherical coordinates r, Ω(θ, ϕ), by rewriting I into
I = ∫_{r,Ω} r^2 δ(r^2 − R^2) dΩ dr. (10.143)
As the integral kernel δ(r^2 − R^2) just depends on the radial coordinate
r, the angular coordinates just integrate to 4π. Next we make use of Eq.
(10.64), eliminate the solution for r = −R, and obtain
I = 4π ∫_{0}^{∞} r^2 δ(r^2 − R^2) dr
= 4π ∫_{0}^{∞} r^2 [δ(r + R) + δ(r − R)]/(2R) dr
= 4π ∫_{0}^{∞} r^2 [δ(r − R)/(2R)] dr
= 2πR. (10.144)
6. Let us compute
∫_{-∞}^{∞} ∫_{-∞}^{∞} δ(x^3 − y^2 + 2y) δ(x + y) H(y − x − 6) f(x, y) dx dy. (10.145)
First, in dealing with δ(x + y), we evaluate the y integration at x = −y or
y = −x:
∫_{-∞}^{∞} δ(x^3 − x^2 − 2x) H(−2x − 6) f(x, −x) dx.
Use of Eq. (10.64),
δ(f(x)) = Σ_i [1/|f′(x_i)|] δ(x − x_i),
at the roots
x_1 = 0, x_{2,3} = (1 ± √(1+8))/2 = (1 ± 3)/2 = { 2, −1 }
of the argument f(x) = x^3 − x^2 − 2x = x(x^2 − x − 2) = x(x − 2)(x + 1) of the
remaining δ-function, together with
f′(x) = (d/dx)(x^3 − x^2 − 2x) = 3x^2 − 2x − 2,
yields
∫_{-∞}^{∞} dx {[δ(x) + δ(x − 2) + δ(x + 1)]/|3x^2 − 2x − 2|} H(−2x − 6) f(x, −x)
= (1/|−2|) H(−6) f(0, −0) + (1/|12 − 4 − 2|) H(−4 − 6) f(2, −2)
+ (1/|3 + 2 − 2|) H(2 − 6) f(−1, 1)
= 0,
since all three Heaviside factors vanish (their arguments are negative).
7. When simplifying derivatives of generalized functions it is always use-
ful to evaluate their properties – such as xδ(x) = 0, f (x)δ(x − x0) =f (x0)δ(x − x0), or δ(−x) = δ(x) – first and before proceeding with the
next differentiation or evaluation. We shall present some applications of
this “rule” next.
First, simplify
[(d/dx) − ω] H(x) e^{ωx} (10.146)
as follows:
(d/dx)[H(x) e^{ωx}] − ω H(x) e^{ωx}
= δ(x) e^{ωx} + ω H(x) e^{ωx} − ω H(x) e^{ωx}
= δ(x) e^0
= δ(x). (10.147)
8. Next, simplify
[(d^2/dx^2) + ω^2] (1/ω) H(x) sin(ωx) (10.148)
as follows:
(d^2/dx^2)[(1/ω) H(x) sin(ωx)] + ω H(x) sin(ωx)
= (1/ω)(d/dx)[δ(x) sin(ωx) + ω H(x) cos(ωx)] + ω H(x) sin(ωx)
  [δ(x) sin(ωx) = 0]
= (1/ω)[ω δ(x) cos(ωx) − ω^2 H(x) sin(ωx)] + ω H(x) sin(ωx)
= δ(x), (10.149)
since δ(x) cos(ωx) = δ(x).
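The identity (10.149) can be tested in the weak (functional) sense: integrating G(x) = H(x) sin(ωx)/ω against ϕ″ + ω²ϕ for a rapidly decaying test function ϕ should return ϕ(0). A numerical sketch (not part of the text; the Gaussian ϕ and ω = 2 are arbitrary choices):

```python
import numpy as np

w = 2.0
phi = lambda x: np.exp(-x**2)
phi2 = lambda x: (4 * x**2 - 2) * np.exp(-x**2)   # second derivative of phi

x = np.linspace(0.0, 10.0, 1_000_001)             # H(x) restricts to x > 0
G = np.sin(w * x) / w
f = G * (phi2(x) + w**2 * phi(x))
weak = float(np.sum((f[1:] + f[:-1]) * (x[1] - x[0])) / 2.0)
# weak ≈ phi(0) = 1, as expected for a delta on the right-hand side
```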
9. Let us compute the nth derivative of
f(x) = { 0 for x < 0,
       { x for 0 ≤ x ≤ 1,
       { 0 for x > 1. (10.150)
[Figure 10.7: Composition of f(x) from f_1(x) = H(x) − H(x−1) = H(x)H(1−x) and f_2(x) = x.]
As depicted in Fig. 10.7, f can be composed from two functions, f(x) =
f_2(x) · f_1(x); and this composition can be done in at least two ways.
Decomposition (i):
f(x) = x [H(x) − H(x−1)] = x H(x) − x H(x−1),
f′(x) = H(x) + x δ(x) − H(x−1) − x δ(x−1).
Because of x δ(x − a) = a δ(x − a),
f′(x) = H(x) − H(x−1) − δ(x−1),
f″(x) = δ(x) − δ(x−1) − δ′(x−1),
and hence by induction
f^(n)(x) = δ^(n−2)(x) − δ^(n−2)(x−1) − δ^(n−1)(x−1)
for n > 1.
Decomposition (ii):
f(x) = x H(x) H(1−x),
f′(x) = H(x)H(1−x) + x δ(x) H(1−x) − x H(x) δ(1−x)
  [x δ(x) = 0; x H(x) δ(1−x) = δ(1−x)]
= H(x)H(1−x) − δ(1−x)
  [δ(x) = δ(−x)]
= H(x)H(1−x) − δ(x−1),
f″(x) = δ(x) H(1−x) − H(x) δ(1−x) − δ′(x−1)
  [δ(x)H(1−x) = δ(x); H(x)δ(1−x) = δ(x−1)]
= δ(x) − δ(x−1) − δ′(x−1),
and hence by induction
f^(n)(x) = δ^(n−2)(x) − δ^(n−2)(x−1) − δ^(n−1)(x−1)
for n > 1.
10. Let us compute the nth derivative of
f(x) = { |sin x| for −π ≤ x ≤ π,
       { 0 for |x| > π. (10.151)
We have f(x) = |sin x| H(π + x) H(π − x), and
|sin x| = sin x sgn(sin x) = sin x sgn x for −π < x < π;
hence we start from
f(x) = sin x sgn x H(π + x) H(π − x).
Note that
sgn x = H(x) − H(−x),
(sgn x)′ = H′(x) − H′(−x)(−1) = δ(x) + δ(−x) = δ(x) + δ(x) = 2δ(x).
Then
f′(x) = cos x sgn x H(π+x)H(π−x) + sin x · 2δ(x) · H(π+x)H(π−x)
+ sin x sgn x δ(π+x) H(π−x) − sin x sgn x H(π+x) δ(π−x)
= cos x sgn x H(π+x)H(π−x),
since sin x δ(x) = 0 and sin(∓π) = 0 at the supports of δ(π ± x);
f″(x) = −sin x sgn x H(π+x)H(π−x) + cos x · 2δ(x) · H(π+x)H(π−x)
+ cos x sgn x δ(π+x) H(π−x) − cos x sgn x H(π+x) δ(π−x)
= −sin x sgn x H(π+x)H(π−x) + 2δ(x) + δ(π+x) + δ(π−x);
f‴(x) = −cos x sgn x H(π+x)H(π−x) − sin x · 2δ(x) · H(π+x)H(π−x)
− sin x sgn x δ(π+x) H(π−x) + sin x sgn x H(π+x) δ(π−x)
+ 2δ′(x) + δ′(π+x) − δ′(π−x)
= −cos x sgn x H(π+x)H(π−x) + 2δ′(x) + δ′(π+x) − δ′(π−x);
f^(4)(x) = sin x sgn x H(π+x)H(π−x) − cos x · 2δ(x) · H(π+x)H(π−x)
− cos x sgn x δ(π+x) H(π−x) + cos x sgn x H(π+x) δ(π−x)
+ 2δ″(x) + δ″(π+x) + δ″(π−x)
= sin x sgn x H(π+x)H(π−x) − 2δ(x) − δ(π+x) − δ(π−x)
+ 2δ″(x) + δ″(π+x) + δ″(π−x);
hence
f^(4) = f(x) − 2δ(x) + 2δ″(x) − δ(π+x) + δ″(π+x) − δ(π−x) + δ″(π−x),
f^(5) = f′(x) − 2δ′(x) + 2δ‴(x) − δ′(π+x) + δ‴(π+x) + δ′(π−x) − δ‴(π−x);
and thus by induction
f^(n) = f^(n−4)(x) − 2δ^(n−4)(x) + 2δ^(n−2)(x) − δ^(n−4)(π+x)
+ δ^(n−2)(π+x) + (−1)^{n−1} δ^(n−4)(π−x) + (−1)^n δ^(n−2)(π−x)
(n = 4, 5, 6, …).
11 Green’s function
This chapter marks the beginning of a series of chapters dealing with
the solution to differential equations of theoretical physics. Very often,
these differential equations are linear; that is, the “sought after” function
Ψ(x), y(x),φ(t ) et cetera occur only as a polynomial of degree zero and one,
and not of any higher degree, such as, for instance, [y(x)]2.
11.1 Elegant way to solve linear differential equations
Green’s functions present a very elegant way of solving linear differential
equations of the form
L_x y(x) = f(x), with the differential operator
L_x = a_n(x) d^n/dx^n + a_{n−1}(x) d^{n−1}/dx^{n−1} + ··· + a_1(x) d/dx + a_0(x)
= Σ_{j=0}^{n} a_j(x) d^j/dx^j, (11.1)
where the a_i(x), 0 ≤ i ≤ n, are functions of x. The idea is quite straightfor-
ward: if we are able to obtain the “inverse” G of the differential operator L_x,
defined by
L_x G(x, x′) = δ(x − x′), (11.2)
with δ representing Dirac’s delta function, then the solution to the inho-
mogeneous differential equation (11.1) can be obtained by integrating
G(x, x′) alongside the inhomogeneous term f(x′); that is,
y(x) = ∫_{-∞}^{∞} G(x, x′) f(x′) dx′. (11.3)
This claim, as posted in Eq. (11.3), can be verified by explicitly applying the
differential operator L_x to the solution y(x):
L_x y(x) = L_x ∫_{-∞}^{∞} G(x, x′) f(x′) dx′
= ∫_{-∞}^{∞} L_x G(x, x′) f(x′) dx′
= ∫_{-∞}^{∞} δ(x − x′) f(x′) dx′
= f(x). (11.4)
Let us check whether G(x, x′) = H(x − x′) sinh(x − x′) is a Green’s function
of the differential operator L_x = d^2/dx^2 − 1. In this case, all we have to do is to
verify that L_x, applied to G(x, x′), actually renders δ(x − x′), as required by
Eq. (11.2):
(d^2/dx^2 − 1) H(x − x′) sinh(x − x′) ≟ δ(x − x′).
Note that (d/dx) sinh x = cosh x and (d/dx) cosh x = sinh x; hence
(d/dx)[δ(x − x′) sinh(x − x′) + H(x − x′) cosh(x − x′)] − H(x − x′) sinh(x − x′)
  [δ(x − x′) sinh(x − x′) = 0]
= δ(x − x′) cosh(x − x′) + H(x − x′) sinh(x − x′) − H(x − x′) sinh(x − x′)
= δ(x − x′),
since δ(x − x′) cosh(x − x′) = δ(x − x′).
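This check can also be done numerically in the weak sense: integrating G(x) = H(x) sinh(x) against ϕ″ − ϕ for a rapidly decaying test function ϕ should return ϕ(0). A sketch (not part of the text; the Gaussian ϕ and the integration range are arbitrary choices, and ϕ must decay faster than sinh grows):

```python
import numpy as np

# phi = exp(-x^2), so phi'' - phi = (4x^2 - 3)*exp(-x^2)
phi2_minus_phi = lambda x: (4 * x**2 - 3) * np.exp(-x**2)

x = np.linspace(0.0, 12.0, 1_200_001)   # H(x) restricts to x > 0
f = np.sinh(x) * phi2_minus_phi(x)
weak = float(np.sum((f[1:] + f[:-1]) * (x[1] - x[0])) / 2.0)
# weak ≈ phi(0) = 1, confirming (d^2/dx^2 - 1) G = delta in the weak sense
```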
The solution (11.4) so obtained is not unique, as it is only a special solu-
tion to the inhomogeneous equation (11.1). The general solution to (11.1)
can be found by adding the general solution y_0(x) of the corresponding
homogeneous differential equation
L_x y(x) = 0 (11.5)
to one special solution – say, the one obtained in Eq. (11.4) through Green’s
function techniques.
Indeed, the most general solution
Y(x) = y(x) + y_0(x) (11.6)
clearly is a solution of the inhomogeneous differential equation (11.1), as
L_x Y(x) = L_x y(x) + L_x y_0(x) = f(x) + 0 = f(x). (11.7)
Conversely, any two distinct special solutions y_1(x) and y_2(x) of the in-
homogeneous differential equation (11.1) differ only by a function which is
a solution to the homogeneous differential equation (11.5), because due
to linearity of L_x, their difference y_d(x) = y_1(x) − y_2(x) can be parameter-
ized by some function in y_0:
L_x [y_1(x) − y_2(x)] = L_x y_1(x) − L_x y_2(x) = f(x) − f(x) = 0. (11.8)
From now on, we assume that the coefficients a_j(x) = a_j in Eq. (11.1)
are constants, and thus translation invariant. Then the entire Ansatz
(11.2) for G(x, x′) is translation invariant, because derivatives are defined
only by relative distances, and δ(x − x′) is translation invariant for the same
reason. Hence,
G(x, x′) = G(x − x′). (11.9)
For such translation invariant systems, the Fourier analysis represents an
excellent way of analyzing the situation.
Let us see why translation invariance of the coefficients a_j(x) =
a_j(x + ξ) = a_j under the translation x → x + ξ with arbitrary ξ – that is,
independence of the coefficients a_j of the “coordinate” or “parameter”
x – and thus of the Green’s function, implies a simple form of the latter.
Translation invariance of the Green’s function really means
G(x + ξ, x′ + ξ) = G(x, x′). (11.10)
Now set ξ = −x′; then we can define a new Green’s function which just de-
pends on one argument (instead of previously two), which is the difference
of the old arguments:
G(x − x′, x′ − x′) = G(x − x′, 0) → G(x − x′). (11.11)
What is important for applications is the possibility to adapt the solu-
tions of some inhomogenuous differential equation to boundary and initial
value problems. In particular, a properly chosen G(x − x ′), in its depen-
dence on the parameter x, “inherits” some behaviour of the solution y(x).
Suppose, for instance, we would like to find solutions with y(x_i) = 0 for
some parameter values x_i, i = 1, …, k. Then the Green’s function G must
vanish there also:
G(x_i − x′) = 0 for i = 1, …, k. (11.12)
11.2 Finding Green’s functions by spectral decompositions

It has been mentioned earlier (cf. Section 10.6.5 on page 164) that the δ-
function can be expressed in terms of various eigenfunction expansions.
We shall make use of these expansions here (see Dean G. Duffy, Green’s
Functions with Applications, Chapman and Hall/CRC, Boca Raton, 2001).
Suppose ψi (x) are eigenfunctions of the differential operator Lx , and λi
are the associated eigenvalues; that is,
L_x ψ_i(x) = λ_i ψ_i(x). (11.13)
Suppose further that L_x is of degree n, and therefore (we assume with-
out proof) that we know all (a complete set of) the n eigenfunctions
ψ_1(x), ψ_2(x), …, ψ_n(x) of L_x. In this case, orthogonality of the system of
eigenfunctions holds, such that
∫_{-∞}^{∞} ψ_i*(x) ψ_j(x) dx = δ_{ij}, (11.14)
as well as completeness, such that
Σ_{i=1}^{n} ψ_i(x) ψ_i*(x′) = δ(x − x′), (11.15)
where ψ_i*(x′) stands for the complex conjugate of ψ_i(x′). The sum in Eq. (11.15)
becomes an integral in the case of a continuous spectrum of L_x. In this
case, the Kronecker δ_{ij} in (11.14) is replaced by the Dirac delta function
δ(k − k′).
The Green’s function of $\mathcal{L}_x$ can be written as the spectral sum of the products of the eigenfunctions (one factor complex conjugated), divided by the eigenvalues $\lambda_j$; that is,
\[
G(x-x') = \sum_{j=1}^{n} \frac{\psi_j(x)\, \overline{\psi_j(x')}}{\lambda_j}. \tag{11.16}
\]
For the sake of proof, apply the differential operator $\mathcal{L}_x$ to the Green’s function Ansatz $G$ of Eq. (11.16) and verify that it satisfies Eq. (11.2):
\[
\begin{aligned}
\mathcal{L}_x G(x-x')
&= \mathcal{L}_x \sum_{j=1}^{n} \frac{\psi_j(x)\, \overline{\psi_j(x')}}{\lambda_j}
= \sum_{j=1}^{n} \frac{[\mathcal{L}_x \psi_j(x)]\, \overline{\psi_j(x')}}{\lambda_j} \\
&= \sum_{j=1}^{n} \frac{[\lambda_j \psi_j(x)]\, \overline{\psi_j(x')}}{\lambda_j}
= \sum_{j=1}^{n} \psi_j(x)\, \overline{\psi_j(x')} = \delta(x-x').
\end{aligned} \tag{11.17}
\]
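A completeness relation of the type (11.15) can also be probed numerically: projecting a smooth test function onto a truncated eigenfunction sum should approximately reproduce the function itself. The following sketch (plain Python) uses the orthonormal Dirichlet sine system $\psi_j(x) = \sqrt{2}\sin(j\pi x)$ on $[0,1]$ as an assumed example basis and the illustrative test function $f(x) = x(1-x)$; both choices are made here only for the demonstration.

```python
import math

# Orthonormal Dirichlet eigenfunctions on [0, 1]: psi_j(x) = sqrt(2) sin(j*pi*x)
def psi(j, x):
    return math.sqrt(2.0) * math.sin(j * math.pi * x)

def f(x):                      # smooth test function vanishing at 0 and 1
    return x * (1.0 - x)

N_MODES, N_GRID = 25, 4000
h = 1.0 / N_GRID
xs = [i * h for i in range(N_GRID + 1)]

def trapezoid(vals):           # composite trapezoid rule on the grid xs
    return h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

# Truncated completeness sum: f_N(x) = sum_j psi_j(x) * <psi_j, f>
def f_reconstructed(x):
    total = 0.0
    for j in range(1, N_MODES + 1):
        c_j = trapezoid([psi(j, s) * f(s) for s in xs])
        total += c_j * psi(j, x)
    return total

err = abs(f_reconstructed(0.5) - f(0.5))
```

With 25 modes the reconstruction already agrees with $f$ to a few parts in $10^4$, illustrating that $\sum_j \psi_j(x)\overline{\psi_j(x')}$ acts like $\delta(x-x')$ under the integral.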
For a demonstration of completeness of systems of eigenfunctions, consider, for instance, the differential equation corresponding to the harmonic vibration [please do not confuse this with the harmonic oscillator (9.29)]
\[
\mathcal{L}_t \psi = \frac{d^2}{dt^2}\psi = k^2, \tag{11.18}
\]
with $k \in \mathbb{R}$.
Without any boundary conditions the associated eigenfunctions are
\[
\psi_\omega(t) = e^{-i\omega t}, \tag{11.19}
\]
with $0 \le \omega \le \infty$, and with eigenvalue $-\omega^2$. Taking the complex conjugate and integrating over $\omega$ yields [modulo a constant factor which depends on the choice of Fourier transform parameters; see also Eq. (10.81)]
\[
\int_{-\infty}^{\infty} \psi_\omega(t)\, \overline{\psi_\omega(t')}\, d\omega
= \int_{-\infty}^{\infty} e^{-i\omega t} e^{i\omega t'}\, d\omega
= \int_{-\infty}^{\infty} e^{-i\omega(t-t')}\, d\omega
= \delta(t-t'). \tag{11.20}
\]
The associated Green’s function is
\[
G(t-t') = \int_{-\infty}^{\infty} \frac{e^{-i\omega(t-t')}}{(-\omega^2)}\, d\omega. \tag{11.21}
\]
And the solution is obtained by integrating over the constant $k^2$; that is,
\[
\psi(t) = \int_{-\infty}^{\infty} G(t-t')\, k^2\, dt' = -\int_{-\infty}^{\infty} \left(\frac{k}{\omega}\right)^2 e^{-i\omega(t-t')}\, d\omega\, dt'. \tag{11.22}
\]
Note that if we are imposing boundary conditions, e.g., $\psi(0) = \psi(L) = 0$, representing a string “fastened” at positions $0$ and $L$, the eigenfunctions change to
\[
\psi_n(t) = \sin(\omega_n t) = \sin\left(\frac{n\pi}{L}\, t\right), \tag{11.23}
\]
with $\omega_n = \frac{n\pi}{L}$ and $n \in \mathbb{Z}$. We can deduce orthogonality and completeness by recalling the orthogonality relations for sines (9.11).
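The orthogonality relation for these sine eigenfunctions, $\int_0^L \sin(n\pi t/L)\sin(m\pi t/L)\,dt = \frac{L}{2}\delta_{nm}$, is easy to check numerically; a minimal sketch (plain Python, with the string length $L = 2$ chosen arbitrarily for the check):

```python
import math

L = 2.0                        # arbitrary string length for the check
N_GRID = 20000
h = L / N_GRID

def overlap(n, m):
    """Numerically integrate sin(n*pi*t/L) * sin(m*pi*t/L) over [0, L]."""
    vals = [math.sin(n * math.pi * i * h / L) * math.sin(m * math.pi * i * h / L)
            for i in range(N_GRID + 1)]
    return h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))
```

Diagonal overlaps come out as $L/2$ and off-diagonal ones as (numerically) zero, as the relation predicts.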
For the sake of another example suppose, from the Euler-Bernoulli bending theory, we know (no proof is given here) that the equation for the quasistatic bending of slender, isotropic, homogeneous beams of constant cross-section under an applied transverse load $q(x)$ is given by
\[
\mathcal{L}_x y(x) = \frac{d^4}{dx^4}\, y(x) = q(x) \approx c, \tag{11.24}
\]
with constant $c \in \mathbb{R}$. Let us further assume the boundary conditions
\[
y(0) = y(L) = \frac{d^2}{dx^2}\, y(0) = \frac{d^2}{dx^2}\, y(L) = 0. \tag{11.25}
\]
Also, we require that $y(x)$ vanishes everywhere except between $0$ and $L$; that is, $y(x) = 0$ for $x \in (-\infty, 0)$ and for $x \in (L, \infty)$. Then, in accordance with these boundary conditions, the system of eigenfunctions $\psi_j(x)$ of $\mathcal{L}_x$ can be written as
\[
\psi_j(x) = \sqrt{\frac{2}{L}}\, \sin\left(\frac{\pi j x}{L}\right) \tag{11.26}
\]
for $j = 1, 2, \ldots$. The associated eigenvalues
\[
\lambda_j = \left(\frac{\pi j}{L}\right)^4
\]
can be verified through explicit differentiation:
\[
\mathcal{L}_x \psi_j(x) = \mathcal{L}_x \sqrt{\frac{2}{L}}\, \sin\left(\frac{\pi j x}{L}\right)
= \left(\frac{\pi j}{L}\right)^4 \sqrt{\frac{2}{L}}\, \sin\left(\frac{\pi j x}{L}\right)
= \left(\frac{\pi j}{L}\right)^4 \psi_j(x). \tag{11.27}
\]
The cosine functions, which are also solutions of the Euler-Bernoulli equation (11.24), do not vanish at the origin $x = 0$.
Hence,
\[
G(x-x') = \frac{2}{L} \sum_{j=1}^{\infty} \frac{\sin\left(\frac{\pi j x}{L}\right) \sin\left(\frac{\pi j x'}{L}\right)}{\left(\frac{\pi j}{L}\right)^4}
= \frac{2L^3}{\pi^4} \sum_{j=1}^{\infty} \frac{1}{j^4} \sin\left(\frac{\pi j x}{L}\right) \sin\left(\frac{\pi j x'}{L}\right). \tag{11.28}
\]
Finally we are in a good shape to calculate the solution explicitly by
\[
\begin{aligned}
y(x) &= \int_0^L G(x-x')\, g(x')\, dx' \\
&\approx \int_0^L c \left[\frac{2L^3}{\pi^4} \sum_{j=1}^{\infty} \frac{1}{j^4} \sin\left(\frac{\pi j x}{L}\right) \sin\left(\frac{\pi j x'}{L}\right)\right] dx' \\
&\approx \frac{2cL^3}{\pi^4} \sum_{j=1}^{\infty} \frac{1}{j^4} \sin\left(\frac{\pi j x}{L}\right) \left[\int_0^L \sin\left(\frac{\pi j x'}{L}\right) dx'\right] \\
&\approx \frac{4cL^4}{\pi^5} \sum_{j=1}^{\infty} \frac{1}{j^5} \sin\left(\frac{\pi j x}{L}\right) \sin^2\left(\frac{\pi j}{2}\right). \tag{11.29}
\end{aligned}
\]
11.3 Finding Green’s functions by Fourier analysis
If one is dealing with translation invariant systems of the form $\mathcal{L}_x y(x) = f(x)$, with the differential operator
\[
\mathcal{L}_x = a_n \frac{d^n}{dx^n} + a_{n-1} \frac{d^{n-1}}{dx^{n-1}} + \cdots + a_1 \frac{d}{dx} + a_0
= \sum_{j=0}^{n} a_j \frac{d^j}{dx^j}, \tag{11.30}
\]
with constant coefficients $a_j$, then we can apply the following strategy, using Fourier analysis, to obtain the Green’s function.
First, recall that, by Eq. (10.80) on page 164, the Fourier transform of the delta function $\widetilde{\delta}(k) = 1$ is just a constant (with our definition, unity). Hence, $\delta$ can be written as
\[
\delta(x-x') = \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{ik(x-x')}\, dk. \tag{11.31}
\]
Next, consider the Fourier transform of the Green’s function,
\[
\widetilde{G}(k) = \int_{-\infty}^{\infty} G(x)\, e^{-ikx}\, dx, \tag{11.32}
\]
and its back transform
\[
G(x) = \frac{1}{2\pi} \int_{-\infty}^{\infty} \widetilde{G}(k)\, e^{ikx}\, dk. \tag{11.33}
\]
Insertion of Eq. (11.33) into the Ansatz $\mathcal{L}_x G(x-x') = \delta(x-x')$ yields
\[
\mathcal{L}_x G(x) = \mathcal{L}_x \frac{1}{2\pi} \int_{-\infty}^{\infty} \widetilde{G}(k)\, e^{ikx}\, dk
= \frac{1}{2\pi} \int_{-\infty}^{\infty} \widetilde{G}(k) \left(\mathcal{L}_x\, e^{ikx}\right) dk
= \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{ikx}\, dk, \tag{11.34}
\]
and thus, through comparison of the integral kernels,
\[
\begin{aligned}
\frac{1}{2\pi} \int_{-\infty}^{\infty} \left[\widetilde{G}(k)\, \mathcal{L}_k - 1\right] e^{ikx}\, dk &= 0, \\
\widetilde{G}(k)\, \mathcal{L}_k - 1 &= 0, \\
\widetilde{G}(k) &= (\mathcal{L}_k)^{-1}, \tag{11.35}
\end{aligned}
\]
where $\mathcal{L}_k$ is obtained from $\mathcal{L}_x$ by substituting every derivative $\frac{d}{dx}$ in the latter by $ik$ in the former. In that way, the Fourier transform $\widetilde{G}(k)$ is obtained as the reciprocal of a polynomial of degree $n$, the same degree as the highest order of derivative in $\mathcal{L}_x$.
In order to obtain the Green’s function $G(x)$, and to be able to integrate over it with the inhomogeneous term $f(x)$, we have to Fourier transform $\widetilde{G}(k)$ back to $G(x)$. Then we have to make sure that the solution obeys the initial conditions, and, if necessary, we have to add solutions of the homogeneous equation $\mathcal{L}_x G(x-x') = 0$. That is all.
Let us consider a few examples for this procedure.
1. First, let us solve the differential equation $y' - y = t$ on the interval $[0,\infty)$ with the boundary condition $y(0) = 0$.
We observe that the associated differential operator is given by
\[
\mathcal{L}_t = \frac{d}{dt} - 1,
\]
and the inhomogeneous term can be identified with $f(t) = t$.
We use the Ansatz $G_1(t,t') = \frac{1}{2\pi} \int_{-\infty}^{+\infty} \widetilde{G}_1(k)\, e^{ik(t-t')}\, dk$; hence
\[
\mathcal{L}_t G_1(t,t') = \frac{1}{2\pi} \int_{-\infty}^{+\infty} \widetilde{G}_1(k) \underbrace{\left(\frac{d}{dt} - 1\right) e^{ik(t-t')}}_{= (ik-1)\, e^{ik(t-t')}}\, dk
= \delta(t-t') = \frac{1}{2\pi} \int_{-\infty}^{+\infty} e^{ik(t-t')}\, dk.
\]
Now compare the kernels of the Fourier integrals of $\mathcal{L}_t G_1$ and $\delta$:
\[
\widetilde{G}_1(k)(ik-1) = 1 \Longrightarrow \widetilde{G}_1(k) = \frac{1}{ik-1} = \frac{1}{i(k+i)},
\]
\[
G_1(t,t') = \frac{1}{2\pi} \int_{-\infty}^{+\infty} \frac{e^{ik(t-t')}}{i(k+i)}\, dk.
\]
The paths in the upper and lower integration plane are drawn in Fig. 11.1. [Figure 11.1: Plot of the two paths required for solving the Fourier integral; the pole lies at $k = -i$, the contour closes in the upper half plane for $t - t' > 0$ and in the lower half plane for $t - t' < 0$.]
The “closures” through the respective half-circle paths vanish. By the residue theorem, $G_1(t,t') = 0$ for $t > t'$, and
\[
G_1(t,t') = -2\pi i\, \mathrm{Res}\left(\frac{1}{2\pi i}\, \frac{e^{ik(t-t')}}{(k+i)}; -i\right) = -e^{t-t'} \quad \text{for } t < t'.
\]
Hence we obtain a Green’s function for the inhomogeneous differential equation
\[
G_1(t,t') = -H(t'-t)\, e^{t-t'}.
\]
However, this Green’s function and its associated (special) solution does not obey the boundary condition: $G_1(0,t') = -H(t')\, e^{-t'} \neq 0$ for $t' \in [0,\infty)$.
Therefore, we have to fit the Green’s function by adding an appropriately weighted solution of the homogeneous differential equation. The homogeneous Green’s function is found by
\[
\mathcal{L}_t G_0(t,t') = 0,
\]
and thus, in particular,
\[
\frac{d}{dt} G_0 = G_0 \Longrightarrow G_0 = a\, e^{t-t'}.
\]
With the Ansatz
\[
G(0,t') = G_1(0,t') + G_0(0,t'; a) = -H(t')\, e^{-t'} + a\, e^{-t'}
\]
for the general solution we can choose the constant coefficient $a$ so that
\[
G(0,t') = G_1(0,t') + G_0(0,t'; a) = -H(t')\, e^{-t'} + a\, e^{-t'} = 0.
\]
For $a = 1$, the Green’s function and thus the solution obeys the boundary value conditions; that is,
\[
G(t,t') = \left[1 - H(t'-t)\right] e^{t-t'}.
\]
Since $H(-x) = 1 - H(x)$, $G(t,t')$ can be rewritten as
\[
G(t,t') = H(t-t')\, e^{t-t'}.
\]
In the final step we obtain the solution through integration of $G$ over the inhomogeneous term $t$:
\[
\begin{aligned}
y(t) &= \int_0^{\infty} \underbrace{H(t-t')}_{= 1 \text{ for } t' < t} e^{t-t'}\, t'\, dt'
= \int_0^{t} e^{t-t'}\, t'\, dt' = e^t \int_0^{t} t'\, e^{-t'}\, dt' \\
&= e^t \left[\left. -t'\, e^{-t'} \right|_0^t - \int_0^t \left(-e^{-t'}\right) dt'\right]
= e^t \left[\left(-t e^{-t}\right) - \left. e^{-t'} \right|_0^t\right]
= e^t \left(-t e^{-t} - e^{-t} + 1\right) = e^t - 1 - t.
\end{aligned}
\]
2. Next, let us solve the differential equation $\frac{d^2 y}{dt^2} + y = \cos t$ on the interval $t \in [0,\infty)$ with the boundary conditions $y(0) = y'(0) = 0$.
First, observe that
\[
\mathcal{L} = \frac{d^2}{dt^2} + 1.
\]
The Fourier Ansatz for the Green’s function is
\[
G_1(t,t') = \frac{1}{2\pi} \int_{-\infty}^{+\infty} \widetilde{G}(k)\, e^{ik(t-t')}\, dk,
\]
\[
\begin{aligned}
\mathcal{L}\, G_1 &= \frac{1}{2\pi} \int_{-\infty}^{+\infty} \widetilde{G}(k) \left(\frac{d^2}{dt^2} + 1\right) e^{ik(t-t')}\, dk
= \frac{1}{2\pi} \int_{-\infty}^{+\infty} \widetilde{G}(k) \left((ik)^2 + 1\right) e^{ik(t-t')}\, dk \\
&= \delta(t-t') = \frac{1}{2\pi} \int_{-\infty}^{+\infty} e^{ik(t-t')}\, dk.
\end{aligned}
\]
Hence
\[
\widetilde{G}(k)(1-k^2) = 1,
\]
and thus
\[
\widetilde{G}(k) = \frac{1}{1-k^2} = \frac{-1}{(k+1)(k-1)}.
\]
The Fourier back transformation is
\[
\begin{aligned}
G_1(t,t') &= -\frac{1}{2\pi} \int_{-\infty}^{+\infty} \frac{e^{ik(t-t')}}{(k+1)(k-1)}\, dk \\
&= -\frac{1}{2\pi}\, 2\pi i \left[\mathrm{Res}\left(\frac{e^{ik(t-t')}}{(k+1)(k-1)}; k = 1\right)
+ \mathrm{Res}\left(\frac{e^{ik(t-t')}}{(k+1)(k-1)}; k = -1\right)\right] H(t-t').
\end{aligned}
\]
The path in the upper integration plane is drawn in Fig. 11.2. [Figure 11.2: Plot of the path required for solving the Fourier integral; the two poles lie at $k = \pm 1$ on the real axis.]
\[
\begin{aligned}
G_1(t,t') &= -\frac{i}{2} \left(e^{i(t-t')} - e^{-i(t-t')}\right) H(t-t')
= \frac{e^{i(t-t')} - e^{-i(t-t')}}{2i}\, H(t-t') = \sin(t-t')\, H(t-t'), \\
G_1(0,t') &= \sin(-t')\, H(-t') = 0 \quad \text{since } t' > 0, \\
G_1'(t,t') &= \cos(t-t')\, H(t-t') + \underbrace{\sin(t-t')\, \delta(t-t')}_{= 0}, \\
G_1'(0,t') &= \cos(-t')\, H(-t') = 0 \quad \text{as before}.
\end{aligned}
\]
$G_1$ already satisfies the boundary conditions; hence we do not need to find the Green’s function $G_0$ of the homogeneous equation.
\[
\begin{aligned}
y(t) &= \int_0^{\infty} G(t,t')\, f(t')\, dt' = \int_0^{\infty} \sin(t-t') \underbrace{H(t-t')}_{= 1 \text{ for } t > t'} \cos t'\, dt' \\
&= \int_0^{t} \sin(t-t')\cos t'\, dt' = \int_0^{t} \left(\sin t \cos t' - \cos t \sin t'\right)\cos t'\, dt' \\
&= \sin t \int_0^{t} (\cos t')^2\, dt' - \cos t \int_0^{t} \sin t' \cos t'\, dt' \\
&= \sin t \left[\frac{1}{2}\left(t' + \sin t' \cos t'\right)\right]_0^t - \cos t \left[\frac{\sin^2 t'}{2}\right]_0^t \\
&= \frac{t \sin t}{2} + \frac{\sin^2 t \cos t}{2} - \frac{\cos t \sin^2 t}{2} = \frac{t \sin t}{2}.
\end{aligned}
\]
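As before, the result can be checked numerically: $y(t) = \frac{t\sin t}{2}$ should satisfy $y(0) = y'(0) = 0$ and $y'' + y = \cos t$. A sketch using a central second difference (step size chosen arbitrarily):

```python
import math

def y(t):
    """Candidate solution y(t) = t sin(t) / 2 of y'' + y = cos t."""
    return t * math.sin(t) / 2.0

def residual(t, h=1e-4):
    """Central-difference check of y''(t) + y(t) - cos(t), which should be ~0."""
    y_dd = (y(t + h) - 2.0 * y(t) + y(t - h)) / h**2
    return y_dd + y(t) - math.cos(t)
```

Both boundary conditions hold ($y$ is even near $0$, so $y'(0) = 0$), and the residual vanishes to finite-difference accuracy.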
Part IV:
Differential equations
12
Sturm-Liouville theory
This is only a very brief “dive into Sturm-Liouville theory,” which has many fascinating aspects and connections to Fourier analysis, the special functions of mathematical physics, operator theory, and linear algebra¹. In physics, many formalizations involve second order differential equations, which, in their most general form, can be written as²
\[
\mathcal{L}_x y(x) = a_0(x)\, y(x) + a_1(x) \frac{d}{dx}\, y(x) + a_2(x) \frac{d^2}{dx^2}\, y(x) = f(x). \tag{12.1}
\]
¹ Garrett Birkhoff and Gian-Carlo Rota. Ordinary Differential Equations. John Wiley & Sons, New York, Chichester, Brisbane, Toronto, fourth edition, 1959, 1960, 1962, 1969, 1978, and 1989; M. A. Al-Gwaiz. Sturm-Liouville Theory and its Applications. Springer, London, 2008; and William Norrie Everitt. A catalogue of Sturm-Liouville differential equations. In Werner O. Amrein, Andreas M. Hinz, and David B. Pearson, editors, Sturm-Liouville Theory, Past and Present, pages 271–331. Birkhäuser Verlag, Basel, 2005. URL http://www.math.niu.edu/SL2/papers/birk0.pdf
² Russell Herman. A Second Course in Ordinary Differential Equations: Dynamical Systems and Boundary Value Problems. University of North Carolina Wilmington, Wilmington, NC, 2008. URL http://people.uncw.edu/hermanr/mat463/ODEBook/Book/ODE_LargeFont.pdf. Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License
The differential operator is defined by
\[
\mathcal{L}_x = a_0(x) + a_1(x) \frac{d}{dx} + a_2(x) \frac{d^2}{dx^2}. \tag{12.2}
\]
The solutions $y(x)$ are often subject to boundary conditions of various forms.
Dirichlet boundary conditions are of the form $y(a) = y(b) = 0$ for some $a, b$.
Neumann boundary conditions are of the form $y'(a) = y'(b) = 0$ for some $a, b$.
Periodic boundary conditions are of the form $y(a) = y(b)$ and $y'(a) = y'(b)$ for some $a, b$.
12.1 Sturm-Liouville form
Any second order differential equation of the general form (12.1) can be rewritten into a differential equation of the Sturm-Liouville form
\[
\begin{aligned}
\mathcal{S}_x y(x) &= \frac{d}{dx}\left[p(x) \frac{d}{dx}\right] y(x) + q(x)\, y(x) = F(x), \\
\text{with } p(x) &= e^{\int \frac{a_1(x)}{a_2(x)}\, dx}, \\
q(x) &= p(x)\, \frac{a_0(x)}{a_2(x)} = \frac{a_0(x)}{a_2(x)}\, e^{\int \frac{a_1(x)}{a_2(x)}\, dx}, \\
F(x) &= p(x)\, \frac{f(x)}{a_2(x)} = \frac{f(x)}{a_2(x)}\, e^{\int \frac{a_1(x)}{a_2(x)}\, dx}. \tag{12.3}
\end{aligned}
\]
The associated differential operator
\[
\mathcal{S}_x = \frac{d}{dx}\left[p(x) \frac{d}{dx}\right] + q(x)
= p(x) \frac{d^2}{dx^2} + p'(x) \frac{d}{dx} + q(x) \tag{12.4}
\]
is called Sturm-Liouville differential operator.
For a proof, we insert $p(x)$, $q(x)$ and $F(x)$ into the Sturm-Liouville form of Eq. (12.3) and compare it with Eq. (12.1):
\[
\begin{aligned}
\left\{\frac{d}{dx}\left[e^{\int \frac{a_1(x)}{a_2(x)} dx}\, \frac{d}{dx}\right] + \frac{a_0(x)}{a_2(x)}\, e^{\int \frac{a_1(x)}{a_2(x)} dx}\right\} y(x) &= \frac{f(x)}{a_2(x)}\, e^{\int \frac{a_1(x)}{a_2(x)} dx}, \\
e^{\int \frac{a_1(x)}{a_2(x)} dx} \left\{\frac{d^2}{dx^2} + \frac{a_1(x)}{a_2(x)}\, \frac{d}{dx} + \frac{a_0(x)}{a_2(x)}\right\} y(x) &= \frac{f(x)}{a_2(x)}\, e^{\int \frac{a_1(x)}{a_2(x)} dx}, \\
\left\{a_2(x) \frac{d^2}{dx^2} + a_1(x) \frac{d}{dx} + a_0(x)\right\} y(x) &= f(x). \tag{12.5}
\end{aligned}
\]
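The recipe $p(x) = e^{\int a_1/a_2\, dx}$ can be checked numerically for a concrete pair of coefficients. A sketch with the example choice $a_2(x) = x^2$, $a_1(x) = 3x$ (for which $\int a_1/a_2\, dx = 3\log x$, so $p(x) = x^3$ up to an irrelevant multiplicative constant); fixing the lower integration limit at $x = 1$ simply selects one representative of that family:

```python
import math

def a1(x): return 3.0 * x          # example coefficients a_1, a_2
def a2(x): return x * x

def p_numeric(x, n=20000):
    """p(x) = exp( integral_1^x a1(s)/a2(s) ds ), via the trapezoid rule."""
    h = (x - 1.0) / n
    s_vals = [1.0 + i * h for i in range(n + 1)]
    integrand = [a1(s) / a2(s) for s in s_vals]
    integral = h * (sum(integrand) - 0.5 * (integrand[0] + integrand[-1]))
    return math.exp(integral)
```

With this base point the constant happens to be $1$, so for instance $p(2) \approx 2^3 = 8$. (The same coefficients reappear in the worked example (12.28) below, where $p(x) = x^3$ is derived analytically.)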
12.2 Sturm-Liouville eigenvalue problem
The Sturm-Liouville eigenvalue problem is given by the differential equation
\[
\begin{aligned}
\mathcal{S}_x \phi(x) &= -\lambda\, \rho(x)\, \phi(x), \quad \text{or} \\
\frac{d}{dx}\left[p(x) \frac{d}{dx}\right] \phi(x) &+ \left[q(x) + \lambda \rho(x)\right] \phi(x) = 0, \tag{12.6}
\end{aligned}
\]
for $x \in (a,b)$ and continuous $p(x) > 0$, $p'(x)$, $q(x)$ and $\rho(x) > 0$.
We mention without proof (for proofs, see, for instance, Ref.³) that
³ M. A. Al-Gwaiz. Sturm-Liouville Theory and its Applications. Springer, London, 2008
• the eigenvalues $\lambda$ turn out to be real, countable, and ordered, and that there is a smallest eigenvalue $\lambda_1$ such that $\lambda_1 < \lambda_2 < \lambda_3 < \cdots$;
• for each eigenvalue $\lambda_j$ there exists an eigenfunction $\phi_j(x)$ with $j-1$ zeroes on $(a,b)$;
• eigenfunctions corresponding to different eigenvalues are orthogonal, and can be normalized, with respect to the weight function $\rho(x)$; that is,
\[
\langle \phi_j \mid \phi_k \rangle = \int_a^b \phi_j(x)\, \phi_k(x)\, \rho(x)\, dx = \delta_{jk}; \tag{12.7}
\]
• the set of eigenfunctions is complete; that is, any piecewise smooth function can be represented by
\[
f(x) = \sum_{k=1}^{\infty} c_k\, \phi_k(x), \quad \text{with} \quad
c_k = \frac{\langle f \mid \phi_k \rangle}{\langle \phi_k \mid \phi_k \rangle} = \langle f \mid \phi_k \rangle; \tag{12.8}
\]
• the orthonormal (with respect to the weight $\rho$) set $\{\phi_j(x) \mid j \in \mathbb{N}\}$ is a basis of a Hilbert space with the inner product
\[
\langle f \mid g \rangle = \int_a^b f(x)\, g(x)\, \rho(x)\, dx. \tag{12.9}
\]
12.3 Adjoint and self-adjoint operators
In operator theory, just as in matrix theory, we can define an adjoint oper-
ator via the scalar product defined in Eq. (12.9). In this formalization, the
Sturm-Liouville differential operator S is self-adjoint.
Let us first define the domain of a differential operator $\mathcal{L}$ as the set of all square integrable (with respect to the weight $\rho(x)$) functions $\varphi$ satisfying boundary conditions; that is,
\[
\int_a^b |\varphi(x)|^2\, \rho(x)\, dx < \infty. \tag{12.10}
\]
Then, the adjoint operator $\mathcal{L}^\dagger$ is defined by satisfying
\[
\begin{aligned}
\langle \psi \mid \mathcal{L} \varphi \rangle &= \int_a^b \psi(x)\, [\mathcal{L} \varphi(x)]\, \rho(x)\, dx \\
&= \langle \mathcal{L}^\dagger \psi \mid \varphi \rangle = \int_a^b [\mathcal{L}^\dagger \psi(x)]\, \varphi(x)\, \rho(x)\, dx \tag{12.11}
\end{aligned}
\]
for all $\psi(x)$ in the domain of $\mathcal{L}^\dagger$ and $\varphi(x)$ in the domain of $\mathcal{L}$.
Note that in the case of second order differential operators in the standard form (12.2) and with $\rho(x) = 1$, we can move the differential quotients and the entire differential operator in
\[
\begin{aligned}
\langle \psi \mid \mathcal{L} \varphi \rangle &= \int_a^b \psi(x)\, [\mathcal{L}_x \varphi(x)]\, \rho(x)\, dx \\
&= \int_a^b \psi(x)\, [a_2(x) \varphi''(x) + a_1(x) \varphi'(x) + a_0(x) \varphi(x)]\, dx \tag{12.12}
\end{aligned}
\]
from $\varphi$ to $\psi$ by one and two partial integrations.
Integrating the kernel $a_1(x) \varphi'(x)$ by parts yields
\[
\int_a^b \psi(x)\, a_1(x)\, \varphi'(x)\, dx = \left. \psi(x)\, a_1(x)\, \varphi(x) \right|_a^b - \int_a^b (\psi(x) a_1(x))'\, \varphi(x)\, dx. \tag{12.13}
\]
Integrating the kernel $a_2(x) \varphi''(x)$ by parts twice yields
\[
\begin{aligned}
\int_a^b \psi(x)\, a_2(x)\, \varphi''(x)\, dx
&= \left. \psi(x)\, a_2(x)\, \varphi'(x) \right|_a^b - \int_a^b (\psi(x) a_2(x))'\, \varphi'(x)\, dx \\
&= \left. \psi(x)\, a_2(x)\, \varphi'(x) \right|_a^b - \left. (\psi(x) a_2(x))'\, \varphi(x) \right|_a^b + \int_a^b (\psi(x) a_2(x))''\, \varphi(x)\, dx \\
&= \left. \psi(x)\, a_2(x)\, \varphi'(x) - (\psi(x) a_2(x))'\, \varphi(x) \right|_a^b + \int_a^b (\psi(x) a_2(x))''\, \varphi(x)\, dx. \tag{12.14}
\end{aligned}
\]
Combining these two calculations yields
\[
\begin{aligned}
\langle \psi \mid \mathcal{L} \varphi \rangle &= \int_a^b \psi(x)\, [\mathcal{L}_x \varphi(x)]\, \rho(x)\, dx \\
&= \int_a^b \psi(x)\, [a_2(x) \varphi''(x) + a_1(x) \varphi'(x) + a_0(x) \varphi(x)]\, dx \\
&= \left. \psi(x) a_1(x) \varphi(x) + \psi(x) a_2(x) \varphi'(x) - (\psi(x) a_2(x))' \varphi(x) \right|_a^b \\
&\quad + \int_a^b \left[(a_2(x)\psi(x))'' - (a_1(x)\psi(x))' + a_0(x)\psi(x)\right] \varphi(x)\, dx. \tag{12.15}
\end{aligned}
\]
If the boundary terms vanish (because of boundary conditions or other reasons); that is, if
\[
\left. \psi(x) a_1(x) \varphi(x) + \psi(x) a_2(x) \varphi'(x) - (\psi(x) a_2(x))' \varphi(x) \right|_a^b = 0,
\]
then Eq. (12.15) reduces to
\[
\langle \psi \mid \mathcal{L} \varphi \rangle = \int_a^b \left[(a_2(x)\psi(x))'' - (a_1(x)\psi(x))' + a_0(x)\psi(x)\right] \varphi(x)\, dx, \tag{12.16}
\]
and we can identify the adjoint differential operator as
\[
\begin{aligned}
\mathcal{L}_x^\dagger &= \frac{d^2}{dx^2}\, a_2(x) - \frac{d}{dx}\, a_1(x) + a_0(x) \\
&= \frac{d}{dx}\left[a_2(x) \frac{d}{dx} + a_2'(x)\right] - a_1'(x) - a_1(x) \frac{d}{dx} + a_0(x) \\
&= a_2'(x) \frac{d}{dx} + a_2(x) \frac{d^2}{dx^2} + a_2''(x) + a_2'(x) \frac{d}{dx} - a_1'(x) - a_1(x) \frac{d}{dx} + a_0(x) \\
&= a_2(x) \frac{d^2}{dx^2} + \left[2a_2'(x) - a_1(x)\right] \frac{d}{dx} + a_2''(x) - a_1'(x) + a_0(x). \tag{12.17}
\end{aligned}
\]
If
\[
\mathcal{L}_x^\dagger = \mathcal{L}_x, \tag{12.18}
\]
the operator $\mathcal{L}_x$ is called self-adjoint.
In order to prove that the Sturm-Liouville differential operator
\[
\mathcal{S} = \frac{d}{dx}\left[p(x) \frac{d}{dx}\right] + q(x) = p(x) \frac{d^2}{dx^2} + p'(x) \frac{d}{dx} + q(x) \tag{12.19}
\]
from Eq. (12.4) is self-adjoint, we verify Eq. (12.17) with $\mathcal{S}^\dagger$ taken from Eq. (12.16). Thereby, we identify $a_2(x) = p(x)$, $a_1(x) = p'(x)$, and $a_0(x) = q(x)$; hence
\[
\begin{aligned}
\mathcal{S}_x^\dagger &= a_2(x) \frac{d^2}{dx^2} + \left[2a_2'(x) - a_1(x)\right] \frac{d}{dx} + a_2''(x) - a_1'(x) + a_0(x) \\
&= p(x) \frac{d^2}{dx^2} + \left[2p'(x) - p'(x)\right] \frac{d}{dx} + p''(x) - p''(x) + q(x) \\
&= p(x) \frac{d^2}{dx^2} + p'(x) \frac{d}{dx} + q(x) = \mathcal{S}_x. \tag{12.20}
\end{aligned}
\]
Alternatively we could argue from Eqs. (12.17) and (12.18), noting that a differential operator is self-adjoint if and only if
\[
\begin{aligned}
\mathcal{L}_x &= a_2(x) \frac{d^2}{dx^2} + a_1(x) \frac{d}{dx} + a_0(x) \\
&= \mathcal{L}_x^\dagger = a_2(x) \frac{d^2}{dx^2} + \left[2a_2'(x) - a_1(x)\right] \frac{d}{dx} + a_2''(x) - a_1'(x) + a_0(x). \tag{12.21}
\end{aligned}
\]
By comparison of the coefficients,
\[
\begin{aligned}
a_2(x) &= a_2(x), \\
a_1(x) &= 2a_2'(x) - a_1(x), \\
a_0(x) &= a_2''(x) - a_1'(x) + a_0(x), \tag{12.22}
\end{aligned}
\]
and hence,
\[
a_2'(x) = a_1(x), \tag{12.23}
\]
which is exactly the form of the Sturm-Liouville differential operator.
12.4 Sturm-Liouville transformation into Liouville normal form
Let, for $x \in [a,b]$,
\[
\begin{aligned}
\left[\mathcal{S}_x + \lambda \rho(x)\right] y(x) &= 0, \\
\frac{d}{dx}\left[p(x)\frac{d}{dx}\right] y(x) + \left[q(x) + \lambda \rho(x)\right] y(x) &= 0, \\
\left[p(x)\frac{d^2}{dx^2} + p'(x)\frac{d}{dx} + q(x) + \lambda \rho(x)\right] y(x) &= 0, \\
\left[\frac{d^2}{dx^2} + \frac{p'(x)}{p(x)}\frac{d}{dx} + \frac{q(x) + \lambda \rho(x)}{p(x)}\right] y(x) &= 0 \tag{12.24}
\end{aligned}
\]
be a second order differential equation of the Sturm-Liouville form⁴.
⁴ Garrett Birkhoff and Gian-Carlo Rota. Ordinary Differential Equations. John Wiley & Sons, New York, Chichester, Brisbane, Toronto, fourth edition, 1959, 1960, 1962, 1969, 1978, and 1989
This equation (12.24) can be written in the Liouville normal form containing no first order differentiation term,
\[
-\frac{d^2}{dt^2}\, w(t) + \left[\hat q(t) - \lambda\right] w(t) = 0, \quad \text{with } t \in [t(a), t(b)]. \tag{12.25}
\]
It is obtained via the Sturm-Liouville transformation
\[
\begin{aligned}
\xi = t(x) &= \int_a^x \sqrt{\frac{\rho(s)}{p(s)}}\, ds, \\
w(t) &= \sqrt[4]{p(x(t))\, \rho(x(t))}\; y(x(t)), \tag{12.26}
\end{aligned}
\]
where
\[
\hat q(t) = \frac{1}{\rho}\left[-q - \sqrt[4]{p\rho}\left(p \left(\frac{1}{\sqrt[4]{p\rho}}\right)'\right)'\right]. \tag{12.27}
\]
The prime denotes differentiation with respect to $x$.
For the sake of an example, suppose we want to know the normalized eigenfunctions of
\[
x^2 y'' + 3x y' + y = -\lambda y, \quad \text{with } x \in [1,2], \tag{12.28}
\]
with the boundary conditions $y(1) = y(2) = 0$.
The first thing we have to do is to transform this differential equation into its Sturm-Liouville form by identifying $a_2(x) = x^2$, $a_1(x) = 3x$, $a_0 = 1$, and $f(x) = -\lambda y(x)$; hence
\[
\begin{aligned}
p(x) &= e^{\int \frac{3x}{x^2}\, dx} = e^{\int \frac{3}{x}\, dx} = e^{3 \log x} = x^3, \\
q(x) &= p(x)\, \frac{1}{x^2} = x, \\
F(x) &= p(x)\, \frac{(-\lambda y)}{x^2} = -\lambda x y, \quad \text{and hence } \rho(x) = x. \tag{12.29}
\end{aligned}
\]
As a result we obtain the Sturm-Liouville form
\[
\frac{1}{x}\left((x^3 y')' + x y\right) = -\lambda y. \tag{12.30}
\]
In the next step we apply the Sturm-Liouville transformation
\[
\begin{aligned}
\xi = t(x) &= \int \sqrt{\frac{\rho(x)}{p(x)}}\, dx = \int \frac{dx}{x} = \log x, \\
w(t(x)) &= \sqrt[4]{p(x(t))\, \rho(x(t))}\; y(x(t)) = \sqrt[4]{x^4}\, y(x(t)) = x\, y, \\
\hat q(t) &= \frac{1}{x}\left[-x - \sqrt[4]{x^4}\left(x^3\left(\frac{1}{\sqrt[4]{x^4}}\right)'\right)'\right] = 0. \tag{12.31}
\end{aligned}
\]
We now take the Ansatz $y = \frac{1}{x}\, w(t(x)) = \frac{1}{x}\, w(\log x)$ and finally obtain the Liouville normal form
\[
-w'' = \lambda w. \tag{12.32}
\]
As an Ansatz for solving the Liouville normal form we use
\[
w(\xi) = a \sin(\sqrt{\lambda}\,\xi) + b \cos(\sqrt{\lambda}\,\xi). \tag{12.33}
\]
The boundary conditions translate into $x = 1 \to \xi = 0$, and $x = 2 \to \xi = \log 2$. From $w(0) = 0$ we obtain $b = 0$. From $w(\log 2) = a \sin(\sqrt{\lambda} \log 2) = 0$ we obtain $\sqrt{\lambda_n} \log 2 = n\pi$.
Thus the eigenvalues are
\[
\lambda_n = \left(\frac{n\pi}{\log 2}\right)^2. \tag{12.34}
\]
The associated eigenfunctions are
\[
w_n(\xi) = a \sin\left[\frac{n\pi}{\log 2}\, \xi\right], \tag{12.35}
\]
and thus
\[
y_n = \frac{1}{x}\, a \sin\left[\frac{n\pi}{\log 2}\, \log x\right]. \tag{12.36}
\]
We can check that they are orthonormal by inserting into Eq. (12.7) and verifying it; that is,
\[
\int_1^2 \rho(x)\, y_n(x)\, y_m(x)\, dx = \delta_{nm}; \tag{12.37}
\]
more explicitly,
\[
\begin{aligned}
&\int_1^2 dx\; x \left(\frac{1}{x^2}\right) a^2 \sin\left(n\pi\, \frac{\log x}{\log 2}\right) \sin\left(m\pi\, \frac{\log x}{\log 2}\right) \\
&\quad \left[\text{variable substitution } u = \frac{\log x}{\log 2},\quad \frac{du}{dx} = \frac{1}{\log 2}\, \frac{1}{x},\quad du = \frac{dx}{x \log 2}\right] \\
&= \int_{u=0}^{u=1} du\; \log 2\; a^2 \sin(n\pi u) \sin(m\pi u) \\
&= \underbrace{a^2 \left(\frac{\log 2}{2}\right)}_{= 1}\; \underbrace{2\int_0^1 du\, \sin(n\pi u) \sin(m\pi u)}_{= \delta_{nm}} = \delta_{nm}. \tag{12.38}
\end{aligned}
\]
Finally, with $a = \sqrt{\frac{2}{\log 2}}$ we obtain the solution
\[
y_n = \sqrt{\frac{2}{\log 2}}\; \frac{1}{x} \sin\left(n\pi\, \frac{\log x}{\log 2}\right). \tag{12.39}
\]
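Both the original eigenvalue equation (12.28) and the orthonormality (12.37) can be verified numerically for these $y_n$; a sketch using central finite differences for the derivatives and the trapezoid rule for the weighted integral (step sizes and sample points chosen arbitrarily):

```python
import math

LOG2 = math.log(2.0)

def y_n(n, x):
    """Normalized eigenfunctions (12.39) on [1, 2]."""
    return math.sqrt(2.0 / LOG2) / x * math.sin(n * math.pi * math.log(x) / LOG2)

def ode_residual(n, x, h=1e-5):
    """x^2 y'' + 3x y' + y + lambda_n y, which should be ~0."""
    lam = (n * math.pi / LOG2)**2
    y0 = y_n(n, x)
    y1 = (y_n(n, x + h) - y_n(n, x - h)) / (2.0 * h)
    y2 = (y_n(n, x + h) - 2.0 * y0 + y_n(n, x - h)) / h**2
    return x * x * y2 + 3.0 * x * y1 + y0 + lam * y0

def weighted_overlap(n, m, n_grid=20000):
    """integral_1^2 rho(x) y_n(x) y_m(x) dx with weight rho(x) = x."""
    h = 1.0 / n_grid
    vals = [(1.0 + i * h) * y_n(n, 1.0 + i * h) * y_n(m, 1.0 + i * h)
            for i in range(n_grid + 1)]
    return h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))
```

The boundary values vanish, the ODE residual is zero to finite-difference accuracy, and the weighted overlaps reproduce $\delta_{nm}$.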
12.5 Varieties of Sturm-Liouville differential equations
A catalogue of Sturm-Liouville differential equations comprises the following species, among many others⁵. Some of these cases are tabulated as functions $p$, $q$, $\lambda$ and $\rho$ appearing in the general form of the Sturm-Liouville eigenvalue problem (12.6),
\[
\begin{aligned}
\mathcal{S}_x \phi(x) &= -\lambda\, \rho(x)\, \phi(x), \quad \text{or} \\
\frac{d}{dx}\left[p(x)\frac{d}{dx}\right] \phi(x) &+ \left[q(x) + \lambda \rho(x)\right] \phi(x) = 0, \tag{12.40}
\end{aligned}
\]
in Table 12.1.
⁵ George B. Arfken and Hans J. Weber. Mathematical Methods for Physicists. Elsevier, Oxford, 6th edition, 2005. ISBN 0-12-059876-0; 0-12-088584-0; M. A. Al-Gwaiz. Sturm-Liouville Theory and its Applications. Springer, London, 2008; and William Norrie Everitt. A catalogue of Sturm-Liouville differential equations. In Werner O. Amrein, Andreas M. Hinz, and David B. Pearson, editors, Sturm-Liouville Theory, Past and Present, pages 271–331. Birkhäuser Verlag, Basel, 2005. URL http://www.math.niu.edu/SL2/papers/birk0.pdf
Equation                        p(x)                   q(x)             −λ         ρ(x)
Hypergeometric                  x^(α+1)(1−x)^(β+1)     0                µ          x^α(1−x)^β
Legendre                        1−x²                   0                l(l+1)     1
Shifted Legendre                x(1−x)                 0                l(l+1)     1
Associated Legendre             1−x²                   −m²/(1−x²)       l(l+1)     1
Chebyshev I                     √(1−x²)                0                n²         1/√(1−x²)
Shifted Chebyshev I             √(x(1−x))              0                n²         1/√(x(1−x))
Chebyshev II                    (1−x²)^(3/2)           0                n(n+2)     √(1−x²)
Ultraspherical (Gegenbauer)     (1−x²)^(α+1/2)         0                n(n+2α)    (1−x²)^(α−1/2)
Bessel                          x                      −n²/x            a²         x
Laguerre                        x e^(−x)               0                α          e^(−x)
Associated Laguerre             x^(k+1) e^(−x)         0                α−k        x^k e^(−x)
Hermite                         e^(−x²)                0                2α         e^(−x²)
Fourier (harmonic oscillator)   1                      0                k²         1
Schrödinger (hydrogen atom)     1                      l(l+1)x^(−2)     µ          1

Table 12.1: Some varieties of differential equations expressible as Sturm-Liouville differential equations
13
Separation of variables
This chapter deals with the ancient alchemic suspicion of “solve et coagula”: that it is possible to solve a problem by splitting it up into partial problems, solving these issues separately, and consecutively joining together the partial solutions, thereby yielding the full answer to the problem – translated into the context of partial differential equations; that is, equations with derivatives of more than one variable. Thereby, solving the separate partial problems is not dissimilar to applying subprograms from some program library. For a counterexample see the Kochen-Specker theorem on page 86.
Already Descartes mentioned this sort of method in his Discours de la méthode pour bien conduire sa raison et chercher la verité dans les sciences (English translation: Discourse on the Method of Rightly Conducting One’s Reason and of Seeking Truth)¹, stating that (in a newer translation²)
¹ Rene Descartes. Discours de la méthode pour bien conduire sa raison et chercher la verité dans les sciences (Discourse on the Method of Rightly Conducting One’s Reason and of Seeking Truth). 1637. URL http://www.gutenberg.org/etext/59
² Rene Descartes. The Philosophical Writings of Descartes. Volume 1. Cambridge University Press, Cambridge, 1985. Translated by John Cottingham, Robert Stoothoff and Dugald Murdoch
[Rule Five:] The whole method consists entirely in the ordering and arranging of the objects on which we must concentrate our mind’s eye if we are to discover some truth. We shall be following this method exactly if we first reduce complicated and obscure propositions step by step to simpler ones, and then, starting with the intuition of the simplest ones of all, try to ascend through the same steps to a knowledge of all the rest. [Rule Thirteen:] If we perfectly understand a problem we must abstract it from every superfluous conception, reduce it to its simplest terms and, by means of an enumeration, divide it up into the smallest possible parts.
The method of separation of variables is one among a couple of strategies to solve differential equations³, and it is a very important one in physics.
³ Lawrence C. Evans. Partial differential equations. Graduate Studies in Mathematics, volume 19. American Mathematical Society, Providence, Rhode Island, 1998; and Klaus Jänich. Analysis für Physiker und Ingenieure. Funktionentheorie, Differentialgleichungen, Spezielle Funktionen. Springer, Berlin, Heidelberg, fourth edition, 2001. URL http://www.springer.com/mathematics/analysis/book/978-3-540-41985-3
Separation of variables can be applied whenever we have no “mixtures of derivatives and functional dependencies;” more specifically, whenever the partial differential equation can be written as a sum
\[
\mathcal{L}_{x,y} \psi(x,y) = (\mathcal{L}_x + \mathcal{L}_y)\, \psi(x,y) = 0, \quad \text{or} \quad
\mathcal{L}_x \psi(x,y) = -\mathcal{L}_y \psi(x,y). \tag{13.1}
\]
Because in this case we may make a multiplicative Ansatz
\[
\psi(x,y) = v(x)\, u(y). \tag{13.2}
\]
204 M AT H E M AT I C A L M E T H O D S O F T H E O R E T I C A L P H Y S I C S
Inserting (13.2) into (13.1) effectively separates the variable dependencies:
\[
\begin{aligned}
\mathcal{L}_x v(x)\, u(y) &= -\mathcal{L}_y v(x)\, u(y), \\
u(y) \left[\mathcal{L}_x v(x)\right] &= -v(x) \left[\mathcal{L}_y u(y)\right], \\
\frac{1}{v(x)}\, \mathcal{L}_x v(x) &= -\frac{1}{u(y)}\, \mathcal{L}_y u(y) = a, \tag{13.3}
\end{aligned}
\]
with constant $a$, because $\frac{\mathcal{L}_x v(x)}{v(x)}$ does not depend on $y$, and $\frac{\mathcal{L}_y u(y)}{u(y)}$ does not depend on $x$. Therefore, neither side depends on $x$ or $y$; hence both sides are constants.
As a result, we can treat and integrate both sides separately; that is,
\[
\frac{1}{v(x)}\, \mathcal{L}_x v(x) = a, \qquad \frac{1}{u(y)}\, \mathcal{L}_y u(y) = -a, \tag{13.4}
\]
or, equivalently,
\[
\mathcal{L}_x v(x) - a\, v(x) = 0, \qquad \mathcal{L}_y u(y) + a\, u(y) = 0. \tag{13.5}
\]
This separation of variables Ansatz can often be used when the Laplace operator $\Delta = \nabla \cdot \nabla$ is involved, since there the partial derivatives with respect to different variables occur in different summands.
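The separation mechanism of Eqs. (13.3)–(13.5) can be illustrated numerically: for the two-dimensional Laplace equation with $\mathcal{L}_x = \mathcal{L}_y = \frac{\partial^2}{\partial s^2}$, the product Ansatz $\psi(x,y) = \sin(ax)\sinh(ay)$ gives $v''/v = -a^2$ and $u''/u = +a^2$ – the same separation constant with opposite signs. A sketch, with $a = 2$ and the sample points chosen arbitrarily:

```python
import math

A = 2.0                             # separation parameter, chosen arbitrarily
H = 1e-5                            # finite-difference step

def v(x): return math.sin(A * x)    # x factor of the product Ansatz
def u(y): return math.sinh(A * y)   # y factor

def second_derivative(f, s):
    return (f(s + H) - 2.0 * f(s) + f(s - H)) / H**2

# Ratios v''/v and u''/u at sample points: constant, equal and opposite
ratio_v = second_derivative(v, 0.3) / v(0.3)    # should be ~ -A**2
ratio_u = second_derivative(u, 0.7) / u(0.7)    # should be ~ +A**2
laplacian = (second_derivative(v, 0.3) * u(0.7) +
             v(0.3) * second_derivative(u, 0.7))
```

Since the two ratios cancel, the Laplacian of the product vanishes, which is exactly the statement that both sides of (13.3) equal the same constant.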
For the sake of demonstration, let us consider a few examples.
1. Let us separate the homogeneous Laplace differential equation
\[
\Delta \Phi = \frac{1}{u^2+v^2}\left(\frac{\partial^2 \Phi}{\partial u^2} + \frac{\partial^2 \Phi}{\partial v^2}\right) + \frac{\partial^2 \Phi}{\partial z^2} = 0 \tag{13.6}
\]
in parabolic cylinder coordinates $(u,v,z)$ with $\vec{x} = \left(\frac{1}{2}(u^2-v^2), uv, z\right)$.
The separation of variables Ansatz is
\[
\Phi(u,v,z) = \Phi_1(u)\, \Phi_2(v)\, \Phi_3(z).
\]
Then,
\[
\begin{aligned}
\frac{1}{u^2+v^2}\left(\Phi_2 \Phi_3 \frac{\partial^2 \Phi_1}{\partial u^2} + \Phi_1 \Phi_3 \frac{\partial^2 \Phi_2}{\partial v^2}\right) + \Phi_1 \Phi_2 \frac{\partial^2 \Phi_3}{\partial z^2} &= 0, \\
\frac{1}{u^2+v^2}\left(\frac{\Phi_1''}{\Phi_1} + \frac{\Phi_2''}{\Phi_2}\right) = -\frac{\Phi_3''}{\Phi_3} &= \lambda = \text{const.}
\end{aligned}
\]
$\lambda$ is constant because it neither depends on $u, v$ [because of the right hand side $\Phi_3''(z)/\Phi_3(z)$], nor on $z$ (because of the left hand side).
Furthermore,
\[
\frac{\Phi_1''}{\Phi_1} - \lambda u^2 = -\frac{\Phi_2''}{\Phi_2} + \lambda v^2 = l^2 = \text{const.},
\]
with constant $l$ for analogous reasons. The three resulting differential equations are
\[
\begin{aligned}
\Phi_1'' - (\lambda u^2 + l^2)\, \Phi_1 &= 0, \\
\Phi_2'' - (\lambda v^2 - l^2)\, \Phi_2 &= 0, \\
\Phi_3'' + \lambda\, \Phi_3 &= 0.
\end{aligned}
\]
2. Let us separate the homogeneous (i) Laplace, (ii) wave, and (iii) diffusion equations, in elliptic cylinder coordinates $(u,v,z)$ with $\vec{x} = (a \cosh u \cos v, a \sinh u \sin v, z)$ and
\[
\Delta = \frac{1}{a^2(\sinh^2 u + \sin^2 v)}\left[\frac{\partial^2}{\partial u^2} + \frac{\partial^2}{\partial v^2}\right] + \frac{\partial^2}{\partial z^2}.
\]
ad (i): Again the separation of variables Ansatz is $\Phi(u,v,z) = \Phi_1(u)\Phi_2(v)\Phi_3(z)$. Hence,
\[
\begin{aligned}
\frac{1}{a^2(\sinh^2 u + \sin^2 v)}\left(\Phi_2 \Phi_3 \frac{\partial^2 \Phi_1}{\partial u^2} + \Phi_1 \Phi_3 \frac{\partial^2 \Phi_2}{\partial v^2}\right) &= -\Phi_1 \Phi_2 \frac{\partial^2 \Phi_3}{\partial z^2}, \\
\frac{1}{a^2(\sinh^2 u + \sin^2 v)}\left(\frac{\Phi_1''}{\Phi_1} + \frac{\Phi_2''}{\Phi_2}\right) = -\frac{\Phi_3''}{\Phi_3} = k^2 = \text{const.} &\Longrightarrow \Phi_3'' + k^2 \Phi_3 = 0, \\
\frac{\Phi_1''}{\Phi_1} + \frac{\Phi_2''}{\Phi_2} &= k^2 a^2 (\sinh^2 u + \sin^2 v), \\
\frac{\Phi_1''}{\Phi_1} - k^2 a^2 \sinh^2 u = -\frac{\Phi_2''}{\Phi_2} + k^2 a^2 \sin^2 v &= l^2, \tag{13.7}
\end{aligned}
\]
and finally,
\[
\begin{aligned}
\Phi_1'' - (k^2 a^2 \sinh^2 u + l^2)\, \Phi_1 &= 0, \\
\Phi_2'' - (k^2 a^2 \sin^2 v - l^2)\, \Phi_2 &= 0.
\end{aligned}
\]
ad (ii): The wave equation is given by
\[
\Delta \Phi = \frac{1}{c^2}\, \frac{\partial^2 \Phi}{\partial t^2}.
\]
Hence,
\[
\frac{1}{a^2(\sinh^2 u + \sin^2 v)}\left(\frac{\partial^2}{\partial u^2} + \frac{\partial^2}{\partial v^2}\right)\Phi + \frac{\partial^2 \Phi}{\partial z^2} = \frac{1}{c^2}\, \frac{\partial^2 \Phi}{\partial t^2}.
\]
The separation of variables Ansatz is $\Phi(u,v,z,t) = \Phi_1(u)\Phi_2(v)\Phi_3(z)T(t)$:
\[
\begin{aligned}
\Longrightarrow \frac{1}{a^2(\sinh^2 u + \sin^2 v)}\left(\frac{\Phi_1''}{\Phi_1} + \frac{\Phi_2''}{\Phi_2}\right) + \frac{\Phi_3''}{\Phi_3} &= \frac{1}{c^2}\, \frac{T''}{T} = -\omega^2 = \text{const.}, \\
\frac{1}{c^2}\, \frac{T''}{T} = -\omega^2 &\Longrightarrow T'' + c^2 \omega^2 T = 0, \\
\frac{1}{a^2(\sinh^2 u + \sin^2 v)}\left(\frac{\Phi_1''}{\Phi_1} + \frac{\Phi_2''}{\Phi_2}\right) = -\frac{\Phi_3''}{\Phi_3} - \omega^2 &= k^2, \\
\Phi_3'' + (\omega^2 + k^2)\, \Phi_3 &= 0, \\
\frac{\Phi_1''}{\Phi_1} + \frac{\Phi_2''}{\Phi_2} &= k^2 a^2 (\sinh^2 u + \sin^2 v), \\
\frac{\Phi_1''}{\Phi_1} - a^2 k^2 \sinh^2 u = -\frac{\Phi_2''}{\Phi_2} + a^2 k^2 \sin^2 v &= l^2, \tag{13.8}
\end{aligned}
\]
and finally,
\[
\begin{aligned}
\Phi_1'' - (k^2 a^2 \sinh^2 u + l^2)\, \Phi_1 &= 0, \\
\Phi_2'' - (k^2 a^2 \sin^2 v - l^2)\, \Phi_2 &= 0.
\end{aligned}
\]
ad (iii): The diffusion equation is $\Delta \Phi = \frac{1}{D}\, \frac{\partial \Phi}{\partial t}$.
The separation of variables Ansatz is $\Phi(u,v,z,t) = \Phi_1(u)\Phi_2(v)\Phi_3(z)T(t)$. Let us take the result of (i); then
\[
\begin{aligned}
\frac{1}{a^2(\sinh^2 u + \sin^2 v)}\left(\frac{\Phi_1''}{\Phi_1} + \frac{\Phi_2''}{\Phi_2}\right) + \frac{\Phi_3''}{\Phi_3} &= \frac{1}{D}\, \frac{T'}{T} = -\alpha^2 = \text{const.}, \\
T &= A\, e^{-\alpha^2 D t}, \\
\Phi_3'' + (\alpha^2 + k^2)\, \Phi_3 = 0 \Longrightarrow \Phi_3'' = -(\alpha^2 + k^2)\, \Phi_3 &\Longrightarrow \Phi_3 = B\, e^{i\sqrt{\alpha^2 + k^2}\, z}, \tag{13.9}
\end{aligned}
\]
and finally,
\[
\begin{aligned}
\Phi_1'' - (k^2 a^2 \sinh^2 u + l^2)\, \Phi_1 &= 0, \\
\Phi_2'' - (k^2 a^2 \sin^2 v - l^2)\, \Phi_2 &= 0.
\end{aligned}
\]
14
Special functions of mathematical physics
This chapter follows several good approaches¹. For reference, consider².
¹ N. N. Lebedev. Special Functions and Their Applications. Prentice-Hall Inc., Englewood Cliffs, N.J., 1965. R. A. Silverman, translator and editor; reprinted by Dover, New York, 1972; Herbert S. Wilf. Mathematics for the physical sciences. Dover, New York, 1962. URL http://www.math.upenn.edu/~wilf/website/Mathematics_for_the_Physical_Sciences.html; W. W. Bell. Special Functions for Scientists and Engineers. D. Van Nostrand Company Ltd, London, 1968; Nico M. Temme. Special functions: an introduction to the classical functions of mathematical physics. John Wiley & Sons, Inc., New York, 1996. ISBN 0-471-11313-1; Nico M. Temme. Numerical aspects of special functions. Acta Numerica, 16:379–478, 2007. ISSN 0962-4929. DOI: 10.1017/S0962492904000077. URL http://dx.doi.org/10.1017/S0962492904000077; George E. Andrews, Richard Askey, and Ranjan Roy. Special Functions, volume 71 of Encyclopedia of Mathematics and its Applications. Cambridge University Press, Cambridge, 1999. ISBN 0-521-62321-9; Vadim Kuznetsov. Special functions and their symmetries. Part I: Algebraic and analytic methods. Postgraduate Course in Applied Analysis, May 2003. URL http://www1.maths.leeds.ac.uk/~kisilv/courses/sp-funct.pdf; and Vladimir Kisil. Special functions and their symmetries. Part II: Algebraic and symmetry methods. Postgraduate Course in Applied Analysis, May 2003. URL http://www1.maths.leeds.ac.uk/~kisilv/courses/sp-repr.pdf
² Milton Abramowitz and Irene A. Stegun, editors. Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. Number 55 in National Bureau of Standards Applied Mathematics Series. U.S. Government Printing Office, Washington, D.C., 1964. Corrections appeared in later printings up to the 10th Printing, December, 1972. Reproductions by other publishers, in whole or in part, have been available since 1965; Yuri Alexandrovich Brychkov and Anatolii Platonovich Prudnikov. Handbook of special functions: derivatives, integrals, series and other formulas. CRC/Chapman & Hall Press, Boca Raton, London, New York, 2008; and I. S. Gradshteyn and I. M. Ryzhik. Tables of Integrals, Series, and Products, 6th ed. Academic Press, San Diego, CA, 2000
Special functions often arise as solutions of differential equations; for
instance as eigenfunctions of differential operators in quantum mechanics.
Sometimes they occur after several separation of variables and substitution
steps have transformed the physical problem into something manageable.
For instance, we might start out with some linear partial differential equa-
tion like the wave equation, then separate the space from time coordinates,
then separate the radial from the angular components, and finally separate
the two angular parameters. After we have done that, we end up with sev-
eral separate differential equations of the Liouville form; among them the
Legendre differential equation leading us to the Legendre polynomials.
In what follows, a particular class of special functions will be considered. These functions are all special cases of the hypergeometric function, which is the solution of the hypergeometric equation. The hypergeometric function exhibits a high degree of “plasticity,” as many elementary analytic functions can be expressed in terms of it.
First, as a prerequisite, let us define the gamma function. Then we pro-
ceed to second order Fuchsian differential equations; followed by rewriting
a Fuchsian differential equation into a hypergeometric equation. Then we
study the hypergeometric function as a solution to the hypergeometric
equation. Finally, we mention some particular hypergeometric functions,
such as the Legendre orthogonal polynomials, and others.
Again, if not mentioned otherwise, we shall restrict our attention to
second order differential equations. Sometimes – such as for the Fuchsian
class – a generalization is possible but not very relevant for physics.
14.1 Gamma function
The gamma function $\Gamma(x)$ is an extension of the factorial function $n!$, because it generalizes the factorial to real or complex arguments (different from the negative integers and from zero); that is,
\[
\Gamma(n+1) = n! \quad \text{for } n \in \mathbb{N}. \tag{14.1}
\]
Let us first define the shifted factorial or, by another naming, the Pochhammer symbol,
$$(a)_0 \stackrel{\text{def}}{=} 1, \qquad (a)_n \stackrel{\text{def}}{=} a(a+1)\cdots(a+n-1) = \frac{\Gamma(a+n)}{\Gamma(a)}, \tag{14.2}$$
where $n > 0$ and $a$ can be any real or complex number.
With this definition,
$$\begin{aligned}
z!\,(z+1)_n &= 1\cdot 2\cdots z\cdot (z+1)\bigl((z+1)+1\bigr)\cdots\bigl((z+1)+n-1\bigr)\\
&= 1\cdot 2\cdots z\cdot (z+1)(z+2)\cdots(z+n)\\
&= (z+n)!,\qquad\text{or}\qquad z! = \frac{(z+n)!}{(z+1)_n}.
\end{aligned}\tag{14.3}$$
Since
$$(z+n)! = (n+z)! = 1\cdot 2\cdots n\cdot(n+1)(n+2)\cdots(n+z) = n!\,(n+1)_z, \tag{14.4}$$
we can rewrite Eq. (14.3) into
$$z! = \frac{n!\,(n+1)_z}{(z+1)_n} = \frac{n!\,n^z}{(z+1)_n}\,\frac{(n+1)_z}{n^z}. \tag{14.5}$$
Since the latter factor, for large $n$, converges as ["$O(x)$" means "of the order of $x$"]
$$\begin{aligned}
\frac{(n+1)_z}{n^z} &= \frac{(n+1)\bigl((n+1)+1\bigr)\cdots\bigl((n+1)+z-1\bigr)}{n^z}\\
&= \frac{n^z + O(n^{z-1})}{n^z} = \frac{n^z}{n^z} + \frac{O(n^{z-1})}{n^z} = 1 + O(n^{-1}) \xrightarrow{\ n\to\infty\ } 1,
\end{aligned}\tag{14.6}$$
in this limit, Eq. (14.5) can be written as
$$z! = \lim_{n\to\infty} z! = \lim_{n\to\infty} \frac{n!\,n^z}{(z+1)_n}. \tag{14.7}$$
Hence, for all $z \in \mathbb{C}$ which are not equal to a negative integer – that is, $z \notin \{-1, -2, \ldots\}$ – we can, in analogy to the "classical factorial," define a "factorial function shifted by one" as
$$\Gamma(z+1) \stackrel{\text{def}}{=} \lim_{n\to\infty} \frac{n!\,n^z}{(z+1)_n}, \tag{14.8}$$
and thus, because for very large $n$ and constant $z$ (i.e., $z \ll n$), $(z+n) \approx n$,
$$\begin{aligned}
\Gamma(z) &= \lim_{n\to\infty} \frac{n!\,n^{z-1}}{(z)_n}
= \lim_{n\to\infty} \frac{n!\,n^{z-1}}{z(z+1)\cdots(z+n-1)}\\
&= \lim_{n\to\infty} \frac{n!\,n^{z-1}}{z(z+1)\cdots(z+n-1)}\underbrace{\left(\frac{z+n}{z+n}\right)}_{1}
= \lim_{n\to\infty} \frac{n!\,n^{z-1}(z+n)}{z(z+1)\cdots(z+n)}\\
&= \frac{1}{z}\lim_{n\to\infty} \frac{n!\,n^z}{(z+1)_n}.
\end{aligned}\tag{14.9}$$
$\Gamma(z+1)$ has thus been related to $z!$ as defined in Eq. (14.3); comparing Eqs. (14.8) and (14.9) also implies that
$$\Gamma(z+1) = z\,\Gamma(z). \tag{14.10}$$
Note that, since
$$(1)_n = 1(1+1)(1+2)\cdots(1+n-1) = n!, \tag{14.11}$$
Eq. (14.8) yields
$$\Gamma(1) = \lim_{n\to\infty} \frac{n!\,n^0}{(1)_n} = \lim_{n\to\infty} \frac{n!}{n!} = 1. \tag{14.12}$$
By induction, Eqs. (14.12) and (14.10) yield Γ(n +1) = n! for n ∈N.
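Euler's limit representation (14.8)–(14.9) can be checked numerically; here is a minimal sketch (the function name and the comparison values are my own choices, not from the text):

```python
import math

def gamma_limit(z, n=100_000):
    """Euler's limit representation of the gamma function, Eq. (14.9):
    Γ(z) = lim_{n→∞} n! n^(z-1) / (z(z+1)···(z+n-1)).
    Evaluated in log space to avoid overflow; converges like O(1/n)."""
    log_val = math.lgamma(n + 1) + (z - 1) * math.log(n)  # log(n! n^(z-1))
    for k in range(n):
        log_val -= math.log(z + k)                        # divide by (z)_n
    return math.exp(log_val)

# Γ(1/2) = √π (Eq. 14.15) and Γ(4) = 3! (Eq. 14.1)
print(gamma_limit(0.5), math.sqrt(math.pi))
print(gamma_limit(4.0), math.factorial(3))
```

Because the convergence is only $O(1/n)$, the agreement is to a few digits, which is enough to make the limit formula plausible.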
We state without proof that, for complex numbers $z$ with positive real part $\Re z > 0$, $\Gamma(z)$ can be defined by an integral representation as
$$\Gamma(z) \stackrel{\text{def}}{=} \int_0^\infty t^{z-1} e^{-t}\,dt. \tag{14.13}$$
Note that Eq. (14.10) can be derived from this integral representation of $\Gamma(z)$ by partial integration; that is,
$$\begin{aligned}
\Gamma(z+1) &= \int_0^\infty t^z e^{-t}\,dt
= \underbrace{\left.-t^z e^{-t}\right|_0^\infty}_{0} - \left[-\int_0^\infty \left(\frac{d}{dt}t^z\right)e^{-t}\,dt\right]\\
&= \int_0^\infty z\,t^{z-1}e^{-t}\,dt = z\int_0^\infty t^{z-1}e^{-t}\,dt = z\,\Gamma(z).
\end{aligned}\tag{14.14}$$
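The integral representation (14.13) and the recurrence (14.10) can be probed with elementary quadrature; a minimal sketch (truncation point and step count are my own choices, assuming $\Re z > 1$ so the integrand vanishes at $t = 0$):

```python
import math

def gamma_integral(z, n_steps=40_000, t_max=40.0):
    """Trapezoidal approximation of Eq. (14.13), ∫_0^∞ t^(z-1) e^(-t) dt,
    truncated at t_max (the tail beyond t = 40 is of order e^(-40))."""
    h = t_max / n_steps
    total = 0.0
    for i in range(1, n_steps):          # endpoints contribute ~0 for z > 1
        t = i * h
        total += t**(z - 1) * math.exp(-t)
    return h * total

# recurrence Γ(z+1) = z Γ(z), Eq. (14.10)
print(gamma_integral(3.5), 2.5 * gamma_integral(2.5))
```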
We also mention without proof the following formulæ:
$$\Gamma\left(\tfrac{1}{2}\right) = \sqrt{\pi}, \tag{14.15}$$
or, more generally,
$$\Gamma\left(\frac{n}{2}\right) = \sqrt{\pi}\,\frac{(n-2)!!}{2^{(n-1)/2}} \quad\text{for odd } n > 0; \quad\text{and} \tag{14.16}$$
Euler's reflection formula
$$\Gamma(x)\,\Gamma(1-x) = \frac{\pi}{\sin(\pi x)}. \tag{14.17}$$
Here, the double factorial is defined by
$$n!! = \begin{cases}
1 & \text{for } n = -1, 0;\\[4pt]
2\cdot 4\cdots (n-2)\cdot n = (2k)!! = \displaystyle\prod_{i=1}^{k}(2i) = k!\,2^k & \text{for positive even } n = 2k,\ k \geq 1;\\[4pt]
1\cdot 3\cdots (n-2)\cdot n = (2k-1)!! = \displaystyle\prod_{i=1}^{k}(2i-1) = \frac{1\cdot 2\cdots(2k-2)\cdot(2k-1)\cdot(2k)}{(2k)!!} = \frac{(2k)!}{k!\,2^k} & \text{for odd positive } n = 2k-1,\ k \geq 1.
\end{cases}\tag{14.18}$$
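The identities in Eq. (14.18) are easy to verify mechanically; a short sketch (the function name is mine):

```python
import math

def double_factorial(n):
    """n!! as defined in Eq. (14.18), with (-1)!! = 0!! = 1."""
    if n in (-1, 0):
        return 1
    result = 1
    while n > 0:
        result *= n
        n -= 2
    return result

for k in range(1, 8):
    # (2k)!! = k! 2^k  and  (2k-1)!! = (2k)! / (k! 2^k), cf. Eq. (14.18)
    assert double_factorial(2 * k) == math.factorial(k) * 2**k
    assert double_factorial(2 * k - 1) * math.factorial(k) * 2**k == math.factorial(2 * k)
print("Eq. (14.18) identities hold for k = 1..7")
```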
Stirling's formula [again, $O(x)$ means "of the order of $x$"]
$$\begin{aligned}
\log n! &= n\log n - n + O(\log n), \quad\text{or}\\
n! &\xrightarrow{\ n\to\infty\ } \sqrt{2\pi n}\left(\frac{n}{e}\right)^n, \quad\text{or, more generally,}\\
\Gamma(x) &= \sqrt{\frac{2\pi}{x}}\left(\frac{x}{e}\right)^x\left(1 + O\left(\frac{1}{x}\right)\right)
\end{aligned}\tag{14.19}$$
is stated without proof.
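Stirling's formula (14.19) can be checked against exact factorials; a quick sketch (the $1/(12n)$ error estimate quoted in the comment is the standard next-order term, not stated in the text):

```python
import math

def stirling(n):
    """Leading-order Stirling approximation, Eq. (14.19)."""
    return math.sqrt(2 * math.pi * n) * (n / math.e)**n

for n in (5, 10, 50):
    rel_err = abs(stirling(n) - math.factorial(n)) / math.factorial(n)
    print(n, rel_err)  # shrinks roughly like 1/(12n)
```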
14.2 Beta function
The beta function, also called the Euler integral of the first kind, is a special
function defined by
$$B(x,y) = \int_0^1 t^{x-1}(1-t)^{y-1}\,dt = \frac{\Gamma(x)\,\Gamma(y)}{\Gamma(x+y)} \quad\text{for } \Re x, \Re y > 0. \tag{14.20}$$
No proof of the identity of the two representations – in terms of an integral and of $\Gamma$-functions – is given.
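The two representations in Eq. (14.20) can at least be compared numerically; a sketch (the midpoint rule and the parameter choices are mine):

```python
import math

def beta_gamma(x, y):
    """B(x, y) via the Γ-function form of Eq. (14.20)."""
    return math.gamma(x) * math.gamma(y) / math.gamma(x + y)

def beta_integral(x, y, n=200_000):
    """Midpoint-rule approximation of ∫_0^1 t^(x-1)(1-t)^(y-1) dt."""
    h = 1.0 / n
    return h * sum(((i + 0.5) * h)**(x - 1) * (1 - (i + 0.5) * h)**(y - 1)
                   for i in range(n))

print(beta_gamma(2.5, 3.0), beta_integral(2.5, 3.0))
```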
14.3 Fuchsian differential equations
Many differential equations of theoretical physics are Fuchsian equations.
We shall therefore study this class in some generality.
14.3.1 Regular, regular singular, and irregular singular point
Consider the homogeneous differential equation [Eq. (12.1) on page 195 is inhomogeneous]
$$\mathcal{L}_x y(x) = a_2(x)\frac{d^2}{dx^2}y(x) + a_1(x)\frac{d}{dx}y(x) + a_0(x)y(x) = 0. \tag{14.21}$$
If $a_0(x)$, $a_1(x)$ and $a_2(x)$ are analytic at some point $x_0$ and in its neighborhood, and if $a_2(x_0) \neq 0$ at $x_0$, then $x_0$ is called an ordinary point, or regular point. We state without proof that in this case the solutions around $x_0$ can be expanded as power series. In this case we can divide Eq. (14.21) by $a_2(x)$ and rewrite it as
$$\frac{1}{a_2(x)}\mathcal{L}_x y(x) = \frac{d^2}{dx^2}y(x) + p_1(x)\frac{d}{dx}y(x) + p_2(x)y(x) = 0, \tag{14.22}$$
with $p_1(x) = a_1(x)/a_2(x)$ and $p_2(x) = a_0(x)/a_2(x)$.
If, however, $a_2(x_0) = 0$ and $a_1(x_0)$ or $a_0(x_0)$ are nonzero, then $x_0$ is called a singular point of (14.21). The simplest case is if $a_2(x)$ has a simple zero at $x_0$; then both $p_1(x)$ and $p_2(x)$ in (14.22) have at most simple poles.
Furthermore, for reasons disclosed later – mainly motivated by the possibility of writing the solutions as power series – a point $x_0$ is called a regular singular point of Eq. (14.21) if
$$\lim_{x\to x_0}(x-x_0)\frac{a_1(x)}{a_2(x)}, \quad\text{as well as}\quad \lim_{x\to x_0}(x-x_0)^2\frac{a_0(x)}{a_2(x)} \tag{14.23}$$
both exist. If either of these limits does not exist, the singular point is an irregular singular point.
A linear ordinary differential equation is called Fuchsian, or a Fuchsian differential equation – generalizable to arbitrary order $n$ of differentiation,
$$\left[\frac{d^n}{dx^n} + p_1(x)\frac{d^{n-1}}{dx^{n-1}} + \cdots + p_{n-1}(x)\frac{d}{dx} + p_n(x)\right]y(x) = 0 \tag{14.24}$$
– if every singular point, including infinity, is regular, meaning that $p_k(x)$ has at most poles of order $k$.
A very important case is a Fuchsian equation of second order (in which derivatives up to second order occur). In this case, we suppose that the coefficients in (14.22) satisfy the following conditions:

• $p_1(x)$ has at most single poles, and

• $p_2(x)$ has at most double poles.

The simplest realization of this case is for $a_2(x) = a(x-x_0)^2$, $a_1(x) = b(x-x_0)$, $a_0(x) = c$ for some constants $a, b, c \in \mathbb{C}$.
14.3.2 Functional form of the coefficients in Fuchsian differential equations

Let us examine the functional form of the coefficients $p_1(x)$ and $p_2(x)$ resulting from the assumption of regular singular poles.
First, let us start with poles at finite complex numbers. Suppose there are $k$ finite poles (the behavior of $p_1(x)$ and $p_2(x)$ at infinity will be treated later). Hence, in Eq. (14.22), the coefficients must be of the form
$$p_1(x) = \frac{P_1(x)}{\prod_{j=1}^{k}(x-x_j)}, \quad\text{and}\quad p_2(x) = \frac{P_2(x)}{\prod_{j=1}^{k}(x-x_j)^2}, \tag{14.25}$$
where $x_1, \ldots, x_k$ are the $k$ points of the (regular singular) poles, and $P_1(x)$ and $P_2(x)$ are analytic in the complex plane.
Second, consider possible poles at infinity. Because of the requirement that they be regular singular, $p_1(x)\,x$ as well as $p_2(x)\,x^2$ must be analytic at $x = \infty$; we additionally obtain the condition that
$$x\,P_1(x) = x\,p_1(x)\prod_{j=1}^{k}(x-x_j), \quad\text{and}\quad x^2 P_2(x) = x^2 p_2(x)\prod_{j=1}^{k}(x-x_j)^2 \tag{14.26}$$
remain bounded analytic functions even at infinity.
Recall that, because of Liouville’s theorem (mentioned on page 140),
any bounded entire function which is defined at infinity is a constant. As a
result, xP1(x) = a and x2P2(x) = b must both be constants. Therefore, p1(x)
and $p_2(x)$ must be rational functions – that is, ratios of polynomials $\frac{P(x)}{Q(x)}$ – of degree at most $k-1$ and $2k-2$, respectively.
Moreover, by using partial fraction decomposition of the rational functions in terms of their pole factors $(x - x_j)$, we obtain the general form of the coefficients³

³ Gerhard Kristensson. Equations of Fuchsian type. In Second Order Differential Equations, pages 29–42. Springer, New York, 2010. ISBN 978-1-4419-7019-0. DOI: 10.1007/978-1-4419-7020-6. URL http://dx.doi.org/10.1007/978-1-4419-7020-6
$$p_1(x) = \sum_{j=1}^{k}\frac{A_j}{x-x_j}, \quad\text{and}\quad p_2(x) = \sum_{j=1}^{k}\left[\frac{B_j}{(x-x_j)^2} + \frac{C_j}{x-x_j}\right], \tag{14.27}$$
with constants $A_j, B_j, C_j \in \mathbb{C}$. The resulting Fuchsian differential equation is called the Riemann differential equation.
Although we have considered an arbitrary finite number of poles, for
reasons that are unclear to this author, in physics we are mainly concerned
with two poles (i.e., k = 2) at finite points, and one at infinity.
The hypergeometric differential equation is a Fuchsian differential equation which has at most three regular singularities, including infinity, at 0, 1, and $\infty$.⁴

⁴ Vadim Kuznetsov. Special functions and their symmetries. Part I: Algebraic and analytic methods. Postgraduate Course in Applied Analysis, May 2003. URL http://www1.maths.leeds.ac.uk/~kisilv/courses/sp-funct.pdf
14.3.3 Frobenius method by power series
Now let us get more concrete about the solution of Fuchsian equations by
power series.
In order to obtain a feeling for power series solutions of differential equations, consider the "first order" Fuchsian equation⁵
$$y' - \lambda y = 0. \tag{14.28}$$

⁵ Ron Larson and Bruce H. Edwards. Calculus. Brooks/Cole Cengage Learning, Belmont, CA, 9th edition, 2010. ISBN 978-0-547-16702-2
Make the Ansatz, also known as the Frobenius method⁶, that the solution can be expanded into a power series of the form
$$y(x) = \sum_{j=0}^{\infty} a_j x^j. \tag{14.29}$$

⁶ George B. Arfken and Hans J. Weber. Mathematical Methods for Physicists. Elsevier, Oxford, 6th edition, 2005. ISBN 0-12-059876-0; 0-12-088584-0
Then, Eq. (14.28) can be written as
$$\begin{aligned}
\left(\frac{d}{dx}\sum_{j=0}^{\infty} a_j x^j\right) - \lambda\sum_{j=0}^{\infty} a_j x^j &= 0,\\
\sum_{j=0}^{\infty} j\,a_j x^{j-1} - \lambda\sum_{j=0}^{\infty} a_j x^j &= 0,\\
\sum_{j=1}^{\infty} j\,a_j x^{j-1} - \lambda\sum_{j=0}^{\infty} a_j x^j &= 0,\\
\sum_{m=j-1=0}^{\infty} (m+1)\,a_{m+1} x^m - \lambda\sum_{j=0}^{\infty} a_j x^j &= 0,\\
\sum_{j=0}^{\infty} (j+1)\,a_{j+1} x^j - \lambda\sum_{j=0}^{\infty} a_j x^j &= 0,
\end{aligned}\tag{14.30}$$
and hence, by comparing the coefficients of $x^j$, for $j \geq 0$,
$$(j+1)\,a_{j+1} = \lambda a_j, \quad\text{or}\quad a_{j+1} = \frac{\lambda a_j}{j+1} = a_0\frac{\lambda^{j+1}}{(j+1)!}, \quad\text{and}\quad a_j = a_0\frac{\lambda^j}{j!}. \tag{14.31}$$
Therefore,
$$y(x) = \sum_{j=0}^{\infty} a_0\frac{\lambda^j}{j!}x^j = a_0\sum_{j=0}^{\infty}\frac{(\lambda x)^j}{j!} = a_0 e^{\lambda x}. \tag{14.32}$$
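The recursion (14.31) can also be iterated directly instead of solved in closed form; a small sketch (names are mine) that reproduces $e^{\lambda x}$ as in Eq. (14.32):

```python
import math

def frobenius_first_order(lmbda, x, terms=80):
    """Sum the power series for y' = λy built from the recursion
    a_{j+1} = λ a_j / (j+1), Eq. (14.31), with a_0 = 1."""
    a, total = 1.0, 0.0
    for j in range(terms):
        total += a * x**j
        a = lmbda * a / (j + 1)
    return total

print(frobenius_first_order(2.0, 1.5), math.exp(3.0))
```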
In the Fuchsian case let us consider the following Frobenius Ansatz to
expand the solution as a generalized power series around a regular singular
point x0, which can be motivated by Eq. (14.25), and by the Laurent series
expansion (8.28)–(8.30) on page 133:
$$\begin{aligned}
p_1(x) &= \frac{A_1(x)}{x-x_0} = \sum_{j=0}^{\infty}\alpha_j (x-x_0)^{j-1} &&\text{for } 0 < |x-x_0| < r_1,\\
p_2(x) &= \frac{A_2(x)}{(x-x_0)^2} = \sum_{j=0}^{\infty}\beta_j (x-x_0)^{j-2} &&\text{for } 0 < |x-x_0| < r_2,\\
y(x) &= (x-x_0)^{\sigma}\sum_{l=0}^{\infty}(x-x_0)^l w_l = \sum_{l=0}^{\infty}(x-x_0)^{l+\sigma}w_l, &&\text{with } w_0 \neq 0,
\end{aligned}\tag{14.33}$$
where $A_1(x) = [(x-x_0)\,a_1(x)]/a_2(x)$ and $A_2(x) = [(x-x_0)^2 a_0(x)]/a_2(x)$. Eq. (14.22) then becomes
$$\begin{aligned}
&\frac{d^2}{dx^2}y(x) + p_1(x)\frac{d}{dx}y(x) + p_2(x)y(x) = 0,\\[4pt]
&\left[\frac{d^2}{dx^2} + \sum_{j=0}^{\infty}\alpha_j(x-x_0)^{j-1}\frac{d}{dx} + \sum_{j=0}^{\infty}\beta_j(x-x_0)^{j-2}\right]\sum_{l=0}^{\infty}w_l(x-x_0)^{l+\sigma} = 0,\\[4pt]
&\sum_{l=0}^{\infty}(l+\sigma)(l+\sigma-1)w_l(x-x_0)^{l+\sigma-2}\\
&\qquad + \left[\sum_{l=0}^{\infty}(l+\sigma)w_l(x-x_0)^{l+\sigma-1}\right]\sum_{j=0}^{\infty}\alpha_j(x-x_0)^{j-1}\\
&\qquad + \left[\sum_{l=0}^{\infty}w_l(x-x_0)^{l+\sigma}\right]\sum_{j=0}^{\infty}\beta_j(x-x_0)^{j-2} = 0,\\[4pt]
&(x-x_0)^{\sigma-2}\sum_{l=0}^{\infty}(x-x_0)^l\Bigl[(l+\sigma)(l+\sigma-1)w_l\\
&\qquad + (l+\sigma)w_l\sum_{j=0}^{\infty}\alpha_j(x-x_0)^j + w_l\sum_{j=0}^{\infty}\beta_j(x-x_0)^j\Bigr] = 0,\\[4pt]
&(x-x_0)^{\sigma-2}\Bigl[\sum_{l=0}^{\infty}(l+\sigma)(l+\sigma-1)w_l(x-x_0)^l\\
&\qquad + \sum_{l=0}^{\infty}(l+\sigma)w_l\sum_{j=0}^{\infty}\alpha_j(x-x_0)^{l+j} + \sum_{l=0}^{\infty}w_l\sum_{j=0}^{\infty}\beta_j(x-x_0)^{l+j}\Bigr] = 0.
\end{aligned}$$
Next, in order to reach a common power of $(x-x_0)$, we perform an index identification $l = m$ in the first summand, as well as an index shift $l + j = m$, and thus $j = m - l$, in the second and third summands (where the order of the sums changes). Since $l \geq 0$ and $j \geq 0$, $m = l + j$ cannot be negative; furthermore, $0 \leq j = m - l$, so that $l \leq m$.
$$\begin{aligned}
&(x-x_0)^{\sigma-2}\Bigl[\sum_{l=0}^{\infty}(l+\sigma)(l+\sigma-1)w_l(x-x_0)^l\\
&\qquad + \sum_{j=0}^{\infty}\sum_{l=0}^{\infty}(l+\sigma)w_l\alpha_j(x-x_0)^{l+j} + \sum_{j=0}^{\infty}\sum_{l=0}^{\infty}w_l\beta_j(x-x_0)^{l+j}\Bigr] = 0,\\[4pt]
&(x-x_0)^{\sigma-2}\Bigl[\sum_{m=0}^{\infty}(m+\sigma)(m+\sigma-1)w_m(x-x_0)^m\\
&\qquad + \sum_{m=0}^{\infty}\sum_{l=0}^{m}(l+\sigma)w_l\alpha_{m-l}(x-x_0)^m + \sum_{m=0}^{\infty}\sum_{l=0}^{m}w_l\beta_{m-l}(x-x_0)^m\Bigr] = 0,\\[4pt]
&(x-x_0)^{\sigma-2}\sum_{m=0}^{\infty}(x-x_0)^m\Bigl[(m+\sigma)(m+\sigma-1)w_m + \sum_{l=0}^{m}(l+\sigma)w_l\alpha_{m-l} + \sum_{l=0}^{m}w_l\beta_{m-l}\Bigr] = 0,\\[4pt]
&(x-x_0)^{\sigma-2}\sum_{m=0}^{\infty}(x-x_0)^m\Bigl[(m+\sigma)(m+\sigma-1)w_m + \sum_{l=0}^{m}w_l\bigl((l+\sigma)\alpha_{m-l} + \beta_{m-l}\bigr)\Bigr] = 0.
\end{aligned}\tag{14.34}$$
If we can divide this equation by $(x-x_0)^{\sigma-2}$ and exploit the linear independence of the polynomials $(x-x_0)^m$, we obtain an infinite number of equations for the infinite number of coefficients $w_m$ by requiring that all the terms within the $[\cdots]$-brackets in Eq. (14.34) vanish individually. In particular, for $m = 0$ and $w_0 \neq 0$,
$$(0+\sigma)(0+\sigma-1)w_0 + w_0\bigl((0+\sigma)\alpha_0 + \beta_0\bigr) = 0, \quad\text{that is,}\quad f_0(\sigma) \stackrel{\text{def}}{=} \sigma(\sigma-1) + \sigma\alpha_0 + \beta_0 = 0. \tag{14.35}$$
The radius of convergence of the solution will, in accordance with the
Laurent series expansion, extend to the next singularity.
Note that in Eq. (14.35) we have defined f0(σ) which we will use now.
Furthermore, for successive $m$, and with the definition of
$$f_k(\sigma) \stackrel{\text{def}}{=} \alpha_k\sigma + \beta_k, \tag{14.36}$$
we obtain the sequence of linear equations
$$\begin{aligned}
w_0 f_0(\sigma) &= 0,\\
w_1 f_0(\sigma+1) + w_0 f_1(\sigma) &= 0,\\
w_2 f_0(\sigma+2) + w_1 f_1(\sigma+1) + w_0 f_2(\sigma) &= 0,\\
&\ \ \vdots\\
w_n f_0(\sigma+n) + w_{n-1} f_1(\sigma+n-1) + \cdots + w_0 f_n(\sigma) &= 0,
\end{aligned}\tag{14.37}$$
which can be used for an inductive determination of the coefficients $w_k$.
Eq. (14.35) is a quadratic equation $\sigma^2 + \sigma(\alpha_0 - 1) + \beta_0 = 0$ for the characteristic exponents
$$\sigma_{1,2} = \frac{1}{2}\left[1 - \alpha_0 \pm \sqrt{(1-\alpha_0)^2 - 4\beta_0}\right]. \tag{14.38}$$
We state without proof that, if the difference of the characteristic exponents
$$\sigma_1 - \sigma_2 = \sqrt{(1-\alpha_0)^2 - 4\beta_0} \tag{14.39}$$
is nonzero and not an integer, then the two solutions found from $\sigma_{1,2}$ through the generalized series Ansatz (14.33) are linearly independent.
Intuitively speaking, the Frobenius method "is in obvious trouble" in finding the general solution of the Fuchsian equation if the two characteristic exponents coincide (e.g., $\sigma_1 = \sigma_2$), but it "is also in trouble" to find the general solution if $\sigma_1 - \sigma_2 = m \in \mathbb{N}$; that is, if, for some positive integer $m$, $\sigma_1 = \sigma_2 + m > \sigma_2$. Because in this case, "eventually" at $n = m$ in Eq. (14.37), we obtain as iterative solution for the coefficient $w_m$ the term
$$w_m = -\frac{w_{m-1}f_1(\sigma_2+m-1) + \cdots + w_0 f_m(\sigma_2)}{f_0(\sigma_2+m)}
= -\frac{w_{m-1}f_1(\sigma_1-1) + \cdots + w_0 f_m(\sigma_2)}{\underbrace{f_0(\sigma_1)}_{=0}}, \tag{14.40}$$
since the greater characteristic exponent $\sigma_1$ is a solution of Eq. (14.35) and thus $f_0(\sigma_1)$ vanishes, leaving us with a vanishing denominator.
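The quadratic (14.38) for the characteristic exponents is straightforward to evaluate; a sketch (function name is mine; the check values correspond to the Bessel-type example "ad 2" worked out in Section 14.3.7):

```python
import cmath

def characteristic_exponents(alpha0, beta0):
    """Roots σ_{1,2} of σ(σ-1) + α0 σ + β0 = 0, Eq. (14.38)."""
    disc = cmath.sqrt((1 - alpha0)**2 - 4 * beta0)
    return (1 - alpha0 + disc) / 2, (1 - alpha0 - disc) / 2

# Bessel-type equation z²w'' + zw' - ν²w = 0 has α0 = 1, β0 = -ν²,
# giving σ = ±ν; here ν = 1/2
s1, s2 = characteristic_exponents(1.0, -0.25)
print(s1, s2)
```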
14.3.4 d'Alembert reduction of order

If $\sigma_1 = \sigma_2 + n$ with $n \in \mathbb{Z}$, then we find only a single solution of the Fuchsian equation. In order to obtain another linearly independent solution we have to employ a method based on the Wronskian⁷, or the d'Alembert reduction⁸, which is a general method of obtaining another, linearly independent solution $y_2(x)$ from an existing particular solution $y_1(x)$ by the Ansatz (no proof is presented here)
$$y_2(x) = y_1(x)\int^x v(s)\,ds. \tag{14.41}$$

⁷ George B. Arfken and Hans J. Weber. Mathematical Methods for Physicists. Elsevier, Oxford, 6th edition, 2005. ISBN 0-12-059876-0; 0-12-088584-0
⁸ Gerald Teschl. Ordinary Differential Equations and Dynamical Systems. Graduate Studies in Mathematics, volume 140. American Mathematical Society, Providence, Rhode Island, 2012. ISBN-10: 0-8218-8328-3 / ISBN-13: 978-0-8218-8328-0. URL http://www.mat.univie.ac.at/~gerald/ftp/book-ode/ode.pdf
Inserting $y_2(x)$ from (14.41) into the Fuchsian equation (14.22), and using the fact that by assumption $y_1(x)$ is a solution of it, yields
$$\begin{aligned}
&\frac{d^2}{dx^2}y_2(x) + p_1(x)\frac{d}{dx}y_2(x) + p_2(x)y_2(x) = 0,\\[4pt]
&\frac{d^2}{dx^2}\left[y_1(x)\int^x v(s)\,ds\right] + p_1(x)\frac{d}{dx}\left[y_1(x)\int^x v(s)\,ds\right] + p_2(x)\,y_1(x)\int^x v(s)\,ds = 0,\\[4pt]
&\left[\frac{d^2}{dx^2}y_1(x)\right]\int^x v(s)\,ds + 2\left[\frac{d}{dx}y_1(x)\right]v(x) + y_1(x)\left[\frac{d}{dx}v(x)\right]\\
&\qquad + p_1(x)\left[\frac{d}{dx}y_1(x)\right]\int^x v(s)\,ds + p_1(x)\,y_1(x)\,v(x) + p_2(x)\,y_1(x)\int^x v(s)\,ds = 0,\\[4pt]
&\underbrace{\left\{\left[\frac{d^2}{dx^2}y_1(x)\right] + p_1(x)\left[\frac{d}{dx}y_1(x)\right] + p_2(x)\,y_1(x)\right\}}_{=0}\int^x v(s)\,ds\\
&\qquad + y_1(x)\left[\frac{d}{dx}v(x)\right] + \left\{2\left[\frac{d}{dx}y_1(x)\right] + p_1(x)\,y_1(x)\right\}v(x) = 0,
\end{aligned}$$
and finally,
$$v'(x) + v(x)\left[2\,\frac{y_1'(x)}{y_1(x)} + p_1(x)\right] = 0. \tag{14.42}$$
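Equation (14.42) is linear and first order in $v$, so it can be integrated in closed form (a standard step, not spelled out in the text): dividing by $v$ and integrating gives
$$\frac{v'(x)}{v(x)} = -2\,\frac{y_1'(x)}{y_1(x)} - p_1(x), \quad\text{and hence}\quad v(x) = \frac{1}{y_1(x)^2}\exp\left(-\int^x p_1(s)\,ds\right),$$
up to a multiplicative constant.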
14.3.5 Computation of the characteristic exponent
Let $w'' + p_1(z)w' + p_2(z)w = 0$ be a Fuchsian equation. From the Laurent series expansions of $p_1(z)$ and $p_2(z)$, with Cauchy's integral formula we can derive the following equations, which are helpful in determining the characteristic exponent $\sigma$:
$$\alpha_0 = \lim_{z\to z_0}(z-z_0)\,p_1(z), \qquad \beta_0 = \lim_{z\to z_0}(z-z_0)^2\,p_2(z), \tag{14.43}$$
where $z_0$ is a regular singular point.
Let us consider $\alpha_0$ and the Laurent series for
$$p_1(z) = \sum_{k=-1}^{\infty} a_k(z-z_0)^k \quad\text{with}\quad a_k = \frac{1}{2\pi i}\oint p_1(s)(s-z_0)^{-(k+1)}\,ds.$$
The summands vanish for $k < -1$, because $p_1(z)$ has at most a pole of order one at $z_0$. Let us change the index: $n = k+1$ ($\Longrightarrow k = n-1$) and $\alpha_n \stackrel{\text{def}}{=} a_{n-1}$; then
$$p_1(z) = \sum_{n=0}^{\infty}\alpha_n(z-z_0)^{n-1},$$
where
$$\alpha_n = a_{n-1} = \frac{1}{2\pi i}\oint p_1(s)(s-z_0)^{-n}\,ds;$$
in particular,
$$\alpha_0 = \frac{1}{2\pi i}\oint p_1(s)\,ds.$$
Because the equation is Fuchsian, $p_1(z)$ has only a pole of order one at $z_0$, and $p_1(z)$ is of the form
$$p_1(z) = \frac{a_1(z)}{(z-z_0)\,a_2(z)} = \frac{(z-z_0)\,p_1(z)}{z-z_0},$$
and
$$\alpha_0 = \frac{1}{2\pi i}\oint \frac{p_1(s)(s-z_0)}{s-z_0}\,ds,$$
where $(s-z_0)\,p_1(s)$ is analytic around $z_0$; hence we can apply Cauchy's integral formula:
$$\alpha_0 = \lim_{s\to z_0} p_1(s)(s-z_0).$$
An easy way to see this is with the Ansatz $p_1(z) = \sum_{n=0}^{\infty}\alpha_n(z-z_0)^{n-1}$; multiplication with $(z-z_0)$ yields
$$(z-z_0)\,p_1(z) = \sum_{n=0}^{\infty}\alpha_n(z-z_0)^n.$$
In the limit $z \to z_0$,
$$\lim_{z\to z_0}(z-z_0)\,p_1(z) = \alpha_0.$$
Let us consider $\beta_0$ and the Laurent series for
$$p_2(z) = \sum_{k=-2}^{\infty} b_k(z-z_0)^k \quad\text{with}\quad b_k = \frac{1}{2\pi i}\oint p_2(s)(s-z_0)^{-(k+1)}\,ds.$$
The summands vanish for $k < -2$, because $p_2(z)$ has at most a pole of second order at $z_0$. Let us change the index: $n = k+2$ ($\Longrightarrow k = n-2$) and $\beta_n \stackrel{\text{def}}{=} b_{n-2}$. Hence,
$$p_2(z) = \sum_{n=0}^{\infty}\beta_n(z-z_0)^{n-2},$$
where
$$\beta_n = \frac{1}{2\pi i}\oint p_2(s)(s-z_0)^{-(n-1)}\,ds;$$
in particular,
$$\beta_0 = \frac{1}{2\pi i}\oint p_2(s)(s-z_0)\,ds.$$
Because the equation is Fuchsian, $p_2(z)$ has only a pole of order two at $z_0$, and $p_2(z)$ is of the form
$$p_2(z) = \frac{a_0(z)}{(z-z_0)^2\,a_2(z)} = \frac{(z-z_0)^2\,p_2(z)}{(z-z_0)^2},$$
where $(z-z_0)^2\,p_2(z)$ is analytic around $z_0$, and
$$\beta_0 = \frac{1}{2\pi i}\oint \frac{p_2(s)(s-z_0)^2}{s-z_0}\,ds;$$
hence we can apply Cauchy's integral formula,
$$\beta_0 = \lim_{s\to z_0} p_2(s)(s-z_0)^2.$$
An easy way to see this is with the Ansatz $p_2(z) = \sum_{n=0}^{\infty}\beta_n(z-z_0)^{n-2}$; multiplication with $(z-z_0)^2$, in the limit $z\to z_0$, yields
$$\lim_{z\to z_0}(z-z_0)^2\,p_2(z) = \beta_0.$$
14.3.6 Behavior at infinity
For $z = \infty$, transform the Fuchsian equation $w'' + p_1(z)w' + p_2(z)w = 0$ into the new variable $t = \frac{1}{z}$:
$$t = \frac{1}{z}, \qquad z = \frac{1}{t}, \qquad u(t) \stackrel{\text{def}}{=} w\left(\frac{1}{t}\right) = w(z),$$
$$\frac{dz}{dt} = -\frac{1}{t^2}, \quad\text{and thus}\quad \frac{d}{dz} = -t^2\frac{d}{dt},$$
$$\frac{d^2}{dz^2} = -t^2\frac{d}{dt}\left(-t^2\frac{d}{dt}\right) = -t^2\left(-2t\frac{d}{dt} - t^2\frac{d^2}{dt^2}\right) = 2t^3\frac{d}{dt} + t^4\frac{d^2}{dt^2},$$
$$w'(z) = \frac{d}{dz}w(z) = -t^2\frac{d}{dt}u(t) = -t^2 u'(t),$$
$$w''(z) = \frac{d^2}{dz^2}w(z) = \left(2t^3\frac{d}{dt} + t^4\frac{d^2}{dt^2}\right)u(t) = 2t^3 u'(t) + t^4 u''(t).$$
Insertion into the Fuchsian equation $w'' + p_1(z)w' + p_2(z)w = 0$ yields
$$2t^3 u' + t^4 u'' + p_1\left(\frac{1}{t}\right)\left(-t^2 u'\right) + p_2\left(\frac{1}{t}\right)u = 0,$$
and hence,
$$u'' + \left[\frac{2}{t} - \frac{p_1\left(\frac{1}{t}\right)}{t^2}\right]u' + \frac{p_2\left(\frac{1}{t}\right)}{t^4}u = 0.$$
From
$$\tilde{p}_1(t) \stackrel{\text{def}}{=} \frac{2}{t} - \frac{p_1\left(\frac{1}{t}\right)}{t^2} \quad\text{and}\quad \tilde{p}_2(t) \stackrel{\text{def}}{=} \frac{p_2\left(\frac{1}{t}\right)}{t^4}$$
follows the form of the rewritten differential equation
$$u'' + \tilde{p}_1(t)\,u' + \tilde{p}_2(t)\,u = 0. \tag{14.44}$$
This equation is Fuchsian if 0 is an ordinary point, or at least a regular singular point.
14.3.7 Examples
Let us consider some examples involving Fuchsian equations of the second
order.
1. Find out whether the following differential equations are Fuchsian, and enumerate the regular singular points:
$$\begin{aligned}
&z w'' + (1-z)w' = 0,\\
&z^2 w'' + z w' - \nu^2 w = 0,\\
&z^2(1+z)^2 w'' + 2z(z+1)(z+2)w' - 4w = 0,\\
&2z(z+2)w'' + w' - zw = 0.
\end{aligned}\tag{14.45}$$
ad 1: $zw'' + (1-z)w' = 0 \Longrightarrow w'' + \dfrac{1-z}{z}w' = 0$

$z = 0$:
$$\alpha_0 = \lim_{z\to 0} z\,\frac{1-z}{z} = 1, \qquad \beta_0 = \lim_{z\to 0} z^2\cdot 0 = 0.$$
The equation for the characteristic exponent is
$$\sigma(\sigma-1) + \sigma\alpha_0 + \beta_0 = 0 \Longrightarrow \sigma^2 - \sigma + \sigma = 0 \Longrightarrow \sigma_{1,2} = 0.$$
$z = \infty$: with $z = \frac{1}{t}$,
$$\tilde{p}_1(t) = \frac{2}{t} - \frac{1-\frac{1}{t}}{\frac{1}{t}\,t^2} = \frac{2}{t} - \frac{1-\frac{1}{t}}{t} = \frac{1}{t} + \frac{1}{t^2} = \frac{t+1}{t^2}$$
$\Longrightarrow$ not Fuchsian.
ad 2: $z^2 w'' + zw' - \nu^2 w = 0 \Longrightarrow w'' + \dfrac{1}{z}w' - \dfrac{\nu^2}{z^2}w = 0$.

$z = 0$:
$$\alpha_0 = \lim_{z\to 0} z\,\frac{1}{z} = 1, \qquad \beta_0 = \lim_{z\to 0} z^2\left(-\frac{\nu^2}{z^2}\right) = -\nu^2.$$
$$\Longrightarrow \sigma^2 - \sigma + \sigma - \nu^2 = 0 \Longrightarrow \sigma_{1,2} = \pm\nu.$$
$z = \infty$: with $z = \frac{1}{t}$,
$$\tilde{p}_1(t) = \frac{2}{t} - \frac{1}{t^2}\,t = \frac{1}{t}, \qquad \tilde{p}_2(t) = \frac{1}{t^4}\left(-t^2\nu^2\right) = -\frac{\nu^2}{t^2}$$
$$\Longrightarrow u'' + \frac{1}{t}u' - \frac{\nu^2}{t^2}u = 0 \Longrightarrow \sigma_{1,2} = \pm\nu$$
$\Longrightarrow$ Fuchsian equation.
ad 3:
$$z^2(1+z)^2 w'' + 2z(z+1)(z+2)w' - 4w = 0 \Longrightarrow w'' + \frac{2(z+2)}{z(z+1)}w' - \frac{4}{z^2(1+z)^2}w = 0$$

$z = 0$:
$$\alpha_0 = \lim_{z\to 0} z\,\frac{2(z+2)}{z(z+1)} = 4, \qquad \beta_0 = \lim_{z\to 0} z^2\left(-\frac{4}{z^2(1+z)^2}\right) = -4.$$
$$\Longrightarrow \sigma(\sigma-1) + 4\sigma - 4 = \sigma^2 + 3\sigma - 4 = 0 \Longrightarrow \sigma_{1,2} = \frac{-3\pm\sqrt{9+16}}{2} = \begin{cases}-4,\\ +1.\end{cases}$$

$z = -1$:
$$\alpha_0 = \lim_{z\to -1}(z+1)\,\frac{2(z+2)}{z(z+1)} = -2, \qquad \beta_0 = \lim_{z\to -1}(z+1)^2\left(-\frac{4}{z^2(1+z)^2}\right) = -4.$$
$$\Longrightarrow \sigma(\sigma-1) - 2\sigma - 4 = \sigma^2 - 3\sigma - 4 = 0 \Longrightarrow \sigma_{1,2} = \frac{3\pm\sqrt{9+16}}{2} = \begin{cases}+4,\\ -1.\end{cases}$$

$z = \infty$:
$$\tilde{p}_1(t) = \frac{2}{t} - \frac{1}{t^2}\,\frac{2\left(\frac{1}{t}+2\right)}{\frac{1}{t}\left(\frac{1}{t}+1\right)} = \frac{2}{t} - \frac{2\left(\frac{1}{t}+2\right)}{1+t} = \frac{2}{t}\left(1 - \frac{1+2t}{1+t}\right),$$
$$\tilde{p}_2(t) = \frac{1}{t^4}\left(-\frac{4}{\frac{1}{t^2}\left(1+\frac{1}{t}\right)^2}\right) = -\frac{4}{t^2}\,\frac{t^2}{(t+1)^2} = -\frac{4}{(t+1)^2}$$
$$\Longrightarrow u'' + \frac{2}{t}\left(1 - \frac{1+2t}{1+t}\right)u' - \frac{4}{(t+1)^2}u = 0,$$
$$\alpha_0 = \lim_{t\to 0} t\,\frac{2}{t}\left(1 - \frac{1+2t}{1+t}\right) = 0, \qquad \beta_0 = \lim_{t\to 0} t^2\left(-\frac{4}{(t+1)^2}\right) = 0.$$
$$\Longrightarrow \sigma(\sigma-1) = 0 \Longrightarrow \sigma_{1,2} = \begin{cases}0,\\ 1.\end{cases}$$
$\Longrightarrow$ Fuchsian equation.
ad 4:
$$2z(z+2)w'' + w' - zw = 0 \Longrightarrow w'' + \frac{1}{2z(z+2)}w' - \frac{1}{2(z+2)}w = 0$$

$z = 0$:
$$\alpha_0 = \lim_{z\to 0} z\,\frac{1}{2z(z+2)} = \frac{1}{4}, \qquad \beta_0 = \lim_{z\to 0} z^2\,\frac{-1}{2(z+2)} = 0.$$
$$\Longrightarrow \sigma^2 - \sigma + \frac{1}{4}\sigma = 0 \Longrightarrow \sigma^2 - \frac{3}{4}\sigma = 0 \Longrightarrow \sigma_1 = 0,\ \sigma_2 = \frac{3}{4}.$$

$z = -2$:
$$\alpha_0 = \lim_{z\to -2}(z+2)\,\frac{1}{2z(z+2)} = -\frac{1}{4}, \qquad \beta_0 = \lim_{z\to -2}(z+2)^2\,\frac{-1}{2(z+2)} = 0.$$
$$\Longrightarrow \sigma_1 = 0, \quad \sigma_2 = \frac{5}{4}.$$

$z = \infty$:
$$\tilde{p}_1(t) = \frac{2}{t} - \frac{1}{t^2}\left(\frac{1}{2\,\frac{1}{t}\left(\frac{1}{t}+2\right)}\right) = \frac{2}{t} - \frac{1}{2(1+2t)}, \qquad \tilde{p}_2(t) = \frac{1}{t^4}\,\frac{-\frac{1}{t}}{2\left(\frac{1}{t}+2\right)} = -\frac{1}{2t^3(1+2t)}$$
$\Longrightarrow$ not a Fuchsian equation.
2. Determine the solutions of
$$z^2 w'' + (3z+1)w' + w = 0$$
around the regular singular points.

The singularities are at $z = 0$ and $z = \infty$.

Singularity at $z = 0$:
$$p_1(z) = \frac{3z+1}{z^2} = \frac{a_1(z)}{z} \quad\text{with}\quad a_1(z) = 3 + \frac{1}{z}.$$
$p_1(z)$ has a pole of order higher than one; hence this is not a Fuchsian equation, and $z = 0$ is an irregular singular point.
Singularity at $z = \infty$:

• Transformation $z = \frac{1}{t}$, $w(z) \to u(t)$:
$$u''(t) + \left[\frac{2}{t} - \frac{1}{t^2}\,p_1\left(\frac{1}{t}\right)\right]u'(t) + \frac{1}{t^4}\,p_2\left(\frac{1}{t}\right)u(t) = 0.$$
The new coefficient functions are
$$\tilde{p}_1(t) = \frac{2}{t} - \frac{1}{t^2}\,p_1\left(\frac{1}{t}\right) = \frac{2}{t} - \frac{1}{t^2}\left(3t + t^2\right) = \frac{2}{t} - \frac{3}{t} - 1 = -\frac{1}{t} - 1,$$
$$\tilde{p}_2(t) = \frac{1}{t^4}\,p_2\left(\frac{1}{t}\right) = \frac{t^2}{t^4} = \frac{1}{t^2}.$$

• Check whether this is a regular singular point:
$$\tilde{p}_1(t) = -\frac{1+t}{t} = \frac{\tilde{a}_1(t)}{t} \quad\text{with}\quad \tilde{a}_1(t) = -(1+t) \quad\text{regular},$$
$$\tilde{p}_2(t) = \frac{1}{t^2} = \frac{\tilde{a}_2(t)}{t^2} \quad\text{with}\quad \tilde{a}_2(t) = 1 \quad\text{regular}.$$
$\tilde{a}_1$ and $\tilde{a}_2$ are regular at $t = 0$; hence this is a regular singular point.
• Ansatz around $t = 0$: the transformed equation is
$$u''(t) + \tilde{p}_1(t)u'(t) + \tilde{p}_2(t)u(t) = 0,$$
$$u''(t) - \left(\frac{1}{t}+1\right)u'(t) + \frac{1}{t^2}u(t) = 0,$$
or, multiplied by $t^2$,
$$t^2 u''(t) - (t+t^2)u'(t) + u(t) = 0.$$
The generalized power series is
$$u(t) = \sum_{n=0}^{\infty} w_n t^{n+\sigma}, \quad
u'(t) = \sum_{n=0}^{\infty} w_n(n+\sigma)t^{n+\sigma-1}, \quad
u''(t) = \sum_{n=0}^{\infty} w_n(n+\sigma)(n+\sigma-1)t^{n+\sigma-2}.$$
If we insert this into the transformed differential equation we obtain
$$t^2\sum_{n=0}^{\infty} w_n(n+\sigma)(n+\sigma-1)t^{n+\sigma-2} - (t+t^2)\sum_{n=0}^{\infty} w_n(n+\sigma)t^{n+\sigma-1} + \sum_{n=0}^{\infty} w_n t^{n+\sigma} = 0,$$
$$\sum_{n=0}^{\infty} w_n(n+\sigma)(n+\sigma-1)t^{n+\sigma} - \sum_{n=0}^{\infty} w_n(n+\sigma)t^{n+\sigma} - \sum_{n=0}^{\infty} w_n(n+\sigma)t^{n+\sigma+1} + \sum_{n=0}^{\infty} w_n t^{n+\sigma} = 0.$$
A change of index, $m = n+1$, $n = m-1$, in the third sum yields
$$\sum_{n=0}^{\infty} w_n\bigl[(n+\sigma)(n+\sigma-2)+1\bigr]t^{n+\sigma} - \sum_{m=1}^{\infty} w_{m-1}(m-1+\sigma)t^{m+\sigma} = 0.$$
In the second sum, substitute $n$ for $m$:
$$\sum_{n=0}^{\infty} w_n\bigl[(n+\sigma)(n+\sigma-2)+1\bigr]t^{n+\sigma} - \sum_{n=1}^{\infty} w_{n-1}(n+\sigma-1)t^{n+\sigma} = 0.$$
We write out explicitly the $n = 0$ term of the first sum:
$$w_0\bigl[\sigma(\sigma-2)+1\bigr]t^{\sigma} + \sum_{n=1}^{\infty} w_n\bigl[(n+\sigma)(n+\sigma-2)+1\bigr]t^{n+\sigma} - \sum_{n=1}^{\infty} w_{n-1}(n+\sigma-1)t^{n+\sigma} = 0.$$
Now we can combine the two sums:
$$w_0\bigl[\sigma(\sigma-2)+1\bigr]t^{\sigma} + \sum_{n=1}^{\infty}\Bigl\{w_n\bigl[(n+\sigma)(n+\sigma-2)+1\bigr] - w_{n-1}(n+\sigma-1)\Bigr\}t^{n+\sigma} = 0.$$
The left hand side can only vanish for all $t$ if the coefficients vanish; hence
$$w_0\bigl[\sigma(\sigma-2)+1\bigr] = 0, \tag{14.46}$$
$$w_n\bigl[(n+\sigma)(n+\sigma-2)+1\bigr] - w_{n-1}(n+\sigma-1) = 0. \tag{14.47}$$

ad (14.46) for $w_0$:
$$\sigma(\sigma-2)+1 = 0, \quad \sigma^2 - 2\sigma + 1 = 0, \quad (\sigma-1)^2 = 0 \Longrightarrow \sigma_{\infty}^{(1,2)} = 1.$$
The characteristic exponent is $\sigma_{\infty}^{(1)} = \sigma_{\infty}^{(2)} = 1$.

ad (14.47) for $w_n$: For the coefficients $w_n$ we obtain the recursion formula
$$w_n\bigl[(n+\sigma)(n+\sigma-2)+1\bigr] = w_{n-1}(n+\sigma-1) \Longrightarrow w_n = \frac{n+\sigma-1}{(n+\sigma)(n+\sigma-2)+1}\,w_{n-1}.$$
Let us insert $\sigma = 1$:
$$w_n = \frac{n}{(n+1)(n-1)+1}\,w_{n-1} = \frac{n}{n^2-1+1}\,w_{n-1} = \frac{n}{n^2}\,w_{n-1} = \frac{1}{n}\,w_{n-1}.$$
We can fix $w_0 = 1$; hence
$$w_0 = 1 = \frac{1}{0!}, \quad w_1 = \frac{1}{1} = \frac{1}{1!}, \quad w_2 = \frac{1}{1\cdot 2} = \frac{1}{2!}, \quad w_3 = \frac{1}{1\cdot 2\cdot 3} = \frac{1}{3!}, \quad\ldots,\quad w_n = \frac{1}{n!}.$$
And finally,
$$u_1(t) = t^{\sigma}\sum_{n=0}^{\infty} w_n t^n = t\sum_{n=0}^{\infty}\frac{t^n}{n!} = t\,e^t.$$
• Notice that both characteristic exponents are equal; hence we have to employ the d'Alembert reduction
$$u_2(t) = u_1(t)\int_0^t v(s)\,ds$$
with
$$v'(t) + v(t)\left[2\,\frac{u_1'(t)}{u_1(t)} + \tilde{p}_1(t)\right] = 0.$$
Insertion of $u_1$ and $\tilde{p}_1$,
$$u_1(t) = te^t, \quad u_1'(t) = e^t(1+t), \quad \tilde{p}_1(t) = -\left(\frac{1}{t}+1\right),$$
yields
$$\begin{aligned}
v'(t) + v(t)\left(2\,\frac{e^t(1+t)}{te^t} - \frac{1}{t} - 1\right) &= 0,\\
v'(t) + v(t)\left(\frac{2(1+t)}{t} - \frac{1}{t} - 1\right) &= 0,\\
v'(t) + v(t)\left(\frac{2}{t} + 2 - \frac{1}{t} - 1\right) &= 0,\\
v'(t) + v(t)\left(\frac{1}{t} + 1\right) &= 0,\\
\frac{dv}{dt} = -v\left(1+\frac{1}{t}\right), \qquad \frac{dv}{v} &= -\left(1+\frac{1}{t}\right)dt.
\end{aligned}$$
Upon integration of both sides we obtain
$$\int\frac{dv}{v} = -\int\left(1+\frac{1}{t}\right)dt, \qquad \log v = -(t+\log t) = -t - \log t,$$
$$v = \exp(-t-\log t) = e^{-t}e^{-\log t} = \frac{e^{-t}}{t},$$
and hence an explicit form of $v(t)$:
$$v(t) = \frac{1}{t}e^{-t}.$$
If we insert this into the equation for $u_2$ we obtain
$$u_2(t) = te^t\int_0^t\frac{1}{s}e^{-s}\,ds.$$

• Therefore, with $t = \frac{1}{z}$, $u(t) = w(z)$, the two linearly independent solutions around the regular singular point at $z = \infty$ are
$$w_1(z) = \frac{1}{z}\exp\left(\frac{1}{z}\right), \quad\text{and}\quad w_2(z) = \frac{1}{z}\exp\left(\frac{1}{z}\right)\int_0^{1/z}\frac{1}{t}e^{-t}\,dt. \tag{14.48}$$
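The closed-form solution $w_1$ of Eq. (14.48) can be spot-checked by inserting it back into the differential equation via finite differences; a sketch (step size and test points are my own choices):

```python
import math

def w1(z):
    """First solution from Eq. (14.48): w1(z) = (1/z) exp(1/z)."""
    return math.exp(1.0 / z) / z

def residual(z, h=1e-5):
    """Insert w1 into z² w'' + (3z+1) w' + w using central differences."""
    d1 = (w1(z + h) - w1(z - h)) / (2 * h)
    d2 = (w1(z + h) - 2 * w1(z) + w1(z - h)) / h**2
    return z**2 * d2 + (3 * z + 1) * d1 + w1(z)

print(residual(2.0))  # close to 0, up to finite-difference noise
```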
14.4 Hypergeometric function
14.4.1 Definition
A hypergeometric series is a series
$$\sum_{j=0}^{\infty} c_j, \tag{14.49}$$
where the quotients $\frac{c_{j+1}}{c_j}$ are rational functions (that is, quotients of two polynomials) of $j$, so that they can be factorized by
$$\begin{aligned}
\frac{c_{j+1}}{c_j} &= \frac{(j+a_1)(j+a_2)\cdots(j+a_p)}{(j+b_1)(j+b_2)\cdots(j+b_q)}\left(\frac{x}{j+1}\right),\\[4pt]
\text{or}\quad c_{j+1} &= c_j\,\frac{(j+a_1)(j+a_2)\cdots(j+a_p)}{(j+b_1)(j+b_2)\cdots(j+b_q)}\left(\frac{x}{j+1}\right)\\
&= c_{j-1}\,\frac{(j-1+a_1)(j-1+a_2)\cdots(j-1+a_p)}{(j-1+b_1)(j-1+b_2)\cdots(j-1+b_q)}\,\frac{(j+a_1)(j+a_2)\cdots(j+a_p)}{(j+b_1)(j+b_2)\cdots(j+b_q)}\left(\frac{x}{j}\right)\left(\frac{x}{j+1}\right)\\
&= c_0\,\frac{a_1 a_2\cdots a_p}{b_1 b_2\cdots b_q}\cdots\frac{(j-1+a_1)\cdots(j-1+a_p)}{(j-1+b_1)\cdots(j-1+b_q)}\,\frac{(j+a_1)\cdots(j+a_p)}{(j+b_1)\cdots(j+b_q)}\left(\frac{x}{1}\right)\cdots\left(\frac{x}{j}\right)\left(\frac{x}{j+1}\right)\\
&= c_0\,\frac{(a_1)_{j+1}(a_2)_{j+1}\cdots(a_p)_{j+1}}{(b_1)_{j+1}(b_2)_{j+1}\cdots(b_q)_{j+1}}\left(\frac{x^{j+1}}{(j+1)!}\right).
\end{aligned}\tag{14.50}$$
The factor $j+1$ in the denominator has been chosen to produce the particular factor $j!$ in the definition given below; if it does not arise "naturally" we may just obtain it by compensating it with a factor $j+1$ in the numerator. With this ratio, the hypergeometric series (14.49) can be written in terms of shifted factorials, or, by another naming, the Pochhammer symbol, as
$$\begin{aligned}
\sum_{j=0}^{\infty} c_j &= c_0\sum_{j=0}^{\infty}\frac{(a_1)_j(a_2)_j\cdots(a_p)_j}{(b_1)_j(b_2)_j\cdots(b_q)_j}\frac{x^j}{j!}\\
&= c_0\,{}_pF_q\!\left(\begin{matrix}a_1,\ldots,a_p\\ b_1,\ldots,b_q\end{matrix};x\right), \quad\text{or}\quad
= c_0\,{}_pF_q\!\left(a_1,\ldots,a_p;b_1,\ldots,b_q;x\right).
\end{aligned}\tag{14.51}$$
(14.51)
Apart from this definition via hypergeometric series, the Gauss hypergeometric function, or, used synonymously, the Gauss series
$$\begin{aligned}
{}_2F_1\!\left(\begin{matrix}a,b\\ c\end{matrix};x\right) = {}_2F_1(a,b;c;x) &= \sum_{j=0}^{\infty}\frac{(a)_j(b)_j}{(c)_j}\frac{x^j}{j!}\\
&= 1 + \frac{ab}{c}x + \frac{1}{2!}\frac{a(a+1)b(b+1)}{c(c+1)}x^2 + \cdots
\end{aligned}\tag{14.52}$$
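The Gauss series (14.52) is easy to sum term by term via the Pochhammer ratio of Eq. (14.50); a sketch (the function name and the logarithm check, which anticipates Eq. (14.71), are my own choices):

```python
import math

def hyp2f1(a, b, c, x, terms=300):
    """Partial sum of the Gauss series (14.52); converges for |x| < 1.
    Consecutive terms satisfy c_{j+1}/c_j = (a+j)(b+j) x / ((c+j)(j+1))."""
    term, total = 1.0, 0.0
    for j in range(terms):
        total += term
        term *= (a + j) * (b + j) * x / ((c + j) * (j + 1))
    return total

x = 0.3
print(hyp2f1(1, 1, 2, -x), math.log(1 + x) / x)  # log(1+x) = x ₂F₁(1,1;2;-x)
```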
can be defined as a solution of a Fuchsian differential equation which has at most three regular singularities at 0, 1, and $\infty$.

Indeed, any Fuchsian equation with finite regular singularities at $x_1$ and $x_2$ can be rewritten into the Riemann differential equation (14.27), which in turn can be rewritten into the Gaussian differential equation or hypergeometric differential equation with regular singularities at 0, 1, and $\infty$.⁹ This can be demonstrated by rewriting any such equation of the form

⁹ Einar Hille. Lectures on ordinary differential equations. Addison-Wesley, Reading, Mass., 1969; Garrett Birkhoff and Gian-Carlo Rota. Ordinary Differential Equations. John Wiley & Sons, New York, Chichester, Brisbane, Toronto, fourth edition, 1959, 1960, 1962, 1969, 1978, and 1989; and Gerhard Kristensson. Equations of Fuchsian type. In Second Order Differential Equations, pages 29–42. Springer, New York, 2010. ISBN 978-1-4419-7019-0. DOI: 10.1007/978-1-4419-7020-6. URL http://dx.doi.org/10.1007/978-1-4419-7020-6

(The Bessel equation, by contrast, has a regular singular point at 0, and an irregular singular point at infinity.)
$$\begin{aligned}
&w''(x) + \left(\frac{A_1}{x-x_1} + \frac{A_2}{x-x_2}\right)w'(x)\\
&\qquad + \left(\frac{B_1}{(x-x_1)^2} + \frac{B_2}{(x-x_2)^2} + \frac{C_1}{x-x_1} + \frac{C_2}{x-x_2}\right)w(x) = 0
\end{aligned}\tag{14.53}$$
through transforming Eq. (14.53) into the hypergeometric equation
$$\left[\frac{d^2}{dx^2} + \frac{(a+b+1)x - c}{x(x-1)}\frac{d}{dx} + \frac{ab}{x(x-1)}\right]{}_2F_1(a,b;c;x) = 0, \tag{14.54}$$
where the solution is proportional to the Gauss hypergeometric function
$$w(x) \longrightarrow (x-x_1)^{\sigma_1^{(1)}}(x-x_2)^{\sigma_2^{(2)}}\,{}_2F_1(a,b;c;x), \tag{14.55}$$
and the variables transform as
$$x \longrightarrow x = \frac{x-x_1}{x_2-x_1}, \quad\text{with}\quad
a = \sigma_1^{(1)} + \sigma_2^{(1)} + \sigma_\infty^{(1)}, \quad
b = \sigma_1^{(1)} + \sigma_2^{(1)} + \sigma_\infty^{(2)}, \quad
c = 1 + \sigma_1^{(1)} - \sigma_1^{(2)}, \tag{14.56}$$
where $\sigma_j^{(i)}$ stands for the $i$th characteristic exponent of the $j$th singularity.
Whereas the full transformation from Eq. (14.53) to the hypergeometric equation (14.54) will not be given here, we shall show that the Gauss hypergeometric function ${}_2F_1$ satisfies the hypergeometric equation (14.54).
First, define the differential operator
$$\vartheta = x\frac{d}{dx}, \tag{14.57}$$
and observe that
$$\begin{aligned}
\vartheta(\vartheta+c-1)x^n &= x\frac{d}{dx}\left(x\frac{d}{dx}+c-1\right)x^n\\
&= x\frac{d}{dx}\left(x\,n\,x^{n-1} + cx^n - x^n\right)\\
&= x\frac{d}{dx}\left(nx^n + cx^n - x^n\right)\\
&= x\frac{d}{dx}(n+c-1)x^n\\
&= n(n+c-1)x^n.
\end{aligned}\tag{14.58}$$
Thus, if we apply $\vartheta(\vartheta+c-1)$ to ${}_2F_1$, then
$$\begin{aligned}
\vartheta(\vartheta+c-1)\,{}_2F_1(a,b;c;x) &= \vartheta(\vartheta+c-1)\sum_{j=0}^{\infty}\frac{(a)_j(b)_j}{(c)_j}\frac{x^j}{j!}\\
&= \sum_{j=0}^{\infty}\frac{(a)_j(b)_j}{(c)_j}\frac{j(j+c-1)x^j}{j!} = \sum_{j=1}^{\infty}\frac{(a)_j(b)_j}{(c)_j}\frac{j(j+c-1)x^j}{j!}\\
&= \sum_{j=1}^{\infty}\frac{(a)_j(b)_j}{(c)_j}\frac{(j+c-1)x^j}{(j-1)!}\\
&\qquad[\text{index shift: } j \to n+1,\ n = j-1,\ n \geq 0]\\
&= \sum_{n=0}^{\infty}\frac{(a)_{n+1}(b)_{n+1}}{(c)_{n+1}}\frac{(n+1+c-1)x^{n+1}}{n!}\\
&= x\sum_{n=0}^{\infty}\frac{(a)_n(a+n)(b)_n(b+n)}{(c)_n(c+n)}\frac{(n+c)x^n}{n!}\\
&= x\sum_{n=0}^{\infty}\frac{(a)_n(b)_n}{(c)_n}\frac{(a+n)(b+n)x^n}{n!}\\
&= x(\vartheta+a)(\vartheta+b)\sum_{n=0}^{\infty}\frac{(a)_n(b)_n}{(c)_n}\frac{x^n}{n!} = x(\vartheta+a)(\vartheta+b)\,{}_2F_1(a,b;c;x),
\end{aligned}\tag{14.59}$$
where we have used
$$(a+n)x^n = (a+\vartheta)x^n, \quad\text{and}\quad (a)_{n+1} = a(a+1)\cdots(a+n-1)(a+n) = (a)_n(a+n). \tag{14.60}$$
Writing out $\vartheta$ in Eq. (14.59) explicitly yields
$$\begin{aligned}
&\Bigl\{\vartheta(\vartheta+c-1) - x(\vartheta+a)(\vartheta+b)\Bigr\}\,{}_2F_1(a,b;c;x) = 0,\\[4pt]
&\left\{x\frac{d}{dx}\left(x\frac{d}{dx}+c-1\right) - x\left(x\frac{d}{dx}+a\right)\left(x\frac{d}{dx}+b\right)\right\}{}_2F_1(a,b;c;x) = 0,\\[4pt]
&\left\{\frac{d}{dx}\left(x\frac{d}{dx}+c-1\right) - \left(x\frac{d}{dx}+a\right)\left(x\frac{d}{dx}+b\right)\right\}{}_2F_1(a,b;c;x) = 0,\\[4pt]
&\left\{\frac{d}{dx} + x\frac{d^2}{dx^2} + (c-1)\frac{d}{dx} - \left(x^2\frac{d^2}{dx^2} + x\frac{d}{dx} + bx\frac{d}{dx} + ax\frac{d}{dx} + ab\right)\right\}{}_2F_1(a,b;c;x) = 0,\\[4pt]
&\left\{\left(x-x^2\right)\frac{d^2}{dx^2} + \bigl(1+c-1-x-x(a+b)\bigr)\frac{d}{dx} - ab\right\}{}_2F_1(a,b;c;x) = 0,\\[4pt]
&\left\{x(x-1)\frac{d^2}{dx^2} + \bigl(x(1+a+b)-c\bigr)\frac{d}{dx} + ab\right\}{}_2F_1(a,b;c;x) = 0,\\[4pt]
&\left\{\frac{d^2}{dx^2} + \frac{x(1+a+b)-c}{x(x-1)}\frac{d}{dx} + \frac{ab}{x(x-1)}\right\}{}_2F_1(a,b;c;x) = 0.
\end{aligned}\tag{14.61}$$
14.4.2 Properties
There exist many properties of the hypergeometric series. In the following we shall mention a few:
$$\frac{d}{dz}\,{}_2F_1(a,b;c;z) = \frac{ab}{c}\,{}_2F_1(a+1,b+1;c+1;z). \tag{14.62}$$
Indeed,
$$\frac{d}{dz}\,{}_2F_1(a,b;c;z) = \frac{d}{dz}\sum_{n=0}^{\infty}\frac{(a)_n(b)_n}{(c)_n}\frac{z^n}{n!} = \sum_{n=0}^{\infty}\frac{(a)_n(b)_n}{(c)_n}\,n\,\frac{z^{n-1}}{n!} = \sum_{n=1}^{\infty}\frac{(a)_n(b)_n}{(c)_n}\frac{z^{n-1}}{(n-1)!}.$$
An index shift $n \to m+1$, $m = n-1$, and a subsequent renaming $m \to n$, yields
$$\frac{d}{dz}\,{}_2F_1(a,b;c;z) = \sum_{n=0}^{\infty}\frac{(a)_{n+1}(b)_{n+1}}{(c)_{n+1}}\frac{z^n}{n!}.$$
As
$$\begin{aligned}
(x)_{n+1} &= x(x+1)(x+2)\cdots(x+n-1)(x+n),\\
(x+1)_n &= (x+1)(x+2)\cdots(x+n-1)(x+n),\\
(x)_{n+1} &= x\,(x+1)_n
\end{aligned}$$
holds, we obtain
$$\frac{d}{dz}\,{}_2F_1(a,b;c;z) = \sum_{n=0}^{\infty}\frac{ab}{c}\frac{(a+1)_n(b+1)_n}{(c+1)_n}\frac{z^n}{n!} = \frac{ab}{c}\,{}_2F_1(a+1,b+1;c+1;z).
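The derivative rule (14.62) can also be verified numerically against a central difference; a self-contained sketch (parameter values are my own choices):

```python
def hyp2f1(a, b, c, x, terms=400):
    """Partial sum of the Gauss series (14.52), valid for |x| < 1."""
    term, total = 1.0, 0.0
    for j in range(terms):
        total += term
        term *= (a + j) * (b + j) * x / ((c + j) * (j + 1))
    return total

a, b, c, z, h = 0.5, 1.5, 2.5, 0.2, 1e-6
lhs = (hyp2f1(a, b, c, z + h) - hyp2f1(a, b, c, z - h)) / (2 * h)
rhs = a * b / c * hyp2f1(a + 1, b + 1, c + 1, z)  # Eq. (14.62)
print(lhs, rhs)
```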
We state Euler's integral representation for $\Re c > 0$ and $\Re b > 0$ without proof:
$$ {}_2F_1(a,b;c;x) = \frac{\Gamma(c)}{\Gamma(b)\Gamma(c-b)}\int_0^1 t^{b-1}(1-t)^{c-b-1}(1-xt)^{-a}\,dt. \tag{14.63}$$
For $\Re(c-a-b) > 0$, we also state Gauss' theorem
$$ {}_2F_1(a,b;c;1) = \sum_{j=0}^{\infty}\frac{(a)_j(b)_j}{j!\,(c)_j} = \frac{\Gamma(c)\Gamma(c-a-b)}{\Gamma(c-a)\Gamma(c-b)}. \tag{14.64}$$
For a proof, we can set $x = 1$ in Euler's integral representation (14.63), and use the Beta function defined in Eq. (14.20).
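Gauss' theorem (14.64) converges slowly at $x = 1$, but can still be checked with a long partial sum; a sketch (the parameter choice $\Re(c-a-b) = 1$, which keeps the tail of order $1/j^2$, is my own convergence estimate, not from the text):

```python
import math

def gauss_sum(a, b, c, terms):
    """Partial sum of the series in Eq. (14.64) at x = 1."""
    term, total = 1.0, 0.0
    for j in range(terms):
        total += term
        term *= (a + j) * (b + j) / ((c + j) * (j + 1))
    return total

a, b, c = 0.5, 0.5, 2.0  # c - a - b = 1 > 0, so Eq. (14.64) applies
lhs = gauss_sum(a, b, c, 50_000)
rhs = math.gamma(c) * math.gamma(c - a - b) / (math.gamma(c - a) * math.gamma(c - b))
print(lhs, rhs)  # both should be close to 4/π
```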
14.4.3 Plasticity
Some of the most important elementary functions can be expressed as
hypergeometric series; most importantly the Gaussian one 2F1, which is
sometimes denoted by just F . Let us enumerate a few.
$$\begin{aligned}
e^x &= {}_0F_0(-;-;x) &&(14.65)\\
\cos x &= {}_0F_1\left(-;\tfrac{1}{2};-\tfrac{x^2}{4}\right) &&(14.66)\\
\sin x &= x\,{}_0F_1\left(-;\tfrac{3}{2};-\tfrac{x^2}{4}\right) &&(14.67)\\
(1-x)^{-a} &= {}_1F_0(a;-;x) &&(14.68)\\
\sin^{-1}x &= x\,{}_2F_1\left(\tfrac{1}{2},\tfrac{1}{2};\tfrac{3}{2};x^2\right) &&(14.69)\\
\tan^{-1}x &= x\,{}_2F_1\left(\tfrac{1}{2},1;\tfrac{3}{2};-x^2\right) &&(14.70)\\
\log(1+x) &= x\,{}_2F_1(1,1;2;-x) &&(14.71)\\
H_{2n}(x) &= \frac{(-1)^n(2n)!}{n!}\,{}_1F_1\left(-n;\tfrac{1}{2};x^2\right) &&(14.72)\\
H_{2n+1}(x) &= \frac{2x(-1)^n(2n+1)!}{n!}\,{}_1F_1\left(-n;\tfrac{3}{2};x^2\right) &&(14.73)\\
L_n^{\alpha}(x) &= \binom{n+\alpha}{n}\,{}_1F_1(-n;\alpha+1;x) &&(14.74)\\
P_n(x) &= P_n^{(0,0)}(x) = {}_2F_1\left(-n,n+1;1;\tfrac{1-x}{2}\right), &&(14.75)\\
C_n^{\gamma}(x) &= \frac{(2\gamma)_n}{\left(\gamma+\tfrac{1}{2}\right)_n}\,P_n^{(\gamma-\frac{1}{2},\,\gamma-\frac{1}{2})}(x), &&(14.76)\\
T_n(x) &= \frac{n!}{\left(\tfrac{1}{2}\right)_n}\,P_n^{(-\frac{1}{2},\,-\frac{1}{2})}(x), &&(14.77)\\
J_{\alpha}(x) &= \frac{\left(\tfrac{x}{2}\right)^{\alpha}}{\Gamma(\alpha+1)}\,{}_0F_1\left(-;\alpha+1;-\tfrac{1}{4}x^2\right), &&(14.78)
\end{aligned}$$
where $H$ stands for the Hermite polynomials, $L$ for the Laguerre polynomials,
$$P_n^{(\alpha,\beta)}(x) = \frac{(\alpha+1)_n}{n!}\,{}_2F_1\left(-n,n+\alpha+\beta+1;\alpha+1;\tfrac{1-x}{2}\right) \tag{14.79}$$
for the Jacobi polynomials, $C$ for the Gegenbauer polynomials, $T$ for the Chebyshev polynomials, $P$ for the Legendre polynomials, and $J$ for the Bessel functions of the first kind, respectively.
1. Let us prove that
\[
\log(1-z) = -z\,{}_2F_1(1,1;2;z).
\]
Consider
\[
{}_2F_1(1,1;2;z) = \sum_{m=0}^{\infty}\frac{[(1)_m]^2}{(2)_m}\frac{z^m}{m!}
= \sum_{m=0}^{\infty}\frac{[1\cdot 2\cdots m]^2}{2\cdot(2+1)\cdots(2+m-1)}\frac{z^m}{m!}.
\]
With
\[
(1)_m = 1\cdot 2\cdots m = m!, \qquad (2)_m = 2\cdot(2+1)\cdots(2+m-1) = (m+1)!
\]
follows
\[
{}_2F_1(1,1;2;z) = \sum_{m=0}^{\infty}\frac{[m!]^2}{(m+1)!}\frac{z^m}{m!} = \sum_{m=0}^{\infty}\frac{z^m}{m+1}.
\]
The index shift $k = m+1$ yields
\[
{}_2F_1(1,1;2;z) = \sum_{k=1}^{\infty}\frac{z^{k-1}}{k},
\]
and hence
\[
-z\,{}_2F_1(1,1;2;z) = -\sum_{k=1}^{\infty}\frac{z^k}{k}.
\]
Compare with the series
\[
\log(1+x) = \sum_{k=1}^{\infty}(-1)^{k+1}\frac{x^k}{k} \quad\text{for } -1 < x \le 1.
\]
If one substitutes $-x$ for $x$, then
\[
\log(1-x) = -\sum_{k=1}^{\infty}\frac{x^k}{k}.
\]
The identity follows from the analytic continuation of $x$ to the complex $z$ plane.
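The identity just derived is easy to confirm numerically; the following check is illustrative and not part of the text. For $|z| < 1$ the partial sums of $\sum_k z^k/k$ converge rapidly.

```python
from math import log, isclose

z = 0.5
# -z 2F1(1,1;2;z) = -sum_{k>=1} z^k / k, which should equal log(1 - z)
s = -sum(z**k / k for k in range(1, 200))
assert isclose(s, log(1 - z), rel_tol=1e-12)
```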
2. Let us prove that, because of $(a+z)^n = \sum_{k=0}^{n}\binom{n}{k}z^k a^{n-k}$,
\[
(1-z)^n = {}_2F_1(-n,1;1;z).
\]
\[
{}_2F_1(-n,1;1;z) = \sum_{i=0}^{\infty}\frac{(-n)_i\,(1)_i}{(1)_i}\frac{z^i}{i!}
= \sum_{i=0}^{\infty}(-n)_i\frac{z^i}{i!}.
\]
Consider the factor $(-n)_i$:
\[
(-n)_i = (-n)(-n+1)\cdots(-n+i-1).
\]
For integer $n \ge 0$ the series terminates after a finite number of terms, because the factor $-n+i-1$ vanishes for $i = n+1$; hence the sum over $i$ extends only from $0$ to $n$. If we collect the factors $(-1)$, which yield $(-1)^i$, we obtain
\[
(-n)_i = (-1)^i n(n-1)\cdots[n-(i-1)] = (-1)^i\frac{n!}{(n-i)!}.
\]
Hence, insertion into the Gauss hypergeometric function yields
\[
{}_2F_1(-n,1;1;z) = \sum_{i=0}^{n}(-1)^i\frac{n!}{i!(n-i)!}z^i = \sum_{i=0}^{n}\binom{n}{i}(-z)^i.
\]
This is the binomial series
\[
(1+x)^n = \sum_{k=0}^{n}\binom{n}{k}x^k
\]
with $x = -z$; and hence,
\[
{}_2F_1(-n,1;1;z) = (1-z)^n.
\]
3. Let us prove that, because of
\[
\arcsin x = \sum_{k=0}^{\infty}\frac{(2k)!\,x^{2k+1}}{2^{2k}(k!)^2(2k+1)},
\]
\[
{}_2F_1\!\left(\tfrac12,\tfrac12;\tfrac32;\sin^2 z\right) = \frac{z}{\sin z}.
\]
Consider
\[
{}_2F_1\!\left(\tfrac12,\tfrac12;\tfrac32;\sin^2 z\right)
= \sum_{m=0}^{\infty}\frac{\left[\left(\tfrac12\right)_m\right]^2}{\left(\tfrac32\right)_m}\frac{(\sin z)^{2m}}{m!}.
\]
We take
\[
(2n)!! = 2\cdot 4\cdots(2n) = n!\,2^n, \qquad
(2n-1)!! = 1\cdot 3\cdots(2n-1) = \frac{(2n)!}{2^n n!}.
\]
Hence
\[
\left(\tfrac12\right)_m = \tfrac12\left(\tfrac12+1\right)\cdots\left(\tfrac12+m-1\right)
= \frac{1\cdot 3\cdot 5\cdots(2m-1)}{2^m} = \frac{(2m-1)!!}{2^m},
\]
\[
\left(\tfrac32\right)_m = \tfrac32\left(\tfrac32+1\right)\cdots\left(\tfrac32+m-1\right)
= \frac{3\cdot 5\cdot 7\cdots(2m+1)}{2^m} = \frac{(2m+1)!!}{2^m}.
\]
Therefore,
\[
\frac{\left(\tfrac12\right)_m}{\left(\tfrac32\right)_m} = \frac{1}{2m+1}.
\]
On the other hand,
\[
(2m)! = 1\cdot 2\cdot 3\cdots(2m-1)(2m) = (2m-1)!!\,(2m)!!
= \left(\tfrac12\right)_m 2^m \cdot 2^m m! = 2^{2m}m!\left(\tfrac12\right)_m
\;\Longrightarrow\;
\left(\tfrac12\right)_m = \frac{(2m)!}{2^{2m}m!}.
\]
Upon insertion one obtains
\[
F\!\left(\tfrac12,\tfrac12;\tfrac32;\sin^2 z\right)
= \sum_{m=0}^{\infty}\frac{(2m)!\,(\sin z)^{2m}}{2^{2m}(m!)^2(2m+1)}.
\]
Comparing with the series for arcsin one finally obtains
\[
\sin z\;F\!\left(\tfrac12,\tfrac12;\tfrac32;\sin^2 z\right) = \arcsin(\sin z) = z.
\]
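A numerical sanity check of this result (illustrative, not from the text): summing the arcsin series at $x = \sin z$ must reproduce $z$. The term ratio used below follows from the series coefficients above.

```python
from math import sin, isclose

z = 0.7
x = sin(z)
# arcsin series terms a_m = (2m)! x^(2m+1) / (2^(2m) (m!)^2 (2m+1)) satisfy
# a_{m+1}/a_m = (2m+1)^2 x^2 / (2(m+1)(2m+3))
s, t, m = 0.0, x, 0
while abs(t) > 1e-17:
    s += t
    t *= (2*m + 1)**2 * x * x / (2 * (m + 1) * (2*m + 3))
    m += 1
assert isclose(s, z, rel_tol=1e-12)   # arcsin(sin z) = z
```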
14.4.4 Four forms

We state without proof the four forms of Gauss' hypergeometric function.¹⁰

¹⁰ T. M. MacRobert. Spherical Harmonics. An Elementary Treatise on Harmonic Functions with Applications, volume 98 of International Series of Monographs in Pure and Applied Mathematics. Pergamon Press, Oxford, 3rd edition, 1967.

\[
\begin{aligned}
F(a,b;c;x) &= (1-x)^{c-a-b}F(c-a,c-b;c;x) & (14.80)\\
&= (1-x)^{-a}F\!\left(a,c-b;c;\frac{x}{x-1}\right) & (14.81)\\
&= (1-x)^{-b}F\!\left(b,c-a;c;\frac{x}{x-1}\right). & (14.82)
\end{aligned}
\]
14.5 Orthogonal polynomials

Many systems or sequences of functions may serve as a basis of linearly independent functions which are capable of "covering" – that is, approximating – certain functional classes.¹¹ We have already encountered at least two such prospective bases [cf. Eq. (9.12)]:

¹¹ Russell Herman. A Second Course in Ordinary Differential Equations: Dynamical Systems and Boundary Value Problems. University of North Carolina Wilmington, Wilmington, NC, 2008. URL http://people.uncw.edu/hermanr/mat463/ODEBook/Book/ODE_LargeFont.pdf. Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License; and Francisco Marcellán and Walter Van Assche. Orthogonal Polynomials and Special Functions, volume 1883 of Lecture Notes in Mathematics. Springer, Berlin, 2006. ISBN 3-540-31062-2.

\[
1, x, x^2, \ldots, x^k, \ldots \quad\text{with}\quad f(x) = \sum_{k=0}^{\infty}c_k x^k, \tag{14.83}
\]
and
\[
\{e^{ikx} \mid k\in\mathbb{Z}\} \quad\text{for } f(x+2\pi) = f(x),
\quad\text{with}\quad f(x) = \sum_{k=-\infty}^{\infty}c_k e^{ikx},
\quad\text{where}\quad c_k = \frac{1}{2\pi}\int_{-\pi}^{\pi}f(x)e^{-ikx}\,dx. \tag{14.84}
\]
In order to claim existence of such functional basis systems, let us first define what orthogonality means in the functional context. Just as for linear vector spaces, we can define an inner product or scalar product [cf. also Eq. (9.4)] of two real-valued functions $f(x)$ and $g(x)$ by the integral¹²

¹² Herbert S. Wilf. Mathematics for the Physical Sciences. Dover, New York, 1962. URL http://www.math.upenn.edu/~wilf/website/Mathematics_for_the_Physical_Sciences.html.

\[
\langle f \mid g\rangle = \int_a^b f(x)g(x)\rho(x)\,dx \tag{14.85}
\]
for some suitable weight function $\rho(x) \ge 0$. Very often the weight function is set to unity; that is, $\rho(x) = \rho = 1$. We notice without proof that $\langle f \mid g\rangle$ satisfies all requirements of a scalar product. A system of functions $\psi_0, \psi_1, \psi_2, \ldots, \psi_k, \ldots$ is orthogonal if, for $j \ne k$,
\[
\langle\psi_j\mid\psi_k\rangle = \int_a^b \psi_j(x)\psi_k(x)\rho(x)\,dx = 0. \tag{14.86}
\]
Suppose, in some generality, that $f_0, f_1, f_2, \ldots, f_k, \ldots$ is a sequence of nonorthogonal functions. Then we can apply a Gram-Schmidt orthogonalization process to these functions and thereby obtain orthogonal functions $\phi_0, \phi_1, \phi_2, \ldots, \phi_k, \ldots$ by
\[
\phi_0(x) = f_0(x), \qquad
\phi_k(x) = f_k(x) - \sum_{j=0}^{k-1}\frac{\langle f_k\mid\phi_j\rangle}{\langle\phi_j\mid\phi_j\rangle}\,\phi_j(x). \tag{14.87}
\]
Note that the proof of the Gram-Schmidt process in the functional context is analogous to the one in the vector context.
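The Gram-Schmidt step (14.87) can be carried out exactly for polynomials. The sketch below is illustrative (the coefficient-list representation and helper names are mine): it stores a polynomial as a list of rational coefficients and uses $\int_{-1}^{1}x^n\,dx = 2/(n+1)$ for even $n$ and $0$ for odd $n$, i.e. the weight $\rho = 1$ on $[-1,1]$.

```python
from fractions import Fraction as F

def inner(p, q):
    """Exact <p|q> = integral over [-1,1] of p(x)q(x) dx for coefficient lists."""
    s = F(0)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            if (i + j) % 2 == 0:          # odd powers integrate to zero
                s += a * b * F(2, i + j + 1)
    return s

def gram_schmidt(n):
    """Orthogonalize the monomials 1, x, ..., x^(n-1) as in Eq. (14.87)."""
    basis = []
    for k in range(n):
        f = [F(0)] * k + [F(1)]           # the monomial x^k
        for phi in basis:
            c = inner(f, phi) / inner(phi, phi)
            phi_p = phi + [F(0)] * (len(f) - len(phi))
            f = [a - c * b for a, b in zip(f, phi_p)]
        basis.append(f)
    return basis

phis = gram_schmidt(4)
assert phis[2] == [F(-1, 3), F(0), F(1)]          # x^2 - 1/3
assert phis[3] == [F(0), F(-3, 5), F(0), F(1)]    # x^3 - (3/5)x
```

Dividing each $\phi_l$ by its value at $x = 1$ reproduces the Legendre polynomials of the next section.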
14.6 Legendre polynomials

The system of polynomial functions $1, x, x^2, \ldots, x^k, \ldots$ is such a nonorthogonal sequence in this sense, as, for instance, with $\rho = 1$ and $b = -a = 1$,
\[
\langle 1\mid x^2\rangle = \int_{a=-1}^{b=1}x^2\,dx = \left.\frac{x^3}{3}\right|_{x=-1}^{x=1} = \frac{2}{3}. \tag{14.88}
\]
Hence, by the Gram-Schmidt process we obtain
\[
\begin{aligned}
\phi_0(x) &= 1,\\
\phi_1(x) &= x - \frac{\langle x\mid 1\rangle}{\langle 1\mid 1\rangle}\,1 = x - 0 = x,\\
\phi_2(x) &= x^2 - \frac{\langle x^2\mid 1\rangle}{\langle 1\mid 1\rangle}\,1 - \frac{\langle x^2\mid x\rangle}{\langle x\mid x\rangle}\,x
= x^2 - \frac{2/3}{2}\,1 - 0\,x = x^2 - \frac{1}{3},\\
&\;\;\vdots
\end{aligned} \tag{14.89}
\]
If, on top of orthogonality, we "force" a type of "normalization" by defining
\[
P_l(x) \stackrel{\text{def}}{=} \frac{\phi_l(x)}{\phi_l(1)}, \quad\text{with } P_l(1) = 1, \tag{14.90}
\]
then the resulting orthogonal polynomials are the Legendre polynomials $P_l$; in particular,
\[
P_0(x) = 1, \qquad P_1(x) = x, \qquad
P_2(x) = \left(x^2-\frac{1}{3}\right)\Big/\,\frac{2}{3} = \frac{1}{2}\left(3x^2-1\right), \qquad \ldots \tag{14.91}
\]
with $P_l(1) = 1$ for $l \in \mathbb{N}_0$.

Why should we be interested in orthonormal systems of functions? Because, as pointed out earlier, they could be the eigenfunctions and solutions of certain differential equations, such as, for instance, the Schrödinger equation, which may be subjected to a separation of variables. For Legendre polynomials the associated differential equation is the Legendre equation
\[
(x^2-1)[P_l(x)]'' + 2x[P_l(x)]' = l(l+1)P_l(x), \quad\text{for } l\in\mathbb{N}_0, \tag{14.92}
\]
whose Sturm-Liouville form has been mentioned earlier in Table 12.1 on page 202. For a proof, we refer to the literature.
14.6.1 Rodrigues formula

We just state the Rodrigues formula for Legendre polynomials,
\[
P_l(x) = \frac{1}{2^l l!}\frac{d^l}{dx^l}(x^2-1)^l, \quad\text{for } l\in\mathbb{N}_0, \tag{14.93}
\]
without proof.
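The Rodrigues formula can be evaluated with exact integer arithmetic; the sketch below is illustrative (not from the text) and differentiates the coefficient list of $(x^2-1)^l$ a total of $l$ times.

```python
from fractions import Fraction as F
from math import comb, factorial

def legendre(l):
    """P_l via Rodrigues (14.93): (1/(2^l l!)) d^l/dx^l (x^2 - 1)^l."""
    # binomial expansion of (x^2 - 1)^l: coefficient of x^(2k) is (-1)^(l-k) C(l,k)
    c = [F(0)] * (2 * l + 1)
    for k in range(l + 1):
        c[2 * k] = F((-1) ** (l - k) * comb(l, k))
    for _ in range(l):                    # differentiate l times
        c = [F(i) * c[i] for i in range(1, len(c))]
    return [a / (2 ** l * factorial(l)) for a in c]

assert legendre(2) == [F(-1, 2), F(0), F(3, 2)]          # (3x^2 - 1)/2
assert legendre(3) == [F(0), F(-3, 2), F(0), F(5, 2)]    # (5x^3 - 3x)/2
```

Only even (for even $l$) or odd (for odd $l$) powers survive, which makes the parity property discussed next manifest.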
For even $l$, $P_l(x) = P_l(-x)$ is an even function of $x$, whereas for odd $l$, $P_l(x) = -P_l(-x)$ is an odd function of $x$; that is,
\[
P_l(-x) = (-1)^l P_l(x). \tag{14.94}
\]
Moreover,
\[
P_l(-1) = (-1)^l \tag{14.95}
\]
and
\[
P_{2k+1}(0) = 0. \tag{14.96}
\]
This can be shown by the substitution $t = -x$, $dt = -dx$, and insertion into the Rodrigues formula:
\[
P_l(-x) = \frac{1}{2^l l!}\left.\frac{d^l}{du^l}(u^2-1)^l\right|_{u=-x}
= [u\to -u] =
\frac{1}{(-1)^l}\frac{1}{2^l l!}\left.\frac{d^l}{du^l}(u^2-1)^l\right|_{u=x}
= (-1)^l P_l(x).
\]
Because of the "normalization" $P_l(1) = 1$ we obtain
\[
P_l(-1) = (-1)^l P_l(1) = (-1)^l.
\]
And as $P_l(-0) = P_l(0) = (-1)^l P_l(0)$, we obtain $P_l(0) = 0$ for odd $l$.
14.6.2 Generating function

For $|x| < 1$ and $|t| < 1$ the Legendre polynomials have the following generating function
\[
g(x,t) = \frac{1}{\sqrt{1-2xt+t^2}} = \sum_{l=0}^{\infty}t^l P_l(x). \tag{14.97}
\]
No proof is given here.
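A quick numerical check of (14.97) is nevertheless easy; the code below is illustrative and uses the three term recursion (14.98) of the next subsection to generate the $P_l$.

```python
from math import isclose, sqrt

def legendre_values(x, n):
    """P_0(x) ... P_n(x) from (l+1)P_{l+1} = (2l+1)x P_l - l P_{l-1}."""
    p = [1.0, x]
    for l in range(1, n):
        p.append(((2*l + 1) * x * p[l] - l * p[l - 1]) / (l + 1))
    return p

x, t = 0.4, 0.3
lhs = 1.0 / sqrt(1 - 2*x*t + t*t)
rhs = sum(t**l * pl for l, pl in enumerate(legendre_values(x, 60)))
assert isclose(lhs, rhs, rel_tol=1e-12)   # |P_l(x)| <= 1, so t^61, t^62, ... are negligible
```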
14.6.3 The three term and other recursion formulae

Among other things, generating functions are useful for the derivation of certain recursion relations involving Legendre polynomials.

For instance, for $l = 1, 2, \ldots$, the three term recursion formula
\[
(2l+1)xP_l(x) = (l+1)P_{l+1}(x) + lP_{l-1}(x), \tag{14.98}
\]
or, by substituting $l-1$ for $l$, for $l = 2, 3, \ldots$,
\[
(2l-1)xP_{l-1}(x) = lP_l(x) + (l-1)P_{l-2}(x), \tag{14.99}
\]
can be proven as follows.
\[
g(x,t) = \frac{1}{\sqrt{1-2tx+t^2}} = \sum_{n=0}^{\infty}t^n P_n(x)
\]
\[
\frac{\partial}{\partial t}g(x,t) = -\frac{1}{2}(1-2tx+t^2)^{-\frac32}(-2x+2t)
= \frac{1}{\sqrt{1-2tx+t^2}}\,\frac{x-t}{1-2tx+t^2}
\]
\[
\frac{\partial}{\partial t}g(x,t) = \frac{x-t}{1-2tx+t^2}\sum_{n=0}^{\infty}t^n P_n(x)
= \sum_{n=0}^{\infty}nt^{n-1}P_n(x)
\]
\[
(x-t)\sum_{n=0}^{\infty}t^n P_n(x) - (1-2tx+t^2)\sum_{n=0}^{\infty}nt^{n-1}P_n(x) = 0
\]
\[
\sum_{n=0}^{\infty}xt^n P_n(x) - \sum_{n=0}^{\infty}t^{n+1}P_n(x) - \sum_{n=1}^{\infty}nt^{n-1}P_n(x)
+ \sum_{n=0}^{\infty}2xnt^n P_n(x) - \sum_{n=0}^{\infty}nt^{n+1}P_n(x) = 0
\]
\[
\sum_{n=0}^{\infty}(2n+1)xt^n P_n(x) - \sum_{n=0}^{\infty}(n+1)t^{n+1}P_n(x) - \sum_{n=1}^{\infty}nt^{n-1}P_n(x) = 0
\]
\[
\sum_{n=0}^{\infty}(2n+1)xt^n P_n(x) - \sum_{n=1}^{\infty}nt^n P_{n-1}(x) - \sum_{n=0}^{\infty}(n+1)t^n P_{n+1}(x) = 0,
\]
\[
xP_0(x) - P_1(x) + \sum_{n=1}^{\infty}t^n\left[(2n+1)xP_n(x) - nP_{n-1}(x) - (n+1)P_{n+1}(x)\right] = 0,
\]
hence
\[
xP_0(x) - P_1(x) = 0, \qquad (2n+1)xP_n(x) - nP_{n-1}(x) - (n+1)P_{n+1}(x) = 0,
\]
hence
\[
P_1(x) = xP_0(x), \qquad (n+1)P_{n+1}(x) = (2n+1)xP_n(x) - nP_{n-1}(x).
\]
Let us prove
\[
P_{l-1}(x) = P_l'(x) - 2xP_{l-1}'(x) + P_{l-2}'(x). \tag{14.100}
\]
\[
g(x,t) = \frac{1}{\sqrt{1-2tx+t^2}} = \sum_{n=0}^{\infty}t^n P_n(x)
\]
\[
\frac{\partial}{\partial x}g(x,t) = -\frac{1}{2}(1-2tx+t^2)^{-\frac32}(-2t)
= \frac{1}{\sqrt{1-2tx+t^2}}\,\frac{t}{1-2tx+t^2}
\]
\[
\frac{\partial}{\partial x}g(x,t) = \frac{t}{1-2tx+t^2}\sum_{n=0}^{\infty}t^n P_n(x)
= \sum_{n=0}^{\infty}t^n P_n'(x)
\]
\[
\sum_{n=0}^{\infty}t^{n+1}P_n(x) = \sum_{n=0}^{\infty}t^n P_n'(x)
- \sum_{n=0}^{\infty}2xt^{n+1}P_n'(x) + \sum_{n=0}^{\infty}t^{n+2}P_n'(x)
\]
\[
\sum_{n=1}^{\infty}t^n P_{n-1}(x) = \sum_{n=0}^{\infty}t^n P_n'(x)
- \sum_{n=1}^{\infty}2xt^n P_{n-1}'(x) + \sum_{n=2}^{\infty}t^n P_{n-2}'(x)
\]
\[
tP_0 + \sum_{n=2}^{\infty}t^n P_{n-1}(x) = P_0'(x) + tP_1'(x) + \sum_{n=2}^{\infty}t^n P_n'(x)
- 2xtP_0' - \sum_{n=2}^{\infty}2xt^n P_{n-1}'(x) + \sum_{n=2}^{\infty}t^n P_{n-2}'(x)
\]
\[
P_0'(x) + t\left[P_1'(x) - P_0(x) - 2xP_0'(x)\right]
+ \sum_{n=2}^{\infty}t^n\left[P_n'(x) - 2xP_{n-1}'(x) + P_{n-2}'(x) - P_{n-1}(x)\right] = 0
\]
\[
P_0'(x) = 0, \text{ hence } P_0(x) = \text{const.},
\]
\[
P_1'(x) - P_0(x) - 2xP_0'(x) = 0.
\]
Because of $P_0'(x) = 0$ we obtain $P_1'(x) - P_0(x) = 0$, hence $P_1'(x) = P_0(x)$, and
\[
P_n'(x) - 2xP_{n-1}'(x) + P_{n-2}'(x) - P_{n-1}(x) = 0.
\]
Finally we substitute $n+1$ for $n$:
\[
P_{n+1}'(x) - 2xP_n'(x) + P_{n-1}'(x) - P_n(x) = 0,
\]
hence
\[
P_n(x) = P_{n+1}'(x) - 2xP_n'(x) + P_{n-1}'(x).
\]
Let us prove
\[
P_{l+1}'(x) - P_{l-1}'(x) = (2l+1)P_l(x). \tag{14.101}
\]
\[
(n+1)P_{n+1}(x) = (2n+1)xP_n(x) - nP_{n-1}(x) \quad\Bigg|\;\frac{d}{dx}
\]
\[
(n+1)P_{n+1}'(x) = (2n+1)P_n(x) + (2n+1)xP_n'(x) - nP_{n-1}'(x) \quad\Big|\;\cdot\,2
\]
\[
\text{(i):}\quad (2n+2)P_{n+1}'(x) = 2(2n+1)P_n(x) + 2(2n+1)xP_n'(x) - 2nP_{n-1}'(x)
\]
\[
P_{n+1}'(x) - 2xP_n'(x) + P_{n-1}'(x) = P_n(x) \quad\Big|\;\cdot\,(2n+1)
\]
\[
\text{(ii):}\quad (2n+1)P_{n+1}'(x) - 2(2n+1)xP_n'(x) + (2n+1)P_{n-1}'(x) = (2n+1)P_n(x)
\]
We subtract (ii) from (i):
\[
P_{n+1}'(x) + 2(2n+1)xP_n'(x) - (2n+1)P_{n-1}'(x)
= (2n+1)P_n(x) + 2(2n+1)xP_n'(x) - 2nP_{n-1}'(x);
\]
hence
\[
P_{n+1}'(x) - P_{n-1}'(x) = (2n+1)P_n(x).
\]
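Identity (14.101) can also be confirmed with exact rational arithmetic. In the illustrative sketch below (helper names are mine) Legendre polynomials are built as coefficient lists from the three term recursion (14.98) and differentiated termwise.

```python
from fractions import Fraction as F

def legendre_polys(n):
    """Coefficient lists of P_0 ... P_n via (l+1)P_{l+1} = (2l+1)x P_l - l P_{l-1}."""
    P = [[F(1)], [F(0), F(1)]]
    for l in range(1, n):
        xp = [F(0)] + P[l]                                # x * P_l
        prev = P[l - 1] + [F(0)] * (len(xp) - len(P[l - 1]))
        P.append([(F(2*l + 1) * a - F(l) * b) / F(l + 1)
                  for a, b in zip(xp, prev)])
    return P

def deriv(c):
    return [F(i) * c[i] for i in range(1, len(c))]

def pad(c, n):
    return c + [F(0)] * (n - len(c))

P = legendre_polys(8)
for l in range(1, 7):
    lhs = [a - b for a, b in zip(pad(deriv(P[l + 1]), l + 1),
                                 pad(deriv(P[l - 1]), l + 1))]
    assert lhs == [F(2*l + 1) * c for c in P[l]]          # (14.101), exactly
```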
14.6.4 Expansion in Legendre polynomials

We state without proof that square integrable functions $f(x)$ can be written as series of Legendre polynomials as
\[
f(x) = \sum_{l=0}^{\infty}a_l P_l(x),
\quad\text{with expansion coefficients}\quad
a_l = \frac{2l+1}{2}\int_{-1}^{+1}f(x)P_l(x)\,dx. \tag{14.102}
\]
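Formula (14.102) can be tried out numerically. The sketch below is illustrative (Simpson weights and step counts are my choices); it computes $a_l$ for $f(x) = x^3$, whose exact expansion is $x^3 = \tfrac35 P_1 + \tfrac25 P_3$.

```python
from math import isclose

def p_l(x, l):
    """P_l(x) by the three term recursion (14.98)."""
    p0, p1 = 1.0, x
    for k in range(1, l):
        p0, p1 = p1, ((2*k + 1) * x * p1 - k * p0) / (k + 1)
    return p0 if l == 0 else p1

def coeff(f, l, steps=2000):
    """a_l = (2l+1)/2 * integral_{-1}^{1} f(x) P_l(x) dx by Simpson's rule."""
    h = 2.0 / steps
    s = 0.0
    for i in range(steps + 1):
        x = -1.0 + i * h
        w = 1 if i in (0, steps) else (4 if i % 2 else 2)
        s += w * f(x) * p_l(x, l)
    return (2*l + 1) / 2 * s * h / 3

f = lambda x: x**3
assert isclose(coeff(f, 1), 0.6, abs_tol=1e-9)   # x^3 = (3/5) P_1 + (2/5) P_3
assert isclose(coeff(f, 3), 0.4, abs_tol=1e-9)
assert abs(coeff(f, 2)) < 1e-9                    # even l do not contribute
```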
Let us expand the Heaviside function defined in Eq. (10.106),
\[
H(x) = \begin{cases} 1 & \text{for } x \ge 0,\\ 0 & \text{for } x < 0,\end{cases} \tag{14.103}
\]
in terms of Legendre polynomials.

We shall use the recursion formula $(2l+1)P_l = P_{l+1}' - P_{l-1}'$ and rewrite
\[
a_l = \frac{1}{2}\int_0^1\left(P_{l+1}'(x) - P_{l-1}'(x)\right)dx
= \frac{1}{2}\left.\left(P_{l+1}(x) - P_{l-1}(x)\right)\right|_{x=0}^{1}
= \underbrace{\frac{1}{2}\left[P_{l+1}(1) - P_{l-1}(1)\right]}_{=\,0 \text{ because of "normalization"}}
- \frac{1}{2}\left[P_{l+1}(0) - P_{l-1}(0)\right].
\]
Note that $P_n(0) = 0$ for odd $n$; hence $a_l = 0$ for even $l \ne 0$. We shall treat the case $l = 0$ with $P_0(x) = 1$ separately. Upon substituting $2l+1$ for $l$ one obtains
\[
a_{2l+1} = -\frac{1}{2}\left[P_{2l+2}(0) - P_{2l}(0)\right].
\]
We shall next use the formula, valid for even degree,
\[
P_l(0) = (-1)^{\frac{l}{2}}\,\frac{l!}{2^l\left(\left(\frac{l}{2}\right)!\right)^2},
\]
and for $l \ge 0$ one obtains
\[
\begin{aligned}
a_{2l+1} &= -\frac{1}{2}\left[\frac{(-1)^{l+1}(2l+2)!}{2^{2l+2}((l+1)!)^2} - \frac{(-1)^l(2l)!}{2^{2l}(l!)^2}\right]
= \frac{(-1)^l(2l)!}{2^{2l+1}(l!)^2}\left[\frac{(2l+1)(2l+2)}{2^2(l+1)^2} + 1\right]\\
&= \frac{(-1)^l(2l)!}{2^{2l+1}(l!)^2}\left[\frac{2(2l+1)(l+1)}{2^2(l+1)^2} + 1\right]
= \frac{(-1)^l(2l)!}{2^{2l+1}(l!)^2}\left[\frac{2l+1+2l+2}{2(l+1)}\right]\\
&= \frac{(-1)^l(2l)!}{2^{2l+1}(l!)^2}\left[\frac{4l+3}{2(l+1)}\right]
= \frac{(-1)^l(2l)!(4l+3)}{2^{2l+2}\,l!\,(l+1)!},
\end{aligned}
\]
\[
a_0 = \frac{1}{2}\int_{-1}^{+1}H(x)\underbrace{P_0(x)}_{=\,1}\,dx = \frac{1}{2}\int_0^1 dx = \frac{1}{2};
\]
and finally
\[
H(x) = \frac{1}{2} + \sum_{l=0}^{\infty}\frac{(-1)^l(2l)!(4l+3)}{2^{2l+2}\,l!\,(l+1)!}\,P_{2l+1}(x).
\]
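The partial sums of this expansion can be inspected numerically. The sketch below is illustrative; instead of evaluating factorials, it updates the coefficient by its ratio $c_{l+1}/c_l = -(2l+1)(4l+7)/\bigl(2(l+2)(4l+3)\bigr)$, which follows from the closed form above, starting from $c_0 = 3/4$.

```python
def heaviside_series(x, terms=300):
    """Partial sum 1/2 + sum_l c_l P_{2l+1}(x) of the Legendre expansion of H."""
    deg = 2 * terms + 1
    p = [1.0, x]                          # P_l(x) via the recursion (14.98)
    for l in range(1, deg):
        p.append(((2*l + 1) * x * p[l] - l * p[l - 1]) / (l + 1))
    s, c = 0.5, 0.75                      # c_0 = 3/4
    for l in range(terms):
        s += c * p[2*l + 1]
        c *= -(2*l + 1) * (4*l + 7) / (2 * (l + 2) * (4*l + 3))
    return s

assert heaviside_series(0.0) == 0.5       # all P_{2l+1}(0) vanish exactly
assert abs(heaviside_series(0.5) - 1.0) < 0.05
assert abs(heaviside_series(-0.5)) < 0.05
```

As with Fourier series of a step function, convergence near the jump at $x = 0$ is slow and oscillatory.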
14.7 Associated Legendre polynomial

Associated Legendre polynomials $P_l^m(x)$ are the solutions of the general Legendre equation
\[
\left[(1-x^2)\frac{d^2}{dx^2} - 2x\frac{d}{dx} + l(l+1) - \frac{m^2}{1-x^2}\right]P_l^m(x) = 0,
\quad\text{or}\quad
\left[\frac{d}{dx}\left((1-x^2)\frac{d}{dx}\right) + l(l+1) - \frac{m^2}{1-x^2}\right]P_l^m(x) = 0. \tag{14.104}
\]
Eq. (14.104) reduces to the Legendre equation (14.92) on page 234 for $m = 0$; hence
\[
P_l^0(x) = P_l(x). \tag{14.105}
\]
More generally, by differentiating the Legendre equation (14.92) $m$ times it can be shown that
\[
P_l^m(x) = (-1)^m(1-x^2)^{\frac{m}{2}}\frac{d^m}{dx^m}P_l(x). \tag{14.106}
\]
By inserting $P_l(x)$ from the Rodrigues formula for Legendre polynomials (14.93) we obtain
\[
P_l^m(x) = (-1)^m(1-x^2)^{\frac{m}{2}}\frac{d^m}{dx^m}\frac{1}{2^l l!}\frac{d^l}{dx^l}(x^2-1)^l
= \frac{(-1)^m(1-x^2)^{\frac{m}{2}}}{2^l l!}\frac{d^{m+l}}{dx^{m+l}}(x^2-1)^l. \tag{14.107}
\]
In terms of the Gauss hypergeometric function the associated Legendre polynomials can be generalized to arbitrary complex indices $\mu$, $\lambda$ and argument $x$ by
\[
P_\lambda^\mu(x) = \frac{1}{\Gamma(1-\mu)}\left(\frac{1+x}{1-x}\right)^{\frac{\mu}{2}}
{}_2F_1\!\left(-\lambda,\lambda+1;1-\mu;\frac{1-x}{2}\right). \tag{14.108}
\]
No proof is given here.
14.8 Spherical harmonics

Let us define the spherical harmonics $Y_l^m(\theta,\varphi)$ by
\[
Y_l^m(\theta,\varphi) = \sqrt{\frac{(2l+1)(l-m)!}{4\pi(l+m)!}}\,P_l^m(\cos\theta)\,e^{im\varphi}
\quad\text{for } -l \le m \le l. \tag{14.109}
\]
Spherical harmonics are solutions of the differential equation
\[
[\Delta + l(l+1)]\,Y_l^m(\theta,\varphi) = 0. \tag{14.110}
\]
This equation is what typically remains after separation and "removal" of the radial part of the Laplace equation $\Delta\psi(r,\theta,\varphi) = 0$ in three dimensions when the problem is invariant (symmetric) under rotations. Twice continuously differentiable, complex-valued solutions $u$ of the Laplace equation $\Delta u = 0$ are called harmonic functions.

Sheldon Axler, Paul Bourdon, and Wade Ramey. Harmonic Function Theory, volume 137 of Graduate Texts in Mathematics. Second edition, 1994. ISBN 0-387-97875-5.
14.9 Solution of the Schrödinger equation for a hydrogen atom

Suppose Schrödinger, in his 1926 annus mirabilis – which seems to have been initiated by a trip to Arosa with 'an old girlfriend from Vienna' (apparently it was neither his wife Anny, who remained in Zurich, nor Lotte, nor Irene, nor Felicie¹³) – came down from the mountains or from whatever realm he was in, and handed you some partial differential equation for the hydrogen atom – an equation (note that the quantum mechanical "momentum operator" $\mathbf{P}$ is identified with $-i\hbar\nabla$)

¹³ Walter Moore. Schrödinger: Life and Thought. Cambridge University Press, Cambridge, UK, 1989.

\[
\frac{1}{2\mu}\mathbf{P}^2\psi = \frac{1}{2\mu}\left(P_x^2+P_y^2+P_z^2\right)\psi = (E-V)\psi,
\]
or, with $V = -\dfrac{e^2}{4\pi\varepsilon_0 r}$,
\[
-\left[\frac{\hbar^2}{2\mu}\Delta + \frac{e^2}{4\pi\varepsilon_0 r}\right]\psi(\mathbf{x}) = E\psi,
\quad\text{or}\quad
\left[\Delta + \frac{2\mu}{\hbar^2}\left(\frac{e^2}{4\pi\varepsilon_0 r} + E\right)\right]\psi(\mathbf{x}) = 0, \tag{14.111}
\]
which would later bear his name – and asked you if you could be so kind as to please solve it for him. Actually, by Schrödinger's own account¹⁴ he handed over this eigenwert equation to Hermann Klaus Hugo Weyl; in this instance he was not dissimilar from Einstein, who seemed to have employed a (human) computator on a very regular basis. Schrödinger might also have hinted that $\mu$, $e$, and $\varepsilon_0$ stand for some (reduced) mass, charge, and the permittivity of the vacuum, respectively, $\hbar$ is a constant of (the dimension of) action, and $E$ is some eigenvalue which must be determined from the solution of (14.111).

¹⁴ Erwin Schrödinger. Quantisierung als Eigenwertproblem. Annalen der Physik, 384(4):361–376, 1926. ISSN 1521-3889. DOI: 10.1002/andp.19263840404.

So, what could you do? First, observe that the problem is spherically symmetric, as the potential depends only on the radius $r = \sqrt{\mathbf{x}\cdot\mathbf{x}}$, and the Laplace operator $\Delta = \nabla\cdot\nabla$ also allows spherical symmetry. Thus we could write the Schrödinger equation (14.111) in terms of spherical coordinates $(r,\theta,\varphi)$ with $x = r\sin\theta\cos\varphi$, $y = r\sin\theta\sin\varphi$, $z = r\cos\theta$, whereby $\theta$ is the polar angle in the $x$–$z$-plane measured from the $z$-axis, with $0 \le \theta \le \pi$, and $\varphi$ is the azimuthal angle in the $x$–$y$-plane, measured from the $x$-axis, with $0 \le \varphi < 2\pi$ (cf. page 269). In terms of spherical coordinates the Laplace operator essentially "decays into" (i.e., consists additively of) a radial part and an angular part,
\[
\Delta = \left(\frac{\partial}{\partial x}\right)^2 + \left(\frac{\partial}{\partial y}\right)^2 + \left(\frac{\partial}{\partial z}\right)^2
= \frac{1}{r^2}\left[\frac{\partial}{\partial r}\left(r^2\frac{\partial}{\partial r}\right)
+ \frac{1}{\sin\theta}\frac{\partial}{\partial\theta}\sin\theta\frac{\partial}{\partial\theta}
+ \frac{1}{\sin^2\theta}\frac{\partial^2}{\partial\varphi^2}\right]. \tag{14.112}
\]
14.9.1 Separation of variables Ansatz

This can be exploited for a separation of variables Ansatz, which, according to Schrödinger, should be well known (in German sattsam bekannt) by now (cf. chapter 13). We thus write the solution $\psi$ as a product of functions of separate variables
\[
\psi(r,\theta,\varphi) = R(r)Y_l^m(\theta,\varphi) = R(r)\Theta(\theta)\Phi(\varphi). \tag{14.113}
\]
The spherical harmonics $Y_l^m(\theta,\varphi)$ have been written down already as a reminder of what has been mentioned earlier on page 239. We will come back to them later.

14.9.2 Separation of the radial part from the angular one

For the time being, let us first concentrate on the radial part $R(r)$. Let us first totally separate the variables of the Schrödinger equation (14.111) in spherical coordinates,
\[
\left\{\frac{1}{r^2}\left[\frac{\partial}{\partial r}\left(r^2\frac{\partial}{\partial r}\right)
+ \frac{1}{\sin\theta}\frac{\partial}{\partial\theta}\sin\theta\frac{\partial}{\partial\theta}
+ \frac{1}{\sin^2\theta}\frac{\partial^2}{\partial\varphi^2}\right]
+ \frac{2\mu}{\hbar^2}\left(\frac{e^2}{4\pi\varepsilon_0 r}+E\right)\right\}\psi(r,\theta,\varphi) = 0, \tag{14.114}
\]
and multiply it by $r^2$:
\[
\left\{\frac{\partial}{\partial r}\left(r^2\frac{\partial}{\partial r}\right)
+ \frac{2\mu r^2}{\hbar^2}\left(\frac{e^2}{4\pi\varepsilon_0 r}+E\right)
+ \frac{1}{\sin\theta}\frac{\partial}{\partial\theta}\sin\theta\frac{\partial}{\partial\theta}
+ \frac{1}{\sin^2\theta}\frac{\partial^2}{\partial\varphi^2}\right\}\psi(r,\theta,\varphi) = 0, \tag{14.115}
\]
so that, after division by $\psi(r,\theta,\varphi) = R(r)Y_l^m(\theta,\varphi)$ and writing separate variables on separate sides of the equation,
\[
\frac{1}{R(r)}\left\{\frac{\partial}{\partial r}\left(r^2\frac{\partial}{\partial r}\right)
+ \frac{2\mu r^2}{\hbar^2}\left(\frac{e^2}{4\pi\varepsilon_0 r}+E\right)\right\}R(r)
= -\frac{1}{Y_l^m(\theta,\varphi)}\left\{\frac{1}{\sin\theta}\frac{\partial}{\partial\theta}\sin\theta\frac{\partial}{\partial\theta}
+ \frac{1}{\sin^2\theta}\frac{\partial^2}{\partial\varphi^2}\right\}Y_l^m(\theta,\varphi). \tag{14.116}
\]
Because the left hand side of this equation is independent of the angular variables $\theta$ and $\varphi$, and its right hand side is independent of the radius $r$, both sides have to be constant; say $\lambda$. Thus we obtain two differential equations, for the radial and the angular part, respectively:
\[
\left\{\frac{\partial}{\partial r}r^2\frac{\partial}{\partial r}
+ \frac{2\mu r^2}{\hbar^2}\left(\frac{e^2}{4\pi\varepsilon_0 r}+E\right)\right\}R(r) = \lambda R(r), \tag{14.117}
\]
and
\[
\left\{\frac{1}{\sin\theta}\frac{\partial}{\partial\theta}\sin\theta\frac{\partial}{\partial\theta}
+ \frac{1}{\sin^2\theta}\frac{\partial^2}{\partial\varphi^2}\right\}Y_l^m(\theta,\varphi)
= -\lambda Y_l^m(\theta,\varphi). \tag{14.118}
\]
14.9.3 Separation of the polar angle θ from the azimuthal angle φ

As already hinted in Eq. (14.113), the angular portion can still be separated by the Ansatz $Y_l^m(\theta,\varphi) = \Theta(\theta)\Phi(\varphi)$, because, when multiplied by $\sin^2\theta/[\Theta(\theta)\Phi(\varphi)]$, Eq. (14.118) can be rewritten as
\[
\left\{\frac{\sin\theta}{\Theta(\theta)}\frac{\partial}{\partial\theta}\sin\theta\frac{\partial\Theta(\theta)}{\partial\theta}
+ \lambda\sin^2\theta\right\}
+ \frac{1}{\Phi(\varphi)}\frac{\partial^2\Phi(\varphi)}{\partial\varphi^2} = 0, \tag{14.119}
\]
and hence
\[
\frac{\sin\theta}{\Theta(\theta)}\frac{\partial}{\partial\theta}\sin\theta\frac{\partial\Theta(\theta)}{\partial\theta}
+ \lambda\sin^2\theta
= -\frac{1}{\Phi(\varphi)}\frac{\partial^2\Phi(\varphi)}{\partial\varphi^2} = m^2, \tag{14.120}
\]
where $m$ is some constant.
14.9.4 Solution of the equation for the azimuthal angle factor Φ(φ)

The resulting differential equation for $\Phi(\varphi)$,
\[
\frac{d^2\Phi(\varphi)}{d\varphi^2} = -m^2\Phi(\varphi), \tag{14.121}
\]
has the general solution
\[
\Phi(\varphi) = Ae^{im\varphi} + Be^{-im\varphi}. \tag{14.122}
\]
As $\Phi$ must obey the periodic boundary condition $\Phi(\varphi) = \Phi(\varphi+2\pi)$, $m$ must be an integer. The two constants $A, B$ must be equal if we require the system of functions $\{e^{im\varphi} \mid m\in\mathbb{Z}\}$ to be orthonormalized. Indeed, if we define
\[
\Phi_m(\varphi) = Ae^{im\varphi} \tag{14.123}
\]
and require that it is normalized, it follows from
\[
\int_0^{2\pi}\overline{\Phi_m(\varphi)}\,\Phi_m(\varphi)\,d\varphi
= \int_0^{2\pi}\overline{A}e^{-im\varphi}Ae^{im\varphi}\,d\varphi
= \int_0^{2\pi}|A|^2\,d\varphi = 2\pi|A|^2 = 1 \tag{14.124}
\]
that it is consistent to set
\[
A = \frac{1}{\sqrt{2\pi}}; \tag{14.125}
\]
and hence,
\[
\Phi_m(\varphi) = \frac{e^{im\varphi}}{\sqrt{2\pi}}. \tag{14.126}
\]
Note that, for different $m \ne n$,
\[
\int_0^{2\pi}\overline{\Phi_n(\varphi)}\,\Phi_m(\varphi)\,d\varphi
= \int_0^{2\pi}\frac{e^{-in\varphi}}{\sqrt{2\pi}}\frac{e^{im\varphi}}{\sqrt{2\pi}}\,d\varphi
= \int_0^{2\pi}\frac{e^{i(m-n)\varphi}}{2\pi}\,d\varphi
= \left.-\frac{ie^{i(m-n)\varphi}}{2(m-n)\pi}\right|_{\varphi=0}^{\varphi=2\pi} = 0, \tag{14.127}
\]
because $m - n \in \mathbb{Z}$.
14.9.5 Solution of the equation for the polar angle factor Θ(θ)

The left hand side of Eq. (14.120) contains only the polar coordinate. Upon division by $\sin^2\theta$ we obtain
\[
\frac{1}{\Theta(\theta)\sin\theta}\frac{d}{d\theta}\sin\theta\frac{d\Theta(\theta)}{d\theta} + \lambda = \frac{m^2}{\sin^2\theta},
\quad\text{or}\quad
\frac{1}{\Theta(\theta)\sin\theta}\frac{d}{d\theta}\sin\theta\frac{d\Theta(\theta)}{d\theta} - \frac{m^2}{\sin^2\theta} = -\lambda. \tag{14.128}
\]
Now, first, let us consider the case $m = 0$. With the variable substitution $x = \cos\theta$, and thus $\frac{dx}{d\theta} = -\sin\theta$ and $dx = -\sin\theta\,d\theta$, we obtain from (14.128)
\[
\frac{d}{dx}\sin^2\theta\,\frac{d\Theta(x)}{dx} = -\lambda\Theta(x),
\qquad
\frac{d}{dx}(1-x^2)\frac{d\Theta(x)}{dx} + \lambda\Theta(x) = 0,
\qquad
\left(x^2-1\right)\frac{d^2\Theta(x)}{dx^2} + 2x\frac{d\Theta(x)}{dx} = \lambda\Theta(x), \tag{14.129}
\]
which is of the same form as the Legendre equation (14.92) mentioned on page 234.

Consider the series Ansatz
\[
\Theta(x) = a_0 + a_1x + a_2x^2 + \cdots + a_kx^k + \cdots \tag{14.130}
\]
for solving (14.129). (This is actually a "shortcut" solution of the Fuchsian equation mentioned earlier.) Insertion into (14.129) and comparing the coefficients of $x$ for equal degrees yields the recursion relation
\[
\begin{aligned}
&(x^2-1)\frac{d^2}{dx^2}[a_0+a_1x+a_2x^2+\cdots+a_kx^k+\cdots]\\
&\quad+2x\frac{d}{dx}[a_0+a_1x+a_2x^2+\cdots+a_kx^k+\cdots]
= \lambda[a_0+a_1x+a_2x^2+\cdots+a_kx^k+\cdots],\\[4pt]
&(x^2-1)\left[2a_2+\cdots+k(k-1)a_kx^{k-2}+\cdots\right]
+\left[2a_1x+4a_2x^2+\cdots+2ka_kx^k+\cdots\right]\\
&\quad= \lambda[a_0+a_1x+a_2x^2+\cdots+a_kx^k+\cdots],\\[4pt]
&\left[2a_2x^2+\cdots+k(k-1)a_kx^k+\cdots\right]
-\left[2a_2+\cdots+k(k-1)a_kx^{k-2}+(k+1)ka_{k+1}x^{k-1}+(k+2)(k+1)a_{k+2}x^k+\cdots\right]\\
&\quad+\left[2a_1x+4a_2x^2+\cdots+2ka_kx^k+\cdots\right]
= \lambda[a_0+a_1x+a_2x^2+\cdots+a_kx^k+\cdots],
\end{aligned} \tag{14.131}
\]
and thus, by collecting all terms proportional to $x^k$, so that, for $x^k \ne 0$ (and thus excluding the trivial solution),
\[
k(k-1)a_kx^k - (k+2)(k+1)a_{k+2}x^k + 2ka_kx^k - \lambda a_kx^k = 0,
\]
\[
k(k+1)a_k - (k+2)(k+1)a_{k+2} - \lambda a_k = 0,
\]
\[
a_{k+2} = a_k\,\frac{k(k+1)-\lambda}{(k+2)(k+1)}. \tag{14.132}
\]
In order to converge also for $x = \pm 1$, and hence for $\theta = 0$ and $\theta = \pi$, the sum in (14.130) has to have only a finite number of terms. For if the sum were infinite, the coefficients $a_k$, for large $k$, would be dominated by $a_{k-2}\,O(k^2/k^2) = a_{k-2}\,O(1)$, and thus would converge to a constant $a_\infty \ne 0$ as $k\to\infty$; hence $\Theta$ would diverge, as $\Theta(1) \approx ka_\infty \to \infty$ for $k\to\infty$. That means that, in Eq. (14.132), for some $k = l \in \mathbb{N}$ the coefficient $a_{l+2} = 0$ has to vanish; thus
\[
\lambda = l(l+1). \tag{14.133}
\]
This results in the Legendre polynomials $\Theta(x) \equiv P_l(x)$.
Let us now briefly mention the case $m \ne 0$. With the same variable substitution $x = \cos\theta$, and thus $\frac{dx}{d\theta} = -\sin\theta$ and $dx = -\sin\theta\,d\theta$ as before, the equation for the polar angle dependent factor (14.128) becomes
\[
\left\{\frac{d}{dx}(1-x^2)\frac{d}{dx} + l(l+1) - \frac{m^2}{1-x^2}\right\}\Theta(x) = 0. \tag{14.134}
\]
This is exactly the form of the general Legendre equation (14.104), whose solution is a multiple of the associated Legendre polynomial $\Theta_l^m(x) \equiv P_l^m(x)$, with $|m| \le l$.
14.9.6 Solution of the equation for the radial factor R(r)

The solution of the equation (14.117),
\[
\left\{\frac{d}{dr}r^2\frac{d}{dr} + \frac{2\mu r^2}{\hbar^2}\left(\frac{e^2}{4\pi\varepsilon_0 r}+E\right)\right\}R(r) = l(l+1)R(r),
\quad\text{or}\quad
-\frac{1}{R(r)}\frac{d}{dr}r^2\frac{d}{dr}R(r) + l(l+1) - \frac{2\mu e^2}{4\pi\varepsilon_0\hbar^2}\,r = \frac{2\mu}{\hbar^2}r^2E, \tag{14.135}
\]
for the radial factor $R(r)$ turned out to be the most difficult part for Schrödinger.¹⁵

¹⁵ Walter Moore. Schrödinger: Life and Thought. Cambridge University Press, Cambridge, UK, 1989.

Note that, since the additive term $l(l+1)$ in (14.135) is non-dimensional, so must be the other terms. We can make this more explicit by a substitution of variables.

First, consider $y = \frac{r}{a_0}$, obtained by dividing $r$ by the Bohr radius
\[
a_0 = \frac{4\pi\varepsilon_0\hbar^2}{m_ee^2} \approx 5\cdot 10^{-11}\,\text{m}, \tag{14.136}
\]
thereby assuming that the reduced mass is equal to the electron mass, $\mu \approx m_e$. More explicitly, $r = y\,\frac{4\pi\varepsilon_0\hbar^2}{m_ee^2}$, or $y = r\,\frac{m_ee^2}{4\pi\varepsilon_0\hbar^2}$. Furthermore, let us define $\varepsilon = E\,\frac{2\mu a_0^2}{\hbar^2}$.

These substitutions yield
\[
-\frac{1}{R(y)}\frac{d}{dy}y^2\frac{d}{dy}R(y) + l(l+1) - 2y = y^2\varepsilon,
\quad\text{or}\quad
-y^2\frac{d^2}{dy^2}R(y) - 2y\frac{d}{dy}R(y) + \left[l(l+1) - 2y - \varepsilon y^2\right]R(y) = 0. \tag{14.137}
\]
Now we introduce a new function $\hat{R}$ via
\[
R(\xi) = \xi^l e^{-\frac{1}{2}\xi}\hat{R}(\xi), \tag{14.138}
\]
with $\xi = \frac{2y}{n}$, and by replacing the energy variable with $\varepsilon = -\frac{1}{n^2}$. (It will later be argued that $\varepsilon$ must be discrete, with $n \in \mathbb{N}\setminus\{0\}$.) This yields
\[
\xi\frac{d^2}{d\xi^2}\hat{R}(\xi) + [2(l+1)-\xi]\frac{d}{d\xi}\hat{R}(\xi) + (n-l-1)\hat{R}(\xi) = 0. \tag{14.139}
\]
The discretization of $n$ can again be motivated by requiring physical properties from the solution; in particular, convergence. Consider again a series solution Ansatz
\[
\hat{R}(\xi) = c_0 + c_1\xi + c_2\xi^2 + \cdots + c_k\xi^k + \cdots, \tag{14.140}
\]
which, when inserted into (14.139), yields
\[
\begin{aligned}
&\xi\frac{d^2}{d\xi^2}[c_0+c_1\xi+c_2\xi^2+\cdots+c_k\xi^k+\cdots]
+[2(l+1)-\xi]\frac{d}{d\xi}[c_0+c_1\xi+c_2\xi^2+\cdots+c_k\xi^k+\cdots]\\
&\quad+(n-l-1)[c_0+c_1\xi+c_2\xi^2+\cdots+c_k\xi^k+\cdots] = 0,\\[4pt]
&\xi\left[2c_2+\cdots+k(k-1)c_k\xi^{k-2}+\cdots\right]
+[2(l+1)-\xi]\left[c_1+2c_2\xi+\cdots+kc_k\xi^{k-1}+\cdots\right]\\
&\quad+(n-l-1)[c_0+c_1\xi+c_2\xi^2+\cdots+c_k\xi^k+\cdots] = 0,\\[4pt]
&\left[2c_2\xi+\cdots+k(k-1)c_k\xi^{k-1}+k(k+1)c_{k+1}\xi^k+\cdots\right]
+2(l+1)\left[c_1+2c_2\xi+\cdots+kc_k\xi^{k-1}+(k+1)c_{k+1}\xi^k+\cdots\right]\\
&\quad-\left[c_1\xi+2c_2\xi^2+\cdots+kc_k\xi^k+\cdots\right]
+(n-l-1)[c_0+c_1\xi+c_2\xi^2+\cdots+c_k\xi^k+\cdots] = 0,
\end{aligned} \tag{14.141}
\]
so that, by comparing the coefficients of $\xi^k$, we obtain
\[
k(k+1)c_{k+1}\xi^k + 2(l+1)(k+1)c_{k+1}\xi^k = kc_k\xi^k - (n-l-1)c_k\xi^k,
\]
\[
c_{k+1}[k(k+1)+2(l+1)(k+1)] = c_k[k-(n-l-1)],
\]
\[
c_{k+1}(k+1)(k+2l+2) = c_k(k-n+l+1),
\]
\[
c_{k+1} = c_k\,\frac{k-n+l+1}{(k+1)(k+2l+2)}. \tag{14.142}
\]
Because of convergence of $R$, and thus of $\hat{R}$ – note that, for large $\xi$ and $k$, the $k$'th term in Eq. (14.140) determining $\hat{R}(\xi)$ would behave as $\xi^k/k!$ and thus $\hat{R}(\xi)$ would roughly behave as $e^\xi$ – the series solution (14.140) should terminate at some $k = n-l-1$, or $n = k+l+1$. Since $k$, $l$, and $1$ are all integers, $n$ must be an integer as well. And since $k \ge 0$, $n$ must be at least $l+1$, or
\[
l \le n-1. \tag{14.143}
\]
Thus, we end up with an associated Laguerre equation of the form
\[
\left\{\xi\frac{d^2}{d\xi^2} + [2(l+1)-\xi]\frac{d}{d\xi} + (n-l-1)\right\}\hat{R}(\xi) = 0,
\quad\text{with } n \ge l+1 \text{ and } n,l\in\mathbb{Z}. \tag{14.144}
\]
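The termination of the series can be watched directly from the recursion (14.142); the following sketch (illustrative, with helper names of my own) generates the coefficients for $n = 3$, $l = 0$.

```python
from fractions import Fraction as F

def radial_coeffs(n, l, kmax=10):
    """c_k from c_{k+1} = c_k (k - n + l + 1)/((k+1)(k + 2l + 2)), c_0 = 1."""
    c = [F(1)]
    for k in range(kmax):
        c.append(c[k] * F(k - n + l + 1, (k + 1) * (k + 2 * l + 2)))
    return c

c = radial_coeffs(3, 0)
assert c[:3] == [F(1), F(-1), F(1, 6)]    # the polynomial 1 - xi + xi^2/6
assert all(x == 0 for x in c[3:])         # c_k = 0 for k > n - l - 1
```

The nonzero part, $1 - \xi + \xi^2/6 = (6 - 6\xi + \xi^2)/6$, is indeed proportional to the associated Laguerre polynomial $L_3^1(\xi) = -3(\xi^2 - 6\xi + 6)$ of the next subsection.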
Its solutions are the associated Laguerre polynomials $L_{n+l}^{2l+1}$, which are the $(2l+1)$-th derivatives of the Laguerre polynomials $L_{n+l}$; that is,
\[
L_n(x) = e^x\frac{d^n}{dx^n}\left(x^ne^{-x}\right), \qquad
L_n^m(x) = \frac{d^m}{dx^m}L_n(x). \tag{14.145}
\]
This yields a normalized wave function
\[
R_n(r) = \mathcal{N}\left(\frac{2r}{na_0}\right)^l e^{-\frac{r}{na_0}}\,L_{n+l}^{2l+1}\!\left(\frac{2r}{na_0}\right),
\quad\text{with}\quad
\mathcal{N} = -\frac{2}{n^2}\sqrt{\frac{(n-l-1)!}{[(n+l)!\,a_0]^3}}, \tag{14.146}
\]
where $\mathcal{N}$ stands for the normalization factor.
14.9.7 Composition of the general solution of the Schrödinger equation

Now we shall coagulate and combine the factorized solutions (14.113) into a complete solution of the Schrödinger equation (always remember the alchemic principle of solve et coagula!):
\[
\begin{aligned}
\psi_{n,l,m}(r,\theta,\varphi)
&= R_n(r)Y_l^m(\theta,\varphi) = R_n(r)\Theta_l^m(\theta)\Phi_m(\varphi)\\
&= -\frac{2}{n^2}\sqrt{\frac{(n-l-1)!}{[(n+l)!\,a_0]^3}}
\left(\frac{2r}{na_0}\right)^l e^{-\frac{r}{na_0}}
L_{n+l}^{2l+1}\!\left(\frac{2r}{na_0}\right)
P_l^m(\cos\theta)\,\frac{e^{im\varphi}}{\sqrt{2\pi}}.
\end{aligned} \tag{14.147}
\]
15

Divergent series

In this final chapter we will consider divergent series, which, as has already been mentioned earlier, seem to have been "invented by the devil".¹ Unfortunately such series occur very often in physical situations, for instance in celestial mechanics or in quantum field theory², and one may wonder with Abel why, "for the most part, it is true that the results are correct, which is very strange".³ On the other hand, there appears to be another view on diverging series, a view that has been expressed by Berry as follows⁴: ". . . an asymptotic series . . . is a compact encoding of a function, and its divergence should be regarded not as a deficiency but as a source of information about the function."

¹ Godfrey Harold Hardy. Divergent Series. Oxford University Press, 1949.
² John P. Boyd. The devil's invention: Asymptotic, superasymptotic and hyperasymptotic series. Acta Applicandae Mathematica, 56:1–98, 1999. ISSN 0167-8019. DOI: 10.1023/A:1006145903624; Freeman J. Dyson. Divergence of perturbation theory in quantum electrodynamics. Phys. Rev., 85(4):631–632, Feb 1952. DOI: 10.1103/PhysRev.85.631; Sergio A. Pernice and Gerardo Oleaga. Divergence of perturbation theory: Steps towards a convergent series. Physical Review D, 57:1144–1158, Jan 1998. DOI: 10.1103/PhysRevD.57.1144; and Ulrich D. Jentschura. Resummation of nonalternating divergent perturbative expansions. Physical Review D, 62:076001, Aug 2000. DOI: 10.1103/PhysRevD.62.076001.
³ Christiane Rousseau. Divergent series: Past, present, future . . .. Preprint, 2004. URL http://www.dms.umontreal.ca/~rousseac/divergent.pdf.
⁴ Michael Berry. Asymptotics, superasymptotics, hyperasymptotics... In Harvey Segur, Saleh Tanveer, and Herbert Levine, editors, Asymptotics beyond All Orders, volume 284 of NATO ASI Series, pages 1–14. Springer, 1992. ISBN 978-1-4757-0437-2. DOI: 10.1007/978-1-4757-0435-8.
15.1 Convergence and divergence

Let us first define convergence in the context of series. A series
\[
\sum_{j=0}^{\infty}a_j = a_0 + a_1 + a_2 + \cdots \tag{15.1}
\]
is said to converge to the sum $s$ if the partial sum
\[
s_n = \sum_{j=0}^{n}a_j = a_0 + a_1 + a_2 + \cdots + a_n \tag{15.2}
\]
tends to a finite limit $s$ when $n \to \infty$; otherwise it is said to be divergent.

One of the most prominent series is the Leibniz series⁵
\[
s = \sum_{j=0}^{\infty}(-1)^j = 1-1+1-1+1-\cdots, \tag{15.3}
\]
whose summands may be – inconsistently – "rearranged," yielding
\[
\text{either}\quad 1-1+1-1+1-1+\cdots = (1-1)+(1-1)+(1-1)+\cdots = 0
\]
\[
\text{or}\quad 1-1+1-1+1-1+\cdots = 1+(-1+1)+(-1+1)+\cdots = 1.
\]

⁵ Gottfried Wilhelm Leibniz. Letters LXX, LXXI. In Carl Immanuel Gerhardt, editor, Briefwechsel zwischen Leibniz und Christian Wolf. Handschriften der Königlichen Bibliothek zu Hannover. H. W. Schmidt, Halle, 1860. URL http://books.google.de/books?id=TUkJAAAAQAAJ; Charles N. Moore. Summable Series and Convergence Factors. American Mathematical Society, New York, NY, 1938; Godfrey Harold Hardy. Divergent Series. Oxford University Press, 1949; and Graham Everest, Alf van der Poorten, Igor Shparlinski, and Thomas Ward. Recurrence Sequences. Volume 104 of the AMS Surveys and Monographs series. American Mathematical Society, Providence, RI, 2003.

Note that, by Riemann's rearrangement theorem, even convergent series which do not converge absolutely (i.e., $\sum_{j=0}^{n}a_j$ converges but $\sum_{j=0}^{n}\left|a_j\right|$
diverges) can converge to any arbitrary (even infinite) value by permuting (rearranging) the positive and negative terms (the series of which must both be divergent).

The Leibniz series is the particular case $q = -1$ of a geometric series
\[
s = \sum_{j=0}^{\infty}q^j = 1+q+q^2+q^3+\cdots = 1+qs, \tag{15.4}
\]
which, since $s = 1+qs$, converges to
\[
s = \sum_{j=0}^{\infty}q^j = \frac{1}{1-q} \tag{15.5}
\]
for $|q| < 1$. One way to sum the Leibniz series is by "continuing" Eq. (15.5) for arbitrary $q \ne 1$, thereby defining the Abel sum
\[
\sum_{j=0}^{\infty}(-1)^j \stackrel{A}{=} \frac{1}{1-(-1)} = \frac{1}{2}. \tag{15.6}
\]
Another divergent series, which can be obtained by formally expanding the square of the Abel sum of the Leibniz series $s^2 \stackrel{A}{=} (1+x)^{-2}$ around $0$ and inserting $x = 1$,⁶ is
\[
s^2 = \left(\sum_{j=0}^{\infty}(-1)^j\right)\left(\sum_{k=0}^{\infty}(-1)^k\right)
= \sum_{j=0}^{\infty}(-1)^{j+1}j = 0+1-2+3-4+5-\cdots. \tag{15.7}
\]
In the same sense as the Leibniz series, this yields the Abel sum $s^2 \stackrel{A}{=} 1/4$.

⁶ Morris Kline. Euler and infinite series. Mathematics Magazine, 56(5):307–314, 1983. ISSN 0025570X. DOI: 10.2307/2690371.

Note that the sequence of its partial sums $s^2_n = \sum_{j=0}^{n}(-1)^{j+1}j$ yields every integer once; that is, $s^2_0 = 0$, $s^2_1 = 0+1 = 1$, $s^2_2 = 0+1-2 = -1$, $s^2_3 = 0+1-2+3 = 2$, $s^2_4 = 0+1-2+3-4 = -2$, ..., $s^2_n = -\frac{n}{2}$ for even $n$, and $s^2_n = \frac{n+1}{2}$ for odd $n$. It thus establishes a strict one-to-one mapping $s^2: \mathbb{N} \mapsto \mathbb{Z}$ of the natural numbers onto the integers.
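The claimed enumeration of $\mathbb{Z}$ by the partial sums can be checked directly (an illustrative snippet, not from the text):

```python
def s2(n):
    """Partial sum of the divergent series 0 + 1 - 2 + 3 - 4 + ..."""
    return sum((-1) ** (j + 1) * j for j in range(n + 1))

assert [s2(n) for n in range(9)] == [0, 1, -1, 2, -2, 3, -3, 4, -4]
# closed forms: -n/2 for even n, (n+1)/2 for odd n
assert all((s2(n) == -n // 2) if n % 2 == 0 else (s2(n) == (n + 1) // 2)
           for n in range(50))
```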
15.2 Euler differential equation

In what follows we demonstrate that divergent series may make sense, in the way Abel wondered. That is, we shall show that the first partial sums of divergent series may yield "good" approximations of the exact result; and that, from a certain point onward, more terms contributing to the sum might worsen the approximation rather than make it better – a situation totally different from convergent series, where more terms always result in better approximations.

Let us, with Rousseau, for the sake of demonstrating the former situation, consider the Euler differential equation
\[
\left(x^2\frac{d}{dx}+1\right)y(x) = x,
\quad\text{or}\quad
\left(\frac{d}{dx}+\frac{1}{x^2}\right)y(x) = \frac{1}{x}. \tag{15.8}
\]
We shall solve this equation by two methods: we shall, on the one hand,
present a divergent series solution, and on the other hand, an exact
solution. Then we shall compare the series approximation to the exact
solution by considering the difference.

A series solution of the Euler differential equation can be given by

    y_s(x) = \sum_{j=0}^{\infty} (-1)^j \, j! \, x^{j+1}.    (15.9)
That (15.9) solves (15.8) can be seen by inserting the former into the
latter; that is,

    \begin{aligned}
    \left( x^2 \frac{d}{dx} + 1 \right) \sum_{j=0}^{\infty} (-1)^j j! x^{j+1} &= x, \\
    \sum_{j=0}^{\infty} (-1)^j (j+1)! x^{j+2} + \sum_{j=0}^{\infty} (-1)^j j! x^{j+1} &= x, \\
    \text{[change of variable in the first sum: } j \to j-1 \text{]} \qquad
    \sum_{j=1}^{\infty} (-1)^{j-1} j! x^{j+1} + \sum_{j=0}^{\infty} (-1)^j j! x^{j+1} &= x, \\
    \sum_{j=1}^{\infty} (-1)^{j-1} j! x^{j+1} + x + \sum_{j=1}^{\infty} (-1)^j j! x^{j+1} &= x, \\
    x + \sum_{j=1}^{\infty} (-1)^j \underbrace{\left[ (-1)^{-1} + 1 \right]}_{=0} j! x^{j+1} &= x, \\
    x &= x.
    \end{aligned}    (15.10)
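The telescoping in (15.10) can also be verified with exact integer arithmetic on a truncated series: only a single leftover term of degree N + 1 survives. The helper below is an illustrative sketch (its name and the dictionary representation of polynomials are invented for this check):

```python
from math import factorial

# Verify that y_N(x) = sum_{j=0}^{N-1} (-1)^j j! x^(j+1) satisfies
# x^2 y' + y = x up to the single leftover term (-1)^(N-1) N! x^(N+1).
def residual_coeffs(N):
    # polynomial stored as {degree: coefficient}
    y = {j + 1: (-1) ** j * factorial(j) for j in range(N)}
    lhs = {}
    for deg, c in y.items():
        # x^2 d/dx maps c x^deg to c*deg x^(deg+1)
        lhs[deg + 1] = lhs.get(deg + 1, 0) + deg * c
        lhs[deg] = lhs.get(deg, 0) + c          # + y
    lhs[1] = lhs.get(1, 0) - 1                  # - x
    return {d: c for d, c in lhs.items() if c != 0}

print(residual_coeffs(4))   # only degree 5 survives: {5: -24}
```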
On the other hand, an exact solution can be found by quadrature; that is,
by explicit integration (see, for instance, Chapter One of Garrett Birkhoff
and Gian-Carlo Rota. Ordinary Differential Equations. John Wiley & Sons,
New York, fourth edition, 1989). Consider the homogeneous first-order
differential equation

    \left( \frac{d}{dx} + p(x) \right) y(x) = 0,
    \quad \text{or} \quad
    \frac{dy(x)}{dx} = -p(x) y(x),
    \quad \text{or} \quad
    \frac{dy(x)}{y(x)} = -p(x) \, dx.    (15.11)
Integrating both sides yields

    \log |y(x)| = -\int p(x) \, dx + C,
    \quad \text{or} \quad
    |y(x)| = K e^{-\int p(x) \, dx},    (15.12)

where C is some constant, and K = e^C. Let P(x) = \int p(x) \, dx. Hence,
heuristically, y(x) e^{P(x)} is constant, as can also be seen by explicit
differentiation of y(x) e^{P(x)}; that is,

    \begin{aligned}
    \frac{d}{dx} y(x) e^{P(x)}
    &= e^{P(x)} \frac{dy(x)}{dx} + y(x) \frac{d}{dx} e^{P(x)} \\
    &= e^{P(x)} \frac{dy(x)}{dx} + y(x) p(x) e^{P(x)} \\
    &= e^{P(x)} \left( \frac{d}{dx} + p(x) \right) y(x) \\
    &= 0
    \end{aligned}    (15.13)

if and, since e^{P(x)} \ne 0, only if y(x) satisfies the homogeneous
equation (15.11). Hence,

    y(x) = c e^{-\int p(x) \, dx}
    \quad \text{is the solution of} \quad
    \left( \frac{d}{dx} + p(x) \right) y(x) = 0    (15.14)

for some constant c.
Similarly, we can again find a solution of the inhomogeneous first-order
differential equation

    \left( \frac{d}{dx} + p(x) \right) y(x) + q(x) = 0,
    \quad \text{or} \quad
    \left( \frac{d}{dx} + p(x) \right) y(x) = -q(x),    (15.15)

by differentiating the function y(x) e^{P(x)} = y(x) e^{\int p(x) \, dx};
that is,

    \begin{aligned}
    \frac{d}{dx} y(x) e^{\int p(x) \, dx}
    &= e^{\int p(x) \, dx} \frac{d}{dx} y(x) + p(x) e^{\int p(x) \, dx} y(x) \\
    &= e^{\int p(x) \, dx} \underbrace{\left( \frac{d}{dx} + p(x) \right) y(x)}_{=-q(x)} \\
    &= -e^{\int p(x) \, dx} q(x).
    \end{aligned}    (15.16)

Hence, for some constant y_0 and some a, b, we must have, by integration,

    \begin{aligned}
    \int_b^x \frac{d}{dt} \left[ y(t) e^{\int_a^t p(s) \, ds} \right] dt
    = y(x) e^{\int_a^x p(t) \, dt}
    &= y_0 - \int_b^x e^{\int_a^t p(s) \, ds} q(t) \, dt, \\
    \text{and hence} \quad
    y(x) &= y_0 e^{-\int_a^x p(t) \, dt}
    - e^{-\int_a^x p(t) \, dt} \int_b^x e^{\int_a^t p(s) \, ds} q(t) \, dt.
    \end{aligned}    (15.17)

If a = b, then y(b) = y_0.
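As a sketch of how the quadrature formula (15.17) is used in practice (the test equation y' + y = 1, the trapezoid rule, and all names are illustrative choices, not from the text):

```python
from math import exp

# Apply (15.17) to p(x) = 1, q(x) = -1 (so y' + y = 1), a = b = 0, y0 = 0,
# whose exact solution is y(x) = 1 - exp(-x).
def trapezoid(f, lo, hi, n=10_000):
    h = (hi - lo) / n
    return h * (f(lo) / 2 + sum(f(lo + k * h) for k in range(1, n)) + f(hi) / 2)

def y_quadrature(x):
    # y(x) = -e^{-P(x)} * Integral_0^x e^{P(t)} q(t) dt,  with P(t) = t
    return -exp(-x) * trapezoid(lambda t: exp(t) * (-1.0), 0.0, x)

assert abs(y_quadrature(1.0) - (1 - exp(-1))) < 1e-6
```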
Coming back to the Euler differential equation, and identifying
p(x) = 1/x^2 and q(x) = -1/x, we obtain, up to a constant, with b = 0 and
arbitrary constant a \ne 0,

    \begin{aligned}
    y(x) &= -e^{-\int_a^x \frac{dt}{t^2}} \int_0^x e^{\int_a^t \frac{ds}{s^2}} \left( -\frac{1}{t} \right) dt \\
    &= e^{\left. \left( -\frac{1}{t} \right) \right|_a^x} \int_0^x e^{-\left. \left( -\frac{1}{s} \right) \right|_a^t} \left( \frac{1}{t} \right) dt \\
    &= e^{\frac{1}{x} - \frac{1}{a}} \int_0^x e^{-\frac{1}{t} + \frac{1}{a}} \left( \frac{1}{t} \right) dt \\
    &= e^{\frac{1}{x}} \underbrace{e^{-\frac{1}{a}} e^{\frac{1}{a}}}_{=e^0=1} \int_0^x \frac{e^{-\frac{1}{t}}}{t} \, dt \\
    &= e^{\frac{1}{x}} \int_0^x \frac{e^{-\frac{1}{t}}}{t} \, dt \\
    &= \int_0^x \frac{e^{\frac{1}{x} - \frac{1}{t}}}{t} \, dt.
    \end{aligned}    (15.18)
With a change of the integration variable

    \begin{aligned}
    \frac{\xi}{x} &= \frac{1}{t} - \frac{1}{x},
    \quad \text{and thus} \quad
    \xi = \frac{x}{t} - 1, \quad t = \frac{x}{1+\xi}, \\
    \frac{dt}{d\xi} &= -\frac{x}{(1+\xi)^2},
    \quad \text{and thus} \quad
    dt = -\frac{x}{(1+\xi)^2} \, d\xi, \\
    \text{and thus} \quad
    \frac{dt}{t} &= \frac{-\frac{x}{(1+\xi)^2}}{\frac{x}{1+\xi}} \, d\xi = -\frac{d\xi}{1+\xi},
    \end{aligned}    (15.19)
the integral (15.18) can be rewritten as

    y(x) = \int_{\infty}^{0} \left( -\frac{e^{-\frac{\xi}{x}}}{1+\xi} \right) d\xi
         = \int_0^{\infty} \frac{e^{-\frac{\xi}{x}}}{1+\xi} \, d\xi.    (15.20)

It is proportional to the Stieltjes integral [Carl M. Bender and Steven A.
Orszag. Advanced Mathematical Methods for Scientists and Engineers.
McGraw-Hill, New York, NY, 1978; and John P. Boyd. The devil's invention:
Asymptotic, superasymptotic and hyperasymptotic series. Acta Applicandae
Mathematica, 56:1-98, 1999. DOI: 10.1023/A:1006145903624]

    S(x) = \int_0^{\infty} \frac{e^{-\xi}}{1 + x\xi} \, d\xi.    (15.21)
Note that, whereas the series solution y_s(x) diverges for all nonzero x,
the solution y(x) in (15.20) converges and is well defined for all x \ge 0.

Let us now estimate the absolute difference between y_{s_k}(x), which
represents the partial sum "y_s(x) truncated after the kth term," and y(x);
that is, let us consider

    |y(x) - y_{s_k}(x)| = \left| \int_0^{\infty} \frac{e^{-\frac{\xi}{x}}}{1+\xi} \, d\xi
    - \sum_{j=0}^{k} (-1)^j j! x^{j+1} \right|.    (15.22)

For any x \ge 0 this difference can be estimated [Christiane Rousseau.
Divergent series: Past, present, future. preprint, 2004.
http://www.dms.umontreal.ca/~rousseac/divergent.pdf] by a bound from above,

    |R_k(x)| \stackrel{\text{def}}{=} |y(x) - y_{s_k}(x)| \le k! \, x^{k+1},    (15.23)
that is, this difference between the exact solution y(x) and the diverging
partial series y_{s_k}(x) is smaller than the absolute value of the first
neglected term of the series.
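The behaviour promised at the start of Section 15.2, errors that first shrink and then grow, can be observed numerically. In this illustrative sketch (cutoff, step size, and x = 0.1 are arbitrary choices, and the names are invented) the truncation error is smallest near k of order 1/x:

```python
from math import exp, factorial

# Compare partial sums of the divergent series (15.9) with the convergent
# integral representation (15.20) at x = 0.1.
def y_exact(x, cutoff=4.0, n=40_000):
    # trapezoid rule for Integral_0^inf e^{-xi/x}/(1+xi) d xi; at x = 0.1
    # the integrand decays like e^{-10 xi}, so a cutoff at xi = 4 is ample
    h = cutoff / n
    f = lambda xi: exp(-xi / x) / (1 + xi)
    return h * (f(0.0) / 2 + sum(f(k * h) for k in range(1, n)) + f(cutoff) / 2)

def y_partial(x, k):
    return sum((-1) ** j * factorial(j) * x ** (j + 1) for j in range(k + 1))

x = 0.1
ye = y_exact(x)
errs = [abs(ye - y_partial(x, k)) for k in range(16)]
assert errs[5] < errs[1]     # at first, more terms improve the approximation
assert errs[15] > errs[9]    # beyond k of order 1/x, the divergence takes over
```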
For a proof, observe that a partial geometric series is the sum of all the
numbers in a geometric progression up to a certain power; that is,

    \sum_{k=0}^{n} r^k = 1 + r + r^2 + \cdots + r^k + \cdots + r^n.    (15.24)

By multiplying both sides with 1 - r, the sum (15.24) can be rewritten as

    \begin{aligned}
    (1-r) \sum_{k=0}^{n} r^k
    &= (1-r)(1 + r + r^2 + \cdots + r^k + \cdots + r^n) \\
    &= 1 + r + r^2 + \cdots + r^k + \cdots + r^n
       - r(1 + r + r^2 + \cdots + r^k + \cdots + r^n) \\
    &= 1 + r + r^2 + \cdots + r^k + \cdots + r^n
       - (r + r^2 + \cdots + r^k + \cdots + r^n + r^{n+1}) \\
    &= 1 - r^{n+1},
    \end{aligned}    (15.25)

and, since the middle terms all cancel out,

    \sum_{k=0}^{n} r^k = \frac{1 - r^{n+1}}{1-r},
    \quad \text{or} \quad
    \sum_{k=0}^{n-1} r^k = \frac{1 - r^n}{1-r} = \frac{1}{1-r} - \frac{r^n}{1-r}.    (15.26)

Thus, for r = -\zeta, it is true that

    \frac{1}{1+\zeta} = \sum_{k=0}^{n-1} (-1)^k \zeta^k + (-1)^n \frac{\zeta^n}{1+\zeta}.    (15.27)
Thus

    \begin{aligned}
    f(x) &= \int_0^{\infty} \frac{e^{-\frac{\zeta}{x}}}{1+\zeta} \, d\zeta \\
    &= \int_0^{\infty} e^{-\frac{\zeta}{x}} \left( \sum_{k=0}^{n-1} (-1)^k \zeta^k
       + (-1)^n \frac{\zeta^n}{1+\zeta} \right) d\zeta \\
    &= \sum_{k=0}^{n-1} \int_0^{\infty} (-1)^k \zeta^k e^{-\frac{\zeta}{x}} \, d\zeta
       + \int_0^{\infty} (-1)^n \frac{\zeta^n e^{-\frac{\zeta}{x}}}{1+\zeta} \, d\zeta.
    \end{aligned}    (15.28)

Since

    k! = \Gamma(k+1) = \int_0^{\infty} z^k e^{-z} \, dz,    (15.29)

one obtains

    \begin{aligned}
    \int_0^{\infty} \zeta^k e^{-\frac{\zeta}{x}} \, d\zeta
    \quad &\text{[substitution: } z = \frac{\zeta}{x}, \; d\zeta = x \, dz \text{]} \\
    &= \int_0^{\infty} x^{k+1} z^k e^{-z} \, dz = x^{k+1} k!,
    \end{aligned}    (15.30)
and hence

    \begin{aligned}
    f(x) &= \sum_{k=0}^{n-1} \int_0^{\infty} (-1)^k \zeta^k e^{-\frac{\zeta}{x}} \, d\zeta
       + \int_0^{\infty} (-1)^n \frac{\zeta^n e^{-\frac{\zeta}{x}}}{1+\zeta} \, d\zeta \\
    &= \sum_{k=0}^{n-1} (-1)^k x^{k+1} k!
       + \int_0^{\infty} (-1)^n \frac{\zeta^n e^{-\frac{\zeta}{x}}}{1+\zeta} \, d\zeta \\
    &= f_n(x) + R_n(x),
    \end{aligned}    (15.31)

where f_n(x) represents the partial sum of the power series, and R_n(x)
stands for the remainder, the difference between f(x) and f_n(x). The
absolute value of the remainder can be estimated by

    |R_n(x)| = \int_0^{\infty} \frac{\zeta^n e^{-\frac{\zeta}{x}}}{1+\zeta} \, d\zeta
    \le \int_0^{\infty} \zeta^n e^{-\frac{\zeta}{x}} \, d\zeta = n! \, x^{n+1}.    (15.32)
15.2.1 Borel's resummation method – "The Master forbids it"

In what follows we shall again follow Christiane Rousseau's treatment
[Christiane Rousseau. Divergent series: Past, present, future. preprint,
2004. http://www.dms.umontreal.ca/~rousseac/divergent.pdf] and use a
resummation method invented by Borel [Émile Borel. Mémoire sur les séries
divergentes. Annales scientifiques de l'École Normale Supérieure, 16:9-131,
1899. http://eudml.org/doc/81143] to obtain the exact convergent solution
(15.20) of the Euler differential equation (15.8) from the divergent series
solution (15.9). For more resummation techniques, see Chapter 16 of Hagen
Kleinert and Verena Schulte-Frohlinde. Critical Properties of φ4-Theories.
World Scientific, Singapore, 2001.

    "The idea that a function could be determined by a divergent asymptotic
    series was a foreign one to the nineteenth century mind. Borel, then an
    unknown young man, discovered that his summation method gave the
    'right' answer for many classical divergent series. He decided to make
    a pilgrimage to Stockholm to see Mittag-Leffler, who was the recognized
    lord of complex analysis. Mittag-Leffler listened politely to what
    Borel had to say and then, placing his hand upon the complete works by
    Weierstrass, his teacher, he said in Latin, 'The Master forbids it.'"
    (A tale of Mark Kac, quoted on page 38 of Michael Reed and Barry Simon.
    Methods of Modern Mathematical Physics IV: Analysis of Operators.
    Academic Press, New York, 1978.)

First we can rewrite a suitable infinite series by an integral
representation, thereby using the integral representation of the factorial
(15.29), as follows:
    \sum_{j=0}^{\infty} a_j
    = \sum_{j=0}^{\infty} a_j \frac{j!}{j!}
    = \sum_{j=0}^{\infty} \frac{a_j}{j!} \int_0^{\infty} t^j e^{-t} \, dt
    \stackrel{B}{=} \int_0^{\infty} \left( \sum_{j=0}^{\infty} \frac{a_j t^j}{j!} \right) e^{-t} \, dt.    (15.33)

A series \sum_{j=0}^{\infty} a_j is Borel summable if
\sum_{j=0}^{\infty} \frac{a_j t^j}{j!} has a non-zero radius of
convergence, if it can be extended along the positive real axis, and if the
integral (15.33) is convergent. This integral is called the Borel sum of
the series.
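A minimal numerical sketch of this definition (quadrature cutoff, step size, number of transform terms, and all names are arbitrary choices): the Borel transform of the Leibniz series is e^{-t}, so the Borel sum integral evaluates to 1/2:

```python
from math import exp

# Borel transform sum_j a_j t^j / j!, summed term by term (it converges
# rapidly for the coefficient sequences considered here).
def borel_transform(coeff, t, n_terms=120):
    total, term = 0.0, 1.0          # term holds t^j / j!
    for j in range(n_terms):
        total += coeff(j) * term
        term *= t / (j + 1)
    return total

# Borel sum: Integral_0^inf B(t) e^{-t} dt by the trapezoid rule.
def borel_sum(coeff, cutoff=15.0, n=15_000):
    h = cutoff / n
    f = lambda t: borel_transform(coeff, t) * exp(-t)
    return h * (f(0.0) / 2 + sum(f(k * h) for k in range(1, n)) + f(cutoff) / 2)

leibniz = borel_sum(lambda j: (-1.0) ** j)   # B(t) = e^{-t}, integral = 1/2
assert abs(leibniz - 0.5) < 1e-3
```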
In the case of the series solution of the Euler differential equation,
a_j = (-1)^j j! x^{j+1} [cf. Eq. (15.9)]. Thus,

    \sum_{j=0}^{\infty} \frac{a_j t^j}{j!}
    = \sum_{j=0}^{\infty} \frac{(-1)^j j! x^{j+1} t^j}{j!}
    = x \sum_{j=0}^{\infty} (-xt)^j = \frac{x}{1+xt},    (15.34)

and therefore, with the substitution \zeta = xt, \, dt = \frac{d\zeta}{x},

    \sum_{j=0}^{\infty} (-1)^j j! x^{j+1}
    \stackrel{B}{=} \int_0^{\infty} \left( \sum_{j=0}^{\infty} \frac{a_j t^j}{j!} \right) e^{-t} \, dt
    = \int_0^{\infty} \frac{x}{1+xt} e^{-t} \, dt
    = \int_0^{\infty} \frac{e^{-\frac{\zeta}{x}}}{1+\zeta} \, d\zeta,    (15.35)
which is the exact solution (15.20) of the Euler differential equation
(15.8).

We can also find the Borel sum (which in this case is equal to the Abel
sum) of the Leibniz series (15.3) by

    \begin{aligned}
    s = \sum_{j=0}^{\infty} (-1)^j
    &\stackrel{B}{=} \int_0^{\infty} \left( \sum_{j=0}^{\infty} \frac{(-1)^j t^j}{j!} \right) e^{-t} \, dt \\
    &= \int_0^{\infty} \left( \sum_{j=0}^{\infty} \frac{(-t)^j}{j!} \right) e^{-t} \, dt
    = \int_0^{\infty} e^{-2t} \, dt \\
    &\quad \text{[variable substitution } 2t = \zeta, \; dt = \tfrac{1}{2} d\zeta \text{]} \\
    &= \frac{1}{2} \int_0^{\infty} e^{-\zeta} \, d\zeta
    = \frac{1}{2} \left. \left( -e^{-\zeta} \right) \right|_{\zeta=0}^{\infty}
    = \frac{1}{2} \left( -e^{-\infty} + e^{-0} \right) = \frac{1}{2}.
    \end{aligned}    (15.36)
A similar calculation for s^2 defined in Eq. (15.7) yields

    \begin{aligned}
    s^2 = \sum_{j=0}^{\infty} (-1)^{j+1} j
    = (-1) \sum_{j=1}^{\infty} (-1)^j j
    &\stackrel{B}{=} -\int_0^{\infty} \left( \sum_{j=1}^{\infty} \frac{(-1)^j j t^j}{j!} \right) e^{-t} \, dt \\
    &= -\int_0^{\infty} \left( \sum_{j=1}^{\infty} \frac{(-t)^j}{(j-1)!} \right) e^{-t} \, dt \\
    &= -\int_0^{\infty} \left( \sum_{j=0}^{\infty} \frac{(-t)^{j+1}}{j!} \right) e^{-t} \, dt \\
    &= -\int_0^{\infty} (-t) \left( \sum_{j=0}^{\infty} \frac{(-t)^j}{j!} \right) e^{-t} \, dt \\
    &= -\int_0^{\infty} (-t) e^{-2t} \, dt \\
    &\quad \text{[variable substitution } 2t = \zeta, \; dt = \tfrac{1}{2} d\zeta \text{]} \\
    &= \frac{1}{4} \int_0^{\infty} \zeta e^{-\zeta} \, d\zeta
    = \frac{1}{4} \Gamma(2) = \frac{1}{4} \, 1! = \frac{1}{4},
    \end{aligned}    (15.37)

which is again equal to the Abel sum.
Appendix A: Hilbert space quantum mechanics and quantum logic
A.1 Quantum mechanics
The following is a very brief introduction to quantum mechanics.
Introductions to quantum mechanics can be found, for instance, in Feynman,
Leighton, and Sands, The Feynman Lectures on Physics. Quantum Mechanics,
volume III (Addison-Wesley, Reading, MA, 1965); Ballentine, Quantum
Mechanics (Prentice Hall, Englewood Cliffs, NJ, 1989); Messiah, Quantum
Mechanics, volume I (North-Holland, Amsterdam, 1962); Peres, Quantum
Theory: Concepts and Methods (Kluwer, Dordrecht, 1993); and Wheeler and
Zurek, Quantum Theory and Measurement (Princeton University Press,
Princeton, NJ, 1983).

All quantum mechanical entities are represented by objects of Hilbert
spaces [John von Neumann, Mathematische Grundlagen der Quantenmechanik
(Springer, Berlin, 1932); and Garrett Birkhoff and John von Neumann, The
logic of quantum mechanics, Annals of Mathematics, 37(4):823-843, 1936.
DOI: 10.2307/1968621]. The following identifications between physical and
theoretical objects are made (a caveat: this is an incomplete list).

In what follows, unless stated differently, only finite-dimensional Hilbert
spaces are considered. Then, the vectors corresponding to states can be
written as usual vectors in complex Hilbert space. Furthermore, bounded
self-adjoint operators are equivalent to bounded Hermitian operators. They
can be represented by matrices, and the self-adjoint conjugation is just
transposition and complex conjugation of the matrix elements. Let
B = {b_1, b_2, ..., b_n} be an orthonormal basis in an n-dimensional
Hilbert space H; that is, the orthonormal basis vectors in B satisfy
⟨b_i, b_j⟩ = δ_{ij}, where δ_{ij} is the Kronecker delta function.
(I) A quantum state is represented by a positive Hermitian operator ρ of
trace class one in the Hilbert space H; that is,

    (i) ρ† = ρ = \sum_{i=1}^{n} p_i |b_i⟩⟨b_i|, with p_i ≥ 0 for all
        i = 1, ..., n, b_i ∈ B, and \sum_{i=1}^{n} p_i = 1, so that

    (ii) ⟨ρx | x⟩ = ⟨x | ρx⟩ ≥ 0, and

    (iii) Tr(ρ) = \sum_{i=1}^{n} ⟨b_i | ρ | b_i⟩ = 1.

A pure state is represented by a (unit) vector x, also denoted by |x⟩, of
the Hilbert space H spanning a one-dimensional subspace (manifold) M_x of
the Hilbert space H. Equivalently, it is represented by the one-dimensional
subspace (manifold) M_x of the Hilbert space H spanned by the vector x.
Equivalently, it is represented by the projector E_x = |x⟩⟨x| onto the unit
vector x of the Hilbert space H.

Therefore, if two vectors x, y ∈ H represent pure states, their vector sum
z = x + y ∈ H represents a pure state as well. This state z is called the
coherent superposition of the states x and y. Coherent state superpositions
between classically mutually exclusive (i.e., orthogonal) states, say |0⟩
and |1⟩, will become most important in quantum information theory.

Any pure state x can be written as a linear combination of the set of
orthonormal basis vectors b_1, b_2, ..., b_n; that is,
x = \sum_{i=1}^{n} β_i b_i, where n is the dimension of H and
β_i = ⟨b_i | x⟩ ∈ C.

In the Dirac bra-ket notation, unity is given by
1 = \sum_{i=1}^{n} |b_i⟩⟨b_i|, or just 1 = \sum_{i=1}^{n} |i⟩⟨i|.
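A two-dimensional illustration of (I), using plain Python complex arithmetic (all helper names are invented for this sketch): the projector ρ = |x⟩⟨x| onto a unit vector has trace one, is Hermitian, and is idempotent, as a pure-state density operator must be:

```python
from math import sqrt

def outer(v, w):                     # |v><w| as a 2x2 matrix
    return [[vi * wj.conjugate() for wj in w] for vi in v]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def trace(A):
    return A[0][0] + A[1][1]

x = [1 / sqrt(2), 1j / sqrt(2)]      # a unit vector in C^2
rho = outer(x, x)

assert abs(trace(rho) - 1) < 1e-12                       # trace class one
assert all(abs(rho[i][j] - rho[j][i].conjugate()) < 1e-12
           for i in range(2) for j in range(2))          # Hermitian
assert all(abs(matmul(rho, rho)[i][j] - rho[i][j]) < 1e-12
           for i in range(2) for j in range(2))          # pure: rho^2 = rho
```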
(II) Observables are represented by self-adjoint or, synonymously,
Hermitian, operators or transformations A = A† on the Hilbert space H,
such that ⟨Ax | y⟩ = ⟨x | Ay⟩ for all x, y ∈ H. (Observables and their
corresponding operators are identified.)

The trace of an operator A is given by Tr A = \sum_{i=1}^{n} ⟨b_i | A | b_i⟩.

Furthermore, any Hermitian operator has a spectral representation as a
spectral sum A = \sum_{i=1}^{n} α_i E_i, where the E_i are orthogonal
projection operators onto the orthonormal eigenvectors a_i of A
(nondegenerate case).

Observables are said to be compatible if they can be defined
simultaneously with arbitrary accuracy; i.e., if they are "independent." A
criterion for compatibility is the commutator: two observables A, B are
compatible if their commutator vanishes, that is, if [A, B] = AB - BA = 0.

It has recently been demonstrated that (by an analog embodiment using
particle beams) every Hermitian operator in a finite-dimensional Hilbert
space can be experimentally realized [M. Reck, Anton Zeilinger, H. J.
Bernstein, and P. Bertani. Experimental realization of any discrete
unitary operator. Physical Review Letters, 73:58-61, 1994. DOI:
10.1103/PhysRevLett.73.58].
(III) The result of any single measurement of the observable A on a state
x ∈ H can only be one of the real eigenvalues of the corresponding
Hermitian operator A. If x is in a coherent superposition of eigenstates
of A, the particular outcome of any such single measurement is believed to
be indeterministic [Max Born. Zur Quantenmechanik der Stoßvorgänge.
Zeitschrift für Physik, 37:863-867, 1926. DOI: 10.1007/BF01397477; Max
Born. Quantenmechanik der Stoßvorgänge. Zeitschrift für Physik,
38:803-827, 1926. DOI: 10.1007/BF01397184; and Anton Zeilinger. The
message of the quantum. Nature, 438:743, 2005. DOI: 10.1038/438743a];
that is, it cannot be predicted with certainty. As a result of the
measurement, the system is in the state which corresponds to an
eigenvector a_n of A with the associated real-valued eigenvalue α_n; that
is, A a_n = α_n a_n (no Einstein sum convention here).

This "transition" x → a_n has given rise to speculations concerning the
"collapse of the wave function (state)." But, subject to technology and in
principle, it may be possible to reconstruct coherence; that is, to
"reverse the collapse of the wave function (state)" if the process of
measurement is reversible. After this reconstruction, no information about
the measurement must be left, not even in principle.
How did Schrödinger, the creator of wave mechanics, perceive the
ψ-function? In his 1935 paper "Die gegenwärtige Situation in der
Quantenmechanik" ("The present situation in quantum mechanics") [Erwin
Schrödinger. Die gegenwärtige Situation in der Quantenmechanik.
Naturwissenschaften, 23:807-812, 823-828, 844-849, 1935. DOI:
10.1007/BF01491891, 10.1007/BF01491914, 10.1007/BF01491987], on page 53,
Schrödinger states (on the ψ-function as expectation-catalog):

    ". . . In it [[the ψ-function]] is embodied the momentarily-attained
    sum of theoretically based future expectation, somewhat as laid down
    in a catalog. . . . For each measurement one is required to ascribe to
    the ψ-function (= the prediction catalog) a characteristic, quite
    sudden change, which depends on the measurement result obtained, and
    so cannot be foreseen; from which alone it is already quite clear that
    this second kind of change of the ψ-function has nothing whatever in
    common with its orderly development between two measurements. The
    abrupt change [[of the ψ-function (= the prediction catalog)]] by
    measurement . . . is the most interesting point of the entire theory.
    It is precisely the point that demands the break with naive realism.
    For this reason one cannot put the ψ-function directly in place of the
    model or of the physical thing. And indeed not because one might never
    dare impute abrupt unforeseen changes to a physical thing or to a
    model, but because in the realism point of view observation is a
    natural process like any other and cannot per se bring about an
    interruption of the orderly flow of natural events."
The late Schrödinger was much more polemic about these issues; compare for
instance his remarks in his Dublin Seminars (1949-1955), published in
Erwin Schrödinger. The Interpretation of Quantum Mechanics. Dublin
Seminars (1949-1955) and Other Unpublished Essays (Ox Bow Press,
Woodbridge, Connecticut, 1995), pages 19-20:

    "The idea that [the alternate measurement outcomes] be not
    alternatives but all really happening simultaneously seems lunatic to
    [the quantum theorist], just impossible. He thinks that if the laws of
    nature took this form for, let me say, a quarter of an hour, we should
    find our surroundings rapidly turning into a quagmire, a sort of a
    featureless jelly or plasma, all contours becoming blurred, we
    ourselves probably becoming jelly fish. It is strange that he should
    believe this. For I understand he grants that unobserved nature does
    behave this way – namely according to the wave equation. . . .
    According to the quantum theorist, nature is prevented from rapid
    jellification only by our perceiving or observing it."
(IV) The probability P_x(y) to find a system represented by the state ρ_x
in some pure state y is given by the Born rule, which is derivable from
Gleason's theorem: P_x(y) = Tr(ρ_x E_y). Recall that the density operator
ρ_x is a positive Hermitian operator of trace class one.

For pure states with ρ_x^2 = ρ_x, ρ_x is a one-dimensional projector
ρ_x = E_x = |x⟩⟨x| onto the unit vector x; thus expansion of the trace
with E_y = |y⟩⟨y| yields

    P_x(y) = \sum_{i=1}^{n} ⟨i | x⟩⟨x | y⟩⟨y | i⟩
           = \sum_{i=1}^{n} ⟨y | i⟩⟨i | x⟩⟨x | y⟩
           = ⟨y | 1 | x⟩⟨x | y⟩ = |⟨y | x⟩|^2.
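The chain of equalities above can be replayed numerically; in this two-dimensional sketch (helper names invented for the illustration) Tr(ρ_x E_y) and |⟨y|x⟩|² agree:

```python
from math import sqrt

def outer(v, w):                     # |v><w|
    return [[vi * wj.conjugate() for wj in w] for vi in v]

def inner(v, w):                     # <v|w>
    return sum(vi.conjugate() * wi for vi, wi in zip(v, w))

def trace_prod(A, B):                # Tr(A B) for 2x2 matrices
    return sum(A[i][k] * B[k][i] for i in range(2) for k in range(2))

x = [1 / sqrt(2), 1 / sqrt(2)]       # pure state rho_x = |x><x|
y = [1, 0]                           # test state E_y = |y><y|
p_trace = trace_prod(outer(x, x), outer(y, y))
p_born = abs(inner(y, x)) ** 2

assert abs(p_trace - p_born) < 1e-12
assert abs(p_born - 0.5) < 1e-12
```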
(V) The average value or expectation value of an observable A in a quantum
state ρ_x is given by ⟨A⟩_x = Tr(ρ_x A).

The average value or expectation value of an observable
A = \sum_{i=1}^{n} α_i E_i in a pure state x is given by

    ⟨A⟩_x = \sum_{j=1}^{n} \sum_{i=1}^{n} α_i ⟨j | x⟩⟨x | a_i⟩⟨a_i | j⟩
          = \sum_{j=1}^{n} \sum_{i=1}^{n} α_i ⟨a_i | j⟩⟨j | x⟩⟨x | a_i⟩
          = \sum_{i=1}^{n} α_i ⟨a_i | 1 | x⟩⟨x | a_i⟩
          = \sum_{i=1}^{n} α_i |⟨x | a_i⟩|^2.
(VI) The dynamical law or equation of motion can be written in the form
x(t) = U x(t_0), where U† = U^{-1} ("†" stands for transposition and
complex conjugation) is a linear unitary transformation or isometry.

The Schrödinger equation i ℏ (∂/∂t) ψ(t) = H ψ(t) is obtained by
identifying U with U = e^{-iHt/ℏ}, where H is a self-adjoint Hamiltonian
("energy") operator, by differentiating the equation of motion with
respect to the time variable t.

For stationary states ψ_n(t) = e^{-(i/ℏ) E_n t} ψ_n, the Schrödinger
equation can be brought into its time-independent form H ψ_n = E_n ψ_n.
Here, i ℏ (∂/∂t) ψ_n(t) = E_n ψ_n(t) has been used; E_n and ψ_n stand for
the n-th eigenvalue and eigenstate of H, respectively.

Usually, a physical problem is defined by the Hamiltonian H. The problem
of finding the physically relevant states reduces to finding a complete
set of eigenvalues and eigenstates of H. Most elegant solutions utilize
the symmetries of the problem; that is, the symmetry of H. There exist two
"canonical" examples, the 1/r-potential and the harmonic oscillator
potential, which can be solved wonderfully by these methods (and they are
presented over and over again in standard courses of quantum mechanics),
but not many more. (See, for instance, A. S. Davydov. Quantum Mechanics.
Addison-Wesley, Reading, MA, 1965, for a detailed treatment of various
Hamiltonians H.)
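An illustrative two-level sketch of (VI), in units with ℏ = 1 and with H = σ_x as an arbitrary example Hamiltonian: the propagator U(t) = e^{-iHt} reduces to cos(t)·1 - i sin(t)·σ_x, an eigenstate only acquires the stationary phase e^{-iEt}, and U(-t) = U(t)† undoes the evolution:

```python
from cmath import exp
from math import cos, sin, sqrt

# U(t) = e^{-i sigma_x t} = cos(t) I - i sin(t) sigma_x (hbar = 1)
def U(t):
    return [[cos(t), -1j * sin(t)], [-1j * sin(t), cos(t)]]

def apply(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1], M[1][0] * v[0] + M[1][1] * v[1]]

t = 0.7
psi = [1 / sqrt(2), 1 / sqrt(2)]     # eigenstate of sigma_x with E = 1
evolved = apply(U(t), psi)
expected = [exp(-1j * t) * c for c in psi]   # stationary phase e^{-iEt}
assert all(abs(a - b) < 1e-12 for a, b in zip(evolved, expected))

back = apply(U(-t), evolved)         # U(-t) = U(t)^dagger reverses the motion
assert all(abs(a - b) < 1e-12 for a, b in zip(back, psi))
```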
A.2 Quantum logic

The dimensionality of the Hilbert space for a given quantum system depends
on the number of possible mutually exclusive outcomes. In the spin-1/2
case, for example, there are two outcomes, "up" and "down," associated
with spin state measurements along arbitrary directions. Thus, the
dimensionality of the Hilbert space needs to be two.

Then the following identifications can be made. Table A.1 lists the
identifications of relations and operations of classical Boolean
set-theoretic and quantum Hilbert lattice types.

    Table A.1: Comparison of the identifications of lattice relations and
    operations for the lattices of subsets of a set, for experimental
    propositional calculi, for Hilbert lattices, and for lattices of
    commuting projection operators.

    generic lattice         | order relation        | "meet"                  | "join"             | "complement"
    ------------------------|-----------------------|-------------------------|--------------------|---------------
    propositional calculus  | implication →         | conjunction "and" ∧     | disjunction "or" ∨ | negation "not" ¬
    "classical" lattice of  | subset ⊂              | intersection ∩          | union ∪            | complement
    subsets of a set        |                       |                         |                    |
    Hilbert lattice         | subspace relation ⊂   | intersection of         | closure of linear  | orthogonal
                            |                       | subspaces ∩             | span ⊕             | subspace ⊥
    lattice of projection   | E1 E2 = E1            | E1 E2 (commuting);      | E1 + E2 - E1 E2    | orthogonal
    operators               |                       | lim_{n→∞} (E1 E2)^n     |                    | projection
                            |                       | (noncommuting)          |                    |
(i) Any closed linear subspace M_p spanned by a vector p in a Hilbert
space H – or, equivalently, any projection operator E_p = |p⟩⟨p| on a
Hilbert space H – corresponds to an elementary proposition p. The
elementary "true"-"false" proposition can in English be spelled out
explicitly as

    "The physical system has a property corresponding to the associated
    closed linear subspace."

It is coded into the two eigenvalues 0 and 1 of the projector E_p (recall
that E_p E_p = E_p).

(ii) The logical "and" operation is identified with the set-theoretical
intersection of two propositions "∩"; i.e., with the intersection of two
subspaces. It is denoted by the symbol "∧". So, for two propositions p and
q and their associated closed linear subspaces M_p and M_q,

    M_{p∧q} = {x | x ∈ M_p, x ∈ M_q}.

(iii) The logical "or" operation is identified with the closure of the
linear span "⊕" of the subspaces corresponding to the two propositions.
It is denoted by the symbol "∨". So, for two propositions p and q and
their associated closed linear subspaces M_p and M_q,

    M_{p∨q} = M_p ⊕ M_q = {x | x = αy + βz, α, β ∈ C, y ∈ M_p, z ∈ M_q}.

The symbol ⊕ will be used to indicate the closed linear subspace spanned
by two vectors. That is,

    u ⊕ v = {w | w = αu + βv, α, β ∈ C, u, v ∈ H}.

Notice that a vector of Hilbert space may be an element of M_p ⊕ M_q
without being an element of either M_p or M_q, since M_p ⊕ M_q includes
all the vectors in M_p ∪ M_q, as well as all of their linear combinations
(superpositions) and their limit vectors.

(iv) The logical "not" operation, or "negation," or "complement," is
identified with the operation of taking the orthogonal subspace "⊥". It is
denoted by the symbol "′". In particular, for a proposition p and its
associated closed linear subspace M_p, the negation p′ is associated with

    M_{p′} = {x | ⟨x | y⟩ = 0, y ∈ M_p},

where ⟨x | y⟩ denotes the scalar product of x and y.

(v) The logical "implication" relation is identified with the
set-theoretical subset relation "⊂". It is denoted by the symbol "→". So,
for two propositions p and q and their associated closed linear subspaces
M_p and M_q,

    p → q ⟺ M_p ⊂ M_q.

(vi) A trivial statement which is always "true" is denoted by 1. It is
represented by the entire Hilbert space H. So,

    M_1 = H.

(vii) An absurd statement which is always "false" is denoted by 0. It is
represented by the zero vector 0. So,

    M_0 = 0.
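For commuting projectors the lattice operations above become simple algebra. This sketch (diagonal projectors on C³, an arbitrary example; names invented) checks meet, join, and complement against the subspaces they are supposed to represent:

```python
# Diagonal projectors commute, so matrix products reduce to elementwise
# products of their diagonals.
E1 = [1, 1, 0]          # projector onto span{b1, b2}, stored by its diagonal
E2 = [0, 1, 1]          # projector onto span{b2, b3}

meet = [a * b for a, b in zip(E1, E2)]               # E1 E2
join = [a + b - a * b for a, b in zip(E1, E2)]       # E1 + E2 - E1 E2
complement = [1 - a for a in E1]                     # 1 - E1

assert meet == [0, 1, 0]          # intersection: span{b2}
assert join == [1, 1, 1]          # closure of the linear span: whole space
assert complement == [0, 0, 1]    # orthogonal subspace: span{b3}
```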
A.3 Diagrammatical representation, blocks, complementarity

Propositional structures are often represented by Hasse and Greechie
diagrams. A Hasse diagram is a convenient representation of the logical
implication, as well as of the "and" and "or" operations among
propositions. Points "•" represent propositions. Propositions which are
implied by other ones are drawn higher than the other ones. Two
propositions are connected by a line if one implies the other. Atoms are
propositions which "cover" the least element 0; i.e., they lie "just
above" 0 in a Hasse diagram of the partial order.

A much more compact representation of the propositional calculus can be
given in terms of its Greechie diagram [J. R. Greechie. Orthomodular
lattices admitting no states. Journal of Combinatorial Theory, 10:119-132,
1971. DOI: 10.1016/0097-3165(71)90015-X]. In this representation, the
emphasis is on Boolean subalgebras. Points "◦" represent the atoms. If
they belong to the same Boolean subalgebra, they are connected by edges or
smooth curves. The collection of all atoms and elements belonging to the
same Boolean subalgebra is called a block; i.e., every block represents a
Boolean subalgebra within a nonboolean structure. The blocks can be joined
or pasted together as follows.

(i) The tautologies of all blocks are identified.

(ii) The absurdities of all blocks are identified.

(iii) Identical elements in different blocks are identified.

(iv) The logical and algebraic structures of all blocks remain intact.

This construction is often referred to as the pasting construction. If the
blocks are only pasted together at the tautology and the absurdity, one
calls the resulting logic a horizontal sum.

Every single block represents some "maximal collection of co-measurable
observables," which will be identified with some quantum context. Hilbert
lattices can be thought of as the pasting of a continuity of such blocks
or contexts.

Note that, whereas all propositions within a given block or context are
co-measurable, propositions belonging to different blocks are not. This
latter feature is an expression of complementarity. Thus, from a strictly
operational point of view, it makes no sense to speak of the "real
physical existence" of different contexts, as knowledge of a single
context makes impossible the measurement of all the other ones.
Einstein-Podolsky-Rosen (EPR) type arguments [Albert Einstein, Boris
Podolsky, and Nathan Rosen. Can quantum-mechanical description of physical
reality be considered complete? Physical Review, 47(10):777-780, May 1935.
DOI: 10.1103/PhysRev.47.777] utilizing a configuration sketched in
Fig. A.4 claim to be able to infer two different contexts
counterfactually. One context is measured on one side of the setup, the
other context on the other side of it. By the uniqueness property [Karl
Svozil. Are simultaneous Bell measurements possible? New Journal of
Physics, 8:39, 1-8, 2006. DOI: 10.1088/1367-2630/8/3/039] of certain
two-particle states, knowledge of a property of one particle entails the
certainty that, if this property were measured on the other particle as
well, the outcome of the measurement would be a unique function of the
outcome of the measurement performed. This makes possible the measurement
of one context, as well as the simultaneous counterfactual inference of
another, mutually exclusive, context. Because, one could argue, although
one has actually measured on one side a different, incompatible context
compared to the context measured on the other side, if on both sides the
same context were measured, the outcomes on both sides would be uniquely
correlated. Hence measurement of one context per side is sufficient, for
the outcome could be counterfactually inferred on the other side.

As problematic as counterfactual physical reasoning may appear from an
operational point of view even for a two-particle state, the simultaneous
"counterfactual inference" of three or more blocks or contexts fails
because of the missing uniqueness property of quantum states.
A.4 Realizations of two-dimensional beam splitters

In what follows, lossless devices will be considered. The matrix

    T(ω, φ) = \begin{pmatrix}
      \sin ω & \cos ω \\
      e^{-iφ} \cos ω & -e^{-iφ} \sin ω
    \end{pmatrix}    (A.1)

has physical realizations in terms of beam splitters and Mach-Zehnder
interferometers equipped with an appropriate number of phase shifters. Two
such realizations are depicted in Fig. A.1. The elementary quantum
interference device T_bs in Fig. A.1a) is a unit consisting of two phase
shifters P_1 and P_2 in the input ports, followed by a beam splitter S,
which is followed by a phase shifter P_3 in one of the output ports. The
device can be quantum mechanically described by [Daniel M. Greenberger,
Mike A. Horne, and Anton Zeilinger. Multiparticle interferometry and the
superposition principle. Physics Today, 46:22-29, August 1993. DOI:
10.1063/1.881360]

    \begin{aligned}
    P_1 &: |0⟩ → |0⟩ e^{i(α+β)}, \\
    P_2 &: |1⟩ → |1⟩ e^{iβ}, \\
    S &: |0⟩ → \sqrt{T} \, |1'⟩ + i \sqrt{R} \, |0'⟩, \\
    S &: |1⟩ → \sqrt{T} \, |0'⟩ + i \sqrt{R} \, |1'⟩, \\
    P_3 &: |0'⟩ → |0'⟩ e^{iφ},
    \end{aligned}    (A.2)

where every reflection by a beam splitter S contributes a phase π/2 and
thus a factor of e^{iπ/2} = i to the state evolution. Transmitted beams
remain
    [Figure A.1: A universal quantum interference device operating on a
    qubit can be realized by a 4-port interferometer with two input ports
    0, 1 and two output ports 0', 1': a) realization by a single beam
    splitter S(T) with variable transmission T and three phase shifters
    P_1, P_2, P_3; b) realization by two 50:50 beam splitters S_1 and S_2
    and four phase shifters P_1, P_2, P_3, P_4.]
unchanged; i.e., there are no phase changes. Global phase shifts from
mirror reflections are omitted. With \sqrt{T(ω)} = \cos ω and
\sqrt{R(ω)} = \sin ω, the corresponding unitary evolution matrix is given
by

    T_bs(ω, α, β, φ) = \begin{pmatrix}
      i e^{i(α+β+φ)} \sin ω & e^{i(β+φ)} \cos ω \\
      e^{i(α+β)} \cos ω & i e^{iβ} \sin ω
    \end{pmatrix}.    (A.3)
Alternatively, the action of a lossless beam splitter may be described by the matrix¹²

$$\begin{pmatrix} i\sqrt{R(\omega)} & \sqrt{T(\omega)} \\ \sqrt{T(\omega)} & i\sqrt{R(\omega)} \end{pmatrix} = \begin{pmatrix} i\sin\omega & \cos\omega \\ \cos\omega & i\sin\omega \end{pmatrix}.$$

¹² The standard labelling of the input and output ports is interchanged; therefore sine and cosine are exchanged in the transition matrix.
A phase shifter in two-dimensional Hilbert space is represented by either $\mathrm{diag}\left(e^{i\varphi},1\right)$ or $\mathrm{diag}\left(1,e^{i\varphi}\right)$. The action of the entire device consisting of such elements is calculated by multiplying the matrices in the reverse order in which the quanta pass these elements¹³; i.e.,

$$T_{bs}(\omega,\alpha,\beta,\varphi) = \begin{pmatrix} e^{i\varphi} & 0 \\ 0 & 1 \end{pmatrix}\begin{pmatrix} i\sin\omega & \cos\omega \\ \cos\omega & i\sin\omega \end{pmatrix}\begin{pmatrix} e^{i(\alpha+\beta)} & 0 \\ 0 & 1 \end{pmatrix}\begin{pmatrix} 1 & 0 \\ 0 & e^{i\beta} \end{pmatrix}. \tag{A.4}$$

¹³ B. Yurke, S. L. McCall, and J. R. Klauder. SU(2) and SU(1,1) interferometers. Physical Review A, 33:4033–4054, 1986. URL http://dx.doi.org/10.1103/PhysRevA.33.4033; and R. A. Campos, B. E. A. Saleh, and M. C. Teich. Fourth-order interference of joint single-photon wave packets in lossless optical systems. Physical Review A, 42:4127–4137, 1990. DOI: 10.1103/PhysRevA.42.4127. URL http://dx.doi.org/10.1103/PhysRevA.42.4127.
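As a quick numerical sanity check (mine, not part of the original text), the matrix product of Eq. (A.4) can be multiplied out and compared with the closed form of Eq. (A.3). NumPy is assumed, and the helper names `T_bs` and `closed_form` are hypothetical:

```python
import numpy as np

def T_bs(w, a, b, phi):
    """Product form of Eq. (A.4): output phase shifter, beam splitter, input phase shifters."""
    P3 = np.diag([np.exp(1j * phi), 1])
    S = np.array([[1j * np.sin(w), np.cos(w)],
                  [np.cos(w), 1j * np.sin(w)]])
    P12 = np.diag([np.exp(1j * (a + b)), 1]) @ np.diag([1, np.exp(1j * b)])
    return P3 @ S @ P12

def closed_form(w, a, b, phi):
    """Closed form of Eq. (A.3)."""
    return np.array([
        [1j * np.exp(1j * (a + b + phi)) * np.sin(w),
         np.exp(1j * (b + phi)) * np.cos(w)],
        [np.exp(1j * (a + b)) * np.cos(w),
         1j * np.exp(1j * b) * np.sin(w)]])

rng = np.random.default_rng(0)
w, a, b, phi = rng.uniform(0, 2 * np.pi, 4)
M = T_bs(w, a, b, phi)
assert np.allclose(M, closed_form(w, a, b, phi))   # (A.4) reproduces (A.3)
assert np.allclose(M.conj().T @ M, np.eye(2))      # T_bs is unitary
```

The two assertions confirm both the equality of the product and closed forms and the unitarity of the device for randomly drawn parameters.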
The elementary quantum interference device $T_{MZ}$ depicted in Fig. A.1b) is a Mach-Zehnder interferometer with two input and output ports and three phase shifters. The process can be quantum mechanically described by

$$\begin{aligned}
P_1 &: |0\rangle \to |0\rangle\, e^{i(\alpha+\beta)},\\
P_2 &: |1\rangle \to |1\rangle\, e^{i\beta},\\
S_1 &: |1\rangle \to (|b\rangle + i\,|c\rangle)/\sqrt{2},\\
S_1 &: |0\rangle \to (|c\rangle + i\,|b\rangle)/\sqrt{2},\\
P_3 &: |b\rangle \to |b\rangle\, e^{i\omega},\\
S_2 &: |b\rangle \to (|1'\rangle + i\,|0'\rangle)/\sqrt{2},\\
S_2 &: |c\rangle \to (|0'\rangle + i\,|1'\rangle)/\sqrt{2},\\
P_4 &: |0'\rangle \to |0'\rangle\, e^{i\varphi}.
\end{aligned}\tag{A.5}$$

The corresponding unitary evolution matrix is given by

$$T_{MZ}(\alpha,\beta,\omega,\varphi) = i\,e^{i\left(\beta+\frac{\omega}{2}\right)}\begin{pmatrix} -e^{i(\alpha+\varphi)}\sin\frac{\omega}{2} & e^{i\varphi}\cos\frac{\omega}{2} \\ e^{i\alpha}\cos\frac{\omega}{2} & \sin\frac{\omega}{2} \end{pmatrix}. \tag{A.6}$$
Alternatively, $T_{MZ}$ can be computed by matrix multiplication; i.e.,

$$T_{MZ}(\alpha,\beta,\omega,\varphi) = \begin{pmatrix} e^{i\varphi} & 0 \\ 0 & 1\end{pmatrix}\frac{1}{\sqrt{2}}\begin{pmatrix} i & 1 \\ 1 & i\end{pmatrix}\begin{pmatrix} e^{i\omega} & 0 \\ 0 & 1\end{pmatrix}\frac{1}{\sqrt{2}}\begin{pmatrix} i & 1 \\ 1 & i\end{pmatrix}\begin{pmatrix} e^{i(\alpha+\beta)} & 0 \\ 0 & 1\end{pmatrix}\begin{pmatrix} 1 & 0 \\ 0 & e^{i\beta}\end{pmatrix}. \tag{A.7}$$
Both elementary quantum interference devices $T_{bs}$ and $T_{MZ}$ are universal in the sense that every unitary quantum evolution operator in two-dimensional Hilbert space can be brought into a one-to-one correspondence with $T_{bs}$ and $T_{MZ}$. As the emphasis is on the realization of the elementary beam splitter T in Eq. (A.1), which spans a subset of the set of all two-dimensional unitary transformations, the comparison of the parameters in $T(\omega,\phi) = T_{bs}(\omega',\beta',\alpha',\varphi') = T_{MZ}(\omega'',\beta'',\alpha'',\varphi'')$ yields $\omega = \omega' = \omega''/2$, $\beta' = \pi/2-\phi$, $\varphi' = \phi-\pi/2$, $\alpha' = -\pi/2$, $\beta'' = \pi/2-\omega-\phi$, $\varphi'' = \phi-\pi$, $\alpha'' = \pi$, and thus

$$T(\omega,\phi) = T_{bs}\left(\omega, -\frac{\pi}{2}, \frac{\pi}{2}-\phi, \phi-\frac{\pi}{2}\right) = T_{MZ}\left(2\omega, \pi, \frac{\pi}{2}-\omega-\phi, \phi-\pi\right). \tag{A.8}$$
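The equivalence of the product form (A.7) and the closed form (A.6) can also be checked numerically. The sketch below is mine (not the author's code); NumPy is assumed, and the helper names are hypothetical:

```python
import numpy as np

def T_MZ(a, b, w, phi):
    """Matrix product of Eq. (A.7); H is the 50:50 beam splitter matrix."""
    H = np.array([[1j, 1], [1, 1j]]) / np.sqrt(2)
    return (np.diag([np.exp(1j * phi), 1]) @ H
            @ np.diag([np.exp(1j * w), 1]) @ H
            @ np.diag([np.exp(1j * (a + b)), 1])
            @ np.diag([1, np.exp(1j * b)]))

def T_MZ_closed(a, b, w, phi):
    """Closed form of Eq. (A.6)."""
    pre = 1j * np.exp(1j * (b + w / 2))
    return pre * np.array([
        [-np.exp(1j * (a + phi)) * np.sin(w / 2),
         np.exp(1j * phi) * np.cos(w / 2)],
        [np.exp(1j * a) * np.cos(w / 2),
         np.sin(w / 2)]])

rng = np.random.default_rng(1)
a, b, w, phi = rng.uniform(0, 2 * np.pi, 4)
assert np.allclose(T_MZ(a, b, w, phi), T_MZ_closed(a, b, w, phi))
```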
Let us examine the realization of a few primitive logical “gates” corresponding to (unitary) unary operations on qubits. The “identity” element $I_2$ is defined by $|0\rangle \to |0\rangle$, $|1\rangle \to |1\rangle$ and can be realized by

$$I_2 = T\left(\frac{\pi}{2},\pi\right) = T_{bs}\left(\frac{\pi}{2}, -\frac{\pi}{2}, -\frac{\pi}{2}, \frac{\pi}{2}\right) = T_{MZ}(\pi,\pi,-\pi,0) = \mathrm{diag}(1,1). \tag{A.9}$$

The “not” gate is defined by $|0\rangle \to |1\rangle$, $|1\rangle \to |0\rangle$ and can be realized by

$$\mathsf{not} = T(0,0) = T_{bs}\left(0, -\frac{\pi}{2}, \frac{\pi}{2}, -\frac{\pi}{2}\right) = T_{MZ}\left(0,\pi,\frac{\pi}{2},\pi\right) = \begin{pmatrix} 0&1\\1&0 \end{pmatrix}. \tag{A.10}$$

The next gate, a modified “$\sqrt{I_2}$,” is truly quantum mechanical, since it converts a classical bit into a coherent superposition of $|0\rangle$ and $|1\rangle$. $\sqrt{I_2}$ is defined by $|0\rangle \to (1/\sqrt{2})(|0\rangle + |1\rangle)$, $|1\rangle \to (1/\sqrt{2})(|0\rangle - |1\rangle)$ and can be realized by

$$\sqrt{I_2} = T\left(\frac{\pi}{4},0\right) = T_{bs}\left(\frac{\pi}{4}, -\frac{\pi}{2}, \frac{\pi}{2}, -\frac{\pi}{2}\right) = T_{MZ}\left(\frac{\pi}{2},\pi,\frac{\pi}{4},-\pi\right) = \frac{1}{\sqrt{2}}\begin{pmatrix} 1&1\\1&-1 \end{pmatrix}. \tag{A.11}$$

Note that $\sqrt{I_2}\cdot\sqrt{I_2} = I_2$. However, the reduced parameterization of $T(\omega,\phi)$ is insufficient to represent $\sqrt{\mathsf{not}}$, such as

$$\sqrt{\mathsf{not}} = T_{bs}\left(\frac{\pi}{4}, -\pi, \frac{3\pi}{4}, -\pi\right) = \frac{1}{2}\begin{pmatrix} 1+i & 1-i \\ 1-i & 1+i \end{pmatrix}, \tag{A.12}$$

with $\sqrt{\mathsf{not}}\cdot\sqrt{\mathsf{not}} = \mathsf{not}$.
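The gate identities stated above are easy to verify numerically. This is a minimal check of mine (not from the text), assuming NumPy:

```python
import numpy as np

I2 = np.eye(2)
NOT = np.array([[0, 1], [1, 0]])
sqrt_I2 = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Eq. (A.11), the Hadamard matrix
sqrt_NOT = np.array([[1 + 1j, 1 - 1j],
                     [1 - 1j, 1 + 1j]]) / 2          # Eq. (A.12)

assert np.allclose(sqrt_I2 @ sqrt_I2, I2)        # sqrt(I2) squared is the identity
assert np.allclose(sqrt_NOT @ sqrt_NOT, NOT)     # sqrt(not) squared is the not gate
# sqrt(I2) sends the classical bit |0> into an equal-weight superposition:
print(sqrt_I2 @ np.array([1, 0]))
```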
A.5 Two particle correlations
In what follows, spin state measurements along certain directions or angles in spherical coordinates will be considered. Let us, for the sake of clarity, first specify and make precise what we mean by “direction of measurement.” Following, e.g., Ref.¹⁴, page 1, Fig. 1, and Fig. A.2, when not specified otherwise, we consider a particle travelling along the positive z-axis, i.e., along 0Z, which is taken to be horizontal. The x-axis along 0X is also taken to be horizontal. The remaining y-axis is taken vertically along 0Y. The three axes together form a right-handed system of coordinates.

¹⁴ G. N. Ramachandran and S. Ramaseshan. Crystal optics. In S. Flügge, editor, Handbuch der Physik XXV/1, volume XXV, pages 1–217. Springer, Berlin, 1961.
The Cartesian (x, y, z)-coordinates can be translated into spherical coordinates (r, θ, φ) via x = r sinθ cosφ, y = r sinθ sinφ, z = r cosθ, whereby θ is the polar angle in the x–z-plane measured from the z-axis, with 0 ≤ θ ≤ π, and φ is the azimuthal angle in the x–y-plane, measured from the x-axis, with 0 ≤ φ < 2π. We shall only consider directions taken from the origin 0, characterized by the angles θ and φ, assuming a unit radius r = 1.

[Figure A.2: Coordinate system for measurements of particles travelling along 0Z.]
Consider two particles or quanta. On each one of the two quanta, certain measurements (such as the spin state or polarization) of (dichotomic) observables O(a) and O(b) along the directions a and b, respectively, are performed. The individual outcomes are encoded or labeled by the symbols “−” and “+,” or values “−1” and “+1” are recorded along the directions a for the first particle, and b for the second particle, respectively. (Suppose
that the measurement direction a at “Alice’s location” is unknown to an
observer “Bob” measuring b and vice versa.) A two-particle correlation
function E(a,b) is defined by averaging over the product of the outcomes
$O(a)_i, O(b)_i \in \{-1,+1\}$ in the $i$-th experiment for a total of $N$ experiments; i.e.,

$$E(a,b) = \frac{1}{N}\sum_{i=1}^{N} O(a)_i\, O(b)_i. \tag{A.13}$$
Quantum mechanically, we shall follow a standard procedure for obtaining the probabilities upon which the expectation functions are based. We shall start from the angular momentum operators, as for instance defined in Schiff's “Quantum Mechanics”¹⁵, Chap. VI, Sec. 24, in arbitrary directions, given by the spherical angular momentum coordinates θ and φ, as defined above. Then, the projection operators corresponding to the eigenstates associated with the different eigenvalues are derived from the dyadic (tensor) product of the normalized eigenvectors. In Hilbert space based¹⁶ quantum logic¹⁷, every projector corresponds to a proposition that the system is in a state corresponding to that observable. The quantum probabilities associated with these eigenstates are derived from the Born rule, assuming singlet states for the physical reasons discussed above. These probabilities contribute to the correlation and expectation functions.

¹⁵ Leonard I. Schiff. Quantum Mechanics. McGraw-Hill, New York, 1955.
¹⁶ John von Neumann. Mathematische Grundlagen der Quantenmechanik. Springer, Berlin, 1932.
¹⁷ Garrett Birkhoff and John von Neumann. The logic of quantum mechanics. Annals of Mathematics, 37(4):823–843, 1936. DOI: 10.2307/1968621. URL http://dx.doi.org/10.2307/1968621.
Two-state particles:
Classical case:
For the two-outcome case (e.g., spin one-half, or photon polarization), it is quite easy to demonstrate that the classical expectation function
in the plane perpendicular to the direction connecting the two particles is
a linear function of the azimuthal measurement angle. Assume uniform
distribution of (opposite but otherwise) identical “angular momenta”
shared by the two particles and lying on the circumference of the unit circle
in the plane spanned by 0X and 0Y, as depicted in Figs. A.2 and A.3.

[Figure A.3: Planar geometric demonstration of the classical correlation of two two-state particles.]
By considering the lengths A+(a,b) and A−(a,b) of the positive and negative contributions to the expectation function, one obtains, for 0 ≤ θ = |a − b| ≤ π,

$$E_{cl,2,2}(\theta) = E_{cl,2,2}(a,b) = \frac{1}{2\pi}\left[A_+(a,b) - A_-(a,b)\right] = \frac{1}{2\pi}\left[2A_+(a,b) - 2\pi\right] = \frac{2}{\pi}|a-b| - 1 = \frac{2\theta}{\pi} - 1, \tag{A.14}$$

where the subscripts stand for the number of mutually exclusive measurement outcomes per particle, and for the number of particles, respectively. Note that A+(a,b) + A−(a,b) = 2π.
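The linear law (A.14) can be reproduced by a Monte Carlo sketch of the geometric argument: a shared angular momentum angle is drawn uniformly on the unit circle, particle 1 yields the sign of its projection onto direction a, and particle 2 (carrying the opposite momentum) yields the sign of its projection onto b. This simulation is mine, under these assumptions, and is not code from the text:

```python
import numpy as np

rng = np.random.default_rng(2)

def E_classical(a, b, n=400_000):
    """Monte Carlo estimate of the classical expectation function of Eq. (A.14)."""
    lam = rng.uniform(0, 2 * np.pi, n)        # hidden angle of particle 1
    out1 = np.sign(np.cos(lam - a))           # +/-1 outcome along direction a
    out2 = np.sign(np.cos(lam + np.pi - b))   # opposite momentum, measured along b
    return np.mean(out1 * out2)

a, b = 0.3, 1.5
theta = abs(a - b)
# Compare with the linear prediction 2*theta/pi - 1 of Eq. (A.14)
assert abs(E_classical(a, b) - (2 * theta / np.pi - 1)) < 0.01
```

For a = b the outcomes are always opposite, so the estimate returns exactly −1, matching 2·0/π − 1.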
Quantum case:
The two spin one-half particle case is one of the standard quantum mechanical exercises, although it is seldom computed explicitly. For the sake of completeness, and with the prospect of generalizing the results to more particles of higher spin, this case will be enumerated explicitly. In what follows, we shall use the following notation: Let |+⟩ denote the pure state corresponding to e1 = (1,0), and |−⟩ denote the orthogonal pure state corresponding to e2 = (0,1). The superscripts “T,” “∗” and “†” stand for transposition, complex conjugation and Hermitian conjugation, respectively.
In finite-dimensional Hilbert space, the matrix representation of projectors $E_a$ from normalized vectors $a = (a_1, a_2, \ldots, a_n)^T$ with respect to some basis of n-dimensional Hilbert space is obtained by taking the dyadic product; i.e., by

$$E_a = \left[a, a^{\dagger}\right] = \left[a, (a^*)^T\right] = a\otimes a^{\dagger} = \begin{pmatrix} a_1 a^{\dagger} \\ a_2 a^{\dagger} \\ \vdots \\ a_n a^{\dagger} \end{pmatrix} = \begin{pmatrix} a_1 a_1^* & a_1 a_2^* & \cdots & a_1 a_n^* \\ a_2 a_1^* & a_2 a_2^* & \cdots & a_2 a_n^* \\ \vdots & \vdots & \ddots & \vdots \\ a_n a_1^* & a_n a_2^* & \cdots & a_n a_n^* \end{pmatrix}. \tag{A.15}$$
The tensor or Kronecker product of two vectors $a$ and $b = (b_1, b_2, \ldots, b_m)^T$ can be represented by

$$a\otimes b = (a_1 b, a_2 b, \ldots, a_n b)^T = (a_1 b_1, a_1 b_2, \ldots, a_n b_m)^T. \tag{A.16}$$
The tensor or Kronecker product of some operators

$$A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{pmatrix} \quad\text{and}\quad B = \begin{pmatrix} b_{11} & b_{12} & \cdots & b_{1m} \\ b_{21} & b_{22} & \cdots & b_{2m} \\ \vdots & \vdots & \ddots & \vdots \\ b_{m1} & b_{m2} & \cdots & b_{mm} \end{pmatrix} \tag{A.17}$$

is represented by an $nm\times nm$-matrix

$$A\otimes B = \begin{pmatrix} a_{11}B & a_{12}B & \cdots & a_{1n}B \\ a_{21}B & a_{22}B & \cdots & a_{2n}B \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1}B & a_{n2}B & \cdots & a_{nn}B \end{pmatrix} = \begin{pmatrix} a_{11}b_{11} & a_{11}b_{12} & \cdots & a_{1n}b_{1m} \\ a_{11}b_{21} & a_{11}b_{22} & \cdots & a_{1n}b_{2m} \\ \vdots & \vdots & \ddots & \vdots \\ a_{nn}b_{m1} & a_{nn}b_{m2} & \cdots & a_{nn}b_{mm} \end{pmatrix}. \tag{A.18}$$
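Both constructions, the dyadic product of Eq. (A.15) and the Kronecker product of Eqs. (A.16)–(A.18), are directly available in NumPy. The following check is mine, not part of the text:

```python
import numpy as np

# Eq. (A.16): Kronecker product of two vectors
a = np.array([1, 2])
b = np.array([3, 4, 5])
assert np.array_equal(np.kron(a, b), np.array([3, 4, 5, 6, 8, 10]))

# Eq. (A.15): the dyadic product of a normalized complex vector is a projector
v = np.array([1, 1j]) / np.sqrt(2)
E = np.outer(v, v.conj())
assert np.allclose(E @ E, E)          # idempotent
assert np.allclose(E, E.conj().T)     # self-adjoint

# Eq. (A.18): block structure of the Kronecker product of matrices
A = np.array([[1, 2], [3, 4]])
B = np.eye(2)
assert np.allclose(np.kron(A, B)[:2, :2], A[0, 0] * B)   # top-left block is a11*B
```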
Observables:
Let us start with the spin one-half angular momentum observables of a single particle along an arbitrary direction, in spherical coordinates θ and φ, in units of ℏ¹⁸; i.e.,

$$M_x = \frac{1}{2}\begin{pmatrix}0&1\\1&0\end{pmatrix},\quad M_y = \frac{1}{2}\begin{pmatrix}0&-i\\i&0\end{pmatrix},\quad M_z = \frac{1}{2}\begin{pmatrix}1&0\\0&-1\end{pmatrix}. \tag{A.19}$$

¹⁸ Leonard I. Schiff. Quantum Mechanics. McGraw-Hill, New York, 1955.
The angular momentum operator in arbitrary direction θ, φ is given by its spectral decomposition

$$\begin{aligned}
S_{\frac{1}{2}}(\theta,\varphi) &= x M_x + y M_y + z M_z = M_x\sin\theta\cos\varphi + M_y\sin\theta\sin\varphi + M_z\cos\theta\\
&= \frac{1}{2}\sigma(\theta,\varphi) = \frac{1}{2}\begin{pmatrix}\cos\theta & e^{-i\varphi}\sin\theta\\ e^{i\varphi}\sin\theta & -\cos\theta\end{pmatrix}\\
&= -\frac{1}{2}\begin{pmatrix}\sin^2\frac{\theta}{2} & -\frac{1}{2}e^{-i\varphi}\sin\theta\\ -\frac{1}{2}e^{i\varphi}\sin\theta & \cos^2\frac{\theta}{2}\end{pmatrix} + \frac{1}{2}\begin{pmatrix}\cos^2\frac{\theta}{2} & \frac{1}{2}e^{-i\varphi}\sin\theta\\ \frac{1}{2}e^{i\varphi}\sin\theta & \sin^2\frac{\theta}{2}\end{pmatrix}\\
&= -\frac{1}{2}\cdot\frac{1}{2}\left[I_2 - \sigma(\theta,\varphi)\right] + \frac{1}{2}\cdot\frac{1}{2}\left[I_2 + \sigma(\theta,\varphi)\right].
\end{aligned}\tag{A.20}$$
The orthonormal eigenstates (eigenvectors) associated with the eigenvalues $-\frac{1}{2}$ and $+\frac{1}{2}$ of $S_{\frac{1}{2}}(\theta,\varphi)$ in Eq. (A.20) are

$$\begin{aligned}
|-\rangle_{\theta,\varphi} \equiv x_{-\frac{1}{2}}(\theta,\varphi) &= e^{i\delta_-}\left(-e^{-i\frac{\varphi}{2}}\sin\frac{\theta}{2},\; e^{i\frac{\varphi}{2}}\cos\frac{\theta}{2}\right),\\
|+\rangle_{\theta,\varphi} \equiv x_{+\frac{1}{2}}(\theta,\varphi) &= e^{i\delta_+}\left(e^{-i\frac{\varphi}{2}}\cos\frac{\theta}{2},\; e^{i\frac{\varphi}{2}}\sin\frac{\theta}{2}\right),
\end{aligned}\tag{A.21}$$

respectively, where δ− and δ+ are arbitrary phases. These orthogonal unit vectors correspond to the two orthogonal projectors

$$F_{\mp}(\theta,\varphi) = \frac{1}{2}\left[I_2 \mp \sigma(\theta,\varphi)\right] \tag{A.22}$$

for the spin down and up states along θ and φ, respectively. By setting all the phases and angles to zero, one obtains the original orthonormalized basis |−⟩, |+⟩.
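The spectral decomposition (A.20) and the projector properties of Eq. (A.22) can be confirmed numerically. This sketch is mine (NumPy assumed; the names `sigma` and `F` are my own):

```python
import numpy as np

def sigma(theta, phi):
    """The matrix sigma(theta, phi) of Eq. (A.20)."""
    return np.array([[np.cos(theta), np.exp(-1j * phi) * np.sin(theta)],
                     [np.exp(1j * phi) * np.sin(theta), -np.cos(theta)]])

def F(sign, theta, phi):
    """Projectors of Eq. (A.22): sign = -1 for spin down, +1 for spin up."""
    return (np.eye(2) + sign * sigma(theta, phi)) / 2

theta, phi = 0.7, 2.1
S = sigma(theta, phi) / 2                      # S_1/2 of Eq. (A.20)
# Spectral decomposition: S_1/2 = -1/2 F_- + 1/2 F_+
assert np.allclose(S, -0.5 * F(-1, theta, phi) + 0.5 * F(+1, theta, phi))
# F_- and F_+ are mutually orthogonal projectors summing to the identity
assert np.allclose(F(-1, theta, phi) @ F(+1, theta, phi), np.zeros((2, 2)))
assert np.allclose(F(-1, theta, phi) + F(+1, theta, phi), np.eye(2))
```

At θ = φ = 0 the projectors reduce to diag(1,0) and diag(0,1), i.e., to the original basis |+⟩, |−⟩.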
In what follows, we shall consider two-partite correlation operators
based on the spin observables discussed above.
1. Two-partite angular momentum observable
If we are only interested in spin state measurements with the associated outcomes of spin states in units of ℏ, Eq. (A.24) can be rewritten to include all possible cases at once; i.e.,

$$S_{\frac{1}{2}\frac{1}{2}}(\hat\theta,\hat\varphi) = S_{\frac{1}{2}}(\theta_1,\varphi_1)\otimes S_{\frac{1}{2}}(\theta_2,\varphi_2). \tag{A.23}$$
2. General two-partite observables
The two-particle projectors F±± or, in another notation, F±1±2 (to indicate the outcome on the first or the second particle), corresponding to a two spin-½ particle joint measurement aligned (“+”) or antialigned (“−”) along arbitrary directions, are

$$F_{\pm_1\pm_2}(\hat\theta,\hat\varphi) = \frac{1}{2}\left[I_2 \pm_1 \sigma(\theta_1,\varphi_1)\right]\otimes\frac{1}{2}\left[I_2 \pm_2 \sigma(\theta_2,\varphi_2)\right], \tag{A.24}$$

where “±i,” i = 1, 2, refers to the outcome on the i-th particle, and the notation θ̂, φ̂ is used to indicate all angular parameters.

To demonstrate its physical interpretation, let us consider as a concrete example a spin state measurement on two quanta as depicted in Fig. A.4: F−+(θ̂,φ̂) stands for the proposition

‘The spin state of the first particle measured along θ1, φ1 is “−” and the spin state of the second particle measured along θ2, φ2 is “+”.’

[Figure A.4: Simultaneous spin state measurement of the two-partite state represented in Eq. (A.27). Boxes indicate spin state analyzers, such as Stern-Gerlach apparatus, oriented along the directions θ1, φ1 and θ2, φ2; their two output ports are occupied with detectors associated with the outcomes “+” and “−”, respectively.]

More generally, we will consider two different numbers λ+ and λ−, and
the generalized single-particle operator

$$R_{\frac{1}{2}}(\theta,\varphi) = \lambda_-\,\frac{1}{2}\left[I_2 - \sigma(\theta,\varphi)\right] + \lambda_+\,\frac{1}{2}\left[I_2 + \sigma(\theta,\varphi)\right], \tag{A.25}$$

as well as the resulting two-particle operator

$$\begin{aligned}
R_{\frac{1}{2}\frac{1}{2}}(\hat\theta,\hat\varphi) &= R_{\frac{1}{2}}(\theta_1,\varphi_1)\otimes R_{\frac{1}{2}}(\theta_2,\varphi_2)\\
&= \lambda_-\lambda_-F_{--} + \lambda_-\lambda_+F_{-+} + \lambda_+\lambda_-F_{+-} + \lambda_+\lambda_+F_{++}.
\end{aligned}\tag{A.26}$$
Singlet state:
Singlet states $|\Psi_{d,n,i}\rangle$ could be labeled by three numbers d, n and i, denoting the number d of outcomes associated with the dimension of Hilbert space per particle, the number n of participating particles, and the state count i in an enumeration of all possible singlet states of n particles of spin j = (d − 1)/2, respectively¹⁹. For n = 2 there is only one singlet state, so always i = 1. For historic reasons, this singlet state is also called a Bell state and denoted by |Ψ−⟩.

¹⁹ Maria Schimpf and Karl Svozil. A glance at singlet states and four-partite correlations. Mathematica Slovaca, 60:701–722, 2010. ISSN 0139-9918. DOI: 10.2478/s12175-010-0041-7. URL http://dx.doi.org/10.2478/s12175-010-0041-7.

Consider the singlet “Bell” state of two spin-½ particles

$$|\Psi^-\rangle = \frac{1}{\sqrt{2}}\left(|+-\rangle - |-+\rangle\right). \tag{A.27}$$
With the identifications |+⟩ ≡ e1 = (1,0) and |−⟩ ≡ e2 = (0,1) as before, the Bell state has the vector representation

$$\begin{aligned}
|\Psi^-\rangle &\equiv \frac{1}{\sqrt{2}}\left(e_1\otimes e_2 - e_2\otimes e_1\right)\\
&= \frac{1}{\sqrt{2}}\left[(1,0)\otimes(0,1) - (0,1)\otimes(1,0)\right] = \left(0, \frac{1}{\sqrt{2}}, -\frac{1}{\sqrt{2}}, 0\right).
\end{aligned}\tag{A.28}$$

The density operator $\rho_{\Psi^-}$ is just the projector corresponding to the dyadic product of this vector, i.e., to the one-dimensional linear subspace spanned by |Ψ−⟩;

$$\rho_{\Psi^-} = |\Psi^-\rangle\langle\Psi^-| = \left[|\Psi^-\rangle, |\Psi^-\rangle^{\dagger}\right] = \frac{1}{2}\begin{pmatrix} 0&0&0&0\\ 0&1&-1&0\\ 0&-1&1&0\\ 0&0&0&0 \end{pmatrix}. \tag{A.29}$$
Singlet states are form invariant with respect to arbitrary unitary transformations in the single-particle Hilbert spaces, and thus also rotationally invariant in configuration space; in particular, under the rotations $|+\rangle = e^{i\frac{\varphi}{2}}\left(\cos\frac{\theta}{2}|+'\rangle - \sin\frac{\theta}{2}|-'\rangle\right)$ and $|-\rangle = e^{-i\frac{\varphi}{2}}\left(\sin\frac{\theta}{2}|+'\rangle + \cos\frac{\theta}{2}|-'\rangle\right)$ in the spherical coordinates θ, φ defined earlier [e.g., Ref.²⁰, Eq. (2), or Ref.²¹, Eq. (7–49)].

²⁰ Günther Krenn and Anton Zeilinger. Entangled entanglement. Physical Review A, 54:1793–1797, 1996. DOI: 10.1103/PhysRevA.54.1793. URL http://dx.doi.org/10.1103/PhysRevA.54.1793.
²¹ L. E. Ballentine. Quantum Mechanics. Prentice Hall, Englewood Cliffs, NJ, 1989.
The Bell singlet state is unique in the sense that the outcome of a spin
state measurement along a particular direction on one particle “fixes” also
the opposite outcome of a spin state measurement along the same direc-
tion on its “partner” particle: (assuming lossless devices) whenever a “plus”
or a “minus” is recorded on one side, a “minus” or a “plus” is recorded on
the other side, and vice versa.
Results:
We now turn to the calculation of the quantum predictions. The joint probability to register the spins of the two particles in state $\rho_{\Psi^-}$ aligned or antialigned along the directions defined by (θ1, φ1) and (θ2, φ2) can be evaluated by a straightforward calculation of

$$P^{\Psi^-}_{\pm_1\pm_2}(\hat\theta,\hat\varphi) = \mathrm{Tr}\left[\rho_{\Psi^-}\cdot F_{\pm_1\pm_2}(\hat\theta,\hat\varphi)\right] = \frac{1}{4}\left\{1 - (\pm_1 1)(\pm_2 1)\left[\cos\theta_1\cos\theta_2 + \sin\theta_1\sin\theta_2\cos(\varphi_1-\varphi_2)\right]\right\}. \tag{A.30}$$

Again, “±i,” i = 1, 2, refers to the outcome on the i-th particle.
Since $P_= + P_{\neq} = 1$ and $E = P_= - P_{\neq}$, the joint probabilities to find the two particles in an even or in an odd number of spin-“−½” states, when measured along (θ1, φ1) and (θ2, φ2), are, in terms of the expectation function, given by

$$\begin{aligned}
P_= &= P_{++} + P_{--} = \frac{1}{2}(1+E) = \frac{1}{2}\left\{1 - \left[\cos\theta_1\cos\theta_2 + \sin\theta_1\sin\theta_2\cos(\varphi_1-\varphi_2)\right]\right\},\\
P_{\neq} &= P_{+-} + P_{-+} = \frac{1}{2}(1-E) = \frac{1}{2}\left\{1 + \left[\cos\theta_1\cos\theta_2 + \sin\theta_1\sin\theta_2\cos(\varphi_1-\varphi_2)\right]\right\}.
\end{aligned}\tag{A.31}$$
Finally, the quantum mechanical expectation function is obtained by the difference $P_= - P_{\neq}$; i.e.,

$$E^{\Psi^-}_{-1,+1}(\theta_1,\theta_2,\varphi_1,\varphi_2) = -\left[\cos\theta_1\cos\theta_2 + \cos(\varphi_1-\varphi_2)\sin\theta_1\sin\theta_2\right]. \tag{A.32}$$

By setting either the azimuthal angle differences equal to zero, or by assuming measurements in the plane perpendicular to the direction of particle propagation, i.e., with θ1 = θ2 = π/2, one obtains

$$\begin{aligned}
E^{\Psi^-}_{-1,+1}(\theta_1,\theta_2) &= -\cos(\theta_1-\theta_2),\\
E^{\Psi^-}_{-1,+1}\left(\frac{\pi}{2},\frac{\pi}{2},\varphi_1,\varphi_2\right) &= -\cos(\varphi_1-\varphi_2).
\end{aligned}\tag{A.33}$$
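The singlet correlation (A.32)–(A.33) can be verified numerically from the density matrix (A.29): for dichotomic ±1 outcomes the two-particle operator reduces to σ(θ1,φ1)⊗σ(θ2,φ2). This check is mine (NumPy assumed; helper names are my own):

```python
import numpy as np

def sigma(theta, phi):
    """sigma(theta, phi) as in Eq. (A.20)."""
    return np.array([[np.cos(theta), np.exp(-1j * phi) * np.sin(theta)],
                     [np.exp(1j * phi) * np.sin(theta), -np.cos(theta)]])

# Singlet density matrix of Eq. (A.29), built from the vector of Eq. (A.28)
psi = np.array([0, 1, -1, 0]) / np.sqrt(2)
rho = np.outer(psi, psi.conj())

def E(theta1, phi1, theta2, phi2):
    """Expectation Tr[rho (sigma x sigma)] for outcomes +/-1, i.e. Eq. (A.32)."""
    obs = np.kron(sigma(theta1, phi1), sigma(theta2, phi2))
    return np.real(np.trace(rho @ obs))

t1, t2 = 0.4, 1.9
# Equal azimuthal angles: E = -cos(theta1 - theta2), first line of Eq. (A.33)
assert np.isclose(E(t1, 0.8, t2, 0.8), -np.cos(t1 - t2))
# Perfect anticorrelation along identical directions
assert np.isclose(E(t1, 0.8, t1, 0.8), -1.0)
```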
The general computation of the quantum expectation function for operator (A.26) yields

$$E^{\Psi^-}_{\lambda_1\lambda_2}(\hat\theta,\hat\varphi) = \mathrm{Tr}\left[\rho_{\Psi^-}\cdot R_{\frac{1}{2}\frac{1}{2}}(\hat\theta,\hat\varphi)\right] = \frac{1}{4}\left\{(\lambda_-+\lambda_+)^2 - (\lambda_--\lambda_+)^2\left[\cos\theta_1\cos\theta_2 + \cos(\varphi_1-\varphi_2)\sin\theta_1\sin\theta_2\right]\right\}. \tag{A.34}$$
The standard two-particle quantum mechanical expectations (A.32) based on the dichotomic outcomes “−1” and “+1” are obtained by setting λ+ = −λ− = 1.

A more “natural” choice of λ± would be in terms of the spin state observables (A.23) in units of ℏ; i.e., λ+ = −λ− = ½. The expectation function of these observables can be directly calculated via $S_{\frac{1}{2}}$; i.e.,

$$\begin{aligned}
E^{\Psi^-}_{-\frac{1}{2},+\frac{1}{2}}(\hat\theta,\hat\varphi) &= \mathrm{Tr}\left\{\rho_{\Psi^-}\cdot\left[S_{\frac{1}{2}}(\theta_1,\varphi_1)\otimes S_{\frac{1}{2}}(\theta_2,\varphi_2)\right]\right\}\\
&= -\frac{1}{4}\left[\cos\theta_1\cos\theta_2 + \cos(\varphi_1-\varphi_2)\sin\theta_1\sin\theta_2\right] = \frac{1}{4}E^{\Psi^-}_{-1,+1}(\hat\theta,\hat\varphi).
\end{aligned}\tag{A.35}$$
Bibliography
Oliver Aberth. Computable Analysis. McGraw-Hill, New York, 1980.
Milton Abramowitz and Irene A. Stegun, editors. Handbook of Math-
ematical Functions with Formulas, Graphs, and Mathematical Tables.
Number 55 in National Bureau of Standards Applied Mathematics Series.
U.S. Government Printing Office, Washington, D.C., 1964. Corrections
appeared in later printings up to the 10th Printing, December, 1972. Re-
productions by other publishers, in whole or in part, have been available
since 1965.
Lars V. Ahlfors. Complex Analysis: An Introduction of the Theory of An-
alytic Functions of One Complex Variable. McGraw-Hill Book Co., New
York, third edition, 1978.
Martin Aigner and Günter M. Ziegler. Proofs from THE BOOK. Springer, Heidelberg, fourth edition, 1998–2010. ISBN 978-3-642-00855-9. URL http://www.springerlink.com/content/978-3-642-00856-6.
M. A. Al-Gwaiz. Sturm-Liouville Theory and its Applications. Springer,
London, 2008.
A. D. Alexandrov. On Lorentz transformations. Uspehi Mat. Nauk., 5(3):
187, 1950.
A. D. Alexandrov. A contribution to chronogeometry. Canadian Journal of
Math., 19:1119–1128, 1967.
A. D. Alexandrov. Mappings of spaces with families of cones and space-
time transformations. Annali di Matematica Pura ed Applicata, 103:
229–257, 1975. ISSN 0373-3114. D O I : 10.1007/BF02414157. URL http:
//dx.doi.org/10.1007/BF02414157.
A. D. Alexandrov. On the principles of relativity theory. In Classics of Soviet
Mathematics. Volume 4. A. D. Alexandrov. Selected Works, pages 289–318.
1996.
Philip W. Anderson. More is different. Science, 177(4047):393–396, August
1972. D O I : 10.1126/science.177.4047.393. URL http://dx.doi.org/10.
1126/science.177.4047.393.
George E. Andrews, Richard Askey, and Ranjan Roy. Special Functions,
volume 71 of Encyclopedia of Mathematics and its Applications. Cam-
bridge University Press, Cambridge, 1999. ISBN 0-521-62321-9.
Tom M. Apostol. Mathematical Analysis: A Modern Approach to Advanced
Calculus. Addison-Wesley Series in Mathematics. Addison-Wesley, Read-
ing, MA, second edition, 1974. ISBN 0-201-00288-4.
Thomas Aquinas. Summa Theologica. Translated by Fathers of the English
Dominican Province. Christian Classics Ethereal Library, Grand Rapids,
MI, 1981. URL http://www.ccel.org/ccel/aquinas/summa.html.
George B. Arfken and Hans J. Weber. Mathematical Methods for Physicists.
Elsevier, Oxford, 6th edition, 2005. ISBN 0-12-059876-0;0-12-088584-0.
Sheldon Axler, Paul Bourdon, and Wade Ramey. Harmonic Function
Theory, volume 137 of Graduate texts in mathematics. second edition,
1994. ISBN 0-387-97875-5.
M. Baaz. Über den allgemeinen Gehalt von Beweisen. In Contributions
to General Algebra, volume 6, pages 21–29, Vienna, 1988. Hölder-Pichler-
Tempsky.
L. E. Ballentine. Quantum Mechanics. Prentice Hall, Englewood Cliffs, NJ,
1989.
Asim O. Barut. E = ℏω. Physics Letters A, 143(8):349–352, 1990. ISSN 0375-9601. DOI: 10.1016/0375-9601(90)90369-Y. URL http://dx.doi.org/10.1016/0375-9601(90)90369-Y.
John S. Bell. Against ‘measurement’. Physics World, 3:33–41, 1990. URL
http://physicsworldarchive.iop.org/summary/pwa-xml/3/8/
phwv3i8a26.
W. W. Bell. Special Functions for Scientists and Engineers. D. Van Nostrand
Company Ltd, London, 1968.
Paul Benacerraf. Tasks and supertasks, and the modern Eleatics. Journal
of Philosophy, LIX(24):765–784, 1962. URL http://www.jstor.org/
stable/2023500.
Walter Benz. Geometrische Transformationen. BI Wissenschaftsverlag,
Mannheim, 1992.
Michael Berry. Asymptotics, superasymptotics, hyperasymptotics... In
Harvey Segur, Saleh Tanveer, and Herbert Levine, editors, Asymptotics
beyond All Orders, volume 284 of NATO ASI Series, pages 1–14. Springer,
1992. ISBN 978-1-4757-0437-2. D O I : 10.1007/978-1-4757-0435-8. URL
http://dx.doi.org/10.1007/978-1-4757-0435-8.
Garrett Birkhoff and Gian-Carlo Rota. Ordinary Differential Equations.
John Wiley & Sons, New York, Chichester, Brisbane, Toronto, fourth edi-
tion, 1959, 1960, 1962, 1969, 1978, and 1989.
Garrett Birkhoff and John von Neumann. The logic of quantum mechan-
ics. Annals of Mathematics, 37(4):823–843, 1936. D O I : 10.2307/1968621.
URL http://dx.doi.org/10.2307/1968621.
E. Bishop and Douglas S. Bridges. Constructive Analysis. Springer, Berlin,
1985.
R. M. Blake. The paradox of temporal process. Journal of Philosophy, 23
(24):645–654, 1926. URL http://www.jstor.org/stable/2013813.
H. J. Borchers and G. C. Hegerfeldt. The structure of space-time transfor-
mations. Communications in Mathematical Physics, 28(3):259–266, 1972.
URL http://projecteuclid.org/euclid.cmp/1103858408.
Émile Borel. Mémoire sur les séries divergentes. Annales scientifiques
de l’École Normale Supérieure, 16:9–131, 1899. URL http://eudml.org/
doc/81143.
Max Born. Zur Quantenmechanik der Stoßvorgänge. Zeitschrift für
Physik, 37:863–867, 1926a. D O I : 10.1007/BF01397477. URL http://dx.
doi.org/10.1007/BF01397477.
Max Born. Quantenmechanik der Stoßvorgänge. Zeitschrift für Physik, 38:
803–827, 1926b. D O I : 10.1007/BF01397184. URL http://dx.doi.org/
10.1007/BF01397184.
John P. Boyd. The devil’s invention: Asymptotic, superasymptotic and
hyperasymptotic series. Acta Applicandae Mathematica, 56:1–98, 1999.
ISSN 0167-8019. D O I : 10.1023/A:1006145903624. URL http://dx.doi.
org/10.1023/A:1006145903624.
Vasco Brattka, Peter Hertling, and Klaus Weihrauch. A tutorial on com-
putable analysis. In S. Barry Cooper, Benedikt Löwe, and Andrea Sorbi,
editors, New Computational Paradigms: Changing Conceptions of What is
Computable, pages 425–491. Springer, New York, 2008.
Douglas Bridges and F. Richman. Varieties of Constructive Mathematics.
Cambridge University Press, Cambridge, 1987.
Percy W. Bridgman. A physicist’s second reaction to Mengenlehre. Scripta
Mathematica, 2:101–117, 224–234, 1934.
Yuri Alexandrovich Brychkov and Anatolii Platonovich Prudnikov. Hand-
book of special functions: derivatives, integrals, series and other formulas.
CRC/Chapman & Hall Press, Boca Raton, London, New York, 2008.
B.L. Burrows and D.J. Colwell. The Fourier transform of the unit step
function. International Journal of Mathematical Education in Science and
Technology, 21(4):629–635, 1990. D O I : 10.1080/0020739900210418. URL
http://dx.doi.org/10.1080/0020739900210418.
Adán Cabello. Kochen-Specker theorem and experimental test on hidden
variables. International Journal of Modern Physics, A 15(18):2813–2820,
2000. D O I : 10.1142/S0217751X00002020. URL http://dx.doi.org/10.
1142/S0217751X00002020.
Adán Cabello, José M. Estebaranz, and G. García-Alcaine. Bell-Kochen-
Specker theorem: A proof with 18 vectors. Physics Letters A, 212(4):183–
187, 1996. D O I : 10.1016/0375-9601(96)00134-X. URL http://dx.doi.
org/10.1016/0375-9601(96)00134-X.
R. A. Campos, B. E. A. Saleh, and M. C. Teich. Fourth-order interference
of joint single-photon wave packets in lossless optical systems. Physical
Review A, 42:4127–4137, 1990. D O I : 10.1103/PhysRevA.42.4127. URL
http://dx.doi.org/10.1103/PhysRevA.42.4127.
Georg Cantor. Beiträge zur Begründung der transfiniten Mengen-
lehre. Mathematische Annalen, 46(4):481–512, November 1895. D O I :
10.1007/BF02124929. URL http://dx.doi.org/10.1007/BF02124929.
J. B. Conway. Functions of Complex Variables. Volume I. Springer, New
York, NY, 1973.
A. S. Davydov. Quantum Mechanics. Addison-Wesley, Reading, MA, 1965.
Rene Descartes. Discours de la méthode pour bien conduire sa raison et
chercher la verité dans les sciences (Discourse on the Method of Rightly
Conducting One’s Reason and of Seeking Truth). 1637. URL http://www.
gutenberg.org/etext/59.
Rene Descartes. The Philosophical Writings of Descartes. Volume 1. Cam-
bridge University Press, Cambridge, 1985. translated by John Cottingham,
Robert Stoothoff and Dugald Murdoch.
Hermann Diels. Die Fragmente der Vorsokratiker, griechisch und deutsch.
Weidmannsche Buchhandlung, Berlin, 1906. URL http://www.archive.
org/details/diefragmentederv01dieluoft.
Paul A. M. Dirac. The Principles of Quantum Mechanics. Oxford University
Press, Oxford, 1930.
Hans Jörg Dirschmid. Tensoren und Felder. Springer, Vienna, 1996.
S. Drobot. Real Numbers. Prentice-Hall, Englewood Cliffs, New Jersey,
1964.
Dean G. Duffy. Green’s Functions with Applications. Chapman and
Hall/CRC, Boca Raton, 2001.
Thomas Durt, Berthold-Georg Englert, Ingemar Bengtsson, and Karol Zy-
czkowski. On mutually unbiased bases. International Journal of Quantum
Information, 8:535–640, 2010. D O I : 10.1142/S0219749910006502. URL
http://dx.doi.org/10.1142/S0219749910006502.
Anatolij Dvurecenskij. Gleason’s Theorem and Its Applications. Kluwer
Academic Publishers, Dordrecht, 1993.
Freeman J. Dyson. Divergence of perturbation theory in quantum elec-
trodynamics. Phys. Rev., 85(4):631–632, Feb 1952. D O I : 10.1103/Phys-
Rev.85.631. URL http://dx.doi.org/10.1103/PhysRev.85.631.
Albert Einstein, Boris Podolsky, and Nathan Rosen. Can quantum-
mechanical description of physical reality be considered complete?
Physical Review, 47(10):777–780, May 1935. D O I : 10.1103/PhysRev.47.777.
URL http://dx.doi.org/10.1103/PhysRev.47.777.
Artur Ekert and Peter L. Knight. Entangled quantum systems and the
Schmidt decomposition. American Journal of Physics, 63(5):415–423,
1995. D O I : 10.1119/1.17904. URL http://dx.doi.org/10.1119/1.
17904.
Lawrence C. Evans. Partial differential equations. Graduate Studies in
Mathematics, volume 19. American Mathematical Society, Providence,
Rhode Island, 1998.
Graham Everest, Alf van der Poorten, Igor Shparlinski, and Thomas Ward.
Recurrence sequences. Volume 104 in the AMS Surveys and Monographs
series. American mathematical Society, Providence, RI, 2003.
Hugh Everett III. The Everett interpretation of quantum mechanics:
Collected works 1955-1980 with commentary. Princeton University
Press, Princeton, NJ, 2012. ISBN 9780691145075. URL http://press.
princeton.edu/titles/9770.html.
William Norrie Everitt. A catalogue of Sturm-Liouville differential equa-
tions. In Werner O. Amrein, Andreas M. Hinz, and David B. Pearson, edi-
tors, Sturm-Liouville Theory, Past and Present, pages 271–331. Birkhäuser
Verlag, Basel, 2005. URL http://www.math.niu.edu/SL2/papers/
birk0.pdf.
Richard Phillips Feynman. The Feynman lectures on computation.
Addison-Wesley Publishing Company, Reading, MA, 1996. edited by
A.J.G. Hey and R. W. Allen.
Richard Phillips Feynman, Robert B. Leighton, and Matthew Sands. The
Feynman Lectures on Physics. Quantum Mechanics, volume III. Addison-
Wesley, Reading, MA, 1965.
Edward Fredkin. Digital mechanics. an informational process based on
reversible universal cellular automata. Physica, D45:254–270, 1990. D O I :
10.1016/0167-2789(90)90186-S. URL http://dx.doi.org/10.1016/
0167-2789(90)90186-S.
Eberhard Freitag and Rolf Busam. Funktionentheorie 1. Springer, Berlin,
Heidelberg, fourth edition, 1993, 1995, 2000, 2006. English translation in Ref. 22.
Robert French. The Banach-Tarski theorem. The Mathematical Intelli-
gencer, 10:21–28, 1988. ISSN 0343-6993. D O I : 10.1007/BF03023740. URL
http://dx.doi.org/10.1007/BF03023740.
Sigmund Freud. Ratschläge für den Arzt bei der psychoanalytischen Be-
handlung. In Anna Freud, E. Bibring, W. Hoffer, E. Kris, and O. Isakower,
editors, Gesammelte Werke. Chronologisch geordnet. Achter Band. Werke
aus den Jahren 1909–1913, pages 376–387, Frankfurt am Main, 1999.
Fischer.
Theodore W. Gamelin. Complex Analysis. Springer, New York, NY, 2001.
Robin O. Gandy. Church’s thesis and principles for mechanics. In J. Bar-
wise, H. J. Keisler, and K. Kunen, editors, The Kleene Symposium. Vol. 101
of Studies in Logic and Foundations of Mathematics, pages 123–148. North
Holland, Amsterdam, 1980.
I. M. Gel’fand and G. E. Shilov. Generalized Functions. Vol. 1: Properties
and Operations. Academic Press, New York, 1964. Translated from the
Russian by Eugene Saletan.
Andrew M. Gleason. Measures on the closed subspaces of a Hilbert
space. Journal of Mathematics and Mechanics (now Indiana Univer-
sity Mathematics Journal), 6(4):885–893, 1957. ISSN 0022-2518. D O I :
10.1512/iumj.1957.6.56050. URL http://dx.doi.org/10.1512/iumj.
1957.6.56050.
Kurt Gödel. Über formal unentscheidbare Sätze der Principia Math-
ematica und verwandter Systeme. Monatshefte für Mathematik und
Physik, 38(1):173–198, 1931. D O I : 10.1007/s00605-006-0423-7. URL
http://dx.doi.org/10.1007/s00605-006-0423-7.
I. S. Gradshteyn and I. M. Ryzhik. Tables of Integrals, Series, and Products,
6th ed. Academic Press, San Diego, CA, 2000.
J. R. Greechie. Orthomodular lattices admitting no states. Journal of Com-
binatorial Theory, 10:119–132, 1971. D O I : 10.1016/0097-3165(71)90015-X.
URL http://dx.doi.org/10.1016/0097-3165(71)90015-X.
BIBLIOGRAPHY 281
Daniel M. Greenberger, Mike A. Horne, and Anton Zeilinger. Multiparticle
interferometry and the superposition principle. Physics Today, 46:22–29,
August 1993. D O I : 10.1063/1.881360. URL http://dx.doi.org/10.
1063/1.881360.
Robert E. Greene and Stephen G. Krantz. Function theory of one complex
variable, volume 40 of Graduate Studies in Mathematics. American
Mathematical Society, Providence, Rhode Island, third edition, 2006.
Werner Greub. Linear Algebra, volume 23 of Graduate Texts in Mathemat-
ics. Springer, New York, Heidelberg, fourth edition, 1975.
A. Grünbaum. Modern Science and Zeno’s paradoxes. Allen and Unwin,
London, second edition, 1968.
Paul R. Halmos. Finite-dimensional Vector Spaces. Springer, New York,
Heidelberg, Berlin, 1974.
Jan Hamhalter. Quantum Measure Theory. Fundamental Theories of
Physics, Vol. 134. Kluwer Academic Publishers, Dordrecht, Boston, Lon-
don, 2003. ISBN 1-4020-1714-6.
Godfrey Harold Hardy. Divergent Series. Oxford University Press, 1949.
Hans Havlicek. Lineare Algebra für Technische Mathematiker. Helder-
mann Verlag, Lemgo, second edition, 2008.
Oliver Heaviside. Electromagnetic theory. “The Electrician” Printing and
Publishing Corporation, London, 1894-1912. URL http://archive.org/
details/electromagnetict02heavrich.
Jim Hefferon. Linear algebra. 320-375, 2011. URL http://joshua.
smcvt.edu/linalg.html/book.pdf.
Russell Herman. A Second Course in Ordinary Differential Equations:
Dynamical Systems and Boundary Value Problems. University of North
Carolina Wilmington, Wilmington, NC, 2008. URL http://people.
uncw.edu/hermanr/mat463/ODEBook/Book/ODE_LargeFont.pdf.
Creative Commons Attribution-NonCommercial-ShareAlike 3.0 United States License.
Russell Herman. Introduction to Fourier and Complex Analysis with Ap-
plications to the Spectral Analysis of Signals. University of North Carolina
Wilmington, Wilmington, NC, 2010. URL http://people.uncw.edu/
hermanr/mat367/FCABook/Book2010/FTCA-book.pdf. Creative Commons Attribution-NonCommercial-ShareAlike 3.0 United States License.
David Hilbert. Mathematical problems. Bulletin of the American
Mathematical Society, 8(10):437–479, 1902. D O I : 10.1090/S0002-
9904-1902-00923-3. URL http://dx.doi.org/10.1090/
S0002-9904-1902-00923-3.
David Hilbert. Über das Unendliche. Mathematische Annalen, 95(1):
161–190, 1926. D O I : 10.1007/BF01206605. URL http://dx.doi.org/10.
1007/BF01206605.
Einar Hille. Analytic Function Theory. Ginn, New York, 1962. 2 Volumes.
Einar Hille. Lectures on ordinary differential equations. Addison-Wesley,
Reading, Mass., 1969.
Edmund Hlawka. Zum Zahlbegriff. Philosophia Naturalis, 19:413–470,
1982.
Howard Anton and Chris Rorres. Elementary Linear Algebra: Applica-
tions Version. Wiley, New York, tenth edition, 2010.
Kenneth B. Howell. Principles of Fourier analysis. Chapman & Hall/CRC,
Boca Raton, London, New York, Washington, D.C., 2001.
Klaus Jänich. Analysis für Physiker und Ingenieure. Funktionentheo-
rie, Differentialgleichungen, Spezielle Funktionen. Springer, Berlin,
Heidelberg, fourth edition, 2001. URL http://www.springer.com/
mathematics/analysis/book/978-3-540-41985-3.
Klaus Jänich. Funktionentheorie. Eine Einführung. Springer, Berlin,
Heidelberg, sixth edition, 2008. D O I : 10.1007/978-3-540-35015-6. URL
http://dx.doi.org/10.1007/978-3-540-35015-6.
Ulrich D. Jentschura. Resummation of nonalternating divergent per-
turbative expansions. Physical Review D, 62:076001, Aug 2000. D O I :
10.1103/PhysRevD.62.076001. URL http://dx.doi.org/10.1103/
PhysRevD.62.076001.
Satish D. Joglekar. Mathematical Physics: The Basics. CRC Press, Boca
Raton, Florida, 2007.
Vladimir Kisil. Special functions and their symmetries. Part II: Algebraic
and symmetry methods. Postgraduate Course in Applied Analysis, May
2003. URL http://www1.maths.leeds.ac.uk/~kisilv/courses/
sp-repr.pdf.
Hagen Kleinert and Verena Schulte-Frohlinde. Critical Properties of
φ4-Theories. World Scientific, Singapore, 2001. ISBN 9810246595.
Morris Kline. Euler and infinite series. Mathematics Magazine, 56(5):
307–314, 1983. ISSN 0025570X. D O I : 10.2307/2690371. URL http:
//dx.doi.org/10.2307/2690371.
Eberhard Klingbeil. Tensorrechnung für Ingenieure. Bibliographisches
Institut, Mannheim, 1966.
Simon Kochen and Ernst P. Specker. The problem of hidden variables
in quantum mechanics. Journal of Mathematics and Mechanics (now
Indiana University Mathematics Journal), 17(1):59–87, 1967. ISSN 0022-
2518. D O I : 10.1512/iumj.1968.17.17004. URL http://dx.doi.org/10.
1512/iumj.1968.17.17004.
T. W. Körner. Fourier Analysis. Cambridge University Press, Cambridge,
UK, 1988.
Georg Kreisel. A notion of mechanistic theory. Synthese, 29:11–26,
1974. D O I : 10.1007/BF00484949. URL http://dx.doi.org/10.1007/
BF00484949.
Günther Krenn and Anton Zeilinger. Entangled entanglement. Physical
Review A, 54:1793–1797, 1996. D O I : 10.1103/PhysRevA.54.1793. URL
http://dx.doi.org/10.1103/PhysRevA.54.1793.
Gerhard Kristensson. Equations of Fuchsian type. In Second Order
Differential Equations, pages 29–42. Springer, New York, 2010. ISBN 978-
1-4419-7019-0. D O I : 10.1007/978-1-4419-7020-6. URL http://dx.doi.
org/10.1007/978-1-4419-7020-6.
Dietrich Küchemann. The Aerodynamic Design of Aircraft. Pergamon
Press, Oxford, 1978.
Vadim Kuznetsov. Special functions and their symmetries. Part I: Alge-
braic and analytic methods. Postgraduate Course in Applied Analysis,
May 2003. URL http://www1.maths.leeds.ac.uk/~kisilv/courses/
sp-funct.pdf.
Imre Lakatos. Philosophical Papers. 1. The Methodology of Scientific
Research Programmes. Cambridge University Press, Cambridge, 1978.
Rolf Landauer. Information is physical. Physics Today, 44(5):23–29, May
1991. D O I : 10.1063/1.881299. URL http://dx.doi.org/10.1063/1.
881299.
Ron Larson and Bruce H. Edwards. Calculus. Brooks/Cole Cengage
Learning, Belmont, CA, 9th edition, 2010. ISBN 978-0-547-16702-2.
N. N. Lebedev. Special Functions and Their Applications. Prentice-Hall
Inc., Englewood Cliffs, N.J., 1965. R. A. Silverman, translator and editor;
reprinted by Dover, New York, 1972.
H. D. P. Lee. Zeno of Elea. Cambridge University Press, Cambridge, 1936.
Gottfried Wilhelm Leibniz. Letters LXX, LXXI. In Carl Immanuel Gerhardt,
editor, Briefwechsel zwischen Leibniz und Christian Wolf. Handschriften
der Königlichen Bibliothek zu Hannover. H. W. Schmidt, Halle, 1860. URL
http://books.google.de/books?id=TUkJAAAAQAAJ.
June A. Lester. Distance preserving transformations. In Francis Bueken-
hout, editor, Handbook of Incidence Geometry, pages 921–944. Elsevier,
Amsterdam, 1995.
M. J. Lighthill. Introduction to Fourier Analysis and Generalized Functions.
Cambridge University Press, Cambridge, 1958.
Ismo V. Lindell. Delta function expansions, complex delta functions and
the steepest descent method. American Journal of Physics, 61(5):438–442,
1993. D O I : 10.1119/1.17238. URL http://dx.doi.org/10.1119/1.
17238.
Seymour Lipschutz and Marc Lipson. Linear algebra. Schaum’s outline
series. McGraw-Hill, fourth edition, 2009.
George Mackiw. A note on the equality of the column and row rank of a
matrix. Mathematics Magazine, 68(4):285–286, 1995. ISSN 0025570X.
URL http://www.jstor.org/stable/2690576.
T. M. MacRobert. Spherical Harmonics. An Elementary Treatise on Har-
monic Functions with Applications, volume 98 of International Series of
Monographs in Pure and Applied Mathematics. Pergamon Press, Oxford,
3rd edition, 1967.
Eli Maor. Trigonometric Delights. Princeton University Press, Princeton,
1998. URL http://press.princeton.edu/books/maor/.
Francisco Marcellán and Walter Van Assche. Orthogonal Polynomials and
Special Functions, volume 1883 of Lecture Notes in Mathematics. Springer,
Berlin, 2006. ISBN 3-540-31062-2.
David N. Mermin. Lecture notes on quantum computation. 2002-2008.
URL http://people.ccmr.cornell.edu/~mermin/qcomp/CS483.html.
David N. Mermin. Quantum Computer Science. Cambridge University
Press, Cambridge, 2007. ISBN 9780521876582. URL http://people.
ccmr.cornell.edu/~mermin/qcomp/CS483.html.
A. Messiah. Quantum Mechanics, volume I. North-Holland, Amsterdam,
1962.
Charles N. Moore. Summable Series and Convergence Factors. American
Mathematical Society, New York, NY, 1938.
Walter Moore. Schrödinger: Life and Thought. Cambridge University Press,
Cambridge, UK, 1989.
F. D. Murnaghan. The Unitary and Rotation Groups. Spartan Books,
Washington, D.C., 1962.
Otto Neugebauer. Vorlesungen über die Geschichte der antiken mathema-
tischen Wissenschaften. 1. Band: Vorgriechische Mathematik. Springer,
Berlin, 1934. page 172.
M. A. Nielsen and I. L. Chuang. Quantum Computation and Quantum
Information. Cambridge University Press, Cambridge, 2000.
Carl M. Bender and Steven A. Orszag. Advanced Mathematical Methods for Scientists and Engineers. McGraw-Hill, New York, NY, 1978.
Asher Peres. Quantum Theory: Concepts and Methods. Kluwer Academic
Publishers, Dordrecht, 1993.
Sergio A. Pernice and Gerardo Oleaga. Divergence of perturbation theory:
Steps towards a convergent series. Physical Review D, 57:1144–1158, Jan
1998. D O I : 10.1103/PhysRevD.57.1144. URL http://dx.doi.org/10.
1103/PhysRevD.57.1144.
Itamar Pitowsky. The physical Church-Turing thesis and physical compu-
tational complexity. Iyyun, 39:81–99, 1990.
Itamar Pitowsky. Infinite and finite Gleason’s theorems and the logic of
indeterminacy. Journal of Mathematical Physics, 39(1):218–228, 1998.
D O I : 10.1063/1.532334. URL http://dx.doi.org/10.1063/1.532334.
G. N. Ramachandran and S. Ramaseshan. Crystal optics. In S. Flügge,
editor, Handbuch der Physik XXV/1, volume XXV, pages 1–217. Springer,
Berlin, 1961.
M. Reck and Anton Zeilinger. Quantum phase tracing of correlated
photons in optical multiports. In F. De Martini, G. Denardo, and Anton
Zeilinger, editors, Quantum Interferometry, pages 170–177, Singapore,
1994. World Scientific.
M. Reck, Anton Zeilinger, H. J. Bernstein, and P. Bertani. Experimental
realization of any discrete unitary operator. Physical Review Letters, 73:
58–61, 1994. D O I : 10.1103/PhysRevLett.73.58. URL http://dx.doi.org/
10.1103/PhysRevLett.73.58.
Michael Reed and Barry Simon. Methods of Mathematical Physics I:
Functional Analysis. Academic Press, New York, 1972.
Michael Reed and Barry Simon. Methods of Mathematical Physics II:
Fourier Analysis, Self-Adjointness. Academic Press, New York, 1975.
Michael Reed and Barry Simon. Methods of Modern Mathematical Physics
IV: Analysis of Operators. Academic Press, New York, 1978.
Fred Richman and Douglas Bridges. A constructive proof of Gleason’s
theorem. Journal of Functional Analysis, 162:287–312, 1999. D O I :
10.1006/jfan.1998.3372. URL http://dx.doi.org/10.1006/jfan.
1998.3372.
Joseph J. Rotman. An Introduction to the Theory of Groups, volume 148 of
Graduate texts in mathematics. Springer, New York, fourth edition, 1995.
ISBN 0387942858.
Christiane Rousseau. Divergent series: Past, present, future . . . Preprint,
2004. URL http://www.dms.umontreal.ca/~rousseac/divergent.
pdf.
Rudy Rucker. Infinity and the Mind. Birkhäuser, Boston, 1982.
Richard Mark Sainsbury. Paradoxes. Cambridge University Press, Cam-
bridge, United Kingdom, third edition, 2009. ISBN 0521720796.
Dietmar A. Salamon. Funktionentheorie. Birkhäuser, Basel, 2012. D O I :
10.1007/978-3-0348-0169-0. URL http://dx.doi.org/10.1007/
978-3-0348-0169-0. See also URL http://www.math.ethz.ch/~salamon/PREPRINTS/cxana.pdf.
Günter Scharf. Finite Quantum Electrodynamics: The Causal Approach.
Springer, Berlin, Heidelberg, second edition, 1989, 1995.
Leonard I. Schiff. Quantum Mechanics. McGraw-Hill, New York, 1955.
Maria Schimpf and Karl Svozil. A glance at singlet states and four-partite
correlations. Mathematica Slovaca, 60:701–722, 2010. ISSN 0139-9918.
D O I : 10.2478/s12175-010-0041-7. URL http://dx.doi.org/10.2478/
s12175-010-0041-7.
Erwin Schrödinger. Quantisierung als Eigenwertproblem. An-
nalen der Physik, 384(4):361–376, 1926. ISSN 1521-3889. D O I :
10.1002/andp.19263840404. URL http://dx.doi.org/10.1002/andp.
19263840404.
Erwin Schrödinger. Discussion of probability relations between separated
systems. Mathematical Proceedings of the Cambridge Philosophical
Society, 31(04):555–563, 1935a. D O I : 10.1017/S0305004100013554. URL
http://dx.doi.org/10.1017/S0305004100013554.
Erwin Schrödinger. Die gegenwärtige Situation in der Quantenmechanik.
Naturwissenschaften, 23:807–812, 823–828, 844–849, 1935b. D O I :
10.1007/BF01491891, 10.1007/BF01491914, 10.1007/BF01491987. URL
http://dx.doi.org/10.1007/BF01491891,http://dx.doi.org/10.
1007/BF01491914,http://dx.doi.org/10.1007/BF01491987.
Erwin Schrödinger. Probability relations between separated systems.
Mathematical Proceedings of the Cambridge Philosophical Society, 32
(03):446–452, 1936. D O I : 10.1017/S0305004100019137. URL http:
//dx.doi.org/10.1017/S0305004100019137.
Erwin Schrödinger. Nature and the Greeks. Cambridge University Press,
Cambridge, 1954.
Erwin Schrödinger. The Interpretation of Quantum Mechanics. Dublin
Seminars (1949-1955) and Other Unpublished Essays. Ox Bow Press,
Woodbridge, Connecticut, 1995.
Laurent Schwartz. Introduction to the Theory of Distributions. University
of Toronto Press, Toronto, 1952. collected and written by Israel Halperin.
J. Schwinger. Unitary operator bases. In Proceedings of the National
Academy of Sciences (PNAS), volume 46, pages 570–579, 1960. D O I :
10.1073/pnas.46.4.570. URL http://dx.doi.org/10.1073/pnas.46.4.
570.
R. Sherr, K. T. Bainbridge, and H. H. Anderson. Transmutation of mer-
cury by fast neutrons. Physical Review, 60(7):473–479, Oct 1941. D O I :
10.1103/PhysRev.60.473. URL http://dx.doi.org/10.1103/PhysRev.
60.473.
Raymond M. Smullyan. What is the Name of This Book? Prentice-Hall,
Inc., Englewood Cliffs, NJ, 1992a.
Raymond M. Smullyan. Gödel’s Incompleteness Theorems. Oxford Univer-
sity Press, New York, New York, 1992b.
Ernst Snapper and Robert J. Troyer. Metric Affine Geometry. Academic
Press, New York, 1971.
Alexander Soifer. Ramsey theory before Ramsey, prehistory and early
history: An essay in 13 parts. In Alexander Soifer, editor, Ramsey Theory,
volume 285 of Progress in Mathematics, pages 1–26. Birkhäuser Boston,
2011. ISBN 978-0-8176-8091-6. D O I : 10.1007/978-0-8176-8092-3_1. URL
http://dx.doi.org/10.1007/978-0-8176-8092-3_1.
Thomas Sommer. Verallgemeinerte Funktionen. unpublished
manuscript, 2012.
Ernst Specker. Die Logik nicht gleichzeitig entscheidbarer Aussagen. Di-
alectica, 14(2-3):239–246, 1960. D O I : 10.1111/j.1746-8361.1960.tb00422.x.
URL http://dx.doi.org/10.1111/j.1746-8361.1960.tb00422.x.
Gilbert Strang. Introduction to linear algebra. Wellesley-Cambridge
Press, Wellesley, MA, USA, fourth edition, 2009. ISBN 0-9802327-1-6. URL
http://math.mit.edu/linearalgebra/.
Robert Strichartz. A Guide to Distribution Theory and Fourier Transforms.
CRC Press, Boca Roton, Florida, USA, 1994. ISBN 0849382734.
Karl Svozil. Conventions in relativity theory and quantum mechanics.
Foundations of Physics, 32:479–502, 2002. D O I : 10.1023/A:1015017831247.
URL http://dx.doi.org/10.1023/A:1015017831247.
Karl Svozil. Computational universes. Chaos, Solitons & Fractals, 25(4):
845–859, 2006a. D O I : 10.1016/j.chaos.2004.11.055. URL http://dx.doi.
org/10.1016/j.chaos.2004.11.055.
Karl Svozil. Are simultaneous Bell measurements possible? New Journal
of Physics, 8:39, 1–8, 2006b. D O I : 10.1088/1367-2630/8/3/039. URL
http://dx.doi.org/10.1088/1367-2630/8/3/039.
Alfred Tarski. Der Wahrheitsbegriff in den Sprachen der deduktiven
Disziplinen. Akademie der Wissenschaften in Wien. Mathematisch-
naturwissenschaftliche Klasse, Akademischer Anzeiger, 69:9–12, 1932.
Nico M. Temme. Special functions: an introduction to the classical func-
tions of mathematical physics. John Wiley & Sons, Inc., New York, 1996.
ISBN 0-471-11313-1.
Nico M. Temme. Numerical aspects of special functions. Acta Numerica,
16:379–478, 2007. ISSN 0962-4929. D O I : 10.1017/S0962492904000077.
URL http://dx.doi.org/10.1017/S0962492904000077.
Gerald Teschl. Ordinary Differential Equations and Dynamical Systems.
Graduate Studies in Mathematics, volume 140. American Mathematical
Society, Providence, Rhode Island, 2012. ISBN-10 0-8218-8328-3, ISBN-13 978-0-8218-8328-0. URL http://www.mat.univie.ac.at/
~gerald/ftp/book-ode/ode.pdf.
James F. Thomson. Tasks and supertasks. Analysis, 15:1–13, October 1954.
T. Toffoli. The role of the observer in uniform systems. In George J.
Klir, editor, Applied General Systems Research, Recent Developments and
Trends, pages 395–400. Plenum Press, New York, London, 1978.
William F. Trench. Introduction to real analysis. Free Hyperlinked Edition
2.01, 2012. URL http://ramanujan.math.trinity.edu/wtrench/
texts/TRENCH_REAL_ANALYSIS.PDF.
A. M. Turing. On computable numbers, with an application to the
Entscheidungsproblem. Proceedings of the London Mathematical
Society, Series 2, 42, 43:230–265, 544–546, 1936-7 and 1937. D O I :
10.1112/plms/s2-42.1.230, 10.1112/plms/s2-43.6.544. URL http:
//dx.doi.org/10.1112/plms/s2-42.1.230,http://dx.doi.org/
10.1112/plms/s2-43.6.544.
John von Neumann. Über Funktionen von Funktionaloperatoren. An-
nals of Mathematics, 32:191–226, 1931. URL http://www.jstor.org/
stable/1968185.
John von Neumann. Mathematische Grundlagen der Quantenmechanik.
Springer, Berlin, 1932. English translation in Ref. 23.
Stan Wagon. The Banach-Tarski Paradox. Cambridge University Press,
Cambridge, 1986.
Klaus Weihrauch. Computable Analysis. An Introduction. Springer, Berlin,
Heidelberg, 2000.
Gabriel Weinreich. Geometrical Vectors (Chicago Lectures in Physics). The
University of Chicago Press, Chicago, IL, 1998.
David Wells. Which is the most beautiful? The Mathematical Intelligencer,
10:30–31, 1988. ISSN 0343-6993. D O I : 10.1007/BF03023741. URL http:
//dx.doi.org/10.1007/BF03023741.
Hermann Weyl. Philosophy of Mathematics and Natural Science. Prince-
ton University Press, Princeton, NJ, 1949.
John Archibald Wheeler and Wojciech Hubert Zurek. Quantum Theory
and Measurement. Princeton University Press, Princeton, NJ, 1983.
E. T. Whittaker and G. N. Watson. A Course of Modern Analysis.
Cambridge University Press, Cambridge, 4th edition, 1927. URL
http://archive.org/details/ACourseOfModernAnalysis. Reprinted
in 1996. Table errata: Math. Comp. v. 36 (1981), no. 153, p. 319.
Eugene P. Wigner. The unreasonable effectiveness of mathematics
in the natural sciences. Richard Courant Lecture delivered at New
York University, May 11, 1959. Communications on Pure and Applied
Mathematics, 13:1–14, 1960. D O I : 10.1002/cpa.3160130102. URL
http://dx.doi.org/10.1002/cpa.3160130102.
Herbert S. Wilf. Mathematics for the physical sciences. Dover, New
York, 1962. URL http://www.math.upenn.edu/~wilf/website/
Mathematics_for_the_Physical_Sciences.html.
W. K. Wootters and B. D. Fields. Optimal state-determination by mutually
unbiased measurements. Annals of Physics, 191:363–381, 1989. D O I :
10.1016/0003-4916(89)90322-9. URL http://dx.doi.org/10.1016/
0003-4916(89)90322-9.
B. Yurke, S. L. McCall, and J. R. Klauder. SU(2) and SU(1,1) inter-
ferometers. Physical Review A, 33:4033–4054, 1986. URL http:
//dx.doi.org/10.1103/PhysRevA.33.4033.
Anton Zeilinger. The message of the quantum. Nature, 438:743, 2005.
D O I : 10.1038/438743a. URL http://dx.doi.org/10.1038/438743a.
Konrad Zuse. Rechnender Raum. Friedrich Vieweg & Sohn, Braunschweig,
1969.
Konrad Zuse. Discrete mathematics and Rechnender Raum. 1994. URL
http://www.zib.de/PaperWeb/abstracts/TR-94-10/.
Index
Abel sum, 21, 250, 256
Abelian group, 119
absolute value, 128, 175
adjoint identities, 150, 157
adjoint operator, 66, 197
adjoints, 66
affine transformations, 117
Alexandrov’s theorem, 118
analytic function, 129
antiderivative, 171
antisymmetric tensor, 107
associated Laguerre equation, 246
associated Legendre polynomial, 239
Babylonian “proof”, 26
basis, 40
basis change, 58
basis of induction, 27
Bell state, 82, 104, 272
Bessel equation, 227
Bessel function, 230
beta function, 210
BibTeX, 16
binomial theorem, 25
block, 264
Bohr radius, 245
Borel sum, 255
Borel summable, 255
Born rule, 50, 261
boundary value problem, 185
bra vector, 35
branch point, 139
canonical identification, 51
Cartesian basis, 40, 129
Cauchy principal value, 165
Cauchy’s differentiation formula, 131
Cauchy’s integral formula, 131
Cauchy’s integral theorem, 131
Cauchy-Riemann equations, 129
Cayley’s theorem, 123
change of basis, 58
characteristic equation, 73
characteristic exponents, 216
Chebyshev polynomial, 202, 230
cofactor, 65
coherent superposition, 36, 260
column rank of matrix, 63
column space, 64
commutator, 53
completeness, 43
complex analysis, 127
complex numbers, 128
complex plane, 128
conformal map, 130
conjugate transpose, 39, 56
context, 264
continuity of distributions, 152
contravariant coordinates, 49
contravariant order, 100
contravariant vector, 100
convergence, 249
coordinate system, 40
covariant coordinates, 49
covariant order, 100
covariant vector, 100
cross product, 108
curl, 108
d’Alembert reduction, 216
D’Alembert operator, 108
decomposition, 80, 81
degenerate eigenvalues, 74
delta function, 158, 164
delta sequence, 158
delta tensor, 107
determinant, 64
diagonal matrix, 47
differentiable, 129
differential equation, 183
dilatation, 117
dimension, 41, 120
Dirac delta function, 158
direct sum, 55
direction, 36
Dirichlet boundary conditions, 195
Dirichlet integral, 173
Dirichlet’s discontinuity factor, 172
distribution, 151
distributions, 159
divergence, 108, 249
divergent series, 249
domain, 197
dot product, 38
double dual space, 51
double factorial, 210
dual basis, 47
dual space, 46, 152
dual vector space, 46
eigenfunction, 148, 185
eigenfunction expansion, 148, 164, 185, 186
eigensystem, 72
eigenvalue, 72, 185
eigenvector, 62, 72, 148
Einstein summation convention, 50
entanglement, 53, 104, 105
entire function, 140
Euler differential equation, 250
Euler identity, 128
Euler integral, 210
Euler’s formula, 128, 146
exponential Fourier series, 146
extended plane, 129
factor theorem, 141
field, 35
form invariance, 101
Fourier analysis, 185, 188
Fourier inversion, 146, 156
Fourier series, 144
Fourier transform, 143, 147
Fourier transformation, 146, 156
frame, 40
Frobenius method, 213
Fuchsian equation, 207, 210
function theory, 127
functional analysis, 159
functional spaces, 143
functions of normal transformation, 79
fundamental theorem of affine geometry, 118
fundamental theorem of algebra, 141
gamma function, 207
Gauss series, 226
Gauss theorem, 229
Gauss’ theorem, 111
Gaussian differential equation, 227
Gaussian function, 147, 155
Gaussian integral, 147, 155
Gegenbauer polynomial, 202, 230
general Legendre equation, 239
general linear group, 121
generalized Cauchy integral formula, 131
generalized function, 151
generating function, 235
generator, 120
generators, 120
geometric series, 250, 254
Gleason’s theorem, 261
gradient, 108
Gram-Schmidt process, 44, 233
Grassmann identity, 109
Greechie diagram, 86, 264
group theory, 119
harmonic function, 239
Hasse diagram, 264
Heaviside function, 173, 238
Heaviside step function, 171
Hermite expansion, 148
Hermite functions, 148
Hermite polynomial, 148, 202, 230
Hermitian adjoint, 39, 56
Hermitian conjugate, 39, 56
Hermitian operator, 67, 71
Hilbert space, 39
holomorphic function, 129
homogeneous differential equation, 184
hypergeometric differential equation, 227
hypergeometric equation, 207, 227
hypergeometric function, 207, 226, 239
hypergeometric series, 226
imaginary numbers, 127
imaginary unit, 128
incidence geometry, 117
inductive step, 27
infinite pulse function, 171
inhomogeneous differential equation, 183
initial value problem, 185
inner product, 38, 44, 90, 143, 233
inverse operator, 53
irregular singular point, 211
isometry, 69, 262
Jacobi polynomial, 230
Jacobian matrix, 92, 97
ket vector, 35
Kochen-Specker theorem, 86
Kronecker delta function, 39, 42, 93
Kronecker product, 52, 89
Laguerre polynomial, 202, 230, 247
Laplace operator, 108, 204
LaTeX, 16
Laurent series, 133, 213
Legendre equation, 234, 243
Legendre polynomial, 202, 230, 234, 244
length, 36
Levi-Civita symbol, 107
Lie algebra, 121
Lie bracket, 121
Lie group, 120
linear combination, 58
linear functional, 46
linear independence, 37
linear manifold, 37
linear operator, 53
linear span, 38
linear superposition, 58
linear transformation, 53
linear vector space, 36
linearity of distributions, 152
Liouville normal form, 199
Liouville theorem, 140, 212
Lorentz group, 123
matrix, 54
matrix rank, 63
maximal operator, 83
measures, 85
meromorphic function, 141
metric, 47, 95, 96
metric tensor, 47, 93, 95
Minkowski metric, 97, 122
minor, 64
modulus, 128, 175
Moivre’s formula, 128
multi-valued function, 139
multifunction, 139
mutually unbiased bases, 61
nabla operator, 108, 130
Neumann boundary conditions, 195
non-Abelian group, 119
norm, 38
normal transformation, 76
null space, 64
order, 119
ordinary point, 211
orthogonal complement, 39
orthogonal functions, 233
orthogonal group, 121
orthogonal matrix, 121
orthogonal projector, 71, 86
orthogonal transformation, 69
orthogonality relations for sines and cosines, 145
orthonormal, 43
orthonormal transformation, 69
orthogonality, 39
partial differential equation, 203
partial fraction decomposition, 212
Pauli spin matrices, 54, 68, 120
periodic boundary conditions, 195
periodic function, 144
permutation, 68, 122
perpendicular projector, 71
Picard theorem, 141
Plemelj formula, 170, 174
Plemelj-Sokhotsky formula, 170, 174
Pochhammer symbol, 208, 226
Poincaré group, 122
polynomial, 53
positive transformation, 68
power series, 140
power series solution, 212
principal value, 165
principal value distribution, 166
probability measures, 85
product of transformations, 53
projection, 55
projection theorem, 39
projective geometry, 117
projective transformations, 117
projector, 42, 55
proper value, 72
proper vector, 72
pure state, 259
quantum logic, 262
quantum mechanics, 259
quantum state, 259
radius of convergence, 215
rank of matrix, 63
rank of tensor, 90
rational function, 140, 212, 226
reciprocal basis, 47
reduction of order, 216
reflection, 69
regular point, 211
regular singular point, 211
regularized Heaviside function, 173
representation, 120
residue, 133
residue theorem, 135
Riemann differential equation, 212, 227
Riemann rearrangement theorem, 249
Riemann surface, 128
Riesz representation theorem, 50
Rodrigues formula, 234, 239
rotation, 69, 117
rotation group, 122
rotation matrix, 122
row rank of matrix, 63
row space, 63
scalar product, 36, 38, 44, 90, 143, 233
Schmidt coefficients, 82
Schmidt decomposition, 81
Schrödinger equation, 240
secular determinant, 73
secular equation, 73
self-adjoint transformation, 67
sheet, 139
shifted factorial, 208, 226
sign function, 175
similarity transformations, 118
sine integral, 172
singlet state, 272
singular point, 211
singular value decomposition, 81
singular values, 81
skewing, 117
smooth function, 149
Sokhotsky formula, 170, 174
span, 38, 41
special orthogonal group, 122
special unitary group, 122
spectral form, 77
spectral theorem, 77
spectrum, 77
spherical coordinates, 98, 240, 269
spherical harmonics, 239
Spur (trace), 65
standard basis, 40, 129
state, 259
states, 85
Stieltjes integral, 253
Stirling’s formula, 210
Stokes’ theorem, 113
Sturm-Liouville differential operator, 196
Sturm-Liouville eigenvalue problem, 196
Sturm-Liouville form, 195
Sturm-Liouville transformation, 199
subgroup, 119
subspace, 37
sum of transformations, 53
symmetric group, 69, 122
symmetric operator, 67, 71
symmetry, 119
tempered distributions, 154
tensor product, 52
tensor rank, 90, 100
tensor type, 90, 100
three term recursion formula, 235
trace, 65
trace class, 66
transcendental function, 140
transformation matrix, 54
translation, 117
unit step function, 173
unitary group, 122
unitary matrix, 122
unitary transformation, 69, 262
vector, 36, 100
vector product, 108
weak solution, 150
Weierstrass factorization theorem, 140
weight function, 196, 233