Source: math.ucsd.edu/~nwallach/haarmeasure.pdf (UCSD Mathematics)
1 Construction of Haar Measure
Definition 1.1. A family G of linear transformations on a linear
topological space X is said to be equicontinuous on a subset K of X if
for every neighborhood V of the origin in X there is a neighborhood U
of the origin such that the following condition holds:
if k1, k2 ∈ K and k1 − k2 ∈ U, then G(k1 − k2) ⊆ V,
that is, T(k1 − k2) ∈ V for all T ∈ G.
Theorem 1.2 (Kakutani). Let K be a compact, convex subset of a
locally convex linear topological space X, and let G be a group of linear
mappings which is equicontinuous on K and such that G(K) ⊆ K.
Then there exists a point p ∈ K such that
T (p) = p ∀T ∈ G
Proof. By Zorn’s lemma, K contains a minimal non-void compact
convex subset K1 such that G(K1) ⊆ K1. If K1 contains just one point
then the proof is complete. If this is not the case, the compact set
K1 −K1 contains some point other than the origin.
Thus, there exists a neighborhood V of the origin such that
V ⊉ K1 − K1.
There is a convex neighborhood V1 of the origin such that αV1 ⊆ V for
|α| ≤ 1.
By the equicontinuity of G on the set K1, there is a neighborhood U1 of
the origin such that if k1, k2 ∈ K1 and k1 − k2 ∈ U1 then
G(k1 − k2) ⊆ V1.
Because each T ∈ G is invertible, T maps open sets to open sets (open
mapping theorem) and T(A ∩ B) = TA ∩ TB for any sets A, B.
Since T is linear,
T(convex-hull(A)) = convex-hull(T(A))
for any set A.
Because G is a group, G(GA) = GA for any set A.
Thus
U2 := convex-hull(GU1 ∩ (K1 − K1)) = convex-hull(G(U1 ∩ (K1 − K1))) ⊆ V1
is relatively open in K1 − K1 and satisfies GU2 = U2 ⊉ K1 − K1. By
continuity, GŪ2 = Ū2. Define
∞ > δ := inf{a : a > 0, aU2 ⊇ K1 − K1} ≥ 1
and U := δU2. For each 0 < ε < 1,
(1 + ε)U ⊇ K1 − K1 ⊈ (1 − ε)U.
The family of relatively open sets {2−1U + k}, k ∈ K1, is a covering of
K1. Let {2−1U + k1, . . . , 2−1U + kn} be a finite sub-covering and let
p = (k1 + · · · + kn)/n. If k is any point in K1, then ki − k ∈ 2−1U for
some 1 ≤ i ≤ n. Since ki − k ∈ (1 + ε)U for all i and all ε > 0, we have
p ∈ (1/n)(2−1U + (n − 1)(1 + ε)U) + k.
For ε = 1/(4(n − 1)), we have p ∈ (1 − 1/(4n))U + k for each k ∈ K1. Let
K2 = K1 ∩ ⋂k∈K1 ((1 − 1/(4n))U + k) ≠ ∅.
Because (1 − 1/(4n))U ⊉ K1 − K1, we have K2 ≠ K1. The closed set K2
is clearly convex. Further, since T(aU) ⊆ aU for T ∈ G, we have
T(aU + k) ⊆ aU + Tk for all T ∈ G, k ∈ K1.
Recalling TK1 = K1 for T ∈ G, we find that GK2 ⊆ K2, which
contradicts the minimality of K1. ∎
Theorem 1.3 (Haar Measure). Let G be a compact group. Let C(G) be
the space of continuous maps from G to C. Then, there is a unique
linear form
m : C(G) −→ C
having the following properties:
1. m(f) ≥ 0 for f ≥ 0 (m is positive).
2. m(𝟙) = 1 (m is normalized).
3. m(sf) = m(f), where sf is defined as the function
sf(g) = f(s−1g), s, g ∈ G
(m is left invariant).
4. m(fs) = m(f), where fs(g) = f(gs) for s, g ∈ G (m is right
invariant).
Proof. For f ∈ C(G), let Cf denote the convex hull of all left translates
of f. The elements of Cf are finite sums of the form
g(x) = ∑i ai f(si x), with ai > 0 and ∑i ai = 1 (finite sums)
Clearly
||g|| = max{|g(x)| : x ∈ G} ≤ ||f||
Thus all sets Cf(x) = {g(x) : g ∈ Cf} are bounded and relatively
compact in C. Since G is compact, f is uniformly continuous; namely,
for all ε > 0 there exists a neighborhood V = Vε of the identity element
e ∈ G such that:
y−1x ∈ V ⇒ |f(x) − f(y)| < ε
Since (s−1y)−1s−1x = y−1x, we also have
|sf(y) − sf(x)| < ε whenever y−1x ∈ V
Since the functions g are convex combinations of functions of the form
sf,
|g(y) − g(x)| < ε whenever y−1x ∈ V
Thus the set Cf is equicontinuous. By Ascoli's theorem, Cf is relatively
compact in C(G). Define the compact convex set Kf = C̄f in C(G).
The compact group G acts by left translations (isometrically) on C(G)
and leaves Cf, and hence Kf, invariant. By Kakutani's Theorem 1.2,
there is a fixed point g of this action of G in Kf. Such a fixed point
satisfies by definition
sg = g (∀s ∈ G) ⇒ g(s−1) = sg(e) = g(e) = c (∀s ∈ G)
for some constant c.
By the definition of the set Kf, given any ε > 0 there exist a finite set
{s1, . . . , sn} in G and ai > 0 such that
∑i ai = 1 and |c − ∑i ai f(si x)| < ε (∀x ∈ G) (1.1)
We first show that there is only one constant function in Kf. Start the
same construction as above, only now using right translations of f (e.g.
we can apply the preceding construction to the opposite group G′ of G,
or to the function f′(x) = f(x−1)), obtaining a relatively compact set C′f
with compact convex closure K′f containing a constant function c′. It
will be enough to show c = c′ (all constants c in Kf must be equal to
one chosen constant c′ of K′f, and conversely).
There is certainly a finite combination of right translates which is close
to c′, namely
|c′ − ∑j bj f(x tj)| < ε (for some tj ∈ G, bj > 0 with ∑j bj = 1)
Let us multiply this inequality by ai and put x = si to get
|c′ ai − ∑j ai bj f(si tj)| < ε ai (1.2)
Summing over i, we obtain
|c′ ∑i ai − ∑i,j ai bj f(si tj)| < ε ∑i ai = ε (1.3)
Operating symmetrically on Equation (1.1) (multiplying by bj, putting
x = tj and summing over j), we find:
|c − ∑i,j ai bj f(si tj)| < ε (1.4)
Combining Equations (1.3) and (1.4) by the triangle inequality, we get
|c − c′| < 2ε. Since ε was arbitrary, this proves c = c′.
From now on, the constant c in Kf will be denoted by m(f). It is the
only constant function which can be approximated arbitrarily closely by
convex combinations of left or right translates of f.
The following properties are obvious:
• m(𝟙) = 1, since Kf = {1} if f = 1.
• m(f) ≥ 0 if f ≥ 0.
• m(af) = a m(f) for any a ∈ C (since Kaf = aKf).
• m(sf) = m(f) = m(fs) (by uniqueness).
The proof will be complete if we show that m is additive (hence linear).
Let us take f, g ∈ C(G) and start with Equation (1.1) above with
c = m(f). Further, let
h(x) = ∑i ai g(si x)
Since h ∈ Cg, we certainly have Ch ⊆ Cg, whence Kh ⊆ Kg. But the set
Kg contains only one constant: m(h) = m(g).
We can write
|m(h) − ∑j bj h(tj x)| < ε
for finitely many suitable tj ∈ G and bj > 0 with ∑j bj = 1. Using the
definition of h and m(h) = m(g), this implies
|m(g) − ∑i,j ai bj g(si tj x)| < ε (1.5)
However, multiplying Equation (1.1) by bj, replacing x by tj x and
summing over j, we find
|m(f) − ∑i,j ai bj f(si tj x)| < ε (1.6)
Adding Equations (1.5) and (1.6), this implies
|m(f) + m(g) − ∑i,j ai bj (f + g)(si tj x)| < 2ε
Thus the constant m(f) + m(g) is in Kf+g. However, note that the only
constant in this compact convex set is m(f + g). This completes the
proof. ∎
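For a finite group the Haar functional of the theorem is simply the uniform average m(f) = (1/|G|) ∑g f(g), and properties 1-4 can be checked directly. A minimal sketch (the realization of S3 as permutation tuples, and the helpers compose, inverse, are our own illustrative choices, not from the text):

```python
from itertools import permutations

# S3 as permutation tuples; compose(p, q) = p ∘ q, i.e. (p∘q)[i] = p[q[i]].
G = list(permutations(range(3)))
compose = lambda p, q: tuple(p[q[i]] for i in range(3))
inverse = lambda p: tuple(sorted(range(3), key=lambda i: p[i]))

def m(f):
    """Haar functional on a finite group: the uniform average of f over G."""
    return sum(f(g) for g in G) / len(G)

# an arbitrary test function f : G -> R
f = lambda g: (g[0] + 1) ** 2 + g[1]

s = (1, 0, 2)  # a fixed group element
left = m(lambda g: f(compose(inverse(s), g)))   # m(sf), where sf(g) = f(s^-1 g)
right = m(lambda g: f(compose(g, s)))           # m(fs), where fs(g) = f(gs)

print(m(lambda g: 1.0))                                     # normalization
print(abs(left - m(f)) < 1e-12, abs(right - m(f)) < 1e-12)  # left/right invariance
```

Left invariance here is just a reindexing of the sum: as g runs over G, so does s−1g.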
1.1 Exercises
Exercise 1.4. Let m be the normalized Haar measure of a compact
group G. For f ∈ C(G) or L1(G), show that m(f̌) = m(f), where the
function f̌ is defined by f̌(x) = f(x−1). This equality is usually written as
∫G f(x) dx = ∫G f(x−1) dx
Hint: Observe that f → m(f̌) is a Haar measure on G and use the
uniqueness part of the theorem on Haar measures, Theorem 1.3.
Before stating the next exercise we need a definition.
Definition 1.5 (Semidirect products). Let L be a group and assume
it contains a normal subgroup G and a subgroup H such that GH = L
and G ∩ H = {e}. That is, suppose one can select exactly one element
h from each coset of G so that {h} forms a subgroup H. If H is also
normal, then L is isomorphic with the direct product G × H. If H fails to
be normal, we can still reconstruct L if we know how the inner
automorphisms ρh behave on G. Namely, for xj ∈ G and hj ∈ H
(j = 1, 2), we have:
(x1h1)(x2h2) = x1(h1x2h1−1)h1h2 = (x1 ρh1(x2)) h1h2
The construction just given can be cast in an abstract form. Let G and
H be groups and suppose there is a homomorphism h→ τh which
carries H onto a group of automorphisms of G, namely τh ◦ τh′ = τhh′
for h, h′ ∈ H. Let GsH denote the cartesian product of G and H. For
(x, h) and (x′, h′) in GsH, define:
(x, h)(x′, h′) = (x (τh(x′)) , hh′)
Then GsH is a group; it is called a semidirect product of G and H. Its
identity is (e1, e2) where e1 and e2 are the identities of G and H
respectively. The inverse of (x, h) is (τh−1(x−1), h−1). Let
G1 := {(x, e2) : x ∈ G}
and
H1 := {(e1, h) : h ∈ H}
Then G1 is a normal subgroup of GsH and H1 is a subgroup. Since
(e1, h) · (x, e2) · (e1, h)−1 = (τh(x), e2)
the inner automorphism ρ(e1,h) for (e1, h) ∈ H1 reproduces the action τh
on G. Thus every semidirect product is obtained by the process
described in the previous paragraphs.
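The abstract multiplication rule (x, h)(x′, h′) = (x τh(x′), hh′) is easy to experiment with on a finite example. A sketch (the choice G = Z/4, H = Z/2 with τh(x) = ±x mod 4, which yields the dihedral group of order 8, is our own illustration):

```python
n = 4
def tau(h, x):
    """Action of H = Z/2 on G = Z/n: tau_0 = id, tau_1 = inversion x -> -x."""
    return x % n if h == 0 else (-x) % n

def mul(a, b):
    """Semidirect product rule (x, h)(x', h') = (x + tau_h(x'), h + h')."""
    (x, h), (xp, hp) = a, b
    return ((x + tau(h, xp)) % n, (h + hp) % 2)

def inv(a):
    """Inverse of (x, h) is (tau_{h^-1}(-x), h^-1); in Z/2, h^-1 = h."""
    x, h = a
    return (tau(h, (-x) % n), h)

L = [(x, h) for x in range(n) for h in range(2)]
e = (0, 0)

# group axioms, and normality of G1 = {(x, 0)} as in the text
assoc = all(mul(mul(a, b), c) == mul(a, mul(b, c)) for a in L for b in L for c in L)
inverses = all(mul(a, inv(a)) == e for a in L)
normal = all(mul(mul(a, (x, 0)), inv(a))[1] == 0 for a in L for x in range(n))
print(assoc, inverses, normal)
```

Note that conjugating (x, 0) by (0, 1) returns (−x mod 4, 0), reproducing the action τ1 exactly as stated above.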
Exercise 1.6. Let G and H be compact groups and let GsH be a
semidirect product of G and H. Suppose also that the mapping
(x, h) → τh(x) is a continuous mapping of G × H onto G. In particular,
each τh is a homeomorphism of G onto itself. Show that the semidirect
product GsH with the product topology is a compact group. What is
the Haar measure on GsH in terms of the Haar measures on G and H?
Exercise 1.7. Let On(R) be the group of n × n orthogonal matrices.
Suppose that Zij, 1 ≤ i, j ≤ n, are i.i.d. standard normal random
variables. Let U be the random orthogonal matrix with rows obtained by
applying the Gram-Schmidt process to the vectors (Z11, . . . , Z1n), . . .,
(Zn1, . . . , Znn). Show that U is distributed according to the Haar
measure on On(R).
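A quick numerical sketch of the construction in Exercise 1.7 (pure Python; the helper gram_schmidt is our own, and with probability one the Gaussian rows are linearly independent, so no normalization below divides by zero):

```python
import random

def gram_schmidt(rows):
    """Orthonormalize a list of vectors (the rows of a matrix) in order."""
    basis = []
    for v in rows:
        # subtract the projections onto the previously built orthonormal vectors
        for b in basis:
            d = sum(x * y for x, y in zip(v, b))
            v = [x - d * y for x, y in zip(v, b)]
        norm = sum(x * x for x in v) ** 0.5
        basis.append([x / norm for x in v])
    return basis

random.seed(0)
n = 4
Z = [[random.gauss(0.0, 1.0) for _ in range(n)] for _ in range(n)]
U = gram_schmidt(Z)

# U is orthogonal: U U^T = I (up to rounding)
UUt = [[sum(U[i][k] * U[j][k] for k in range(n)) for j in range(n)] for i in range(n)]
ok = all(abs(UUt[i][j] - (1.0 if i == j else 0.0)) < 1e-9
         for i in range(n) for j in range(n))
print(ok)
```

The distributional claim of the exercise rests on the rotation invariance of the Gaussian vector; the code only checks that the output lies in On(R).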
2 Representations, General Constructions
For E, a complex Banach space, let Gl(E) denote the group of
continuous isomorphisms of E onto itself. A representation π of a
compact group G in E is a homomorphism
π : G −→ Gl(E)
for which all the maps G → E defined by s → π(s)v (v ∈ E) are
continuous. The space E = Eπ in which the representation takes place
is called the representation space of π. A representation π of a group
G in a vector space E canonically defines an action (also denoted by π)
π : G × E −→ E
(s, v) −→ π(s)v
The definition requires this action to be separately continuous. The
action is then automatically globally continuous.
We say that a representation π is unitary when E = H is a Hilbert
space and each operator π(s) (s ∈ G) is a unitary operator (i.e. each
π(s) is isometric and surjective). Thus π is unitary when E = H is a
Hilbert space and
π(s)∗ = π(s)−1 = π(s−1) (s ∈ G)
The representation π of G in E is said to be irreducible when E and
{0} are distinct and are the only two closed invariant subspaces under all
operators π(s) (s ∈ G) (topological irreducibility).
Two representations π and π′ of the same group G are called
equivalent when the two spaces on which they act are G-isomorphic,
namely there exists a continuous isomorphism A : E → E′ of their
respective spaces with
A(π(s)v) = π′(s)Av (s ∈ G, v ∈ E)
More generally, continuous linear operators A : E → E′ satisfying the
commutation relations Aπ(s) = π′(s)A for all s ∈ G are called
intertwining operators or G-morphisms (from π to π′), and their set is
a vector space denoted either by
HomG(E, E′) or Hom(π, π′)
Proposition 2.8. Let π be a unitary representation of G in the Hilbert
space H. If H1 is an invariant subspace of H (with respect to all
operators π(s), s ∈ G), then the orthogonal space H2 = H1⊥ of H1 in
H is also invariant.
Proof. We need to show that if v ∈ H, v ⊥ H1, then π(s)v is also
orthogonal to H1 for all s in G. For any x ∈ H1,
〈x, π(s)v〉 = 〈π(s)∗x, v〉 = 〈π(s−1)x, v〉 = 0
since by assumption π(s−1)x also lies in H1. ∎
Proposition 2.9. Let π be a representation of a compact group G in a
Hilbert space H. Then there exists a positive definite hermitian form ϕ
which is invariant under the G-action, and which defines the same
topological structure on H.
Proof. By continuity of the mappings s → π(s)v, the mappings
s −→ 〈π(s)v, π(s)w〉 (v, w ∈ H)
are also continuous (by continuity of the scalar product on H × H). We
can thus define
ϕ(v, w) = ∫G 〈π(s)v, π(s)w〉 ds
using the Haar integral.
It is clear that ϕ is hermitian and positive. Let us show that it is
non-degenerate and defines the same topology on H. Since G is
compact, π(G) is also compact in Gl(H) (with the strong topology). In
particular, π(G) is pointwise bounded and thus uniformly bounded
(uniform boundedness principle ≡ Banach-Steinhaus theorem). Thus,
there exists a positive constant M > 0 with
||π(s)v|| ≤ M ||v|| (∀s ∈ G, v ∈ H)
This implies
||v|| = ||π(s−1)π(s)v|| ≤ M ||π(s)v|| ≤ M2 ||v||
Thus
M−1 ||v|| ≤ ||π(s)v|| ≤ M ||v||
Squaring and integrating over G, we find
M−2 ||v||2 ≤ ϕ(v, v) ≤ M2 ||v||2
Thus ϕ(v, v) = 0 implies ||v|| = 0 and v = 0, and ϕ and || · ||2 induce
equivalent topologies (equivalent norms) on H. Invariance of ϕ comes
from the invariance of the Haar measure: writing f(s) = 〈π(s)v, π(s)w〉,
ϕ(π(t)v, π(t)w) = ∫G 〈π(st)v, π(st)w〉 ds = ∫G f(st) ds
= ∫G ft(s) ds = ∫G f(s) ds = ϕ(v, w)
This shows that π is ϕ-unitary, as desired. ∎
These propositions imply that any representation of a compact group in
a Hilbert space is equivalent to a unitary one, and any finite dimensional
representation (the dimension of a representation is the dimension of its
representation space) is completely reducible (a direct sum of irreducible
ones).
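On a finite group the averaging in Proposition 2.9 is a finite sum. A minimal sketch (the representation of Z/2 on R2 by the non-orthogonal involution A below is our own example, chosen so that the standard dot product is not invariant but the averaged form is):

```python
# Z/2 acts on R^2 by pi(0) = identity, pi(1) = A, where A*A = identity
# but A is not orthogonal (so pi is not unitary for the standard dot product).
A = [[1.0, 1.0], [0.0, -1.0]]

def apply(M, v):
    return [M[0][0]*v[0] + M[0][1]*v[1], M[1][0]*v[0] + M[1][1]*v[1]]

dot = lambda v, w: v[0]*w[0] + v[1]*w[1]

def phi(v, w):
    """Averaged form: phi(v, w) = (1/|G|) sum_s <pi(s)v, pi(s)w>."""
    return 0.5 * (dot(v, w) + dot(apply(A, v), apply(A, w)))

v, w = [1.0, 2.0], [-3.0, 0.5]
print(abs(phi(apply(A, v), apply(A, w)) - phi(v, w)) < 1e-12)  # phi is invariant
print(abs(dot(apply(A, v), apply(A, w)) - dot(v, w)) < 1e-12)  # dot is not
```

The invariance of phi is exactly the reindexing argument of the proof: replacing v, w by Av, Aw permutes the two summands because A² = id.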
Definition 2.10 (left translations). In any space of functions on G,
define the left translations by
[l(s)f](x) = f(s−1x)
(If we do not want to identify elements of Lp(G) with functions or
classes of functions, we can simply extend translations from C(G) to
Lp(G) by continuity.)
Thus we have
l(s) ◦ l(t) = l(st)
and we get homomorphisms
l : G → Gl(E), s → l(s)
with any E = Lp(G), 1 ≤ p < ∞.
Exercise 2.11. Check that these homomorphisms are continuous in the
representation sense.
The above were the left regular representations of G. The right
regular representations of G in the Banach spaces Lp(G) are defined
similarly by
[r(s)f](x) = f(xs) (f ∈ Lp(G))
With this definition, one has r(s) ◦ r(t) = r(st).
One can also consider the biregular representation l × r of G × G
in Lp(G) defined by
[(l × r)(s, t)f](x) = f(s−1xt) (f ∈ Lp(G))
and its restriction to the diagonal G → G × G, s → (s, s), which is the
adjoint representation of G. It is defined by
[Ad(s)f](x) = f(s−1xs) (f ∈ Lp(G))
The regular representations are faithful, i.e. π(s) = 𝟙 ⇔ s = e.
Let π : G → Gl(E) and π′ : G′ → Gl(E′) be two representations. We
can define the external direct sum representation of G × G′ in E ⊕ E′
by
(π ⊕ π′)(s, s′) = π(s) ⊕ π′(s′) (s ∈ G, s′ ∈ G′)
When G = G′, we can restrict this external direct sum to the diagonal
G of G × G, obtaining the usual direct sum of π and π′
π ⊕ π′ : G → Gl(E ⊕ E′)
s → π(s) ⊕ π′(s)
The external tensor product π ⊗ π′ as a representation of G × G′ in
E ⊗ E′ is defined by
(π ⊗ π′)(s, s′) = π(s) ⊗ π′(s′) (s ∈ G, s′ ∈ G′)
We assume the two spaces E, E′ are finite dimensional; thus this
algebraic tensor product is complete (in general some completion has to
be devised).
The usual tensor product of two representations of the same group
G is the restriction to the diagonal of the external tensor product
(G = G′) and is given by
(π ⊗ π′)(s) = π(s) ⊗ π′(s) (s ∈ G)
For a given finite dimensional representation π : G → Gl(E), define the
contragredient representation π̌. This representation acts in the dual
E′ of E (namely the space of linear forms on E) and
π̌(s) = tπ(s−1) (s ∈ G)
Since transposition reverses the order of composition of mappings,
namely t(AB) = tB tA, it is necessary to compensate by taking the
inverse in the group. The above construction then gives
π̌(st) = π̌(s)π̌(t), as is required for a representation.
Conjugate representation π̄: When E = H is a Hilbert space, the
conjugate π̄ of π is a representation acting on the conjugate H̄ of H.
Recall that H̄ has the same underlying additive group as H, but with
the scalar multiplication twisted by complex conjugation; namely, the
external operation of scalars is given by
(a, v) −→ a · v = āv (we use a dot in H̄)
The inner product 〈·, ·〉− of H̄ is defined by
〈v̄, w̄〉− = 〈v, w〉‾ = 〈w, v〉
This suggests that an element v ∈ H be written as v̄ when we consider
it as an element of the conjugate Hilbert space H̄. With this notation
we have:
(av)‾ = ā · v̄ (a ∈ C) and 〈v̄, w̄〉− = 〈v, w〉‾
The identity map H → H̄, v → v̄, is an anti-isomorphism. The
conjugate of π is defined by π̄(s) = π(s) acting in H̄. Since the (complex
vector) subspaces of H and H̄ are the same by definition, π and π̄ are
reducible or irreducible simultaneously. However, it is important to
distinguish these two representations (in particular they are not always
equivalent). Any orthonormal basis (ei) of H is also an orthonormal
basis of H̄, but a decomposition v = ∑i vi ei in H gives rise to the
decomposition
v̄ = ∑i v̄i ēi (complex conjugate components in H̄)
Thus the matrix representations associated with π and π̄ are complex
conjugates of one another.
Exercise 2.12. Show that when π is unitary and finite dimensional, the
contragredient π̌ and the conjugate π̄ of π are equivalent.
2.1 Exercises
Exercise 2.13. Show that the left and right regular representations l and
r of a group G (in any Lp(G) space) are equivalent.
Exercise 2.14. If π and π′ are two representations of the same group
G (acting in respective Hilbert spaces H and H′), show that the matrix
coefficients of π ⊗ π′ (with respect to bases (ei) in H, (e′j) in H′
and (ei ⊗ e′j) in H ⊗ H′) are products of matrix coefficients of π and π′
(Kronecker product of matrices).
Exercise 2.15. Let 𝟙n denote the identity representation of the group
G in dimension n (the space of this identity representation is thus Cn
and 𝟙n(s) = idCn for all s ∈ G). Show that for any representation π of
G,
π ⊗ 𝟙n is equivalent to π ⊕ π ⊕ · · · ⊕ π (n terms)
Exercise 2.16 (Schur's lemma). Let k be an algebraically closed field,
V a finite dimensional vector space over k and Φ any irreducible set of
operators in V (the only invariant subspaces, relative to all operators
belonging to Φ, are V and {0}). Then, if an operator A commutes with
all operators in Φ, A is a multiple of the identity operator (i.e. A is a
scalar operator).
Hint: Take an eigenvalue a of A in the algebraically closed field k and
consider A − a·I, which still commutes with all operators of Φ. Show
that Ker(A − a·I) (≠ {0}) is an invariant subspace.
3 Finite dimensional representations of
compact groups (Peter-Weyl theorem)
Theorem 3.17 (Peter-Weyl). Let G be a compact group. For any
s ≠ e in G, there exists a finite dimensional irreducible representation
π of G such that π(s) ≠ 𝟙.
Proof. We start with two lemmas.
Lemma 3.18. Let G be a compact group, k : G × G → C a continuous
function and K : L2(G) → C(G) the operator with kernel k, namely:
(Kf)(x) = ∫G k(x, y) f(y) dy
Then K is a compact operator. Moreover, if k(x, y) = k̄(y, x) identically
on G × G, K is Hermitian as an operator from L2(G) to C(G).
Lemma 3.19. Let K be a compact Hermitian operator (in some Hilbert
space H). Then the spectrum S of K consists of eigenvalues. Each
eigenspace Hλ with respect to a non-zero eigenvalue λ ∈ S is finite
dimensional and the number of eigenvalues outside any neighborhood of
0 is finite. Moreover, S ⊆ R,
||K|| = sup{|λ| : λ ∈ S}
and the eigenspaces associated to distinct eigenvalues are orthogonal, i.e.
Hλ ⊥ Hµ for λ ≠ µ in S
Proof of Theorem 3.17: Assume that s ≠ e in G and take an open
symmetric neighborhood V = V−1 of e in G such that s ∉ V2. There
exists a positive continuous function f such that
f(e) > 0, f(x) = f(x−1), Supp(f) ⊆ V
where Supp(f) denotes the support of f, namely the complement of the
largest open set on which f vanishes. Consider the function ϕ = f ∗ f
defined by
ϕ(x) = ∫G f(y) f(y−1x) dy
The support of ϕ is contained in V2 and
ϕ(s) = 0 (s ∉ V2), ϕ(e) = ||f||2 > 0.
We also see that l(s)ϕ ≠ ϕ. But the operator K with kernel
k(x, y) = f(y−1x) is compact (see Lemma 3.18) and the convergence in
quadratic mean of
f = f0 + ∑i fi, fi ∈ Ker(K − λi) = Hi (λi ∈ Spec(K), λi ≠ 0)
implies that
ϕ = Kf = ∑i Kfi = ∑i λi fi
where
fi = (1/λi) Kfi ∈ Im(K) ⊆ C(G)
with uniform convergence holding in the series above. Since
l(s)ϕ ≠ ϕ, we must have l(s)fi ≠ fi for at least one index i. However,
the definition of the kernel k shows that
k(sx, sy) = k(x, y) = f(y−1x) (s, x, y ∈ G)
The consequence of these identities is the translation invariance of all
the eigenspaces Hi of K. The left regular representation restricted to a
suitable finite dimensional subspace Hi (any i with l(s)fi ≠ fi) furnishes
an example of a finite dimensional representation π with π(s) ≠ 𝟙.
Decomposing this representation into irreducible components (complete
reducibility), at least one irreducible constituent π still satisfies
π(s) ≠ 𝟙. ∎
The corollaries of this theorem are numerous and important.
Corollary 3.20. A compact group is commutative if and only if all its
finite dimensional irreducible representations have dimension 1.
Proof. Exercise.
Corollary 3.21 (Peter-Weyl). Any continuous function on a compact
group is a uniform limit of (finite) linear combinations of coefficients of
irreducible representations.
Proof. Let π be a (finite dimensional) irreducible representation of the
compact group G and take a basis in the representation space of π, in
order to identify π with a homomorphism π : G → Gln(C); the
coefficients of π are the continuous functions on G defined by
cij : g −→ cij(g) = 〈ei, π(g)ej〉
More generally, if u and v are elements of H, we can define the
(function) coefficient cuv of π on G by
g −→ cuv(g) = 〈u, π(g)v〉
These functions are obviously finite linear combinations of the previously
defined matrix coefficients cij. Introduce the subspace V(π) of C(G)
spanned by the cij, or equivalently by all cuv for u, v ∈ Hπ. Observe that
the subspaces of C(G) attached in this way to two equivalent
representations π and π′ coincide, namely V(π) = V(π′). Thus we can
form the algebraic sum (a priori this algebraic sum is not a direct sum)
AG = ⊕π V(π) ⊆ C(G)
where the summation index π runs over all (classes of) finite
dimensional irreducible representations of G. The corollary can be
restated in the following form:
AG is a dense subspace of the Banach space C(G) in the
uniform norm.
But this algebraic sum AG is a subalgebra of C(G) (the product of two
continuous functions being the usual pointwise product). The product of
the coefficients
cuv of π and γst of σ
is a coefficient of the representation π ⊗ σ (the coefficient of this
representation with respect to the two vectors u ⊗ s and v ⊗ t). Taking
π and σ to be finite dimensional representations of G, π ⊗ σ will be
finite dimensional, hence completely reducible, and all its coefficients (in
particular the product of cuv and γst) are finite linear combinations of
coefficients of (finite dimensional) irreducible representations of G.
This subalgebra AG of C(G) contains the constants, is stable under
complex conjugation (because π is irreducible precisely when π̄ is
irreducible) and separates the points of G by the main Theorem 3.17. By
the Stone-Weierstrass theorem the proof is complete. ∎
3.1 Exercises
Exercise 3.22. Let G be a compact totally disconnected group. Show
that AG is the algebra of all locally constant functions on G. (Observe
that a locally constant function on G is uniformly locally constant, hence
can be identified with a function on a quotient G/H where H is some
open subgroup of G. Conversely, any finite dimensional representation
of G must be trivial on an open subgroup H of G.)
Exercise 3.23. Let G be any compact group. Show that AG consists of
the continuous functions f on G for which the left and right translates
of f generate a finite dimensional subspace of C(G). In particular, if G1
and G2 are two compact groups, any continuous homomorphism
h : G1 → G2 has a transpose th : A2 → A1, where Ai = AGi, defined by
th(f) = f ◦ h. A priori this transpose is a linear mapping
th : C(G2) → C(G1).
Exercise 3.24. Let G = Un(C) with its canonical representation π in
V = Cn. Since π is unitary, we can identify the conjugate π̄ with the
contragredient π̌ of π: it acts in the dual V∗ of V.
(a) Let Apq denote the space of linear combinations of coefficients of the
representation
πpq = π̌⊗p ⊗ π⊗q acting in (V∗)⊗p ⊗ V⊗q = Tpq(V)
Prove that the sum of the subspaces Apq of C(G) is an algebra A (show
that Apq · Ars ⊆ A(p+r)(q+s)), stable under conjugation (show that
Āpq = Aqp), which separates the points of G. Using the
Stone-Weierstrass theorem, conclude that A is dense in C(G).
(b) Show that A = AG (use part (a) to prove that any irreducible
representation of G appears as a subrepresentation of some πpq, or in
other words can be realized on a space of mixed tensors).
Exercise 3.25. Let G be a closed subgroup of Un(C). Using the fact
that any finite dimensional representation of G appears in the restriction
of some finite dimensional representation of Un(C) (this is a
consequence of the theory of induced representations), show that G is a
real algebraic subvariety of Un(C). (The transpose of the embedding
G ↪ Un(C) is the operation of restriction of polynomial functions and
is surjective. Hence AG is a quotient of the polynomial algebra A of
Un(C). By Exercise 3.24, A is generated by the coordinate functions
Un(C) −→ C, x = (xij) ↦ xij
and their conjugates.)
4 Decomposition of the regular
representation
Lemma 4.1. Let V ⊆ L2(G) be a finite dimensional subspace which is
invariant under the right regular representation of G. Then V consists of
(classes of) continuous functions and each f ∈ V can be written as
f(x) = Tr(Aπ(x)) for some A ∈ End(V )
Here π denotes the restriction of the right regular representation to V .
Proof. Take an orthonormal basis (χi) of V and recall the coefficients
cji of π defined by
π(x)χi = ∑j cji(x) χj, x ∈ G
For f = ∑i ai χi, we have
π(x)f = ∑i ai π(x)χi = ∑i,j ai cji(x) χj
Hence
f(x) = [r(x)f](e) = ∑i,j cji(x) aij (with aij = ai χj(e))
Thus,
f(x) = Tr(Aπ(x))
as claimed. ∎
Let (π, V) be any finite dimensional representation of the compact
group G. For any endomorphism A ∈ End(V), we define the
corresponding coefficient cA of π by cA(x) = Tr(Aπ(x)). The right
translates of these coefficients are easily identified:
[r(s)cA](x) = cA(xs) = Tr(Aπ(x)π(s)) = Tr(π(s)Aπ(x)) = Tr(Bπ(x)) = cB(x)
where B = π(s)A.
Consider the representation of G in End(V) defined by
lπ(s)A = π(s)A (s ∈ G, A ∈ End(V))
The above computations show that A → cA is a G-morphism
c : End(V) −→ C(G) ⊆ L2(G)
(intertwining lπ and r).
Suppose now that (π, V) is an irreducible finite dimensional
representation of the compact group G.
Write L2(G, π) for the image of End(V) under the map c. Note that
the vector space L2(G, π) only depends on the equivalence class of π.
The representation (lπ, End(V)) is equivalent to π ⊕ · · · ⊕ π (n times,
where n = dim(V)) and L2(G, π) is r-invariant, so the restriction of r
to L2(G, π) is equivalent to π ⊕ · · · ⊕ π (m times for some m ≤ n).
Thus, L2(G, π) has dimension mn ≤ n2.
If V′ is an r-invariant subspace of L2(G) such that the restriction of r to
V′ is equivalent to π, then V′ is a subspace of L2(G, π) by Lemma 4.1.
Hence, L2(G, π) is the sum of all subrepresentations of (L2(G), r) which
are equivalent to π.
Definition 4.2. Let π be a finite dimensional irreducible representation
of a compact group G. The space L2(G,π) consisting of the sum of all
subspaces of the right regular representation which are equivalent to π is
called the isotypical component of π in L2(G).
Note that a function f ∈ L2(G) belongs to L2(G,π) precisely when the
right translates of f generate an invariant subspace (of the right regular
representation) equivalent to a finite multiple of π (that is, a finite
direct sum of subrepresentations equivalent to π).
We shall now prove that the dimension of an isotypical component
L2(G, π) is exactly (dim π)2.
Recall the G-morphism c : End(V) → L2(G), A ↦ cA := Tr(Aπ(·)).
The fact that cA ≠ 0 for A ≠ 0 will be deduced from a computation of
the quadratic norm of these coefficient functions. It is easier to start
with linear mappings of rank ≤ 1. We use the isomorphism
V̌ ⊗ V −→ End(V) (V̌ = dual of V)
defined as follows: if u ∈ V̌ and v ∈ V, the operator corresponding to
u ⊗ v is
u ⊗ v : x → u(x)v = 〈u, x〉v
The image of u ⊗ v consists of multiples of v, and u ⊗ v has rank 1
when u and v are non-zero (quite generally, decomposable tensors
correspond to operators of rank ≤ 1). The coefficient cA with respect
to the operator A = u ⊗ v coincides with the previously defined
coefficient:
cuv(x) = 〈u, π(x)v〉 = cu⊗v(x)
Lemma 4.3. Let π and σ be two representations of a compact group G
and A : Vπ → Vσ a linear mapping. Then
A♮ = ∫G σ(g) A π(g)−1 dg
is a G-morphism from Vπ to Vσ, namely A♮ ∈ HomG(Vπ, Vσ).
Proof. We have
A♮ π(s) = ∫ σ(g) A π(g)−1 π(s) dg = ∫ σ(g) A π(s−1g)−1 dg
and replacing g by sg (i.e. s−1g by g),
A♮ π(s) = ∫ σ(sg) A π(g)−1 dg = σ(s) A♮  ∎
Thus the averaging operation (given by the Haar integral) of Lemma 4.3
gives a projector
♮ : Hom(Vπ, Vσ) → HomG(Vπ, Vσ), A → A♮
In particular, when π and σ are disjoint, i.e. HomG(Vπ, Vσ) = 0, we
must have A♮ = 0. This is certainly the case when π and σ are
non-equivalent irreducible representations (Schur's lemma). Another
case of special interest is π = σ, finite dimensional and irreducible.
Schur's lemma gives HomG(Vπ, Vπ) = C·id, and thus A♮ = λA·id is a
scalar operator.
Proposition 4.4. If π is a finite dimensional irreducible representation
of the compact group G in V, the projector
End(V) → EndG(V) = C·id, A → A♮ = λA·id,
is given explicitly by the following formula:
A♮ = ∫G π(g) A π(g)−1 dg = (Tr(A)/dim V)·idV
Proof. Since we know a priori that the operator A♮ is a scalar operator
λA·id, we can determine the value of the scalar by taking traces in the
defining equalities:
λA Tr(idV) = Tr(∫G π(g) A π(g)−1 dg) = ∫G Tr(π(g) A π(g)−1) dg
= ∫G Tr(A) dg = Tr(A)
Since Tr(idV) = dim V, this gives λA = Tr(A)/dim V. ∎
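Proposition 4.4 can be tested numerically on a finite group, where the Haar integral becomes a uniform average over the group. A sketch using the irreducible two dimensional representation of S3 by the six symmetries of a triangle (the concrete matrices and the operator A below are our own choices):

```python
import math

c, s = math.cos(2 * math.pi / 3), math.sin(2 * math.pi / 3)

def mmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv(X):
    """These matrices are orthogonal, so X^-1 = X^T."""
    return [[X[0][0], X[1][0]], [X[0][1], X[1][1]]]

I = [[1.0, 0.0], [0.0, 1.0]]
R = [[c, -s], [s, c]]            # rotation by 120 degrees
M = [[1.0, 0.0], [0.0, -1.0]]    # a reflection
rots = [I, R, mmul(R, R)]
G = rots + [mmul(M, X) for X in rots]   # the 6 symmetries of a triangle, i.e. S3

A = [[2.0, 1.0], [3.0, -5.0]]    # an arbitrary operator
# A-natural = average of pi(g) A pi(g)^-1 over the group
An = [[sum(mmul(mmul(g, A), inv(g))[i][j] for g in G) / len(G)
       for j in range(2)] for i in range(2)]

trA = A[0][0] + A[1][1]
expected = [[trA / 2, 0.0], [0.0, trA / 2]]   # (Tr(A)/dim V) * id
ok = all(abs(An[i][j] - expected[i][j]) < 1e-9 for i in range(2) for j in range(2))
print(ok)
```

Since this representation is irreducible, the averaged operator collapses to the scalar Tr(A)/2, exactly as the proposition predicts.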
Theorem 4.5 (Schur's orthogonality relations). Let G be a compact
group and π, σ two finite dimensional irreducible representations of
G. Assume that π and σ are unitary. Then
(a) If π and σ are non-equivalent, L2(G, π) and L2(G, σ) are
orthogonal in L2(G).
(b) If π and σ are equivalent, L2(G, π) = L2(G, σ), and the inner
product of two coefficients of this space is given by (with ‾ denoting
complex conjugation)
〈cuv, cxy〉 = ∫G 〈u, π(g)v〉‾ 〈x, π(g)y〉 dg = 〈u, x〉‾ 〈v, y〉/dim V
(c) More generally, in the case π = σ, the inner product of general
coefficients is given by
〈cA, cB〉 = ∫G Tr(Aπ(g))‾ Tr(Bπ(g)) dg = Tr(A∗B)/dim V
Proof. (a) follows from Lemma 4.3 and (b) follows similarly from
Proposition 4.4. It will be enough to show how (b) is derived. For this
purpose, we consider the particular operators v ⊗ y (v ∈ V̌π, y ∈ Vπ)
and apply the result of the proposition:
∫G π(g)(v ⊗ y)π(g)−1 dg = (Tr(v ⊗ y)/dim V)·idV = (〈v, y〉/dim V)·idV
Let us apply this operator to the vector u and take the inner product
with x.
By the invariance of the Haar measure, the operator B commutes with
all the operators σ(x).
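For G = Z/n every irreducible representation is one dimensional, πk(x) = exp(2πikx/n), and each matrix coefficient equals the character itself, so the orthogonality relations reduce to orthonormality of these exponentials. A sketch under that finite-abelian specialization (our own discretization, with the Haar integral replaced by the normalized counting measure):

```python
import cmath

n = 5

def chi(k):
    """The k-th character of Z/n: chi_k(x) = exp(2*pi*i*k*x/n)."""
    return lambda x: cmath.exp(2j * cmath.pi * k * x / n)

def inner(f, g):
    """L2 inner product for normalized Haar (counting) measure,
    conjugate-linear in the first argument."""
    return sum(f(x).conjugate() * g(x) for x in range(n)) / n

gram = [[inner(chi(j), chi(k)) for k in range(n)] for j in range(n)]
ok = all(abs(gram[j][k] - (1 if j == k else 0)) < 1e-12
         for j in range(n) for k in range(n))
print(ok)
```

Here dim V = 1, so the factor 1/dim V in Theorem 4.5(b) is invisible and the Gram matrix of the characters is exactly the identity.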
Hence, if we decompose σ into isotypical components
σ ≅ ⊕π nπ·π ≅ ⊕π π ⊗ 𝟙nπ
the operator B will have the form
B = ⊕π iddim π ⊗ Bπ
and
Bσ(x) = σ(x)B ≅ ⊕π π(x) ⊗ Bπ,
Tr(Bσ(x)) = ∑π aπ χπ(x) (aπ = Tr(Bπ))
This shows that
|f(x) − ∑finite aπ χπ(x)| < ε
Theorem 6.2. Let π and σ be two finite dimensional representations of
a compact group G with respective characters χπ and χσ. Then
〈χπ, χσ〉 = dim HomG(Vπ, Vσ)
Proof. By Lemma 4.3, we know that the integral
∫ π̌(x) ⊗ σ(x) dx
is an expression for the projector
♮ : V̌π ⊗ Vσ −→ (V̌π ⊗ Vσ)G
Hom(Vπ, Vσ) −→ HomG(Vπ, Vσ)
The dimension of the image space is the trace of this projector. Thus
〈χπ, χσ〉 = ∫ χπ(x)‾ χσ(x) dx = dim HomG(Vπ, Vσ)  ∎
Corollary 6.3. Let π be a finite dimensional representation of G. Then
π is irreducible ⇐⇒ ||χπ||₂ = √〈χπ, χπ〉 = 1
Corollary 6.4. Let π, σ ∈ Ĝ. Then
〈χπ, χσ〉 = δπσ (= 1 if π is equivalent to σ, 0 otherwise)
Corollary 6.5. Let σ be a finite dimensional representation of G and
σ = ⊕π∈I nπ·π (summation over a finite subset I ⊆ Ĝ) be a
decomposition into irreducible components. Then
(a) nπ = 〈χπ, χσ〉 is well determined
(b) ||χσ||² = Σπ∈I nπ²
Proof. Let Vσ = ⊕τ (Vτ ⊗ C^{nτ}). Since each G-morphism Vσ → Vπ
must vanish on all isotypical components Vτ ⊗ C^{nτ} where τ is not
equivalent to π, we have
HomG(Vσ, Vπ) = HomG(Vπ ⊗ C^{nπ}, Vπ)
= C^{nπ} ⊗ HomG(Vπ, Vπ) = C^{nπ}
This proves assertion (a). Moreover (b) follows from the Pythagorean
theorem and Corollary 6.4.
114
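As a concrete instance of Corollary 6.5 (again an illustration, not from the notes), the permutation representation of S3 on C³ has character θ(w) = #{fixed points of w}. Computing nπ = 〈χπ, θ〉 recovers the decomposition θ = trivial ⊕ standard, and ||θ||² = 1² + 1² = 2:

```python
from itertools import permutations

G = list(permutations(range(3)))  # S3

def fix(w):
    return sum(w[i] == i for i in range(3))

def sign(w):
    inv = sum(w[i] > w[j] for i in range(3) for j in range(i + 1, 3))
    return (-1) ** inv

theta = {w: fix(w) for w in G}  # character of the permutation rep on C^3

irreducible = {
    "trivial":  lambda w: 1,
    "sign":     sign,
    "standard": lambda w: fix(w) - 1,
}

# n_pi = <chi_pi, theta>, as in Corollary 6.5(a)
mults = {name: sum(chi(w) * theta[w] for w in G) / len(G)
         for name, chi in irreducible.items()}

norm2 = sum(theta[w] ** 2 for w in G) / len(G)  # ||theta||^2, Corollary 6.5(b)
```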
Corollary 6.6. The set of characters (χπ)π∈Ĝ is an orthonormal basis
of the Hilbert space L2(G)inv. Every function f ∈ L2(G)inv can be
expanded in the series
f = Σπ∈Ĝ 〈χπ, f〉χπ   (convergence in L2(G))
115
Theorem 6.7. Let G be a compact group. For π ∈ Ĝ, let Pπ denote
the projector L2(G) → L2(G, π) onto the isotypical component of π (in
the right regular representation). Then Pπ is given by convolution
with the normalized character ϑπ = dim π · χπ:
Pπ : f 7→ fπ = Pπf = f ∗ ϑπ
Proof. We have already seen that
fπ(x) = dim π · Tr(π(f)π(x))
Thus
fπ(x) = dim π · Tr(∫ f(y−1)π(y)π(x) dy)
= dim π · ∫ f(y) Tr(π(y−1x)) dy = (f ∗ ϑπ)(x)
116
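For the finite abelian group Z/4 (a toy stand-in for a general compact group, with normalized counting measure as Haar measure) the irreducible characters are χk(t) = e^{2πikt/4}, each of dimension 1, so ϑk = χk. The sketch below, not part of the notes, checks that convolution with χk projects onto the span of χk and that the components sum back to f, which is also Fourier inversion in the sense of Exercise 6.9:

```python
import numpy as np

n = 4
x = np.arange(n)
# the characters of Z/4: chi_k(t) = exp(2*pi*i*k*t/4), each 1-dimensional
chi = [np.exp(2j * np.pi * k * x / n) for k in range(n)]

def conv(f, g):
    # normalized convolution: (f*g)(t) = (1/n) * sum_y f(y) g(t - y)
    return np.array([sum(f[y] * g[(t - y) % n] for y in range(n)) / n
                     for t in range(n)])

rng = np.random.default_rng(0)
f = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# P_k f = f * theta_k, with theta_k = dim(pi_k) * chi_k = chi_k here
components = [conv(f, c) for c in chi]
recon = sum(components)  # should reproduce f
```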
Exercise 6.8. Let H1 and H2 be two Hilbert spaces. Prove that any
operator A in H1 ⊗ H2 which commutes with all operators T ⊗ 1,
T ∈ End(H1), can be written in the form 1 ⊗ B for some B ∈ End(H2).
(Hint: Introduce an orthonormal basis (ei) of H1 and write A as a
matrix of blocks with respect to this basis
A(ej ⊗ x) = Σi ei ⊗ Aij x   (Aij ∈ End(H2)).
Using the commutations (Pj ⊗ 1)A = A(Pj ⊗ 1), where Pj is the
orthogonal projector on Cej, conclude that Aij = 0 for i ≠ j. Finally,
using the commutation relations of A with the operators Uji ⊗ 1,
Uji(ei) = ej, Uji(ek) = 0 for k ≠ i,
conclude that Aii = B ∈ End(H2) is independent of i.)
117
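In finite dimensions the statement of Exercise 6.8 can be probed numerically; the following sketch uses dimensions and variable names of my own choosing. It builds an operator of the form 1 ⊗ B, checks that it commutes with a random T ⊗ 1, and recovers B as each diagonal block while the off-diagonal blocks vanish, exactly as in the hint:

```python
import numpy as np

rng = np.random.default_rng(1)
d1, d2 = 3, 2  # dim H1, dim H2 (small illustrative example)

# an operator of the form 1 ⊗ B on H1 ⊗ H2
B = rng.standard_normal((d2, d2))
A = np.kron(np.eye(d1), B)

# it commutes with every T ⊗ 1, T in End(H1)
T = rng.standard_normal((d1, d1))
TI = np.kron(T, np.eye(d2))
commutes = np.allclose(A @ TI, TI @ A)

# conversely, B reappears as each diagonal block A_ii, independently of i,
# while the off-diagonal blocks A_ij (i != j) are zero
diag_blocks = [A[i * d2:(i + 1) * d2, i * d2:(i + 1) * d2] for i in range(d1)]
off_block = A[0:d2, d2:2 * d2]  # the (1, 2) block
```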
Exercise 6.9. Check that the formula
f =∑〈χπ , f〉χπ
coincides with the Fourier inversion formula.
118
7 Induced representations and
Frobenius-Weil reciprocity
Suppose that K is a closed (hence compact) subgroup of G.
Recall that K\G is the space of right cosets Kg, g ∈ G.
119
Suppose for the moment that K\G is finite, i.e.
K\G = {Kg1, . . . ,Kgn} for some n so that G is the disjoint union of
Kg1, . . . ,Kgn.
For each s ∈ G we have an associated permutation π(s) of {1, . . . , n}
that sends i to the unique j with Kgi s = Kgj.
sorted, so by induction on n (or on the number of rows)
we have (P′, Q′) = (Q, P) and the proof follows.
266
Corollary 9.40. Let A be an N-matrix of finite support, and let
A RSK→ (P,Q). Then A is symmetric (i.e. A = At)
if and only if P = Q.
Proof. Immediate from the fact that At RSK→ (Q,P ).
267
Corollary 9.41. Let A = At and A RSK→ (P, P ), and let
α = (α1, α2, . . .) where αi ∈ N and Σ αi < ∞. Then the map A 7→ P
establishes a bijection between symmetric N-matrices with row(A) = α
and SSYTs of type α.
Proof. Follows from Corollary 9.40 and Theorem 9.30.
268
Corollary 9.42. We have
1 / (∏i (1− xi) · ∏i<j (1− xixj)) = Σλ sλ(x)   (9.40)
summed over all λ ∈ Par.
Proof. The coefficient of xα on the left side is the number of symmetric
N-matrices A with row(A) = α while the coefficient of xα on the right
hand side is the number of SSYTs of type α. Now apply Corollary
9.41.
269
Corollary 9.43. We have
Σλ⊢n fλ = #{w ∈ Sn : w² = 1}
the number of involutions in Sn.
Proof. Let w ∈ Sn and w RSK→ (P,Q) where P and Q are SYT of the
same shape λ ⊢ n. The permutation matrix corresponding to w is
symmetric if and only if w² = 1. By Theorem 9.37 this is the case if and
only if P = Q, and the proof follows.
Alternatively, take the coefficient of x1 · · · xn on both sides of
(9.40).
270
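The identity of Corollary 9.43 is easy to test for small n. The sketch below (an illustration, not from the text; it uses the hook length formula for fλ, a standard fact not proved in this section) compares Σ fλ over partitions of 4 with a brute-force count of involutions in S4:

```python
from itertools import permutations
from math import factorial

def partitions(n, maxpart=None):
    # all partitions of n as weakly decreasing tuples
    if maxpart is None:
        maxpart = n
    if n == 0:
        yield ()
        return
    for k in range(min(n, maxpart), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def f_lambda(shape):
    # number of SYT of a given shape, via the hook length formula
    n = sum(shape)
    cols = [sum(1 for r in shape if r > j) for j in range(shape[0])]
    hooks = 1
    for i, row in enumerate(shape):
        for j in range(row):
            hooks *= (row - j) + (cols[j] - i) - 1
    return factorial(n) // hooks

n = 4
syt_sum = sum(f_lambda(lam) for lam in partitions(n))
involutions = sum(1 for w in permutations(range(n))
                  if all(w[w[i]] == i for i in range(n)))
```

Both counts equal 10 for n = 4 (the identity, six transpositions, and three double transpositions).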
9.12 The dual RSK Algorithm
There is a variation of the RSK algorithm that is related to the product
∏(1 + xiyj) in the same way that the RSK algorithm itself is related to
∏(1− xiyj)−1. We call this variation the dual RSK algorithm and
denote it by A RSK∗→ (P,Q). The matrix A will now be a (0, 1)-matrix of
finite support. Form the two-line array wA just as before. The RSK∗
algorithm proceeds exactly like the RSK algorithm, except that an
element i bumps the leftmost element ≥ i, rather than the leftmost
element > i. (In particular, RSK and RSK∗ agree for permutation
matrices.) It follows that each row of P is strictly increasing.
271
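The bumping rule just described can be sketched in code; the function names below are my own, and only the dual (≥) rule is exercised, with the plain RSK rule kept as a flag for comparison. On the (0,1)-matrix A = [[1,0,1],[1,1,0]] the two-line array consists of the pairs (1,1), (1,3), (2,1), (2,2):

```python
from bisect import bisect_left, bisect_right

def row_insert(P, Q, x, label, dual):
    """Insert x into P by bumping; record label in Q at the new cell.

    RSK  bumps the leftmost entry >  x  (bisect_right),
    RSK* bumps the leftmost entry >= x  (bisect_left).
    """
    row = 0
    while True:
        if row == len(P):            # fell off the bottom: start a new row
            P.append([x])
            Q.append([label])
            return
        r = P[row]
        pos = bisect_left(r, x) if dual else bisect_right(r, x)
        if pos == len(r):            # nothing to bump: append at the end
            r.append(x)
            Q[row].append(label)
            return
        r[pos], x = x, r[pos]        # bump, continue in the next row
        row += 1

def dual_rsk(A):
    """RSK* applied to a (0,1)-matrix A, given as a list of rows."""
    P, Q = [], []
    for i, rowvals in enumerate(A, start=1):
        for j, a in enumerate(rowvals, start=1):
            if a:                    # the two-line array lists the pair (i, j)
                row_insert(P, Q, j, i, dual=True)
    return P, Q

P, Q = dual_rsk([[1, 0, 1], [1, 1, 0]])
```

Here P = [[1, 2], [1, 3]] has strictly increasing rows and Q = [[1, 1], [2, 2]] is an SSYT, with col(A) = type(P) and row(A) = type(Q).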
Theorem 9.44. The RSK∗ algorithm is a bijection between
(0, 1)-matrices A of finite support and pairs (P,Q) such that Pt (the
transpose of P ) and Q are SSYTs with sh(P ) = sh(Q). Moreover,
col(A) = type(P ) and row(A) = type(Q).
272
Theorem 9.45. We have
∏i,j (1 + xiyj) = Σλ sλ′(x) sλ(y)
273
Lemma 9.46. Let ωy denote ω acting on the y variables only (so we
regard the xi’s as constants commuting with ω). Then
ωy ∏(1− xiyj)−1 = ∏(1 + xiyj)
Proof. We have
ωy ∏(1− xiyj)−1 = ωy Σλ mλ(x)hλ(y)   (by Proposition 9.7)
= Σλ mλ(x)eλ(y)   (by Theorem 9.8)
= ∏(1 + xiyj)   (by Proposition 9.3)
274
Theorem 9.47. For every λ ∈ Par we have
ωsλ = sλ′
Proof. We have
Σλ sλ(x)sλ′(y) = ∏(1 + xiyj)   (by Theorem 9.45)
= ωy ∏(1− xiyj)−1   (by Lemma 9.46)
= ωy Σλ sλ(x)sλ(y)   (by Theorem 9.32)
= Σλ sλ(x) ωy(sλ(y))
Take the coefficient of sλ(x) on both sides. Since the sλ(x)’s are linearly
independent, we obtain sλ′(y) = ωy(sλ(y)), or just sλ′ = ωsλ.
275
9.13 The Classical definition of the Schur functions
Let α = (α1, α2, . . . , αn) ∈ Nn and w ∈ Sn. As usual write
xα = x1^{α1} · · · xn^{αn} and define
w(xα) = x1^{αw(1)} · · · xn^{αw(n)}
Now define
aα = aα(x1, . . . , xn) = Σw∈Sn εw w(xα)   (9.41)
where εw = +1 if w is an even permutation and −1 if w is an odd
permutation.
276
Note that the right-hand side of equation (9.41) is just the expansion of
a determinant, namely
aα = det(xi^{αj})_{i,j=1}^n
Note also that aα is skew-symmetric, i.e. w(aα) = εw aα, so aα = 0
unless all the αi’s are distinct. Hence assume that
α1 > α2 > · · · > αn ≥ 0, so α = λ + δ, where λ ∈ Par, l(λ) ≤ n, and
δ = δn = (n− 1, n− 2, . . . , 0). Since αj = λj + n− j, we get
aα = aλ+δ = det(xi^{λj+n−j})_{i,j=1}^n   (9.42)
277
For instance,
a421 = a(2,1,1)+(2,1,0) =
∣ x1⁴ x1² x1 ∣
∣ x2⁴ x2² x2 ∣
∣ x3⁴ x3² x3 ∣
Note in particular that
aδ = det(xi^{n−j}) = ∏_{1≤i<j≤n} (xi − xj)   (9.43)
the Vandermonde determinant. If for some i ≠ j we put xi = xj in aα,
then because aα is skew-symmetric (or because the i-th row and j-th
row of the determinant (9.42) become equal), we obtain 0.
278
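Equation (9.43) can be spot-checked at integer points, computing the alternant directly from the signed sum (9.41). A small sketch, not in the original, using hypothetical helper names:

```python
from itertools import permutations
from math import prod

def sgn(w):
    # parity of a permutation via its inversion count
    inv = sum(w[i] > w[j] for i in range(len(w)) for j in range(i + 1, len(w)))
    return (-1) ** inv

def a_alpha(alpha, xs):
    # a_alpha = sum_w eps_w * x1^{alpha_w(1)} ... xn^{alpha_w(n)}, eq. (9.41)
    n = len(xs)
    return sum(sgn(w) * prod(xs[i] ** alpha[w[i]] for i in range(n))
               for w in permutations(range(n)))

def vandermonde(xs):
    n = len(xs)
    return prod(xs[i] - xs[j] for i in range(n) for j in range(i + 1, n))

delta = (2, 1, 0)  # delta_3 = (n-1, n-2, ..., 0) for n = 3
checks = [a_alpha(delta, xs) == vandermonde(xs)
          for xs in [(3, 2, 1), (5, 3, 2), (7, 4, 1)]]
```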
Hence aα is divisible by xi − xj and thus by aδ (in the ring
Z[x1, . . . , xn]). Thus aα/aδ ∈ Z[x1, . . . , xn]. Moreover, since aα and aδ
are skew-symmetric, the quotient is symmetric, and is clearly
homogeneous of degree |α| − |δ| = |λ|. In other words, aα/aδ ∈ Λn^{|λ|}.
279
Theorem 9.48. We have
aλ+δ/aδ = sλ(x1, . . . , xn)
280
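A quick numerical sanity check of Theorem 9.48 (an illustration, not from the text; the monomial expansion s21 = m21 + 2·m111 is quoted as a known Kostka-number fact): for λ = (2, 1) and n = 3 we verify aλ+δ = sλ · aδ at several integer points, which avoids any polynomial division:

```python
from itertools import permutations
from math import prod

def sgn(w):
    inv = sum(w[i] > w[j] for i in range(len(w)) for j in range(i + 1, len(w)))
    return (-1) ** inv

def a_alpha(alpha, xs):
    # the alternant of equation (9.41)
    n = len(xs)
    return sum(sgn(w) * prod(xs[i] ** alpha[w[i]] for i in range(n))
               for w in permutations(range(n)))

def s21(xs):
    # s_{21} = m_{21} + 2*m_{111}: known monomial expansion for lambda = (2,1)
    m21 = sum(xs[i] ** 2 * xs[j] for i in range(3) for j in range(3) if i != j)
    return m21 + 2 * prod(xs)

delta = (2, 1, 0)              # delta_3
alpha = (4, 2, 0)              # lambda + delta with lambda = (2, 1, 0)
points = [(3, 2, 1), (5, 3, 2), (7, 4, 1)]
ok = all(a_alpha(alpha, xs) == s21(xs) * a_alpha(delta, xs) for xs in points)
```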
Proof. There are many proofs of this result. We give one that can be
extended to give an important result on skew Schur functions (Theorem
9.51).
Applying ω to (9.36) and replacing λ by λ′ yields
eµ = Σλ Kλ′µ sλ
Since the matrix (Kλ′µ) is invertible, it suffices to show that
eµ(x1, . . . , xn) = Σλ Kλ′µ aλ+δ/aδ
or equivalently (always working with n variables),
aδ eµ = Σλ Kλ′µ aλ+δ   (9.44)
281
Since both sides of (9.44) are skew-symmetric, it is enough to show that
the coefficient of xλ+δ in aδeµ is Kλ′µ. We multiply aδ by eµ by