AUTOMORPHIC L-FUNCTIONS
SALIM ALI ALTUĞ
NOTES TAKEN BY PAK-HIN LEE AND REVISED BY SALIM ALI ALTUĞ
Abstract. Here are the notes I took for Salim Ali Altuğ's course on automorphic L-functions offered at Columbia University in Spring 2015 (MATH G6675: Topics in Number Theory). Hopefully these notes will be completed by Spring 2016. I recommend that you visit my website from time to time for the most updated version.
Due to my own lack of understanding of the materials, I have inevitably introduced both mathematical and typographical errors in these notes. Please send corrections and comments to [email protected] and [email protected].
Contents

1. Lecture 1 (January 20, 2015)
   1.1. Introduction
   1.2. Modular forms
2. Lecture 2 (January 22, 2015)
   2.1. Last time
   2.2. Spectral theory

The following has not been revised by Ali yet. Use with caution!

3. Lecture 3 (January 29, 2015)
   3.1. Last time
   3.2. Real-analytic Eisenstein series
   3.3. A (classical) computation
   3.4. Why is the L2 theory helpful?
4. Lecture 4 (February 3, 2015)
   4.1. Last time
   4.2. Analytic continuation
   4.3. Application: Prime number theorem
   4.4. Application: Gauss class number problem
   4.5. Application: Volumes of fundamental domains
5. Lecture 5 (February 5, 2015)
   5.1. Convergence of averaging
   5.2. Reminder (last time)
   5.3. Modular forms as Maass forms
   5.4. From H to group
   5.5. To adelic group
   5.6. Eisenstein series (adelically)
Last updated: November 19, 2015.
6. Lecture 6 (February 10, 2015)
   6.1. Last time
   6.2. Eisenstein series (adelic)
   6.3. Constant term
7. Lecture 7 (February 12, 2015)
   7.1. Algebraic group theory
   7.2. Roots, weights, etc.
8. Lecture 8 (February 17, 2015)
   8.1. Root systems
   8.2. Parabolic (sub)groups
   8.3. Decompositions of parabolics
9. Lecture 9 (February 19, 2015)
   9.1. Last time
   9.2. Character spaces
   9.3. The Hp function
10. Lecture 10 (February 24, 2015)
   10.1. Last time
   10.2. More on parabolics
   10.3. Parabolic induction
   10.4. Eisenstein series
11. Lecture 11 (February 26, 2015)
   11.1. Last time
   11.2. A concrete example
   11.3. Intertwining
   11.4. Decomposition of L2(G(F)\G(A))
   11.5. Drawings
12. Lecture 12 (March 3, 2015)
   12.1. Integration
   12.2. Decomposition of Haar measures
   12.3. Applications
   12.4. Convergence of Eisenstein series
   12.5. A little bit of reduction theory
   12.6. Back to Eisenstein series
1. Lecture 1 (January 20, 2015)
1.1. Introduction. One central theme in number theory is automorphic L-functions. In the 1960's it was unclear what a general L-function is supposed to be; there were certainly the Riemann zeta function ζ(s), the L-functions L(s, χ) due to Hecke et al., Artin L-functions, and also suggestions from Tamagawa. My goal is to show how automorphic L-functions arose since 1967. The whole theory was born from the Eisenstein series, particularly their constant terms, and was really a coincidence, since Langlands was studying these constant terms just to kill time (more on this later).
I will go over Langlands' book Euler Products¹, which was the manuscript written on his lectures at Yale in 1967. In order to talk about this, we need to define a lot of things. One warning is that this book is not easy to read, as is anything that Langlands wrote. I will keep updating the list of references on my website.
This theory was later known as the Langlands–Shahidi method, and was used to prove functoriality of Sym³ and Sym⁴ of cuspidal automorphic representations of GL(2). One downside of this method is that it is limited; this will be made precise later.
I will try to keep things self-contained. The theory of Eisenstein series involves:
• reductive groups (roots, weights, parabolics, etc.),
• representation theory of local groups (unramified principal series, Satake isomorphism, Langlands' interpretation as L-groups),
• reduction theory (Borel–Harish-Chandra),
• computation.
This topic can be pretty dry if we just want to go over these things, so I will give more examples. There is also one more blackbox, which is the analytic continuation of Eisenstein series. Essentially this is the heart of the matter.
If we can cover all of this, I am open to suggestions.
1.2. Modular forms. A reference for this is Serre's A Course in Arithmetic.
Historically, modular forms arose from elliptic integrals in the 1800's. These are at the moment irrelevant to L-functions. Elliptic integrals gave rise to elliptic curves and elliptic functions, and these gave rise to modular forms. I am not going to write down elliptic integrals, but they are essentially integrals over elliptic curves. Let me start with elliptic functions, which came from trying to invert elliptic integrals.
An elliptic function is a meromorphic function f : C → C such that there exist w1, w2 ∈ C with w1/w2 ∉ R and f(w1 + z) = f(w2 + z) = f(z). There are two names attached to these functions: Weierstrass (1815–1897) and Jacobi (1804–1851).
• Weierstrass attached to any given lattice Λ the function
\[ \wp_\Lambda(z) = \frac{1}{z^2} + \sum_{w \in \Lambda \setminus \{0\}} \left( \frac{1}{(z-w)^2} - \frac{1}{w^2} \right) \]
which is meromorphic and periodic.
• Jacobi came before Weierstrass, and literally took the elliptic integral
\[ u = \int_0^{\varphi} \frac{dt}{\sqrt{1 - k^2 \sin^2 t}} \]
¹Available at http://publications.ias.edu/sites/default/files/ep-ps.pdf.
where 0 < k² < 1. This is an elliptic integral of the first kind. Jacobi's function sn (one of a total of twelve such functions) is defined by
\[ \operatorname{sn}(u) = \sin(\varphi). \]
It is a fact that this is an elliptic function, with
\[ \operatorname{sn}(u + 2mK + 2niK') = \operatorname{sn}(u) \]
where
\[ K = \int_0^{\pi/2} \frac{dt}{\sqrt{1 - k^2 \sin^2 t}} \quad \text{and} \quad K' = \int_0^{\pi/2} \frac{dt}{\sqrt{1 - (1-k^2) \sin^2 t}} \]
are the complete elliptic integrals.
I introduced Jacobi's function only for fun, but Weierstrass' function will be useful. One important property of ℘_Λ(z) is
\[ (\wp'_\Lambda(z))^2 = 4\wp_\Lambda(z)^3 - 60 G_2(\Lambda) \wp_\Lambda(z) - 140 G_3(\Lambda). \]
This is actually bad notation since in a second we will be using z ∈ H for the variable of G_k (the translation is by writing Λ = ω1 Z + ω2 Z such that ω1/ω2 ∈ H; then Λ corresponds to z = ω1/ω2 and vice versa), but we will change this in a second. Note that (℘_Λ(z), ℘'_Λ(z)) gives an explicit point on the elliptic curve
\[ y^2 = 4x^3 - 60 G_2 x - 140 G_3. \]
Definition 1.1 (Eisenstein series). For k ∈ Z, the holomorphic Eisenstein series of weight 2k is defined as
\[ G_k(z) = \sum_{(m,n) \in \mathbb{Z}^2 \setminus \{(0,0)\}} \frac{1}{(mz+n)^{2k}}. \]
This converges for k > 1. Given a lattice Λ, we get a point z ∈ H. By G_k(Λ) we mean G_k evaluated at z.
Here is a fun fact: if we set
\[ u = \int_y^{\infty} \frac{ds}{\sqrt{4s^3 - 60 G_2 s - 140 G_3}}, \]
then y = ℘(u).
One property of G_k is that
\[ G_k(z+1) = G_k(z), \]
so we can write a Fourier series expansion
\[ G_k(z) = \sum_n a_n q^n \]
where q = e^{2πiz}. The whole class is essentially about calculating a_0 for general Eisenstein series. For holomorphic Eisenstein series, the constant term a_0 is actually a constant. When we introduce non-holomorphic Eisenstein series, the constant term will depend on y: the general Fourier expansion looks like
\[ \sum_n a_n f_n(y) e^{2\pi i n x}. \]
Here is a word on the broader picture. In case you have seen this before, the whole theory digresses into two branches: arithmetic and analytic. These holomorphic Eisenstein series are arithmetic objects. But they do not appear in L²(Γ\G) and have no place in the spectral theory, which originated in the 1950's.
Let us carry out an ad hoc calculation of the Fourier expansion. Recall that
\[ \pi \cot \pi z = \frac{1}{z} + \sum_{m=1}^{\infty} \left( \frac{1}{z+m} + \frac{1}{z-m} \right). \]
Since
\[ \cot x = \frac{\cos x}{\sin x} = i\, \frac{e^{ix} + e^{-ix}}{e^{ix} - e^{-ix}}, \]
we get
\[ \cot \pi z = i \left( 1 + \frac{2}{q-1} \right) = i \left( 1 - 2 \sum_{n=0}^{\infty} q^n \right). \]
On the other hand, note that
\[ \frac{d^{2k-1}}{dz^{2k-1}} \pi \cot \pi z = \frac{(-1)^{2k-1}(2k-1)!}{z^{2k}} + (-1)^{2k-1}(2k-1)! \sum_{m=1}^{\infty} \left( \frac{1}{(z+m)^{2k}} + \frac{1}{(z-m)^{2k}} \right) = (-1)^{2k-1}(2k-1)! \sum_{m \in \mathbb{Z}} \frac{1}{(z+m)^{2k}}. \]
By the q-expansion, we have
\[ \frac{(-2\pi i)^{2k}}{(2k-1)!} \sum_{n=0}^{\infty} n^{2k-1} q^n = \sum_{m \in \mathbb{Z}} \frac{1}{(m+z)^{2k}}. \]
This is so ad hoc it appears to come out of nowhere. Let us go back to G_k.
\[ \begin{aligned} G_k(z) &= \mathop{{\sum}'}_{m,n} \frac{1}{(mz+n)^{2k}} \\ &= 2\zeta(2k) + \sum_{m \neq 0,\, n} \frac{1}{(mz+n)^{2k}} \\ &= 2\zeta(2k) + 2 \sum_{m=1}^{\infty} \sum_n \frac{1}{(mz+n)^{2k}} \\ &= 2\zeta(2k) + 2 \sum_{m=1}^{\infty} \frac{(-2\pi i)^{2k}}{(2k-1)!} \sum_{a=0}^{\infty} a^{2k-1} q^{am} \\ &= 2\zeta(2k) + \frac{2(2\pi i)^{2k}}{(2k-1)!} \sum_{\alpha=1}^{\infty} \sigma_{2k-1}(\alpha) q^{\alpha} \end{aligned} \]
where σ_t(α) = Σ_{d|α} d^t is the classical divisor sum function.
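The expansion just computed can be sanity-checked numerically. The following sketch is my own illustration, not part of the notes; the function names are ad hoc. It truncates the weight-4 (k = 2) lattice sum at z = i and compares it with the first terms of the q-expansion, which converges extremely fast since q = e^{−2π}.

```python
import cmath
import math

def sigma(t, n):
    """Divisor sum sigma_t(n) = sum of d^t over divisors d of n."""
    return sum(d ** t for d in range(1, n + 1) if n % d == 0)

def G_lattice(z, k, N=200):
    """Truncated lattice sum over (m, n) in [-N, N]^2 minus the origin."""
    total = 0j
    for m in range(-N, N + 1):
        for n in range(-N, N + 1):
            if (m, n) != (0, 0):
                total += 1 / (m * z + n) ** (2 * k)
    return total

def G_qexpansion(z, k, terms=40):
    """2 zeta(2k) + 2 (2 pi i)^(2k) / (2k-1)! * sum_a sigma_{2k-1}(a) q^a."""
    q = cmath.exp(2j * math.pi * z)
    zeta_2k = sum(n ** (-2.0 * k) for n in range(1, 100000))
    coeff = 2 * (2j * math.pi) ** (2 * k) / math.factorial(2 * k - 1)
    return 2 * zeta_2k + coeff * sum(sigma(2 * k - 1, a) * q ** a
                                     for a in range(1, terms + 1))
```

Both sides agree to roughly the truncation error of the lattice sum (about π/N²).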
Here are some reasons for writing this down. Firstly, the special values of the zeta function appear in the constant terms. For non-holomorphic Eisenstein series, we do get a zeta function as the constant term, rather than just special values. Secondly,
\[ \sigma_{2k-1}(\alpha) q^{\alpha} = \sigma_{2k-1}(\alpha)\, e^{2\pi i \alpha x} e^{-2\pi \alpha y}, \]
where σ_{2k−1}(α) is the divisor function, e^{2πiαx} is a function on the unit circle, and e^{−2παy} is one of the solutions to d²f/dy² = f (up to a constant). Classically over the Euclidean plane, d²f/dy² = f has two solutions: e^y and e^{−y}. We see e^{−y} in the Eisenstein series. This is a common theme: the Fourier coefficients of general Eisenstein series will satisfy certain differential equations. Only some of those solutions will appear.
This is the state of matters at the beginning of the 20th century. Things were case-by-case at this point.
Definition 1.2 (Modular function). A modular function of weight 2k for SL2(Z) is a function f : H → C such that
• f((a b; c d) z) = (cz + d)^{2k} f(z) for all (a b; c d) ∈ SL2(Z);
• f is meromorphic.
Definition 1.3 (Modular form). A modular form is a modular function that is holomorphic everywhere. (By holomorphic at ∞, we mean that the q-expansion should not have any negative powers.)
Definition 1.4 (Cusp form). A cusp form is a modular form whose zeroth coefficient in the q-expansion is 0.
Example 1.5.
• G_k is a modular form.
• Δ(z) = (60 G_2(z))³ − 27(140 G_3(z))² is a modular form of weight 12.
• j(z) = 1728(60 G_2(z))³ / Δ(z) is a modular function of weight 0, but not a modular form. It has a simple pole with residue 1 at ∞.
Moving on, there were Siegel modular forms, Jacobi theta functions, Hecke's generalization of Riemann's work for higher number fields, Hilbert's (and Blumenthal's) generalization of the notion of modular forms over upper half spaces, and many other names over 1900–1940. And then came Maass (1949), a student of Hecke who worked on non-holomorphic modular forms. He studied the L² theory, which was the introduction of spectral theory to the study of modular forms. Of course, there was the Petersson inner product before Maass, but that was limited to cusp forms only.
The Petersson inner product is defined as
\[ \langle f, g \rangle_{SL_2(\mathbb{Z})} = \int_{SL_2(\mathbb{Z}) \backslash H} f(z) \overline{g(z)}\, y^k \, \frac{dx\, dy}{y^2} \]
for f and g of weight k. In order for this to be well-defined, at least one of f or g has to be cuspidal. I did not go through the spectral theory for holomorphic forms, but keep in mind that the Eisenstein series and cusp forms span all the holomorphic modular forms. When we multiply f and g, all the cross terms except for the zeroth term are integrable, since q decays as e^{−2πy}.
The spectral theory involves a Hilbert space and a measure. Next time it will be clear why we do this. Let us work with the specific case of L²(SL2(Z)\H, μ), where
\[ d\mu = \frac{dx\, dy}{y^2} \]
is the volume element of the classical hyperbolic metric. (For now we are defining these spaces for weight 0. We note that one can extend these to all K-types for a maximal compact subgroup K, i.e. for all weights k.)
In the case of Eisenstein series, we could write down all the Fourier coefficients. But for cusp forms, I did not write any down because they are difficult. The important point is that the cuspidal part of L² is the same as the cuspidal part of automorphic forms.
Next time I will talk more about the classical setting, introducing the Laplacian ∆, spectral theory, the non-holomorphic Eisenstein series and their constant terms. Then we will pass to the adelic setting and groups.
2. Lecture 2 (January 22, 2015)
2.1. Last time. Last time was an introduction to modular forms. We went back to the 1800's and talked about holomorphic modular forms, which are roughly speaking holomorphic functions on the upper half plane that are modular with respect to the modular group. In some sense these are just "periodic" functions. We defined the Eisenstein series
\[ G_k(z) := \sum_{(c,d) \neq (0,0)} \frac{1}{(cz+d)^{2k}} \]
and computed their Fourier expansions
\[ G_k(z) = 2\zeta(2k) + \frac{2(2\pi i)^{2k}}{(2k-1)!} \sum_{n=1}^{\infty} \sigma_{2k-1}(n) q^n \]
where q = e^{2πiz} = e^{2πix} e^{−2πy}.
Then Maass came and introduced the L² theory; Maass forms came up in 1949. This marked the introduction of spectral theory to the subject, initiated by Maass and Selberg. Spectral theory means we have a Hilbert space with a measure and we try to diagonalize certain operators.
2.2. Spectral theory. Let Γ = SL2(Z), and consider the Hilbert space L²(Γ\H, dμ) where dμ = dx dy / y² is the invariant measure on H and the inner product is given by
\[ \langle f, g \rangle = \int_{\Gamma \backslash H} f \cdot \bar{g} \, d\mu. \]
We will keep doing things classically for another lecture before moving to the adelic setting. We will see that the formula for dμ essentially comes from the Iwasawa decomposition.
Let ∆ be the Laplacian, which in these coordinates is normalized as
\[ \Delta = \Delta_0 = -y^2 \left( \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} \right). \]
Remark. This is the theory for weight 0 functions (i.e., K-invariant on the right).
Let us analyze ∆ on H a bit. Eventually we will see that holomorphic modular forms (more precisely y^k times a holomorphic form of weight 2k) and Maass forms are eigenfunctions for ∆ (and its extensions for all weights k), and it is important to first study its eigenvalues. This gives motivation for Selberg's eigenvalue conjecture and the Ramanujan conjecture.
Spectral theory is nice if we have a self-adjoint operator. For infinite-dimensional spaces, we need to be a bit careful because differentiation is in general unbounded. Basically, the idea of self-adjointness is not straightforward, and we need to look at a dense subspace. Some properties of ∆ are:
(1) symmetric (i.e., ⟨∆f, g⟩ = ⟨f, ∆g⟩),
(2) non-negative (i.e., ⟨∆f, f⟩ ≥ 0).
Proof. These are very easy, and essentially just integration by parts. Let f, g ∈ C_0^∞(H). Then
\[ \langle \Delta f, g \rangle = \int_H \Delta f \cdot \bar{g} \, \frac{dx\, dy}{y^2} = \int_H \nabla f \cdot \overline{\nabla g} \, dx\, dy \]
where ∇f = (f_x, f_y). □
This is a straightforward application of integration by parts, but immediately shows the following consequences. Remember we are trying to understand the eigenvalues λ of ∆. At the beginning, there is no reason λ has any restrictions, but because of (1) and (2) above we have:
• Eigenvalues are real.
• Eigenvalues are non-negative.
In fact more is true.
Proposition 2.1. If λ is an eigenvalue of ∆ on H, then λ ≥ 1/4.
This 1/4 will appear everywhere and is an interesting object.
Proof. Without loss of generality assume F is real-valued and ∆F = λF; otherwise we can just separate the real and imaginary parts of F (this is to avoid complex conjugations). Then integration by parts gives
\[ \langle \Delta F, F \rangle = -\int \left( y^2 \left( \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} \right) F \right) \cdot F \, \frac{dx\, dy}{y^2} = \int (F_x^2 + F_y^2) \, dx\, dy \geq \int F_y^2 \, dx\, dy \tag{1} \]
and
\[ \int F^2 \, \frac{dy}{y^2} = 2 \int F \cdot F_y \, \frac{dy}{y}. \]
The second relation implies
\[ \int F^2 \, \frac{dx\, dy}{y^2} = 2 \iint F \cdot F_y \, \frac{dx\, dy}{y} \leq \left( 4 \int F_y^2 \, dx\, dy \right)^{\frac{1}{2}} \|F\| \]
by the Cauchy–Schwarz inequality, so
\[ \|F\|^2 \leq 4 \int F_y^2 \, dx\, dy. \tag{2} \]
By (1) and (2), we conclude
\[ \langle \Delta F, F \rangle \geq \frac{1}{4} \langle F, F \rangle. \qquad \square \]
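As a quick numerical illustration of this inequality (my own sanity check, not in the original notes; the names are ad hoc), take a smooth bump F(x, y) = sin(πx) · ((y−1)(2−y))² supported on [0, 1] × [1, 2] and compare ⟨∆F, F⟩ = ∫|∇F|² dx dy against ¼⟨F, F⟩ = ¼∫F² dx dy / y² by quadrature:

```python
import math

def quadratic_forms(n=4000):
    """For F(x, y) = sin(pi x) * g(y), g(y) = ((y-1)(2-y))^2, on [0,1]x[1,2],
    return (<Delta F, F>, <F, F>) = (int |grad F|^2 dx dy, int F^2 dx dy / y^2).
    The x-integrals of sin^2 and cos^2 both equal 1/2, leaving 1-d quadrature in y."""
    g = lambda y: ((y - 1) * (2 - y)) ** 2
    gp = lambda y: 2 * (y - 1) * (2 - y) * (3 - 2 * y)   # derivative of g
    h = 1.0 / n
    dirichlet = 0.0
    l2norm = 0.0
    for i in range(n):
        y = 1 + (i + 0.5) * h
        dirichlet += 0.5 * (math.pi ** 2 * g(y) ** 2 + gp(y) ** 2) * h
        l2norm += 0.5 * g(y) ** 2 / y ** 2 * h
    return dirichlet, l2norm
```

For this particular bump the Dirichlet form exceeds the bound by a wide margin, as the proposition guarantees.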
This is the story for H. What usually happens is that for a big space without any discrete group action, it is easy to determine eigenvalues: here we have the half line [1/4, ∞), which is a continuous spectrum. Once we take the quotient Γ\H, we will get discrete eigenvalues, which correspond to Maass forms. Γ\H is like the global (real) picture. We can also localize and consider PGL2(Q_p)/PGL2(Z_p), which is like a tree, and we can try to analyze the spectrum of ∆_p. This is a pure spectrum and everything is continuous, but once we take quotients we will get something discrete. Selberg's eigenvalue conjecture states that the same bound λ ≥ 1/4 holds even after we pass to the quotient Γ\H.
Now let us move to the eigenfunctions ∆f = λf over H. Recall that
\[ \Delta = -y^2 \left( \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} \right). \]
Step 1: Look at eigenfunctions which only depend on y. So we are looking for functions
\[ F(x, y) = \phi(y) \]
such that
\[ \Delta F = -y^2 \frac{\partial^2}{\partial y^2} \phi = \lambda \phi. \]
There are some obvious eigenfunctions: φ(y) = y^s and y^{1−s} (linearly independent), where s(1−s) = λ and s ≠ 1/2. If s = 1/2, then we can consider the solutions {y^s, y^s log(y)}.
Let us parametrize λ = s(1−s). Then λ ∈ R and λ ≥ 1/4. These functions are the building blocks of real-analytic Eisenstein series, and will also come up when we study the intertwining operators.
Step 2: Assume that the eigenfunction is periodic in x. This is because if we want to get eigenfunctions on Γ\H, then they should be invariant under the action
\[ f\left( \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} z \right) = f(z). \]
Then we can write F(z) = e(x)φ(2πy), where e(x) = e^{2πix}, and
\[ \Delta F = -y^2 \left( \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} \right) e(x)\phi(2\pi y) = -y^2 \left( \frac{\partial^2}{\partial x^2} e(x) \right) \phi(2\pi y) - y^2 e(x)\, (2\pi)^2 \phi''(2\pi y) = 4\pi^2 y^2 e(x) \left( \phi(2\pi y) - \phi''(2\pi y) \right). \]
In order for ∆F = λF, we want
\[ \phi''(2\pi y) + \phi(2\pi y) \left( \frac{\lambda}{(2\pi y)^2} - 1 \right) = 0. \]
This is a form of the modified Bessel differential equation. The upshot is, people have studied these. We have the following facts:
(1) There are two linearly independent solutions: (i) √(2π^{−1} y) K_{s−1/2}(y), (ii) √(2πy) I_{s−1/2}(y).
(2) Their behavior is: (i) ∼ e^{−y} and (ii) ∼ e^{y} as y → ∞. (To convince oneself informally, consider the equation for y → ∞. Then φ″ = φ has solutions e^y and e^{−y}.)
Remember that we are trying to work on L²(Γ\H, dμ). For harmonic analysis on this, only K_{s−1/2} is relevant. From the above discussion we get the following proposition.
Proposition 2.2. Any F ∈ L²(Γ\H) that is an eigenfunction of ∆ with eigenvalue s(1−s) has a Fourier–Whittaker expansion
\[ \sum_{n \in \mathbb{Z}} a_n \sqrt{y}\, K_{s-\frac{1}{2}}(2\pi |n| y)\, e(nx) \]
where a_n ∈ C.
Rough proof. Let F(z) = F(x + iy) be such a function. Since (1 1; 0 1) ∈ Γ, we know that the function is periodic in the x-variable and has a Fourier expansion of the form
\[ F(z) = \sum_{n \in \mathbb{Z}} a(n, y) e(nx). \]
Since F(z) is an eigenfunction, each a(n, y) satisfies the Bessel differential equation, therefore the solutions are expressible as a linear combination of √(2π^{−1} y) K_{s−1/2}(y) and √(2πy) I_{s−1/2}(y). Finally, since F is in L², only the K-Bessel function appears. □
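Both facts above — that √y K_{s−1/2}(y) solves u″ + (λ/y² − 1)u = 0 with λ = s(1−s), and that the K-Bessel solution decays like e^{−y} — can be checked numerically. The sketch below (my own, not from the notes; names are ad hoc) computes K_ν from the standard integral representation K_ν(y) = ∫₀^∞ e^{−y cosh t} cosh(νt) dt:

```python
import math

def bessel_k(nu, y, T=10.0, n=20000):
    """K_nu(y) via K_nu(y) = int_0^inf exp(-y cosh t) cosh(nu t) dt
    (trapezoid rule; the integrand is negligible beyond t = T)."""
    h = T / n
    total = 0.5 * math.exp(-y)   # t = 0 endpoint; the t = T endpoint underflows to 0
    for i in range(1, n + 1):
        t = i * h
        total += math.exp(-y * math.cosh(t)) * math.cosh(nu * t)
    return total * h

def whittaker_residual(s, y, h=0.01):
    """Residual of u'' + (lambda / y^2 - 1) u = 0 for u(y) = sqrt(y) K_{s-1/2}(y),
    lambda = s (1 - s), via central differences; should be close to 0."""
    nu, lam = s - 0.5, s * (1 - s)
    u = lambda t: math.sqrt(t) * bessel_k(nu, t)
    upp = (u(y + h) - 2.0 * u(y) + u(y - h)) / (h * h)
    return upp + (lam / y ** 2 - 1.0) * u(y)
```

The residual is tiny compared to the size of u, and √(2y/π) e^y K_ν(y) → 1 as y grows, confirming the e^{−y} decay.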
Let us go back to Γ\H and Maass forms. The whole point of the above discussion is that the eigenfunctions of ∆ on H give us building blocks of these functions, and their eigenvalues satisfy certain properties.
We want functions f : H → C which are
• invariant under SL2(Z),
• eigenfunctions of ∆,
• of moderate growth (i.e., f should grow at most polynomially at ∞).
Invariance under SL2(Z) implies invariance under x ↦ x + 1, so f has a Fourier expansion. There are various ways of cooking up such eigenfunctions, but the best way is following your nose: averaging. Start with φ, an eigenfunction on H that is invariant under x ↦ x + 1, and make it invariant under the rest of the group by averaging.
Example 2.3. Let ±Γ∞ = ⟨ ±(1 1; 0 1) ⟩ = { ±(1 n; 0 1) | n ∈ Z } be the stabilizer of y. Define φ̃_s(z) = (Im z)^s = y^s, and
\[ \tilde{E}(z; s) = \sum_{\gamma \in \pm\Gamma_\infty \backslash SL_2(\mathbb{Z})} \tilde{\phi}_s(\gamma z). \]
This is the non-normalized non-holomorphic Eisenstein series.
Note that ∆Ẽ(z; s) = s(1−s)Ẽ(z; s). We can rewrite
\[ \sum_{\gamma \in \pm\Gamma_\infty \backslash SL_2(\mathbb{Z})} \tilde{\phi}_s(\gamma z) = \sum_{\substack{(c,d) \in \mathbb{Z}^2 \\ \gcd(c,d)=1 \\ c \geq 0}} \frac{y^s}{|cz+d|^{2s}}. \]
Once we pass to the group-theoretic language, these things will be canonically defined, but for now let us do an ad hoc calculation. We have
\[ \operatorname{Im}(\gamma z) = \frac{\operatorname{Im}(z)}{|cz+d|^2} \]
and the bijection
\[ \pm\Gamma_\infty \backslash SL_2(\mathbb{Z}) \longleftrightarrow \{ (c, d) \mid \gcd(c, d) = 1,\ c \geq 0 \}. \]
This is because if (a b; c d) ∼ (a′ b′; c d), then ad − bc = a′d − b′c = 1 and (a − a′)d + (b′ − b)c = 0.
These calculations are easy to do for GL(2). I will do the following normalization
\[ \phi_s(z) = (\operatorname{Im} z)^{s + \frac{1}{2}}, \]
so that
\[ \Delta E(z; s) = \left( \frac{1}{4} - s^2 \right) E(z; s). \]
Note that the above equality is true only for those s for which E(z, s) makes sense, and we then get an eigenfunction with eigenvalue 1/4 − s².
Note that for any eigenfunction that is actually in L², by the previous calculation, the eigenvalue has to satisfy 1/4 − s² ∈ [0, ∞), and therefore s ∈ iR ∪ [0, 1/2].
Selberg's eigenvalue conjecture says that for a Maass cusp form, the eigenvalue cannot be in [0, 1/4), i.e., the parameter s cannot lie in (0, 1/2]. These parameters parametrize representations of GL(2): s ∈ iR corresponds to the principal series and s ∈ [0, 1/2) corresponds to the complementary series. Thus Selberg says the representations coming from Maass forms have to be in the unitary principal series.
Let me finish by stating some properties of these eigenfunctions, and we will prove them next time.
(1) E(z; s) converges for Re(s) > 1/2.
(2) E(γz; s) = E(z; s) for all γ ∈ SL2(Z).
(3) ∆E(z; s) = (1/4 − s²)E(z; s).
(4) E(z; s) has analytic continuation and a functional equation relating s ↔ −s.
(5) E(z; s) has the Fourier expansion
\[ E(z; s) = \sum_{n \neq 0} a_n \sqrt{y}\, K_s(2\pi |n| y) e(nx) + a_0(y), \]
where
\[ a_n = \frac{4 |n|^s \sigma_{-2s}(|n|)}{\zeta(2s+1)} \]
and
\[ a_0 = y^{s + \frac{1}{2}} + \frac{\xi(2s)}{\xi(2s+1)} \cdot y^{\frac{1}{2} - s} \]
with ξ(s) the completed Riemann zeta function.
The −2s in the divisor function dictates the eigenvalue. The ξ(2s)/ξ(2s+1) in a_0 is a ratio of completed ζ-functions. Once we know the analytic continuation of ζ (classically obtained using the integral representation, Poisson summation and the partial summation formula), we get the analytic continuation of E(z; s).
Spectral theory reverses this argument: we would like to start from the analytic continuation of E(z; s), and deduce from this the analytic continuation of ζ! This works more generally for GL(n) and Rankin products of L-functions, which appear as constant terms of Eisenstein series. This is the upshot for the whole class. We try to do this for all groups.
Next time we will calculate a_0, and obtain analytic continuation and functional equation from a very soft spectral argument.
3. Lecture 3 (January 29, 2015)
3.1. Last time. Let us start with some clarifications from last time. We introduced the L² theory: we have the Laplacian ∆ acting on L²(Γ\H). Let me summarize what I meant to say.
Consider ∆ on the universal covering H. It has two eigenfunctions: for each s, we have y^s and y^{1−s}. All the other eigenfunctions are generated by these. One can show that it is sufficient to take s = 1/2 + it to get the whole spectrum.
But we are interested in Γ\H, so what about eigenfunctions that are invariant under T = (1 ∗; 0 1), i.e., under x ↦ x + 1? They have to be of the form
\[ F(x) = \sum_{n \in \mathbb{Z}} a_n \sqrt{y}\, K_{s - \frac{1}{2}}(2\pi |n| y) e(nx) \]
where K_{s−1/2} is the modified Bessel function of the second kind. This is just basic harmonic analysis on H. We have shown that eigenfunctions have to be of this form, but we haven't constructed any yet.
Consider the Dirichlet problem on any domain D in H: we want to find a function F such that ∆F = λF and F = 0 on the boundary of D. By the formal argument on H from last time (integral trick), we have λ ∈ R₊. In fact we have the stronger bound λ ≥ 1/4.
Question: why does this not work for Γ\H? It is only a formal argument after all. In particular, why would eigenvalues on Γ\H not be at least 1/4? In fact, it does work a bit.
Assume for simplicity Γ = SL2(Z) (the argument will work for any discrete subgroup). Let us fix a fundamental domain, which is in particular a subset of H. We can try to solve the Dirichlet problem for a domain D inside the fundamental domain. For ∆F = λF, we get λ ≥ 1/4. But this is not a function on Γ\H. We can try to make this a function on Γ\H by averaging
\[ \sum_{\gamma \in \text{stabilizer} \backslash SL_2(\mathbb{Z})} F(\gamma z). \]
Whenever this converges, the eigenvalue will be at least 1/4, but there could be other eigenfunctions not of this form! For example, Eisenstein series that contribute to L² will have eigenvalues greater than 1/4.
Selberg's conjecture states that λ ≥ 1/4 for Γ arithmetic. In representation theory language this is the Ramanujan conjecture at the real place. The bound λ ≥ 0 still holds, but we know λ ≥ 1/4 only for SL2(Z) and congruence subgroups of small level (up to 7).
3.2. Real-analytic Eisenstein series. Following the same game as above, we start with the eigenfunction y^{s+1/2} and take the sum
\[ E(z, s) = \sum_{\gamma \in \operatorname{Stab}(y) \backslash SL_2(\mathbb{Z})} \operatorname{Im}(\gamma z)^{s + \frac{1}{2}} = \sum_{\gamma \in \pm\Gamma_\infty \backslash \Gamma} \operatorname{Im}(\gamma z)^{s + \frac{1}{2}}, \]
where Γ∞ = { (1 n; 0 1) | n ∈ Z } is the unipotent radical. We take s + 1/2 because things will be more symmetric this way. Note that
\[ \begin{pmatrix} a & b \\ c & d \end{pmatrix} z = \frac{az+b}{cz+d} \implies \operatorname{Im}(\gamma z)^{s + \frac{1}{2}} = \frac{\operatorname{Im}(z)^{s + \frac{1}{2}}}{|cz+d|^{2s+1}}. \]
We have the following facts:
(1) E(z, s) converges for Re(s) > 1/2. (This is easy.)
(2) E(γz, s) = E(z, s). (This is clear by construction.)
(3) ∆E(z, s) = (1/4 − s²)E(z, s).
(4) E(z, s) has an analytic continuation to C (in the s-variable) with a simple pole at s = 1/2 with residue 3/π.
(5) E(z, s) has the Fourier expansion
\[ E(z, s) = \sum_{n \neq 0} a_n(s) \sqrt{y}\, K_s(2\pi |n| y) e(nx) + a_0(y, s), \]
where
\[ a_n(s) = \frac{4 |n|^s \sigma_{-2s}(|n|)}{\zeta(2s+1)} \]
and
\[ a_0(y, s) = y^{s + \frac{1}{2}} + \frac{\xi(2s)}{\xi(2s+1)} y^{\frac{1}{2} - s}. \]
Here ξ(s) = π^{−s/2} Γ(s/2) ζ(s) is the completed zeta function.
The important thing is that there is a ratio of L-functions in a_0 and an L-function in a_n. At the end of the day, we want to do the following. In this very special case one can calculate the Fourier expansion. Then one gets the analytic continuation and functional equation of E(z, s) from the known properties of ζ. BUT we don't want to do this by writing the Fourier expansion. Instead we would like to deduce properties about these L-functions from the analytic properties of E(z, s). For example, if we know (4) and (5) by some other means, then we know ξ(2s)/ξ(2s+1) has analytic continuation. For more general groups, we get a product of quotients of various L-functions in the constant term, but we can still extract information about the individual L-functions by an induction argument.
Although the first Fourier coefficient seems to contain strictly more information than the constant term, it is harder to compute and may not even make sense for general groups. The Shalika–Casselman formula was not available yet. Here is a bit of historical perspective. We are at about 1965. Shalika was quite senior but Casselman was not yet. Then Shahidi came in around 1975.
3.3. A (classical) computation. The constant term is given by
\[ a_0(y, s) = \int_0^1 E(x + iy, s) \, dx. \]
Before we start computing this, let us remark that there is a one-to-one correspondence
\[ \pm\Gamma_\infty \backslash SL_2(\mathbb{Z}) \longleftrightarrow \{ (c, d) \mid \gcd(c, d) = 1,\ c \geq 0 \}. \]
Exercise. Prove this bijection.
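The exercise can also be brute-forced on a computer. The sketch below (my own illustration, with ad hoc names) constructs, for each coprime pair (c, d) with c ≥ 0, a matrix in SL2(Z) with bottom row (c, d) via the extended Euclidean algorithm, and checks that left multiplication by Γ∞ only changes the top row:

```python
from math import gcd

def ext_gcd(x, y):
    """Return (g, u, v) with u*x + v*y = g = gcd(x, y) (g may be negative here)."""
    if y == 0:
        return (x, 1, 0)
    g, u, v = ext_gcd(y, x % y)
    return (g, v, u - (x // y) * v)

def lift(c, d):
    """For gcd(c, d) = 1, return (a, b) with a*d - b*c = 1, i.e. a matrix
    (a b; c d) in SL_2(Z) whose bottom row is the given pair."""
    g, u, v = ext_gcd(d, c)          # u*d + v*c = g = +-1
    if g < 0:
        u, v = -u, -v
    return (u, -v)

def check_bijection(N=20):
    """Surjectivity of the map SL_2(Z) -> bottom rows, plus the fact that
    left multiplication by (1 n; 0 1) fixes the bottom row (c, d)."""
    for c in range(0, N + 1):
        for d in range(-N, N + 1):
            if gcd(c, d) != 1:
                continue
            a, b = lift(c, d)
            if a * d - b * c != 1:
                return False
            for n in (-2, 1, 5):     # sample Gamma_infinity translates
                if (a + n * c) * d - (b + n * d) * c != 1:
                    return False
    return True
```

Injectivity is the computation in the remark from Lecture 2: two matrices with the same bottom row differ by an element of ±Γ∞.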
So,
\[ \begin{aligned} \int_0^1 E(z, s)\, dx &= \int_0^1 \sum_{\substack{\gcd(c,d)=1 \\ c \geq 0}} \frac{y^{s+\frac{1}{2}}}{|cz+d|^{2s+1}}\, dx \\ &= y^{s+\frac{1}{2}} + \sum_{c=1}^{\infty} \sum_{\substack{d \in \mathbb{Z} \setminus \{0\} \\ \gcd(c,d)=1}} \int_0^1 \frac{y^{s+\frac{1}{2}}}{|cz+d|^{2s+1}}\, dx + \int_0^1 \frac{y^{s+\frac{1}{2}}}{|x+iy|^{2s+1}}\, dx. \end{aligned} \]
The sum from c = 2 to ∞ contributes
\[ \begin{aligned} & y^{s+\frac{1}{2}} \sum_{c=2}^{\infty} \sum_{\substack{\alpha=1 \\ \gcd(\alpha,c)=1}}^{c-1} \sum_{k \in \mathbb{Z}} \int_0^1 \frac{dx}{|cx + \alpha + kc + ciy|^{2s+1}} \\ &= y^{s+\frac{1}{2}} \sum_{c=2}^{\infty} \frac{1}{c^{2s+1}} \sum_{\alpha} \sum_{k \in \mathbb{Z}} \int_0^1 \frac{dx}{|x + \frac{\alpha}{c} + k + iy|^{2s+1}} \\ &= y^{s+\frac{1}{2}} \sum_{c=2}^{\infty} \frac{1}{c^{2s+1}} \sum_{\alpha} \sum_{k \in \mathbb{Z}} \int_k^{k+1} \frac{dx}{|x + \frac{\alpha}{c} + iy|^{2s+1}} \\ &= y^{s+\frac{1}{2}} \sum_{c=2}^{\infty} \frac{1}{c^{2s+1}} \sum_{\alpha} \int_{-\infty}^{\infty} \frac{dx}{|x + \frac{\alpha}{c} + iy|^{2s+1}} \\ &= y^{s+\frac{1}{2}} \sum_{c=2}^{\infty} \frac{1}{c^{2s+1}} \sum_{\alpha} \int_{-\infty}^{\infty} \frac{dx}{|x + iy|^{2s+1}} \\ &= y^{s+\frac{1}{2}} \sum_{c=2}^{\infty} \frac{\varphi(c)}{c^{2s+1}} \int_{-\infty}^{\infty} \frac{dx}{|x + iy|^{2s+1}}. \end{aligned} \]
We made a big fuss about this calculation, because this is the unfolding that will appear naturally later. The integral
\[ y^{s+\frac{1}{2}} \int_{\mathbb{R}} \frac{dx}{|x+iy|^{2s+1}} \]
is a specialization of the beta function. More precisely, it is equal to
\[ y^{\frac{1}{2}-s} \int_{\mathbb{R}} \frac{dx}{(x^2+1)^{s+\frac{1}{2}}} = y^{\frac{1}{2}-s}\, \frac{\Gamma(\frac{1}{2})\Gamma(s)}{\Gamma(s+\frac{1}{2})}. \]
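The beta-function evaluation is easy to confirm numerically for real s > 1/2. Here is a quick check of my own (not from the notes; the names are ad hoc), comparing a truncated quadrature of the integral with the gamma-factor expression:

```python
import math

def beta_integral(s, X=100.0, n=200000):
    """Numerically evaluate int_R dx / (x^2 + 1)^(s + 1/2) (midpoint rule,
    truncated at |x| = X; the integrand decays like x^(-2s-1))."""
    h = 2 * X / n
    total = 0.0
    for i in range(n):
        x = -X + (i + 0.5) * h
        total += (x * x + 1) ** (-(s + 0.5)) * h
    return total

def gamma_ratio(s):
    """Gamma(1/2) Gamma(s) / Gamma(s + 1/2)."""
    return math.gamma(0.5) * math.gamma(s) / math.gamma(s + 0.5)
```

For s = 3/2 both sides equal π/2, since Γ(1/2)Γ(3/2)/Γ(2) = √π · (√π/2).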
Combining everything (including the c = 1 term), we get
\[ a_0(y, s) = y^{s+\frac{1}{2}} + y^{\frac{1}{2}-s} \sum_{c=1}^{\infty} \frac{\varphi(c)}{c^{2s+1}}\, \frac{\Gamma(\frac{1}{2})\Gamma(s)}{\Gamma(s+\frac{1}{2})}. \]
Note by multiplicativity
\[ \sum_{c=1}^{\infty} \frac{\varphi(c)}{c^{2s+1}} = \prod_p \left( \sum_{k=0}^{\infty} \frac{\varphi(p^k)}{p^{k(2s+1)}} \right) = \prod_p \frac{1 - \frac{1}{p^{2s+1}}}{1 - \frac{1}{p^{2s}}} = \frac{\zeta(2s)}{\zeta(2s+1)} \]
and so
\[ a_0(y, s) = y^{s+\frac{1}{2}} + \frac{\xi(2s)}{\xi(2s+1)} y^{\frac{1}{2}-s}. \]
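The Euler-product identity Σ φ(c)/c^{2s+1} = ζ(2s)/ζ(2s+1) can be checked numerically for real s > 1/2. The sketch below (my own, with ad hoc names) compares a long partial sum at s = 1 against ζ(2)/ζ(3):

```python
def phi_sieve(N):
    """Euler phi(1..N) via a sieve: phi[m] *= (1 - 1/p) for each prime p | m."""
    phi = list(range(N + 1))
    for p in range(2, N + 1):
        if phi[p] == p:              # p has not been touched, so p is prime
            for m in range(p, N + 1, p):
                phi[m] -= phi[m] // p
    return phi

def dirichlet_phi(s, N=200000):
    """Partial sum of sum_{c >= 1} phi(c) / c^(2s+1)."""
    phi = phi_sieve(N)
    return sum(phi[c] / c ** (2 * s + 1) for c in range(1, N + 1))

def zeta(x, N=100000):
    """zeta(x) for real x > 1: partial sum plus the integral tail estimate."""
    return sum(n ** (-x) for n in range(1, N + 1)) + N ** (1 - x) / (x - 1)
```

The partial sum converges like 1/N (the summand is ≍ c^{−2s} on average), so a moderately long truncation already matches the zeta ratio to several digits.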
3.4. Why is the L² theory helpful? Remember we are trying to give a historical context too. Before 1965, there was no spectral theory and no Hilbert space in the theory of automorphic forms. Everything was complex-analytic and people were just using the Petersson inner product. I am not sure if people proved the analytic continuation and functional equation of Eisenstein series, but I can now give a very soft analytic continuation result. We know that:
• E(z, s) converges for Re(s) > 1/2.
• E(z, s) ∼ y^{s+1/2} + (∗)y^{1/2−s} (where (∗) is constant in the y-variable). All the other terms have exponential decay so are negligible.
Claim. E(z, s) is the unique eigenfunction of ∆ with eigenvalue 1/4 − s², Re(s) > 1/2, and asymptotic growth ∼ y^{s+1/2} + O(y^{−ε}) for some ε > 0.
This is not hard to prove. Analytic continuation is often proved in two ways: by studying special integral transforms and their kernels (which is how the theta and zeta functions arise classically), or by using some uniqueness statements. For example, Jacquet–Langlands uses (in the local case) the uniqueness of Kirillov models, unlike Tate's thesis which uses integrals.
Proof. Suppose there exists an F(z) satisfying this. Consider the difference E(z, s) − F(z).
(1) ∆(E(z, s) − F(z)) = (1/4 − s²)(E(z, s) − F(z)).
(2) E(z, s) − F(z) ∈ L²(Γ\H).
This means E(z, s) − F(z) is an eigenfunction of ∆ on L²(Γ\H), so the eigenvalue has to be ≥ 0 because ∆ is non-negative and symmetric on L²(Γ\H). This is where the L² theory is important.
As the final step, the eigenvalue is 1/4 − s². The assumption is that Re(s) > 1/2, so either s ∈ R with s > 1/2, or s has an imaginary part. The former implies that 1/4 − s² < 0 and the latter implies that 1/4 − s² is not real. In either case, 1/4 − s² cannot be an eigenvalue of ∆ in L²(Γ\H). But E(z, s) − F(z) actually has eigenvalue 1/4 − s², so we conclude F(z) = E(z, s). □
The implication is that for Re(s) > 1/2, we have
\[ E(z, s) = a(s) E(z, -s) \]
for some function a(s) independent of z. Why?
• s ↦ −s does not affect the eigenvalue.
• Since
\[ E(z, s) \sim y^{s+\frac{1}{2}} + \frac{\xi(2s)}{\xi(2s+1)} y^{\frac{1}{2}-s}, \]
we know
\[ E(z, -s) \sim y^{\frac{1}{2}-s} + \frac{\xi(-2s)}{\xi(-2s+1)} y^{\frac{1}{2}+s} \]
and so
\[ \frac{\xi(-2s+1)}{\xi(-2s)} E(z, -s) = y^{\frac{1}{2}+s} + (*)\, y^{\frac{1}{2}-s}. \]
Therefore
\[ E(z, s) = \frac{\xi(-2s+1)}{\xi(-2s)} E(z, -s). \]
Remark. First of all, we needed the fact that E(z, −s) was defined with the same eigenvalue for this argument to work (which the Fourier expansion gives), so in a sense we are cheating, but this does give the correct functional equation for Re(s) > 1/2. However, in a sense this is useless for the L² theory. E(z, s) converges on Re(s) > 1/2, but does not belong to L² and does not contribute to L². By the functional equation we have obtained the same for Re(s) < −1/2. If we can analytically continue the Eisenstein series to the critical line Re(s) = 0, these are the ones that contribute to L².
"Contribution" means the following. What is the dual of (R, +), i.e., Hom(R, C^×)? For any a ∈ C, we have x ↦ e(ax), which gives an isomorphism C ≅ Hom(R, C^×). In order to look at L²(R), Fourier theory says
\[ f(x) = \int_{\mathbb{R}} \hat{f}(z) e(-zx) \, dz. \]
We see that it is enough to take all the characters that are unitary. Only the ones that are on the unitary axis are "contributing". Note also that none of the harmonics e(−zx) are in L², but they form L².
Next time we will compute the volume of the fundamental domain, prove the prime number theorem, and prove that there are finitely many imaginary quadratic fields with class number 1. These are kind of random, but as you can guess they share a common theme: they all follow from the computations for E(z, s).
4. Lecture 4 (February 3, 2015)
4.1. Last time. We defined the Eisenstein series
\[ E(z; s) = \sum_{\gamma \in \pm\Gamma_\infty \backslash SL_2(\mathbb{Z})} \operatorname{Im}(\gamma z)^{s+\frac{1}{2}} \]
where Γ∞ = { (1 n; 0 1) | n ∈ Z } is the unipotent radical. We talked about how the L² theory helped: using a very soft argument, we proved the
Proposition 4.1. E(z; s) is the unique eigenfunction of ∆ satisfying:
• ∆E(z; s) = (1/4 − s²)E(z; s),
• growth y^{s+1/2} + O(y^{−ε}) for some ε > 0, and
• Re(s) > 1/2.
Remark. Re(s) > 1/2 is necessary for the convergence of E(z; s).
As a corollary, we get the functional equation.
Corollary 4.2. E(z, s) = A(s)E(z, −s) where A(s) is meromorphic and Re(s) > 1/2.
Proof. We have
\[ E(z, s) = y^{s+\frac{1}{2}} + \frac{\xi(2s)}{\xi(2s+1)} y^{\frac{1}{2}-s} + O(e^{-\frac{y}{2}}) \]
where ξ(s) = π^{−s/2} Γ(s/2) ζ(s) is the completed zeta function. We know
\[ \Delta E(z, s) = \left( \frac{1}{4} - s^2 \right) E(z, s) \]
and so
\[ \Delta E(z, -s) = \left( \frac{1}{4} - s^2 \right) E(z, -s). \]
We also have
\[ E(z, -s) \sim y^{\frac{1}{2}-s} + \frac{\xi(-2s)}{\xi(-2s+1)} y^{\frac{1}{2}+s} \]
and hence
\[ \frac{\xi(-2s+1)}{\xi(-2s)} E(z, -s) \sim y^{s+\frac{1}{2}} + O(y^{-\varepsilon}). \]
By the proposition,
\[ E(z, s) = \frac{\xi(-2s+1)}{\xi(-2s)} E(z, -s). \qquad \square \]
This result does not really give the analytic continuation of E(z, s). It simply flips the regions Re(s) > 1/2 and Re(s) < −1/2. This only gives a philosophical argument for the functional equation. The whole point is to go the other way round:
(1) Analyze Eisenstein series (this is hard!).
(2) Calculate constant terms.
(2.5) Observe that they are L-functions.
(3) Deduce the properties of these L-functions from (1).
4.2. Analytic continuation. Today we will go back to the original theory. Let us forget about all the stuff above.
Theorem 4.3 (Analytic continuation of Eisenstein series). E(z, s), originally defined for Re(s) > 1/2, satisfies the following properties:
• E(z, s) has analytic continuation to the whole of C (in the s-variable).
• E(z, s) = A(s)E(z, −s), where A(s) is meromorphic.
• ∆E(z, s) = (1/4 − s²)E(z, s).
• E(z, s) has a simple pole at s = 1/2 with residue 3/π.
4.3. Application: Prime number theorem. We will prove the following form of the prime number theorem.

Theorem 4.4 (Prime number theorem). \zeta(s) \neq 0 on the line \mathrm{Re}(s) = 1.

Proof. Recall
  E(z, s) = y^{s+\frac{1}{2}} + \frac{\xi(2s)}{\xi(2s+1)} y^{\frac{1}{2}-s} + \frac{4}{\xi(2s+1)} \sum_{n \neq 0} |n|^s \sigma_{-2s}(|n|) K_s(2\pi|n|y) e(nx).
Consider \mathrm{Re}(s) = 0. By the theorem, E(z, s) has no poles there. Looking at the right hand side, we see that \xi(2s+1) \neq 0 for \mathrm{Re}(s) = 0. □
This is a fairly straightforward argument, but it gives strictly less information than the original proof of the prime number theorem by de la Vallée-Poussin, which gives a zero-free region of the form
  \sigma > 1 - \frac{c}{\log(|t|+2)}
where s = \sigma + it and c > 0 is a constant.
One can analyze the spectral decomposition further to get a zero-free region. This involves looking at the inner products of truncated Eisenstein series. A reference is Sarnak's article on his webpage².
Note that once we have the Fourier expansion of E(z, s), the formal argument above will actually prove the functional equation.
4.4. Application: Gauss class number problem.

Theorem 4.5 (Deuring, Landau, Gronwall). There are finitely many imaginary quadratic fields with class number 1.

Side remark. Landau and Gronwall knew that the claim follows from the Riemann hypothesis for L(s, (\frac{-D}{\cdot})) for all fundamental discriminants D < 0.
Proof. Suppose the Riemann hypothesis is false, so there exists s_0 such that \zeta(s_0 + \frac{1}{2}) = 0 with \mathrm{Re}(s_0) \neq 0. Let z_0 \in \mathbb{H} be a CM point, i.e., a z_0^2 + b z_0 + c = 0 for some a, b, c \in \mathbb{Z} with b^2 - 4ac = D. Then we have two facts:
• For \Gamma = \mathrm{SL}_2(\mathbb{Z}), \Gamma z_0 corresponds to the ideal class of (a, \frac{b+\sqrt{D}}{2}) in \mathbb{Q}(\sqrt{D}).
• We can write
  E(z_0, s) = \sum_{u,v \in \mathbb{Z}}{}' \frac{\mathrm{Im}(z_0)^{s+\frac{1}{2}}}{|u z_0 + v|^{2s+1}} = \left(\frac{\sqrt{|D|}}{2}\right)^{s+\frac{1}{2}} \sum{}' \frac{1}{|c u^2 - b uv + a v^2|^{s+\frac{1}{2}}} = \left(\frac{\sqrt{|D|}}{2}\right)^{s+\frac{1}{2}} \zeta_{z_0}\left(s + \frac{1}{2}\right)

²Available at http://web.math.princeton.edu/sarnak/ShalikaBday2002.pdf.
where \zeta_{z_0} is the zeta function for the ideal class corresponding to z_0. Recall that for any Galois extension K/\mathbb{Q},
  \zeta_K(s) = \sum_I \frac{1}{N(I)^s} = \sum_{\mathfrak{c} \in \mathrm{Cl}(K)} \zeta_{\mathfrak{c}}(s),
where for each ideal class \mathfrak{c} we define
  \zeta_{\mathfrak{c}}(s) = \sum_{I \in \mathfrak{c}} \frac{1}{N(I)^s}.
Summing the above over z_0 \in \Lambda_D,
  \sum_{z \in \Lambda_D} E(z, s) = \left(\frac{|D|^{\frac{1}{2}}}{2}\right)^{s+\frac{1}{2}} \zeta_{\mathbb{Q}(\sqrt{D})}\left(s + \frac{1}{2}\right).
Note that
  \zeta_{\mathbb{Q}(\sqrt{D})}\left(s + \frac{1}{2}\right) = \zeta\left(s + \frac{1}{2}\right) L\left(s + \frac{1}{2}, \left(\frac{-D}{\cdot}\right)\right),
so evaluating everything at the hypothetical zero gives
  \sum_{z \in \Lambda_D} E(z, s_0) = 0.
In particular, if the class number h(D) = 1, then this gives E(z_D, s_0) = 0, where z_D = \frac{1+\sqrt{D}}{2}.
By the Fourier expansion of E(z, s),
  E\left(\frac{1+\sqrt{D}}{2}, s_0\right) = \left(\frac{\sqrt{|D|}}{2}\right)^{s_0+\frac{1}{2}} + (*)\left(\frac{\sqrt{|D|}}{2}\right)^{\frac{1}{2}-s_0} + O\left(e^{-\frac{\sqrt{|D|}}{2}}\right).
This cannot be 0 for large |D| if s_0 is not on the line \mathrm{Re}(s) = 0. Therefore, this can only happen finitely many times, or the Riemann hypothesis is true, in which case we can use the above argument. □
Of course, we were cheating in this proof because we only assumed the falsity of the Riemann hypothesis for \zeta. We can indeed push the argument a bit further to cover the Riemann hypothesis for quadratic L-functions.

4.5. Application: Volumes of fundamental domains. Our last application will be quite general: the computation of the volumes of fundamental domains. This was one of Langlands' first applications.
We want to calculate the volume of \Gamma\backslash\mathbb{H}, which is \frac{\pi}{3}. The idea is to calculate
  \int_{\Gamma\backslash\mathbb{H}} E(z, s) \, d\mu(z)
by shifting the contour and picking up the pole at s = \frac{1}{2}. But there are two problems:
(1) In general E(z, s) \notin L^1(\Gamma\backslash\mathbb{H}) when it converges.
(2) This integral is 0 when \mathrm{Re}(s) = 0.
We are going to modify this idea and somehow make it work. This is how we are going around it. Take f \in C_0^\infty(\mathbb{R}_{>0}), and consider the series
  \theta_f(z) = \sum_{\gamma \in \pm\Gamma_\infty\backslash\Gamma} f(\mathrm{Im}(\gamma z)).
Recall that the Mellin transform is given by
  \tilde{f}(s) = \int_0^\infty f(y) y^s \frac{dy}{y}.
The Mellin inversion is
  f(y) = \frac{1}{2\pi i} \int_{\mathrm{Re}(s)=s_0} \tilde{f}(s) y^{-s} \, ds.
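As a quick numerical sanity check on this Mellin pair (not part of the lecture), take f(y) = e^{-y}, which is not compactly supported but decays fast enough: its Mellin transform is \Gamma(s), so \tilde{f}(3) = 2 and \tilde{f}(1) = 1.

```python
import math

def mellin(f, s, a=1e-8, b=60.0, n=200000):
    """Midpoint-rule approximation of the Mellin transform
    \\int_0^\\infty f(y) y^s dy/y, truncated to [a, b]."""
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        y = a + (i + 0.5) * h
        total += f(y) * y ** (s - 1) * h
    return total

# f(y) = e^{-y} has Mellin transform Gamma(s); Gamma(3) = 2, Gamma(1) = 1.
approx3 = mellin(lambda y: math.exp(-y), 3.0)
approx1 = mellin(lambda y: math.exp(-y), 1.0)
```

The truncation bounds and step count here are ad hoc; any smooth rapidly decaying f gives the same picture.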
The idea is to rewrite f as its own Mellin transform, which is a very general method in analytic number theory. The sum becomes the Eisenstein series
  \theta_f(z) = \frac{1}{2\pi i} \int_{\mathrm{Re}(s)=s_0} \tilde{f}(s) E\left(z, -s - \frac{1}{2}\right) ds
where s_0 < -1, so that the Eisenstein series in the integrand converges. One can compute
  \int_{\Gamma\backslash\mathbb{H}} \theta_f(z) \, d\mu(z)
in two ways: by shifting the contour, and by unfolding. Shifting the contour gives
  \int_{\Gamma\backslash\mathbb{H}} \theta_f(z) \, d\mu(z) = \tilde{f}(-1) \cdot \frac{3}{\pi} \mathrm{vol}(\Gamma\backslash\mathbb{H}) + \frac{1}{2\pi i} \int_{\mathrm{Re}(s)=-\frac{1}{2}} \tilde{f}(s) \int_{\Gamma\backslash\mathbb{H}} E\left(z, -s - \frac{1}{2}\right) d\mu(z) \, ds,
whereas unfolding gives
  \int_{\Gamma\backslash\mathbb{H}} \theta_f(z) \, d\mu(z) = \int_{\pm\Gamma_\infty\backslash\mathbb{H}} f(\mathrm{Im}(z)) \, d\mu(z) = \int_0^\infty f(y) \frac{dy}{y^2} = \tilde{f}(-1).
So we have the following identity
  \tilde{f}(-1) = \tilde{f}(-1) \frac{3}{\pi} \mathrm{vol}(\Gamma\backslash\mathbb{H}) + \frac{1}{2\pi i} \int_{s \in i\mathbb{R}} \tilde{f}\left(-s - \frac{1}{2}\right) \int_{\Gamma\backslash\mathbb{H}} E(z, s) \, d\mu(z) \, ds.
For \mathrm{Re}(s_0) = 0, the integral
  \int_{\Gamma\backslash\mathbb{H}} E(z, s_0) \, d\mu(z)
is 0, so we get
  \mathrm{vol}(\Gamma\backslash\mathbb{H}) = \frac{\pi}{3}.
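The value \pi/3 can also be checked directly (not part of the lecture): on the standard fundamental domain |x| \leq \frac{1}{2}, x^2 + y^2 \geq 1, integrating dy/y^2 first leaves \int_{-1/2}^{1/2} dx/\sqrt{1-x^2} = 2\arcsin(\frac{1}{2}) = \frac{\pi}{3}.

```python
import math

def vol_fundamental_domain(n=200000):
    # integrate dx dy / y^2 over |x| <= 1/2, x^2 + y^2 >= 1;
    # the inner integral \int_{sqrt(1-x^2)}^\infty dy/y^2 equals 1/sqrt(1-x^2)
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        x = -0.5 + (i + 0.5) * h      # midpoint rule in x
        total += h / math.sqrt(1.0 - x * x)
    return total

vol = vol_fundamental_domain()        # should be close to pi/3
```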
Among the things we did, this is the most general. Langlands calculated this for split Chevalley groups over \mathbb{Q}. At the end of the day, once we have an Eisenstein series, this will work for more general groups.
Next time we will leave the classical language and move to groups and the adelic language.
5. Lecture 5 (February 5, 2015)
5.1. Convergence of averaging. Let me comment on something from way earlier. We obtained eigenfunctions \varphi by considering the Dirichlet problem on any domain D in the upper half plane \mathbb{H}. Assume that we can extend \varphi across the boundary of D. Then since \Gamma acts discontinuously, for a fixed g, the orbit \gamma g hits D only finitely many times. So \sum_\gamma \varphi(\gamma g) converges.
5.2. Reminder (last time). Last time we were calculating the volume of the fundamental domain. We got the following equality
  \tilde{f}(-1)\left(\frac{3}{\pi}\mathrm{vol} - 1\right) = \frac{1}{2\pi i}\int_{\mathrm{Re}(t)=0} I(t)\,\tilde{f}\left(-t + \frac{1}{2}\right) dt,
where
  I(t) = \int_{\Gamma\backslash\mathbb{H}} E(z, t) \frac{dx\, dy}{y^2}.
We had a lemma last time.

Lemma 5.1. Suppose I(t) is defined. Then it is identically zero.

A heuristic reason is that the Eisenstein series is orthogonal to the constant function.

Proof. Consider
  \frac{1}{2\pi i}\int_{\mathrm{Re}(t)=0} I(t)\,\tilde{f}\left(-t + \frac{1}{2}\right) dt = c\tilde{f}(-1)
for all f \in C_c^\infty(\mathbb{R}_{>0}). We will show that if this is true, then I(t) \equiv 0.
We can shift the eigenvalue by taking F(y) = y^{\frac{1}{2}} f(y). Then \tilde{F}(s) = \tilde{f}(s + \frac{1}{2}). So we can consider F in place of f:
  \frac{1}{2\pi i}\int_{\mathrm{Re}(s)=0} \tilde{F}(-s) I(s) \, ds = c\tilde{F}\left(-\frac{3}{2}\right).
Now let G(y) = -yF'(y) + \frac{3}{2}F(y). Integrating by parts, \tilde{G}(s) = (s + \frac{3}{2})\tilde{F}(s). Since the above is true for all F, we have the same relation
  \frac{1}{2\pi i}\int \tilde{G}(-s) I(s) \, ds = c\tilde{G}\left(-\frac{3}{2}\right) = 0,
so
  \frac{1}{2\pi i}\int \tilde{F}(-s)\left(-s + \frac{3}{2}\right) I(s) \, ds = 0
for all F. This implies I(s) = 0. □

Therefore \mathrm{vol} = \frac{\pi}{3}. This is a roundabout argument.
5.3. Modular forms as Maass forms. Today we will end our discussion on \mathbb{H} and pass to G. We talked about modular forms and real-analytic Eisenstein series.

Remark. All modular forms (holomorphic of weight k) are Maass forms.

So far we have talked about Maass forms that are invariant, but we need to define them more generally.

Definition 5.2. A Maass form of weight k (for trivial Nebentypus) is f : \mathbb{H} \to \mathbb{C} such that
• f(\gamma z) = \left(\frac{cz+d}{|cz+d|}\right)^k f(z) for all \gamma = \begin{pmatrix}a&b\\c&d\end{pmatrix} \in \Gamma_0(N),
• \Delta_k f(z) = s(1-s) f(z), where \Delta_k = -y^2\left(\frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2}\right) + iky\frac{\partial}{\partial x} is the weight k Laplacian, and
• f is of moderate growth.

With this, modular forms become Maass forms.

Proposition 5.3. If f is a weight k holomorphic form, then F(z) = y^{\frac{k}{2}} f(z) is a Maass form of weight k and eigenvalue \frac{k}{2}\left(1 - \frac{k}{2}\right).

Proof. Exercise. □
This unifies the picture of modular forms and Maass forms.
5.4. From \mathbb{H} to the group. We defined these Eisenstein series on the upper half plane, which is a manifold. Recall that the upper half plane is a symmetric space
  \mathbb{H} = \mathrm{SL}_2(\mathbb{R})/\mathrm{SO}(2).
Instead of looking at \mathbb{H}, we want to look at functions on \mathrm{SL}_2(\mathbb{R}) and more generally on \mathrm{GL}_2(\mathbb{R}).
First we will go from a modular form f on \mathbb{H} to a function on \mathrm{GL}_2^+(\mathbb{R}) = \{g \in \mathrm{GL}_2(\mathbb{R}) \mid \det(g) > 0\}. Let
  j(g, z) = \frac{cz + d}{\sqrt{\det(g)}}
where g = \begin{pmatrix}a&b\\c&d\end{pmatrix}. If g \in \mathrm{SL}_2(\mathbb{Z}), then \det(g) = 1. This function satisfies the following properties:
• j(gh, z) = j(g, hz) j(h, z) (cocycle property). (Using this, we can define half-integral weight modular forms over other groups, for example over function fields. If we do the same calculation over anything that is not \mathbb{R}, the formula for j(g, z) does not work, but the cocycle condition is what generalizes. This turns out to be the Hilbert symbol.)
• j(g k_\theta, i) = e^{i\theta} j(g, i) where k_\theta = \begin{pmatrix}\cos\theta & \sin\theta \\ -\sin\theta & \cos\theta\end{pmatrix} \in \mathrm{SO}(2).
If f is a modular form of weight k, we construct
  \varphi_f(g) = j(g, i)^{-k} f(gi).
This lifts f to the group G and is invariant modulo \Gamma, i.e., \varphi_f is a function on \Gamma\backslash G. More generally, if f is a Maass form of weight k, we define the cocycle
  j_{\mathrm{Maass}}(g, z) = \frac{cz + d}{|cz + d|} \cdot \frac{1}{\sqrt{\det(g)}};
then \varphi_f(g) = j_{\mathrm{Maass}}(g, i)^{-k} f(gi) is a function on the group.
5.5. To the adelic group. Now we want to pass from a function \varphi on \mathrm{GL}_2(\mathbb{Z})\backslash G(\mathbb{R}) to \Phi on G(\mathbb{A}).
Recall strong approximation. Suppose we have a group G/\mathbb{Q} with S a finite set of places. The pair (G, S) satisfies strong approximation if
  \prod_{v \in S} G(\mathbb{Q}_v) \times G(\mathbb{Q})
is dense in G(\mathbb{A}). In practice we usually take S to be the archimedean places. This essentially means we can do the Chinese remainder theorem for the group G.
It is a fact that (\mathrm{SL}(2), \{\infty\}) has strong approximation. More generally, we have the

Theorem 5.4 (Kneser). If G is simply connected and \prod_{v \in S} G(\mathbb{Q}_v) is not compact, then (G, S) satisfies strong approximation.

But we will not need such generality. Let G = \mathrm{SL}_2. Then G(\mathbb{A}) = G(\mathbb{Q}) G(\mathbb{R}) K_f, where we take K_f = \prod_{v \text{ finite}} G(\mathbb{Z}_v). Note G(\mathbb{Z}_v) is a maximal compact for each v. Thus the map G(\mathbb{R}) \to G(\mathbb{Q})\backslash G(\mathbb{A})/K_f is onto.

Claim. G(\mathbb{Z})\backslash G(\mathbb{R}) \leftrightarrow G(\mathbb{Q})\backslash G(\mathbb{A})/K_f is a one-one correspondence.

Proof. Suppose g_\infty, g_\infty' \in G(\mathbb{R}) map to the same point in the double coset. Then
  g_\infty' = \gamma g_\infty k_f
where \gamma \in G(\mathbb{Q}) and k_f \in K_f. We can write \gamma = \gamma_\infty \cdot \gamma_f for the diagonal embedding G(\mathbb{Q}) \hookrightarrow G(\mathbb{A}), i.e., \gamma_\infty = (\gamma, 1, 1, \ldots) and \gamma_f = (1, \gamma, \gamma, \ldots). We have
  g_\infty' = \gamma g_\infty k_f = \gamma_\infty \gamma_f g_\infty k_f = \gamma_\infty g_\infty \gamma_f k_f.
Since g_\infty' and \gamma_\infty g_\infty are 1 at all finite places, we must have \gamma_f = k_f^{-1}, so
  \gamma \in \prod_{v \text{ finite}} \mathrm{SL}_2(\mathbb{Z}_v) \cap \mathrm{SL}_2(\mathbb{Q}) = \mathrm{SL}_2(\mathbb{Z}). □
We have already gone from f : \Gamma\backslash\mathbb{H} \to \mathbb{C} to \varphi_f : \Gamma\backslash G(\mathbb{R}) \to \mathbb{C}. Finally, we can go from \varphi_f to a function on G(\mathbb{Q})\backslash G(\mathbb{A})/K_f by defining
  F(g_{\mathbb{Q}} g_\infty k_f) = \varphi_f(g_\infty).
The claim above shows that this is well-defined.
Remember the point of this course is to compute constant terms. What happens to them under this identification? We have
  \int_0^1 f(x + iy) \, dx = \int_{N(\mathbb{Z})\backslash N(\mathbb{R})} \varphi_f(ng) \, dn
where N = \left\{\begin{pmatrix}1&*\\0&1\end{pmatrix} \in G(\mathbb{R})\right\} is the standard unipotent radical, and dn = dx. The coordinates are
  z = x + iy \longleftrightarrow g_z = \begin{pmatrix}\sqrt{y} & \frac{x}{\sqrt{y}} \\ 0 & \frac{1}{\sqrt{y}}\end{pmatrix}
and
  n g_z = \begin{pmatrix}\sqrt{y} & \frac{x+n}{\sqrt{y}} \\ 0 & \frac{1}{\sqrt{y}}\end{pmatrix}.
5.6. Eisenstein series (adelically). Recall that
  E(z, s) = \sum_{\substack{c \geq 0 \\ \gcd(c,d)=1}} \frac{y^{s+\frac{1}{2}}}{|cz + d|^{2s+1}} = \sum_{\gamma \in \pm\Gamma_\infty\backslash \mathrm{SL}_2(\mathbb{Z})} \mathrm{Im}(\gamma z)^{s+\frac{1}{2}}.
We lift this to
  E_{\mathrm{SL}_2}(g, s) := E(gi, s).
We do not need to deal with the j-factors because E(z, s) is already invariant. E_{\mathrm{SL}_2} is defined on \mathrm{SL}_2(\mathbb{Z})\backslash \mathrm{SL}_2(\mathbb{R}).

Remark. Diagonal matrices \gamma = \begin{pmatrix}a&\\&a\end{pmatrix} act trivially, so we can lift E_{\mathrm{SL}_2} to functions on \mathrm{GL}_2 that are invariant under the center as well.

To define the adelic Eisenstein series, let us review the Iwasawa decomposition. In general, G(k) = N(k) A(k) K(k), where N is the unipotent radical, A is the diagonal matrices and K is a maximal compact. K looks significantly different for archimedean and non-archimedean places.
For g \in \mathrm{GL}_2(\mathbb{R}),
  g = \begin{pmatrix}a & b \\ c & d\end{pmatrix} = \begin{pmatrix}\frac{\det(g)}{\sqrt{c^2+d^2}} & \frac{ac+bd}{\sqrt{c^2+d^2}} \\ 0 & \sqrt{c^2+d^2}\end{pmatrix} \begin{pmatrix}\frac{d}{\sqrt{c^2+d^2}} & \frac{-c}{\sqrt{c^2+d^2}} \\ \frac{c}{\sqrt{c^2+d^2}} & \frac{d}{\sqrt{c^2+d^2}}\end{pmatrix}
where the two matrices are in NA and K respectively. We will need this for calculations later.

Exercise. Prove this.
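The explicit decomposition above is easy to verify numerically (a sketch, not part of the notes): multiply the two factors back together and check that the second factor is a rotation.

```python
import math

def iwasawa_gl2(a, b, c, d):
    """Split g = (a b; c d) with det g > 0 as p * k, with p upper
    triangular (in NA) and k in SO(2), per the explicit formula."""
    det = a * d - b * c
    r = math.hypot(c, d)                         # sqrt(c^2 + d^2)
    p = ((det / r, (a * c + b * d) / r),
         (0.0, r))
    k = ((d / r, -c / r),
         (c / r, d / r))
    return p, k

def matmul(x, y):
    return tuple(tuple(sum(x[i][l] * y[l][j] for l in range(2))
                       for j in range(2)) for i in range(2))

g = (2.0, 1.0, 3.0, 5.0)                         # det = 7 > 0
p, k = iwasawa_gl2(*g)
recon = matmul(p, k)                             # should equal g again
```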
Then we define
  E_{\mathrm{GL}_2}(g, s) = E(gi, s) = \sum_{\gamma \in P(\mathbb{Z})\backslash \mathrm{GL}_2(\mathbb{Z})} |\mathrm{Im}(\gamma g i)|^{s+\frac{1}{2}}
where P = NA = \left\{\begin{pmatrix}*&*\\0&*\end{pmatrix} \in G\right\} is the parabolic. This can be expressed in terms of the function \varphi_{s,\infty} : \mathrm{GL}_2(\mathbb{R}) \to \mathbb{C} defined by
  g = g_P g_K \mapsto \left|\frac{\alpha_g}{\gamma_g}\right|^{s+\frac{1}{2}}
where g_P = \begin{pmatrix}\alpha_g & \beta_g \\ 0 & \gamma_g\end{pmatrix}.

Claim.
  E(i, s) = \sum_{\gamma \in P(\mathbb{Z})\backslash G(\mathbb{Z})} \varphi_{s,\infty}(\gamma).

Exercise. Prove this.
In general,
  E_{\mathrm{GL}_2}(g, s) = \sum_{\gamma \in P(\mathbb{Z})\backslash G(\mathbb{Z})} \varphi_{s,\infty}(\gamma g).
Now we are ready to work adelically. We will need to define a function at each place. Let p be a prime and define \varphi_{s,p} : G(\mathbb{Q}_p) \to \mathbb{C} by sending
  g \mapsto \left|\frac{\alpha_g}{\beta_g}\right|_p^{s+\frac{1}{2}}
where g = n_g a_g k_g with a_g = \begin{pmatrix}\alpha_g & \\ & \beta_g\end{pmatrix}. Finally, \varphi_s : G(\mathbb{A}) \to \mathbb{C} is defined as
  \varphi_s(g) = \varphi_{s,\infty}(g_\infty) \prod_p \varphi_{s,p}(g_p).

Definition 5.5.
  E(g, \varphi_s) = \sum_{\gamma \in P(\mathbb{Q})\backslash G(\mathbb{Q})} \varphi_s(\gamma g).

We need to convince ourselves that P(\mathbb{Q})\backslash G(\mathbb{Q}) \leftrightarrow P(\mathbb{Z})\backslash G(\mathbb{Z}), which is believable by clearing denominators. We will do this next time.
Let me end by describing the constant terms. Given any f : G(\mathbb{A}) \to \mathbb{C}, its constant term along the parabolic P is
  c_P(f, g) = \int_{N(\mathbb{Q})\backslash N(\mathbb{A})} f(ng) \, dn.

Proposition 5.6.
  c_P(E(-, \varphi_s), g) = \varphi_s(g) + \frac{\xi(2s)}{\xi(2s+1)} \varphi_{-s}(g).

We will prove this next time. This is analogous to the classical constant term
  y^{s+\frac{1}{2}} + \frac{\xi(2s)}{\xi(2s+1)} y^{\frac{1}{2}-s}.
More generally, for any adelic characters \mu_1, \mu_2 : \mathbb{Q}^\times\backslash\mathbb{A}^\times \to \mathbb{C}^\times, we can define
  \varphi_{\mu_1,\mu_2,s}\begin{pmatrix}\alpha & \\ & \beta\end{pmatrix} = \mu_1(\alpha)\mu_2(\beta)\left|\frac{\alpha}{\beta}\right|^{s+\frac{1}{2}}
and extend to all of G(\mathbb{A}). Then the Eisenstein series E(g, \varphi_{\mu_1,\mu_2,s}) has constant term
  c_P(E(-, \varphi_{\mu_1,\mu_2,s}), g) = \varphi_{\mu_1,\mu_2,s}(g) + \frac{L^*(2s, \mu_1\mu_2^{-1})}{L^*(2s+1, \mu_1\mu_2^{-1})} \varphi_{\mu_2,\mu_1,-s}(g)
where L^* is the completed L-function.
6. Lecture 6 (February 10, 2015)
6.1. Last time. Last time we finally left the realm of classical forms. We started with f : \mathbb{H} \to \mathbb{C}, a Maass form or modular form on \mathrm{SL}_2(\mathbb{Z}), and associated to it \varphi_f : \mathrm{SL}_2(\mathbb{Z})\backslash \mathrm{GL}_2(\mathbb{R}) \to \mathbb{C} by multiplying the cocycle with f evaluated at gi. Using strong approximation, we cooked up a function \varphi : \mathrm{GL}_2(\mathbb{Q})\backslash \mathrm{GL}_2(\mathbb{A}) \to \mathbb{C}.
We will almost never turn back to the upper half plane picture, unless we need some intuition. Instead we will focus on functions on the adelic group.
6.2. Eisenstein series (adelic). Let me start by recalling the Iwasawa decomposition
  G = NAK.
(These decompositions were known before they were so named, at least for special groups.) We will talk about general reductive groups later, but for now, we have G = \mathrm{GL}(2), N = \left\{\begin{pmatrix}1&*\\&1\end{pmatrix}\right\} \subseteq G, A = \left\{\begin{pmatrix}*&\\&*\end{pmatrix}\right\} \subseteq G and K a maximal compact. Over any ring, N and A keep the same form but K will change. Over \mathbb{R}, we have
  G(\mathbb{R}) = N(\mathbb{R}) A(\mathbb{R}) K(\mathbb{R}),
where we take K(\mathbb{R}) = \mathrm{SO}(2), not \mathrm{O}(2), because we can change the determinant by A. We write
  g = \begin{pmatrix}a&b\\c&d\end{pmatrix} = p_g k_g
where p_g \in NA = P and k_g \in K. More explicitly,
  g = \begin{pmatrix}a & b \\ c & d\end{pmatrix} = \begin{pmatrix}\frac{\det(g)}{\sqrt{c^2+d^2}} & \frac{ac+bd}{\sqrt{c^2+d^2}} \\ 0 & \sqrt{c^2+d^2}\end{pmatrix} \begin{pmatrix}\frac{d}{\sqrt{c^2+d^2}} & \frac{-c}{\sqrt{c^2+d^2}} \\ \frac{c}{\sqrt{c^2+d^2}} & \frac{d}{\sqrt{c^2+d^2}}\end{pmatrix}.
We will use this in a second.

Exercise. Prove this.

There is an analogous decomposition over any p-adic field. Now we are looking at
  G(\mathbb{Q}_p) = N(\mathbb{Q}_p) A(\mathbb{Q}_p) K(\mathbb{Q}_p)
where K(\mathbb{Q}_p) = \mathrm{GL}_2(\mathbb{Z}_p). Multiplying by an integral matrix on the right corresponds to column operations, so the valuations of c and d play a role. Indeed,
  g = \begin{pmatrix}a&b\\c&d\end{pmatrix} = p_g k_g = \begin{cases} \begin{pmatrix}\frac{\det g}{d} & b \\ 0 & d\end{pmatrix}\begin{pmatrix}1 & 0 \\ \frac{c}{d} & 1\end{pmatrix} & \text{if } \frac{c}{d} \in \mathbb{Z}_p, \\[4pt] \begin{pmatrix}\frac{\det g}{c} & a \\ 0 & c\end{pmatrix}\begin{pmatrix}0 & -1 \\ 1 & \frac{d}{c}\end{pmatrix} & \text{if } \frac{d}{c} \in \mathbb{Z}_p. \end{cases}
We will not need this, because this decomposition is not unique and we will be talking about functions that are invariant under the maximal compact.

Exercise. Prove this.
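The p-adic case split can likewise be checked with exact rational arithmetic (a sketch, not part of the notes): the product of the two factors recovers g, and the second factor has entries in \mathbb{Z}_p with unit determinant.

```python
from fractions import Fraction

def vp(x, p):
    """p-adic valuation of a nonzero rational."""
    num, den, v = x.numerator, x.denominator, 0
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return v

def in_Zp(x, p):
    return x == 0 or vp(x, p) >= 0

def iwasawa_gl2_qp(a, b, c, d, p):
    """Split an invertible g = (a b; c d) over Q as pg * kg with pg upper
    triangular and kg in GL2(Zp), following the case split in the text."""
    det = a * d - b * c
    zero, one = Fraction(0), Fraction(1)
    if d != 0 and in_Zp(c / d, p):
        pg = ((det / d, b), (zero, d))
        kg = ((one, zero), (c / d, one))
    else:                                   # then d/c is in Zp
        pg = ((det / c, a), (zero, c))
        kg = ((zero, -one), (one, d / c))
    return pg, kg

def matmul(x, y):
    return tuple(tuple(sum(x[i][l] * y[l][j] for l in range(2))
                       for j in range(2)) for i in range(2))

p = 5
g = (Fraction(1, 5), Fraction(2), Fraction(3), Fraction(5))
pg, kg = iwasawa_gl2_qp(*g, p)
```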
We will start with a function on A, extend it to P = NA trivially (note that A normalizes N), and induce to all of G.
We will adopt the following convention: p will denote a finite prime of \mathbb{Q}, and v will denote any place of \mathbb{Q} (including \infty³).

Definition 6.1. Define \varphi_s : G(\mathbb{A}) \to \mathbb{C} by
  \varphi_s = \prod_v \varphi_{s,v}
where each \varphi_{s,v} : G(\mathbb{Q}_v) \to \mathbb{C} is given by
  \varphi_{s,v}(g_v) = \varphi_{s,v}(n_{g_v} a_{g_v} k_{g_v}) = \varphi_{s,v}(a_{g_v}) = \left|\frac{\alpha_{g_v}}{\beta_{g_v}}\right|_v^{s+\frac{1}{2}}
where a_{g_v} = \begin{pmatrix}\alpha_{g_v} & 0 \\ 0 & \beta_{g_v}\end{pmatrix}.

Definition 6.2.
  E(g, \varphi_s) = \sum_{\gamma \in P(\mathbb{Q})\backslash G(\mathbb{Q})} \varphi_s(\gamma g).

This is ad hoc notation, which will be improved when we start talking about general Eisenstein series.

Exercise. If we take g = (g_{\infty,z}, 1, \ldots, 1, \ldots) where g_{\infty,z} = \begin{pmatrix}\sqrt{y} & \frac{x}{\sqrt{y}} \\ & \frac{1}{\sqrt{y}}\end{pmatrix}, then E(z, s) = E(g_{\infty,z}, \varphi_{s,\infty}).
6.3. Constant term. The general calculation will follow the same lines, modulo a few complications. If we can break the group into double cosets modulo P, then we are in good shape. Recall the Bruhat decomposition
  \mathrm{GL}_2 = B \sqcup B\begin{pmatrix}&1\\1&\end{pmatrix}N
where B = \left\{\begin{pmatrix}*&*\\0&*\end{pmatrix}\right\} \subset \mathrm{GL}_2.

Exercise. Prove this.

These are straightforward for \mathrm{GL}_2. In general, this is essentially Gaussian elimination, and we will have
  G = \coprod_{w \in W} BwN
where W is the Weyl group, or even more generally
  G = \coprod_{w \in W_P} PwN_P
for any group with a Tits system.
One consequence of the Bruhat decomposition is that
  B\backslash \mathrm{GL}_2 \leftrightarrow \begin{pmatrix}1&\\&1\end{pmatrix} \sqcup wN
where w = \begin{pmatrix}&1\\1&\end{pmatrix}. One needs to check that these are not equivalent under B.
Here B is a Borel, but it can be replaced by any parabolic subgroup. Given a function on any group, we can define its constant term along any parabolic. We can think of parabolics as ways of going to infinity. For \mathrm{GL}_2, there is only one way. For \mathrm{GL}_3, there are three ways. We can think of a manifold with pinches, where the pinches can have various dimensions as the dimension of the manifold goes higher.
Recall that
  c_{P,E(-,\varphi_s)}(g) = \int_{N_B(\mathbb{Q})\backslash N_B(\mathbb{A})} E(ng, \varphi_s) \, dn.

³John Conway likes to call this -1.
Classically, the integral is computed over N(\mathbb{Z})\backslash N(\mathbb{R}). From now on the argument is straightforward:
  c_{P,E(-,\varphi_s)}(g) = \int_{N_B(\mathbb{Q})\backslash N_B(\mathbb{A})} \sum_{\gamma \in B(\mathbb{Q})\backslash G(\mathbb{Q})} \varphi_s(\gamma ng) \, dn = \int_{N_B(\mathbb{Q})\backslash N_B(\mathbb{A})} \sum_{\gamma \in \{1\} \sqcup wN_B(\mathbb{Q})} \varphi_s(\gamma ng) \, dn
by the Bruhat decomposition, where w = \begin{pmatrix}&1\\1&\end{pmatrix}. Dropping the dependence of N_B on B, we write
  c_B(g) = \int_{N(\mathbb{Q})\backslash N(\mathbb{A})} \varphi_s(ng) \, dn + \int_{N(\mathbb{Q})\backslash N(\mathbb{A})} \sum_{\gamma \in wN(\mathbb{Q})} \varphi_s(\gamma ng) \, dn.
We will see that these two terms correspond to y^{s+\frac{1}{2}} and y^{\frac{1}{2}-s} in the classical constant term respectively.
The first integral is easy. Since \varphi_s(ng) = \varphi_s(g) for all n \in N, we have
  \int_{N(\mathbb{Q})\backslash N(\mathbb{A})} \varphi_s(ng) \, dn = \varphi_s(g) \int_{N(\mathbb{Q})\backslash N(\mathbb{A})} dn.
We take the measure normalization \int_{N(\mathbb{Q})\backslash N(\mathbb{A})} dn = 1. This is really the classical normalization as done by Tate: N(\mathbb{A}) = \mathbb{A} and N(\mathbb{Q}) = \mathbb{Q}, so \mathbb{Q}\backslash\mathbb{A} \simeq (\mathbb{Z}\backslash\mathbb{R}) \cdot \hat{\mathbb{Z}} with measures dx on \mathbb{Z}\backslash\mathbb{R} and d\mu on \mathbb{Q}_p respectively, normalized such that \mu(\mathbb{Z}_p) = 1.
The second integral can be written as
  \int_{N(\mathbb{Q})\backslash N(\mathbb{A})} \sum_{n_0 \in N(\mathbb{Q})} \varphi_s(w n_0 n g) \, dn = \int_{N(\mathbb{A})} \varphi_s(wng) \, dn = \prod_v \int_{N(\mathbb{Q}_v)} \varphi_{s,v}(w n_v g_v) \, dn_v.
Again this corresponds to the classical computation
  \sum_n \int_0^1 \frac{dx}{|x + n + iy|^c} = \int_{\mathbb{R}} \frac{dx}{|x + iy|^c}
by unfolding. Note we chose our function \varphi_s to be a product function; not all adelic functions are product functions. Let me call
call
cB,v(g) =
∫N(Qv)
ϕs,v(wngv) dn.
We are left with local computations, which are not very
hard.
(1) At v = \infty,
  c_{B,\infty}(g) = \int_{N(\mathbb{R})} \varphi_{s,\infty}(w n g_\infty) \, dn = \int_{\mathbb{R}} \varphi_{s,\infty}\left(\begin{pmatrix}&-1\\1&\end{pmatrix}\begin{pmatrix}1&n\\0&1\end{pmatrix} g_\infty\right) dn.
Here is a simple calculation. Say g_\infty = n_g a_g k_g.

Claim.
  \begin{pmatrix}&-1\\1&\end{pmatrix}\begin{pmatrix}1&N\\0&1\end{pmatrix}\begin{pmatrix}A&\\&B\end{pmatrix} = \begin{pmatrix}\frac{AB}{\sqrt{A^2+N^2B^2}} & \frac{-B^2N}{\sqrt{A^2+N^2B^2}} \\ 0 & \sqrt{A^2+N^2B^2}\end{pmatrix}
modulo \mathrm{SO}(2) on the right.

Exercise. Prove this (using the Iwasawa decomposition).

This implies that
  w n g_\infty = \begin{pmatrix}\frac{\alpha_g\beta_g}{\sqrt{\alpha_g^2+(n+n_g)^2\beta_g^2}} & * \\ 0 & \sqrt{\alpha_g^2+(n+n_g)^2\beta_g^2}\end{pmatrix}
modulo \mathrm{SO}(2), where a_g = \begin{pmatrix}\alpha_g&\\&\beta_g\end{pmatrix}, so
  \varphi_{s,\infty}(w n g_\infty) = \left|\frac{\alpha_g\beta_g}{\alpha_g^2 + (n_g+n)^2\beta_g^2}\right|_\infty^{s+\frac{1}{2}}
and the constant term is
  c_{B,\infty}(g) = \int_{\mathbb{R}} \frac{|\alpha_g\beta_g|^{s+\frac{1}{2}}}{(\alpha_g^2 + n^2\beta_g^2)^{s+\frac{1}{2}}} \, dn.
Under n \mapsto \frac{\alpha_g}{\beta_g} n, we get
  c_{B,\infty}(g) = |\alpha_g\beta_g|^{s+\frac{1}{2}} \left|\frac{\alpha_g}{\beta_g}\right| \int_{\mathbb{R}} \frac{dn}{(\alpha_g^2 + n^2\alpha_g^2)^{s+\frac{1}{2}}} = \frac{|\beta_g^{s-\frac{1}{2}}\alpha_g^{s+\frac{3}{2}}|}{|\alpha_g^{2s+1}|} \int_{\mathbb{R}} \frac{dn}{(1+n^2)^{s+\frac{1}{2}}} = \left|\frac{\alpha_g}{\beta_g}\right|^{\frac{1}{2}-s} \int_{\mathbb{R}} \frac{dn}{(1+n^2)^{s+\frac{1}{2}}} = \varphi_{-s,\infty}(g_\infty) \cdot \frac{\Gamma(\frac{1}{2})\Gamma(s)}{\Gamma(s+\frac{1}{2})},
which comes from a specialization of the beta function
  \frac{\Gamma(a)\Gamma(b)}{\Gamma(a+b)} = 2\int_0^\infty \frac{v^{2a-1}}{(v^2+1)^{a+b}} \, dv.
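The archimedean intertwining factor \int_{\mathbb{R}} dn/(1+n^2)^{s+\frac{1}{2}} = \Gamma(\frac{1}{2})\Gamma(s)/\Gamma(s+\frac{1}{2}) is easy to sanity-check numerically (a sketch, not part of the notes); at s = 1 both sides equal 2.

```python
import math

def intertwining_factor(s, limit=500.0, n=400000):
    # midpoint approximation of \int_R dn / (1 + n^2)^{s + 1/2}
    h = 2.0 * limit / n
    total = 0.0
    for i in range(n):
        x = -limit + (i + 0.5) * h
        total += h / (1.0 + x * x) ** (s + 0.5)
    return total

def gamma_side(s):
    # Gamma(1/2) Gamma(s) / Gamma(s + 1/2)
    return math.gamma(0.5) * math.gamma(s) / math.gamma(s + 0.5)

approx = intertwining_factor(1.0)
exact = gamma_side(1.0)                  # equals 2
```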
We have seen that the trivial Bruhat cell gives \varphi_s, and the non-trivial one gives \varphi_{-s} with an intertwining operator.
(2) At v = p,
  c_{B,p}(g) = \int_{N(\mathbb{Q}_p)} \varphi_{s,p}(w n g_p) \, dn.
We want to get the Iwasawa decomposition of w n g_p.

Claim.
  \begin{pmatrix}&-1\\1&\end{pmatrix}\begin{pmatrix}1&N\\&1\end{pmatrix}\begin{pmatrix}A&\\&B\end{pmatrix} = \begin{cases} \begin{pmatrix}\frac{A}{N} & -B \\ 0 & NB\end{pmatrix} & \text{if } \frac{A}{NB} \in \mathbb{Z}_p, \\[4pt] \begin{pmatrix}B & 0 \\ 0 & A\end{pmatrix} & \text{if } \frac{NB}{A} \in \mathbb{Z}_p, \end{cases}
modulo K_p on the right.

This implies, modulo K_p, that
  w n g_p = \begin{cases} \begin{pmatrix}\frac{\alpha_g}{n+n_g} & -\beta_g \\ 0 & (n+n_g)\beta_g\end{pmatrix} & \text{if } \frac{\alpha_g}{(n+n_g)\beta_g} \in \mathbb{Z}_p, \\[4pt] \begin{pmatrix}\beta_g & 0 \\ 0 & \alpha_g\end{pmatrix} & \text{if } \frac{(n+n_g)\beta_g}{\alpha_g} \in \mathbb{Z}_p, \end{cases}
so
  \varphi_{s,p}(w n g_p) = \begin{cases} \left|\frac{\alpha_g}{\beta_g(n+n_g)^2}\right|_p^{s+\frac{1}{2}} & \text{if } \frac{\alpha_g}{(n+n_g)\beta_g} \in \mathbb{Z}_p, \\[4pt] \left|\frac{\beta_g}{\alpha_g}\right|_p^{s+\frac{1}{2}} & \text{if } \frac{(n+n_g)\beta_g}{\alpha_g} \in \mathbb{Z}_p. \end{cases}
This implies
  \int_{\mathbb{Q}_p} \varphi_{s,p}\left(w\begin{pmatrix}1&n\\0&1\end{pmatrix}g_p\right) dn = \int_{v_p(n) \leq v_p\left(\frac{\alpha_g}{\beta_g}\right)} \varphi_{s,p}\left(\begin{pmatrix}\frac{\alpha_g}{n} & \\ & n\beta_g\end{pmatrix}\right) dn + \int_{v_p(n) > v_p\left(\frac{\alpha_g}{\beta_g}\right)} \varphi_{s,p}\left(\begin{pmatrix}\beta_g & \\ & \alpha_g\end{pmatrix}\right) dn.
Recall that dn is such that \int_{\mathbb{Z}_p} dn = 1. The rest is an easy exercise, and we get
  \left|\frac{\alpha_g}{\beta_g}\right|_p^{\frac{1}{2}-s} \left\{\sum_{k=0}^\infty \frac{p^k}{p^{2sk+k}}\left(1 - \frac{1}{p}\right) + \frac{1}{p}\right\} = \left|\frac{\alpha_g}{\beta_g}\right|_p^{\frac{1}{2}-s} \frac{1 - \frac{1}{p^{2s+1}}}{1 - \frac{1}{p^{2s}}}.
This is a nice calculation essentially following Tate's thesis. The only thing to keep in mind is that \int_{\mathbb{Z}_p^\times} dn = 1 - \frac{1}{p}. So the constant term at p is
  c_{B,p}(g_p) = \varphi_{-s,p}(g_p) \cdot \frac{\zeta_p(2s)}{\zeta_p(2s+1)}.
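The geometric series evaluation in braces can be verified exactly with rational arithmetic (a sketch, not part of the notes): summing the series in closed form, the bracket equals \zeta_p(2s)/\zeta_p(2s+1) for every prime p, here checked at positive integer values of s.

```python
from fractions import Fraction

def tate_local_sum(p, s):
    # closed form of (1 - 1/p) * sum_{k>=0} p^{-2sk} plus the boundary term 1/p
    r = Fraction(1, p ** (2 * s))            # p^{-2s}, for s a positive integer
    return (1 - Fraction(1, p)) / (1 - r) + Fraction(1, p)

def zeta_ratio(p, s):
    # zeta_p(2s) / zeta_p(2s+1) = (1 - p^{-2s-1}) / (1 - p^{-2s})
    return (1 - Fraction(1, p ** (2 * s + 1))) / (1 - Fraction(1, p ** (2 * s)))

checks = [(p, s, tate_local_sum(p, s) == zeta_ratio(p, s))
          for p in (2, 3, 5, 7) for s in (1, 2, 3)]
```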
Therefore, the constant term of E(g, \varphi_s) is
  c_{B,E(-,\varphi_s)}(g) = \varphi_s(g) + \frac{\xi(2s)}{\xi(2s+1)} \varphi_{-s}(g).
For \mathrm{GL}(n), the Weyl group is S_n. If we try to do these integrals, for each Weyl element we will bring the 1's to the appropriate rows and columns using the Iwasawa decomposition.
The boring calculation above is illustrative. It brings out the intertwining operator between s and -s clearly.
At the beginning of the lecture we defined \varphi_s : G(\mathbb{A}) \to \mathbb{C}. Let me now define a slightly more general \varphi_s. Let \mu_1, \mu_2 : \mathbb{Q}^\times\backslash\mathbb{A}^\times \to \mathbb{C}^\times be two characters, with \mu_1 = \bigotimes \mu_{1,v} and \mu_2 = \bigotimes \mu_{2,v}. Define
  \varphi_{(\mu_1,\mu_2,s)}(g) = \prod_v \varphi_{(\mu_{1,v},\mu_{2,v},s)}(g_v)
where the functions \varphi_{(\mu_{1,v},\mu_{2,v},s)} are again defined using the Iwasawa decomposition g_v = n_v a_v k_v.
For v where \mu_1 and \mu_2 are unramified, i.e., \mu_1|_{\mathbb{Z}_v^\times} = \mu_2|_{\mathbb{Z}_v^\times} = 1 (so they depend on the uniformizer only), we have
  \varphi_{(\mu_{1,v},\mu_{2,v},s)}(g_v) = \mu_{1,v}(\alpha_{g_v}) \mu_{2,v}(\beta_{g_v}) \left|\frac{\alpha_{g_v}}{\beta_{g_v}}\right|_v^{s+\frac{1}{2}}
where a_v = \begin{pmatrix}\alpha_{g_v}&\\&\beta_{g_v}\end{pmatrix}.

Exercise. Calculate
  \int_{N(\mathbb{Q}_p)} \varphi_{(\mu_{1,p},\mu_{2,p},s)}(w n g_p) \, dn = \varphi_{(\mu_{2,p},\mu_{1,p},-s)}(g_p) \cdot \frac{L_p(2s, \mu_1\mu_2^{-1})}{L_p(2s+1, \mu_1\mu_2^{-1})}.

In general, for the ramified places, we can consider an analogous decomposition where the maximal compact K is replaced by something smaller.
Next time I will start talking about the general theory of reductive groups.
7. Lecture 7 (February 12, 2015)
7.1. Algebraic group theory. Today I will go over things like roots, weights and the structure theory of groups. We will give examples as we go on. This is necessary because the Eisenstein series is defined by some data on the group. If you have not seen this before, the basic objects are characters and cocharacters. Let us fix a field F, which we assume is a finite extension of \mathbb{Q}.
There is a theory over F and over \bar{F}, which can be connected using cohomological descent. We will study the theory over F only, i.e., focus on split groups, because that is the case considered in Langlands' book Euler Products. Eventually we may do something with quasi-split groups.
Definition 7.1. An algebraic group defined over F is an F-variety with multiplication and inversion F-morphisms.

Algebraic groups naturally split into two categories, namely abelian varieties and linear algebraic groups, but we will not talk about abelian varieties.

Definition 7.2. A linear algebraic group over F is a Zariski closed subgroup of \mathrm{GL}(N).

From now on, whenever we say "algebraic group" it will mean "linear algebraic group".

Example 7.3.
(1) G = \mathrm{GL}(N).
(2) G = \mathrm{SL}(N).
Once we leave the realm of these two, we encounter the problem of representing these groups. We will need to fix a symplectic form, which we might change over time.
(3) G = \mathrm{Sp}(2n) = \{g \in \mathrm{GL}(2n) \mid {}^t g J g = J\}, where J is the anti-diagonal matrix whose first n anti-diagonal entries are -1 and whose last n are 1.
(4) Let K/F be a separable extension, and H an algebraic group over K. Then there exists an algebraic group G = \mathrm{Res}_{K/F}(H) over F such that G(F) = H(K). There is an actual construction of this using the coordinate ring, but I will give you a more concrete example.
For example, let K/F be a quadratic extension, so K = F(\sqrt{d}) where d \in F^\times \setminus (F^\times)^2. Take H = \mathbb{G}_m. Then
  \mathrm{Res}_{K/F}(H)(F) = \left\{\begin{pmatrix}a & bd \\ b & a\end{pmatrix} \,\middle|\, a^2 - b^2 d \neq 0,\ a, b \in F\right\}.
All tori over quadratic étale algebras arise this way. Note \mathrm{Res}_{\mathbb{C}/\mathbb{R}}(\mathbb{G}_m) = \mathbb{S} is the Deligne torus, which is related to Hodge theory.
(5) Unitary group. For this example, let F/F^+ be a quadratic extension with conjugation c, and V an n-dimensional vector space over F with a Hermitian pairing h : V \times V \to F, i.e., h(\alpha, \lambda\beta) = c(\lambda) h(\alpha, \beta). Then
  U(V) = \{g \in \mathrm{GL}(V) \mid h(g\alpha, g\beta) = h(\alpha, \beta) \text{ for all } \alpha, \beta \in V\}.

Remark. Given any one of these defined over \mathbb{Z}, one can consider its points over any ring.
Example 7.4. \mathrm{GL}_N(R) = \{g \in \mathrm{Mat}_N(R) \mid \det(g) \in R^\times\}.
Now we will talk about the radical of a group, because we want to avoid the solvable part.

Definition 7.5 (Radical). Let G be a connected algebraic group. The radical R(G) of G is the maximal connected solvable normal subgroup of G.

Definition 7.6 (Unipotent radical). The unipotent radical R_u(G) is the maximal connected unipotent normal subgroup of G.

Example 7.7.
• For G = \mathrm{GL}(N), R(G) = Z(G)^0 (the identity component of the center) and R_u(G) = 1.
• For G = \mathrm{SL}(N), R(G) = 1 and R_u(G) = 1.
• For B the Borel subgroup of \mathrm{GL}(N) consisting of the upper triangular matrices, R(B) = B and R_u(B) is the set of upper triangular matrices with 1's on the diagonal.

Note R_u(G) is always a subgroup of R(G). We defined these in order to give the

Definition 7.8. An algebraic group G is semisimple if R(G) = 1, and reductive if R_u(G) = 1.

Thus \mathrm{GL}_n is reductive but not semisimple, \mathrm{SL}_n and \mathrm{PGL}_n are semisimple, and B is neither reductive nor semisimple.
For any such group, we have the Levi decomposition, which will be used all the time in the parabolic case. Let G be connected over F. Then there exists a reductive M such that
  G = M \cdot R_u(G).
This is a semi-direct product, where M normalizes R_u(G).
If G is reductive, then G = R(G) \cdot G' where G' = [G, G] is the derived group, and R(G) \cap [G, G] is finite. For example, for \mathrm{GL}(N), this intersection is just the roots of unity.

Example 7.9 (Levi decomposition). For the Borel, B = M \cdot R_u(B) where M is the set of diagonal matrices and R_u(B) is the set of upper triangular matrices with 1's on the diagonal.

Example 7.10. For G = P_{2,3,1} \subseteq \mathrm{GL}(6) consisting of block upper triangular matrices with block sizes (2, 3, 1), M is the block diagonal matrices and R_u(G) is the subgroup of matrices with identity matrices on the block diagonal, i.e., in block form with A \in \mathrm{GL}(2), B \in \mathrm{GL}(3), c \in \mathrm{GL}(1),
  \begin{pmatrix} A & * & * \\ & B & * \\ & & c \end{pmatrix} = \begin{pmatrix} A & & \\ & B & \\ & & c \end{pmatrix} \cdot \begin{pmatrix} I_2 & * & * \\ & I_3 & * \\ & & 1 \end{pmatrix}.
Now comes the important piece: the torus. The structure theory goes through this.

Definition 7.11 (Algebraic torus). An algebraic group T defined over F is called a torus if T \simeq \mathbb{G}_m^k over \bar{F}.

Example 7.12.
• Diagonal matrices in \mathrm{GL}(N).
• \mathrm{Res}_{K/F}(\mathbb{G}_m).
Definition 7.13 (Split torus). A torus is called split over F if T \simeq \mathbb{G}_m^k over F.

I will say a couple of their properties in a second.

Definition 7.14 (Borel). A Borel subgroup is a maximal connected solvable subgroup of G.

This may not exist over the base field, but always exists over \mathbb{C}.

Definition 7.15 (Quasi-split group). A group is called quasi-split over F if it has a Borel defined over F.

Everything I wrote down is quasi-split so far. The theory of automorphic forms goes into two parts: quasi-split groups, and the inner forms of these groups. We will not talk about the inner forms. In order to define the Eisenstein series, we need the Borel.
Note that tori and Borel subgroups are not canonically defined, but they are all conjugate to each other. More precisely, we have the following properties:
(1) All Borels are F-conjugate. (More generally, all minimal parabolics are F-conjugate.)
(2) All maximal split tori are F-conjugate.

Definition 7.16 (Rank). The F-rank of a group is the dimension of a (hence all) maximal split torus over F.

Once we have all these definitions, we will start talking about weights and roots attached to them. We linearize the theory by looking at the Lie algebra, which can be decomposed into subspaces according to the characters of the maximal torus. Since these are nice vector spaces, we can calculate with them instead of looking at the group itself.
7.2. Roots, weights, etc.

Definition 7.17 (F-character lattice). X^*_F(G) = \mathrm{Hom}_F(G, \mathbb{G}_m).

Example 7.18. Take G = \mathbb{G}_m. Then X^*(G) \simeq \mathbb{Z}, with k \in \mathbb{Z} corresponding to the character t \mapsto t^k.

Example 7.19. Take K = \mathbb{C} and F = \mathbb{R}, and consider G = \mathrm{Res}_{K/F}(\mathbb{G}_m). Then
  G(\mathbb{R}) = \left\{\begin{pmatrix}a & -b \\ b & a\end{pmatrix} \,\middle|\, a, b \in \mathbb{R},\ a^2 + b^2 \neq 0\right\} \simeq \mathbb{R}^+ \cdot \mathrm{SO}(2) \simeq \mathbb{C}^\times
and G(\mathbb{C}) \simeq \mathbb{G}_m^2. We have X^*_{\mathbb{R}}(G) \simeq \mathbb{Z} and X^*_{\mathbb{C}}(G) \simeq \mathbb{Z}^2.

Definition 7.20 (F-anisotropic). A torus is called F-anisotropic if X^*_F(T) = \{1\}.

Example 7.21. \mathrm{SO}(2, \mathbb{R}) is anisotropic, because all (non-trivial) characters are defined over \mathbb{C}.

Every torus has an anisotropic (compact) part and a split part.

Definition 7.22 (Cocharacter lattice). X_*(T) = \mathrm{Hom}_F(\mathbb{G}_m, T).
Example 7.23. X_*(\mathbb{G}_m) \simeq \mathbb{Z}, and more generally X_*(\mathrm{GL}_n) \simeq \mathbb{Z}^n, where the i-th standard generator is the cocharacter
  \alpha \mapsto \mathrm{diag}(1, \ldots, 1, \alpha, 1, \ldots, 1)
with \alpha in the i-th slot.

The Weyl group will lead to the classification of roots and weights.

Definition 7.24 (Weyl group). Let T be a maximal F-split torus of G. Then
  W_F(G) := N_G(T)/Z_G(T).
Every s \in N_G(T) acts on T via s \mapsto (w_s : t \mapsto sts^{-1}).

Example 7.25.
• For G = \mathrm{GL}(n), W_F(G) = S_n.
• For G = \mathrm{Sp}(2n), W_F(G) = (\mathbb{Z}/2\mathbb{Z})^n \rtimes S_n.
• For G = \mathrm{SO}(2n+1), W_F(G) = (\mathbb{Z}/2\mathbb{Z})^n \rtimes S_n.
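These Weyl groups can be generated by brute force in small rank (a sketch, not part of the lecture): realize the simple reflections of the rank-2 case B_2 (equivalently C_2) as integer matrices acting on the character lattice \mathbb{Z}^2 and close under multiplication; one recovers |W(B_2)| = 2^2 \cdot 2! = 8, and a single reflection generates a group of order 2.

```python
def matmul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def generate(gens):
    """Closure of a set of 2x2 integer matrices under multiplication."""
    I = ((1, 0), (0, 1))
    group = {I}
    frontier = [I]
    while frontier:
        g = frontier.pop()
        for s in gens:
            h = matmul(s, g)
            if h not in group:
                group.add(h)
                frontier.append(h)
    return group

# Simple reflections for B2: s_{e1-e2} swaps coordinates, s_{e2} flips the second sign.
swap = ((0, 1), (1, 0))
flip = ((1, 0), (0, -1))
W_B2 = generate([swap, flip])     # the signed permutations of 2 letters
```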
For the rest of the class, I will give a classification of Chevalley groups. Langlands' book is on Chevalley groups, which are defined over \mathbb{Z}. They are split, and essentially among the simplest groups one can consider, but also fairly general.
From now on, we fix the following notation:
• G will be a connected reductive group over F.
• \mathrm{Lie}(G) is the Lie algebra of G. One can define it using derivations, or more canonically as
  \mathrm{Lie}(G) = \ker\left(G(F[t]/t^2) \to G(F)\right).
• T is a maximal split torus of G. Then T acts on \mathrm{Lie}(G) by X \mapsto tXt^{-1}. We get
  \mathrm{Lie}(G) = \mathfrak{g}_0^T \oplus \bigoplus_{\alpha \in \Phi} \mathfrak{g}_\alpha^T
where
  \mathfrak{g}_\alpha^T = \{X \mid tXt^{-1} = \alpha(t)X\}
for \alpha \in X^*(T).

Definition 7.26 (Roots). The elements of \Phi are called F-roots.

Example 7.27. Take G = \mathrm{GL}(n) and T the torus consisting of all diagonal matrices. Let \alpha_j : T \to \mathbb{C}^\times be \mathrm{diag}(a_1, \ldots, a_n) \mapsto a_j. Then the roots are given by e_{ij} = \alpha_i - \alpha_j for all 1 \leq i \neq j \leq n, and \mathfrak{g}_{e_{ij}} is spanned by the matrix with 1 at the (i,j)-entry and 0's elsewhere.
Example 7.28. Take G = \mathrm{Sp}(4) and T = \{\mathrm{diag}(t_1, t_2, t_2^{-1}, t_1^{-1})\}. Define
  \alpha_1(\mathrm{diag}(t_1, t_2, t_2^{-1}, t_1^{-1})) = t_1 \quad \text{and} \quad \alpha_2(\mathrm{diag}(t_1, t_2, t_2^{-1}, t_1^{-1})) = t_2.
Then the roots are
  \Phi = \{\pm(\alpha_1 \pm \alpha_2), \pm 2\alpha_1, \pm 2\alpha_2\}.
The Lie algebra of G is
  \mathfrak{g} = \left\{\begin{pmatrix}A & B \\ C & D\end{pmatrix} \,\middle|\, A = \begin{pmatrix}u & v \\ w & x\end{pmatrix},\ D = \begin{pmatrix}-x & -v \\ -w & -u\end{pmatrix},\ B = \begin{pmatrix}\alpha & \beta \\ \beta' & \alpha\end{pmatrix},\ C = \begin{pmatrix}\gamma & \theta \\ \theta' & \gamma\end{pmatrix}\right\},
and we have
  \mathfrak{g}_{\alpha_1 - \alpha_2} = \left\{\text{such matrices with } A = \begin{pmatrix}0 & v \\ 0 & 0\end{pmatrix},\ D = \begin{pmatrix}0 & -v \\ 0 & 0\end{pmatrix},\ B = C = 0\right\}.
Note that X^*_F(T) has a Weyl action. Take \chi \in X^*_F(T). Then \chi \circ w_s = \chi(s \bullet s^{-1}). The Weyl group moves the Weyl chambers around, so if we study one chamber we will know something about the whole space. Choosing a Weyl chamber means fixing a Borel subgroup.
There is a natural pairing between X^* and X_*. Recall that \mathrm{Hom}_F(\mathbb{G}_m, \mathbb{G}_m) \cong \mathbb{Z} via k \leftrightarrow (t \mapsto t^k). Define \langle\cdot,\cdot\rangle : X^* \times X_* \to \mathbb{Z} by \langle\alpha, \mu\rangle = k, where \alpha \circ \mu : \mathbb{G}_m \to \mathbb{G}_m, so there exists k such that \alpha \circ \mu(t) = t^k, i.e.,
  \alpha \circ \mu(t) = t^{\langle\alpha,\mu\rangle}.

Definition 7.29 (Coroots). For \alpha \in \Phi, let \alpha^\vee \in X_*(T) be such that \langle\alpha, \alpha^\vee\rangle = 2 (essentially).

Example 7.30. For G = \mathrm{GL}(n), e_{ij} = \alpha_i - \alpha_j sends \mathrm{diag}(t_1, \ldots, t_n) \mapsto \frac{t_i}{t_j}. Then the dual e_{ij}^\vee : \mathbb{G}_m \to T sends t to the diagonal matrix with t and t^{-1} on rows i and j respectively, and 1's elsewhere.
Example 7.31. For \mathrm{Sp}(4), we have
  (\alpha_1 - \alpha_2)^\vee : t \mapsto \mathrm{diag}(t, t^{-1}, t, t^{-1})
and
  (2\alpha_1)^\vee : t \mapsto \mathrm{diag}(t, 1, 1, t^{-1}).
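One can check \langle\alpha, \alpha^\vee\rangle = 2 for the positive roots of \mathrm{Sp}(4) by writing characters and cocharacters of T as exponent vectors in \mathbb{Z}^2, where the pairing becomes the dot product (a sketch, not part of the lecture; a cocharacter t \mapsto \mathrm{diag}(t^{a}, t^{b}, t^{-b}, t^{-a}) is recorded as (a, b)).

```python
# Characters and cocharacters of T = diag(t1, t2, t2^-1, t1^-1) in Sp(4),
# written as exponent vectors (m1, m2) in Z^2; <chi, mu> is the dot product.
def pair(chi, cochar):
    return chi[0] * cochar[0] + chi[1] * cochar[1]

roots_and_coroots = {
    "a1-a2": ((1, -1), (1, -1)),   # (alpha1 - alpha2)^vee = diag(t, t^-1, t, t^-1)
    "a1+a2": ((1, 1), (1, 1)),
    "2a1":   ((2, 0), (1, 0)),     # (2 alpha1)^vee = diag(t, 1, 1, t^-1)
    "2a2":   ((0, 2), (0, 1)),
}
pairings = {name: pair(r, c) for name, (r, c) in roots_and_coroots.items()}
```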
Definition 7.32 (Root system). Let G be semisimple with maximal torus T. Then (X^*(T), \Phi_T, W_T) is called a root system.

Definition 7.33 (Simple roots). A root \alpha \in \Phi^+ is called simple if \alpha \neq \beta + \gamma for \beta, \gamma \in \Phi^+. Denote the set of simple roots by \Delta.

We are not going to define the positive roots \Phi^+, but a down-to-earth way of thinking about them is that they give the Borel. For example, for \mathrm{GL}(n) they are the e_{ij} for i < j.

Theorem 7.34 (Chevalley).
(1) Given any abstract root system (X, \Phi, W), there exists a connected semisimple G such that X = X^*(G) and \Phi = \Phi_G.
(2) If (X_1, \Phi) and (X_2, \Phi) are such that X_1 \hookrightarrow X_2 fixing \Phi \xrightarrow{\mathrm{id}} \Phi, then there exists an isogeny G_1 \xrightarrow{\phi} G_2.

Theorem 7.35 (Classification). There are four infinite families and five exceptional irreducible root systems (X, \Phi, W):
• A_n = \mathrm{SL}(n+1), B_n = \mathrm{SO}(2n+1), C_n = \mathrm{Sp}(2n), D_n = \mathrm{SO}(2n);
• E_6, E_7, E_8, F_4, G_2.

We know all these groups. We will look at their parabolic subgroups and define the Eisenstein series.
8. Lecture 8 (February 17, 2015)
8.1. Root systems. Last time we had an overview of the theory of algebraic groups. To summarize, we classified the Chevalley groups over any field.
We did not have time to define an abstract root system \Phi:
• \Phi is a finite set, and V = \mathrm{Span}_{\mathbb{R}}(\Phi).
• For every \alpha \in \Phi, there exists a reflection s_\alpha such that:
  – s_\alpha(\Phi) \subseteq \Phi.
  – s_\alpha fixes a codimension 1 subspace.
• If \alpha, \beta \in \Phi, then s_\alpha(\beta) - \beta \in \mathbb{Z}\alpha. (So we can define the coroot \alpha^\vee by s_\alpha(\beta) = \beta - \langle\beta, \alpha^\vee\rangle\alpha.)
We say that
• \Phi is reduced if \alpha \in \Phi and n\alpha \in \Phi (n \in \mathbb{Z}) imply n = \pm 1.
• \Phi is irreducible if \Phi \neq \Phi_1 \perp \Phi_2 with \Phi_i \neq \emptyset.

Theorem 8.1. There is a one-to-one correspondence
  \{\text{reduced irreducible root systems}\} \leftrightarrow \{A_n, B_n, C_n, D_n, E_6, E_7, E_8, F_4, G_2\},
where
• A_n = \mathrm{SL}(n+1), n \geq 1,
• B_n = \mathrm{SO}(2n+1), n \geq 2,
• C_n = \mathrm{Sp}(2n), n \geq 3,
• D_n = \mathrm{SO}(2n), n \geq 4.

Note that all of these can be realized by semisimple split groups (in fact by Chevalley groups). A very similar classification exists for reductive groups. Let G be a reductive group and T a maximal torus.

Definition 8.2 (Root datum). A root datum is \Psi = (X, Y, \Phi, \Phi^\vee) where X and Y are free abelian groups of the same rank with a pairing \langle\cdot,\cdot\rangle : X \times Y \to \mathbb{Z} (so that X \cong \mathrm{Hom}(Y, \mathbb{Z}) and Y \cong \mathrm{Hom}(X, \mathbb{Z})), \Phi and \Phi^\vee are finite subsets with a bijection \Phi \leftrightarrow \Phi^\vee, and there is an action of the Weyl group.

This is an abstract root datum. For reductive groups, we will have
  (X, Y, \Phi, \Phi^\vee) = \Psi(G, T) = (X^*, X_*, \Phi(G, T), \Phi(G, T)^\vee).
A root datum is reduced if 2\alpha \notin \Phi whenever \alpha \in \Phi.

Theorem 8.3 (Classification). Given (X, Y, \Phi, \Phi^\vee), there exists a unique (up to isomorphism) connected reductive group G over \mathbb{Q} and maximal torus T such that \Psi = \Psi(G, T).

Langlands only considered split Chevalley groups in his book. In 1967–68, the theory of reductive groups was not very well understood. To talk about Eisenstein series, one needs to do parabolic induction and talk about points over \mathbb{Z}_p. These groups are all defined over \mathbb{Z}, so we can immediately go on to the calculation of constant terms without any linear algebraic group theory.
8.2. Parabolic (sub)groups. Informally, a parabolic subgroup is a subgroup that measures infinity. It is a measure of the obstruction to G being anisotropic (informally, compact). If P is a parabolic subgroup, then G/P will be compact (for example, look at the Iwasawa decomposition). G will always be a connected reductive group.
Definition 8.4 (Borel subgroup). A maximal connected solvable closed subgroup B of G is called a Borel subgroup.
Definition 8.5 (Parabolic subgroup). A parabolic subgroup P is a closed subgroup containing a Borel. (More intrinsically, a parabolic subgroup is one for which G/P is projective.)
The standard example of a Borel is the upper triangular matrices, and a parabolic is any subgroup containing it.
The main properties of parabolic subgroups P are:
• P is connected.
• NG(P ) = P .
• If P is conjugate to P ′ with P, P ′ ⊃ B, then P = P ′.
• Bruhat decomposition: G = ∐_{w∈W} BwB.
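To make the Bruhat decomposition concrete, here is the smallest case (a worked example added for illustration, not from the lecture): G = GL(2) with B the upper triangular Borel.

```latex
% W = S_2 = {1, w}, where w is the permutation matrix swapping the two
% coordinates, so GL(2) = B \sqcup BwB.
% Given g = (a b; c d) in GL(2):
%   if c = 0, then g lies in the cell B itself;
%   if c \neq 0, then g lies in the big cell BwB:
\begin{pmatrix} a & b \\ c & d \end{pmatrix}
= \begin{pmatrix} 1 & a/c \\ 0 & 1 \end{pmatrix}
\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}
\begin{pmatrix} c & d \\ 0 & b - ad/c \end{pmatrix},
\qquad c \neq 0.
```

Multiplying out the right-hand side recovers g, and the last factor is invertible since c(b − ad/c) = −det(g) ≠ 0.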
All Borels are conjugate in G(F ) whenever they are defined over F . In a sense this says that Borels are intrinsic to infinity. The same is true for all minimal parabolics (which might not be the Borel, if the Borel is not defined).
It would take a whole course to prove these results, so instead I will put up some references. Some standard ones are:
• Humphreys, Linear Algebraic Groups.
• Springer, Linear Algebraic Groups.
• Borel, Linear Algebraic Groups.
Example 8.6.
• Let G = GL(n). We can take B to be the set of all upper triangular matrices:
  B = \left\{ \begin{pmatrix} * & & * \\ & \ddots & \\ 0 & & * \end{pmatrix} \in GL(n) \right\}.
An example of a parabolic is
  P = \left\{ \begin{pmatrix} GL(n_1) & * \\ 0 & GL(n_2) \end{pmatrix} \in GL(n) \;\middle|\; n_1 + n_2 = n \right\}.
• Let G = SL(n), and B be the set of upper triangular matrices. Then
  P = \left\{ \begin{pmatrix} GL(n_1) & * \\ 0 & GL(n_2) \end{pmatrix} \in SL(n) \;\middle|\; n_1 + n_2 = n \right\}
is a parabolic. Even if we are studying semisimple groups, reductive groups show up naturally.
Parabolic subgroups correspond to subsets of the simple roots. One should think of each conjugacy class of parabolics as a boundary component of the group. We are interested in functions on symmetric spaces, such as G/K. In GL(2), there is only one way to go to infinity:
  \begin{pmatrix} a & \\ & a^{-1} \end{pmatrix} \begin{pmatrix} 1 & n \\ 0 & 1 \end{pmatrix}.
In GL(3), there is more than one way. In order to decompose L2(Γ\G), we need to understand the behavior at infinity, so we want to know the constant terms, which are integrals over parabolics.
Simple roots are a subset ∆ ⊆ Φ such that:
• ∆ spans V .
• For any β ∈ Φ, we have β = Σ_{α∈∆} cα·α where the cα are either all ≥ 0 or all ≤ 0.
Given simple roots ∆, the positive roots are Φ+ = {β ∈ Φ | β = Σ_{α∈∆} cα·α, cα ≥ 0}.
Let us look at the prime example of GL(n).
Example 8.7. Let G = GL(n). Let αi : T → C× be the character diag(t1, . . . , tn) ↦ ti, and eij : T → C× be diag(t1, . . . , tn) ↦ ti/tj. (Eventually we will linearize everything and look at these in additive notation. It is sufficient to look at the tangent space (Lie algebra) because the fundamental group does not really matter to the behavior at infinity.) Then we can take
  Φ = {eij | 1 ≤ i ≠ j ≤ n},
  ∆ = {e_{i,i+1} | 1 ≤ i ≤ n − 1},
  Φ+ = {eij | 1 ≤ i < j ≤ n}.
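For n = 3 this reads concretely (a worked instance added here, not from the lecture):

```latex
% GL(3): six roots, two simple roots, three positive roots.
\Phi = \{e_{12}, e_{13}, e_{23}, e_{21}, e_{31}, e_{32}\}, \qquad
\Delta = \{e_{12}, e_{23}\}, \qquad
\Phi^+ = \{e_{12}, e_{13}, e_{23}\}.
% The only non-simple positive root is a sum of simple ones:
% additively e_{13} = e_{12} + e_{23}, i.e.
% t_1/t_3 = (t_1/t_2)(t_2/t_3) multiplicatively.
```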
We have the following facts:
(1) The choice of a Borel is the same thing as fixing Φ+. Note that Φ+ involves ordering the roots and looking at a positive cone. This is because the unipotent radical can be written as
  Ru(B) = \prod_{α∈Φ+} Uα.
For GL(n), U_{eij} = Uij is the group of unipotent matrices I + x·Eij with arbitrary (i, j)-th entry. Then
  \prod_{eij∈Φ+} Uij = \begin{pmatrix} 1 & & * \\ & \ddots & \\ & & 1 \end{pmatrix}.
(1’) Levi decomposition: B = T ⋉ Ru(B), where the Levi component is a split torus T .
(1”) Given a base ∆, the group generated by T and the Uα (α ∈ Φ+) is a Borel.
(2) The Borels containing a fixed T are in one-to-one correspondence with the set of bases of Φ.
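Fact (2) is already visible in GL(2) (a worked example added for illustration, not from the lecture):

```latex
% GL(2), T = diagonal torus. \Phi = \{e_{12}, e_{21}\} has exactly two
% bases, \{e_{12}\} and \{e_{21}\}, and correspondingly exactly two
% Borels containing T:
\Delta = \{e_{12}\} \;\longleftrightarrow\;
B = \begin{pmatrix} * & * \\ 0 & * \end{pmatrix},
\qquad
\Delta = \{e_{21}\} \;\longleftrightarrow\;
B^- = \begin{pmatrix} * & 0 \\ * & * \end{pmatrix}.
```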
There are a lot of choices in the theory. We started with a reductive group, with the choice of a torus. Then we choose a Borel. But once we have fixed these, we can start talking about things which are standard to those choices.
Definition 8.8 (Standard parabolics). Fix a Borel (or a minimal parabolic) B. Any P containing B is called standard.
The structure of L2(Γ\G) is very inductive: it will be composed of pieces, each of which corresponds to a parabolic subgroup. This was understood in the 1960’s by Gelfand and his troop. They did this for GL(2) and probably for SL(n). The Eisenstein series tells us that for each parabolic, there is a map into L2. These form the continuous spectrum, and the intertwining maps come from the Eisenstein series.
8.3. Decompositions of parabolics. Fix G, T,B,∆ as before.
8.3.1. Levi decomposition. Fix a parabolic P ⊇ B. Then
  P = MPNP
where MP is the Levi component and NP = Ru(P ) is the unipotent radical. Moreover,
• MP normalizes NP .
• NP is uniquely determined by P .
• MP is unique up to conjugation by P .
The same thing works if B is replaced by any minimal parabolic.
8.3.2. Langlands decomposition.
Definition 8.9 (Split component). A split component of G is a maximal split torus of the connected component of the center of G.
This is an annoying definition. For GL(n), the split component is Z0(G). For SL(n), the split component is trivial. The reason for looking at this is the following.
By the Levi decomposition, P = MPNP . This can be further broken down into
  P = M1PAPNP
where M1PAP = MP and AP is the split component of MP .
Example 8.10. For GLn(R), if P is the parabolic of block upper triangular matrices with block sizes (n1, · · · , nr), then
  MP = \begin{pmatrix} GL_{n_1} & & \\ & \ddots & \\ & & GL_{n_r} \end{pmatrix}
and
  NP = \begin{pmatrix} I_{n_1} & & * \\ & \ddots & \\ & & I_{n_r} \end{pmatrix}
are the block diagonal matrices and block unipotent matrices respectively. In this case, we have
  AP = \begin{pmatrix} z_1 I_{n_1} & & \\ & \ddots & \\ & & z_r I_{n_r} \end{pmatrix}
and
  M1P = \begin{pmatrix} g_1 & & \\ & \ddots & \\ & & g_r \end{pmatrix}
where zi > 0 and gi ∈ GL(ni) with det(gi) = ±1.
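For instance, taking n = 3 with block sizes (2, 1) (a worked instance added here, not from the lecture), the decomposition reads:

```latex
% GL_3(R), P of type (2,1):
M_P = \begin{pmatrix} g & 0 \\ 0 & h \end{pmatrix}, \quad
N_P = \begin{pmatrix} 1 & 0 & * \\ 0 & 1 & * \\ 0 & 0 & 1 \end{pmatrix}, \quad
A_P = \begin{pmatrix} z_1 & & \\ & z_1 & \\ & & z_2 \end{pmatrix}, \quad
M_P^1 = \begin{pmatrix} g_1 & 0 \\ 0 & \epsilon \end{pmatrix},
% with g in GL(2), h in GL(1), z_1, z_2 > 0,
% g_1 in GL(2) with det(g_1) = +-1, and epsilon = +-1.
% Every m in M_P factors as m = a m^1: take z_1 = |det g|^{1/2},
% z_2 = |h|, g_1 = z_1^{-1} g, epsilon = h/|h|.
```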
8.3.3. Explicit construction of the Langlands decomposition. Fix G, T, B. For each P ⊇ B, there exists ∆P ⊆ ∆ such that
  P = G(Σ∆P0 )T∆P0 U∆P0 = M1PAPNP
where ∆P0 = ∆\∆P . This notation is in accordance with Arthur’s article on the trace formula.
Example 8.11. Consider the Dynkin diagram of An−1 corresponding to GL(n). Each parabolic is a block of GL(ni)’s, and we are removing the roots between GL(ni) and GL(ni+1):
  ◦—◦—· · ·—◦   ◦—◦—· · ·—◦   · · ·   ◦—◦—· · ·—◦
    GL(n1)        GL(n2)                GL(nr)
In general, ∆P is the set of roots that we are removing from the Dynkin diagram. We set
  Σ∆P0 = (Z-span of ∆P0 ) ∩ Φ
and
  Σ+∆P0 = (Z-span of ∆P0 ) ∩ Φ+.
Then the Langlands decomposition can be constructed as:
• AP = (⋂_{α∈∆P0} ker(α))^0.
• G(Σ∆P0 ) is the analytic subgroup over R corresponding to m1 in the Lie algebra decomposition
  m = a ⊕ m1,
where a is the Lie algebra of AP .
• U∆P0 = \prod_{α∈ΦP} Uα, where ΦP = Φ+\Σ+∆P0 .
They satisfy the following properties:
(1) MP is the centralizer of AP .
(2) G(Σ∆P0 )^0 = [MP ,MP ]^0.
(3) AP ∩ G(Σ∆P0 ) is finite.
Example 8.12. Let G = GL(n), T be the diagonal matrices, B be the upper triangular matrices, Φ = {eij | 1 ≤ i ≠ j ≤ n}, and ∆ = {e_{i,i+1} | 1 ≤ i ≤ n − 1}. If we take ∆P = ∆, then ∆P0 = ∅, i.e., we are taking out all the roots. Then
  AP = T^0,
  M1P = \begin{pmatrix} \pm 1 & & \\ & \ddots & \\ & & \pm 1 \end{pmatrix},
and
  NP = \prod_{α∈Φ+} Uα = \begin{pmatrix} 1 & & * \\ & \ddots & \\ & & 1 \end{pmatrix}.
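At the other extreme, ∆P = ∅ gives P = G (a quick worked check added here, not from the lecture):

```latex
% Delta_P = \emptyset, so Delta_0^P = Delta and
% Sigma^+_{Delta_0^P} = (Z-span of Delta) \cap \Phi^+ = \Phi^+.
% Hence Phi_P = \Phi^+ \setminus \Sigma^+_{\Delta_0^P} = \emptyset,
% so N_P = {1}, while
A_P = \Bigl(\bigcap_{\alpha \in \Delta} \ker(\alpha)\Bigr)^{0}
    = \{\, z \cdot I_n \mid z > 0 \,\},
% since ker(e_{i,i+1}) forces t_i = t_{i+1}: A_P is the positive
% scalar matrices, the split component of the center, and
% P = M_P^1 A_P N_P = G as expected.
```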
9. Lecture 9 (February 19, 2015)
9.1. Last time. Last time we talked about parabolics, which correspond to subsets of the simple roots.
Let G be a connected reductive group, T be a maximal torus and B a Borel (minimal parabolic), Φ be the roots and ∆ be the simple roots. There is a one-one correspondence
{standard parabolics} ↔ {subsets ∆P ⊆ ∆}.
The correspondence goes like this. For ∆P , we set ∆P0 = ∆\∆P and define
  Σ+∆P0 = (Z-span of ∆P0 ) ∩ Φ+.
Then
  AP = (⋂_{α∈∆P0} ker(α))^0,   NP = \prod_{α∈Φ+\Σ+∆P0} Uα.
We have also defined M1P . For GL(N), we have the following picture:

  Dynkin diagram                                  | ∆P                                  | P
  all nodes removed                               | ∆                                   | B
  nodes αn1 − αn1+1, · · · , αnr − αnr+1 removed  | {αn1 − αn1+1, · · · , αnr − αnr+1}  | Pn1,··· ,nr
  node αi − αi+1 removed                          | {αi − αi+1}                         | Pi,n−i
  full diagram ◦—◦—· · ·—◦                       | ∅                                   | G
Example 9.1. Sp(4) = {g ∈ GL(4) | tgJ0g = J0} where
  J_0 = \begin{pmatrix} 0 & -J \\ J & 0 \end{pmatrix}
  and
  J = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}.
More explicitly, we can write its Lie algebra as
  sp(4) = \left\{ \begin{pmatrix} A & B \\ C & D \end{pmatrix} \;\middle|\;
    A = \begin{pmatrix} u & v \\ w & x \end{pmatrix},\;
    B = \begin{pmatrix} b_{11} & b_{12} \\ b_{21} & b_{11} \end{pmatrix},\;
    C = \begin{pmatrix} c_{11} & c_{12} \\ c_{21} & c_{11} \end{pmatrix},\;
    D = \begin{pmatrix} -x & -v \\ -w & -u \end{pmatrix} \right\}
and we choose
  t = \begin{pmatrix} u & & & \\ & x & & \\ & & -x & \\ & & & -u \end{pmatrix}.
The roots are
  Φ = {±(α1 ± α2), ±2α1, ±2α2},
and some of the root spaces (corresponding to the unipotent subgroups Uα) are:
  α        | gα
  α1 − α2  | A = \begin{pmatrix} 0 & v \\ 0 & 0 \end{pmatrix}, D = \begin{pmatrix} 0 & -v \\ 0 & 0 \end{pmatrix}
  α2 − α1  | A = \begin{pmatrix} 0 & 0 \\ w & 0 \end{pmatrix}, D = \begin{pmatrix} 0 & 0 \\ -w & 0 \end{pmatrix}
  α1 + α2  | B = \begin{pmatrix} b & 0 \\ 0 & b \end{pmatrix}
  2α1      | B = \begin{pmatrix} 0 & b \\ 0 & 0 \end{pmatrix}
  2α2      | B = \begin{pmatrix} 0 & 0 \\ b & 0 \end{pmatrix}
Let B be the upper triangular matrices in G = Sp(4), corresponding to the simple roots ∆ = {α1 − α2, 2α2}. Then the standard parabolics are:
  ∆P0        | ∆P        | P
  ∅          | ∆         | B
  {2α2}      | {α1 − α2} | \begin{pmatrix} * & * & * & * \\ & * & * & * \\ & * & * & * \\ & & & * \end{pmatrix}
  {α1 − α2}  | {2α2}     | \begin{pmatrix} * & * & * & * \\ * & * & * & * \\ & & * & * \\ & & * & * \end{pmatrix} (Siegel parabolic)
  ∆          | ∅         | G = Sp(4)
When one learns Lie theory, the root spaces are usually one-dimensional, but this is not always the case.
Example 9.2 (Non-quasisplit). Fix a base field F and consider the quadratic form
  QF = x1xn + x2xn−1 + · · · + xqxn−q+1 + Q0(xq+1, · · · , xn−q),
where Q0 is anisotropic. The matrix of the form is
  \begin{pmatrix} & & J_q \\ & Q_0 & \\ J_q & & \end{pmatrix},
where J_q is the q × q matrix with 1’s on the antidiagonal.
A maximal F -split torus of SO(Q) is
  T = \{ \mathrm{diag}(t_1, \ldots, t_q, 1, \ldots, 1, t_q^{-1}, \ldots, t_1^{-1}) \}.
The Levi component is the centralizer of the maximal torus:
  CG(T ) = \{ \mathrm{diag}(*, \ldots, *, SO(Q_0), *, \ldots, *) \},
with q diagonal entries ∗ on each side of the SO(Q0) block.
The minimal parabolic is the block upper triangular subgroup
  Pmin = \begin{pmatrix} * & & * \\ & SO(Q_0) & \\ 0 & & * \end{pmatrix},
with diagonal blocks ∗, . . . , ∗, SO(Q0), ∗, . . . , ∗ and arbitrary entries above.
Note that the Borel is not defined over F .
Remark. Root spaces are not always 1-dimensional. In this example, they are either 1-dimensional or (n − 2q)-dimensional.
9.2. Character spaces. Now I will start defining spaces which will be some linearization of groups, like the Lie algebra. The notation will be heavy.
From now on (G,B) (and all related data) will be fixed. Let P be a standard parabolic, with Levi decomposition
  P = MPNP = APM1PNP .
Recall the characters X∗(G) = Hom(G,Gm).
Definition 9.3.
  a∗P = X∗(MP ) ⊗ R,   a∗P,C = a∗P ⊗R C,
  aP = Hom(X∗(MP )Q, R),   aP,C = aP ⊗R C.
Example 9.4. Take P = G = GL(N). Then X∗(GL(N)) = 〈det〉, so a∗P ≃ R.
Example 9.5. Take P = Pn1,n2,n3 . Then X∗(MP ) = 〈det(n1), det(n2), det(n3)〉, so a∗P ≃ R3.
Remark. X∗(AP ) ⊇ X∗(P ) as Z-modules, but they are not necessarily equal. These are equal for the Borel, but that is essentially the