  • Poincaré inequality and dimension free concentration
    Bounds on the deficit in the log-Sobolev inequality
    Variance estimates for log-concave probabilities

    Some applications of optimal transport in functional inequalities

    Nathaël Gozlan*

    *LAMA Université Paris Est – Marne-la-Vallée

    Toulouse, April 2014

    Nathaël Gozlan, Applications of optimal transport in functional inequalities

  • Outline of Lecture 3

    Part III - Recent developments

    III.1 Poincaré inequality and dimension free concentration.

    III.2 Bounds on the deficit in the log-Sobolev inequality for the standard Gaussian.

    III.3 Transport proofs of variance estimates for log-concave probabilities.

  • Characterization of PI in terms of dimension free concentration

    Theorem. [G.-Roberto-Samson '12]

    Suppose that µ satisfies the dimension free concentration property with a
    concentration profile α such that α(t) < 1/2 for at least one value of t > 0;
    then µ satisfies PI(C) with the constant

    C = ( inf { r / Φ⁻¹(α(r)) : r s.t. α(r) < 1/2 } )²,

    where Φ(t) = (1/√(2π)) ∫_t^{+∞} e^{−u²/2} du.
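As a sanity check, the constant in the theorem can be evaluated numerically for a given profile. A minimal sketch in Python (the grid and the helper names `Phi`, `poincare_constant` are ours, not from the lecture): for the Gaussian profile α = Φ, the ratio r/Φ⁻¹(α(r)) equals 1 for every r, so the formula returns C = 1, which is the Poincaré constant of the standard Gaussian.

```python
from statistics import NormalDist

nd = NormalDist()

def Phi(t):
    """Gaussian tail function: Phi(t) = (1/sqrt(2*pi)) * integral_t^inf e^{-u^2/2} du."""
    return 1.0 - nd.cdf(t)

def Phi_inv(p):
    """Inverse of the tail function Phi."""
    return nd.inv_cdf(1.0 - p)

def poincare_constant(alpha, r_grid):
    """C = (inf { r / Phi^{-1}(alpha(r)) : alpha(r) < 1/2 })^2, taken over a grid of r."""
    ratios = [r / Phi_inv(alpha(r)) for r in r_grid if alpha(r) < 0.5]
    return min(ratios) ** 2

# Standard Gaussian dimension free concentration: alpha(r) = Phi(r),
# so r / Phi_inv(alpha(r)) = 1 for every r > 0 and the theorem gives C = 1.
r_grid = [0.1 * k for k in range(1, 50)]
C = poincare_constant(Phi, r_grid)
```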

  • Proof of the theorem

    Three ingredients:

    An equivalent form of concentration of measure in terms of deviation
    inequalities for the functions Qt f.

    The Hamilton-Jacobi equation ∂Qt f(x)/∂t = −(1/4) ‖∇Qt f‖²(x).

    Central limit theorem.

  • Concentration and deviations of Lipschitz functions

    Lemma

    The probability µ satisfies the concentration property with the profile α if and
    only if for all 1-Lipschitz functions f the following deviation inequality holds:

    µ(f > mµ(f) + r) ≤ α(r), ∀r ≥ 0,

    where mµ(f) is a median of f under µ.

    (cf Lecture 1)

  • Concentration and inf-convolution.

    For all t > 0, and all f : Rd → R bounded from below, define

    Qt f(x) = inf_{y ∈ Rd} { f(y) + (1/t) ‖x − y‖² },  x ∈ Rd.

    Lemma

    The probability µ satisfies the concentration property with the profile α if and
    only if for all functions f : Rd → R bounded from below the following deviation
    inequality holds:

    µ(Qt f > mµ(f) + r) ≤ α(√(rt)),  t > 0, r ≥ 0,

    where mµ(f) is a median of f under µ.
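Numerically, Qt f can be approximated by minimising over a fine grid. A small one-dimensional sketch (the quadratic test function is our illustrative choice; for f(y) = y² the minimisation can be solved exactly and gives Qt f(x) = x²/(1 + t)):

```python
def Q(t, f, x, y_grid):
    """Inf-convolution Qt f(x) = inf_y { f(y) + (x - y)^2 / t }, over a 1-d grid."""
    return min(f(y) + (x - y) ** 2 / t for y in y_grid)

# For f(y) = y^2, solving the quadratic minimisation exactly gives Qt f(x) = x^2/(1+t).
f = lambda y: y * y
y_grid = [k * 1e-3 for k in range(-4000, 4001)]
approx = {x: Q(1.0, f, x, y_grid) for x in (-1.0, 0.0, 0.5, 2.0)}
```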


  • Concentration and inf-convolution.

    For all t > 0, and all f : (Rd)n → R bounded from below, define

    Qt f(x) = inf_{y ∈ (Rd)n} { f(y) + (1/t) ‖x − y‖² },  x ∈ (Rd)n.

    Lemma

    The probability µ satisfies the dimension free concentration property with the
    profile α if and only if for all n ≥ 1 and all functions f : (Rd)n → R bounded
    from below the following deviation inequality holds:

    µn(Qt f > mµn(f) + r) ≤ α(√(rt)),  t > 0, r ≥ 0,

    where mµn(f) is a median of f under µn.

  • Proof of the lemma.

    • [Dev. inq. ⇒ Conc.] Take A such that µ(A) ≥ 1/2, and define

    fA = 0 on A and +∞ on Ac.

    Then it holds Qt fA(x) = (1/t) d²(x, A) and mµ(fA) = 0, and so

    µ(d²(·, A) > rt) ≤ α(√(rt)), ∀r ≥ 0.

    • [Conc. ⇒ Dev. inq.] Take f and consider A = {x ∈ Rd ; f ≤ mµ(f)} (so that µ(A) ≥ 1/2).
    If y ∈ Ar, then there is some x ∈ A such that ‖x − y‖ ≤ r; so it holds

    Qt f(y) ≤ f(x) + (1/t) r² ≤ mµ(f) + (1/t) r².

    So

    Ar ⊂ { Qt f ≤ mµ(f) + r²/t }.

    And so

    µ(Qt f ≤ mµ(f) + r²/t) ≥ µ(Ar) ≥ 1 − α(r),

    which is the desired deviation inequality.


  • Sketch of proof

    By assumption, it holds

    µn(Qt f > mµn(f) + r) ≤ α(√(rt)), ∀n, ∀f, ∀r, t > 0.

    Apply this inequality with

    fn(x) = h(x1) + h(x2) + · · · + h(xn),

    where ∫ h dµ = 0. It holds

    Qt fn(x) = Qt h(x1) + Qt h(x2) + · · · + Qt h(xn).

    Then choose r = u²√n and t = 1/√n.
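The identity Qt fn = Σ Qt h(xi) holds because the squared Euclidean norm on (Rd)n splits as a sum over coordinates, so the infimum over y factorises coordinate by coordinate. A quick numerical check for n = 2, d = 1 (the test function h and the grid are illustrative choices):

```python
import itertools

def Q1(t, h, x, grid):
    """One-dimensional inf-convolution Qt h(x) = inf_y { h(y) + (x - y)^2 / t }."""
    return min(h(y) + (x - y) ** 2 / t for y in grid)

def Q2(t, h, x1, x2, grid):
    """Joint inf-convolution of f(y1, y2) = h(y1) + h(y2) with cost ||x - y||^2 / t."""
    return min(h(y1) + h(y2) + ((x1 - y1) ** 2 + (x2 - y2) ** 2) / t
               for y1, y2 in itertools.product(grid, grid))

h = lambda y: abs(y) - 0.5          # any function bounded from below works here
grid = [k * 0.05 for k in range(-60, 61)]
x1, x2, t = 0.7, -1.3, 0.5
lhs = Q2(t, h, x1, x2, grid)        # joint minimisation over the product grid
rhs = Q1(t, h, x1, grid) + Q1(t, h, x2, grid)   # sum of 1-d minimisations
```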


  • Sketch of proof

    With this choice of r and t, the inequality reads

    P( (1/√n) Σ_{i=1}^n Q_{1/√n} h(Xi) > mn + u² ) ≤ α(u), ∀n, ∀u > 0,

    where the Xi are i.i.d. of law µ and mn is the median of the random variable
    (1/√n) Σ_{i=1}^n h(Xi). Recentring by the mean, this becomes

    P( (1/√n) Σ_{i=1}^n ( Q_{1/√n} h(Xi) − E[Q_{1/√n} h(Xi)] ) > mn − √n E[Q_{1/√n} h(X1)] + u² ) ≤ α(u), ∀n, ∀u > 0.

    The idea is then to pass to the limit using the central limit theorem and
    Hamilton-Jacobi equations:

    (1/√n) Σ_{i=1}^n h(Xi) ⇒ N(0, Varµ(h)), so mn → 0.

    −√n E[Q_{1/√n} h(X1)] = E[ (h(X1) − Q_{1/√n} h(X1)) / (1/√n) ] → (1/4) E[‖∇h‖²(X1)].

    (1/√n) Σ_{i=1}^n ( Q_{1/√n} h(Xi) − E[Q_{1/√n} h(Xi)] ) ⇒ N(0, Varµ(h)).

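The second limit is the Hopf-Lax small-time expansion Qt h = h − (t/4)‖∇h‖² + o(t), which underlies the Hamilton-Jacobi equation quoted earlier. A pointwise numerical illustration (h = sin, the point x and the time t are arbitrary choices, not from the lecture):

```python
import math

def Q(t, h, x, lo, hi, steps=4000):
    """Qt h(x) = inf_y { h(y) + (x - y)^2 / t }, minimised over a grid on [lo, hi]."""
    width = hi - lo
    return min(h(lo + k * width / steps) + (x - (lo + k * width / steps)) ** 2 / t
               for k in range(steps + 1))

h, x, t = math.sin, 0.8, 1e-2
# Small-time rate behind the limit above: (h(x) - Qt h(x)) / t -> |h'(x)|^2 / 4.
rate = (h(x) - Q(t, h, x, x - 0.5, x + 0.5)) / t
exact = math.cos(x) ** 2 / 4
```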

  • Sketch of proof

    Letting n → ∞, we get

    P( Y > (1/4) ∫ |∇h|² dµ + u² ) ≤ α(u), ∀u ≥ 0,

    where Y ∼ N(0, Varµ(h)).

    Equivalently, with Φ(t) = (1/√(2π)) ∫_t^{+∞} e^{−u²/2} du, it holds

    Φ( ( (1/4) ∫ |∇h|² dµ + u² ) / √(Varµ(h)) ) ≤ α(u).

    And so

    Φ⁻¹(α(u)) √(Varµ(h)) ≤ (1/4) ∫ |∇h|² dµ + u².

    Replacing h by λh, λ > 0, and optimizing over λ yields

    Φ⁻¹(α(u)) √(Varµ(h)) ≤ inf_{λ>0} { (λ/4) ∫ |∇h|² dµ + u²/λ }.


  • Sketch of proof

    Evaluating the infimum over λ gives

    Φ⁻¹(α(u)) √(Varµ(h)) ≤ √( u² ∫ |∇h|² dµ ),

    that is, for every u such that α(u) < 1/2,

    Varµ(h) ≤ ( u / Φ⁻¹(α(u)) )² ∫ |∇h|² dµ,

    which is the Poincaré inequality with the constant announced in the theorem.

  • III.2 Bounds on the deficit in the log-Sobolev inequality for the standard Gaussian.

    Joint work with S. Bobkov, C. Roberto, P.-M. Samson.

    Independent work by E. Indrei and D. Marcon.

  • Log-Sobolev for the standard Gaussian measure

    In all that follows γd denotes the standard Gaussian measure on Rd:

    γd(dx) = (1/(2π)^{d/2}) e^{−‖x‖²/2} dx.

    Theorem.

    For all f : Rd → R sufficiently smooth, it holds

    Entγd(f²) ≤ 2 ∫ ‖∇f‖²(x) γd(dx).

    Equivalently, for all probability measures ν on Rd with a smooth density,

    H(ν|γd) ≤ (1/2) I(ν|γd).

    We recall that if ν = h γd,

    H(ν|γd) = ∫ h(x) log h(x) γd(dx)  and  I(ν|γd) = ∫ (‖∇h‖²/h)(x) γd(dx).
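Both the inequality and its saturation by exponential densities can be checked numerically in dimension 1 (the quadrature routine, the parameter a and the second test function are our illustrative choices; f(x) = e^{ax/2} corresponds, after normalisation of f² γ1, to the equality case ν = γa):

```python
import math

def gauss_quad(g, lo=-12.0, hi=12.0, n=4000):
    """Integral of g against the standard Gaussian gamma_1, by the trapezoid-style rule."""
    dx = (hi - lo) / n
    dens = lambda x: math.exp(-x * x / 2) / math.sqrt(2 * math.pi)
    return sum(g(lo + k * dx) * dens(lo + k * dx) for k in range(n + 1)) * dx

def lsi_gap(f, df):
    """2 * int |f'|^2 dgamma_1  -  Ent_{gamma_1}(f^2); nonnegative by log-Sobolev."""
    m = gauss_quad(lambda x: f(x) ** 2)
    ent = gauss_quad(lambda x: f(x) ** 2 * math.log(f(x) ** 2)) - m * math.log(m)
    return 2 * gauss_quad(lambda x: df(x) ** 2) - ent

a = 0.7
# f(x) = e^{ax/2}: equality case (f^2 gamma_1 is proportional to a shifted Gaussian).
gap_exp = lsi_gap(lambda x: math.exp(a * x / 2), lambda x: (a / 2) * math.exp(a * x / 2))
# A generic positive test function: strict inequality.
gap_other = lsi_gap(lambda x: 1 + 0.5 * math.cos(x), lambda x: -0.5 * math.sin(x))
```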

  • Equality cases

    It is easy to check that for all a ∈ Rd, the standard Gaussian measure with
    mean a, denoted by γad, is an equality case in LSI for γd.

    Theorem. [Carlen '91]

    The only equality cases in LSI for γd are the Gaussian measures γad, a ∈ Rd.


  • Deficit in LSI

    We denote by δ(ν) the deficit in LSI for γd:

    δ(ν) = (1/2) I(ν|γd) − H(ν|γd) ≥ 0.

    Carlen's result can be restated as follows:

    (δ(ν) = 0) ⇒ (∃a ∈ Rd, ν = γad).

    We are interested in improving this statement in the following way:

    "If δ(ν) is small then ν is close to γad for some a."

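In dimension 1 and for Gaussian measures ν = N(a, σ²), one can compute H(ν|γ1) and I(ν|γ1) in closed form and obtain δ(ν) = (1/σ² − 1)/2 + log σ: the deficit does not see the mean a, in line with the equality cases γa, and vanishes exactly when σ = 1. A sketch (the helper names are ours):

```python
import math

def H(a, s):
    """Relative entropy H(N(a, s^2) | gamma_1), in closed form."""
    return (s * s + a * a - 1) / 2 - math.log(s)

def I(a, s):
    """Relative Fisher information I(N(a, s^2) | gamma_1), in closed form."""
    return a * a + (s - 1 / s) ** 2

def deficit(a, s):
    """delta(nu) = I/2 - H for nu = N(a, s^2); simplifies to (1/s^2 - 1)/2 + log s."""
    return I(a, s) / 2 - H(a, s)

vals = [(a, s, deficit(a, s)) for a in (-2.0, 0.0, 1.5) for s in (0.5, 1.0, 2.0)]
```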

  • Quantitative form of functional inequalities

    N. Fusco, F. Maggi, and A. Pratelli '08: quantitative isoperimetric inequality.

    A. Figalli, F. Maggi, and A. Pratelli '10: quantitative isoperimetric inequality
    via mass transport.

    A. Cianchi, N. Fusco, F. Maggi, and A. Pratelli '11 / Eldan '13 / Mossel-Neeman '13:
    quantitative isoperimetric inequality in Gauss space.

    . . . many others . . .

    Independent work by E. Indrei and D. Marcon on the deficit in LSI.

  • Carlen's bound on the deficit

    Define the Fourier transform operator F on L1(Rd, dx) by

    F(g)(v) = ∫_{Rd} e^{2iπ v·u} g(u) du.

    Define also the isometric operator T : L2(Rd, γd) → L2(Rd, dx),

    T(f)(u) = 2^{d/4} e^{−π‖u‖²} f(2√π u),

    and then consider

    W(f)(v) = T⁻¹ ◦ F ◦ T(f)(v)
            = (1/(2^d π^{d/2})) e^{‖v‖²/4} ∫ e^{i v·u/2} e^{−‖u‖²/4} f(u) du.

    Theorem. [Carlen '91]

    For all f : Rd → R sufficiently smooth,

    Entγd(|f|²) + Entγd(|W(f)|²) ≤ 2 ∫ |∇f|² dγd.


  • One word on the proof of Carlen's result

    Theorem. [Beckner]

    If f ∈ Lp(Rd, dx), with p ∈ [1, 2], then

    ‖F(f)‖q ≤ Cp ‖f‖p,

    where 1/p + 1/q = 1 and Cp = (p^{1/p} / q^{1/q})^{d/2}.

    Differentiating at p = 2 yields

    Theorem. [Beckner-Hirschman uncertainty principle]

    For all f ∈ L2(Rd, dx) with ‖f‖2 = 1,

    h(|f|²) + h(|F(f)|²) ≥ d(1 − log 2),

    where h denotes the Shannon entropy:

    h(ρ) = −∫ ρ log ρ dx, ∀ρ : Rd → (0, ∞) with ∫ ρ dx = 1.

    Translating the Beckner-Hirschman uncertainty principle to the standard
    Gaussian yields Carlen's result.


  • Bounding the deficit in terms of probability distances

    We want to bound the deficit δ from below in terms of probability distances,
    like for instance W2.

    The following centering operation will play a role in the sequel:

    Definition.

    If ν is a probability measure on Rd, the probability ν̄ is defined as follows:
    it is the law of the random vector X̄ defined by

    X̄ = (X̄1, . . . , X̄d) with X̄i = Xi − E[Xi | X1, . . . , Xi−1],

    where X is a random vector of law ν.

    Note in particular that X̄1 = X1 − E[X1] and that (X̄i)i≤d is a sequence of
    martingale increments.

    Moreover, if X is unconditional then X̄ = X.
    Recall that X is unconditional if X has the same law as (ε1X1, . . . , εdXd) for
    any choice of εi = ±1.




  • Bounding the deficit in terms of probability distances

    Theorem 1.

    For every probability measure ν on Rᵈ,

    δ(ν) ≥ c T²(ν̄, γ_d) / H(ν̄|γ_d),

    where c is a universal constant and T is the optimal transport cost defined by

    T(ν, µ) = inf_{X∼ν, Y∼µ} E[ ∑ᵢ₌₁..d ∆(Xᵢ − Yᵢ) ],   where ∆(t) = |t| − log(1 + |t|).

    Theorem 2.

    If ν is a probability measure on Rᵈ with a smooth density of the form e⁻ⱽ such that, for some ε > 0, ∂²ᵢV ≥ ε for all i ∈ {1, . . . , d}, then

    δ(ν) ≥ c min(1; ε) W₂²(ν̄, γ_d).
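The cost ∆ in Theorem 1 interpolates between a quadratic and a linear cost: ∆(t) behaves like t²/2 near 0 and like |t| at infinity. A quick check (illustration only):

```python
import math

def Delta(t: float) -> float:
    """Cost function Delta(t) = |t| - log(1 + |t|) from Theorem 1."""
    return abs(t) - math.log1p(abs(t))

# Quadratic near 0: Taylor expansion gives Delta(t) = t^2/2 - |t|^3/3 + ...
for t in (1e-3, 1e-2):
    assert abs(Delta(t) / (t ** 2 / 2) - 1) < 0.01

# Linear at infinity: Delta(t)/|t| -> 1.
assert abs(Delta(1e6) / 1e6 - 1) < 1e-4
```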


  • Recovering equality cases

    It is possible to recover the equality cases using Theorem 1.

    Notation. If σ ∈ S_d (the symmetric group) and ν is the law of a random vector X, define ν_σ as the law of X_σ = (X_σ(1), . . . , X_σ(d)).

    Note that δ(ν) = δ(ν_σ) for all σ ∈ S_d.

    Therefore the conclusion of Theorem 1 self-improves into

    δ(ν) ≥ sup_{σ∈S_d} T²(ν̄_σ, γ_d) / H(ν̄_σ|γ_d).

    If δ(ν) = 0, then necessarily ν̄_σ = γ_d for all σ.

    . . . Identifying the densities yields ν = γ_dᵃ, the translate of γ_d by a = ∫ x dν . . .






  • Step 1: Relate the deficit in LSI to the deficit in T₂.

    Recall the HWI inequality (Lecture II):

    H(ν₀|γ_d) ≤ H(ν₁|γ_d) + W₂(ν₀, ν₁) √I(ν₀|γ_d) − ½ W₂²(ν₀, ν₁).

    Take ν₁ = γ_d and ν₀ = ν, and write H = H(ν|γ_d), W = W₂(ν, γ_d) and I = I(ν|γ_d):

    H ≤ W √I − ½ W².

    Therefore,

    δ = ½ I − H ≥ ½ I − W √I + ½ W² = ½ (√I − W)².

    Because of T₂ and LSI, it holds that W ≤ √(2H) ≤ √I, and so

    δ ≥ ½ (√(2H) − W)² = ½ (2H − W²)² / (√(2H) + W)² ≥ δ²_{T₂} / (16 H),

    using (√(2H) + W)² ≤ 8H, with δ_{T₂}(ν) = deficit in T₂ = 2 H(ν|γ_d) − W₂²(ν, γ_d).
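The chain of inequalities in Step 1 can be sanity-checked numerically (my own check; the constant 1/16 comes from bounding (√(2H) + W)² ≤ 8H when 0 ≤ W ≤ √(2H)):

```python
import math
import random

random.seed(0)
for _ in range(10_000):
    H = random.uniform(0.01, 10.0)
    W = random.uniform(0.0, math.sqrt(2 * H))   # T2:  W <= sqrt(2H)
    I = random.uniform(2 * H, 2 * H + 10.0)     # LSI: 2H <= I

    # Algebraic identity behind Step 1: I/2 - (W*sqrt(I) - W^2/2) = (sqrt(I) - W)^2 / 2.
    assert abs(0.5 * I - (W * math.sqrt(I) - 0.5 * W ** 2)
               - 0.5 * (math.sqrt(I) - W) ** 2) < 1e-9

    # Since sqrt(I) >= sqrt(2H) >= W, replacing sqrt(I) by sqrt(2H) decreases the square.
    assert 0.5 * (math.sqrt(I) - W) ** 2 >= 0.5 * (math.sqrt(2 * H) - W) ** 2 - 1e-12

    # Last step: (sqrt(2H) - W)^2 / 2 = (2H - W^2)^2 / (2 (sqrt(2H)+W)^2) >= dT2^2 / (16 H).
    dT2 = 2 * H - W ** 2
    assert 0.5 * (math.sqrt(2 * H) - W) ** 2 >= dT2 ** 2 / (16 * H) - 1e-12
```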









  • Step 2: Bound the deficit in T₂ in dimension 1

    Theorem. [Barthe–Kolesnikov '08]

    For every probability measure ν on R with mean 0, it holds that

    2 H(ν|γ) − W₂²(ν, γ) ≥ c T(ν, γ).

    Proof. See later.

  • Step 3: Bound δ in dimension 1

    Inserting the Barthe–Kolesnikov bound into the lower bound on δ yields:

    δ(ν) ≥ c T²(ν, γ) / H(ν|γ),

    for all ν ∈ P(R) such that ∫ x ν(dx) = 0.

    The deficit δ is translation invariant: if ν_a is the image of ν under the map x ↦ x − a, it holds that δ(ν) = δ(ν_a).

    Therefore,

    δ(ν) ≥ c T²(ν_a, γ) / H(ν_a|γ), with a = ∫ x ν(dx),

    that is (since ν_a = ν̄ in dimension 1), δ(ν) ≥ c T²(ν̄, γ) / H(ν̄|γ).
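As a closed-form illustration of this translation invariance (my own computation, not in the slides): for ν = N(a, σ²) in dimension 1 one has H(ν|γ) = ½(σ² + a² − 1 − log σ²) and I(ν|γ) = a² + (σ² − 1)²/σ², so δ(ν) = ½(log σ² + 1/σ² − 1) does not depend on a, and vanishes exactly when σ² = 1:

```python
import math

def H(a: float, s2: float) -> float:
    """Relative entropy H(N(a, s2) | gamma)."""
    return 0.5 * (s2 + a * a - 1 - math.log(s2))

def I(a: float, s2: float) -> float:
    """Relative Fisher information I(N(a, s2) | gamma)."""
    return a * a + (s2 - 1) ** 2 / s2

def deficit(a: float, s2: float) -> float:
    """Deficit in LSI: delta = I/2 - H."""
    return 0.5 * I(a, s2) - H(a, s2)

# Translation invariance: delta(N(a, s2)) does not depend on a ...
d0 = deficit(0.0, 2.0)
for a in (-3.0, 0.5, 10.0):
    assert abs(deficit(a, 2.0) - d0) < 1e-12

# ... and equals (log s2 + 1/s2 - 1)/2, which vanishes iff s2 = 1.
assert abs(d0 - 0.5 * (math.log(2.0) + 0.5 - 1)) < 1e-12
assert abs(deficit(7.0, 1.0)) < 1e-12
```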




  • Step 4: Tensorizing the deficit bound

    Recall the tensorization formulas for H, I and T:

    H(ν|µ₁ × µ₂) = H(ν₁|µ₁) + ∫ H(ν₂( · |x₁) | µ₂) ν₁(dx₁)

    I(ν|µ₁ × µ₂) ≥ I(ν₁|µ₁) + ∫ I(ν₂( · |x₁) | µ₂) ν₁(dx₁)

    T(ν, µ₁ × µ₂) ≤ T(ν₁, µ₁) + ∫ T(ν₂( · |x₁), µ₂) ν₁(dx₁),

    where µ₁ ∈ P(R^{d₁}), µ₂ ∈ P(R^{d₂}), ν ∈ P(R^{d₁+d₂}) and ν(dx₁dx₂) = ν₂(dx₂|x₁) ν₁(dx₁).

    Consequence: taking µ₁ = γ_{d₁} and µ₂ = γ_{d₂}, one gets

    δ(ν) ≥ δ(ν₁) + ∫ δ(ν₂( · |x₁)) ν₁(dx₁).
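The additivity formula for H above is the chain rule for relative entropy; here is a discrete sanity check with toy numbers (my own example):

```python
import numpy as np

def H(p, q):
    """Relative entropy H(p|q) = sum p log(p/q) for discrete distributions."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

# Toy 2x2 example: nu on {0,1}^2, reference product measure mu1 x mu2.
nu = np.array([[0.3, 0.1],
               [0.2, 0.4]])          # nu(x1, x2)
mu1 = np.array([0.5, 0.5])
mu2 = np.array([0.4, 0.6])

nu1 = nu.sum(axis=1)                 # first marginal nu1
nu2_cond = nu / nu1[:, None]         # conditional kernel nu2(.|x1)

lhs = H(nu.ravel(), np.outer(mu1, mu2).ravel())
rhs = H(nu1, mu1) + sum(nu1[i] * H(nu2_cond[i], mu2) for i in range(2))
assert abs(lhs - rhs) < 1e-12        # H tensorizes exactly (chain rule)
```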


  • Step 5: Tensorizing the deficit bound

    By induction: assume that δ(ν) ≥ c T²(ν̄, γ_{d−1}) / H(ν̄|γ_{d−1}) for all ν ∈ P(R^{d−1}).

    Take ν ∈ P(Rᵈ); write z₁ = (x₁, . . . , x_{d−1}) and z₂ = x_d:
    - let ν₁ be the marginal on the first d − 1 coordinates,
    - let ν₂( · |z₁) be the conditional distribution of z₂ knowing z₁.

    Then

    δ(ν) ≥ δ(ν₁) + ∫ δ(ν₂( · |z₁)) ν₁(dz₁)

         ≥ c T²(ν₁, γ_{d−1}) / H(ν₁|γ_{d−1}) + c ∫ [ T²(ν₂( · |z₁), γ) / H(ν₂( · |z₁)|γ) ] ν₁(dz₁)

         ≥ c T²(ν₁, γ_{d−1}) / H(ν₁|γ_{d−1}) + c ( ∫ T(ν₂( · |z₁), γ) ν₁(dz₁) )² / ∫ H(ν₂( · |z₁)|γ) ν₁(dz₁)

         ≥ c ( T(ν₁, γ_{d−1}) + ∫ T(ν₂( · |z₁), γ) ν₁(dz₁) )² / ( H(ν₁|γ_{d−1}) + ∫ H(ν₂( · |z₁)|γ) ν₁(dz₁) )

         ≥ c T²(ν, γ_d) / H(ν|γ_d).
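Two elementary inequalities are hidden in the chain above: Cauchy–Schwarz in the form (∫ T dν₁)² / ∫ H dν₁ ≤ ∫ (T²/H) dν₁, and the "mediant" inequality a²/b + c²/d ≥ (a + c)²/(b + d). A numerical check (illustration only):

```python
import random

random.seed(1)
for _ in range(10_000):
    n = random.randint(1, 6)
    w = [random.random() for _ in range(n)]
    s = sum(w)
    w = [x / s for x in w]                             # weights nu1(dz1)
    T = [random.uniform(0.0, 5.0) for _ in range(n)]   # values T(nu2(.|z1), gamma)
    H = [random.uniform(0.01, 5.0) for _ in range(n)]  # values H(nu2(.|z1)|gamma)

    # Cauchy-Schwarz: (integral of T)^2 / (integral of H) <= integral of T^2/H.
    lhs = sum(wi * ti for wi, ti in zip(w, T)) ** 2 / sum(wi * hi for wi, hi in zip(w, H))
    rhs = sum(wi * ti ** 2 / hi for wi, ti, hi in zip(w, T, H))
    assert lhs <= rhs + 1e-9

    # Mediant inequality: a^2/b + c^2/e >= (a + c)^2 / (b + e).
    a, c = random.uniform(0.0, 5.0), random.uniform(0.0, 5.0)
    b, e = random.uniform(0.01, 5.0), random.uniform(0.01, 5.0)
    assert a * a / b + c * c / e >= (a + c) ** 2 / (b + e) - 1e-9
```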







  • III.3 Transport proofs of variance estimates for log-concave probabilities.

    Joint work with D. Cordero-Erausquin.

  • Log-concave random vectors

    Recall that a probability measure µ on Rᵈ is log-concave if it has a density with respect to the Lebesgue measure of the form e⁻ⱽ, with a convex function V : Rᵈ → R ∪ {+∞}.

    Basic example: µ the uniform probability measure on a (bounded) convex domain.

    A random vector X is said to be log-concave if its law is log-concave. It is called isotropic if E[X] = 0 and E[XᵢXⱼ] = δᵢⱼ.
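For concreteness (my own illustration): the standard Gaussian and the Laplace distribution are log-concave, since V(x) = x²/2 and V(x) = |x| are convex. A crude grid check of midpoint convexity:

```python
import numpy as np

# Two classic log-concave densities e^{-V} on R: V convex.
V_gauss = lambda x: 0.5 * x ** 2 + 0.5 * np.log(2 * np.pi)   # standard Gaussian
V_laplace = lambda x: np.abs(x) + np.log(2.0)                # Laplace

xs = np.linspace(-5.0, 5.0, 1001)
for V in (V_gauss, V_laplace):
    v = V(xs)
    # Midpoint convexity on the grid: V(x - h) + V(x + h) >= 2 V(x).
    assert np.all(v[:-2] + v[2:] >= 2 * v[1:-1] - 1e-12)
```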

  • The KLS and Variance conjectures

    Recall the KLS conjecture.

    KLS conjecture. There exists a universal constant c > 0 such that any isotropic log-concave random vector X satisfies

    Var(f(X)) ≤ c E[‖∇f(X)‖²], for all f smooth enough.

    Considering the special case f(x) = ‖x‖², the KLS conjecture implies the variance conjecture.

    Variance conjecture. There exists a universal constant c > 0 such that any d-dimensional isotropic log-concave random vector X satisfies

    Var(‖X‖²) ≤ c d.
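For the standard Gaussian, the benchmark isotropic log-concave vector, ‖X‖² is chi-squared with d degrees of freedom, so Var(‖X‖²) = 2d exactly, consistent with the conjectured O(d) bound. A Monte Carlo check (illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200_000, 20

X = rng.standard_normal((n, d))     # isotropic log-concave (Gaussian)
sq = (X ** 2).sum(axis=1)           # ||X||^2 ~ chi-squared with d degrees of freedom

v = sq.var()
# Var(||X||^2) = 2d for the Gaussian, matching the conjectured c*d behaviour.
assert abs(v / (2 * d) - 1) < 0.05
```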



  • Recent results

    Klartag '09: The variance conjecture is true for unconditional isotropic log-concave random vectors. An alternative proof was given by Barthe–Cordero-Erausquin '13 using direct L² methods.

    Guédon–Milman '11: For any isotropic log-concave random vector X,

    Var(‖X‖²) ≤ c d^{4/3}.

    Eldan '13: If the variance conjecture is true, then KLS is true up to a log d factor.

  • C.L.T. for log-concave random vectors

    The following result is due to Klartag.

    Theorem. [Klartag '07]

    Let X be a random vector with an isotropic log-concave distribution. Then there exists a subset A of S^{n−1} with σ_{n−1}(A) ≥ 1 − e^{−a√n} such that for all θ ∈ A,

    W₁(Law(X · θ), γ) ≤ b / n^c,

    where a, b, c are universal positive constants.

    Key lemma in the proof: Var(‖X‖²) = o(n²).
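A quick simulation of this Gaussian-marginal phenomenon (my own illustration): the uniform distribution on the cube [−√3, √3]ᵈ is isotropic and log-concave, and for equal-size samples on R the empirical W₁ distance is the mean gap between sorted values (quantile coupling):

```python
import numpy as np

rng = np.random.default_rng(0)
m, d = 100_000, 30

# Uniform on the cube [-sqrt(3), sqrt(3)]^d: isotropic log-concave.
X = rng.uniform(-np.sqrt(3.0), np.sqrt(3.0), size=(m, d))
theta = np.ones(d) / np.sqrt(d)               # one "typical" direction
proj = X @ theta                              # law of X . theta

# Empirical W1 against a Gaussian sample of the same size (quantile coupling).
g = rng.standard_normal(m)
w1 = float(np.abs(np.sort(proj) - np.sort(g)).mean())
assert w1 < 0.1                               # the marginal is already nearly Gaussian
```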

  • Poincaré inequality and dimension free concentrationBounds on the deficit in the log-Sobolev inequality

    Variance estimates for log-concave probabilities

    Main result

    Goal: Recover Klartag's result about unconditional random vectors using optimal transport tools.

    Recall that if X is a random vector, the random vector X̄ is defined by

    X̄_i = X_i − E[X_i | X_1, …, X_{i−1}], ∀i ∈ {1, …, d}.

    One denotes by F_i the σ-field σ(X_1, …, X_i) and F_0 = {∅, ℝ^d}.

    Theorem.

    If X is a log-concave random vector, then the random vector X̄ satisfies the following Poincaré type inequality

    Var(f(X̄)) ≤ c ∑_{i=1}^d E[ E[X̄_i² | F_{i−1}] (∂_i f(X̄))² ], ∀f sufficiently smooth,

    where c is a universal constant.

    Proof uses transport arguments.
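    For a Gaussian vector the conditional expectations E[X_i | X_1, …, X_{i−1}] are linear, so the increments X_i − E[X_i | X_1, …, X_{i−1}] have a closed form: if X = LZ with Z standard Gaussian and L lower triangular (Cholesky), the i-th increment is L_ii Z_i. A hedged sketch (Gaussian case only, chosen because it is explicitly solvable):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    d, n = 4, 100_000

    # A non-diagonal covariance, and its Cholesky factor: X = L Z with Z standard Gaussian.
    A = rng.standard_normal((d, d))
    Sigma = A @ A.T + np.eye(d)
    L = np.linalg.cholesky(Sigma)

    Z = rng.standard_normal((n, d))
    X = Z @ L.T

    # For Gaussian X, E[X_i | X_1,...,X_{i-1}] = sum_{j<i} L_ij Z_j is linear in the past,
    # so the martingale increments reduce to Xbar_i = L_ii Z_i.
    Xbar = Z * np.diag(L)

    # Sanity check: the increments are centered and mutually uncorrelated.
    print(np.round(np.cov(Xbar.T), 2))   # ≈ diag(L_11², ..., L_dd²)
    ```

    Outside the Gaussian case the conditional expectations are no longer linear and X̄ must be computed from the actual conditional laws.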


    Comments

    (∗)  Var(f(X̄)) ≤ c ∑_{i=1}^d E[ E[X̄_i² | F_{i−1}] (∂_i f(X̄))² ], ∀f sufficiently smooth.

    When X is unconditional, then X̄ = X and (∗) is similar to previous results by Klartag '13 and Barthe-Cordero-Erausquin '13.

    The class of vectors such that X̄ = X is strictly larger than the class of unconditional vectors and contains less symmetric vectors.

    The random vector X̄ is the projection in the space L²(Ω, P; ℝ^d) of X onto the subspace of martingale increments with respect to the filtration (F_i)_{0≤i≤d}.

    The operation X → X̄ does not preserve log-concavity.


    Corollary.

    For any log-concave random vector X,

    Var(‖X‖²) ≤ c ∑_{i=1}^d E[X̄_i⁴] ≤ c′ ∑_{i=1}^d E[X_i⁴] ≤ c″ ∑_{i=1}^d (E[X_i²])².

    If in addition X is isotropic, then Var(‖X‖²) ≤ c d for some universal c > 0.


    Remarks about the class of vectors such that X̄ = X

    The class is stable under convolution.

    In dimension 2, a random vector uniformly distributed on a convex body C belongs to this class if and only if C has its barycenter at 0 and is symmetric with respect to the axis ℝ × {0}.

    Here is a way to construct vectors of arbitrarily large dimension belonging to this class (which are neither unconditional nor product):

    Take X^(1), X^(2), X^(3) three independent vectors with values in ℝ² such that X̄^(i) = X^(i). Define

    Y^(1) = (0, X^(1)_1, X^(1)_2),   Y^(2) = (X^(2)_1, 0, X^(2)_2),   Y^(3) = (X^(3)_1, X^(3)_2, 0)

    and let Y = Y^(1) + Y^(2) + Y^(3). Then Y is such that Ȳ = Y.

    This construction can be iterated…
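    The three-vector construction above can be tested numerically. A hedged sketch (choices mine): take each planar vector uniform on the unit disk, which has barycenter 0 and is symmetric about the horizontal axis, hence belongs to the class; then check that each coordinate of Y is conditionally centered given the previous ones by testing E[Y_k g(Y_1, …, Y_{k−1})] ≈ 0 against a nonlinear g:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 400_000

    def uniform_disk(n, rng):
        """Uniform sample on the unit disk: barycenter 0, symmetric about the
        horizontal axis, so E[x1] = 0 and E[x2 | x1] = 0."""
        r = np.sqrt(rng.uniform(size=n))
        phi = rng.uniform(0, 2 * np.pi, size=n)
        return np.column_stack([r * np.cos(phi), r * np.sin(phi)])

    X1, X2, X3 = (uniform_disk(n, rng) for _ in range(3))

    # Y = (0, X1_1, X1_2) + (X2_1, 0, X2_2) + (X3_1, X3_2, 0), coordinatewise:
    Y = np.column_stack([
        X2[:, 0] + X3[:, 0],   # Y_1
        X1[:, 0] + X3[:, 1],   # Y_2
        X1[:, 1] + X2[:, 1],   # Y_3
    ])

    # Conditional centering tested against a nonlinear function of the past:
    # E[Y_k g(Y_1,...,Y_{k-1})] should vanish for any bounded g.
    print(np.mean(Y[:, 1] * np.sign(Y[:, 0])))            # ≈ 0
    print(np.mean(Y[:, 2] * np.sign(Y[:, 0] * Y[:, 1])))  # ≈ 0
    ```

    Note Y is neither unconditional nor a product vector, yet the martingale-increment property holds by the independence and symmetry of the three planar factors.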


    Idea of the proof

    Theorem. [Bobkov-Gentil-Ledoux ’01]

    A probability measure µ on ℝ^d satisfies PI(C) for some C > 0 if and only if there is some D such that it satisfies the transport-entropy inequality

    T(ν, µ) ≤ H(ν|µ), ∀ν ∈ P(ℝ^d),

    where

    T(ν, µ) = inf E[min(D‖X − Y‖; D²‖X − Y‖²)],

    the infimum running over all couplings (X, Y) with X ∼ ν and Y ∼ µ.

    The link between C and D is quantitative.

    General idea: Poincaré type inequalities can be represented as transport-entropy inequalities.
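    A hedged sketch of why such a transport-entropy inequality linearizes to a Poincaré inequality (a standard argument, not spelled out on the slide; constants are ignored):

    ```latex
    % Perturb \mu by a smooth bounded g with \int g \, d\mu = 0:
    \[
      d\nu_\varepsilon = (1+\varepsilon g)\,d\mu, \qquad
      H(\nu_\varepsilon \mid \mu) = \frac{\varepsilon^2}{2}\int g^2\,d\mu + o(\varepsilon^2).
    \]
    % For small \varepsilon the transport moves mass over short distances, where
    % \min(D t, D^2 t^2) = D^2 t^2, so T(\nu_\varepsilon,\mu) behaves like
    % D^2\,W_2^2(\nu_\varepsilon,\mu); the Otto--Villani linearization of W_2 then
    % turns T(\nu_\varepsilon,\mu) \le H(\nu_\varepsilon \mid \mu) into a
    % Poincar\'e inequality for \mu, with C controlled by D.
    ```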


    First tool: Knothe map

    Let µ, ν ∈ P(ℝ^d).

    The Knothe map T^d_{µ→ν} transporting µ onto ν is defined recursively as follows:

    When d = 1, the Knothe map T¹_{µ→ν} is the monotone rearrangement map (see Lecture I) defined by

    T¹_{µ→ν} = F_ν^{−1} ∘ F_µ,

    where F_µ and F_ν denote the cumulative distribution functions of µ and ν.

    When d > 1, write z₁ = (x₁, …, x_{d−1}) and z₂ = x_d and denote by
    - µ₁ and ν₁ the marginals of µ and ν on the first d − 1 coordinates,
    - µ₂(·|z₁) and ν₂(·|z₁) the conditional laws of z₂ knowing z₁.

    Then, letting T^{d−1} := T^{d−1}_{µ₁→ν₁}, one defines T^d_{µ→ν} as follows:

    T^d_{µ→ν}(z₁, z₂) = ( T^{d−1}(z₁), T¹_{µ₂(·|z₁)→ν₂(·|T^{d−1}(z₁))}(z₂) ).

    By construction, T^d_{µ→ν} is a triangular map.
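    The d = 1 building block T¹_{µ→ν} = F_ν^{−1} ∘ F_µ is easy to sketch numerically. A minimal example (choices mine): µ = Uniform(0, 1), so F_µ(x) = x, and ν = γ, using the standard-library normal quantile function:

    ```python
    from statistics import NormalDist
    import random

    random.seed(0)
    gauss = NormalDist()          # target nu = standard Gaussian

    def T1(x):
        """Monotone rearrangement F_nu^{-1} o F_mu for mu = Uniform(0,1):
        F_mu(x) = x, so T1 reduces to the Gaussian quantile function."""
        return gauss.inv_cdf(x)

    # Push uniform samples forward: the image should be (empirically) standard Gaussian.
    n = 100_000
    pushed = [T1(random.random()) for _ in range(n)]
    mean = sum(pushed) / n
    var = sum((y - mean) ** 2 for y in pushed) / n
    print(round(mean, 2), round(var, 2))   # ≈ 0.0 and 1.0
    ```

    In higher dimension the recursion composes such one-dimensional maps, one conditional law at a time, which is what makes the Knothe map triangular.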
