
Sparse recovery under weak moment assumptions

Guillaume Lecué

CNRS, CREST, ENSAE

August 2016 - Journée MAS, Grenoble

joint works with Sjoerd Dirksen, Shahar Mendelson and Holger Rauhut

Exact reconstruction from few linear measurements (Compressed sensing)

[Figure : the n measurement vectors X1, X2, X3, . . . , Xn ∈ Rp and the unknown vector x ∈ Rp]

X1, . . . , Xn : n measurement vectors

‖x‖0 = |{j : xj ≠ 0}| ≤ s : s-sparse

aim : exact reconstruction of any s-sparse vector x from (⟨Xi , x⟩)_{i=1}^n

Questions :

what is the minimal number of measurements n ?

how to choose the measurement vectors X1, . . . ,Xn ?

p : space dimension, n : number of measurements, s : sparsity parameter.

2 / 41
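To fix ideas, here is a minimal numpy sketch of the measurement model on this slide; the concrete sizes p, n, s and the Gaussian choice of measurement vectors are illustrative assumptions, not part of the slide.

```python
import numpy as np

rng = np.random.default_rng(0)
p, n, s = 200, 60, 5                  # space dimension, number of measurements, sparsity

# an s-sparse vector x in R^p
x = np.zeros(p)
x[rng.choice(p, size=s, replace=False)] = rng.standard_normal(s)

# n measurement vectors X_1, ..., X_n (taken i.i.d. standard Gaussian here, purely for illustration)
X = rng.standard_normal((n, p))

# the only data available for reconstruction: the n linear measurements <X_i, x>
measurements = X @ x
print(measurements.shape)             # (n,)
```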


ℓ0-minimization is NP-hard

ℓ0-minimization procedure

minimize ‖t‖0 subject to ⟨Xi , t⟩ = ⟨Xi , x⟩, i = 1, . . . , n.

argmin(‖t‖0 : Γt = Γx) = {x} for any ‖x‖0 ≤ s.

[Figure : Γ is the n × p matrix with rows X⊤1 /√n, . . . , X⊤n /√n ; ΓI denotes its column submatrix indexed by I ; ℓ0-minimization recovers every s-sparse x when ker ΓI = {0} for all |I | = 2s]

1 n ≥ 2s is the minimal number of measurements.

2 Γ = the first 2s Fourier basis vectors.

3 Natarajan, 1995 : ℓ0-minimization is NP-hard (solves the “exact cover by 3-sets problem”).

3 / 41
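To see concretely why the ℓ0-program is combinatorial, here is a hedged brute-force sketch that enumerates every support of size at most s; it is only feasible for tiny p, and all sizes below are illustrative assumptions.

```python
import numpy as np
from itertools import combinations

def l0_recover(Gamma, y, s):
    """Exhaustive l0-minimization: try every support of size <= s and
    return the sparsest t with Gamma t = y (up to numerical tolerance)."""
    n, p = Gamma.shape
    for k in range(s + 1):                       # increasing sparsity level
        for I in combinations(range(p), k):
            I = list(I)
            if not I:
                if np.allclose(y, 0):
                    return np.zeros(p)
                continue
            tI, *_ = np.linalg.lstsq(Gamma[:, I], y, rcond=None)
            if np.allclose(Gamma[:, I] @ tI, y, atol=1e-8):
                t = np.zeros(p)
                t[I] = tI
                return t
    return None                                  # no s-sparse solution found

# tiny example: the number of candidate supports, C(p, k), already explodes quickly
rng = np.random.default_rng(1)
p, n, s = 20, 10, 3
x = np.zeros(p); x[rng.choice(p, s, replace=False)] = rng.standard_normal(s)
Gamma = rng.standard_normal((n, p)) / np.sqrt(n)
print(np.allclose(l0_recover(Gamma, Gamma @ x, s), x))
```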


convex relaxation : ℓ1-minimization = basis pursuit algorithm

Basis pursuit – [Logan, 1965], [Donoho, Logan, 1992], [...]

minimize ‖t‖1 subject to ⟨Xi , t⟩ = ⟨Xi , x⟩, i = 1, . . . , n.

BP can be recast as a linear program (see the sketch after this slide).

Definition

We say that Γ = n^{−1/2} ∑_{i=1}^n ⟨Xi , ·⟩ ei satisfies the exact reconstruction property of order s when :

for any ‖x‖0 ≤ s, argmin(‖t‖1 : Γt = Γx) = {x} (ER(s))

Proposition : Γ satisfies ER(s) ⇒ n ≳ s log(ep/s)

Question : construction of Γ satisfying ER(s) with n ∼ s log(ep/s).

4 / 41
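The standard way to recast BP as a linear program is to introduce slack variables u with −u ≤ t ≤ u and to minimize ∑_j uj under Γt = Γx. A minimal sketch with scipy's LP solver; the instance sizes are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(Gamma, y):
    """Solve min ||t||_1 s.t. Gamma t = y as an LP in the variables (t, u),
    using the standard reformulation -u <= t <= u, minimize sum(u)."""
    n, p = Gamma.shape
    c = np.concatenate([np.zeros(p), np.ones(p)])           # minimize sum of u
    # inequality constraints  t - u <= 0  and  -t - u <= 0
    A_ub = np.block([[ np.eye(p), -np.eye(p)],
                     [-np.eye(p), -np.eye(p)]])
    b_ub = np.zeros(2 * p)
    # equality constraints  Gamma t = y
    A_eq = np.hstack([Gamma, np.zeros((n, p))])
    bounds = [(None, None)] * p + [(0, None)] * p            # t free, u >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
                  bounds=bounds, method="highs")
    return res.x[:p]

# sanity check on a small random instance
rng = np.random.default_rng(2)
p, n, s = 100, 40, 4
x = np.zeros(p); x[rng.choice(p, s, replace=False)] = rng.standard_normal(s)
Gamma = rng.standard_normal((n, p)) / np.sqrt(n)
print(np.allclose(basis_pursuit(Gamma, Gamma @ x), x, atol=1e-6))
```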


characterization of the Exact Reconstruction property

RIP(c0s) : for any ‖x‖0 ≤ c0s, (1/2)‖x‖2 ≤ ‖Γx‖2 ≤ (3/2)‖x‖2.

[Candès & Romberg & Tao, 05, 06]

⇓ for any x ∈ √(c1s) Bp1 ∩ Sp−12 , ‖Γx‖2 > 0

[Kashin & Temlyakov, 07]

⇓ NSP(s) : for any h ∈ ker Γ\{0} and |I | ≤ s, ‖hI‖1 < ‖hIc‖1

[Donoho, Elad, Huo, 01, 03]

⇕

ER(s) : for any ‖x‖0 ≤ s, argmin(‖t‖1 : Γt = Γx) = {x}

(⇔ ΓBp1 is s-neighborly, [Donoho, 05])

(⇐ incoherence conditions)

5 / 41
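NSP(s) quantifies over the whole kernel of Γ, so it cannot be certified numerically by sampling; the sketch below is only a heuristic falsification test on random kernel directions, for tiny dimensions (all sizes are assumptions of the example). For a fixed kernel vector h, the worst support of size s is exactly the set of its s largest coordinates in absolute value, which is what the code checks.

```python
import numpy as np
from scipy.linalg import null_space

def nsp_violated(Gamma, s, trials=2000, rng=np.random.default_rng(3)):
    """Look for a counterexample to NSP(s): some h in ker(Gamma)\\{0} and |I| <= s
    with ||h_I||_1 >= ||h_{I^c}||_1. Returns True if a violation is found.
    (Sampling can only falsify NSP, never certify it.)"""
    N = null_space(Gamma)                     # orthonormal basis of ker(Gamma), shape (p, d)
    if N.shape[1] == 0:
        return False                          # trivial kernel: NSP holds vacuously
    for _ in range(trials):
        h = N @ rng.standard_normal(N.shape[1])
        I = np.argsort(np.abs(h))[-s:]        # worst support of size s for this h
        if np.abs(h[I]).sum() >= np.abs(h).sum() - np.abs(h[I]).sum():
            return True
    return False

rng = np.random.default_rng(3)
p, n, s = 30, 20, 2
Gamma = rng.standard_normal((n, p)) / np.sqrt(n)
print(nsp_violated(Gamma, s))                 # False suggests (but does not prove) NSP(s)
```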


Random measurements

Random matrix

So far, only random matrices satisfy ER(s) with n ∼ s log(ep/s) (with large probability).

Examples :

1 Independent, isotropic (i.e. E⟨X , t⟩^2 = ‖t‖_2^2) and subgaussian (i.e. P[|⟨X , t⟩| ≥ u‖t‖2] ≤ 2 exp(−c0 u^2)) rows : RIP(s) is satisfied when n ∼ s log(ep/s) [Candès, Tao, Vershynin, Rudelson, Mendelson, Pajor, Tomczak-Jaegermann].

2 Independent log-concave rows or independent sub-exponential columns satisfy RIP(s) when n ∼ s log^2(ep/s) [Adamczak, Latała, Litvak, Pajor, Tomczak-Jaegermann].

3 Structured matrices : partial Fourier matrices satisfy RIP(s) [Rudelson, Vershynin, Candès, Tao, Bourgain] when n ≳ s log^3(p).

6 / 41
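A quick way to see item 1 in action is to measure the empirical success probability of basis pursuit with Gaussian measurements as n grows. A hedged sketch, assuming the cvxpy package is available (any LP solver would do, and all sizes are illustrative):

```python
import numpy as np
import cvxpy as cp     # assumption: cvxpy is installed

def bp_success(n, p, s, trials=20, rng=np.random.default_rng(4)):
    """Empirical probability that basis pursuit exactly recovers a random
    s-sparse vector from n i.i.d. Gaussian measurements in R^p."""
    ok = 0
    for _ in range(trials):
        x = np.zeros(p); x[rng.choice(p, s, replace=False)] = rng.standard_normal(s)
        Gamma = rng.standard_normal((n, p)) / np.sqrt(n)
        t = cp.Variable(p)
        cp.Problem(cp.Minimize(cp.norm(t, 1)), [Gamma @ t == Gamma @ x]).solve()
        ok += np.allclose(t.value, x, atol=1e-5)
    return ok / trials

p, s = 100, 5
for n in (10, 20, 40, 80):     # success typically kicks in once n is a constant times s log(ep/s)
    print(n, bp_success(n, p, s))
```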


What property of randomness is used for exact reconstruction ? concentration ?

Can we take “Cauchy measurements” (density ∝ (1 + x^2)^{−1}) ?

X = (x1, . . . , xp) where the xj are independent Cauchy variables

and still get the same properties as for “Gaussian measurements”?

Here : we will need log p moments for the coordinates xj ’s.

7 / 41


What property of randomness is used for exact reconstruction ? concentration ?

1 Let Γ = n^{−1/2}(eij) ∈ R^{n×p} where the eij are iid symmetric exponential. If Γ satisfies RIP(s) with probability at least 1/2 then n ≳ s log^2(ep/s). [Adamczak, Latała, Litvak, Pajor, Tomczak-Jaegermann]

2 If Γ has independent isotropic sub-exponential rows then it satisfies ER(s) with large probability when n ≳ s log(ep/s). [Koltchinskii] and [Foucart and Lai]

Exact reconstruction under a weak concentration property cannot be studied via RIP.

8 / 41


A weaker condition than RIP

Proposition (L. and Mendelson)

Let Γ : Rp → Rn be such that

1 for any ‖t‖0 ≤ s : ‖Γt‖2 ≥ κ0‖t‖2,

2 ‖Γej‖2 ≤ c0, ∀1 ≤ j ≤ p (where (e1, . . . , ep) is the canonical basis).

Then, ∀t ∈ √(c1s) Bp1 ∩ Sp−12 , ‖Γt‖2 ≥ c2 > 0 and so ER(s) holds.

9 / 41
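For a tiny instance, both conditions of the proposition can be checked directly: condition 1 by enumerating all n × s column submatrices and taking the smallest singular value, condition 2 by computing the column norms ‖Γej‖2. A sketch under illustrative sizes:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(5)
p, n, s = 15, 10, 2
Gamma = rng.standard_normal((n, p)) / np.sqrt(n)

# condition 1: ||Gamma t||_2 >= kappa_0 ||t||_2 for every s-sparse t
# = smallest singular value over all n x s column submatrices
kappa_0 = min(np.linalg.svd(Gamma[:, list(I)], compute_uv=False)[-1]
              for I in combinations(range(p), s))

# condition 2: the columns Gamma e_j are bounded in l_2
c_0 = np.linalg.norm(Gamma, axis=0).max()

print(f"kappa_0 ~ {kappa_0:.2f}, max_j ||Gamma e_j||_2 ~ {c_0:.2f}")
```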


Comparison with RIP

RIP(s) : ∀‖t‖0 ≤ s, (1/2)‖t‖2 ≤ ‖Γt‖2 ≤ (3/2)‖t‖2

versus

1 ∀‖t‖0 ≤ s : ‖Γt‖2 ≥ κ0‖t‖2,

2 max_{1≤j≤p} ‖Γej‖2 ≤ c0.

Both imply exact reconstruction ER(s).

LHS of RIP is implied by the small ball property : “no cost”

RHS of RIP requires deviation (ψ2).

10 / 41


deviation and moments - the first assumption

1 Z is subgaussian (ψ2) ⇔ ‖Z‖Lq ≲ √q for all q’s

2 Z is subexponential (ψ1) ⇔ ‖Z‖Lq ≲ q for all q’s

3 Z is ψα ⇔ ‖Z‖Lq ≲ q^{1/α} for all q’s

Here, we assume that the measurement vector X = (x1, . . . , xp) ∈ Rp is such that ‖xj‖L2 = 1 and

‖xj‖Lq ≤ κ0 q^η, for q ∼ log(p)

for some η ≥ 1/2.

11 / 41
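To make item 1 concrete: for a standard Gaussian g one has E|g|^q = 2^{q/2} Γ((q+1)/2)/√π, so ‖g‖Lq grows like √q. A short numeric check of this growth (the grid of q values is arbitrary):

```python
import numpy as np
from scipy.special import gammaln

def gaussian_Lq_norm(q):
    """||g||_{L_q} for g ~ N(0,1), via E|g|^q = 2^{q/2} Gamma((q+1)/2) / sqrt(pi)."""
    log_moment = (q / 2) * np.log(2) + gammaln((q + 1) / 2) - 0.5 * np.log(np.pi)
    return np.exp(log_moment / q)

for q in (2, 4, 8, 16, 32, 64):
    # the L_q norm stays within a constant factor of sqrt(q), as in the psi_2 item above
    print(q, round(gaussian_Lq_norm(q), 2), round(np.sqrt(q), 2))
```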


Small ball property - the second assumption. [Mendelson], [Koltchinskii, Mendelson]

There exist two constants u0, β0 such that : ∀‖t‖0 ≤ s,

P[|⟨X , t⟩| ≥ u0‖t‖2] ≥ β0

Examples :

1) if X is isotropic (E⟨X , t⟩^2 = ‖t‖_2^2) and, for all ‖t‖0 ≤ s,

‖⟨X , t⟩‖L2+ε ≤ κ0 ‖⟨X , t⟩‖L2

for some ε > 0.

2) if X is isotropic and, for all ‖t‖0 ≤ s,

‖⟨X , t⟩‖L2 ≤ κ0 ‖⟨X , t⟩‖L1 .

12 / 41


Small ball property - other examples from [RV13]

[RV13] : “Small ball probabilities for linear images of high dimensional distributions”, M. Rudelson and R. Vershynin.

3) X = (x1, . . . , xp) with independent absolutely continuous coordinates with density bounded by K a.s. ; then for any t ∈ Rp,

P[|⟨X , t⟩| ≥ (4√2 K)^{−1} ‖t‖2] ≥ 1/2.

(For example, a Cauchy measurement vector satisfies the small ball property.)

4) X = (x1, . . . , xp) with independent coordinates such that

L(xi , t0) := sup_{u∈R} P[|xi − u| ≤ t0] ≤ p0

for some t0, p0 > 0 ; then for any t ∈ Rp,

P[|⟨X , t⟩| ≥ t0‖t‖2] ≥ 1 − c0 p0.

13 / 41
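Example 3 applies in particular to a Cauchy measurement vector, whose coordinate density is bounded by K = 1/π. A Monte Carlo sketch of the small ball probability P[|⟨X , t⟩| ≥ u0‖t‖2] for a few directions t (the dimension and sample size are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(6)
p, N = 50, 100_000
K = 1 / np.pi                      # the standard Cauchy density is bounded by 1/pi
u0 = 1 / (4 * np.sqrt(2) * K)      # threshold from example 3)

X = rng.standard_cauchy((N, p))    # N independent Cauchy measurement vectors
for _ in range(3):                 # a few random unit directions t
    t = rng.standard_normal(p)
    t /= np.linalg.norm(t)
    prob = np.mean(np.abs(X @ t) >= u0)
    print(f"P[|<X,t>| >= u0 ||t||_2] ~ {prob:.3f}  (example 3 guarantees >= 0.5)")
```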


Theorem (L. & Mendelson)

Let X1, . . . ,Xn be n iid random vectors in Rp distributed as X = (x1, . . . , xp)⊤, s.t. :

1 ‖xj‖L2 = 1 and, for some η ≥ 1/2 and q = κ1 log(wp), ‖xj‖Lq ≤ κ0 q^η,

2 there exist u0, β0 such that, ∀t with ‖t‖0 ≤ s, P[|⟨X , t⟩| ≥ u0‖t‖2] ≥ β0,

3 n ≳ s log(ep/s).

Then, with probability at least 1 − 2 exp(−c1 n β0^2) − 1/(w^{κ1} p^{κ1−1}),

Γ, the n × p matrix with rows X⊤1 /√n, . . . , X⊤n /√n,

satisfies ER(c2 u0^2 β0 s).

14 / 41


log p moments are almost necessary

Theorem (L. & Mendelson)

There exists a real-valued random variable x such that

1 Ex = 0, Ex^2 = 1, Ex^4 ≲ 1

2 ‖x‖Lq ≲ √q for q ∼ (log p)/ log log p

for which, if the xij are iid ∼ x and n ∼ log p, then

Γ = n^{−1/2} (xij : 1 ≤ i ≤ n, 1 ≤ j ≤ p)

does not satisfy ER(1) with probability at least 1/2.

⇒ We need at least log p/ log log p moments for exact reconstruction via basis pursuit.

15 / 41


phase transition diagram for Exponential and Student variables.

[Figure : phase transition diagrams for ψγ variables sign(g)|g|^{2/γ}, where g ∼ N(0, 1), and for Student variables of degree k, with density ∼ (1 + t^2)^{−(k+1)/2}]

16 / 41
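The two families of variables in the diagram can be sampled directly; a hedged sketch of the generators is given below. They can be plugged in as the entries of Γ in the empirical success-rate experiment sketched earlier to reproduce a phase-transition-type picture; the rescaling of the Student entries to unit L2 norm is an assumption made here to match the theorem's normalization.

```python
import numpy as np

rng = np.random.default_rng(7)

def psi_gamma_rows(gamma, n, p, rng=rng):
    """psi_gamma entries: sign(g)|g|^{2/gamma} with g ~ N(0,1) (gamma = 2 is Gaussian)."""
    g = rng.standard_normal((n, p))
    return np.sign(g) * np.abs(g) ** (2 / gamma)

def student_rows(k, n, p, rng=rng):
    """Student entries with k degrees of freedom (density ~ (1 + t^2/k)^{-(k+1)/2},
    i.e. the slide's family up to scaling), rescaled to unit L2 norm (needs k > 2)."""
    return rng.standard_t(k, (n, p)) / np.sqrt(k / (k - 2))

print(psi_gamma_rows(1, 2, 3).shape, student_rows(4, 2, 3).shape)
```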


A price to pay for convex relaxation

Theorem (L. & Mendelson)

Let X1, . . . ,Xn be n iid random vectors in Rp distributed as X, s.t. :

1 there exist u0, β0 such that, ∀t with ‖t‖0 ≤ s, P[|⟨X , t⟩| ≥ u0‖t‖2] ≥ β0,

2 n ≳ s log(ep/s).

Then, with probability at least 1 − 2 exp(−c1 n β0^2),

Γ, the n × p matrix with rows X⊤1 /√n, . . . , X⊤n /√n,

is such that for any ‖x‖0 ≤ s, argmin(‖t‖0 : Γt = Γx) = {x}.

⇒ We don’t need any moment assumption for ℓ0-minimization. This proves that there is a price to pay in terms of concentration for convex relaxation.

17 / 41


conclusion and comments for the exact reconstruction problem

1 Exact reconstruction via Basis Pursuit for random linear measurements under log p moments is possible with the same number of measurements as in the Gaussian case.

2 RIP needs ψ2-concentration (and thus may not be the “optimal” way to prove exact reconstruction).

3 the property of randomness that looks “important” for exact reconstruction of s-sparse vectors : ∀‖t‖0 ≤ s, P[|⟨X , t⟩| ≥ u0‖t‖2] ≥ β0.

4 log p/ log log p moments are a necessary price to pay for convex relaxation.

18 / 41


Application to quantized CS

Data : y = Qθ(Γx) where Qθ : Rm → (θZ + θ/2)m.

“Model” : y = Γx + e where ‖e‖∞ ≤ θ/2

Procedure : BPDN∞

min_{t∈Rp} ‖t‖1 s.t. ‖y − Γt‖∞ ≤ θ/2

Results for BPDNq from Jacques, Hammond and Fadili, “Dequantizing compressed sensing : when oversampling and non-Gaussian constraints combine” (BPDN∞ corresponds to q → ∞), show that if

n ≳ (s log(ep/s))^{q/2}

then (RIP)q,2 holds with large probability for Gaussian measurement matrices Γ, and then for any x

‖xBPDNq − x‖2 ≲ σs(x)1/√s + θ/√(q + 1)

19 / 41
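A minimal sketch of the quantizer Qθ and of the BPDN∞ program above, assuming the cvxpy package is available (the sizes and the value of θ are illustrative assumptions):

```python
import numpy as np
import cvxpy as cp     # assumption: any solver handling linear constraints works

def quantize(z, theta):
    """Q_theta: uniform quantizer onto theta*Z + theta/2 (error at most theta/2)."""
    return theta * np.floor(z / theta) + theta / 2

def bpdn_inf(Gamma, y, theta):
    """BPDN_infty: min ||t||_1  s.t.  ||y - Gamma t||_inf <= theta/2."""
    p = Gamma.shape[1]
    t = cp.Variable(p)
    prob = cp.Problem(cp.Minimize(cp.norm(t, 1)),
                      [cp.norm(y - Gamma @ t, 'inf') <= theta / 2])
    prob.solve()
    return t.value

rng = np.random.default_rng(8)
p, n, s, theta = 100, 60, 4, 0.1
x = np.zeros(p); x[rng.choice(p, s, replace=False)] = rng.standard_normal(s)
Gamma = rng.standard_normal((n, p)) / np.sqrt(n)
y = quantize(Gamma @ x, theta)                    # quantized measurements
x_hat = bpdn_inf(Gamma, y, theta)
print(np.linalg.norm(x_hat - x))                  # reconstruction error
```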


Application to quantized CS

Theorem (Dirksen, L. and Rauhut)

For Gaussian measurements, when n ≳ s log(ep/s), then

‖xBPDN∞ − x‖2 ≲ σs(x)1/√s + θ.

Moreover, xBPDN∞ is quantization consistent : y = Qθ(ΓxBPDN∞).

For the quantization problem, RIPq,2 requires more measurements. For an analysis based on RIPq,r (for any x ∈ Σs, c‖x‖r ≤ ‖Γx‖q ≤ C‖x‖r), two phenomena occur :

1 more measurements than s log(ep/s) are needed

2 other types of matrices than Gaussian are needed (adjacency matrices, stable processes).

Bypassing the RIP-based approach shows that neither of these two phenomena actually occurs : one can use s log(ep/s) Gaussian measurements.

20 / 41


Thanks for your attention

G. Lecué and S. Mendelson, Sparse recovery under weak moment assumptions. To appear in Journal of the European Mathematical Society, Jan. 2014.

S. Dirksen, G. Lecué and H. Rauhut, On the gap between restricted isometry properties and sparse recovery conditions. To appear in IEEE Transactions on Information Theory, March 2015.

21 / 41


Sparse Linear Regression

22 / 41


Noisy data - LASSO

Data : y = Xβ∗ + σW where W ∼ N(0, In) and X is the n × p matrix with rows X⊤1 , . . . , X⊤n (so X = √n Γ).

Aims : Estimation of β∗ / denoising of Xβ∗ / prediction of outputs / support recovery.

LASSO :

β̂ ∈ argmin_{β∈Rp} ( (1/n)‖y − Xβ‖_2^2 + λ‖β‖n,1 ) for λ ∼ σ √(log p / n),

where

‖β‖n,1 = ∑_{j=1}^p rn,j |βj | and rn,j = (X⊤X/n)jj .

23 / 41
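A hedged sketch of the LASSO above using scikit-learn's plain ℓ1 penalty; with a design whose columns satisfy (X⊤X/n)jj ≈ 1 this matches the weighted norm ‖·‖n,1 up to constants, and sklearn's objective carries an extra factor 1/2. All sizes and the constant in λ are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(9)
p, n, s, sigma = 200, 100, 5, 0.5

X = rng.standard_normal((n, p))                  # columns already have (X^T X / n)_jj ~ 1
beta_star = np.zeros(p)
beta_star[rng.choice(p, s, replace=False)] = rng.standard_normal(s)
y = X @ beta_star + sigma * rng.standard_normal(n)

lam = sigma * np.sqrt(np.log(p) / n)             # lambda ~ sigma sqrt(log p / n)
# sklearn's Lasso minimizes (1/(2n))||y - Xb||_2^2 + alpha ||b||_1,
# so alpha plays the role of lambda/2 in the slide's normalization
lasso = Lasso(alpha=lam / 2, fit_intercept=False).fit(X, y)
beta_hat = lasso.coef_

print(np.linalg.norm(beta_hat - beta_star, 1))   # l1 estimation error of the LASSO
```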


Restricted eigenvalue condition - [Bickel, Ritov, Tsybakov, 2007]

Restricted eigenvalue condition : for any I ⊂ [p] s.t. |I | ≤ s and any v ∈ Rp,

‖vIc‖1 ≤ 3‖vI‖1 ⇒ ‖Γv‖2 ≥ κ(s)‖vI‖2 (REC(s))

Remark (Null space property) : ∀I ⊂ [p] s.t. |I | ≤ s and ∀v ∈ Rp,

‖vIc‖1 < ‖vI‖1 ⇒ ‖Γv‖2 > 0 (NSP(s))

24 / 41


Estimation under REC via the LASSO - [BRT,07]

If REC(s) holds and ‖β∗‖0 = s, then with probability larger than 1 − 1/p�,

‖β̂ − β∗‖1 ≲ sλ/κ(s)

and

‖X(β̂ − β∗)‖_2^2 ≲ σ^2 s log(p)/κ^2(s).

25 / 41


REC under weak moment assumption

L. & Mendelson

Let X1, . . . ,Xn be n iid random vectors distributed as X = (x1, . . . , xp)⊤ s.t. :

1 ‖xj‖L2 = 1 and ‖xj‖Lq ≤ κ0 q^η for q = κ1 log(wp),

2 ∃u0, β0 such that : ∀‖t‖0 ≤ s, P[|⟨X , t⟩| ≥ u0‖t‖2] ≥ β0,

3 n ≳ max(s log(ep/s), log^{(2η−1)∨1}(wp)).

Then, with probability at least 1 − 2 exp(−c1 n β0^2) − 1/(w^{κ1} p^{κ1−1}),

Γ, the n × p matrix with rows X⊤1 /√n, . . . , X⊤n /√n,

satisfies REC(c1 s).

1 log p moments are almost necessary

2 the same is true for the Compatibility Condition of S. van de Geer

3 the same is true for normalized measurement matrices.

26 / 41


Exact cover by 3-sets problem

Problem

Given a collection $\{C_j : j \in [p]\}$ of 3-element subsets of $[n]$, does there exist a partition of $[n]$ by elements $C_j$?

(This problem is NP-complete, i.e. it is both in NP and NP-hard.)

27 / 41


recasting basis pursuit to a linear program

Basis pursuit

$$x^\star \ \in\ \operatorname*{argmin}_{t \in \mathbb{R}^p} \|t\|_1 \quad \text{subject to} \quad \Gamma t = \Gamma x$$

is equivalent to the linear program

$$\big((z^+)^\star, (z^-)^\star\big) \ \in\ \operatorname*{argmin}_{z^+, z^- \in \mathbb{R}^p} \sum_{j=1}^{p}(z_j^+ + z_j^-) \quad \text{subject to} \quad \big[\Gamma \mid -\Gamma\big]\begin{bmatrix} z^+ \\ z^- \end{bmatrix} = \Gamma x, \qquad \begin{bmatrix} z^+ \\ z^- \end{bmatrix} \ge 0,$$

with $x^\star = (z^+)^\star - (z^-)^\star$.

28 / 41
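A minimal Python sketch of this reformulation (not from the talk), using scipy.optimize.linprog; the matrix $\Gamma$, the sparse vector $x$ and all sizes are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

# Basis pursuit as the linear program above: minimize sum_j (z_j^+ + z_j^-)
# subject to [Gamma | -Gamma] [z^+; z^-] = Gamma x and z^+, z^- >= 0.
rng = np.random.default_rng(3)
n, p, s = 40, 100, 5
Gamma = rng.standard_normal((n, p)) / np.sqrt(n)
x = np.zeros(p)
x[rng.choice(p, s, replace=False)] = rng.standard_normal(s)

c = np.ones(2 * p)
res = linprog(c, A_eq=np.hstack([Gamma, -Gamma]), b_eq=Gamma @ x,
              bounds=(0, None), method="highs")
x_bp = res.x[:p] - res.x[p:]                     # x* = (z^+)* - (z^-)*
print("exact recovery:", np.linalg.norm(x_bp - x) < 1e-6)
```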


s-neighborly polytope

Definition

A centrally symmetric polytope $P \subset \mathbb{R}^n$ is said to be $s$-neighborly if every set of $s$ vertices containing no antipodal pair is the vertex set of some face of $P$.

Example: $\Gamma B_1^p$ is $s$-neighborly when, for all $I \subset [p]$ with $|I| \le s$ and all $(\varepsilon_i)_{i \in I} \in \{\pm 1\}^I$,
$$\operatorname{aff}\big(\{\varepsilon_i X_i : i \in I\}\big) \cap \operatorname{conv}\big(\{\theta_j X_j : j \notin I,\ \theta_j \in \{\pm 1\}\}\big) = \emptyset.$$

29 / 41


Paley-Zygmund and Einmahl-Mason inequalities

1. Paley-Zygmund: if $\|Z\|_{2+\varepsilon} \le \kappa\|Z\|_2$, then
$$\mathbb{P}\big[|Z| \ge \tfrac{1}{2}\|Z\|_2\big] \ \ge\ \Big[\frac{3\,\|Z\|_2^2}{4\,\|Z\|_{2+\varepsilon}^2}\Big]^{\frac{2+\varepsilon}{\varepsilon}} \ \ge\ \Big[\frac{3}{4\kappa^2}\Big]^{\frac{2+\varepsilon}{\varepsilon}}.$$

2. Einmahl-Mason: if $Z \ge 0$ then, for $t > 0$,
$$\mathbb{P}\big[Z \le \mathbb{E}Z - t\|Z\|_2\big] \le \exp(-ct^2).$$
So if $\|Z\|_2 \le \kappa\|Z\|_1$,
$$\mathbb{P}\big[|Z| \ge (1/\kappa - t)\|Z\|_2\big] \ge 1 - \exp(-ct^2).$$

30 / 41
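A quick Monte Carlo check of the Paley-Zygmund step (not from the talk): for an illustrative heavy-tailed variable $Z = |T|$ with $T$ a Student-$t_4$, so that $\|Z\|_{2+\varepsilon}$ is finite for $\varepsilon = 1$, the empirical probability $\mathbb{P}[|Z| \ge \tfrac12\|Z\|_2]$ is compared with the bound above.

```python
import numpy as np

# Monte Carlo check of the Paley-Zygmund bound for Z = |t_4| and eps = 1.
rng = np.random.default_rng(4)
eps = 1.0
Z = np.abs(rng.standard_t(4, size=10**6))
L2 = np.sqrt(np.mean(Z**2))                       # ||Z||_2
L2eps = np.mean(Z**(2 + eps)) ** (1 / (2 + eps))  # ||Z||_{2+eps}
lhs = np.mean(Z >= 0.5 * L2)                      # P[|Z| >= (1/2)||Z||_2]
rhs = (3 * L2**2 / (4 * L2eps**2)) ** ((2 + eps) / eps)
print(f"P[|Z| >= ||Z||_2 / 2] = {lhs:.3f}  >=  Paley-Zygmund bound {rhs:.3f}")
```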


Classical “small ball problem” - [Kuelbs, Li]

Study the small ball probability function

$\varphi(\varepsilon) = \mathbb{P}\big[\|X\|_2 \le \varepsilon\big]$ as $\varepsilon \to 0$.

31 / 41


Construction of deterministic matrices satisfying RIP

[Kashin, 1975], [Alon, Goldreich, Hastad, Peralta, 1992], [Devore, 2007], [Nelson, Temlyakov, 2010]: $n \gtrsim s^2$

[Bourgain, Dilworth, Ford, Konyagin, Kutzarova, 2011]: $n \gtrsim s^{2-\varepsilon_0}$

Still far from the number of measurements that can be obtained by random matrices: $n \gtrsim s\log(ep/s)$.

32 / 41


The price to pay from `0 to `1

$\ell_0$-minimization is NP-hard while BP is solved by linear programming, but...

more measurements: $n \ge 2s$ (for $\ell_0$) versus $n \gtrsim s\log(ep/s)$ (for $\ell_1$);

from deterministic measurements for $\ell_0$ (the first $2s$ discrete Fourier measurements) to random measurements for $\ell_1$.

33 / 41


Proof 1/4

We prove: for any $t \in \sqrt{s}\,B_1^p \cap S_2^{p-1}$,
$$\|\Gamma t\|_2^2 = \frac{1}{n}\sum_{i=1}^n \langle X_i, t\rangle^2 \ \ge\ c_0 > 0.$$

(Figure: the sets $\sqrt{s}\,B_1^p$, $S_2^{p-1}$ and $\Sigma_s = \{t \in S_2^{p-1} : \|t\|_0 \le s\}$.)

34 / 41


Proof 2/4 – two steps

1. For any $t \in \Sigma_s$:
$$\frac{1}{n}\sum_{i=1}^n \langle X_i, t\rangle^2 \ \ge\ c_0 > 0 \qquad \text{(small ball property)}$$

2. From $s$-sparse vectors to $\sqrt{s}\,B_1^p \cap S_2^{p-1}$ via Maurey's representation: write $x \in \sqrt{s}\,B_1^p \cap S_2^{p-1}$ as a mean of $s$-sparse vectors ($\log(p)$ moments); a numerical illustration of this step is given after this slide.

35 / 41
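The following Python sketch (not from the talk) illustrates Maurey's empirical method in the second step: for an $x$ with $\|x\|_1 = \sqrt{s}$, sample coordinates with probability $|x_j|/\|x\|_1$ to build i.i.d. $s$-sparse random vectors with mean $x$, and check that their average is close to $x$. Sizes and the sampling scheme are illustrative.

```python
import numpy as np

# Maurey's empirical method: x is written (approximately) as an average of
# random s-sparse vectors Y_m with E[Y_m] = x.
rng = np.random.default_rng(5)
p, s, k = 1000, 20, 2000
x = rng.standard_normal(p)
x *= np.sqrt(s) / np.abs(x).sum()              # put x on the sqrt(s) l1-sphere

prob = np.abs(x) / np.abs(x).sum()             # sample coordinate j with prob |x_j| / ||x||_1
J = rng.choice(p, size=(k, s), p=prob)         # k draws of s coordinates each
Y = np.zeros((k, p))
for m in range(k):
    # Y_m = (1/s) * sum of s copies of ||x||_1 * sign(x_J) e_J: an s-sparse vector with mean x
    np.add.at(Y[m], J[m], np.abs(x).sum() / s * np.sign(x[J[m]]))
print("||average of s-sparse draws - x||_2 =", np.linalg.norm(Y.mean(axis=0) - x))
```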


Proof 3/4 – lower bound on the smallest singular value

Small ball assumption: for all $t \in \Sigma_s$, $\mathbb{P}\big[|\langle X, t\rangle| \ge u_0\|t\|_2\big] \ge \beta_0$.

Empirical small ball property: with high probability, for all $t \in \Sigma_s$,
$$\big|\{i \in \{1, \ldots, n\} : |\langle X_i, t\rangle| \ge u_0\|t\|_2\}\big| \ \ge\ \frac{\beta_0 n}{2}$$
as soon as $n \gtrsim s\log(ep/s)$.

For any $t \in \Sigma_s$:
$$\frac{1}{n}\sum_{i=1}^n \langle X_i, t\rangle^2 \ \ge\ \frac{1}{n}\sum_{i=1}^n u_0^2\|t\|_2^2\,\mathbb{I}\big(|\langle X_i, t\rangle| \ge u_0\|t\|_2\big) \ \ge\ u_0^2\|t\|_2^2\,\frac{\beta_0}{2}.$$

[Mendelson, Koltchinskii] under moment assumptions.

36 / 41
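As a sanity check of the empirical small ball property (not from the talk), the sketch below uses measurement vectors with standard Cauchy entries, which have no moments at all, and reports the smallest fraction of indices $i$ with $|\langle X_i, t\rangle| \ge u_0\|t\|_2$ over randomly sampled $s$-sparse directions $t$. The level $u_0$ and all sizes are illustrative, not the constants of the proof.

```python
import numpy as np

# Empirical small ball fractions |{i : |<X_i, t>| >= u0 ||t||_2}| / n
# for a Cauchy design, probed over random s-sparse unit vectors t.
rng = np.random.default_rng(6)
n, p, s, u0 = 200, 500, 10, 0.5
X = rng.standard_cauchy((n, p))

fracs = []
for _ in range(1000):
    I = rng.choice(p, size=s, replace=False)
    t = np.zeros(p)
    t[I] = rng.standard_normal(s)
    t /= np.linalg.norm(t)
    fracs.append(np.mean(np.abs(X @ t) >= u0))
print("smallest observed fraction:", min(fracs))
```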


Proof 4/4 – from $s$-sparse vectors to $\sqrt{s}\,B_1^p \cap S_2^{p-1}$ via Maurey's method

Proposition

Let $\Gamma : \mathbb{R}^p \to \mathbb{R}^n$ be such that

1. for any $t \in \Sigma_s \cap S_2^{p-1}$: $\|\Gamma t\|_2 \ge \kappa_0$;
2. $\|\Gamma e_j\|_2 \le c_0$ for all $1 \le j \le p$ (where $(e_1, \ldots, e_p)$ is the canonical basis).

Then, for any $t \in \sqrt{c_1 s}\,B_1^p \cap S_2^{p-1}$, $\|\Gamma t\|_2 \ge c_2 > 0$.

The uniform control $\max_{1 \le j \le p}\|\Gamma e_j\|_2 \le c_0$ costs $\log(p)$ moments.

37 / 41


phase transition diagram for Gaussian measurements.

For every $(n, s)$ ($n$: number of measurements, $s$: sparsity):

⋆ construct 20 $s$-sparse vectors $x \in \mathbb{R}^{200}$;
⋆ run Basis Pursuit on $\langle X_i, x\rangle$, $i = 1, \ldots, n$, to get $x_{BP}$;
⋆ check whether $\|x - x_{BP}\|_2 \le 0.01$.

1. black pixel = 20 "exact" recoveries (0 mistakes);
2. red pixel = 0 exact recoveries (20 mistakes).

Theoretical phase transition: $n \sim s\log(ep/s)$. (A sketch of this experiment is given below.)

38 / 41
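A compact Python sketch of this experiment (not from the talk), reusing the linear-programming formulation of Basis Pursuit from above; the $(n, s)$ grid below is a small illustrative subset of the full diagram rather than the whole picture. Replacing rng.standard_normal by rng.standard_cauchy in the measurement step produces the Cauchy diagram of the next slide.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(Gamma, y):
    """Solve min ||t||_1 s.t. Gamma t = y via the LP of the previous slides."""
    p = Gamma.shape[1]
    res = linprog(np.ones(2 * p), A_eq=np.hstack([Gamma, -Gamma]), b_eq=y,
                  bounds=(0, None), method="highs")
    return res.x[:p] - res.x[p:]

rng = np.random.default_rng(7)
p, trials = 200, 20
for n in (20, 60, 100):                     # a few cells of the (n, s) grid
    for s in (5, 15, 30):
        hits = 0
        for _ in range(trials):
            x = np.zeros(p)
            x[rng.choice(p, s, replace=False)] = rng.standard_normal(s)
            X = rng.standard_normal((n, p))             # Gaussian measurements
            hits += np.linalg.norm(basis_pursuit(X, X @ x) - x) <= 0.01
        print(f"n={n:3d}  s={s:2d}  exact recoveries: {hits}/{trials}")
```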


phase transition diagram for Gaussian and Cauchy measurements.

(Figure: two phase transition diagrams; left panel: Gaussian measurements, right panel: Cauchy measurements.)

$\log(ep/s)$ moments may be necessary (?)

39 / 41


Smallest singular value of a random matrix

40 / 41


Proportional case ($s = p$, $n \gtrsim p$) = lower bound on the smallest singular value

Let $X_1, \ldots, X_n$ be i.i.d. vectors in $\mathbb{R}^p$ with $n \gtrsim p$. Then
$$\inf_{\|t\|_2 = 1} \frac{1}{n}\sum_{i=1}^n \langle X_i, t\rangle^2 \ \ge\ c_0 > 0$$

1. in expectation: [Srivastava, Vershynin, 2012] when $X$ is isotropic and $\sup_{\|t\|_2 = 1} \mathbb{E}|\langle X, t\rangle|^{2+\varepsilon} \le c_1$;
2. with probability at least $1 - \exp(-c_2 n)$: [Koltchinskii, Mendelson] when $X$ is isotropic and, for every $t \in \mathbb{R}^p$, $\|\langle t, X\rangle\|_{L_2} \le c_3\,\|\langle t, X\rangle\|_{L_1}$;
3. with probability at least $1 - \exp(-c_2 n)$: [L., Mendelson] when, for every $t \in \mathbb{R}^p$, $\mathbb{P}\big[|\langle t, X\rangle| \ge u_0\|t\|_2\big] \ge \beta_0$.

⇒ A lower bound on the smallest singular value has nothing to do with concentration (it holds even for Cauchy matrices); see the numerical illustration below.

41 / 41
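A quick numerical illustration of the last point (not from the talk): the smallest singular value of $X/\sqrt{n}$, whose square is the infimum above, stays bounded away from zero for Cauchy entries just as for Gaussian ones once $n$ is a constant multiple of $p$. The aspect ratio $n = 4p$ is an arbitrary illustrative choice.

```python
import numpy as np

# Smallest singular value of X / sqrt(n): its square equals
# inf_{||t||_2 = 1} (1/n) sum_i <X_i, t>^2.
rng = np.random.default_rng(8)
p = 200
n = 4 * p
for law, sample in [("Gaussian", rng.standard_normal), ("Cauchy", rng.standard_cauchy)]:
    X = sample((n, p))
    s_min = np.linalg.svd(X / np.sqrt(n), compute_uv=False).min()
    print(f"{law:8s}: smallest singular value of X / sqrt(n) = {s_min:.3f}")
```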
