
Interpolation via weighted ℓ1-minimization

Holger Rauhut
RWTH Aachen University
Lehrstuhl C für Mathematik (Analysis)

Mathematical Analysis and Applications
Workshop in honor of Rupert Lasser
Helmholtz Zentrum München
September 20, 2013

Joint work with Rachel Ward (University of Texas at Austin)

Function interpolation

Aim: Given a function f : D → C on a domain D, reconstruct or interpolate f from sample values f(t1), ..., f(tm).

Approaches

• (Linear) polynomial interpolation
  • assumes (classical) smoothness in order to achieve error rates
  • works with special interpolation points (e.g. Chebyshev points)

• Compressive sensing
  • reconstruction is nonlinear
  • assumes sparsity (or compressibility) of a series expansion in terms of a certain basis (e.g. trigonometric bases)
  • fewer (random!) sampling points than degrees of freedom

This talk: Combine sparsity and smoothness!

Holger Rauhut, RWTH Aachen University – Interpolation via Weighted ℓ1-minimization


A classical interpolation result

C^r([0,1]^d): r-times continuously differentiable periodic functions

There exist sampling points t1, ..., tm and a linear reconstruction operator R : C^m → C^r([0,1]^d) such that for every f ∈ C^r([0,1]^d) the approximation f̃ = R(f(t1), ..., f(tm)) satisfies the optimal error bound

‖f − f̃‖∞ ≤ C m^(−r/d) ‖f‖_{C^r}.

Curse of dimension: one needs about m ≥ C_f ε^(−d/r) samples to achieve error ε < 1.

Exponential scaling in d cannot be avoided using only smoothness (DeVore, Howard, Micchelli 1989 – Novak, Woźniakowski 2009).


Sparse representation of functions

D: domain endowed with a probability measure ν
ψj : D → C, j ∈ Γ (Γ finite or infinite)
{ψj}_{j∈Γ} an orthonormal system:

∫_D ψj(t) ψ̄k(t) dν(t) = δ_{j,k},   j, k ∈ Γ

We consider functions of the form

f(t) = ∑_{j∈Γ} xj ψj(t)

f is called s-sparse if ‖x‖0 := #{ℓ : xℓ ≠ 0} ≤ s, and compressible if the best s-term approximation error

σs(f)q := σs(x)q := inf_{z : ‖z‖0 ≤ s} ‖x − z‖q   (0 < q ≤ ∞)

is small.

Fourier Algebra and Compressibility

Fourier algebra A_p = {f ∈ C[0,1] : |||f|||_p < ∞}, 0 < p ≤ 1,

|||f|||_p := ‖x‖p = (∑_{j∈Z} |xj|^p)^{1/p},   f(t) = ∑_{j∈Z} xj ψj(t).

Motivating example: D = [0,1], ν the Lebesgue measure,

ψj(t) = e^{2πijt},   t ∈ [0,1], j ∈ Z.

Compressibility via the Stechkin estimate:

σs(f)q = σs(x)q ≤ s^{1/q−1/p} ‖x‖p = s^{1/q−1/p} |||f|||_p.

Since ‖f‖∞ := sup_{t∈[0,1]} |f(t)| ≤ |||f|||_1, the best s-term approximation f0 = ∑_{j∈S} xj ψj, |S| = s, satisfies

‖f − f0‖∞ ≤ s^{1−1/p} |||f|||_p.


Sampling

Task: Reconstruct a sparse or compressible

f(t) = ∑_{j∈Γ} xj ψj(t)

from samples f(t1), ..., f(tm) at given sampling points t1, ..., tm ∈ D.

With yℓ = f(tℓ), ℓ = 1, ..., m, and the sampling matrix

A_{ℓ,j} = ψj(tℓ),   ℓ = 1, ..., m; j ∈ Γ,

we can write y = Ax.

Aim: Minimal number m of required samples, m ≪ N = |Γ|.
This leads to an underdetermined linear system.


Detour: Compressive sensing

Reconstruction of x ∈ C^N from y = Ax with A ∈ C^{m×N}, m ≪ N.

Ingredients:

• Sparsity / compressibility
• Efficient reconstruction algorithms
• Randomness / random matrices

Applications in signal and image processing: radar, magnetic resonance imaging, optics, statistics, astronomy, ...


Reconstruction via ℓ1-minimization

ℓ0-minimization,

min ‖z‖0 subject to Az = y,

is NP-hard.

Convex relaxation: ℓ1-minimization

min ‖z‖1 subject to Az = y

Version for noisy data:

min ‖z‖1 subject to ‖Az − y‖2 ≤ η.

Alternatives: Orthogonal Matching Pursuit, Iterative Hard Thresholding, CoSaMP, Iteratively Reweighted Least Squares, ...
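The equality-constrained ℓ1-minimization above is a linear program over the reals. A minimal sketch (not from the talk; it assumes real-valued data, SciPy's linprog, and illustrative dimensions): the standard split z = u − v with u, v ≥ 0 turns basis pursuit into an LP.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, N, s = 60, 128, 5
A = rng.standard_normal((m, N)) / np.sqrt(m)   # random sensing matrix
x = np.zeros(N)
x[rng.choice(N, size=s, replace=False)] = rng.standard_normal(s)  # s-sparse
y = A @ x

# min ||z||_1 s.t. Az = y, as an LP: z = u - v with u, v >= 0,
# minimize sum(u) + sum(v) subject to [A, -A][u; v] = y
c = np.ones(2 * N)
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=y, bounds=(0, None))
z = res.x[:N] - res.x[N:]
err = np.max(np.abs(z - x))
print(err)   # near zero when recovery succeeds
```

With m = 60 samples for an s = 5 sparse vector in dimension N = 128, recovery is typically exact, in line with the m ≍ s ln(N/s) scaling discussed later.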


Restricted Isometry Property (RIP)

Recovery guarantees, error estimates?

Definition
The restricted isometry constant δs of a matrix A ∈ C^{m×N} is defined as the smallest δs such that

(1 − δs) ‖x‖2² ≤ ‖Ax‖2² ≤ (1 + δs) ‖x‖2²

for all s-sparse x ∈ C^N.

This requires that all s-column submatrices of A are well-conditioned.
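Computing δs exactly is combinatorial in N, but for tiny dimensions one can enumerate every s-column submatrix. A sketch (my own illustration, with dimensions chosen small enough to enumerate):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
m, N, s = 128, 32, 3
A = rng.standard_normal((m, N)) / np.sqrt(m)

# delta_s = max over supports S with |S| = s of the spectral deviation of
# the Gram matrix A_S^* A_S from the identity
delta = 0.0
for S in combinations(range(N), s):
    eigs = np.linalg.eigvalsh(A[:, list(S)].T @ A[:, list(S)])
    delta = max(delta, 1.0 - eigs[0], eigs[-1] - 1.0)
print(delta)
```

For this oversampled Gaussian matrix the enumerated δ3 comes out well below 1, matching the "well-conditioned submatrices" reading of the definition.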


RIP implies recovery by ℓ1-minimization

Theorem (Candès, Romberg, Tao '04 – Candès '08 – Foucart, Lai '09 – Foucart '09/'12 – Li, Mo '11 – Andersson, Strömberg '12)
Assume that the restricted isometry constant of A ∈ C^{m×N} satisfies

δ2s < 4/√41 ≈ 0.6246.

Then ℓ1-minimization reconstructs every s-sparse vector x ∈ C^N from y = Ax.

Stability

Theorem (Candès, Romberg, Tao '04 – Candès '08 – Foucart, Lai '09 – Foucart '09/'12 – Li, Mo '11 – Andersson, Strömberg '12)
Let A ∈ C^{m×N} with δ2s < 4/√41 ≈ 0.6246. Let x ∈ C^N, and assume that noisy data are observed, y = Ax + η with ‖η‖2 ≤ σ. Let x# be a solution of

min_z ‖z‖1 such that ‖Az − y‖2 ≤ σ.

Then

‖x − x#‖2 ≤ C σs(x)1/√s + D σ

and

‖x − x#‖1 ≤ C σs(x)1 + D √s σ

for constants C, D > 0 that depend only on δ2s.

Matrices satisfying the RIP

Open problem: Give explicit matrices A ∈ C^{m×N} with small δ2s ≤ 0.62 for "large" s.

Goal: δs ≤ δ if

m ≥ C_δ s ln^α(N)

for constants C_δ and α.

Deterministic matrices are known for which m ≥ C_{δ,k} s² suffices if N ≤ m^k.

Way out: consider random matrices.


RIP for Gaussian and Bernoulli matrices

Gaussian: the entries of A are independent N(0,1) random variables
Bernoulli: the entries of A are independent ±1 Bernoulli random variables

Theorem
Let A ∈ R^{m×N} be a Gaussian or Bernoulli random matrix and assume

m ≥ C δ^{−2} (s ln(eN/s) + ln(2ε^{−1}))

for a universal constant C > 0. Then with probability at least 1 − ε the restricted isometry constant of (1/√m) A satisfies δs ≤ δ.

Consequence: recovery via ℓ1-minimization with probability exceeding 1 − e^{−cm} provided

m ≥ C s ln(eN/s).

This bound is optimal, as follows from the lower bound for the Gelfand widths of ℓp-balls, 0 < p ≤ 1 (Gluskin, Garnaev 1984 – Foucart, Pajor, Rauhut, Ullrich 2010).

Back to Sampling

(ψj)_{j∈Γ} a bounded orthonormal system:

max_{j∈Γ} ‖ψj‖∞ ≤ K

Sampling points t1, ..., tm are chosen i.i.d. according to the orthogonalization measure ν.
The sampling matrix A with entries A_{ℓ,j} = ψj(tℓ) is then a random matrix.

Theorem (Candès, Tao '06 – Rudelson, Vershynin '06 – Rauhut '08/'10)
If m ≥ C K² δ^{−2} s max{ln³(s) ln(N), ln(ε^{−1})}, then the restricted isometry constant of (1/√m) A satisfies δs ≤ δ with probability at least 1 − ε.

This implies stable recovery of all s-sparse f from m ≥ C_K s ln⁴(N) random samples via ℓ1-minimization with high probability.

Trigonometric Polynomials

D = [0,1]^d, ν: Lebesgue measure

ψj(t) = e^{2πi j·t},   j ∈ Z^d, t ∈ [0,1]^d

Boundedness constant K = 1.

Exact recovery of s-sparse trigonometric polynomials from m ≥ C s ln³(s) ln(N) i.i.d. samples uniformly distributed on [0,1]^d via ℓ1-minimization.

Error estimates for the reconstruction f̃ of a general f (Fourier coefficients supported on Γ):

‖f − f̃‖2 ≤ C σs(f)1/√s ≤ C s^{1/2−1/p} |||f|||_p,
‖f − f̃‖∞ ≤ C s^{1−1/p} |||f|||_p,
‖f − f̃‖∞ ≤ C (m / (ln³(m) ln(N)))^{1−1/p} |||f|||_p.
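This setting can be run end to end in a few lines. A sketch (my own toy version: one dimension, real coefficients assumed so the problem stays a real LP after stacking real and imaginary parts of the constraints; sizes are illustrative):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
N, m, s = 64, 40, 4
t = rng.random(m)                                   # i.i.d. uniform on [0, 1]
A = np.exp(2j * np.pi * np.outer(t, np.arange(N)))  # psi_j(t_l) = e^{2 pi i j t_l}

x = np.zeros(N)
x[rng.choice(N, size=s, replace=False)] = rng.standard_normal(s)
y = A @ x

# complex equality constraints split into real and imaginary parts,
# then basis pursuit as an LP over z = u - v with u, v >= 0
Ar = np.vstack([A.real, A.imag])
yr = np.concatenate([y.real, y.imag])
res = linprog(np.ones(2 * N), A_eq=np.hstack([Ar, -Ar]), b_eq=yr, bounds=(0, None))
z = res.x[:N] - res.x[N:]
err = np.max(np.abs(z - x))
print(err)
```

With m = 40 uniform random samples of an s = 4 sparse trigonometric polynomial in N = 64 frequencies, the recovered coefficients typically match the true ones to solver accuracy.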

Chebyshev polynomials

Chebyshev polynomials Cj, j = 0, 1, 2, ...:

∫_{−1}^{1} Cj(x) Ck(x) dx / (π √(1 − x²)) = δ_{j,k},   j, k ∈ N0,

‖C0‖∞ = 1 and ‖Cj‖∞ = √2 for j ≥ 1.

Stable recovery of polynomials that are s-sparse in the Chebyshev system from

m ≥ C s ln³(s) ln(N)

samples drawn i.i.d. from the Chebyshev measure dx / (π √(1 − x²)).
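The orthonormality relation above is easy to check numerically: drawing X = cos(πU) with U uniform on [0,1] produces samples from the Chebyshev measure. A sketch (my own check; C0 = 1 and Cj = √2 cos(j arccos x) for j ≥ 1 is the orthonormal normalization matching the sup-norms on the slide):

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.cos(np.pi * rng.random(200_000))   # samples from dx / (pi sqrt(1 - x^2))

def cheb(j, x):
    # orthonormal Chebyshev polynomials w.r.t. the Chebyshev measure
    return np.ones_like(x) if j == 0 else np.sqrt(2) * np.cos(j * np.arccos(x))

# empirical Gram matrix: Monte Carlo estimate of the integrals on the slide
G = np.array([[np.mean(cheb(j, x) * cheb(k, x)) for k in range(5)]
              for j in range(5)])
dev = np.max(np.abs(G - np.eye(5)))
print(dev)   # small: G is close to the identity
```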

Legendre polynomials

Legendre polynomials Lj, j = 0, 1, 2, ..., normalized so that

(1/2) ∫_{−1}^{1} Lj(x) Lk(x) dx = δ_{j,k},   ‖Lj‖∞ = √(2j + 1),   j, k ∈ N0.

K = max_{j=0,...,N−1} ‖Lj‖∞ = √(2N − 1), which only leads to the bound

m ≥ C K² s ln³(s) ln(N) ≍ C N s ln³(s) ln(N).

The preconditioned system Qj(x) = v(x) Lj(x) with v(x) = (π/2)^{1/2} (1 − x²)^{1/4} satisfies

∫_{−1}^{1} Qj(x) Qk(x) dx / (π √(1 − x²)) = δ_{j,k},   ‖Qj‖∞ ≤ √3,   j, k ∈ N0.

Stable recovery of polynomials that are s-sparse in the Legendre system from

m ≥ C s ln³(s) ln(N)

samples drawn i.i.d. from the Chebyshev measure dx / (π √(1 − x²)).
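The preconditioning can be verified numerically with NumPy's Legendre module (my own check; grid resolution and degree range are illustrative): multiply the orthonormal Legendre polynomials √(2j+1) Pj by v and confirm the uniform bound √3.

```python
import numpy as np
from numpy.polynomial.legendre import Legendre

xs = np.linspace(-1.0, 1.0, 20001)
v = np.sqrt(np.pi / 2) * (1 - xs**2) ** 0.25         # preconditioner v(x)

# worst sup-norm of Q_j = v * L_j over a fine grid, for degrees up to 50
worst = 0.0
for j in range(51):
    Lj = np.sqrt(2 * j + 1) * Legendre.basis(j)(xs)  # orthonormal Legendre
    worst = max(worst, np.max(np.abs(v * Lj)))
print(worst)   # stays below sqrt(3) ~ 1.732
```

Note the contrast with the unpreconditioned system, whose sup-norms √(2j+1) grow without bound.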


Spherical harmonics

Y_ℓ^k, −k ≤ ℓ ≤ k, k ∈ N0: orthonormal system in L²(S²),

(1/(4π)) ∫_0^{2π} ∫_0^{π} Y_ℓ^k(φ, θ) Ȳ_{ℓ'}^{k'}(φ, θ) sin(θ) dθ dφ = δ_{ℓ,ℓ'} δ_{k,k'}

(φ, θ) ∈ [0, 2π) × [0, π): spherical coordinates,
(x, y, z) = (cos(φ) sin(θ), sin(φ) sin(θ), cos(θ)) ∈ S².

With the ultraspherical polynomials p_n^α,

Y_ℓ^k(φ, θ) = e^{iℓφ} (sin θ)^{|ℓ|} p_{k−|ℓ|}^{|ℓ|}(cos θ),   (φ, θ) ∈ [0, 2π) × [0, π).

Restricted Isometry Property for Spherical Harmonics

L∞-bound: ‖Y_ℓ^k‖∞ ≤ k^{1/2}

Preconditioning I (Krasikov '08)
With w(θ, φ) = |sin(θ)|^{1/2}:

‖w Y_ℓ^k‖∞ ≤ C k^{1/4}.

Preconditioning II (Burq, Dyatlov, Ward, Zworski '12)
With v(θ, φ) = |sin²(θ) cos(θ)|^{1/6}:

‖v Y_ℓ^k‖∞ ≤ C k^{1/6}.

The associated preconditioned random sampling matrix (1/√m) A ∈ C^{m×N}, with sampling points drawn according to ν(dθ, dφ) = v^{−2}(θ, φ) sin(θ) dθ dφ = |tan(θ)|^{1/3} dθ dφ, satisfies the RIP with high probability if

m ≥ C s N^{1/6} ln⁴(N).


Questions

• Can we take into account that ‖ψj‖∞ grows with j, i.e., is not uniformly small?
• How can we combine sparsity with smoothness?

Use weights!


Trigonometric system: smoothness and weights

ψj(t) = e^{2πijt}, j ∈ Z, t ∈ [0,1]
The derivatives satisfy ‖ψ'j‖∞ = 2π|j|, j ∈ Z.

For f(t) = ∑_j xj ψj(t) we have

‖f‖∞ + ‖f'‖∞ = ‖∑_j xj ψj‖∞ + ‖∑_j xj ψ'j‖∞
             ≤ ∑_{j∈Z} |xj| (‖ψj‖∞ + ‖ψ'j‖∞)
             = ∑_{j∈Z} |xj| (1 + 2π|j|) =: ‖x‖ω,1.

Weights model smoothness!

Combine with sparsity (compressibility) → weighted ℓp-spaces with 0 < p ≤ 1.


Weighted norms and weighted sparsity

For a weight ω = (ωj)_{j∈Γ} with ωj ≥ 1, introduce

‖x‖ω,p := (∑_{j∈Γ} |xj|^p ωj^{2−p})^{1/p},   0 < p ≤ 2.

Special cases: ‖x‖ω,1 = ∑_{j∈Γ} |xj| ωj,   ‖x‖ω,2 = ‖x‖2.

Weighted sparsity:

‖x‖ω,0 := ∑_{j : xj ≠ 0} ωj².

x is called weighted s-sparse if ‖x‖ω,0 ≤ s.
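These definitions translate directly into code. A minimal sketch (the function names are mine):

```python
import numpy as np

def weighted_norm(x, w, p):
    # ||x||_{w,p} = (sum_j |x_j|^p w_j^{2-p})^{1/p}, 0 < p <= 2
    return float(np.sum(np.abs(x) ** p * w ** (2 - p)) ** (1 / p))

def weighted_sparsity(x, w):
    # ||x||_{w,0} = sum of w_j^2 over the support of x
    return float(np.sum(w[x != 0] ** 2))

x = np.array([1.0, 0.0, -2.0, 0.0])
w = np.array([1.0, 2.0, 3.0, 4.0])
n1 = weighted_norm(x, w, 1)    # 1*1 + 2*3 = 7
n2 = weighted_norm(x, w, 2)    # equals the plain l2 norm, sqrt(5)
s0 = weighted_sparsity(x, w)   # 1^2 + 3^2 = 10
print(n1, n2, s0)
```

The p = 2 case reproducing the unweighted ℓ2 norm is the "special case" noted on the slide, since the weight exponent 2 − p vanishes.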

Weighted best approximation

Weighted best s-term approximation error:

σs(x)ω,p := inf_{z : ‖z‖ω,0 ≤ s} ‖x − z‖ω,p

Weighted quasi-best s-term approximation error of x:
Consider the nonincreasing rearrangement of (xj ωj^{−1})_{j∈Γ}, i.e., a permutation π such that

|x_{π(1)}| ω_{π(1)}^{−1} ≥ |x_{π(2)}| ω_{π(2)}^{−1} ≥ · · ·

Choose the largest k such that ∑_{ℓ=1}^{k} ω_{π(ℓ)}² ≤ s, set S = {π(1), ..., π(k)}, and define

σ̃s(x)ω,p := ‖x − xS‖ω,p,

where (xS)j = xj for j ∈ S and (xS)j = 0 for j ∉ S. (Note that ‖xS‖ω,0 ≤ s by construction.)
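The quasi-best support construction is a simple prefix rule on the rearranged sequence. A self-contained sketch (function name is mine):

```python
import numpy as np

def quasi_best_support(x, w, s):
    # sort indices by |x_j| / w_j in nonincreasing order, then take the longest
    # prefix whose weighted-sparsity budget, the sum of w_j^2, stays <= s
    order = np.argsort(-np.abs(x) / w)
    csum = np.cumsum(w[order] ** 2)
    k = int(np.searchsorted(csum, s, side="right"))  # largest k with csum[k-1] <= s
    return set(order[:k].tolist())

x = np.array([5.0, 3.0, 2.0, 1.0])
w = np.array([1.0, 2.0, 1.0, 3.0])
# ratios |x_j|/w_j = [5, 1.5, 2, 1/3] -> rearrangement 0, 2, 1, 3;
# cumulative w^2 along it: 1, 2, 6, 15 -> budget s = 6 keeps {0, 2, 1}
S = quasi_best_support(x, w, s=6)
print(S)
```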


Weighted Stechkin estimate

Theorem
For a weight ω, a vector x, 0 < p < q ≤ 2 and s > ‖ω‖∞²,

σs(x)ω,q ≤ σ̃s(x)ω,q ≤ (s − ‖ω‖∞²)^{1/q−1/p} ‖x‖ω,p.

If s ≥ 2‖ω‖∞², say, then

σs(x)ω,q ≤ σ̃s(x)ω,q ≤ C_{p,q} s^{1/q−1/p} ‖x‖ω,p,   C_{p,q} = 2^{1/p−1/q}.

The lower bound on s is natural: otherwise even a single-element set S = {j} with ωj = ‖ω‖∞ would not be allowed as a support set.

Lemma
If s ≥ ‖ω‖∞², then σ̃3s(x)ω,p ≤ σs(x)ω,p.


(Weighted) Compressive Sensing

Recover a weighted s-sparse (or weighted-compressible) vector x from measurements y = Ax, where A ∈ C^{m×N} with m < N.

Weighted ℓ1-minimization:

min_{z∈C^N} ‖z‖ω,1 subject to Az = y

"Noisy" version:

min_{z∈C^N} ‖z‖ω,1 subject to ‖Az − y‖2 ≤ η
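Weighted ℓ1-minimization is still a linear program: the weights only rescale the objective. A minimal sketch for real data (my own illustration, assuming SciPy and a hand-picked weight sequence):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(5)
m, N = 50, 100
w = 1.0 + np.log1p(np.arange(N))            # weights w_j >= 1, growing in j

A = rng.standard_normal((m, N)) / np.sqrt(m)
x = np.zeros(N)
x[[2, 7, 11]] = np.array([1.5, -2.0, 1.0])  # sparse, small-weight support
y = A @ x

# min sum_j w_j |z_j| s.t. Az = y, via the split z = u - v with u, v >= 0
c = np.concatenate([w, w])
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=y, bounds=(0, None))
z = res.x[:N] - res.x[N:]
err = np.max(np.abs(z - x))
print(err)
```

Growing weights penalize large indices, so the program prefers solutions supported where the weights are small, which is exactly the smoothness prior of the earlier slide.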


Weighted restricted isometry property (WRIP)

Definition
The weighted restricted isometry constant δω,s of a matrix A ∈ C^{m×N} is defined to be the smallest constant such that

(1 − δω,s)‖x‖2² ≤ ‖Ax‖2² ≤ (1 + δω,s)‖x‖2²

for all x ∈ C^N with ‖x‖ω,0 = ∑_{j : xj ≠ 0} ωj² ≤ s.

Since ωj ≥ 1 by assumption, the "classical" RIP implies the WRIP: δω,s ≤ δ1,s = δs.

Alternative name: Weighted Uniform Uncertainty Principle (WUUP)


Recovery via weighted ℓ1-minimization

Theorem
Let A ∈ C^{m×N} and s ≥ 2‖ω‖∞² be such that δω,3s < 1/3. For x ∈ C^N and y = Ax + e with ‖e‖2 ≤ η, let x# be a minimizer of

min ‖z‖ω,1 subject to ‖Az − y‖2 ≤ η.

Then

‖x − x#‖ω,1 ≤ C1 σs(x)ω,1 + D1 √s η,
‖x − x#‖2 ≤ C2 σs(x)ω,1/√s + D2 η.

Function interpolation

{ψj}_{j∈Γ}: finite ONS with respect to a probability measure ν.

Given samples y1 = f(t1), ..., ym = f(tm) of f(t) = ∑_{j∈Γ} xj ψj(t), reconstruction amounts to solving

y = Ax

with the sampling matrix A ∈ C^{m×N}, N = |Γ|, given by

A_{ℓ,k} = ψk(tℓ).

Use weighted ℓ1-minimization to recover weighted-sparse or weighted-compressible x when m < |Γ|.

Choose t1, ..., tm i.i.d. at random according to ν in order to analyze the WRIP of the sampling matrix.

Weighted RIP of the random sampling matrix

ψj : D → C, j ∈ Γ, N = |Γ| < ∞: ONS w.r.t. a probability measure ν.
Weight ω with ωj ≥ ‖ψj‖∞.
Sampling points t1, ..., tm taken i.i.d. at random according to ν.
Random sampling matrix A ∈ C^{m×N} with entries A_{ℓ,j} = ψj(tℓ).

Theorem
If

m ≥ C δ^{−2} s ln³(s) ln(N),

then the weighted restricted isometry constant of (1/√m) A satisfies δω,s ≤ δ with probability at least 1 − N^{−ln³ N}.

This generalizes previous results (Candès, Tao – Rudelson, Vershynin – Rauhut) for systems with ‖ψj‖∞ ≤ K for all j ∈ Γ, where the sufficient condition is m ≥ C δ^{−2} K² s ln⁴(N).


Page 52: Interpolation via weighted 1-minimizationBound is optimal as follows from lower bound for Gelfand widths of ‘ p-balls, 0

Abstract weighted function spaces

    A_{ω,p} = {f : f(t) = Σ_{j∈Γ} x_j ψ_j(t), |||f|||_{ω,p} := ‖x‖_{ω,p} < ∞}

If ω_j ≥ ‖ψ_j‖_∞ then

    ‖f‖_∞ ≤ |||f|||_{ω,1}.

If ω_j ≥ ‖ψ_j‖_∞ + ‖ψ′_j‖_∞ (when D ⊂ R) then

    ‖f‖_∞ + ‖f′‖_∞ ≤ |||f|||_{ω,1},

and so on.
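For concreteness, a small sketch of the norm underlying these spaces, assuming the weighted ℓ_p quasi-norm of the Rauhut–Ward framework, ‖x‖_{ω,p} = (Σ_j ω_j^{2−p} |x_j|^p)^{1/p}, which reduces to Σ_j ω_j |x_j| for p = 1:

```python
import numpy as np

def weighted_lp(x, w, p):
    """Weighted l_p quasi-norm ||x||_{w,p} = (sum_j w_j^(2-p) |x_j|^p)^(1/p);
    for p = 1 this is the weighted l_1 norm sum_j w_j |x_j|."""
    x = np.asarray(x, dtype=complex)
    w = np.asarray(w, dtype=float)
    return float(np.sum(w ** (2.0 - p) * np.abs(x) ** p) ** (1.0 / p))
```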


Interpolation via weighted ℓ1-minimization

Theorem
Assume N = |Γ| < ∞, ω_j ≥ ‖ψ_j‖_∞ and 0 < p < 1. Choose t₁, …, t_m i.i.d. at random according to ν, where m ≥ C s log⁴(N) for s ≥ 2‖ω‖_∞². Then with probability at least 1 − N^{−ln³ N} the following holds for each f ∈ A_{ω,p}. Let x♯ be the solution of

    min_{z ∈ C^N} ‖z‖_{ω,1} subject to Σ_{j∈Γ} z_j ψ_j(t_ℓ) = f(t_ℓ), ℓ = 1, …, m,

and set f♯(t) = Σ_{j∈Γ} x♯_j ψ_j(t). Then

    ‖f − f♯‖_∞ ≤ |||f − f♯|||_{ω,1} ≤ C₁ s^{1−1/p} |||f|||_{ω,p},
    ‖f − f♯‖_{L²_ν} ≤ C₂ s^{1/2−1/p} |||f|||_{ω,p}.
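The equality-constrained weighted ℓ1 program in the theorem can be posed as a linear program in the real-valued case via the standard split z = u − v. A minimal sketch (the use of scipy.optimize.linprog is a choice made here, not part of the talk):

```python
import numpy as np
from scipy.optimize import linprog

def weighted_l1_interpolate(A, y, w):
    """Solve min sum_j w_j |z_j| subject to A z = y (real-valued case)
    as a linear program with z = u - v, u >= 0, v >= 0."""
    m, N = A.shape
    c = np.concatenate([w, w])        # objective: sum_j w_j (u_j + v_j)
    A_eq = np.hstack([A, -A])         # encodes A (u - v) = y
    res = linprog(c, A_eq=A_eq, b_eq=y,
                  bounds=[(0, None)] * (2 * N), method="highs")
    return res.x[:N] - res.x[N:]
```

Since the true coefficient vector x is feasible, the minimizer's weighted ℓ1 norm can never exceed that of x; this is what the proof of the error bounds exploits.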


Quasi-interpolation in infinite-dimensional spaces

|Γ| = ∞, lim_{|j|→∞} ω_j = ∞ and ω_j ≥ ‖ψ_j‖_∞.

Theorem
Let f ∈ A_{ω,p} for some 0 < p < 1, and set Γ_s = {j ∈ Γ : ω_j² ≤ s/2} for some s. Choose t₁, …, t_m i.i.d. at random according to ν, where m ≥ C s log⁴(|Γ_s|). With η = |||f|||_{ω,p} / √s, let x♯ be the solution to

    min_{z ∈ C^{Γ_s}} ‖z‖_{ω,1} subject to ‖(f(t_ℓ) − Σ_{j∈Γ_s} z_j ψ_j(t_ℓ))_{ℓ=1}^m‖₂ ≤ √m η,

and put f♯(t) = Σ_{j∈Γ_s} x♯_j ψ_j(t). Then with probability ≥ 1 − N^{−ln³(N)},

    ‖f − f♯‖_∞ ≤ |||f − f♯|||_{ω,1} ≤ C₁ s^{1−1/p} |||f|||_{ω,p},
    ‖f − f♯‖_{L²_ν} ≤ C₂ s^{1/2−1/p} |||f|||_{ω,p}.
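The finite index set Γ_s is determined directly by the weights. A small sketch (the weights ω_j = 1 + |j| in the example are an illustrative choice, matching the trigonometric experiments below):

```python
import numpy as np

def truncation_set(indices, weights, s):
    """Return Gamma_s = {j : omega_j^2 <= s/2}, the finite index set
    on which the quasi-interpolant is computed."""
    indices = np.asarray(indices)
    weights = np.asarray(weights, dtype=float)
    return indices[weights ** 2 <= s / 2.0]

j = np.arange(-50, 51)
gamma_s = truncation_set(j, 1.0 + np.abs(j), 50)  # omega_j = 1 + |j|, s = 50
```

Because ω_j → ∞, Γ_s is always finite, so the infinite-dimensional problem reduces to a finite one plus a tail error absorbed in the noise level η.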


Numerical example I for the trigonometric system

Original function: Runge's example f(x) = 1/(1 + 25x²).
Weights: ω_j = 1 + |j|. 20 interpolation points chosen uniformly at random from [−1, 1].

[Figure: the original function and the least-squares, unweighted ℓ1, and weighted ℓ1 reconstructions, each shown with its residual error.]


Numerical example II for the trigonometric system

Original function: f(x) = |x|.
Weights: ω_j = 1 + |j|. 20 interpolation points chosen uniformly at random from [−1, 1].

[Figure: the original function and the least-squares, unweighted ℓ1, and weighted ℓ1 reconstructions, each shown with its residual error.]


Numerical example for Chebyshev polynomials

Original function: Runge's example f(x) = 1/(1 + 25x²).
Weights: ω_j = 1 + j. 20 interpolation points chosen i.i.d. at random according to the Chebyshev measure dν(x) = dx / (π√(1 − x²)).

[Figure: the original function and the least-squares, unweighted ℓ1, and weighted ℓ1 reconstructions, each shown with its residual error.]


Numerical example for Legendre polynomials

Original function: Runge's example f(x) = 1/(1 + 25x²).
Weights: ω_j = 1 + j. 20 interpolation points chosen i.i.d. at random according to the Chebyshev measure.

[Figure: the original function and the least-squares, unweighted ℓ1, and weighted ℓ1 reconstructions, each shown with its residual error.]


Back to spherical harmonics

Y_ℓ^k, −k ≤ ℓ ≤ k, k ∈ N₀: spherical harmonics.

Recall the L∞-bound: ‖Y_ℓ^k‖_∞ ≤ k^{1/2}.

Preconditioned L∞-bound for v(θ, φ) = |sin²(θ) cos(θ)|^{1/6}:

    ‖v Y_ℓ^k‖_∞ ≤ C k^{1/6}.

Weighted RIP: with weights ω_{k,ℓ} ≥ k^{1/6}, the preconditioned random sampling matrix (1/√m) A ∈ C^{m×N} satisfies δ_{ω,s} ≤ δ with high probability if

    m ≥ C δ^{−2} s log⁴(N).
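A quick numerical illustration of the effect of the preconditioner, restricted to order ℓ = 0, where Y_k^0(θ) = √((2k+1)/(4π)) P_k(cos θ); this reduction and the evaluation grid are choices made here, not part of the talk:

```python
import numpy as np
from numpy.polynomial import legendre

def y_k0(k, theta):
    """Order-zero spherical harmonic Y_k^0(theta) =
    sqrt((2k+1)/(4*pi)) * P_k(cos(theta))."""
    c = np.zeros(k + 1)
    c[k] = 1.0                        # select the Legendre polynomial P_k
    return np.sqrt((2 * k + 1) / (4 * np.pi)) * legendre.legval(np.cos(theta), c)

theta = np.linspace(0.0, np.pi, 4001)
v = np.abs(np.sin(theta) ** 2 * np.cos(theta)) ** (1.0 / 6.0)  # preconditioner
k = 60
plain = np.max(np.abs(y_k0(k, theta)))        # peaks at the pole, size ~ k^{1/2}
precond = np.max(v * np.abs(y_k0(k, theta)))  # damped peak, size ~ k^{1/6}
```

Since |P_k| ≤ 1 on [−1, 1] with P_k(1) = 1, the unpreconditioned maximum equals √((2k+1)/(4π)) at the pole θ = 0, while v vanishes there and flattens the peak.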


Comparison of error bounds

Error bound for reconstruction of f ∈ A_{ω,p} from m ≥ C s ln³(s) ln(N) samples drawn i.i.d. at random from the measure ν(dθ, dφ) = |tan(θ)|^{1/3} dθ dφ via weighted ℓ1-minimization:

    ‖f − f♯‖_∞ ≤ |||f − f♯|||_{ω,1} ≤ C s^{1−1/p} |||f|||_{ω,p},  0 < p < 1.

Compare with the error estimate for unweighted ℓ1-minimization: if m ≥ C N^{1/6} s ln³(s) ln(N), then

    |||f − f♯|||₁ ≤ C s^{1−1/p} |||f|||_p,  0 < p < 1.


Numerical experiments for sparse spherical harmonic recovery

Original function: f(θ, φ) = 1/(|θ|² + 1/10).

[Figure: the original function and its reconstructions via unweighted ℓ1-minimization and via weighted ℓ1-minimization with weights ω_{k,ℓ} = k^{1/6} and ω_{k,ℓ} = k^{1/2}.]


High-dimensional function interpolation

Tensorized Chebyshev polynomials on D = [−1, 1]^d:

    C_k(t) = C_{k₁}(t₁) C_{k₂}(t₂) ⋯ C_{k_d}(t_d),  k ∈ N₀^d,

with C_k the L²-normalized Chebyshev polynomials on [−1, 1], so that

    ∫_{[−1,1]^d} C_j(t) C_k(t) dν(t) = δ_{j,k},  j, k ∈ N₀^d,

for the product Chebyshev measure dν(t) = ∏_{i=1}^d dt_i / (π√(1 − t_i²)).

Expansions f(t) = Σ_{k∈N₀^d} x_k C_k(t) with ‖x‖_p < ∞ for 0 < p < 1 and large d (even d = ∞) appear in parametric PDEs (Cohen, DeVore, Schwab 2011, …).
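A sketch of evaluating the tensorized basis and the associated weights, using the normalization C_0 = 1, C_j = √2 T_j (with T_j the classical Chebyshev polynomials) for the L²(ν)-normalization:

```python
import numpy as np

def cheb_tensor(k, t):
    """Tensorized L2-normalized Chebyshev polynomial C_k(t) on [-1,1]^d,
    with C_0 = 1 and C_j = sqrt(2) * T_j for j >= 1."""
    val = 1.0
    for kj, tj in zip(k, t):
        Tj = np.cos(kj * np.arccos(tj))          # T_{kj}(tj)
        val *= Tj if kj == 0 else np.sqrt(2.0) * Tj
    return val

def weight(k):
    """omega_k = 2^(||k||_0 / 2), which equals the sup-norm of C_k."""
    return 2.0 ** (np.count_nonzero(k) / 2.0)
```

Note that the weight depends only on the number of active coordinates ‖k‖₀, not on the dimension d, which is what removes the 2^d factor below.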


(Weighted) sparse recovery for tensorized Chebyshev polynomials

L∞-bound: ‖C_k‖_∞ = 2^{‖k‖₀/2}.

Curse of dimension: the classical RIP bound requires

    m ≥ C 2^d s ln³(s) ln(N).

With weights ω_k = 2^{‖k‖₀/2}, the weighted RIP bound requires only

    m ≥ C s ln³(s) ln(N).

Approximate recovery then requires x ∈ ℓ_{ω,p}!


Comparison: classical interpolation vs. weighted ℓ1-minimization

Classical bound for an interpolant f̃:

    ‖f − f̃‖_∞ ≤ C m^{−r/d} ‖f‖_{C^r}.

Interpolation via weighted ℓ1-minimization:

    ‖f − f♯‖_∞ ≤ C (m / ln⁴(m))^{1−1/p} |||f|||_{ω,p},  0 < p < 1.

The ℓ1 rate is better if 1/p − 1 > r/d, i.e.,

    p < 1/(r/d + 1).

For instance, when r = d, then p < 1/2 is sufficient.
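The crossover condition can be checked mechanically; a tiny hypothetical helper, merely restating p < 1/(r/d + 1):

```python
def p_threshold(r, d):
    """Weighted l1-interpolation beats the classical rate m^(-r/d)
    whenever 1/p - 1 > r/d, i.e. for p strictly below this threshold."""
    return 1.0 / (r / d + 1.0)
```

For example, r = d always gives the threshold 1/2, independent of the dimension.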


Advertisement

S. Foucart, H. Rauhut: A Mathematical Introduction to Compressive Sensing. Applied and Numerical Harmonic Analysis, Birkhäuser, 2013.


Rupert Lasser,

All the best for your retirement!
