Interpolation via weighted ℓ1-minimization
Holger Rauhut, RWTH Aachen University
Lehrstuhl C für Mathematik (Analysis)
Mathematical Analysis and Applications Workshop in honor of Rupert Lasser
Helmholtz Zentrum München, September 20, 2013
Joint work with Rachel Ward (University of Texas at Austin)
Function interpolation
Aim: Given a function f : D → C on a domain D, reconstruct or interpolate f from sample values f(t1), ..., f(tm).
Approaches
• (Linear) polynomial interpolation
  • assumes (classical) smoothness in order to achieve error rates
  • works with special interpolation points (e.g. Chebyshev points)
• Compressive sensing
  • reconstruction is nonlinear
  • assumes sparsity (or compressibility) of a series expansion in terms of a certain basis (e.g. trigonometric bases)
  • fewer (random!) sampling points than degrees of freedom
This talk: Combine sparsity and smoothness!
Holger Rauhut, RWTH Aachen University Interpolation via Weighted `1-minimization 2
A classical interpolation result
C^r([0,1]^d): r-times continuously differentiable periodic functions.
There exist sampling points t1, ..., tm and a linear reconstruction operator R : C^m → C^r([0,1]^d) such that for every f ∈ C^r([0,1]^d) the approximation f♯ = R(f(t1), ..., f(tm)) satisfies the optimal error bound
‖f − f♯‖∞ ≤ C m^{−r/d} ‖f‖_{C^r}.
Curse of dimension: one needs about m ≥ C_f ε^{−d/r} samples to achieve error ε < 1.
Exponential scaling in d cannot be avoided using only smoothness (DeVore, Howard, Micchelli 1989 – Novak, Wozniakowski 2009).
Sparse representation of functions
D: domain endowed with a probability measure ν.
ψj : D → C, j ∈ Γ (finite or infinite index set).
{ψj}j∈Γ orthonormal system:
∫_D ψj(t) ψ̄k(t) dν(t) = δj,k,  j, k ∈ Γ.
We consider functions of the form
f(t) = Σ_{j∈Γ} xj ψj(t).
f is called s-sparse if ‖x‖0 := #{ℓ : xℓ ≠ 0} ≤ s, and compressible if the best s-term approximation error
σs(f)q := σs(x)q := inf_{z : ‖z‖0 ≤ s} ‖x − z‖q  (0 < q ≤ ∞)
is small.
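The best s-term approximation error is easy to compute directly: the infimum is attained by keeping the s largest-magnitude entries of x. A minimal sketch (Python/numpy assumed; not part of the talk):

```python
import numpy as np

def best_s_term_error(x, s, q):
    """sigma_s(x)_q: error of best s-term approximation in the l_q norm.

    The infimum over all s-sparse z is attained by keeping the s
    largest-magnitude entries of x and zeroing out the rest.
    """
    idx = np.argsort(np.abs(x))[::-1]   # indices sorted by decreasing magnitude
    tail = x.copy()
    tail[idx[:s]] = 0                   # remove the s largest entries
    if q == np.inf:
        return np.max(np.abs(tail))
    return np.sum(np.abs(tail) ** q) ** (1.0 / q)

x = np.array([5.0, -0.1, 0.2, 3.0, 0.05])
print(best_s_term_error(x, 2, 1))       # tail (0.1, 0.2, 0.05), so approx. 0.35
```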
Fourier Algebra and Compressibility
Fourier algebra Ap = {f ∈ C[0,1] : |||f|||p < ∞}, 0 < p ≤ 1, where
|||f|||p := ‖x‖p = (Σ_{j∈Z} |xj|^p)^{1/p},  f(t) = Σ_{j∈Z} xj ψj(t).
Motivating example: D = [0,1], ν the Lebesgue measure,
ψj(t) = e^{2πij·t},  t ∈ [0,1]^d,  j ∈ Z^d.
Compressibility via the Stechkin estimate:
σs(f)q = σs(x)q ≤ s^{1/q−1/p} ‖x‖p = s^{1/q−1/p} |||f|||p.
Since ‖f‖∞ := sup_{t∈[0,1]} |f(t)| ≤ |||f|||1, the best s-term approximation f0 = Σ_{j∈S} xj ψj, |S| = s, satisfies
‖f − f0‖∞ ≤ s^{1−1/p} |||f|||p.
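The Stechkin estimate σs(x)q ≤ s^{1/q−1/p} ‖x‖p (with constant 1 for q > p) can be checked numerically; a small self-contained sketch (Python/numpy assumed, random compressible vector as an illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

def sigma_s(x, s, q):
    """Best s-term approximation error in l_q: drop the s largest magnitudes."""
    tail = np.sort(np.abs(x))[:-s] if s > 0 else np.abs(x)  # N - s smallest entries
    return np.sum(tail ** q) ** (1.0 / q)

# a random compressible vector; check sigma_s(x)_q <= s^(1/q - 1/p) ||x||_p
x = rng.standard_normal(200) * rng.random(200) ** 4
p, q = 0.5, 2.0
norm_p = np.sum(np.abs(x) ** p) ** (1.0 / p)
for s in (1, 5, 20, 100):
    assert sigma_s(x, s, q) <= s ** (1.0 / q - 1.0 / p) * norm_p
```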
Sampling
Task: reconstruct a sparse or compressible
f(t) = Σ_{j∈Γ} xj ψj(t)
from samples f(t1), ..., f(tm) with given sampling points t1, ..., tm ∈ D.
With yℓ = f(tℓ), ℓ = 1, ..., m, and sampling matrix
Aℓ,j = ψj(tℓ),  ℓ = 1, ..., m;  j ∈ Γ,
we can write y = Ax.
Aim: minimal number m of required samples, m ≪ N = |Γ|.
This leads to an underdetermined linear system.
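For the trigonometric system the sampling matrix can be assembled directly; a minimal sketch (Python/numpy assumed; 1-D case with frequencies Γ = {−n, ..., n}, an illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(0)

n, m = 10, 6                                  # frequencies -n..n, m sample points
freqs = np.arange(-n, n + 1)                  # index set Gamma, N = |Gamma| = 2n + 1
t = rng.random(m)                             # i.i.d. uniform points in [0, 1]

# sampling matrix A_{l,j} = psi_j(t_l) = exp(2*pi*i*j*t_l), shape m x N
A = np.exp(2j * np.pi * np.outer(t, freqs))

# a 2-sparse coefficient vector x and its samples y = A x
x = np.zeros(len(freqs), dtype=complex)
x[freqs == 3] = 1.0
x[freqs == -7] = 0.5j
y = A @ x

# consistency: y_l equals f(t_l) = sum_j x_j psi_j(t_l)
f = lambda s: np.exp(2j * np.pi * 3 * s) + 0.5j * np.exp(-2j * np.pi * 7 * s)
assert np.allclose(y, f(t))
```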
Detour: Compressive sensing
Reconstruction of x ∈ C^N from y = Ax with A ∈ C^{m×N}, m ≪ N.
Ingredients:
• Sparsity / Compressibility
• Efficient reconstruction algorithms
• Randomness / Random matrices
Applications in Signal / Image Processing:
Radar, Magnetic Resonance Imaging, Optics, Statistics, Astronomy, ...
Reconstruction via ℓ1-minimization
ℓ0-minimization
min ‖z‖0 subject to Az = y
is NP-hard.
Convex relaxation: ℓ1-minimization
min ‖z‖1 subject to Az = y
Version for noisy data:
min ‖z‖1 subject to ‖Az − y‖2 ≤ η.
Alternatives: Orthogonal Matching Pursuit, Iterative Hard Thresholding, CoSaMP, Iteratively Reweighted Least Squares, ...
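For real data, equality-constrained ℓ1-minimization is a linear program: minimize Σj tj subject to −t ≤ z ≤ t and Az = y. A sketch using scipy.optimize.linprog (the Gaussian matrix and the dimensions are illustrative choices, not from the talk):

```python
import numpy as np
from scipy.optimize import linprog

def l1_min(A, y):
    """min ||z||_1 s.t. Az = y, via the LP  min sum(t), -t <= z <= t, Az = y."""
    m, N = A.shape
    c = np.concatenate([np.zeros(N), np.ones(N)])   # objective: sum of t
    I = np.eye(N)
    A_ub = np.block([[I, -I], [-I, -I]])            # z - t <= 0 and -z - t <= 0
    b_ub = np.zeros(2 * N)
    A_eq = np.hstack([A, np.zeros((m, N))])         # A z = y
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
                  bounds=[(None, None)] * N + [(0, None)] * N)
    assert res.status == 0
    return res.x[:N]

rng = np.random.default_rng(0)
m, N, s = 20, 40, 3
A = rng.standard_normal((m, N)) / np.sqrt(m)
x = np.zeros(N)
x[rng.choice(N, s, replace=False)] = rng.standard_normal(s)
z = l1_min(A, A @ x)
assert np.max(np.abs(z - x)) < 1e-5                 # exact recovery of the 3-sparse x
```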
Restricted Isometry Property (RIP)
Recovery guarantees, Error Estimates?
Definition
The restricted isometry constant δs of a matrix A ∈ C^{m×N} is defined as the smallest δs such that
(1 − δs)‖x‖2² ≤ ‖Ax‖2² ≤ (1 + δs)‖x‖2²
for all s-sparse x ∈ C^N.
This requires that all s-column submatrices of A are well-conditioned.
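For tiny matrices δs can be computed exactly by enumerating all supports of size s, since δs = max over supports S of the extreme eigenvalue deviation of the Gram matrix A_S* A_S from the identity. A brute-force sketch (Python/numpy assumed; the test matrices are illustrative):

```python
import numpy as np
from itertools import combinations

def rip_constant(A, s):
    """Exact delta_s by brute force over all supports of size s."""
    N = A.shape[1]
    delta = 0.0
    for S in combinations(range(N), s):
        G = A[:, S].conj().T @ A[:, S]         # s x s Gram matrix of the submatrix
        eig = np.linalg.eigvalsh(G)
        delta = max(delta, abs(eig[0] - 1), abs(eig[-1] - 1))
    return delta

# orthonormal columns act isometrically on every sparse vector: delta_s = 0
Q, _ = np.linalg.qr(np.random.default_rng(0).standard_normal((8, 5)))
assert rip_constant(Q, 2) < 1e-12

# duplicating a column makes one 2-column submatrix singular: delta_2 = 1
B = np.hstack([Q, Q[:, :1]])
assert abs(rip_constant(B, 2) - 1.0) < 1e-12
```

The exponential cost in s is exactly why the RIP cannot be certified this way at scale, motivating the random-matrix results below.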
Theorem
Assume that the restricted isometry constant of A ∈ C^{m×N} satisfies δ2s < 4/√41 ≈ 0.6246. Let x ∈ C^N, and assume that noisy data are observed, y = Ax + η with ‖η‖2 ≤ σ. Let x♯ be a solution of
min_z ‖z‖1 such that ‖Az − y‖2 ≤ σ.
Then
‖x − x♯‖2 ≤ C σs(x)1/√s + Dσ
and
‖x − x♯‖1 ≤ C σs(x)1 + D√s σ
for constants C, D > 0 that depend only on δ2s.
Matrices satisfying the RIP
Open problem: give explicit matrices A ∈ C^{m×N} with small δ2s ≤ 0.62 for “large” s.
Goal: δs ≤ δ if
m ≥ Cδ s ln^α(N)
for constants Cδ and α.
Deterministic matrices are known for which m ≥ Cδ,k s² suffices if N ≤ m^k.
Way out: consider random matrices.
RIP for Gaussian and Bernoulli matrices
Gaussian: entries of A independent N(0,1) random variables.
Bernoulli: entries of A independent Bernoulli ±1 distributed random variables.
Theorem
Let A ∈ R^{m×N} be a Gaussian or Bernoulli random matrix and assume
m ≥ C δ^{−2} (s ln(eN/s) + ln(2ε^{−1}))
for a universal constant C > 0. Then with probability at least 1 − ε the restricted isometry constant of (1/√m)A satisfies δs ≤ δ.
Consequence: recovery via ℓ1-minimization with probability exceeding 1 − e^{−cm} provided
m ≥ C s ln(eN/s).
The bound is optimal, as follows from lower bounds for Gelfand widths of ℓp-balls, 0 < p ≤ 1 (Gluskin, Garnaev 1984 — Foucart, Pajor, R, Ullrich 2010).
Back to Sampling
(ψj)j∈Γ bounded orthonormal system:
max_{j∈Γ} ‖ψj‖∞ ≤ K.
Sampling points t1, ..., tm are chosen i.i.d. according to the orthogonalization measure ν.
The sampling matrix A with entries Aℓ,j = ψj(tℓ) is then a random matrix.
Theorem (Candès, Tao ’06 – Rudelson, Vershynin ’06 – R ’08/’10)
If m ≥ C K² δ^{−2} s max{ln³(s) ln(N), ln(ε^{−1})}, then the restricted isometry constant of (1/√m)A satisfies δs ≤ δ with probability at least 1 − ε.
This implies stable recovery of all s-sparse f from m ≥ C_K s ln⁴(N) random samples via ℓ1-minimization with high probability.
Trigonometric Polynomials
D = [0,1]^d, ν: Lebesgue measure,
ψj(t) = e^{2πij·t},  j ∈ Z^d,  t ∈ [0,1]^d.
Boundedness constant K = 1.
Exact recovery of s-sparse trigonometric polynomials from m ≥ C s ln³(s) ln(N) i.i.d. samples uniformly distributed on [0,1]^d via ℓ1-minimization.
Error estimates for general f (Fourier coefficients supported on Γ), with reconstruction f♯:
‖f − f♯‖2 ≤ C σs(f)1/√s ≤ C s^{1/2−1/p} |||f|||p,
‖f − f♯‖∞ ≤ C s^{1−1/p} |||f|||p,
‖f − f♯‖∞ ≤ C (m / (ln³(m) ln(N)))^{1−1/p} |||f|||p.
Chebyshev polynomials
Chebyshev polynomials Cj, j = 0, 1, 2, ...:
∫_{−1}^{1} Cj(x) Ck(x) dx/(π√(1−x²)) = δj,k,  j, k ∈ N0,
with ‖C0‖∞ = 1 and ‖Cj‖∞ = √2 for j ≥ 1.
Stable recovery of polynomials that are s-sparse in the Chebyshev system from
m ≥ C s ln³(s) ln(N)
samples drawn i.i.d. from the Chebyshev measure dx/(π√(1−x²)).
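The orthonormality and the sup-norm bounds ‖C0‖∞ = 1, ‖Cj‖∞ = √2 are easy to verify numerically. A sketch using Gauss–Chebyshev nodes, for which the n-point rule (1/n)Σ g(xi) integrates polynomials of degree < 2n exactly against the Chebyshev measure (numpy assumed; degree and node count are illustrative):

```python
import numpy as np

deg, n = 5, 16                              # check C_0..C_5 with 16 quadrature nodes
nodes = np.cos((2 * np.arange(1, n + 1) - 1) * np.pi / (2 * n))  # Gauss-Chebyshev nodes

def C(j, x):
    """L2(Chebyshev measure)-normalized Chebyshev polynomial: C_0 = 1, C_j = sqrt(2) T_j."""
    T = np.cos(j * np.arccos(x))
    return T if j == 0 else np.sqrt(2) * T

# Gram matrix under dx / (pi sqrt(1 - x^2)), computed by exact quadrature
G = np.array([[np.mean(C(j, nodes) * C(k, nodes)) for k in range(deg + 1)]
              for j in range(deg + 1)])
assert np.allclose(G, np.eye(deg + 1), atol=1e-12)

# sup-norm bounds: ||C_0||_inf = 1 and ||C_j||_inf = sqrt(2), attained at x = 1
grid = np.linspace(-1, 1, 2001)
assert np.isclose(np.max(np.abs(C(0, grid))), 1.0)
assert all(np.isclose(np.max(np.abs(C(j, grid))), np.sqrt(2)) for j in range(1, deg + 1))
```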
A lower bound on s is natural: otherwise the singleton set S = {j} with ωj = ‖ω‖∞ would not be allowed as a support set.
Lemma
If s ≥ ‖ω‖∞², then σ3s(x)ω,p ≤ σs(x)ω,p.
(Weighted) Compressive Sensing
Recover a weighted s-sparse (or weighted-compressible) vector x from measurements y = Ax, where A ∈ C^{m×N} with m < N.
Weighted ℓ1-minimization:
min_{z∈C^N} ‖z‖ω,1 subject to Az = y
“Noisy” version:
min_{z∈C^N} ‖z‖ω,1 subject to ‖Az − y‖2 ≤ η
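In the real case, weighted ℓ1-minimization is the same linear program as before with the weights entering the objective: minimize Σj ωj tj. A sketch (numpy/scipy assumed; the toy 1×2 system is an illustration, not from the talk), showing that the minimizer puts mass on low-weight coordinates:

```python
import numpy as np
from scipy.optimize import linprog

def weighted_l1_min(A, y, w):
    """min sum_j w_j |z_j|  s.t.  A z = y, as an LP in the variables (z, t)."""
    m, N = A.shape
    c = np.concatenate([np.zeros(N), np.asarray(w, dtype=float)])
    I = np.eye(N)
    A_ub = np.block([[I, -I], [-I, -I]])           # encodes |z_j| <= t_j
    b_ub = np.zeros(2 * N)
    A_eq = np.hstack([A, np.zeros((m, N))])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
                  bounds=[(None, None)] * N + [(0, None)] * N)
    assert res.status == 0
    return res.x[:N]

# one equation, two unknowns: z_1 + z_2 = 1.  With weights (1, 3) the
# objective |z_1| + 3 |z_2| is minimized by putting all mass on the
# low-weight coordinate.
A = np.array([[1.0, 1.0]])
z = weighted_l1_min(A, np.array([1.0]), w=[1.0, 3.0])
assert np.allclose(z, [1.0, 0.0])
```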
Weighted restricted isometry property (WRIP)
Definition
The weighted restricted isometry constant δω,s of a matrix A ∈ C^{m×N} is defined to be the smallest constant such that
(1 − δω,s)‖x‖2² ≤ ‖Ax‖2² ≤ (1 + δω,s)‖x‖2²
for all x ∈ C^N with ‖x‖ω,0 := Σ_{j : xj ≠ 0} ωj² ≤ s.
Since ωj ≥ 1 by assumption, the “classical” RIP implies the WRIP: δω,s ≤ δ1,s = δs.
Alternative name: Weighted Uniform Uncertainty Principle (WUUP).
Recovery via weighted ℓ1-minimization
Theorem
Let A ∈ C^{m×N} and s ≥ 2‖ω‖∞² be such that δω,3s < 1/3. For x ∈ C^N and y = Ax + e with ‖e‖2 ≤ η, let x♯ be a minimizer of
min ‖z‖ω,1 subject to ‖Az − y‖2 ≤ η.
Then
‖x − x♯‖ω,1 ≤ C1 σs(x)ω,1 + D1 √s η,
‖x − x♯‖2 ≤ C2 σs(x)ω,1/√s + D2 η.
Function interpolation
{ψj}j∈Γ: finite ONS with respect to a probability measure ν.
Given samples y1 = f(t1), ..., ym = f(tm) of f(t) = Σ_{j∈Γ} xj ψj(t), reconstruction amounts to solving
y = Ax
with sampling matrix A ∈ C^{m×N}, N = |Γ|, given by
Aℓ,k = ψk(tℓ).
Use weighted ℓ1-minimization to recover a weighted-sparse or weighted-compressible x when m < |Γ|.
Choose t1, ..., tm i.i.d. at random according to ν in order to analyze the WRIP of the sampling matrix.
Weighted RIP of random sampling matrix
ψj : D → C, j ∈ Γ, N = |Γ| < ∞, ONS w.r.t. a probability measure ν. Weight ω with ωj ≥ ‖ψj‖∞.
Sampling points t1, ..., tm taken i.i.d. at random according to ν. Random sampling matrix A ∈ C^{m×N} with entries Aℓ,j = ψj(tℓ).
Theorem
If
m ≥ C δ^{−2} s ln³(s) ln(N),
then the weighted restricted isometry constant of (1/√m)A satisfies δω,s ≤ δ with probability at least 1 − N^{−ln³ N}.
This generalizes previous results (Candès, Tao – Rudelson, Vershynin – Rauhut) for systems with ‖ψj‖∞ ≤ K for all j ∈ Γ, where the sufficient condition is m ≥ C δ^{−2} K² s log⁴ N.
Abstract weighted function spaces
Aω,p = {f : f(t) = Σ_{j∈Γ} xj ψj(t),  |||f|||ω,p := ‖x‖ω,p < ∞}
If ωj ≥ ‖ψj‖∞ then
‖f‖∞ ≤ |||f|||ω,1.
If ωj ≥ ‖ψj‖∞ + ‖ψ′j‖∞ (when D ⊂ R) then
‖f‖∞ + ‖f′‖∞ ≤ |||f|||ω,1,
and so on...
Interpolation via weighted ℓ1-minimization
Theorem
Assume N = |Γ| < ∞, ωj ≥ ‖ψj‖∞ and 0 < p < 1. Choose t1, ..., tm i.i.d. at random according to ν, where m ≥ C s log⁴(N) for s ≥ 2‖ω‖∞². Then with probability at least 1 − N^{−ln³ N} the following holds for each f ∈ Aω,p. Let x♯ be the solution of
min_{z∈C^N} ‖z‖ω,1 subject to Σ_{j∈Γ} zj ψj(tℓ) = f(tℓ),  ℓ = 1, ..., m,
and set f♯(t) = Σ_{j∈Γ} x♯j ψj(t). Then
‖f − f♯‖∞ ≤ |||f − f♯|||ω,1 ≤ C1 s^{1−1/p} |||f|||ω,p,
‖f − f♯‖L²(ν) ≤ C2 s^{1/2−1/p} |||f|||ω,p.
Quasi-interpolation in infinite-dimensional spaces
|Γ| = ∞, lim_{|j|→∞} ωj = ∞ and ωj ≥ ‖ψj‖∞.
Theorem
Let f ∈ Aω,p for some 0 < p < 1, and set Γs = {j ∈ Γ : ωj² ≤ s/2} for some s, with N = |Γs|. Choose t1, ..., tm i.i.d. at random according to ν, where m ≥ C s log⁴(|Γs|). With η = |||f|||ω,p/√s, let x♯ be the solution to
min_{z∈C^{Γs}} ‖z‖ω,1 subject to ‖(f(tℓ) − Σ_{j∈Γs} zj ψj(tℓ))_{ℓ=1}^{m}‖2 ≤ η√m,
and put f♯(t) = Σ_{j∈Γs} x♯j ψj(t). Then with probability ≥ 1 − N^{−ln³(N)}:
‖f − f♯‖∞ ≤ |||f − f♯|||ω,1 ≤ C1 s^{1−1/p} |||f|||ω,p,
‖f − f♯‖L²(ν) ≤ C2 s^{1/2−1/p} |||f|||ω,p.
Numerical example I for the trigonometric system
Original function: Runge’s example f(x) = 1/(1 + 25x²).
Weights: wj = 1 + |j|. 20 interpolation points chosen uniformly at random from [−1, 1].
[Figure: least squares solution, unweighted l1 minimizer, and weighted l1 minimizer, each shown with its residual error.]
Numerical example II for the trigonometric system
Original function: f(x) = |x|.
Weights: wj = 1 + |j|. 20 interpolation points chosen uniformly at random from [−1, 1].
[Figure: least squares solution, unweighted l1 minimizer, and weighted l1 minimizer, each shown with its residual error.]
Numerical example for Chebyshev polynomials
Original function: f(x) = 1/(1 + 25x²).
Weights: wj = 1 + j. 20 interpolation points chosen i.i.d. at random according to the Chebyshev measure dν(x) = dx/(π√(1−x²)).
[Figure: least squares solution, unweighted l1 minimizer, and weighted l1 minimizer, each shown with its residual error.]
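The Chebyshev experiment can be sketched end to end: sample the Runge function at Chebyshev-distributed random points, build the sampling matrix in the normalized Chebyshev system, and solve weighted ℓ1-minimization as a linear program (numpy/scipy assumed; the truncation level N = 40 and the loose error check are illustrative choices, not from the talk):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)
f = lambda x: 1.0 / (1.0 + 25.0 * x ** 2)            # Runge's example

N, m = 40, 20                                        # truncation level, sample count
w = 1.0 + np.arange(N)                               # weights w_j = 1 + j
t = np.cos(np.pi * rng.random(m))                    # i.i.d. from the Chebyshev measure

def cheb(j, x):                                      # L2-normalized Chebyshev system
    T = np.cos(j * np.arccos(x))
    return T if j == 0 else np.sqrt(2) * T

A = np.column_stack([cheb(j, t) for j in range(N)])  # A_{l,j} = C_j(t_l)
y = f(t)

# weighted l1-minimization  min sum_j w_j |z_j|  s.t.  A z = y,  as an LP
I = np.eye(N)
res = linprog(np.concatenate([np.zeros(N), w]),
              A_ub=np.block([[I, -I], [-I, -I]]), b_ub=np.zeros(2 * N),
              A_eq=np.hstack([A, np.zeros((m, N))]), b_eq=y,
              bounds=[(None, None)] * N + [(0, None)] * N)
assert res.status == 0
x_sharp = res.x[:N]

# the interpolant matches f at the sample points and stays close in between
grid = np.linspace(-1, 1, 1001)
f_sharp = sum(x_sharp[j] * cheb(j, grid) for j in range(N))
assert np.allclose(A @ x_sharp, y, atol=1e-6)
assert np.max(np.abs(f_sharp - f(grid))) < 0.5       # loose, illustrative check
```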
Numerical example for Legendre polynomials
Original function: f(x) = 1/(1 + 25x²).
Weights: wj = 1 + j. 20 interpolation points chosen i.i.d. at random according to the Chebyshev measure.
[Figure: least squares solution, unweighted l1 minimizer, and weighted l1 minimizer, each shown with its residual error.]
Back to spherical harmonics
Y_k^ℓ, −k ≤ ℓ ≤ k, k ∈ N0: spherical harmonics.
Recall the L∞-bound: ‖Y_k^ℓ‖∞ ≤ k^{1/2}.
Preconditioned L∞-bound for v(θ, φ) = |sin²(θ) cos(θ)|^{1/6}:
‖v Y_k^ℓ‖∞ ≤ C k^{1/6}.
Weighted RIP: with weights ωk,ℓ ≥ k^{1/6}, the preconditioned random sampling matrix (1/√m)A ∈ C^{m×N} satisfies δω,s ≤ δ with high probability if
m ≥ C δ^{−2} s log⁴(N).
Comparison of error bounds
Error bound for reconstruction of f ∈ Aω,p from m ≥ C s ln³(s) ln(N) samples drawn i.i.d. at random from the measure ν(dθ, dφ) = |tan(θ)|^{1/3} dθ dφ via weighted ℓ1-minimization:
‖f − f♯‖∞ ≤ |||f − f♯|||ω,1 ≤ C s^{1−1/p} |||f|||ω,p,  0 < p < 1.
Compare to the error estimate for unweighted ℓ1-minimization: if m ≥ C N^{1/6} s ln³(s) ln(N), then
|||f − f♯|||1 ≤ C s^{1−1/p} |||f|||p,  0 < p < 1.
Numerical Experiments for Sparse Spherical Harmonic Recovery
Original function: f(θ, φ) = 1/(|θ|² + 1/10).
[Figure: original function and reconstructions via unweighted l1 and via weighted l1 with ωk,ℓ = k^{1/6} and ωk,ℓ = k^{1/2}.]
High dimensional function interpolation
Tensorized Chebyshev polynomials on D = [−1, 1]^d:
Ck(t) = C_{k1}(t1) C_{k2}(t2) ··· C_{kd}(td),  k ∈ N0^d,
with Ck the L2-normalized Chebyshev polynomials on [−1, 1]. Then
∫_{[−1,1]^d} Ck(t) Cj(t) ∏_{i=1}^{d} dt_i/(π√(1−t_i²)) = δk,j,  j, k ∈ N0^d.
Expansions f(t) = Σ_{k∈N0^d} xk Ck(t) with ‖x‖p < ∞ for 0 < p < 1 and large d (even d = ∞) appear in parametric PDEs (Cohen, DeVore, Schwab 2011, ...).
(Weighted) sparse recovery for tensorized Chebyshev polynomials
L∞-bound: ‖Ck‖∞ = 2^{‖k‖0/2}.
Curse of dimension: the classical RIP bound requires
m ≥ C 2^d s ln³(s) ln(N).
Weights: ωk = 2^{‖k‖0/2}.
Weighted RIP bound:
m ≥ C s ln³(s) ln(N).
Approximate recovery requires x ∈ ℓω,p!
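The bound ‖Ck‖∞ = 2^{‖k‖0/2} is immediate since each normalized univariate factor has sup-norm √2 when kj > 0 (attained at t = 1) and 1 when kj = 0. A quick numerical check for d = 2 (numpy assumed; degree range and grid are illustrative):

```python
import numpy as np
from itertools import product

def cheb(j, x):
    """L2-normalized Chebyshev polynomial on [-1, 1]: C_0 = 1, C_j = sqrt(2) T_j."""
    T = np.cos(j * np.arccos(x))
    return T if j == 0 else np.sqrt(2) * T

grid = np.linspace(-1, 1, 201)             # grid contains t = 1, where the sup is attained
X1, X2 = np.meshgrid(grid, grid)

for k in product(range(4), repeat=2):      # multi-indices k in {0,...,3}^2
    Ck = cheb(k[0], X1) * cheb(k[1], X2)   # tensorized C_k(t) = C_{k1}(t1) C_{k2}(t2)
    nnz = sum(kj > 0 for kj in k)          # ||k||_0
    assert np.isclose(np.max(np.abs(Ck)), 2 ** (nnz / 2))   # weight 2^{||k||_0 / 2}
```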