
Efficient Deterministic Compressed Sensing for Images with Chirps and Reed-Muller Sequences

Kang-Yu (Connie) Ni, Arizona State University, joint with Somantika Datta, Prasun Mahanti, Svetlana Roudenko, Douglas Cochran

Compressed Sensing: Overview

$\Phi x = y$, where $\Phi$ is the $n \times N$ sensing matrix, $y$ the $n \times 1$ vector of data/measurements, and $x$ the $N \times 1$ sparse signal we want to recover, with $k < n \ll N$.

1. $x$: $k$-sparse
2. $\Phi$: RIP (Restricted Isometry Property), e.g. random matrices
3. Practical reconstruction algorithms, e.g. $\ell_1$ minimization

*Candes, Romberg, Tao 2006 and Donoho 2006
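As a concrete illustration of the measurement model and of item 3, the sketch below draws a $k$-sparse $x$, measures it with a random Gaussian $\Phi$ (an illustrative stand-in, not the deterministic matrices of this poster), and recovers it by $\ell_1$ minimization (basis pursuit) cast as a linear program; the sizes and the use of SciPy's LP solver are assumptions.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N, n, k = 128, 40, 5                       # signal length, measurements, sparsity (illustrative)

# k-sparse signal and a random Gaussian sensing matrix (stand-in for a deterministic Phi)
x = np.zeros(N)
x[rng.choice(N, k, replace=False)] = rng.standard_normal(k)
Phi = rng.standard_normal((n, N)) / np.sqrt(n)
y = Phi @ x                                # n << N measurements

# Basis pursuit: min ||x||_1 subject to Phi x = y, written as an LP with x = u - v, u, v >= 0
c = np.ones(2 * N)
A_eq = np.hstack([Phi, -Phi])
sol = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
x_rec = sol.x[:N] - sol.x[N:]
print("recovery error:", np.linalg.norm(x_rec - x))
```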

Sensing Matrix: Deterministic Approach

•  Why deterministic sensing?
   –  Explicit reconstruction algorithm
   –  Efficient storage
   –  Smaller error in reconstruction
•  Existing works (1-d signals)
   –  DeVore – via finite fields
   –  Chirp matrices – Applebaum, Howard, Searle, Calderbank
   –  2nd-order RM sequences – Howard, Calderbank, Searle, Jafarpour
   –  Indyk, Iwen, Herman, … see http://dsp.rice.edu/cs
•  Need a method suitable for images

Statistical Restricted Isometry Property

$\Phi$ is $(k, \epsilon, \delta)$-StRIP if for $k$-sparse $x \in \mathbb{R}^N$,

$(1 - \epsilon)\,\|x\|_2^2 \;\le\; \|\Phi x\|_2^2 \;\le\; (1 + \epsilon)\,\|x\|_2^2$

holds with probability exceeding $1 - \delta$.

*Calderbank, Howard, Jafarpour, 2010
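One way to read the definition is as a Monte Carlo statement over sparse signals. The sketch below estimates how often the two-sided bound holds for random $k$-sparse vectors; the normalized Gaussian $\Phi$ and the parameter values are illustrative assumptions, not the poster's matrices.

```python
import numpy as np

rng = np.random.default_rng(1)
N, n, k, eps = 256, 64, 8, 0.5
Phi = rng.standard_normal((n, N)) / np.sqrt(n)      # illustrative stand-in for a StRIP matrix

trials, held = 2000, 0
for _ in range(trials):
    x = np.zeros(N)
    x[rng.choice(N, k, replace=False)] = rng.standard_normal(k)
    ratio = np.linalg.norm(Phi @ x) ** 2 / np.linalg.norm(x) ** 2
    held += (1 - eps) <= ratio <= (1 + eps)

# Empirical frequency with which the StRIP inequality holds (should exceed 1 - delta)
print(f"bound held in {held / trials:.1%} of trials")
```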

CS with Chirps and RM Sequences

•  2nd-order Reed-Muller functions**:
   $\varphi_{P,b}(a) = \frac{1}{\sqrt{2^m}}\, i^{\,2 b^T a + (Pa)^T a}$,
   where $a, b \in \mathbb{Z}_2^m$ are binary vectors of length $m$ and $P$ is an $m \times m$ binary symmetric matrix.

•  Discrete chirp signal*:
   $\varphi_{r,m}(l) = \frac{1}{\sqrt{n}}\, e^{\frac{2\pi i}{n} m l \,+\, \frac{2\pi i}{n} r l^2}$,
   with $r, m, l \in \mathbb{Z}_n$.

•  Hadamard matrix / Reed-Muller matrix ($P$ is zero-diagonal), example $m = 2$:

   $\begin{pmatrix} 1 & 1 & 1 & 1 \\ 1 & -1 & 1 & -1 \\ 1 & 1 & -1 & -1 \\ 1 & -1 & -1 & 1 \end{pmatrix}$

•  Fourier matrix / chirp matrix, example $n = 3$ (the matrix $U_0$ shown below).

Sensing matrices:

Reed-Muller**: $\Phi = [\,U_{P_1}\ U_{P_2}\ \cdots\ U_{P_{2^{m(m-1)/2}}}\,]$, of size $2^m \times 2^{m(m+1)/2}$

Chirp*: $\Phi = [\,U_{r_1}\ U_{r_2}\ \cdots\ U_{r_n}\,]$, of size $n \times n^2$

*Applebaum et al.   **Howard et al.
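A minimal sketch of the chirp sensing matrix $\Phi = [\,U_{r_1}\ \cdots\ U_{r_n}\,]$ described above, assuming the $1/\sqrt{n}$ factor normalizes columns to unit norm; the small size $n = 7$ is illustrative. The Reed-Muller construction is analogous, with blocks indexed by binary symmetric matrices $P$.

```python
import numpy as np

def chirp_block(n, r):
    """n x n block U_r whose column m is phi_{r,m}(l) = exp(2*pi*i*(m*l + r*l^2)/n) / sqrt(n)."""
    l = np.arange(n).reshape(-1, 1)     # row index l
    m = np.arange(n).reshape(1, -1)     # column index m (frequency)
    return np.exp(2j * np.pi * (m * l + r * l * l) / n) / np.sqrt(n)

n = 7                                                       # illustrative prime size
Phi = np.hstack([chirp_block(n, r) for r in range(n)])      # Phi = [U_0 U_1 ... U_{n-1}], n x n^2
print(Phi.shape)                                            # (7, 49)
print(np.allclose(np.linalg.norm(Phi, axis=0), 1.0))        # unit-norm columns
```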

Chirp and RM Reconstruction Algorithms

Pros: outperform MP in reconstruction error and computational complexity
   –  MP: $O(knN)$
   –  deterministic CS with chirps: $O(k n^2 \log n)$

Cons: not suited for images
   –  256 × 256 image with 10% sparsity: $k = 6{,}554$, $N = 65{,}536$
   –  rule of thumb $n > k \log_2(1 + N/k)$ gives $n \approx 22{,}670$ (checked numerically below), but then $N/n = 65536/22670 \approx 2.89$: the image is not sparse enough to allow much undersampling
   –  the least squares problem becomes too large

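A quick check of the rule-of-thumb figure quoted above (illustrative arithmetic only):

```python
import numpy as np

k, N = 6554, 65536                    # 10% sparsity of a 256 x 256 image
n_min = k * np.log2(1 + N / k)        # rule of thumb: n > k * log2(1 + N / k)
print(round(n_min))                   # ~22,672, i.e. the n ~ 22,670 quoted above
print(N / n_min)                      # ~2.89: only modest undersampling is possible
```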
Results

Sparsified images, reconstruction SNR:

| n/N    | Image, Sparsity | n/k | noiselets | chirp   | RM      |
|--------|-----------------|-----|-----------|---------|---------|
| 25%    | Brain, 7%       | 3.6 | 25.2 dB   | 123 dB  | 119 dB  |
| 12.5%  | Vessel, 5%      | 2.5 | 10.1 dB   | 49.9 dB | 10.6 dB |
| 6.25%  | Man, 2.38%      | 2.6 | 14.5 dB   | 112 dB  | 109 dB  |

Sensing matrices used:
•  $n/N = 25\%$:   $\Phi = [\,U_1\ U_2\ U_3\ U_4\,]$
•  $n/N = 12.5\%$: $\Phi = [\,U_1\ U_2\ \cdots\ U_8\,]$
•  $n/N = 6.25\%$: $\Phi = [\,U_1\ U_2\ \cdots\ U_{16}\,]$

Reconstruction Algorithm

0. Perform an initial approximation, then compute the residual $y_0 = y - A\tilde{z}$.

Repeat 1-3 until the residual is sufficiently small:
1. Detect support
2. Determine coefficients
3. Compute the residual

Construction of Sensing Matrices

•  N = image size (example: $512 \times 512 = 2^{18}$)
•  n = N / 4 (example: $2^{16}$)
•  Sensing matrix: $\Phi = [\,U_1\ \ U_2\ \ U_3\ \ U_4\,]$ (an illustrative construction sketch follows below)
•  Satisfies the Statistical Restricted Isometry Property

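A minimal sketch of how a four-block sensing matrix of this shape can be assembled. It assumes each block is a diagonal modulation of a base unitary block, $U_t = D_{v_t} U_1$, as in the detection step further down; the choice of $U_1$ as a normalized DFT matrix and of random-phase modulation vectors $v_t$ is illustrative, not the poster's specific construction.

```python
import numpy as np

def dft_block(n):
    """Illustrative base unitary block U_1: the normalized n x n DFT matrix."""
    l = np.arange(n)
    return np.exp(2j * np.pi * np.outer(l, l) / n) / np.sqrt(n)

def sensing_matrix(n, num_blocks=4, seed=0):
    """Phi = [U_1 U_2 ... U_T] with U_t = D_{v_t} U_1 (v_t: assumed unit-modulus modulations)."""
    rng = np.random.default_rng(seed)
    U1 = dft_block(n)
    blocks = [U1]
    for _ in range(num_blocks - 1):
        v = np.exp(2j * np.pi * rng.random(n))        # illustrative phase-modulation vector
        blocks.append(np.diag(v) @ U1)
    return np.hstack(blocks)

Phi = sensing_matrix(64)        # 64 x 256, i.e. n = N/4 as above
print(Phi.shape)
```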
0. Initial Best Approximation of Solution

•  Detection of the "bulk" of a signal
•  Based on the energy of the wavelet coefficients concentrating in the upper-left region

$y = \Phi x = [\,U_1\ U_2\ U_3\ U_4\,]\begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix} = U_1 x_1 + U_2 x_2 + U_3 x_3 + U_4 x_4$

$U_1^* y = U_1^* U_1 x_1 + U_1^* U_2 x_2 + U_1^* U_3 x_3 + U_1^* U_4 x_4 \approx x_1$

Acknowledgement

•  This work was partially supported by NSF-DMS FRG grant #0652833, NSF-DUE #0633033, ONR-BRC grant #N00014-08-1-1110.
•  Robert Calderbank, Sina Jafarpour, Stephen Howard, Stephen Searle – discussions of their work in deterministic CS. Justin Romberg – advice about noiselets and $\ell_1$ algorithms. Jim Pipe – guidance about medical imaging, providing MRI images.

Email: [email protected]

2. Determine Coefficients by Least Squares

Solve $Az = y$ by LSQR [Paige & Saunders] (see the sketch below):

$\tilde{z} = \arg\min_z \|Az - y\|_2$, with $A$ of size $n \times k$.

(Block structure: $U_t = D_{v_t} U_1$.)

SNR used to report results: $\mathrm{SNR(dB)} = 10 \log_{10}\!\left[\dfrac{\|x_{\mathrm{actual}}\|_2}{\|x_{\mathrm{actual}} - x_{\mathrm{recon}}\|_2}\right]$
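A minimal sketch of this least-squares step using SciPy's implementation of LSQR; the random real-valued $A$ and the sizes are illustrative stand-ins for the currently selected columns of $\Phi$.

```python
import numpy as np
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(2)
n, d = 64, 10                                   # measurements and current support size (illustrative)
A = rng.standard_normal((n, d)) / np.sqrt(n)    # stand-in for the selected columns of Phi
z_true = rng.standard_normal(d)
y = A @ z_true

z_tilde = lsqr(A, y)[0]                         # z~ = argmin_z ||A z - y||_2 via LSQR
print(np.allclose(z_tilde, z_true))
```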

(Step 0, continued)

•  Hard-threshold $U_1^* y$ to obtain a set of locations, denoted by $\Omega$.
•  Let $A = U_1|_\Omega$ (the columns of $U_1$ indexed by $\Omega$); the initial approximation is $\tilde{z} = A^* y$.
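A sketch of this initialization, assuming the block structure $\Phi = [\,U_1\ U_2\ U_3\ U_4\,]$ from the construction panel; the function and parameter names are illustrative.

```python
import numpy as np

def initial_approximation(U1, y, d):
    """Step 0: hard-threshold U_1^* y, keep the d largest locations, set z~ = A^* y."""
    proxy = U1.conj().T @ y                    # U_1^* y, which approximates x_1
    Omega = np.argsort(np.abs(proxy))[-d:]     # hard-threshold: indices of the d largest entries
    A = U1[:, Omega]                           # A = U_1 restricted to Omega
    z_tilde = A.conj().T @ y                   # initial approximation
    return Omega, A, z_tilde
```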

1. Detect Support by DCFT (or DCHT)

•  From $y_0$, compute
   $w(t, l) = \sqrt{n}\,\mathrm{DFT}\{\, y_0(l)\, v_t(l) \,\}$, $t = 1, 2, 3, 4$, where $v_t$ is the 1st column of $U_t$.
•  Update $\Omega = \Omega \cup \{\text{locations associated with the } d \text{ largest } w(t, l)\}$.
•  Let $A = \Phi|_\Omega$.
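A sketch of this detection step across the blocks, again assuming $\Phi = [\,U_1\ \cdots\ U_T\,]$ as above. Whether $v_t$ enters conjugated in the product with $y_0$ depends on the chirp convention; the conjugate used below is an assumption, and selecting by magnitude $|w(t,l)|$ is likewise assumed.

```python
import numpy as np

def detect_support(blocks, y0, d):
    """Step 1: score every (block t, location l) by w(t, l) = sqrt(n) * DFT{ y0 * conj(v_t) }."""
    n = y0.shape[0]
    scores = []
    for U in blocks:                                      # blocks = [U_1, U_2, ...]
        v = U[:, 0]                                       # v_t = first column of U_t
        w = np.sqrt(n) * np.fft.fft(y0 * np.conj(v))      # DCFT-style statistic for this block
        scores.append(np.abs(w))
    scores = np.stack(scores)                             # shape (num_blocks, n)
    flat = np.argsort(scores, axis=None)[-d:]             # d largest scores overall
    return [np.unravel_index(i, scores.shape) for i in flat]   # list of (t, l) pairs
```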

Conclusion

•  Extend the utility of CS using deterministic matrices
•  Demonstrate a method that supports imaging applications

Ongoing work:
•  Investigate more natural formulations for multi-dimensional signals
•  Exploit deterministic CS with a priori knowledge of the signals

noiselets: random noiselet measurements* with $\ell_1$ minimization**
*Candes and Romberg, Sparsity and incoherence in compressive sampling
**Zhang, Yang, and Yin, YALL1: Your ALgorithms for L1

Noisy measurements of compressible (not sparsified) images: $y = \Phi x + \mu$, where $\mu$ is noise with standard deviation $\sigma$.

| n/N | Image  | σ    | noiselets | chirp   | RM      |
|-----|--------|------|-----------|---------|---------|
| 25% | Brain  | 0    | 23.4 dB   | 28.4 dB | 25.7 dB |
| 25% | Brain  | 0.05 | 16.5 dB   | 25.2 dB | 24.9 dB |
| 25% | Brain  | 0.1  | 12.5 dB   | 21.3 dB | 20.9 dB |
| 25% | Vessel | 0    | 12.0 dB   | 14.1 dB | 13.4 dB |
| 25% | Vessel | 0.05 | 6.9 dB    | 12.4 dB | 12.3 dB |
| 25% | Vessel | 0.1  | 2.6 dB    | 9.9 dB  | 9.1 dB  |
| 25% | Man    | 0    | 20.0 dB   | 23.2 dB | 22.6 dB |
| 25% | Man    | 0.05 | 16.2 dB   | 22.5 dB | 21.7 dB |
| 25% | Man    | 0.1  | 12.7 dB   | 20.2 dB | 19.4 dB |

Fourier matrix example ($n = 3$):

$\begin{pmatrix} 1 & 1 & 1 \\ 1 & e^{2\pi i\,(1\cdot 1)/3} & e^{2\pi i\,(2\cdot 1)/3} \\ 1 & e^{2\pi i\,(1\cdot 2)/3} & e^{2\pi i\,(2\cdot 2)/3} \end{pmatrix} =: U_0$