Non-Parametric Bayesian Dictionary Learning for Sparse Image Representations
Mingyuan Zhou, Haojun Chen, John Paisley, Lu Ren, Guillermo Sapiro¹, and Lawrence Carin
Department of Electrical and Computer Engineering, Duke University, Durham, NC, USA
¹Department of Electrical and Computer Engineering, University of Minnesota, Minneapolis, MN, USA
NIPS, December 2009
• Dictionaries adapted to the data under test
• Improved performance
• Better interpretation
Dictionary Learning: General Approach
• Global objective function
• Sparse coding stage (fix the dictionary)
• Dictionary update stage
  – Method of Optimal Directions (MOD): fix the sparse codes
  – K-SVD: fix the sparsity pattern, rank-1 approximation
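As a concrete illustration of the alternating scheme above, here is a minimal NumPy sketch of MOD-style dictionary learning, with greedy OMP standing in for the sparse-coding stage; function and parameter names are illustrative, not from the paper.

```python
import numpy as np

def omp(D, x, k):
    """Greedy OMP: pick k atoms most correlated with the current residual."""
    idx, r = [], x.copy()
    s = np.zeros(D.shape[1])
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(D.T @ r))))
        coef, *_ = np.linalg.lstsq(D[:, idx], x, rcond=None)
        r = x - D[:, idx] @ coef
    s[idx] = coef
    return s

def mod_dictionary_learning(X, K, n_iter=20, sparsity=5, seed=0):
    """Alternate sparse coding (dictionary fixed) and a MOD dictionary update.

    X : (d, n) data matrix, K : preset dictionary size.
    """
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((X.shape[0], K))
    D /= np.linalg.norm(D, axis=0)                 # unit-norm atoms
    for _ in range(n_iter):
        # sparse coding stage: code each column of X against fixed D
        S = np.column_stack([omp(D, x, sparsity) for x in X.T])
        # dictionary update stage, MOD: D = X S^T (S S^T)^{-1},
        # computed via least squares for numerical stability
        D = np.linalg.lstsq(S.T, X.T, rcond=None)[0].T
        D /= np.maximum(np.linalg.norm(D, axis=0), 1e-12)
    return D, S
```

Each column of `S` uses at most `sparsity` atoms; unused atoms collapse toward zero and are simply never selected again.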
• Restrictions of previous dictionary learning approaches:
  – The noise variance or sparsity level is assumed to be known.
  – The size of the dictionary needs to be set a priori.
  – Only point estimates are provided.
• How to relax these restrictions?
  – Introduce a non-parametric Bayesian dictionary learning approach.
  – Use sparsity-promoting priors instead of enforcing the sparsity level/noise variance.
  – Preset a large dictionary size and let the data itself infer an appropriate dictionary size.
Model comparison
• Dictionary learning with beta process priors, in three stages: dictionary learning (enforced sparsity pattern), sparsity pattern update, pseudo-weights update.
• The three models differ markedly in the degree to which they exploit previously obtained information.
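For reference, the finite (size-K) beta-process factor-analysis construction these slides build on can be summarized as follows; this is a sketch of the standard BPFA model, with hyperparameter symbols ($a$, $b$, $\gamma$) chosen for illustration:

```latex
\begin{align*}
x_i &= D(z_i \odot s_i) + \epsilon_i, &
d_k &\sim \mathcal{N}\!\left(0, \tfrac{1}{P} I_P\right),\\
z_{ik} &\sim \mathrm{Bernoulli}(\pi_k), &
\pi_k &\sim \mathrm{Beta}\!\left(\tfrac{a}{K}, \tfrac{b(K-1)}{K}\right),\\
s_i &\sim \mathcal{N}\!\left(0, \gamma_s^{-1} I_K\right), &
\epsilon_i &\sim \mathcal{N}\!\left(0, \gamma_\epsilon^{-1} I_P\right),
\end{align*}
```

with gamma hyperpriors typically placed on the precisions $\gamma_s$ and $\gamma_\epsilon$. The binary vector $z_i$ selects which atoms a sample uses, and as $K \to \infty$ the $\{\pi_k\}$ construction converges to a draw from a beta process, which is what lets the data infer the effective dictionary size.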
Sequential learning for large data sets
• Partition the whole data set into subsets X_1, …, X_T.
• Instead of directly calculating p(D | X_1, …, X_T), first calculate p(D | X_1); the posterior on D is then used as the prior for calculating p(D | X_1, X_2), and so on through X_T.
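The posterior-as-prior recursion can be illustrated with a toy conjugate model; the Gaussian-mean example below is only a stand-in (the dictionary posterior itself is not conjugate and is handled by approximate inference), but it shows why batch and sequential routes agree:

```python
import numpy as np

def gaussian_posterior(prior_mean, prior_prec, batch, noise_prec=1.0):
    """Conjugate update for the mean of a Gaussian with known noise precision.

    Toy stand-in for the dictionary posterior, illustrating the
    posterior-as-prior principle behind sequential learning.
    """
    n = len(batch)
    post_prec = prior_prec + n * noise_prec
    post_mean = (prior_prec * prior_mean + noise_prec * np.sum(batch)) / post_prec
    return post_mean, post_prec

rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=1.0, size=300)

# Batch posterior: all data at once
m_all, p_all = gaussian_posterior(0.0, 1.0, data)

# Sequential: feed the data in chunks, reusing each posterior as the next prior
m, p = 0.0, 1.0
for chunk in np.split(data, 3):
    m, p = gaussian_posterior(m, p, chunk)

# The two routes coincide (conjugacy makes this exact)
assert np.isclose(m, m_all) and np.isclose(p, p_all)
```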
Non-Parametric Bayesian Dictionary Learning
• The noise variance/sparsity level need not be known.
• The dictionary size is automatically inferred.
• Training data are not required.
• The average sparsity level of the representation is inferred from the data itself, and based on the posterior, each sample has its own unique sparse representation.
• A single model applicable to gray-scale, RGB, and hyperspectral image denoising & inpainting.
Image denoising
[Figure: noisy image; K-SVD denoising with mismatched variance; K-SVD denoising with matched variance; BPFA denoising; learned dictionaries]
Image inpainting
[Figure, 80% pixels missing: original image, corrupted image, restored image, dictionary]
[Figure, 50% pixels missing: original image, corrupted image, restored image, dictionary]
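The mechanics behind these inpainting results can be sketched as follows: sparse-code each patch against only the observed rows of the dictionary, then synthesize all pixels from the selected atoms. This toy version uses greedy OMP in place of the paper's Bayesian inference; names and the sparsity parameter are illustrative.

```python
import numpy as np

def inpaint_patch(D, y, mask, k=5):
    """Sketch: reconstruct a patch from its observed pixels only.

    D    : (d, K) dictionary
    y    : (d,) patch vector (entries outside `mask` are ignored)
    mask : boolean (d,) array marking observed pixels
    Codes against the masked dictionary rows, then fills in all pixels.
    """
    Dm, ym = D[mask], y[mask]
    idx, r = [], ym.copy()
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(Dm.T @ r))))
        coef, *_ = np.linalg.lstsq(Dm[:, idx], ym, rcond=None)
        r = ym - Dm[:, idx] @ coef
    return D[:, idx] @ coef   # synthesize the full patch, missing pixels included
```

Overlapping patches would then be averaged to form the restored image; the BPFA model additionally infers the dictionary itself from the corrupted data, which this sketch takes as given.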
RGB image inpainting
• 480×321 RGB image, 80% missing
[Figure: original image, corrupted image, restored image, dictionary; plot of PSNR vs. learning round]
RGB image inpainting
[Figure: original image, corrupted image, restored image, dictionary]
Hyperspectral image inpainting
• 150×150×210 hyperspectral urban image, 95% missing
[Figure, spectral bands 1 and 100: original image; corrupted image, 10.9475 dB; restored image, 25.8839 dB]
Hyperspectral image inpainting
• 845×512×106 hyperspectral image, 98% missing
[Figure: spectral bands 50 and 90, original vs. restored]
Compressive sensing
• Image size: 480×320; 2,400 8×8 patches; 153,600 coefficients are estimated
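The numbers on this slide are self-consistent, which a couple of lines can verify (the non-overlapping-patch interpretation is my assumption):

```python
# Sanity check of the patch accounting on this slide (illustrative only).
H, W, P = 480, 320, 8              # image height/width and patch side
n_patches = (H // P) * (W // P)    # non-overlapping 8x8 patches
n_coeffs = n_patches * P * P       # one coefficient per pixel per patch

assert n_patches == 2400
assert n_coeffs == 153_600 == H * W   # matches the pixel count exactly
```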