BIORTHOGONAL WAVELET TRANSFORM
BASED IMAGE FUSION USING ABSOLUTE
MAXIMUM FUSION RULE
Om Prakash1,2, Richa Srivastava1, Ashish Khare1
1Image Processing and Computer Vision Lab
Department of Electronics and Communication
University of Allahabad, Allahabad, India
2Centre of Computer Education
University of Allahabad, Allahabad, India
{au.omprakash, gaur.richa}@gmail.com, [email protected]
Abstract - The objective of image fusion is to combine relevant
information from two or more images of the same scene into a
single composite image that is more informative and more
suitable for human and machine perception. In the recent past,
different methods of image fusion have been proposed in the
literature, in both the spatial domain and the wavelet domain. Spatial
domain based methods produce spatial distortions in the fused
image. Such distortions can be well handled by the use
of wavelet transform based image fusion methods. In this paper,
we propose a pixel-level image fusion scheme using the
multiresolution biorthogonal wavelet transform (BWT). Wavelet
coefficients at different decomposition levels are fused using the
absolute maximum fusion rule. Two important properties of the
BWT, wavelet symmetry and linear phase, have been exploited
for image fusion because they preserve edge
information and hence reduce distortions in the fused
image. The performance of the proposed method has been
extensively tested on several pairs of multifocus and multimodal
images, both noise-free and corrupted by additive white
Gaussian noise, and compared visually and quantitatively against
existing spatial domain methods. Experimental results show that
the proposed method improves fusion quality by reducing the loss of
significant information available in the individual images. Fusion
factor, entropy and standard deviation are used as quantitative
quality measures of the fused image.
Keywords - Image fusion, multifocus images, multimodality, biorthogonal
wavelet transform, fusion rules.
I. INTRODUCTION
In computer vision [1] applications, one of the
challenging problems is to combine the relevant information
from various images of the same scene without introducing
artifacts into the resultant image. Because different types
of sensors [2,3], with different sensing principles, are used in
image capturing devices, and because of the limited depth of focus
of the optical lenses used in cameras, it is possible to obtain several
images of the same scene providing different information.
Therefore, combining different information from several
images into a new, improved composite image has become an
important area of research. Image fusion applications are
found in diverse areas including medical imaging [5-7],
forensic science [8], remote sensing [9] and surveillance [9].
Various spatial domain [10,11] and frequency domain [12-14]
image fusion methods have been proposed in the literature. Some
of the popular spatial domain methods are arithmetic
averaging, Principal Component Analysis (PCA) [11,15],
sharpness criteria [16] and IHS (Intensity Hue Saturation)
[17] based fusion schemes. However, spatial domain image
fusion techniques often perform poorly because they usually
produce edge distortions in the fused image. As existing
approaches improve, new image fusion
methods are regularly proposed that address particular
problems with standard techniques. In recent years, wavelet
transform based image fusion methods have been gaining popularity
due to their multiresolution decomposition ability, which helps
preserve the significant content of the image.
There are two basic requirements for image fusion
[18,19]. First, the fused image should possess all possible relevant
information contained in the source images; second, the fusion
process should not introduce any artifact, noise or unexpected
feature into the fused image. Image fusion can be performed at
three levels: pixel level [18,19], region (feature) level [13,18,19] and
decision level [20]. Pixel level fusion deals with the information
associated with each pixel: each pixel value in the fused
image is determined from the corresponding pixel values of the
source images. In region level fusion, source images are
segmented into regions, and features of these regions (such as
pixel intensities, edges or texture) are used for
fusion. Decision level fusion is a high level fusion which uses
decisions coming from various fusing sensors. Decision level
fusion methods are based on statistics, voting, fuzzy
logic, prediction, heuristics etc. Pixel level fusion methods
are easy to implement and retain the original information in the
fused image. There are several methods available to
implement image fusion in the wavelet transform domain, which
are based on multiscale decomposition of the image. The
image fusion procedure mainly consists of two steps:
decomposition of the source images, and selection of coefficients
from the decomposed images, i.e. the fusion rule to be used.
Decomposition of an image produces coefficients in the transform
domain, and the fusion rule merges these coefficients without
losing the original information in the individual images and
without introducing any artifacts or inconsistencies.
Proceedings of 2013 IEEE International Conference on Information and Communication Technologies (ICT 2013)
978-1-4673-5758-6/13/$31.00 © 2013 IEEE 758
Use of the traditional wavelet transform based on the Mallat
algorithm is complex, as it uses convolution to process large
amounts of image data. It therefore needs more memory space for
read/write operations, which is costly for real-time imaging
applications. In addition, the orthogonal filters of the wavelet
transform do not have linear phase; the resulting phase
distortion leads to distortion of the image edges and hence
loss of important image content. To overcome both of these
shortcomings, the biorthogonal wavelet transform, which has the
linear phase and symmetry properties [21,22], is used.
In this paper, we propose an image fusion scheme based on the
biorthogonal wavelet transform which uses the absolute maximum
selection fusion rule. The proposed method is compared with
traditional spatial domain based image fusion methods:
linear fusion [25], Principal Component Analysis based
fusion [15] and sharpness criteria based image fusion [16].
For quantitative performance evaluation of the proposed
method, we have used three metrics: fusion factor (FF),
entropy (Q) and standard deviation (σ). The qualitative and
quantitative analysis of the experimental results shows that the
proposed method of image fusion yields better results. The
proposed method is robust in the sense that it is capable of
fusing images corrupted with white Gaussian noise of different
variance levels.
The organization of the paper is as follows: Section II presents
an overview of the biorthogonal wavelet transform (BWT),
Section III presents the proposed fusion method, and Section
IV gives the performance measures. In Section V, results of the
proposed method and its comparison with other image fusion
methods are presented. Finally, concluding remarks are
summarized in Section VI.
II. BIORTHOGONAL WAVELET TRANSFORM
In many filtering applications we need filters with
symmetric coefficients to achieve linear phase. None of the
orthogonal wavelet systems, except Haar, have symmetric
coefficients. A biorthogonal wavelet system can be designed to
achieve the symmetry property and exact reconstruction by using
two wavelet filters and two scaling filters instead of one
[21,22]. The biorthogonal family contains biorthogonal compactly
supported spline wavelets. With these wavelets, symmetry and
perfect reconstruction are possible using FIR (Finite Impulse
Response) filters, which is impossible for orthogonal
filters (except the Haar filters). The biorthogonal family
uses separate wavelet and scaling functions for the analysis
and synthesis of an image; the reverse biorthogonal family uses
the synthesis functions for analysis and vice versa.
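As a small illustration of the symmetry property discussed above, the analysis filters of the CDF 5/3 (LeGall) member of the biorthogonal spline family can be checked for symmetry directly. The filter taps below are the standard published values and are used only for illustration; the paper does not specify which member of the biorthogonal family was used.

```python
# Analysis filters of the CDF 5/3 (LeGall) biorthogonal spline wavelet.
analysis_lowpass = [-1/8, 2/8, 6/8, 2/8, -1/8]   # 5-tap lowpass
analysis_highpass = [-1/2, 1, -1/2]              # 3-tap highpass

def is_symmetric(h):
    """A symmetric FIR filter has linear phase."""
    return h == h[::-1]

print(is_symmetric(analysis_lowpass))    # True: symmetric, hence linear phase
print(is_symmetric(analysis_highpass))   # True
```

No orthogonal wavelet filter other than Haar passes this check, which is precisely the motivation for using the biorthogonal family here.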
III. THE PROPOSED METHOD
The proposed method of image fusion uses the
biorthogonal wavelet transform for decomposition and
reconstruction of the source images. The overall fusion
scheme based on the biorthogonal wavelet transform is shown in
Fig. 1.
Fig. 1: General biorthogonal wavelet based image fusion scheme
First, we decompose the source images of the same scene (which
can have different focus or modality) using the biorthogonal
wavelet transform (BWT), and then the obtained coefficients are
merged using the absolute maximum selection fusion rule. We
have used the wavelet and scaling functions of the BWT for
decomposition of the source images. The selection of a proper
wavelet for decomposition varies from application to
application; no general selection criterion for the wavelet and
scaling function is available in the literature [23], although the
vanishing moments and regularity (smoothness) of the wavelet can
be considered when deciding on a wavelet function [23]. For the image
fusion application, a wavelet with sufficient
vanishing moments is desired; therefore, we have used
biorthogonal filters to obtain the desired number of vanishing
moments. The coefficients obtained by decomposition of the
source images are fused using the absolute maximum fusion rule,
described in section 3.2.
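The decompose, fuse, reconstruct procedure just described can be sketched in code. This is a minimal one-level sketch assuming the CDF 5/3 biorthogonal spline wavelet implemented via lifting [21,22] and even image dimensions; the paper used MATLAB and does not state which biorthogonal filters were chosen, so the wavelet choice and all function names here are illustrative.

```python
import numpy as np

def dwt53_1d(x):
    """One level of CDF 5/3 analysis on a 1-D array of even length."""
    s, d = x[0::2].astype(float), x[1::2].astype(float)
    s_next = np.r_[s[1:], s[-1]]        # symmetric extension on the right
    d = d - 0.5 * (s + s_next)          # lifting predict step
    d_prev = np.r_[d[0], d[:-1]]        # symmetric extension on the left
    s = s + 0.25 * (d_prev + d)         # lifting update step
    return s, d

def idwt53_1d(s, d):
    """Inverse of dwt53_1d: lifting steps undone in reverse order."""
    d_prev = np.r_[d[0], d[:-1]]
    s = s - 0.25 * (d_prev + d)
    s_next = np.r_[s[1:], s[-1]]
    d = d + 0.5 * (s + s_next)
    x = np.empty(s.size + d.size)
    x[0::2], x[1::2] = s, d
    return x

def dwt53_2d(img):
    """Rows then columns -> four subbands LL, LH, HL, HH."""
    L, H = map(np.array, zip(*[dwt53_1d(row) for row in img]))
    LL, LH = map(np.array, zip(*[dwt53_1d(col) for col in L.T]))
    HL, HH = map(np.array, zip(*[dwt53_1d(col) for col in H.T]))
    return LL.T, LH.T, HL.T, HH.T

def idwt53_2d(LL, LH, HL, HH):
    """Columns then rows; exact inverse of dwt53_2d."""
    L = np.array([idwt53_1d(s, d) for s, d in zip(LL.T, LH.T)]).T
    H = np.array([idwt53_1d(s, d) for s, d in zip(HL.T, HH.T)]).T
    return np.array([idwt53_1d(s, d) for s, d in zip(L, H)])

def fuse(img1, img2):
    """Decompose both images, keep the larger-magnitude coefficient
    subband-wise (the absolute maximum rule of section 3.2), and
    reconstruct the fused image."""
    bands1, bands2 = dwt53_2d(img1), dwt53_2d(img2)
    fused = [np.where(np.abs(a) >= np.abs(b), a, b)
             for a, b in zip(bands1, bands2)]
    return idwt53_2d(*fused)
```

Because lifting steps are individually invertible, the transform reconstructs perfectly regardless of the boundary extension, which is the property the fusion scheme relies on.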
3.1 Usefulness of Biorthogonal Wavelet Transform for Fusion
The biorthogonal wavelet transform is useful for image
fusion because of the following properties:
(i) Availability of Linear Phase
The orthogonal filters of the wavelet transform do not have
linear phase; the resulting phase distortion leads to distortion
of the image edges. To make up for this shortcoming, the
biorthogonal wavelet with the linear phase characteristic is
introduced [21,22]. As noted in Section II, the biorthogonal
family achieves symmetry and perfect reconstruction with FIR
filters, and this symmetry means that the filters have linear
phase.
3.2 Fusion Rule Used
In the proposed method of image fusion, we fuse
the biorthogonal wavelet coefficients by the absolute
maximum selection fusion rule. Suppose I1(x,y) and I2(x,y) are
the two images to be fused and W1(m,n) and W2(m,n) are their
wavelet coefficients, respectively. The absolute maximum
selection fusion rule then combines the wavelet coefficients
as below:

W(m,n) = { W1(m,n), if |W1(m,n)| ≥ |W2(m,n)|
         { W2(m,n), if |W2(m,n)| > |W1(m,n)|        (1)
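Applied element-wise, Eq. (1) is a single selection between two coefficient arrays. The sketch below uses NumPy with illustrative coefficient values; ties go to W1, matching the "≥" branch of Eq. (1).

```python
import numpy as np

# Two illustrative 2x2 coefficient arrays.
W1 = np.array([[3.0, -1.0], [0.5, -4.0]])
W2 = np.array([[-2.0, 2.0], [0.5, 1.0]])

# Eq. (1): keep the coefficient with the larger magnitude.
W = np.where(np.abs(W1) >= np.abs(W2), W1, W2)
# W == [[3.0, 2.0], [0.5, -4.0]]
```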
IV. IMAGE FUSION PERFORMANCE MEASURES
The application area of image fusion determines the
evaluation method. In image fusion applications, the
aim is to preserve the significant parts of the source
images, for instance the edges and regions with high contrast,
and one type of evaluation is based on this perceptual information.
On the other hand, quantitative measures can also be used for
performance evaluation of a fusion method.
4.1 Quantitative Evaluation
In quantitative performance evaluation [26,27], we
evaluate fusion on the basis of statistical parameters of the fused
image. Several parameters can be used for evaluating the
performance of a fusion algorithm. In the proposed work we
have used three performance evaluation metrics, namely the fusion
factor (FF), information entropy (Q) and standard deviation
(σ) of the original image and the fused image [26,27]. These
performance metrics are briefly introduced as follows.
(i) Standard Deviation (σ)
Standard deviation is a measure of the contrast of the
fused image, and it can be calculated as:

σ = sqrt( Σ_{i=0}^{L-1} (i - ī)² h_F(i) ),   ī = Σ_{i=0}^{L-1} i · h_F(i)        (2)

where i, h_F and L are the grey-level index, the normalized
histogram of the fused image, and the number of bins in the
histogram, respectively. The higher the standard deviation,
the better the quality of the fused image.
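Eq. (2) can be sketched as follows, assuming an 8-bit image so that L = 256; the function name is illustrative.

```python
import numpy as np

def histogram_std(img, L=256):
    """Standard deviation of Eq. (2), computed from the normalized
    gray-level histogram h_F of the image."""
    counts, _ = np.histogram(img, bins=L, range=(0, L))
    h = counts / counts.sum()          # normalized histogram h_F(i)
    i = np.arange(L)
    mean = np.sum(i * h)               # i-bar, the mean gray level
    return np.sqrt(np.sum((i - mean) ** 2 * h))
```

For integer-valued images with unit-width bins this coincides with the ordinary standard deviation of the pixel values.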
(ii) Information Entropy (Q)
Information entropy measures the amount of information contained in
the fused image. It is calculated as follows:

Q = - Σ_{i=0}^{L-1} P_i log2 P_i        (3)

where L is the number of gray levels and P_i is the ratio between
the number of pixels with gray value i and the total number of
pixels.
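Eq. (3) can be sketched as follows, again assuming an 8-bit image (L = 256); zero-probability bins are skipped since 0 · log 0 is taken as 0.

```python
import numpy as np

def image_entropy(img, L=256):
    """Shannon entropy of Eq. (3), from the gray-level distribution."""
    counts, _ = np.histogram(img, bins=L, range=(0, L))
    P = counts / counts.sum()
    P = P[P > 0]                        # drop empty bins: 0 * log2(0) -> 0
    return -np.sum(P * np.log2(P))
```

A constant image has entropy 0, and an image split evenly between two gray values has entropy 1 bit.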
(iii) Fusion Factor (FF)
The fusion factor is the sum of the mutual information between
each source image and the fused image:

FF = M_AF + M_BF        (4)

where M_AF and M_BF are the mutual information between the source
images A, B and the fused image F. Mutual information is a basic
concept of information theory, measuring the amount of information
that one image contains about another. Thus, a higher fusion factor
means the fused image carries more information from the source
images.
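A sketch of the fusion factor of Eq. (4), estimating mutual information from joint gray-level histograms; the bin count and function names are illustrative assumptions.

```python
import numpy as np

def mutual_information(a, b, bins=256):
    """Mutual information between two 8-bit images, estimated from
    their joint gray-level histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(),
                                 bins=bins, range=[[0, bins], [0, bins]])
    pxy = joint / joint.sum()                  # joint distribution
    px = pxy.sum(axis=1, keepdims=True)        # marginal of a
    py = pxy.sum(axis=0, keepdims=True)        # marginal of b
    nz = pxy > 0                               # avoid log(0)
    return np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz]))

def fusion_factor(src_a, src_b, fused):
    """Eq. (4): FF = M_AF + M_BF."""
    return mutual_information(src_a, fused) + mutual_information(src_b, fused)
```

As a sanity check, the mutual information between an image and itself equals its entropy, so fusing an image with itself gives FF = 2Q.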
V. EXPERIMENTAL RESULTS AND DISCUSSION
Image fusion was performed on several sets of
multifocus and multimodal images based on the biorthogonal
wavelet transform, using MATLAB. The proposed fusion
algorithm is tested with the help of qualitative and quantitative
assessment. In this paper, we present two sets of
representative images and the corresponding fused images. The
performance of the proposed algorithm is compared with
traditional spatial domain based image fusion methods:
linear fusion [25], Principal Component Analysis based
fusion [15] and sharpness criteria based image fusion [16]. All
experiments were performed in two scenarios: one in which the
source images are free from noise, and another in which the images
to be fused are corrupted with zero mean white Gaussian noise.
Fig. 2: Illustrations of fusion results for multifocus images. (a) hoed A, (b)
hoed B, (c) linear fused image, (d) PCA fused image, (e) Sharpness fused
image, (f) fused image by the proposed method
In the first experiment, two multifocus images are used:
hoed_A, with a blurred centre portion, and hoed_B, with a blurred
outer portion. The fusion of these two images gives a better
visualization of the whole object. The fused images obtained
in experiment 1 by the proposed method and the other methods
used for comparison are shown in Fig. 2. Again, fusion of the same
pair of images is performed after adding zero mean white
Gaussian noise with variance 0.01 to both input images; the
results in this case are shown in Fig. 3.
Fig. 3: Illustrations of fusion results for multifocus images in presence of zero
mean Gaussian noise with variance 0.01. (a) hoed A, (b) hoed B, (c) linear
fused image, (d) PCA fused image, (e) Sharpness fused image, (f) fused image
by the proposed method
Fig. 4: Illustrations of fusion results for multimodal images. (a) CT image, (b)
MRI image, (c) linear fused image, (d) PCA fused image, (e) Sharpness fused
image, (f) fused image by the proposed method
Fig. 5: Illustrations of fusion results for multimodal images in presence of
zero mean Gaussian noise with variance 0.01. (a) CT image, (b) MRI image,
(c) linear fused image, (d) PCA fused image, (e) Sharpness fused image, (f)
fused image by the proposed method
Another representative fusion is performed on a pair of medical
images consisting of a CT image showing hard tissue and an MRI
image showing soft tissue. For this pair of images, the fused image
combines all the relevant information without introducing any
artifacts, as shown in Fig. 4 and Fig. 5.
For better analysis of the resultant images, qualitative
performance measures are not sufficient, and hence some
quantitative measures are needed. To better analyse the
quality of image fusion, we have used the three performance
measures described in section 4.1. We compared our
method against the other methods on the basis of fusion
factor, information entropy and standard deviation for both the
multifocus and the multimodal medical image pairs, as shown in
Table 1.
Table 1: Comparative quantitative performance measures of fusion results

Fused Image   | Method used         | Fusion Factor | Information Entropy | Standard Deviation
Hoed          | Linear Fusion [25]  | 3.5487        | 7.1177              | 39.5803
Hoed          | PCA Fusion [15]     | 4.3801        | 7.6293              | 56.6486
Hoed          | Sharp Fusion [16]   | 4.1840        | 7.7594              | 61.4990
Hoed          | The proposed method | 4.4003        | 7.7746              | 61.7299
Medical Image | Linear Fusion [25]  | 3.2281        | 5.5403              | 29.6808
Medical Image | PCA Fusion [15]     | 2.6305        | 5.6220              | 28.3806
Medical Image | Sharp Fusion [16]   | 2.5462        | 5.8097              | 30.9918
Medical Image | The proposed method | 4.7880        | 5.9985              | 33.0061
The robustness of the method is tested against
zero mean white Gaussian noise, and the quantitative
evaluations of the fused images in the presence of this noise are
shown in Table 2.
Table 2: Comparative quantitative performance measures of fusion results in
presence of zero mean Gaussian noise with variance 0.02

Fused Image   | Method used         | Fusion Factor | Information Entropy | Standard Deviation
Hoed          | Linear Fusion [25]  | 2.6792        | 7.1675              | 38.1169
Hoed          | PCA Fusion [15]     | 3.8690        | 7.7570              | 57.8395
Hoed          | Sharp Fusion [16]   | 4.3419        | 7.6793              | 62.2867
Hoed          | The proposed method | 3.7086        | 7.8469              | 68.4181
Medical Image | Linear Fusion [25]  | 2.7271        | 6.2981              | 29.2222
Medical Image | PCA Fusion [15]     | 3.3106        | 6.1834              | 33.8288
Medical Image | Sharp Fusion [16]   | 3.0802        | 5.4778              | 33.2127
Medical Image | The proposed method | 3.7328        | 6.6889              | 40.9239
VI. CONCLUSION
The usefulness of the proposed image fusion method is tested
on multifocus and multimodal images. For this, we have
presented two pairs of images and their fusion results. The
results are also tested under two different conditions: when the
images are free from noise, and when they are corrupted
with zero mean white Gaussian noise. Different sets of images
with varying noise are fused using the absolute maximum
selection fusion rule. On the basis of these experiments and the
comparison with existing traditional image fusion methods,
we observed that the proposed method performs better in most
cases. Performance is measured on the basis of
qualitative and quantitative criteria. According to the fusion
results, the biorthogonal wavelet transform produces better
results and is more applicable because it retains information
from the individual images, such as edges, lines, curves and
boundaries, in the fused image. This is because of the linear phase
and symmetry properties of the filters used in the biorthogonal
wavelet transform. The experimental results also show that
multiscale wavelet decomposition is highly effective for
image fusion, and therefore the proposed image fusion
method is effective for different multimodal and multifocus
image fusion tasks.
ACKNOWLEDGMENT
This work was supported in part by the Department of Science and Technology, New Delhi, India, under grant no. SR/FTP/ETA-023/2009 and
the University Grants Commission, New Delhi, India, under grant no. 36-
246/2008(SR).
REFERENCES
[1] D. A. Forsyth and J. Ponce, “Computer Vision – A Modern Approach”, PHI Publication, 2009.
[2] R. S. Blum and Z. Liu., “Multi-sensor image fusion and its
applications”, CRC Press, Taylor & Francis Group, 2006. [3] Yi Chai, Huafeng Li and Zhaofei Li, “ Multifocus image fusion scheme
using focused region detection and multiresolution,” Optics
Communications, Vol. 284, pp. 4376-4389, 2011. [4] G. Pajares and J. M. l de la Cruz, “A wavelet-based image fusion
tutorial”, Pattern Recognition, vol.37, pp.1855– 1872, 2004.
[5] R. Singh, R. Srivastava, O. Prakash and A. Khare, “Mixed scheme based multimodal medical image fusion using Daubechies complex
wavelet transform,” in proc. of IEEE/OSA/IAPR International
conference on Informatics, Electronics and Vision, Dhaka, pp. 304-309, 2012.
[6] R. Singh, R. Srivastava, O. Prakash and A. Khare, “Multimodal
medical image fusion in Dual tree complex wavelet domain using maximum and average fusion rules,” Journal of Medical Imaging and
Health Informatics, Vol. 2, No. 2, pp. 168-173, 2012.
[7] B. V. Darasthy, “Information fusion in the realm of medical applications – a bibliographic glimpse at its growing appeal”,
Information Fusion Vol. 13 pp. 1-9, 2012.
[8] C.Y. Wen, J.K. Chen, “Multi-resolution image fusion technique and its application to forensic science”, Forensic Science International,
vol.140, pp.217–232, 2004.
[9] P. Shah, S. N.Merchant and U. B.Desai, “Fusion of surveillance Images in Infrared and visible band using curvelet, wavelet and
wavelet packet transform ,”International Journal of Wavelets,
Multiresolution and Information Processing Vol. 8, No. 2, pp. 271–292, 2010.
[10] P. Shah, S. N.Merchant and U. B.Desai, “An efficient spatial domain
fusion scheme for multifocus images using statistical properties of neighborhood,” Multimedia and Expo (ICME), pp. 1-6, 2011.
[11] H. Chen, “A multiresolution Image Fusion based on Principle
Component Analysis,” Fourth International Conference on Image and Graphics, pp. 737-741, 2007.
[12] H. H. Wang, “A new multiwavelet-based approach to image fusion”,
Journal of Mathematical Imaging and Vision, vol.21, pp.177-192, 2004.
[13] J. J. Lewis, R. J. O'Callaghan, S. G. Nikolov, D. R. Bull and C. N. Canagarajah, “Region-based image fusion using complex wavelets”,
7th International Conference on Information Fusion (Fusion 2004),
International Society of Information Fusion (ISIF), Stockholm, pp. 555-562, 2004.
[14] R. Singh, R. Srivastava, O. Prakash and A. Khare, "DTCWT based
multimodal medical image fusion", in proc. of International conference on Signal, Image and Video processing, January 2012, pp. 403-407,
IIT Patna.
[15] V. P. S. Naidu and J. R. Raol, “Pixel-level image fusion using wavelets
and principal component analysis”, Defence Science Journal, Vol. 58,
No. 3, pp. 338-352, 2008.
[16] J. Tian, L. Chen, L. Ma and W. Yu, “Multi-focus image fusion using a bilateral gradient-based sharpness criterion," Optics Communications,
Vol. 284, pp. 80-87, 2011.
[17] S. Daneshvar and H. Ghassemian, “MRI and PET image fusion by combining IHS and retina-inspired models”, Information Fusion Vol.
11, No. 2, pp. 114-123, 2010.
[18] G. Piella, “A general framework for multiresolution image fusion: from pixels to regions, Information Fusion,” Vol. 4, No. 4, pp. 259-280,
2003.
[19] N. Mitianoudis, T. Stathaki, “Pixel-based and Region-based Image Fusion schemes using ICA bases,” Special Issue on Image Fusion:
Advances in the State of the Art, Vol. 8, No. 2, pp. 131-142, 2007.
[20] Z. Yunfeng, Y. Yixin, F. Dongmei, Decision-level fusion of infrared
and visible images for face recognition, Control and Decision
Conference (CCDC), 2008, pp. 2411 – 2414.
[21] W. Sweldens, “The lifting scheme: A construction of second
generation wavelets”, SIAM J. Math. Anal., 1997.
[22] W. Sweldens, “The lifting scheme: A custom-design construction of
biorthogonal wavelets”, Appl. Comput. Harmon. Anal., Vol. 3, 1996.
[23] A. Khare, U. S. Tiwary, W. Pedrycz, and M. Jeon, “Multilevel adaptive thresholding and shrinkage technique for denoising using Daubechies
complex wavelet transform”, The Imaging Science Journal, Vol. 58,
No.6, pp.340-358, 2010. [24] V. Petrovic and C. Xydeas, “Evaluation of image fusion performance
with visible differences”, Lecture Notes in Computer Science, vol.
3023, 2004. [25] J. G. P. W. Clevers, and R. Zurita-Milla, “Multisensor and
multiresolution image fusion using the linear mixing model”, in: T.
Stathaki (Eds.), Image Fusion: Algorithms and Applications, Academic Press, Elsevier, pp. 67-84, 2008.
[26] S. Li, B. Yang, and J. Hu, “Performance comparison of different multi-
resolution transforms for image fusion”, Information Fusion, Vol. 12, pp. 74-84, 2011.
[27] M. Deshmukh and U. Bhosale, “Image fusion and image quality
assessment of fused images”, Int J Image Process, Vol. 4, pp. 484–508, 2010.