
IMAGE QUALITY METRICS BASED MULTI-FOCUS IMAGE FUSION

Amina Saleem¹, Azeddine Beghdadi¹ and Boualem Boashash²

1. L2TI, Institut Galilée, Université Paris 13, Villetaneuse, France
2. Qatar University College of Engineering, Doha, Qatar; Adjunct Professor, University of Queensland, Brisbane, Australia

ABSTRACT

We describe an innovative methodology for block based multi-focus image fusion in the spatial domain. The main idea is to use blind (no-reference) blur metrics for image fusion applications. Two blur quality metrics, one based on edge content information and one on a Radon-Wigner-Ville based mean directional entropy, are developed in the paper and used as activity measures. Fusion is performed by calculating the blur based activity measure for each sub-window of the source images and applying a maximum selection criterion according to the magnitude of these metrics. Furthermore, the defined metrics can be used for quantitative performance assessment of multi-focus fused images. The results of the proposed algorithm are compared to some existing image fusion methods.

Index Terms— Image fusion, Image quality, Performance evaluation

1. INTRODUCTION

The aim of image fusion algorithms is to combine complementary and redundant information from multiple images (from the same or different sensors) to generate a composite that gives a better description of the scene than any of the individual source images. Image fusion is recognized as a useful tool for improving overall system performance in image-based application areas such as defense surveillance, remote sensing, medical imaging, machine vision and biometrics. Image fusion methods are classified on the basis of the different stages at which fusion takes place [2]. They are broadly categorized as pixel level, feature level and decision level fusion [1, 2]. Pixel level fusion involves the integration of low level information such as intensity, where the information at each pixel in the fused image is determined from a set of pixels in the source images. Feature level fusion requires the extraction of salient features such as edges or textures. These features from the source images are then fused to get the composite image. Decision level fusion is a higher level of fusion where the input images are processed individually for information extraction, and the information is combined by applying decision rules to reinforce common interpretation.

In imaging applications where images are captured by CCD devices, we get images of the same scene which are not in focus everywhere (if one object is in focus, another one will be out of focus). This occurs because the sensors cannot image objects at various distances with equal sharpness. One way to overcome this problem is to take different in-focus parts of a scene and merge them into a composite image which has all parts in focus. This is also useful for digital camera design and industrial inspection applications, where the need to visualize objects at very short distances complicates the preservation of the depth of field. Many techniques for the multi-focus image fusion problem have been proposed in the literature. The wavelet transform [7], wavelet transform contrast [8], contourlet transform [9] and Haar transform [10] are some of the transform based methods used for multi-focus fusion. In [5] the ratio of blurred and original image intensities is used to guide the fusion process. A multi-focus image fusion method using spatial frequency (SF) and morphological operators is proposed in [6]. In [11] the application of artificial neural networks to the pixel level multi-focus image fusion problem is presented. Most image fusion techniques use pixel based methods [3, 4]. The advantage of the pixel level fusion methods is that the composite image contains the original information. The spatial methods, compared to the transform based methods, are in general computationally less expensive and undergo less information loss, as no transforms and inverse transformations are involved. In this work, we propose a block based multi-focus image fusion algorithm using no-reference blur quality metrics. Indeed, the previous methods are based on indirect measures (like spatial frequency, visibility and transform coefficients) to estimate the sharpness or contrast of the multi-focus images. We introduce image quality metrics that directly measure the sharpness or contrast of the source images to yield a fused output. The basic idea is to divide the source images into blocks, and then select the corresponding block with less blur, as measured by the metrics defined in the paper. The performance assessment of image fusion algorithms is a difficult task due to the non-availability of ground truth. However, we can define metrics for a specific image fusion task. We further show that the blur metrics can be used as a quantitative measure for performance assessment of multi-focus fused images. The paper is organized as follows.



The following section briefly introduces the blur quality metrics. Section 3 describes the multi-focus image fusion framework. The next section is devoted to the performance evaluation of multi-focus image fusion methods in general and of the proposed method in particular. The results are discussed in Section 5. Finally, we conclude our work in Section 6.

2. BLUR QUALITY METRICS

Different contrast structures in images provide information on sharp edges and texture components. In [12], A. Turiel and N. Parga present a formalism that results in a natural hierarchical description of the different contrast structures in images, from textures to sharp edges. A quantity, the edge content, is defined that accumulates all the contrast changes, whatever their strength, inside a scale r. Using the edge content information, we define a metric to measure the blur degradation of images in a no-reference scenario, i.e. using this metric we can sort a set of blurred images and select the best among them. In [13] a blind image quality metric based on measuring image anisotropy is proposed. It is shown that image anisotropy is sensitive to noise and blur, and hence quality can be measured in this way: the variance of the expected entropy is measured as a function of the directionality and taken as an anisotropy indicator. In our work, we use the mean Rényi entropy of the Radon-Wigner-Ville transform of the image as an indicator of blur. Based on experiments, we hypothesize that blur in the image reduces both the edge content EC and the mean directional Rényi entropy. Following this line of reasoning, we extend the concepts of EC and anisotropy to be used as activity measures for the multi-focus image fusion problem.

2.1. Edge content

Image-processing techniques emphasize that edges are the most prominent structures in a scene, as they cannot be predicted easily. Changes in contrast are relevant because they occur at the most informative pixels of the scene [12]. In [12] precise definitions of edges and other texture components of images were proposed. Contrast changes are characterized by a quantity, the edge content EC, that accumulates all of them, whatever their strength, contained inside a scale r, that is:

$$EC = \epsilon_r(\vec{x}) = \frac{1}{r^2} \int_{x_1-\frac{r}{2}}^{x_1+\frac{r}{2}} dx'_1 \int_{x_2-\frac{r}{2}}^{x_2+\frac{r}{2}} dx'_2 \, \left|\nabla C(\vec{x}')\right|$$

The discrete formulation is represented by the following expression:

$$EC = \frac{1}{m \times n} \sum_{x_1} \sum_{x_2} \left|\nabla C(\vec{x})\right|$$

where m and n represent the size of the image block for which we calculate EC, with 1 ≤ x₁ ≤ m and 1 ≤ x₂ ≤ n. The contrast C(x⃗) is taken as C(x⃗) = I(x⃗) − ⟨I⟩, where I(x⃗) is the field of luminosities and ⟨I⟩ is its average value across the ensemble. The bi-dimensional integral on the right-hand side, defined on the set of pixels contained in a square of linear size r, is a measure of that square. It is divided by the factor r², which is the Lebesgue measure (denoted by λ) of a square of linear size r. The quantity ε_r(x⃗) can then be regarded as the ratio between these two measures. More generally, we define the measure μ(A), the edge measure EM of the subset A, as:

$$EM = \mu(A) = \int_A d\vec{x}' \, \left|\nabla C(\vec{x}')\right|$$

where $\int_A d\vec{x}'$ is a bi-dimensional integration over the set A. It is also possible to generalize the definition of ε_r to any subset A as:

$$\epsilon_A = \frac{\mu(A)}{\lambda(A)} = \frac{\int_A d\vec{x}' \, \left|\nabla C(\vec{x}')\right|}{\lambda(A)}$$

Contrast changes are distributed over the images in such a way that EC receives large contributions even from pixels that are very close together. EC increases with decreasing blur. The quality metric is tested by applying it to a set of randomly selected natural images, as proposed in [13]: a set of natural images progressively degraded by increasing blur is taken, and the value of EC for this image set is plotted in Figure 1(b). We can see from Figure 1(b) that the value decreases with increasing blur. Similar results are observed for a number of other natural images and for images from the LIVE database [21] (not included here). The metric was also tested as a full-reference measure of ringing and blur degradations, giving high correlations.
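For illustration, the discrete EC of an image block can be computed with a few lines of NumPy. This is a minimal sketch of the formula above, not the authors' code; the finite-difference gradient is our choice of discretization for ∇C, which the paper does not specify.

```python
import numpy as np

def edge_content(block):
    """Discrete edge content: EC = (1/(m*n)) * sum |grad C|, with C = I - <I>.

    `block` is a 2-D grayscale array. Subtracting the mean luminosity does
    not change the gradient, but mirrors the definition of the contrast C.
    """
    contrast = block.astype(np.float64) - block.mean()  # C(x) = I(x) - <I>
    gy, gx = np.gradient(contrast)                      # finite differences
    m, n = block.shape
    return np.hypot(gx, gy).sum() / (m * n)             # mean |grad C|
```

Since blur attenuates gradients, the EC of a blurred block is lower than that of its sharp counterpart, which is exactly the ordering exploited by the fusion rule of Section 3.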

2.2. Radon-Wigner-Ville based blur metric

In this paper we also develop another methodology, based on anisotropy, for the blind assessment of blur in images. To capture the directional information of the images, Radon profiles are taken ten degrees apart. It has been experimentally validated in [14] that profiles taken ten degrees apart retain all the important information in an image. The one-dimensional Wigner-Ville distribution (WVD) [18] gives the time-frequency distribution corresponding to each Radon profile. The Rényi entropy with the parameter α set to 3 is used to determine the directional information in each WVD-transformed Radon profile. The expected (mean) value of these directional entropies is used as a metric for the blind assessment of blur. Let R_I(I)(α, s) be the Radon transform of the image I(x⃗) and R_I(I)(α_i, s_i) the Radon profiles corresponding to the angles θ = [θ₁, θ₂, …, θ_i]. Let the Rényi entropy for each Radon profile be denoted by R₃(i). The image quality metric is then mathematically defined by:

$$\bar{R} = \frac{\sum_i R_3(i)}{p}$$


where p represents the total number of image Radon profiles. The discrete Wigner-Ville distribution W_z(n, k) of a function z(n) and the third-order Rényi information applied to the time-frequency distribution W_z(n, k) are given by

$$W_z(n,k) = 2\,\mathrm{DFT}_{m \rightarrow k}\left\{ z[n+m]\, z^*[n-m] \right\}, \quad m \in \langle N \rangle$$

where ⟨N⟩ means any set of N consecutive integers, and

$$R_3 = -\frac{1}{2}\log_2\Big(\sum_n \sum_k W_z^3(n,k)\Big)$$

Third-order entropy is used to avoid problems caused by the presence of the oscillatory cross terms, which are ignored by the Rényi entropy based measure for α = 3. The value of the metric R̄ decreases with increasing blur, as validated by experiments and simulations. The results are validated by applying the metric to a set of blur-degraded images from the LIVE database [21], to test whether it correlates with human judgment. The blur metric values for the selected images from the LIVE database are given in Table 1. The metric is also compared with some well-known image quality metrics, the PSNR and SSIM. The proposed metric matches human judgment well; it can therefore be used for sorting blurred images according to their visual quality in image processing applications where no reference or ground truth is available. Such no-reference or blind metrics are particularly suitable for image fusion and enhancement applications, where ground truth is absent. The calculation of this metric involves a transformation to the frequency domain using the Wigner-Ville transform. However, its complexity is kept low by applying a 1D transform and taking only a few Radon profiles (exploiting the characteristics of the HVS).
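A possible implementation of R̄ is sketched below, assuming scikit-image's `radon` for the profiles. The direct O(N²) discrete WVD and the unit-sum normalization applied before the Rényi entropy are our assumptions; treat this as a sketch rather than the authors' exact implementation.

```python
import numpy as np
from skimage.transform import radon  # assumed available for the Radon profiles

def wigner_ville(z):
    """Discrete WVD: W_z(n, k) = 2 * DFT_{m->k}{ z[n+m] z*[n-m] }."""
    z = np.asarray(z, dtype=complex)
    N = len(z)
    W = np.zeros((N, N))
    for n in range(N):
        half = min(n, N - 1 - n)             # lags limited by the boundaries
        r = np.zeros(N, dtype=complex)
        for m in range(-half, half + 1):
            r[m % N] = z[n + m] * np.conj(z[n - m])
        W[n] = 2.0 * np.real(np.fft.fft(r))  # DFT over the lag variable m
    return W

def renyi3(W):
    """R3 = -1/2 log2(sum W^3); W normalized to unit sum (our assumption)."""
    P = W / W.sum()
    return -0.5 * np.log2(np.sum(P ** 3))    # alpha = 3 attenuates cross terms

def mean_directional_entropy(image, step_deg=10):
    """R-bar: mean R3 over the WVDs of Radon profiles taken 10 degrees apart."""
    thetas = np.arange(0.0, 180.0, step_deg)
    sinogram = radon(image.astype(np.float64), theta=thetas, circle=False)
    entropies = [renyi3(wigner_ville(sinogram[:, i])) for i in range(len(thetas))]
    return float(np.mean(entropies))         # R-bar = (1/p) * sum_i R3(i)
```

With only 18 profiles and a 1D transform per profile, the cost stays far below that of a full 2D time-frequency analysis, which is the complexity argument made above.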

3. MULTI-FOCUS IMAGE FUSION

Image fusion starts by dividing the source images into sub-blocks and then calculating a quantity (activity measure) which measures the saliency of each sub-block. The edge content activity measure is calculated for each sub-block, with the scale r set by the sub-block size. This is followed by applying fusion rules to combine the images into a composite or fused output. The activity measure of each region in the source images depends on the nature of the sources as well as the particular fusion application. For the multi-focus image fusion problem, a desirable activity level is a quantitative metric which increases when image features are more in focus. This justifies the use of the edge content and the mean Radon-Wigner-Ville based entropy measures as good activity measure candidates for multi-focus image fusion applications. The next step in the fusion process is to define the decision and combination maps. The combination module performs the actual combination of the images, which is dictated by the decision map. The construction of the decision map is a key point in the approach because its output d_k governs the combination map C_k. Although the fusion algorithm can be extended to applications with more than two source images, we consider the fusion of two source images for simplicity. The composite image is assembled from the source images for each region as

$$y_F^k(\cdot) = C_k\big(y_A^k(\cdot),\, y_B^k(\cdot),\, d^k(\cdot)\big)$$

where y_F, y_A and y_B stand for the composite and source images respectively, and C_k is the combination map for region k. A simple choice for C_k can be a linear mapping

$$C_k(y_A, y_B, \delta) = w_A(\delta)\, y_A + w_B(\delta)\, y_B$$

where the weights w_A(δ) and w_B(δ) depend on the decision parameter. For our application, the decision map decides that the block with the most salient activity measure is the best choice for the composite output image and tells the combination map to select it:

$$w_S(d^k(\cdot)) = \begin{cases} 1 & \text{if } S = \arg\max_s EC^k \\ 0 & \text{otherwise} \end{cases}$$

where 𝑆 is the index set of the source images. Thus we use aselective rule that picks the most salient component, i.e. theone with largest activity. After applying the combination mapwe get:

$$y_F^k(\cdot) = y_M^k(\cdot), \quad \text{where } M = \arg\max_s EC^k \ \text{ or } \ M = \arg\max_s \bar{R}^k$$

EC is the edge content and R̄ the mean Rényi entropy used as activity measures.
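A minimal sketch of the whole fusion loop under the maximum-selection rule follows. It assumes registered grayscale sources of equal size, non-overlapping square blocks, and the `edge_content` helper sketched in Section 2.1 (the R̄ sketch could be passed instead as the activity measure).

```python
import numpy as np

def fuse_multifocus(img_a, img_b, block=32, activity=edge_content):
    """Block based multi-focus fusion: each output block is copied from the
    source whose block has the larger activity (i.e. the sharper one)."""
    assert img_a.shape == img_b.shape, "source images must be registered"
    fused = np.empty_like(img_a)
    rows, cols = img_a.shape
    for r0 in range(0, rows, block):
        for c0 in range(0, cols, block):
            win = (slice(r0, min(r0 + block, rows)),
                   slice(c0, min(c0 + block, cols)))
            a, b = img_a[win], img_b[win]
            # maximum-selection rule: y_F^k = y_M^k, M = argmax_s activity
            fused[win] = a if activity(a) >= activity(b) else b
    return fused
```

The block size is a free parameter; its effect on fusion quality is discussed in Section 5.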

4. PERFORMANCE METRIC FOR THE MULTI-FOCUS IMAGE FUSION PROBLEM

The performance of most fused images is evaluated subjectively. The subjective methods for image quality assessment are known to be complex, time consuming and expensive. Objective assessment is difficult as it requires the availability of ground truth. Various image fusion algorithms in the literature [15] are evaluated objectively by constructing an ideal output image through a manual 'cut and paste' process [16, 17]. Mean square error is also widely used for evaluation. Mutual information is used in [19] as a no-reference measure to evaluate the performance of image fusion algorithms. Xydeas and Petrovic proposed a metric based on the relative amount of edge information transferred from the source to the fused image. A similarity based image metric for image fusion is defined in [20]. Although performance assessment for the image fusion problem is a difficult task, it can be simplified by defining parameters for particular image fusion tasks. This amounts to defining reduced-reference image quality metrics, where we have certain information about the source images and the application. Based on this idea, we propose that the blur quality metrics defined in our work can be used as quantitative metrics for the performance of a particular image fusion problem. We calculate the edge content and the Radon-Wigner-Ville based mean directional entropy to evaluate the performance of our proposed algorithm.
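In this reduced-reference setting, evaluation reduces to comparing the metric values of the fused output against those of the sources: a fused image whose blocks were selected correctly should score no lower than either partially blurred source (cf. Table 2). A hypothetical check, reusing the sketches from Sections 2 and 3:

```python
# Hypothetical check with the earlier sketches: img_a, img_b are the blurred
# sources and `fused` the output of fuse_multifocus(img_a, img_b).
ec_a, ec_b = edge_content(img_a), edge_content(img_b)
ec_fused = edge_content(fused)
print(f"EC(A) = {ec_a:.2f}, EC(B) = {ec_b:.2f}, EC(fused) = {ec_fused:.2f}")
# Expected ordering for a successful fusion: EC(fused) >= max(EC(A), EC(B)).
```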

5. RESULTS AND DISCUSSION

In this section we present some examples to illustrate the proposed method. We tested the method on a number of multi-focus image pairs, obtaining similar results; two examples are given here to illustrate the fusion process described above. In the first example (the clock images) we use EC, and in the second example (the cup images) the Radon-Wigner-Ville metric is used as the activity measure. Figure 3 shows the two out-of-focus clock and cup image pairs, respectively, and their corresponding composite images. The performance of these methods based on the defined metrics is given in Table 2. From the experiments, we see that the proposed fusion method gives accurate results: it is based on the selection of in-focus blocks from the input sources, thus transferring the relevant information from the sources to the composite image without loss, as the fusion is performed in the spatial and not in the transform domain. Next, we present a comparison of our method to some existing multiresolution (MR) approaches: Burt's method [23] comprising a Laplacian pyramid decomposition, the steerable pyramid with Burt and Kolczynski's combination algorithm [24], and the lifting scheme based MR decomposition on a quincunx lattice [25]. These algorithms are implemented using the MATIFUS toolbox for image fusion [22]. Some other averaging and weighted averaging methodologies using PCA have also been proposed in the literature; however, averaging produces a loss of contrast, so comparisons to averaging methods are not presented in this work. The composite images obtained by the proposed method, the steerable pyramid method, the quincunx lifting scheme and the Laplacian pyramid are shown in Figure 2. Figure 2 shows that the EC based fusion method outperforms the other MR based fusion schemes, followed by the Laplacian pyramid based MR method. Note, for instance, the loss of contrast and the blurring introduced in the edges in Figures 2(b) and (c) for the steerable pyramid and the quincunx lifting schemes. We can also observe a loss of contrast for the Laplacian based MR methodology. This can be attributed to the averaging performed on the approximation coefficients in the MR based fusion methods. Some general requirements for image fusion methods are that no salient information is lost and no artifacts are introduced by the fusion process. The proposed method effectively meets the objectives of image fusion [26] described above, i.e. no information is lost, as the method involves the selection of in-focus blocks from the source images, and no smoothing of edges or reduction in contrast is introduced, unlike in the averaging and transform based fusion methodologies. It is implemented in the spatial domain, so it is computationally much less expensive than other transform based fusion methods. The computation of the blur quality metric involves the computation of the 1D PWVD for the Radon profiles of an image; however, this transformation is only needed to compute the metric. The fusion itself is performed in the spatial domain, so the complexity of the proposed algorithm is lower than that of the transform based methods, which involve forward and inverse 2D transforms. The performance of the proposed method is, however, sensitive to the block size. Some information is lost if the block size is so large that a block overlaps both in-focus and out-of-focus regions of the source images. Moreover, the Radon-Wigner-Ville based blur metric is effective for larger block sizes, because if the block size is too small the entropy values are not meaningful. The Radon based metric is more useful for complementary blurred images, complementary in the sense that the blurring occurs in the left half and the right half, respectively; in this case we can use large block sizes and thus obtain meaningful entropy values.

6. CONCLUSION

A novel block based multi-focus image fusion method is presented in this work. The idea developed is to utilize blur image quality metrics to guide the multi-focus image fusion process in the spatial domain. There is little or no loss of information, as no transformations, inverse transformations or averaging operations are involved. The proposed method is computationally less expensive as it does not involve forward and inverse image transformations. Moreover, the blur metrics proposed here can be used as quantitative measures for the multi-focus image fusion problem as well as for blind sorting of blurred images (according to visual quality) in other image processing applications. The proposed method provides superior results to some existing algorithms, as validated by experiments. However, further investigation on the automatic selection of the block size needs to be performed.

Fig. 1. (a) Original image (b) EC of ten images of the progressively blurred test set in (a)


Fig. 2. Examples of fusion by some existing methods: (a) our method (b) steerable pyramid method (c) quincunx lifting scheme (d) Laplacian pyramid


Coins          Index     Dancers        Index
#146 (0.00)    1.0000    #149 (0.00)    1.0000
# 32 (0.56)    0.9520    #  5 (0.56)    0.9362
#101 (0.84)    0.8840    # 21 (0.90)    0.8167
# 22 (1.45)    0.6213    # 93 (0.96)    0.7869
# 19 (1.70)    0.4862    #128 (1.47)    0.4884
#123 (2.51)    0.0000    # 41 (2.16)    1.0000

Table 1. "Coins" and "Dancers" images taken from the LIVE database for blur degradation. The value in brackets next to the image number is the standard deviation of the Gaussian blur kernel for each image.

Image          Metric EC    Image        Metric R̄
Clock img 1    6.44         Cup img 1    0.03
Clock img 2    4.62         Cup img 2    0.02
Fused img      7.13         Fused img    0.04

Table 2. Values of the proposed metrics for the cup and the clock images.

Fig. 3. (a,b) Multi-focus clock source images (c) Fused image using the mean directional entropy metric R̄ (d,e) Multi-focus cup source images (f) Fused image using EC

Image    Metric EC    Image    Metric EC
Orig     16.3273      Img 5    7.5523
Img 1     9.7140      Img 6    7.3575
Img 2     8.6574      Img 7    7.2009
Img 3     8.1476      Img 8    7.0717
Img 4     7.8036      Img 9    6.9622

Table 3. Values of the blur metric EC for the original image and its progressively blurred versions.

7. REFERENCES

[1] R. C. Luo and M. G. Kay, "Multisensor Integration and Fusion for Intelligent Machines and Systems," Norwood, NJ, USA: Ablex Publishing Corporation, 1995, ISBN 0-89391-863-6.

[2] C. Pohl and J. L. van Genderen, "Multisensor image fusion in remote sensing: concepts, methods and applications," International Journal of Remote Sensing, vol. 19, no. 5, pp. 823-854, 1998.

[3] Z. H. Li, Z. L. Jing, G. Liu, S. Y. Sun, and H. Leung, "Pixel visibility based multifocus image fusion," in Proceedings of the International Conference on Neural Networks and Signal Processing, 2003, pp. 1050-1053.

[4] S. Li, J. T. Kwok, and Y. Wang, "Combination of images with diverse focuses using the spatial frequency," Information Fusion, vol. 2, no. 3, pp. 169-176, Sep. 2001.

[5] M. Qiguang and W. Baoshu, "Multi-focus image fusion using ratio of blurred and original image intensities," in Proceedings of SPIE, the International Society for Optical Engineering, Visual Information Processing XIV, Orlando, Florida, USA, 29-30 March 2005.

[6] B. Yang and S. Li, "Multi-focus image fusion based on spatial frequency and morphological operators," Chinese Optics Letters, vol. 5, no. 8, pp. 452-453, 2007.

[7] W. Wang, P. Shui, and G. Song, "Multifocus image fusion in wavelet domain," in Proceedings of the International Conference on Machine Learning and Cybernetics, vol. 5, pp. 2887-2890, Nov. 2003.

[8] D. Lu, L. Wang, and J. Lv, "New multi-focus image fusion scheme based on wavelet contrast," in Proceedings of Signal and Image Processing (534), 2006.

[9] L. Yang, B. Guo, and W. Ni, "Multifocus image fusion algorithm based on contourlet decomposition and region statistics," in Fourth International Conference on Image and Graphics (ICIG), Chengdu, Sichuan, China, 2007, pp. 707-712.

[10] C. Toxqui-Quitl, A. Padilla-Vivanco, and G. Urcid-Serrano, "Multifocus image fusion using the Haar wavelet transform," in Applications of Digital Image Processing XXVII, Proceedings of the SPIE, vol. 5558, pp. 796-803, Aug. 2004, USA.

[11] S. Li, J. T. Kwok, and Y. Wang, "Multifocus image fusion using artificial neural networks," Pattern Recognition Letters, vol. 23, no. 8, pp. 985-997, 2002.

[12] A. Turiel and N. Parga, "The multifractal structure of contrast changes in natural images: from sharp edges to textures," Neural Computation, vol. 12, no. 4, pp. 763-793, Apr. 2000.

[13] S. Gabarda and G. Cristobal, "Blind image quality assessment through anisotropy," JOSA A, vol. 24, no. 12, pp. B42-B51, 2007.

[14] A. Saleem, A. Beghdadi, A. Chetouani, and B. Boashash, "A Radon-Wigner-Ville based image dissimilarity measure," in IEEE Symposium on Computational Intelligence for Multimedia, CIMSIVP 2011, Paris, April 11-15, 2011.

[15] G. Piella, "A general framework for multiresolution image fusion: from pixels to regions," Information Fusion, vol. 4, no. 4, pp. 255-280, 2003.

[16] H. Li, B. S. Manjunath, and S. K. Mitra, "Multisensor image fusion using the wavelet transform," Graphical Models and Image Processing, vol. 57, no. 3, pp. 235-245, 1995.

[17] O. Rockinger, "Image sequence fusion using a shift invariant wavelet transform," in Proc. IEEE International Conference on Image Processing, Washington, DC, 1997, pp. 288-291.

[18] B. Boashash, Time-Frequency Signal Analysis and Processing: A Comprehensive Reference, Elsevier Science, Oxford, 2003.

[19] G. H. Qu, D. L. Zhang, and P. F. Yan, "Information measure for performance of image fusion," Electronics Letters, vol. 38, no. 7, pp. 313-315, 2002.

[20] Z. Wang and A. C. Bovik, "A universal image quality index," IEEE Signal Processing Letters, vol. 9, no. 3, pp. 81-84, 2002.

[21] http://live.ece.utexas.edu/research/quality/

[22] http://homepages.cwi.nl/~pauldz/index.html

[23] P. J. Burt, "The pyramid as a structure for efficient computation," in Multiresolution Image Processing and Analysis, A. Rosenfeld, Ed., Springer-Verlag, Berlin, 1984.

[24] P. J. Burt and R. J. Kolczynski, "Enhanced image capture through fusion," in Proceedings of the 4th International Conference on Computer Vision, Berlin, Germany, May 1993, pp. 173-182.

[25] H. J. A. M. Heijmans and J. Goutsias, "Nonlinear multiresolution signal decomposition schemes. Part II: Morphological wavelets," IEEE Transactions on Image Processing, vol. 9, no. 11, pp. 1897-1913, 2000.

[26] O. Rockinger, "Pixel-level fusion of image sequences using wavelet frames," in Proceedings of the 16th Leeds Annual Statistical Research Workshop, Leeds University Press, 1996, pp. 149-154.

[27] J. A. Richards, "Thematic mapping from multitemporal image data using the principal components transformation," Remote Sensing of Environment, vol. 16, no. 1, pp. 36-46, 1984.
