
Medical Image Fusion Based on Redundancy DWT and Mamdani Type Min-sum Mean-of-max

Techniques with Quantitative Analysis

Chandra Prakash SCSE

VIT University, Vellore [email protected]

S Rajkumar SCSE VIT University, Vellore

[email protected]

P.V.S.S.R. Chandra Mouli SCSE

VIT University, Vellore [email protected]

Abstract— Image fusion is a process in which multiple images are combined to form a single fused image that is more informative than any of its inputs. In medical imaging, fusion supports efficient disease diagnosis. This paper presents two multimodality medical image fusion techniques and assesses their results with quantitative metrics. First, two registered images, CT (anatomical information) and MRI-T2 (functional information), are taken as input. The fusion techniques, Mamdani type minimum-sum-mean of maximum (MIN-SUM-MOM) and Redundancy Discrete Wavelet Transform (RDWT), are then applied to the input images, and the resultant fused images are analyzed with quantitative metrics, namely Overall Cross Entropy (OCE), Peak Signal-to-Noise Ratio (PSNR), Signal-to-Noise Ratio (SNR), Structural Similarity Index (SSIM), and Mutual Information (MI). From the derived results it is inferred that Mamdani type MIN-SUM-MOM is more productive than RDWT, and that both fusion techniques provide more information than the input images, as confirmed by all the metrics.

Keywords— CT, MRI-T2, MIN-SUM-MOM algorithm, Redundancy Discrete Wavelet Transform, medical image fusion, Quantitative Metrics, registered images

I. INTRODUCTION

The medical imaging field demands images with high resolution and high information content about bones and tissues for reliable disease diagnosis [10]. This is not possible with single-modality medical images: X-ray computed tomography (CT) is suited mainly to capturing bone structure, MRI gives clear information about soft tissues, and so on. In practical scenarios, complementary information from different modalities is required for diagnosis. In this regard, medical image fusion is an emerging technique that has attracted researchers to assist doctors by fusing images and retrieving relevant information from multiple modalities such as CT, MRI, fMRI, SPECT, and PET.

In the area of medical image fusion there are several fusion techniques, but each has certain limitations [1]. For example, the Contrast Pyramid cannot retain sufficient information from its source images, the Ratio Pyramid introduces false information that never existed in the original images, and the Morphological Pyramid creates many false edges. Multimodality medical image fusion has therefore emerged as a promising research area in recent years. Image fusion aims at integrating information from the source images on a per-pixel basis, thus obtaining more precise and complete information about an object.

The fusion process can be carried out at various levels. In pixel-level image fusion, the fused image retains all relevant information present in the original images with no artifacts or inconsistencies. Pixel-level image fusion is classified into spatial domain fusion and transform domain fusion. Spatial domain fusion is applied directly to the source images; a simple averaging technique reduces the signal-to-noise ratio of the resultant image, and spatial distortion persists in the fused image. Transform domain fusion improves on this: the input images are first decomposed into transform coefficients, the fusion rule is applied, and a fusion decision map is obtained. Inverse transformation of this decision map yields the fused image, which carries the details of the source images with reduced spatial distortion. For this reason, the majority of earlier fusion techniques were based on wavelet transformation. Among wavelet transforms, however, the DWT suffers from shift variance and additive noise [6]. These problems can be overcome using the RDWT and the Mamdani type MIN-SUM-MOM fusion techniques.

The RDWT technique preserves the overall information (exact edge and spectral content) of the input images without introducing spatial distortion. Here, the low-pass (approximation) and high-pass (detail) subbands of the input images are fused using an averaging method and an entropy-based method, respectively.

The Mamdani type MIN-SUM-MOM technique uses the MIN algorithm for the fuzzy implication operation, the SUM algorithm for calculating the membership degree of the derived output fuzzy sets, and the MOM (mean of maximum) algorithm for defuzzification of the output set [3].

The performances of the different fusion techniques are evaluated using quantitative metrics, namely OCE, PSNR, SNR, SSIM and MI.

The rest of this paper is organized as follows. Section II briefly discusses the system design;

978-1-4673-0255-5/12/$31.00 ©2012 IEEE


Section III deals with the experimental results and the evaluation of the proposed methods based on the quantitative metrics; Section IV summarizes the discussion and future work.

II. SYSTEM DESIGN

In the proposed system, two medical images of different modality, CT and MRI-T2, are taken as input. The registered input images are fused using the Mamdani type MIN-SUM-MOM and RDWT techniques. Finally, the fused image information is analyzed using the quantitative metrics. The overall system design is shown in Fig. 1.

The dataset was collected from the Indian Scan Center, Madurai, Tamil Nadu, India. The images were acquired from the brain in different modalities, CT and MRI (MR-T2). Each set of images was taken from the same patient in different modalities [10].

Figure 1. An overall system design

A. Mamdani type minimum-sum-mean of maximum

Mamdani type minimum-sum-mean of maximum is a pixel-based fusion technique implemented in three stages. In the first stage, fuzzy sets are used to describe the gray levels of the input images and the fuzzy inference system (FIS) is built. In the second stage, fuzzy inference is carried out according to the fuzzy rules and the membership degree of each output pixel is obtained. Finally, the output gray-level value is calculated through defuzzification and the fused image is obtained [2].

1) Fuzzification of inputs and setting up of membership function:

The input grayscale images have pixel values in the range 0-255, giving a total of 256 gray levels. These gray levels are divided into a fuzzy set {VS, S, M, L, VL} with five membership functions: VS – very small; S – small; M – medium; L – large; VL – very large. The output image also has 256 gray levels and uses the same fuzzy set. For simulation, the triangular membership function is used because it requires less computation than Gaussian or trapezoidal membership functions. The input and output membership functions are then built as shown in Fig. 2 and Fig. 3.
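As a sketch of this fuzzification step, the five triangular membership functions can be written in a few lines of Python. The breakpoints below are illustrative assumptions only; the paper shows its membership functions graphically in Fig. 2 and does not list exact values.

```python
def triangular(x, a, b, c):
    """Triangular membership function with feet at a, c and peak at b."""
    if x < a or x > c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a) if b != a else 1.0
    return (c - x) / (c - b) if c != b else 1.0

# Five overlapping sets covering the 0-255 gray range (assumed breakpoints).
MF_PARAMS = {
    "VS": (0, 0, 64),
    "S":  (0, 64, 128),
    "M":  (64, 128, 192),
    "L":  (128, 192, 255),
    "VL": (192, 255, 255),
}

def fuzzify(gray):
    """Membership degree of a gray level in each of the five fuzzy sets."""
    return {name: triangular(gray, *p) for name, p in MF_PARAMS.items()}
```

With these breakpoints a mid-gray pixel belongs fully to M, while a pixel at 96 belongs half to S and half to M, which is the overlap behaviour the figure depicts.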

Figure 2. The Inputs Membership Function Curves

Figure 3. The Output Membership Function Curves

2) Fuzzy Rules:

In this paper, with Mamdani-type fuzzy inference the rules are of the “IF-THEN” form as given below:

• If (CT is VS) and (MRI is VS) then fused image is VS.

• If (CT is VS) and (MRI is S) then fused image is S.

• If (CT is VS) and (MRI is M) then fused image is M.

• If (CT is VS) and (MRI is L) then fused image is L.

• If (CT is S) and (MRI is VS) then fused image is S.

• If (CT is S) and (MRI is S) then fused image is S.

• If (CT is S) and (MRI is M) then fused image is M.

• If (CT is S) and (MRI is L) then fused image is L.

• If (CT is M) and (MRI is VS) then fused image is M.

• If (CT is M) and (MRI is S) then fused image is M.

• If (CT is M) and (MRI is M) then fused image is M.

• If (CT is M) and (MRI is L) then fused image is L.

• If (CT is L) and (MRI is VS) then fused image is L.

2012 International Conference on Recent Advances in Computing and Software Systems 55


• If (CT is L) and (MRI is S) then fused image is L.

• If (CT is L) and (MRI is M) then fused image is L.

• If (CT is L) and (MRI is L) then fused image is L.

• If (CT is VL) or (MRI is VL) then fused image is VL.
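The seventeen rules above can be encoded compactly as a lookup. Apart from the final OR rule, the fused label is simply the larger of the two input labels; this is an observation about the rule table, and the function below is our sketch, not code from the paper.

```python
SETS = ["VS", "S", "M", "L", "VL"]
RANK = {s: i for i, s in enumerate(SETS)}

def rule_output(ct_set, mri_set):
    """Consequent fuzzy set for a pair of antecedent labels."""
    if ct_set == "VL" or mri_set == "VL":
        return "VL"  # rule 17 uses OR, so any VL input dominates
    # Rules 1-16: the fused label is the larger of the two labels.
    return SETS[max(RANK[ct_set], RANK[mri_set])]
```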

3) MIN-SUM-MOM algorithm:

On the basis of the MIN-SUM-MOM algorithm, an FIS is created with two inputs, one output and the seventeen fuzzy rules. Using the MIN-SUM-MOM algorithm, fuzzy inference is carried out according to the given rules and the membership function of the output fuzzy set is obtained.

In this paper the fuzzy rules have two antecedents, whose relationship is defined by logical AND or logical OR. If it is logical AND, the MIN algorithm is employed for inferring the output membership degree; otherwise the MAX algorithm is used [3].

Let k1 denote the gray value of the input image CT, k2 the gray value of the other input image MRI, and t the gray value of the output fused image. For example, when the rules with antecedents (VS, L), (S, L) and (VL, VL) fire, we have:

μF1(t) = min{μVS(k1), μL(k2)} (1)

μF2(t) = min{μS(k1), μL(k2)} (2)

μF3(t) = max{μVL(k1), μVL(k2)} (3)

where μVS(k1) denotes the membership degree to which the element k1 belongs to the fuzzy subset VS. Summing the above three results, the membership function of the total output fuzzy subset F is obtained as

μF(t) = 1, if μF1(t) + μF2(t) + μF3(t) ≥ 1;
μF(t) = μF1(t) + μF2(t) + μF3(t), otherwise. (4)

To get the gray value of the fused image, a defuzzification operation is used; in this paper MOM defuzzification is applied. Define the set

r(F) = { t ∈ T | μF(t) = sup_{t∈T} μF(t) } (5)

where r(F) is the set of gray levels at which μF(t) attains its supremum over T. The MOM defuzzification is then given by

t* = ∫_{r(F)} t dt / ∫_{r(F)} dt (6)

where the integrals are conventional integrals when r(F) is continuous, and become sums when r(F) is discrete.
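Putting the three stages together for a single pixel pair gives the following minimal MIN-SUM-MOM sketch. The membership breakpoints and the compact rule table are the same illustrative assumptions used earlier; the aggregation clips the rule-strength sum at 1 per eq. (4) and averages the maximizing gray levels per eqs. (5)-(6).

```python
SETS = {"VS": (0, 0, 64), "S": (0, 64, 128), "M": (64, 128, 192),
        "L": (128, 192, 255), "VL": (192, 255, 255)}
ORDER = list(SETS)

def mu(name, x):
    """Triangular membership (assumed breakpoints)."""
    a, b, c = SETS[name]
    if x < a or x > c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a) if b != a else 1.0
    return (c - x) / (c - b) if c != b else 1.0

def rule_out(s1, s2):
    """Fused label: VL if either input is VL (OR rule), else the larger label."""
    if "VL" in (s1, s2):
        return "VL"
    return ORDER[max(ORDER.index(s1), ORDER.index(s2))]

def fuse_pixel(k1, k2):
    """MIN for each rule, SUM of the clipped outputs, MOM defuzzification."""
    strength = {name: 0.0 for name in SETS}
    for s1 in SETS:
        for s2 in SETS:
            # MIN implication, summed per consequent set
            strength[rule_out(s1, s2)] += min(mu(s1, k1), mu(s2, k2))
    # Aggregate output membership over all gray levels, clipped at 1 (eq. 4)
    agg = [min(1.0, sum(min(strength[n], mu(n, t)) for n in SETS))
           for t in range(256)]
    peak = max(agg)
    winners = [t for t, v in enumerate(agg) if v == peak]  # r(F), eq. (5)
    return sum(winners) / len(winners)                     # MOM, eq. (6)
```

Applying `fuse_pixel` to every pixel pair of two registered images yields the fused image of this section.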

B. Redundancy Discrete Wavelet Transform

The RDWT fusion technique is performed on two registered images of different modality (CT and MRI-T2).

At the outset, consider I^i and I^ii as the two registered images of different modality, CT and MRI-T2. Both input images are decomposed through three levels of RDWT decomposition using Daubechies filters, producing approximation and detail wavelet bands as shown in Fig. 4 [5].

Figure 4. Three Levels of Decomposition

In the first stage, image I^i is decomposed into I^i_a, I^i_b, I^i_c, I^i_d and image I^ii is decomposed into I^ii_a, I^ii_b, I^ii_c, I^ii_d, the corresponding RDWT subbands. To utilize the features of both images, the coefficient values of the two approximation subbands are averaged:

IM_a = mean(I^i_a, I^ii_a) (7)

where IM_a is the approximation band of the fused image.

In the second stage, the remaining three detail subbands LH, HL, HH are divided into blocks of size 3×3 and the entropy of each block is calculated:

e^j_pq = ln( (1/n²) Σ_{x,y=1}^{3,3} (μ^j_pq − I^j_pq(x,y))² / σ^j_pq ) (8)

where p = b, c, d denotes the RDWT subband, n = 3 is the size of each block, q is the block number, and j = i, ii distinguishes the two multimodal medical images I^i and I^ii. The mean and standard deviation of the RDWT coefficients of a block are μ^j_pq and σ^j_pq. These entropy values are used to generate the detail subbands IM_b, IM_c, IM_d of the fused image, as shown in (9): the fused block IM_pq is taken from image I^i if the entropy of the corresponding block of I^i is greater than that of I^ii, and from I^ii otherwise [9].

IM_pq = I^i_pq, if e^i_pq > e^ii_pq; I^ii_pq, otherwise (9)

Finally, the inverse RDWT is applied to all the subbands to generate the resultant fused medical image IM:

IM = IRDWT(IM_a, IM_b, IM_c, IM_d) (10)
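The fusion rules of eqs. (7)-(9) can be sketched independently of the wavelet step, which could be supplied by any redundant (stationary) wavelet implementation. The block-entropy expression below follows our reconstruction of eq. (8), so treat it as an assumption rather than the authors' exact code; subbands are plain 2-D lists of coefficients.

```python
import math

def fuse_approximation(A1, A2):
    """Eq. (7): average the two approximation subbands element-wise."""
    return [[(x + y) / 2 for x, y in zip(r1, r2)] for r1, r2 in zip(A1, A2)]

def block_entropy(block, eps=1e-12):
    """Eq. (8) (as reconstructed): log of the block's spread about its mean,
    normalized by the block standard deviation and the block area n^2."""
    n = len(block)
    flat = [v for row in block for v in row]
    mean = sum(flat) / len(flat)
    sigma = math.sqrt(sum((mean - v) ** 2 for v in flat) / len(flat))
    return math.log(sum((mean - v) ** 2 for v in flat) / (n * n * sigma + eps) + eps)

def fuse_detail(S1, S2, n=3):
    """Eq. (9): per 3x3 block, keep the block with the larger entropy."""
    out = [row[:] for row in S1]
    for r in range(0, len(S1) - n + 1, n):
        for c in range(0, len(S1[0]) - n + 1, n):
            b1 = [S1[i][c:c + n] for i in range(r, r + n)]
            b2 = [S2[i][c:c + n] for i in range(r, r + n)]
            if block_entropy(b2) > block_entropy(b1):
                for i in range(n):
                    out[r + i][c:c + n] = b2[i]
    return out
```

The selection rule favours the image whose local block shows more coefficient activity, which is what transfers edge detail into the fused subbands.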

C. Quantitative Analysis



A performance evaluation method is needed to judge the result of any fusion technique in terms of qualitative and quantitative analysis. The quantitative assessment is done on the fused images by means of various quality measures. It helps verify that the fused images are more informative and that the applied fusion techniques are satisfactory. The following subsections explain the quantitative metrics used in the analysis of the proposed system.

1) Overall Cross Entropy (OCE): It measures the difference between the input images and the fused image; the lower the value, the better the fusion result [10]. It is given as

OCE (IA,IB,F) = (CE(IA,F)+CE(IB,F)) / 2 (11)

where IA, IB are the input images of different modality, F is the fused image, and CE(IA,F) and CE(IB,F) are the cross entropies of the input images with the fused image.
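The paper does not spell out CE(·,·); a common choice, used here as an assumption, is the cross entropy between the normalized gray-level histograms of the two images:

```python
import math

def histogram(img, levels=256):
    """Normalized gray-level histogram of a 2-D image (list of integer rows)."""
    counts = [0] * levels
    total = 0
    for row in img:
        for v in row:
            counts[v] += 1
            total += 1
    return [c / total for c in counts]

def cross_entropy(p, q, eps=1e-12):
    """Kullback-Leibler style cross entropy between two histograms."""
    return sum(pi * math.log2((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def oce(ia, ib, f):
    """Overall cross entropy, eq. (11)."""
    pf = histogram(f)
    return (cross_entropy(histogram(ia), pf) + cross_entropy(histogram(ib), pf)) / 2
```

A fused image whose gray-level distribution matches both inputs yields an OCE near zero.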

2) Peak Signal to Noise Ratio (PSNR): PSNR is used to measure the quality of the image with respect to the original input image. It is defined as given below [7]:

MSE = (1/pq) Σ_{i=0}^{p−1} Σ_{j=0}^{q−1} [A(i,j) − B(i,j)]² (12)

PSNR = 10 log10 (MAX² / MSE) (13)

where MAX is the maximum pixel value in an image, p and q are the height and width of the image, A(i,j) is the value of the input image and B(i,j) is the value of the fused image.
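Eqs. (12)-(13) translate directly; MAX is taken as 255 for 8-bit images in this sketch:

```python
import math

def mse(a, b):
    """Eq. (12): mean squared error between two same-sized images."""
    p, q = len(a), len(a[0])
    return sum((a[i][j] - b[i][j]) ** 2 for i in range(p) for j in range(q)) / (p * q)

def psnr(a, b, max_val=255.0):
    """Eq. (13): peak signal-to-noise ratio in dB; infinite if images match."""
    m = mse(a, b)
    return float("inf") if m == 0 else 10 * math.log10(max_val ** 2 / m)
```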

3) Signal to Noise Ratio (SNR): It is defined as the ratio of mean pixel value to that of standard deviation of the corresponding pixel values [8].

SNR = Mean / Standard Deviation (14)

It gives the contrast information of an image. Higher value indicates more contrast.
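Eq. (14) is a global mean-over-standard-deviation measure and can be sketched as:

```python
import math

def snr(img):
    """Eq. (14): ratio of the mean pixel value to the standard deviation."""
    flat = [v for row in img for v in row]
    mean = sum(flat) / len(flat)
    std = math.sqrt(sum((v - mean) ** 2 for v in flat) / len(flat))
    return mean / std if std else float("inf")
```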

4) Structural Similarity Index (SSIM): It relates the structural information changes between the images to the perceived distortion of the images. It is defined as a measure of the similarity between two images A and B by the expression [8]

SSIM(A,B) = ((2μAμB + k1)(2σAB + k2)) / ((μA² + μB² + k1)(σA² + σB² + k2)) (15)

where μA and μB denote the mean intensities, σA and σB the standard deviations, σAB the covariance of A and B, and k1 and k2 are constants.
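A global (whole-image) evaluation of eq. (15) can be sketched as follows. The constants k1 and k2 are not specified in the paper, so the common choices (0.01·255)² and (0.03·255)² are assumed here:

```python
def ssim(a, b, k1=(0.01 * 255) ** 2, k2=(0.03 * 255) ** 2):
    """Eq. (15) computed globally over two same-sized images."""
    fa = [v for row in a for v in row]
    fb = [v for row in b for v in row]
    n = len(fa)
    ma, mb = sum(fa) / n, sum(fb) / n
    va = sum((x - ma) ** 2 for x in fa) / n
    vb = sum((x - mb) ** 2 for x in fb) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(fa, fb)) / n
    return ((2 * ma * mb + k1) * (2 * cov + k2)) / \
           ((ma ** 2 + mb ** 2 + k1) * (va + vb + k2))
```

Identical images score exactly 1, and the score drops as structure diverges.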

5) Mutual Information (MI) : Let A and B be the two registered multimodal images, then mutual information is given as

M(A,B) = K(A) + K(B) - K(A,B) (16)

where K(A) denotes the entropy of image A, K(B) the entropy of image B, and K(A,B) the joint entropy. Registering A with respect to B maximizes the mutual information between A and B, maximizing the entropies K(A) and K(B) while minimizing the joint entropy K(A,B). A higher value of M(A,B) indicates a better fusion algorithm [4].
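Eq. (16) can be evaluated from gray-level counts; the sketch below uses Shannon entropies of the marginal and joint histograms:

```python
import math

def shannon(p):
    """Shannon entropy (bits) of a probability list."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

def mutual_information(a, b):
    """Eq. (16): M(A,B) = K(A) + K(B) - K(A,B) from gray-level counts."""
    ca, cb, cab = {}, {}, {}
    total = 0
    for ra, rb in zip(a, b):
        for va, vb in zip(ra, rb):
            ca[va] = ca.get(va, 0) + 1
            cb[vb] = cb.get(vb, 0) + 1
            cab[(va, vb)] = cab.get((va, vb), 0) + 1
            total += 1
    ha = shannon([c / total for c in ca.values()])
    hb = shannon([c / total for c in cb.values()])
    hab = shannon([c / total for c in cab.values()])
    return ha + hb - hab
```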

III. EXPERIMENTAL RESULTS

The following steps in sequence explain the fusion of input images with their performance analysis.

Step 1: Input images of CT and MRI (MR-T2) are considered as one group. All images have the same size of 256 × 256 pixels with 256 gray levels. In total, six groups of images are used in the analysis. Sample input images (Dataset 1 and Dataset 4) are shown in Fig. 5 and Fig. 6.

CT MRI (MR-T2)

Figure 5. Original Multimodality Image Dataset 1.

CT MRI (MR-T2)

Figure 6. Original Multimodality Image Dataset 4.

Step 2: The input images of each dataset are fused using Mamdani type MIN-SUM-MOM and the Redundant Discrete Wavelet Transform; samples are shown in Fig. 7 and Fig. 8.

RDWT - 1 RDWT - 2

Figure 7. Fused Images of Dataset 1



RDWT - 3 MSM

Figure 8. Fused Images of Dataset 1

Step 3: The performance of the fused images is analysed with the quantitative metrics discussed in Section II. The results of all metrics are shown in Table I and as graphs in Fig. 9 to Fig. 13, where in each graph the x-axis specifies the dataset (1-6) and the y-axis denotes the values of the metric derived from the corresponding fusion techniques.

TABLE I
COMPARISON OF IMAGE FUSION ALGORITHMS FOR CT AND MRI-T2

Metric  Algorithm  Dataset1  Dataset2  Dataset3  Dataset4  Dataset5  Dataset6
OCE     RDWT-1     1.0705    3.7607    3.7873    2.7278    2.9605    2.2524
        RDWT-2     1.7954    3.2571    3.1154    3.7660    3.7405    3.0920
        RDWT-3     1.3620    3.0505    2.7211    3.3382    3.3529    3.3516
        MSM        0.6711    1.1062    0.8634    0.9497    0.9794    0.7773
PSNR    RDWT-1     47.6414   47.7731   47.8608   47.8931   48.0332   47.8314
        RDWT-2     45.9942   45.9836   46.2343   46.1345   46.1042   46.2141
        RDWT-3     48.3606   48.0577   48.3422   48.3951   48.1451   49.2433
        MSM        56.5536   56.2715   56.7041   56.7312   56.7933   57.1547
SNR     RDWT-1     0.4576    0.4325    0.4255    0.4384    0.4540    0.4507
        RDWT-2     0.4694    0.4405    0.4313    0.4467    0.4624    0.4335
        RDWT-3     0.4441    0.4048    0.4065    0.4236    0.4362    0.4390
        MSM        0.4812    0.4532    0.4413    0.4434    0.4614    0.4641
SSIM    RDWT-1     54.6852   53.0165   54.8243   56.1597   55.5667   54.6788
        RDWT-2     59.4354   58.6767   60.1493   61.3857   59.8026   59.6076
        RDWT-3     60.7342   59.0593   60.1309   62.0662   60.9873   61.5968
        MSM        64.7323   66.7851   70.6996   68.2089   66.8447   68.7561
MI      RDWT-1     60.7772   60.0504   60.9053   60.5664   60.2753   60.8612
        RDWT-2     60.4854   59.8624   60.5463   60.2553   60.0164   60.5293
        RDWT-3     60.1853   59.6415   60.2893   59.9942   59.9614   60.2493
        MSM        62.9324   63.0753   62.2324   62.0294   62.5595   63.1621

Figure 9. Graph-1 Overall Cross Entropy Calculation

Figure 10. Graph-2 Peak Signal to Noise Ratio Calculation



Figure 11. Graph-3 Signal to Noise Ratio Calculation

Figure 12. Graph-4 Structural Similarity Index Calculation

Figure 13. Graph-5 Mutual Information Calculation

IV. CONCLUSIONS AND FUTURE WORK

In this paper, we applied two different fusion techniques, analyzed with quantitative metrics on six sets of brain images acquired from CT and MRI-T2. The experimental results show that Mamdani type MIN-SUM-MOM outperforms RDWT from the visual perspective and is also more satisfactory as verified by the quantitative metrics. From Table I and Figs. 9-13 we find that the lower OCE values of MIN-SUM-MOM indicate better fused images; its higher PSNR values signify better image quality; its higher SNR values confirm that the contrast of the fused images is greater; its higher SSIM values confirm that the fused images are more similar to the original inputs; and its higher MI values suggest that MIN-SUM-MOM gives better fusion results than RDWT. Thus the fused image obtained from MIN-SUM-MOM is more informative and better suited to clinical use and efficient retrieval, and the fused images are also obtained quickly.

In the future we would like to extend these fusion techniques to fuse multi-sensor images (CT, MRI, PET and SPECT), motion images and color images, providing better fusion results for real-time scenarios.

ACKNOWLEDGMENT

The authors would like to thank the Indian Scan Center, Madurai, for providing the brain images of the same patient in different modalities.

REFERENCES

[1] Firooz Sadjadi, "Comparative Image Fusion Analysis", IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 3, June 2005.

[2] Harpreet Singh, Jyoti Raj, Gulsheen Kaur, Thomas Meitzler, "Image Fusion using Fuzzy Logic and Applications", vol. XXVIII-1749, pp. 25-29, July 2004.

[3] Jionghua Teng, Suhuan Wang, Jingzhou Zhang, Xue Wang, "Fusion Algorithm of Medical Images Based on Fuzzy Logic", Seventh International Conference on Fuzzy Systems and Knowledge Discovery, August 2010.

[4] Josien P. W. Pluim, J. B. Antoine Maintz and Max A. Viergever, "Mutual Information based Registration of Medical Images - a Survey", IEEE Transactions on Medical Imaging, vol. XX, pp. 986-1004, August 2003.

[5] Ligia Chiorean, Mircea-Florin Vaida, "Medical Image Fusion Based on Discrete Wavelet Transform using Java Technology", 31st International Conference on Information Technology Interfaces, pp. 22-25, June 2009.

[6] Oliver Rockinger, "Image Sequence Fusion Using a Shift-Invariant Wavelet Transform", International Conference on Image Processing, vol. 3, p. 288, October 1997.

[7] Pamela C. Cosman, Richard A. Olshen, "Evaluating Quality of Compressed Medical Images: SNR, Subjective Rating and Diagnostic Accuracy", Proceedings of the IEEE, vol. 82, pp. 919-932, June 1994.

[8] R. Maruthi, R. M. Suresh, "Metrics for Measuring the Quality of Fused Images", International Conference on Computational Intelligence and Multimedia Applications, December 2007.

[9] Richa Singh, Mayank Vatsa, "Multimodal Medical Image Fusion using Redundant Discrete Wavelet Transform", Proceedings of the Seventh International Conference on Advances in Pattern Recognition, pp. 232-235, February 2009.

[10] S. Rajkumar, S. Kavitha, "Redundancy Discrete Wavelet Transform and Contourlet Transform for Multimodality Medical Image Fusion with Quantitative Analysis", 3rd International Conference on Emerging Trends in Engineering and Technology, November 2010.
