Transcript
Page 1

Local and Non-Metric Similarities Between Images

-Why, How and What for ?

Frédéric Morain-Nicolier, CReSTIC - URCA - Troyes

2012.09.25

1

Page 2

Outline

• Local similarities
• Non-metric similarities
• Conclusion :
  • Local non-metric similarities ?
  • solutions
  • open problems

2

Page 3

Local Similarities

3

Page 4

Local Similarities, Why ?

4

Page 5

3D Acquisition

5

?

[Troyes Library - Early Renaissance Collection - Bibliothèque Bleue]

Page 6

3D Acquisition

6

[Thanks to Le2I - Le Creusot Team]

Page 7

3D Acquisition : Economical Solution

7

Page 8

3D Acquisition : An Example

8

[Excerpt from "Analysis and Conservation of Ancient Wooden Stamps", The Imaging Science Journal, Vol. 54, 2006:]

For a stamp, the printing zones are high-elevation ones; the high grey-level pixels in the range images have to be binarized as black (pixel = 0). The non-printing zones must therefore be binarized as white (pixel = 1). A modified Niblack's algorithm is used to binarize range images [11,12]. The main task is to adapt the threshold over the image. The threshold is determined from the local mean and the local standard deviation, computed on a restricted neighbourhood for each pixel.

Define:
m  the local mean computed on a [w × w] neighbourhood
M  the global mean computed on the complete range image
s  the local standard deviation computed on a [w × w] neighbourhood
t  the local threshold for each pixel
k  a user-defined parameter.

The binarization is computed as follows:

For each pixel of the range image Do
  If m > M Then t = m + k*s (the neighbourhood is in a high-elevation zone)
  Else t = M + k*s (M > m and the neighbourhood is in a low zone)
  End If
  If pixel > t Then pixel = 0
  Else pixel = 1
  End If
End Do

The local threshold computation can be summarized by equation (1):

t = max(M, m) + k · s    (1)

By modifying the k parameter, it is possible to simulate the inking and printing process for various conditions (ink quantity, paper quality or humidity, ink fluidity, exerted pressure, etc.). Figure 8 shows the same range image as that used in Fig. 7, binarized for k = 0.1–0.8. Note the 'inking' variations produced by the k value modification.

3.3 Comparison between virtual and real stamping
In order to test the proposed method, the results of virtual printing were compared with real ones. For that, engraved wooden stamps whose actual printings are already known were used. Figures 9–12 show the ...

Fig. 6: 3D view of acquired points of the 'Pisces' stamp: point cloud acquired with a Minolta scanner.
Fig. 7: Range image ('Pisces' stamp) computed from projection of the point cloud.
Fig. 8: Example of virtual stamping: the range image (see Fig. 7) is binarized according to the adaptive threshold; the variation in k (0.1–0.8) visually corresponds to the amount of ink.
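To make the rule above concrete, here is a minimal NumPy/SciPy sketch of the virtual-stamping binarization, not the authors' code: it assumes the range image is a 2-D float array, uses a uniform w × w neighbourhood for the local statistics, and the function name `virtual_stamping` and the default values of `w` and `k` are illustrative.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def virtual_stamping(range_img, w=15, k=0.4):
    """Binarize a range image with the modified Niblack rule t = max(M, m) + k*s (eq. (1)).

    High elevations (printing zones) become black (0), the rest white (1).
    w and k are illustrative defaults; k plays the role of the simulated ink quantity.
    """
    img = range_img.astype(float)
    M = img.mean()                                   # global mean over the whole range image
    m = uniform_filter(img, size=w)                  # local mean on a w x w neighbourhood
    m2 = uniform_filter(img ** 2, size=w)            # local mean of squares
    s = np.sqrt(np.maximum(m2 - m ** 2, 0.0))        # local standard deviation
    t = np.maximum(M, m) + k * s                     # local threshold, eq. (1)
    return np.where(img > t, 0, 1).astype(np.uint8)  # pixel = 0 (black) where elevation > t
```

Sweeping k from 0.1 to 0.8, as in Fig. 8, then simply amounts to calling this function several times on the same range image with different k values.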


Stamp 3D model Range Image

Page 9

3D Acquisition ➙ Virtual Printing

9

local threshold

Page 10

3D : Virtual vs. Real Fidelity ?

10

Virtual / Real

Resolution !

Page 11

A Local Comparison is Needed !

11

Page 12

A Local Comparison is Needed !

12

Page 13

A Local Comparison is Needed !

13

small diffs

big localised diffs

scattered diffs

Page 14

A Local Comparison is Needed !

14

Page 15

Local Similarities, How ?

15

Page 16

Local Dissimilarity Map (LDM)

16

Figure 7.4 – Comparison of two groups of letters. The LDM image (CDL, "carte des dissimilarités locales", in the French text) clearly shows where the dissimilarities are located; the comparison is also quantified.

Figure 7.5 – The ten-errors game. The images to compare are A and B. Image C is the pixel-to-pixel difference |A − B|. Image D is the LDM between A and B, on which the errors introduced by the artist have been circled.

A second example is given in Figure 7.4, where two groups of letters are compared. The dissimilarities in the LDM are quantified: large deviations translate into large values (dark in image c). Moreover, these dissimilarities are localised: the largest deviations can be seen to lie on the horizontal bar of the 'e' and on the upper part of the 't'. This information could only have been extracted from a simple difference after some post-processing.

Étienne Baudrier, the PhD student who worked on this subject, had found a way to save time during his holidays (Figure 7.5). The second image (b) is a copy of the first (a) into which the artist deliberately introduced errors. Image (c) is the pixel-to-pixel difference between (a) and (b); it provides no information useful for the comparison. By contrast, the local dissimilarity map (Figure 7.5d) highlights most of the errors.


LDM


Φ( , )

Which measure ?

Which size ?

Page 17

Local Dissimilarity Map

17

Which measure between small images ?

• MSE - PSNR ?
➡ pixel-to-pixel diffs: d_{A,B}(p) = |A(p) − B(p)|
➡ low information and hard to interpret

[Figure: images A and B and their pixel-to-pixel difference |A(p) − B(p)|]
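For reference, a global pixel-to-pixel score such as the MSE or PSNR collapses the whole comparison into a single number; a minimal sketch (illustrative function name, assuming float images with a known peak value):

```python
import numpy as np

def mse_psnr(A, B, peak=255.0):
    """Global pixel-to-pixel scores: one number per image pair, no localisation."""
    diff = A.astype(float) - B.astype(float)
    mse = np.mean(diff ** 2)
    psnr = 10.0 * np.log10(peak ** 2 / mse) if mse > 0 else float('inf')
    return mse, psnr
```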

Page 18

Local Dissimilarity Map

18

Which measure between small images ?

• Binary image = set of pixels (foreground)
➡ Hausdorff distance [Huttenlocher 1993]

DH(A,B) = max( h(A,B), h(B,A) ), with h(A,B) = max_{a ∈ A} min_{b ∈ B} d(a,b)

DH(A, T_v A) = ‖v‖

➡ numerous variations, including the partial HD: h_K(A,B) = K-th ranked value of d(a,B) over a ∈ A

[Figure: point sets A and B and the distance DH(A,B)]
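A minimal sketch of the Hausdorff distance between two binary images seen as point sets, assuming boolean NumPy arrays with at least one foreground pixel each; computing d(a, B) through a distance transform is an implementation choice made here, and the function name `hausdorff` is illustrative.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def hausdorff(A, B):
    """DH(A, B) = max(h(A, B), h(B, A)) with h(A, B) = max_{a in A} min_{b in B} d(a, b)."""
    d_to_B = distance_transform_edt(~B)   # d(p, B): distance to the nearest foreground pixel of B
    d_to_A = distance_transform_edt(~A)   # d(p, A)
    h_AB = d_to_B[A].max()                # directed distance h(A, B)
    h_BA = d_to_A[B].max()                # directed distance h(B, A)
    return max(h_AB, h_BA)
```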

Page 19

(a) (b) (c)

Figure 7.2 – Illustration of the notion of local dissimilarity. Images A and B (in a and b) are dissimilar. The small and medium-sized windows (in c) are too small to capture the local dissimilarity at the centre of images A and B.

The general idea is the following. If the pixels inside the sliding window belong to thick strokes, the window must be large enough to encompass the deviations resulting from the comparison of those strokes. If the strokes are thin, the window must not be too large; otherwise deviations that are no longer local enter the measure and distort its value. It is therefore essential to adapt the window size.

The idea is made concrete with the synthetic comparison example of Figure 7.2. The goal is to measure the dissimilarity at the centre of the two images A and B. With the two smallest windows, the extracts of A are entirely black and those of B entirely white, so these extracts cannot account for the size of this local dissimilarity. The largest window is adequate to capture all the required information.

The window size is thus useful information in the construction of the dissimilarity. The most suitable size depends on how the information varies as the window grows.

7.1.4.2 Localising the Hausdorff distance

We now wish to measure the Hausdorff distance no longer between two images, but between two extracts selected by a window. The Hausdorff distance therefore has to be defined within a window. The most immediate solution is to restrict the points involved in the computation to those belonging to the window.

Definition 7.5. (HD restricted to a window W, naive version)

DH_W(A,B) = max( h_W(A,B), h_W(B,A) )    (7.23)


A B

Local Dissimilarity Map

➡ adaptive

➡ must encompass the «diffs» but no more

➡ increase until the measure equals its theoretical max

19

Which size for the sliding window ?

⇒ stopping criterion

Page 20

Local Dissimilarity Map

20

This last property indicates that if the local Hausdorff distance is zero, then the extracts of the two images through the window W are identical.

7.1.4.3 Criterion for fixing the (local) optimal window size

The goal here is to provide a criterion for fixing the size of the sliding window. As written above, the hypothesis is that a local dissimilarity must involve strokes containing the central pixel of the window W.

It is also desirable that, as far as possible, other dissimilarities, i.e. those related to other strokes in the images, do not enter the measure made inside W. The measure must therefore involve:
– a central point: if the central point is not involved, the window W can be moved so that one of the points involved in the measure becomes its centre;
– and a point on the boundary of W: if none of these points is involved, the window can be shrunk.

We show that under these conditions there exists a maximal local measure, obtained in a window of radius r_max given by the following theorem.

Theorem 7.7. Let A and B be two finite, non-empty sets of points of R² and p ∈ R². For the local measure of the Hausdorff distance at point p, the optimal radius r_max for the window W(p,r) of radius r is

r_max = max_{r>0} { r | DH_{W(p,r)}(A,B) = r }.    (7.27)

In practice this theorem says that as long as the local Hausdorff distance equals the window radius, the optimal measure has not yet been reached.

7.1.4.4 Definition of the local dissimilarity map

The local dissimilarity map is now easily defined: it gathers the local dissimilarity measures made at the different positions. When the local distance is based on the Hausdorff distance, the general algorithm is the following.

Algorithm 7.1 – Iterative computation of the local dissimilarity map (LDM/CDL) between two binary images A and B. W(p,n) denotes the square window centred at pixel p with radius n.
For each pixel p, do
1. n ← 1
2. while DH_{W(p,n)}(A,B) = n and n ≤ DH(A,B), do n ← n + 1
3. CDL_{A,B}(p) = DH_{W(p,n−1)}(A,B) = n − 1

In the Hausdorff distance case :
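A direct, unoptimised transcription of Algorithm 7.1 as a sketch: it assumes boolean NumPy arrays with at least one foreground pixel each, uses the chessboard (chamfer-like) distance so that the window-restricted distance and the radius n are comparable integers, and adopts a simplifying convention (an assumption, not from the text) that an empty window extract yields a distance equal to the radius. The names `local_hd` and `ldm_iterative` are illustrative; the closed form on the next slide is what one would use in practice.

```python
import numpy as np
from scipy.ndimage import distance_transform_cdt

def local_hd(A, B, p, n):
    """Hausdorff distance restricted to the square window W(p, n) (naive Definition 7.5),
    with the chessboard metric so distances and the radius n are comparable integers."""
    rows = slice(max(p[0] - n, 0), p[0] + n + 1)
    cols = slice(max(p[1] - n, 0), p[1] + n + 1)
    a, b = A[rows, cols], B[rows, cols]
    if not a.any() or not b.any():
        # Assumed convention: if one extract is empty, the nearest point lies outside
        # the window, so the restricted distance saturates at the radius n.
        return n if a.any() or b.any() else 0
    d_b = distance_transform_cdt(~b, metric='chessboard')  # distance to nearest pixel of B in W
    d_a = distance_transform_cdt(~a, metric='chessboard')
    return max(d_b[a].max(), d_a[b].max())

def ldm_iterative(A, B):
    """Algorithm 7.1: at each pixel, grow the window radius n while DH_W(p,n)(A,B) = n."""
    d_A = distance_transform_cdt(~A, metric='chessboard')
    d_B = distance_transform_cdt(~B, metric='chessboard')
    H = max(d_B[A].max(), d_A[B].max())          # global Hausdorff distance, upper bound on n
    out = np.zeros(A.shape, dtype=int)
    for p in np.ndindex(A.shape):
        n = 1
        while local_hd(A, B, p, n) == n and n <= H:
            n += 1
        out[p] = local_hd(A, B, p, n - 1)
    return out
```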

Page 21

Local Dissimilarity Map

21

CDL_{A,B}(p) = |A(p) − B(p)| · max( d(p,A), d(p,B) )

With the Hausdorff distance : fast computation

Distance transform (or function) based

TD_A(p) = d(p,A)

(distance to the nearest foreground pixel: very fast with a chamfer distance)

⇒ linear expression (binary images) :

CDL_{A,B} = A · TD_B + B · TD_A

[Baudrier PhD - Pattern Recognition 2008]
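A minimal sketch of this closed form, assuming boolean NumPy arrays; SciPy's exact Euclidean distance transform is used here, whereas the slide mentions a chamfer distance for speed (either fits the formula). The function name `ldm` is illustrative.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def ldm(A, B):
    """CDL_{A,B} = A * TD_B + B * TD_A for binary images A and B."""
    A = A.astype(bool)
    B = B.astype(bool)
    td_A = distance_transform_edt(~A)   # TD_A(p) = d(p, A), distance to nearest foreground pixel of A
    td_B = distance_transform_edt(~B)   # TD_B(p) = d(p, B)
    return A * td_B + B * td_A          # equals |A(p) - B(p)| * max(d(p,A), d(p,B)) pointwise
```

On binary images this costs two distance transforms, which is where the O(m²) complexity quoted in the excerpt on the next slide comes from.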

Page 22

Local Dissimilarity Map : Toy Examples

22

Figure 7.3 – Behaviour of the LDM with simple patterns; a, b and c are the images to compare. d is the LDM between a and b, e the LDM between b and c, and f the LDM between a and c. The darker the grey level, the larger the local value of the measure.

This algorithm is computationally expensive because it is iterative: its complexity is O(m⁴) for two images of m × m pixels. We have shown that when the Hausdorff distance is used in the local dissimilarity measure, the computation can be very fast.

Theorem 7.8. The local dissimilarity map between two binary images A and B is given by

CDL_{A,B}(p) = |A(p) − B(p)| · max( d(p,A), d(p,B) ).    (7.28)

This equation allows a very fast computation of the LDM between two images, since d(p,A) is the distance transform of image A (see 7.1.3.1). The complexity of computing the LDM is then that of the distance transform, i.e. O(m²).

7.1.4.5 Some synthetic examples of local dissimilarity maps

The first example is a simple comparison of images structured as lines (Figure 7.3). The comparison of image a with image b is characteristic of the behaviour of the LDM. The value of the LDM is zero where the lines cross; the further the non-zero pixels are from this crossing, the larger the LDM value. The interpretation is clear: the two images are most similar at the centre, and the LDM values increase progressively away from it. The global Hausdorff distance is 11, equal to the maximum of the LDM, and the LDM shows that this maximum is reached at four pixels (located at the line endpoints).


CDLa,b


CDLb,c

Quantified and localised information

Page 23

Local Dissimilarity Map : Toy Examples

23

➡ structural information

Page 24

Local Dissimilarity Map : Toy Examples

24

Fig. 3.4 – The ten-errors game. Images A and B: the two images to compare; image C: the absolute difference C = |B − A|; image D: their LDM, on which the ten errors have been circled in black.

➡ How to save time during holidays ?
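Using the `ldm` sketch from slide 21 (an illustrative helper, not the author's code), the ten-errors comparison reduces to thresholding the map; the tiny synthetic images below stand in for the two cartoons.

```python
import numpy as np

# Two small synthetic drawings differing by one extra blob (stand-ins for the two cartoons)
a = np.zeros((64, 64), dtype=bool); a[10:50, 30] = True
b = a.copy(); b[40:45, 45:50] = True                 # the deliberately introduced "error"

cdl = ldm(a, b)                                      # ldm: the sketch from slide 21
errors = cdl > 0.5 * cdl.max()                       # arbitrary threshold keeping the largest local gaps
print(np.count_nonzero(errors), "pixels flagged")    # the map is nonzero only where the drawings disagree
```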

Page 25

Local Similarities, What for ?

25

Page 26

Ancient Printings

26

[Excerpt from É. Baudrier et al., Pattern Recognition 41 (2008) 1461–1478:]

Fig. 8. Medieval impressions and their LDMaps. Here are four medieval impressions. Imp. 1, Imp. 2 and Imp. 3 illustrate the same scene with a different kind of grass and helmets in Imp. 3. Imp. 4 illustrates a distinct scene. [Colour scales of the LDMaps range roughly from 0 to 20.]

comparison methods are compared with the ones obtained manually. The five classification methods are the following ones:
• our method based on the LDMap,
• the so-called Local Simple Difference Map (LSDMap) using the distance map, but with the simple difference locally instead of the HD: HSD_W(F,G) = |F ∩ W − G ∩ W|,
• the global HD,
• the PHD,
• the MHD.

6.3.1. Test methods
A quantitative result is then obtained for each comparison method thanks to a decision step. The decision step is different whether the measure result is an image (case of the LDMap and the LSDMap) or a real value (case of the HD and its variations). In the first case, the classification method is an SVM described in Section 6.2. In the second case, an empirical distribution for each class Csim and Cdissim is computed from the learning set. As the modes of the empirical distributions are quite well defined, an easy and efficient classification method is the maximum likelihood method.

6.3.2. Results
Results are summarized in Table 2. They show the efficiency of the LDMap both concerning spatial information (comparison with the global HD and the PHD) and the ability of the local HD to catch the local dissimilarities (comparison with ...)

Figure 7.9 – Comparison of old illustrations. Images a, b and c represent the same scene. (e) CDL_{a,b}; (f) CDL_{a,c}; (g) CDL_{a,d}; (h) CDL_{c,d}. Comparing dissimilar scenes produces large values spread over the whole image (in g and h). Comparing similar scenes produces large values that are few in number (in e) or very localised (f).
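The SVM of Section 6.2 is not reproduced in the excerpt, so the exact features fed to it are not given here. As a heavily simplified stand-in, the sketch below turns each LDMap into a normalised histogram of its values and trains a scikit-learn SVM on those vectors; the feature choice, bin count and function names are assumptions, not the paper's pipeline.

```python
import numpy as np
from sklearn.svm import SVC

def ldm_features(ldmap, n_bins=16, d_max=20.0):
    """Stand-in feature vector: normalised histogram of the LDMap values."""
    hist, _ = np.histogram(ldmap, bins=n_bins, range=(0.0, d_max))
    return hist / max(hist.sum(), 1)

def train_similarity_classifier(ldmaps, labels):
    """ldmaps: list of LDMaps between pairs of impressions; labels: 1 = same scene, 0 = distinct."""
    X = np.array([ldm_features(m) for m in ldmaps])
    clf = SVC(kernel="rbf")
    clf.fit(X, labels)
    return clf
```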

Page 27


Ancient Printings

27

É. Baudrier et al. / Pattern Recognition 41 (2008) 1461–1478 1471

02468101214161820

0246810121416

024681012141618

0246810121416

Fig. 8. Medieval impressions and their LDMaps. Here are four medieval impressions. Imp. 1, Imp. 2 and Imp. 3 illustrate the same scene with a differentkind of grass and helmets in Imp. 3. Imp. 4 illustrates a distinct scene.

comparison methods are compared with the ones obtainedmanually. The five classification methods are the follo-wing ones:

• our method based on the LDMap,• the so-called Local Simple Difference Map (LSDMap) us-

ing the distance map, but with the simple difference locallyinstead of the HD: HSDW(F, G) = |F ! W " G ! W |,

• the global HD,• the PHD,• the MHD.

6.3.1. Test methodsA quantitative result is then obtained for each comparison

method thanks to a decision step. The decision step is different

whether the measure result is an image (case of the LDMapand the LSDMap) or a real value (case of the HD and its vari-ations). In the first case, the classification method is a SVMdescribed in Section 6.2. In the second case, an empirical dis-tribution for each class Csim and Cdissim is computed from thelearning set. As the modes of the empirical distributions arequite well defined, an easy and efficient classification methodis the maximum likelihood method.

6.3.2. ResultsResults are summarized in Table 2. They show the efficiency

of the LDMap both concerning spatial information (compari-son with the global HD and the PHD) and the ability of thelocal HD to catch the local dissimilarities (comparison with

É. Baudrier et al. / Pattern Recognition 41 (2008) 1461 –1478 1471

02468101214161820

0246810121416

024681012141618

0246810121416

Fig. 8. Medieval impressions and their LDMaps. Here are four medieval impressions. Imp. 1, Imp. 2 and Imp. 3 illustrate the same scene with a differentkind of grass and helmets in Imp. 3. Imp. 4 illustrates a distinct scene.

comparison methods are compared with the ones obtainedmanually. The five classification methods are the follo-wing ones:

• our method based on the LDMap,• the so-called Local Simple Difference Map (LSDMap) us-

ing the distance map, but with the simple difference locallyinstead of the HD: HSDW(F, G) = |F ! W " G ! W |,

• the global HD,• the PHD,• the MHD.

6.3.1. Test methodsA quantitative result is then obtained for each comparison

method thanks to a decision step. The decision step is different

whether the measure result is an image (case of the LDMapand the LSDMap) or a real value (case of the HD and its vari-ations). In the first case, the classification method is a SVMdescribed in Section 6.2. In the second case, an empirical dis-tribution for each class Csim and Cdissim is computed from thelearning set. As the modes of the empirical distributions arequite well defined, an easy and efficient classification methodis the maximum likelihood method.

6.3.2. ResultsResults are summarized in Table 2. They show the efficiency

of the LDMap both concerning spatial information (compari-son with the global HD and the PHD) and the ability of thelocal HD to catch the local dissimilarities (comparison with

Figure 7.9 – Comparaison d’illustrations anciennes. Les images a, b et c représentent la mêmescène. (e) CDLa,b ; (f) CDLa,c ; (g) CDLa,d ; (h) CDLc,d. La comparaison de scènes dissimilairesproduit des valeurs importantes réparties sur toute l’image (en g et h). La comparaison descènes similaires produit des valeurs importantes en faible nombre (en e) ou très localisées (f).


Page 28: 2012.09.25 - Local and non-metric similarities between images - why, how and what for ?

Ancient Printings

28

• LDM classification

• similar

• dissimilar

➡ SVM

Correct classification (%)   CDL   DPP   DH   PHD   MHD
for Csim                      98    90   60    83    77
for Cdissim                   97    92   75    81    83

Table 7.1 – Classification performance for similar images from the database of old impressions. CDL = local dissimilarity map, DPP = pixel-to-pixel difference, DH = Hausdorff distance, PHD = partial Hausdorff distance, MHD = modified Hausdorff distance (see 7.2.1.4).
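The "LDM classification → SVM" step on this slide can be sketched as follows. The descriptors actually fed to the SVM are defined in the cited work (its Section 6.2); here, purely for illustration, each LDMap is summarized by a hypothetical fixed-length histogram of its values before training a binary similar/dissimilar classifier:

```python
import numpy as np
from sklearn.svm import SVC

def ldmap_features(ldmap, n_bins=32, max_dist=20.0):
    """Hypothetical feature vector: normalized histogram of LDMap values.

    The exact features used in the cited work are defined elsewhere;
    this is only an illustrative choice.
    """
    hist, _ = np.histogram(ldmap, bins=n_bins, range=(0.0, max_dist))
    return hist / max(hist.sum(), 1)

def train_similarity_svm(ldmaps, labels):
    """Train a similar/dissimilar classifier from labelled LDMaps
    (labels: 1 = similar pair, 0 = dissimilar pair)."""
    X = np.array([ldmap_features(m) for m in ldmaps])
    clf = SVC(kernel="rbf")   # kernel choice is illustrative
    clf.fit(X, np.asarray(labels))
    return clf
```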

achieves the best results. Moreover, the efficiency of the decision is very good, with few false classifications obtained.

7.2.1.6 Robustness to degradations

In the application considered here, image degradations can be either erasures or blots. In the first case, a number of black pixels disappear; in the second, black pixels are added. We modelled these perturbations very synthetically by a central black square (simulating a blot) or a white square (simulating an erasure), the size of the perturbing square being variable. Training is the same as before (i.e. on clean images) and the test phase is performed on pairs made of one clean image and one degraded image.

The correct-classification curves (using the CDL) as a function of the square size, for these two degradations, are given in Figure 7.10. When the size of the degradation increases, only the class Csim sees its performance decrease. This behaviour is logical, since to belong to Csim the images must be close: introducing a degradation into one image moves the two images of a "similar" pair apart.

The decision method based on the local dissimilarity map is much more robust to erasure than to the appearance of a blot. When a blot is simulated, a large number of black (hence shape) pixels are introduced, and it is these pixels that are taken into account in the computation of the underlying distances. A sketch of this degradation protocol is given below.
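A minimal sketch of that degradation protocol, assuming square perturbations applied at the image centre and the ldmap() sketch given earlier as the comparison measure (all names and sizes are illustrative):

```python
import numpy as np

def degrade(image, square_size, kind="blot"):
    """Add a centred square perturbation to a binary image:
    'blot' sets the pixels to foreground (1), 'erasure' clears them (0)."""
    out = image.copy()
    h, w = out.shape
    s = square_size // 2
    cy, cx = h // 2, w // 2
    out[cy - s:cy + s, cx - s:cx + s] = 1 if kind == "blot" else 0
    return out

# Example protocol: compare a clean image with its degraded version
# for increasing square sizes, using the ldmap() sketch from earlier.
# for size in range(4, 64, 4):
#     score = ldmap(clean, degrade(clean, size, "erasure")).max()
```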

7.2.1.7 Robustness to translation

This study tests the method's resistance to misalignment of the images being compared. Robustness to a horizontal translation is tested with the same protocol as for robustness to degradations: the classifier is trained on images without translation, and the tests are performed between an untranslated image and a translated one. The results (percentage of correct classification) are shown in the graph of Figure 7.11; a sketch of the test follows.
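A minimal sketch of the translation test, assuming a horizontal shift implemented with np.roll and, again, the earlier ldmap() sketch as the measure:

```python
import numpy as np

def shift_horizontally(image, dx):
    """Translate a binary image by dx pixels along the x axis.
    Pixels shifted out on one side reappear on the other, which is
    acceptable for the small misalignments considered here."""
    return np.roll(image, dx, axis=1)

# for dx in range(0, 32, 2):
#     score = ldmap(reference, shift_horizontally(reference, dx)).max()
#     # as stated in the text, the measure is expected to grow roughly
#     # proportionally with the translation for small dx
```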


[Baudrier PhD]

Page 29: 2012.09.25 - Local and non-metric similarities between images - why, how and what for ?

Tumor Evolution

29

Fig. 2. Segmented MRI (panels a-c).

values in (f) do not reflect the similarity in this case. As a short conclusion, the LDMap is a useful tool for non-textured and well-defined images (or volumes). It is thus usable for segmented slices.

III. RESULTS

A. Segmentation Method
MRI images are acquired on a 1.5 T GE (General Electric Co.) machine using an axial 3D IR (Inversion Recovery) T1-weighted sequence, an axial FSE (Fast Spin Echo) T2-weighted sequence, an axial FSE PD-weighted sequence and an axial FLAIR sequence. For one examination, we have 24 slices of the four signals with a voxel size of 0.47 × 0.47 × 5.5 mm3. All the slices and all the examinations are registered using the SPM software.

We use the first examination to train an SVM with an RBF kernel [10]. The training set was obtained from one slice by using the mouse to select ten pixels inside the tumour and ten outside. We perform a first segmentation of this volume using the resulting SVM model. From it, we automatically build about one hundred points inside and outside the tumour from all the tumoral slices, retrain a second SVM and use it to perform a second segmentation that improves the first result. With this last SVM model, we segment the other examinations; for each of them, this two-step process is used to improve the result. Fig. 2 contains an example of the obtained segmentation. All nine slices of two segmented volumes are given in Figs. 3 and 4.
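The two-pass SVM procedure described above can be sketched as follows. The helper below is hypothetical: it takes per-voxel multispectral intensities as features, ignores the SPM registration step, and only mimics the seed-then-resample logic of the text.

```python
import numpy as np
from sklearn.svm import SVC

def two_pass_svm_segmentation(voxel_features, tumour_seeds, outside_seeds,
                              n_resample=100, seed=0):
    """Illustrative two-pass SVM segmentation (hypothetical helper).

    voxel_features : (n_voxels, n_modalities) per-voxel intensities
    tumour_seeds / outside_seeds : indices of the ~10 hand-picked voxels per class
    """
    rng = np.random.default_rng(seed)

    # First pass: train an RBF-kernel SVM on the hand-picked seed voxels.
    X1 = voxel_features[np.concatenate([tumour_seeds, outside_seeds])]
    y1 = np.r_[np.ones(len(tumour_seeds)), np.zeros(len(outside_seeds))]
    labels = SVC(kernel="rbf").fit(X1, y1).predict(voxel_features)

    # Second pass: resample ~100 voxels per class from the first segmentation,
    # retrain, and segment again; this is meant to refine the first result.
    def pick(mask):
        idx = np.flatnonzero(mask)
        return rng.choice(idx, size=min(n_resample, len(idx)), replace=False)

    tum, out = pick(labels == 1), pick(labels == 0)
    X2 = voxel_features[np.concatenate([tum, out])]
    y2 = np.r_[np.ones(len(tum)), np.zeros(len(out))]
    return SVC(kernel="rbf").fit(X2, y2).predict(voxel_features)
```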

B. Implementation Details
The computation of the LDV is done with a new ImageJ plugin [6]. On a 2 GHz Opteron, the comparison of the two 512 × 512 × 9 volumes is done in 39 seconds.

C. Results
The LDV is computed between volumes 1 and 2. The results are presented in Fig. 5. As the z-resolution is coarser than the x-y resolution (with a ratio of 11.7), the obtained distance depends only slightly on the z-axis information: only when the obtained distance is greater than 11.7 mm has the z-information been taken into account. Fig. 6 is a three-dimensional representation of the LDV.
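A per-voxel local distance volume for two binary segmentations can be sketched as below, assuming the same pointwise formulation as in 2D and using the voxel spacing so that the distances come out in millimetres. The published computation is an ImageJ (Java) plugin, so this Python version is only an illustration.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def local_distance_volume(A, B, spacing=(5.5, 0.47, 0.47)):
    """Per-voxel local distance between two binary volumes, in mm.

    Assumes arrays ordered (z, y, x) and the pointwise form
    |A - B| * max(d_A, d_B), with anisotropic distance transforms.
    """
    A = A.astype(bool)
    B = B.astype(bool)
    d_A = distance_transform_edt(~A, sampling=spacing)  # distance to vol. 1, in mm
    d_B = distance_transform_edt(~B, sampling=spacing)  # distance to vol. 2, in mm
    return np.abs(A.astype(float) - B.astype(float)) * np.maximum(d_A, d_B)
```

With the 0.47 × 0.47 × 5.5 mm3 voxels of the study, a value above 5.5 mm is the first point at which the z spacing can contribute, which is consistent with the remark on the 11.7 resolution ratio.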

Fig. 3. Segmentation of the first volume (slices 18 to 23, a-f) using SVM with an RBF kernel.

Fig. 4. Segmentation of the second volume (slices 18 to 23, a-f) using SVM with an RBF kernel.

As a true distance is used to compute the LDV, the given scalars are true physical distances in mm. The distance histogram shows that low distances are far more frequent than high ones, which is coherent since non-zero LDV values are obtained in the intersection of the two volumes; the intersection is locally filled with increasing values, starting from zero up to the maximum local distance between the volumes. The maximum distance is 15.56 mm, which represents the largest straight-line distance between the two volumes.

The proposed Local Distance Volume can be used to track more precisely the variations between two volumes. The Hausdorff distance in a window (eq. (3)) is defined as the maximum of two directed distances, and in the present case the directed distances carry useful information: hW(A,B) carries the information on voxels present in vol. 1 and not in vol. 2; symmetrically, hW(B,A) carries the information on voxels present in vol. 2 and not in vol. 1. Thus hW(A,B) indicates where the tumor has regressed and hW(B,A) where it has progressed. This is illustrated in Figs. 7 and 8: the growth of the central occlusion is clearly seen (by high negative distances).
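That directed reading can be sketched with a signed local distance volume: negative where vol. 1 contained tumour that vol. 2 no longer has (regression), positive where vol. 2 gained tumour (progression). This sketch replaces the windowed directed Hausdorff distances of the paper by plain pointwise distances to the other segmentation, so it is only an approximation, and the sign convention is arbitrary.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def directed_local_distance_volume(A, B, spacing=(5.5, 0.47, 0.47)):
    """Signed local distances between two binary volumes (mm).

    Negative: voxel in A (vol. 1) but not in B (vol. 2) -> regression.
    Positive: voxel in B (vol. 2) but not in A (vol. 1) -> progression.
    """
    A = A.astype(bool)
    B = B.astype(bool)
    d_A = distance_transform_edt(~A, sampling=spacing)  # distance to vol. 1
    d_B = distance_transform_edt(~B, sampling=spacing)  # distance to vol. 2
    out = np.zeros(A.shape, dtype=float)
    out[A & ~B] = -d_B[A & ~B]   # present before, absent now: regression
    out[B & ~A] = +d_A[B & ~A]   # absent before, present now: progression
    return out
```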

Figure 7.12 – An example of tumor segmentation. Only one slice of a whole volume is shown, at three different dates.

Again, only the classification performance in Csim is affected. However, this degradation is gradual, with no abrupt drop. This good behaviour comes from the Hausdorff distance, which gives a value proportional to a given translation (see Property 7.4, page 79); the local dissimilarity map inherits this property. For large translations, on the other hand, the locality of the dissimilarities no longer holds. Misalignment defects, however, amount to small translations, so a decision based on the local dissimilarity map is robust to this type of defect.

7.2.2 Quantifying the spatial evolution of tumors

This work was presented in [Morain-Nicolier et al., 2007].

7.2.2.1 Context

Medical image segmentation is a particularly active field. Many medical problems can be (partly) solved with an accurate segmentation; for example, image registration may require correctly segmented volumes [de Munck et al., 1998]. It is therefore essential to validate the quality of the obtained segmentation by comparing it with one provided by an expert. The local dissimilarity map can be a good tool for this purpose.

In the treatment of brain tumors, an important piece of information is the evolution of the tumor following therapy. The global variation (volume change) is necessary but not sufficient; it can be useful to access the localization of this variation, and the local dissimilarity map then seems well suited.

The images used for this study were obtained by MRI. An SVM-based segmentation is extracted by colleagues from images of several modalities (T1, T2, FLAIR) [Ruan et al., 2007]. A set of nine slices (512 × 512), forming a volume, is thus segmented. Figure 7.12 contains an example of the obtained segmentation, showing one slice of a volume at three given dates.


t1 t2 t3

? ?

[Nicolier et al. 2007]

Page 30: 2012.09.25 - Local and non-metric similarities between images - why, how and what for ?

Tumor Evolution

30


(S1) Slices of segmentation 1


(S2) Slices of segmentation 2

Fig. 5. Slices 18 to 23 (a-f) of the Local Distance Volume between volumes 1 (Fig. 3) and 2 (Fig. 4). The distances are absolute according to eq. (3). (j) is the distance histogram (logarithmic scale, in gray) and the colormap of images (a-f).

Fig. 6. A three-dimensional view of the LDV between volumes 1 and 2.

Fig. 7. Using the directed Hausdorff distance to show where the tumor has regressed (in white, negative distances) and where it has progressed (in black, positive distances). (a) is the LDV with directed distances (only slice 5 is presented, corresponding to (b) in Figs. 3 and 4). (b) is the histogram and the colormap of the directed LDV.

Fig. 8. Using the directed Hausdorff distance to show where the tumor has regressed (in black, negative distances) and where it has progressed (in white, positive distances).

IV. CONCLUSION

A distance measure between volumes has been presented. Using local Hausdorff distances, the Local Distance Volume (LDV) is computed with adaptive-size windows. The method indicates where the volumes are similar. The LDV has been successfully applied to segmented MRI volumes containing a tumor: the evolution of the tumor between two acquisitions can be quantified through true physical distances between the volumes and, moreover, the method tracks where the tumor has regressed and where it has progressed.

REFERENCES

[1] E. Baudrier, G. Millon, F. Nicolier, R. Seulin, S. Ruan, Hausdorff distance based multiresolution maps applied to an image similarity measure, to appear in Imaging Science Journal.
[2] E. Baudrier, G. Millon, F. Nicolier, S. Ruan, A fast binary-image comparison method with local-dissimilarity quantification, International Conference on Pattern Recognition (ICPR'06), Hong Kong, vol. 3, 20-24 Aug. 2006, pp. 216-219.
[3] G. Borgefors, Distance transformations in digital images, Comput. Vision Graph. Image Process., vol. 34, no. 3, 1986, pp. 344-371.
[4] J.C. De Munck et al., Registration of MR and SPECT without using external fiducial markers, Phys. Med. Biol., vol. 43, no. 5, 1998, pp. 1255-1269.
[5] D. P. Huttenlocher, W. J. Rucklidge, Comparing Images Using the Hausdorff Distance, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 15, no. 9, 1993, pp. 850-863.
[6] ImageJ, Image Processing and Analysis in Java, http://rsb.info.nih.gov/ij.
[7] O.K. Kwon, D.G. Sim, R.H. Park, Robust Hausdorff distance matching algorithms using pyramidal structures, Pattern Recognition, vol. 34, no. 10, 2001.
[8] Y. Lu, C. Tan, W. Huang, L. Fan, An approach to word image matching based on weighted Hausdorff distance, Proc. 6th Internat. Conf. on Document Anal. Recogn., 2001, pp. 921-925.
[9] J. Paumard, Robust comparison of binary images, Pattern Recognition Letters, vol. 18, no. 10, 1997, pp. 1057-1063.
[10] S. Ruan, S. Lebonvallet, A. Merabet, J.M. Constans, Tumor Segmentation from a Multispectral MRI Images by Using Support Vector Machine Classification, to appear in ISBI'07, 2007, Washington (USA).
[11] B. Takacs, Comparing faces using the modified Hausdorff distance, Pattern Recognition, vol. 31, no. 12, 1998, pp. 1873-1881.
[12] C. Zhao, W. Shi, Y. Deng, A new Hausdorff distance for image matching, Pattern Recognition Letters, vol. 26, 2004, pp. 581-586.


(VDM) - Slices of the local dissimilarity volume between S1 and S2.

Figure 7.13 – Example of a local dissimilarity volume. The two segmentations (S1 and S2) are compared. The histogram shows the distribution of the distances (log scale, in gray). The distances, rendered as gray levels, are expressed in mm.
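For completeness, the kind of distance histogram shown in such a figure can be reproduced as follows (matplotlib, purely illustrative):

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_ldv_histogram(ldv, n_bins=64):
    """Histogram of the non-zero local distances (in mm), with a
    logarithmic count scale, similar in spirit to the figure's panel."""
    values = ldv[ldv > 0].ravel()
    plt.hist(values, bins=n_bins, log=True, color="gray")
    plt.xlabel("local distance (mm)")
    plt.ylabel("voxel count (log scale)")
    plt.show()
```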

97

(a) (b) (c)Fig. 2. Segmented MRI.

values in (f) do not reflect the similarity in this case. Asa short conclusion, the LDMap is a useful tool for non-textured and well-defined images (or volumes). It is thususable for segmented slices.

III. RESULTSA. Segmentation MethodMRI images are acquired on a 1.5T GE (General

Electric Co.) machine using an axial 3D IR (InversionRecuperation) T1-weighted sequence, an axial FSE (FastSpin Echo) T2-weighted, an axial FSE PD-weighted se-quence and an axial FLAIR. For one examination, wehave 24 slices of the four signals with a voxel sizeof 0.47 ⇥ 0.47 ⇥ 5.5 mm3. All the slices and all theexaminations are registrated using SPM software.We use the first examination for training SVM using

RBF kernel [10]. The training set was obtained from oneslice by using mouse to choose ten pixels into the tumourand ten outside. We perform the first segmentation of thisvolume by using the SVM model obtained. So, we buildautomatically about one hundred points into the tumourand outside from all the tumoral slices. We retraining asecond SVM and use it for perform a second segmentationfor improve the first result. With this last SVM model,we perform the segmentation of others examinations. Ateach segmentation of examination, we use this 2 stepsprocess for improve the result. Fig 2 contains an exampleof the obtained segmentation. All the nine slices of twosegmented volumes are given in figs 3 and 4.

B. Implementation DetailsThe computation of the LDV is done with a new mageJ

plugin [6]. With a 2GHz opteron, the comparison of thetwo 512⇥ 512⇥ 9 volumes is done in 39 seconds.

C. ResultsThe LDV is computed between volumes 1 and 2. The

results are presented in fig 5. As the z-resolution isslightly greather than the x�y resolutions (with a ratio of11.7), the obtained distance depends only lightly on thez-axis information. Only when the obtained distance isgreather than 11.7mm, the z-information has been takeninto account. Fig 6 is a three-dimensional representationof the LDV.

(a) (b) (c)

(d) (e) (f)Fig. 3. Segmentation of the first volume (slices from 18 to 23, a-f)using SVM with RBF kernel.

(a) (b) (c)

(d) (e) (f)Fig. 4. Segmentation of the second volume (slices from 18 to 23, a-f)using SVM with RBF kernel.

As a true distance is used to compute the LDV, thegiven scalar are true physical distance in mm. The givendistances histogram indicates there are much more lowdistances than high distances. This a coherent fact as non-zero LDV values are obtained in the intersection of the twovolumes. The intersection is locally filled with increasingvalues, starting from zero up to the maximum localdistance between the volumes. The maximum distanceis 15.56mm. This represent the higher straight distancebetween the two volumes.The proposed Local Distance Volume can be used to

track more precisely the variations between two volumes.The Hausdorff Distance in a window (eq. (3)) is defined asthe maximum of two directed distance. In the present casethe directed distances carry useful information. hW (A,B)carry the information on voxels present in vol. 1 and not invol. 2. Symmetrically hW (B,A) carry the information onvoxels present in vol. 2 and not in vol. 1. So hW (A,B)indicates where the tumor has regressed and hW (B,A)where the tumor has progressed. This is illustrated in fig.7 and fig. 8. The augmentation of the central occlusion is

(S1) Coupes de la segmentation 1

(a) (b) (c)Fig. 2. Segmented MRI.

values in (f) do not reflect the similarity in this case. Asa short conclusion, the LDMap is a useful tool for non-textured and well-defined images (or volumes). It is thususable for segmented slices.

III. RESULTSA. Segmentation MethodMRI images are acquired on a 1.5T GE (General

Electric Co.) machine using an axial 3D IR (InversionRecuperation) T1-weighted sequence, an axial FSE (FastSpin Echo) T2-weighted, an axial FSE PD-weighted se-quence and an axial FLAIR. For one examination, wehave 24 slices of the four signals with a voxel sizeof 0.47 ⇥ 0.47 ⇥ 5.5 mm3. All the slices and all theexaminations are registrated using SPM software.We use the first examination for training SVM using

RBF kernel [10]. The training set was obtained from oneslice by using mouse to choose ten pixels into the tumourand ten outside. We perform the first segmentation of thisvolume by using the SVM model obtained. So, we buildautomatically about one hundred points into the tumourand outside from all the tumoral slices. We retraining asecond SVM and use it for perform a second segmentationfor improve the first result. With this last SVM model,we perform the segmentation of others examinations. Ateach segmentation of examination, we use this 2 stepsprocess for improve the result. Fig 2 contains an exampleof the obtained segmentation. All the nine slices of twosegmented volumes are given in figs 3 and 4.

B. Implementation DetailsThe computation of the LDV is done with a new mageJ

plugin [6]. With a 2GHz opteron, the comparison of thetwo 512⇥ 512⇥ 9 volumes is done in 39 seconds.

C. ResultsThe LDV is computed between volumes 1 and 2. The

results are presented in fig 5. As the z-resolution isslightly greather than the x�y resolutions (with a ratio of11.7), the obtained distance depends only lightly on thez-axis information. Only when the obtained distance isgreather than 11.7mm, the z-information has been takeninto account. Fig 6 is a three-dimensional representationof the LDV.

(a) (b) (c)

(d) (e) (f)Fig. 3. Segmentation of the first volume (slices from 18 to 23, a-f)using SVM with RBF kernel.

(a) (b) (c)

(d) (e) (f)Fig. 4. Segmentation of the second volume (slices from 18 to 23, a-f)using SVM with RBF kernel.

As a true distance is used to compute the LDV, thegiven scalar are true physical distance in mm. The givendistances histogram indicates there are much more lowdistances than high distances. This a coherent fact as non-zero LDV values are obtained in the intersection of the twovolumes. The intersection is locally filled with increasingvalues, starting from zero up to the maximum localdistance between the volumes. The maximum distanceis 15.56mm. This represent the higher straight distancebetween the two volumes.The proposed Local Distance Volume can be used to

track more precisely the variations between two volumes.The Hausdorff Distance in a window (eq. (3)) is defined asthe maximum of two directed distance. In the present casethe directed distances carry useful information. hW (A,B)carry the information on voxels present in vol. 1 and not invol. 2. Symmetrically hW (B,A) carry the information onvoxels present in vol. 2 and not in vol. 1. So hW (A,B)indicates where the tumor has regressed and hW (B,A)where the tumor has progressed. This is illustrated in fig.7 and fig. 8. The augmentation of the central occlusion is

(S2) Coupes de la segmentation 2

(a) (b) (c)

(d) (e) (f)

(j)

Fig. 5. Slices 18 to 23 (a-f) of the Local Distance Volume betweenvolumes 1 (fig 3) and 2 (fig 4). The distances are absolute according toeq. (3). (j) is the distance histogram (logarithmic scale in gray) and thecolormap of images (a-f).

Fig. 6. A three-dimensional view of the LDV between volumes 1 and2.

clearly seen (by high negative distances).

IV. CONCLUSION

A distance measure between volumes has been pre-sented. Using local Hausdorff distances, the Local Dis-tance Volume (LDV) is computed with adaptative sizewindows. This method allows to indicates where thevolumes are similar. The LDV has been successfullyapplied on segmented MRI volumes containing a tumor.The evolution of the tumor between two acquisitions canbe quantified by the obtention of true physical distancesbetween volumes. Moreover, the method allow to trackwhere the tumor has regressed and where it has pro-gressed.

REFERENCES

[1] E. Baudrier, G. Millon, F. Nicolier, R. Seulin, S. Ruan, Hausdorffdistance based multiresolution maps applied to an image similaritymeasure, to appear in Imaging Science Journal.

(a) (b)Fig. 7. Using the directed Hausdorff Distance to show where thetumor has regressed (in white with negative distances) and where ishas progressed (in black with positive distances). (a) is the LDV withdirected distances (only slice 5 is presented corresponding to (b) in 3and 4). (b) is the histogram and the colormap of the directed LDV.

Fig. 8. Using the directed Hausdorff Distance to show where thetumor has regressed (in black with negative distances) and where ishas progressed (in white with positive distances).

[2] E. Baudrier, G. Millon, F. Nicolier, S. Ruan, A fast binary-image comparison method with local-dissimilarity quantification,International Conference on Pattern Recognition (ICPR’06), HongKong, vol. 3, 20-24 aug., 2006, pp 216-219.

[3] G. Borgefors, Distance transformations in digital images, Comput.Vision Graph. Image Process, vol. 34, no. 3, 1986, pp 344–371.

[4] J.C. De Munck et al., Registration of MR and SPECT withoutusing external fiducial markers, Phys. Med. Biol., vol. 43, no. 5,1998, pp 1255-1269.

[5] D. P. Huttenlocher, W. J. Rucklidge, Comparing Images Using theHausdorff Distance, IEEE Transactions on Pattern Analysis andMachine Intelligence, vol. 15, no. 9, 1993, pp 850–863.

[6] ImageJ, Image Processing and Analysis in Java,http://rsb.info.nih.gov/ij.

[7] O.K. Kwon, D.G. Sim, R.H. Park, Robust hausdorff distancematching algorithms using pyramidal structures, Pattern Recogni-tion, vol. 34, no. 10, 2001.

[8] Y. Lu, C. Tan, W. Huang, L. Fan, An approach to word imagematching based on weighted Hausdorff distance, Proc. 6th Internat.Conf. on Document Anal. Recogn., 2001, pp 921–925.

[9] J. Paumard, Robust comparison of binary images, Pattern Recog-nition Letters, vol. 18, no. 10, 1997, pp 1057–1063.

[10] S. Ruan, S. Lebonvallet, A. Merabet, J.M. Constans, Tumor Seg-mentation from a Multispectral MRI Images by Using SupportVector Machine Classification, to appear in ISBI’07, 2007, Wash-ington (USA).

[11] B. Takacs, Comparing faces using the modified Hausdorff distance,Pattern Recognition, vol. 31, no. 12, 1998, pp 1873–1881.

[12] C. Zhao, W. Shi, Y. Deng, A new Hausdorff distance for imagematching, Pattern Recognition Letters, vol. 26, 2004, pp 581–586.

(a) (b) (c)

(d) (e) (f)

(j)

Fig. 5. Slices 18 to 23 (a-f) of the Local Distance Volume betweenvolumes 1 (fig 3) and 2 (fig 4). The distances are absolute according toeq. (3). (j) is the distance histogram (logarithmic scale in gray) and thecolormap of images (a-f).

Fig. 6. A three-dimensional view of the LDV between volumes 1 and2.

clearly seen (by high negative distances).

IV. CONCLUSION

A distance measure between volumes has been pre-sented. Using local Hausdorff distances, the Local Dis-tance Volume (LDV) is computed with adaptative sizewindows. This method allows to indicates where thevolumes are similar. The LDV has been successfullyapplied on segmented MRI volumes containing a tumor.The evolution of the tumor between two acquisitions canbe quantified by the obtention of true physical distancesbetween volumes. Moreover, the method allow to trackwhere the tumor has regressed and where it has pro-gressed.

REFERENCES

[1] E. Baudrier, G. Millon, F. Nicolier, R. Seulin, S. Ruan, Hausdorffdistance based multiresolution maps applied to an image similaritymeasure, to appear in Imaging Science Journal.

(a) (b)Fig. 7. Using the directed Hausdorff Distance to show where thetumor has regressed (in white with negative distances) and where ishas progressed (in black with positive distances). (a) is the LDV withdirected distances (only slice 5 is presented corresponding to (b) in 3and 4). (b) is the histogram and the colormap of the directed LDV.

Fig. 8. Using the directed Hausdorff Distance to show where thetumor has regressed (in black with negative distances) and where ishas progressed (in white with positive distances).

[2] E. Baudrier, G. Millon, F. Nicolier, S. Ruan, A fast binary-image comparison method with local-dissimilarity quantification,International Conference on Pattern Recognition (ICPR’06), HongKong, vol. 3, 20-24 aug., 2006, pp 216-219.

[3] G. Borgefors, Distance transformations in digital images, Comput.Vision Graph. Image Process, vol. 34, no. 3, 1986, pp 344–371.

[4] J.C. De Munck et al., Registration of MR and SPECT withoutusing external fiducial markers, Phys. Med. Biol., vol. 43, no. 5,1998, pp 1255-1269.

[5] D. P. Huttenlocher, W. J. Rucklidge, Comparing Images Using theHausdorff Distance, IEEE Transactions on Pattern Analysis andMachine Intelligence, vol. 15, no. 9, 1993, pp 850–863.

[6] ImageJ, Image Processing and Analysis in Java,http://rsb.info.nih.gov/ij.

[7] O.K. Kwon, D.G. Sim, R.H. Park, Robust hausdorff distancematching algorithms using pyramidal structures, Pattern Recogni-tion, vol. 34, no. 10, 2001.

[8] Y. Lu, C. Tan, W. Huang, L. Fan, An approach to word imagematching based on weighted Hausdorff distance, Proc. 6th Internat.Conf. on Document Anal. Recogn., 2001, pp 921–925.

[9] J. Paumard, Robust comparison of binary images, Pattern Recog-nition Letters, vol. 18, no. 10, 1997, pp 1057–1063.

[10] S. Ruan, S. Lebonvallet, A. Merabet, J.M. Constans, Tumor Seg-mentation from a Multispectral MRI Images by Using SupportVector Machine Classification, to appear in ISBI’07, 2007, Wash-ington (USA).

[11] B. Takacs, Comparing faces using the modified Hausdorff distance,Pattern Recognition, vol. 31, no. 12, 1998, pp 1873–1881.

[12] C. Zhao, W. Shi, Y. Deng, A new Hausdorff distance for imagematching, Pattern Recognition Letters, vol. 26, 2004, pp 581–586.

(VDM) - coupes du volume des dissimilarités locales entre S1 et S2.

Figure 7.13 – Exemple de volume des dissimilarités locales. Les deux segmentations (S1 etS2) sont comparées. L’histogramme indique la répartition des distances (en échelle log en gris).Les distances, traduites par des niveaux de gris, sont exprimées en mm.

97

(a) (b) (c)Fig. 2. Segmented MRI.

values in (f) do not reflect the similarity in this case. Asa short conclusion, the LDMap is a useful tool for non-textured and well-defined images (or volumes). It is thususable for segmented slices.

III. RESULTSA. Segmentation MethodMRI images are acquired on a 1.5T GE (General

Electric Co.) machine using an axial 3D IR (InversionRecuperation) T1-weighted sequence, an axial FSE (FastSpin Echo) T2-weighted, an axial FSE PD-weighted se-quence and an axial FLAIR. For one examination, wehave 24 slices of the four signals with a voxel sizeof 0.47 ⇥ 0.47 ⇥ 5.5 mm3. All the slices and all theexaminations are registrated using SPM software.We use the first examination for training SVM using

RBF kernel [10]. The training set was obtained from oneslice by using mouse to choose ten pixels into the tumourand ten outside. We perform the first segmentation of thisvolume by using the SVM model obtained. So, we buildautomatically about one hundred points into the tumourand outside from all the tumoral slices. We retraining asecond SVM and use it for perform a second segmentationfor improve the first result. With this last SVM model,we perform the segmentation of others examinations. Ateach segmentation of examination, we use this 2 stepsprocess for improve the result. Fig 2 contains an exampleof the obtained segmentation. All the nine slices of twosegmented volumes are given in figs 3 and 4.

B. Implementation Details

The computation of the LDV is done with a new ImageJ plugin [6]. On a 2 GHz Opteron, the comparison of the two 512 × 512 × 9 volumes takes 39 seconds.
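Purely as an illustration (this is not the authors' ImageJ plugin), here is a minimal NumPy/SciPy sketch of a closely related local dissimilarity volume, using the distance-transform formulation |A − B| · max(TD_A, TD_B) that appears later in this deck; the voxel spacing and array sizes are illustrative.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def local_distance_volume(a, b, spacing=(5.5, 0.47, 0.47)):
    """Local dissimilarity between two binary volumes, in mm (sketch).

    LDV(p) = |A(p) - B(p)| * max(d_A(p), d_B(p)), where d_X(p) is the
    Euclidean distance from p to the foreground of X, computed with the
    (z, y, x) voxel spacing in mm.  Not the authors' adaptive-window plugin.
    """
    a, b = a.astype(bool), b.astype(bool)
    # distance_transform_edt measures the distance to the nearest zero,
    # so the complement is passed to obtain the distance to the foreground.
    d_a = distance_transform_edt(~a, sampling=spacing)
    d_b = distance_transform_edt(~b, sampling=spacing)
    return np.abs(a.astype(float) - b.astype(float)) * np.maximum(d_a, d_b)

# toy usage with two small segmented volumes (9 slices)
vol1 = np.zeros((9, 64, 64), dtype=bool); vol1[2:7, 20:40, 20:40] = True
vol2 = np.zeros((9, 64, 64), dtype=bool); vol2[2:7, 22:44, 18:38] = True
ldv = local_distance_volume(vol1, vol2)
print(ldv.max())  # largest local distance between the two volumes, in mm
```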

C. Results

The LDV is computed between volumes 1 and 2; the results are presented in fig. 5. As the z-resolution is much coarser than the x–y resolution (with a ratio of 11.7), the obtained distance depends only lightly on the z-axis information: the z-information is taken into account only when the obtained distance is greater than 11.7 mm. Fig. 6 is a three-dimensional representation of the LDV.

Fig. 3. Segmentation of the first volume (slices 18 to 23, a–f) using SVM with an RBF kernel.

Fig. 4. Segmentation of the second volume (slices 18 to 23, a–f) using SVM with an RBF kernel.

As a true distance is used to compute the LDV, the given scalars are true physical distances in mm. The distance histogram indicates that there are many more low distances than high distances. This is coherent, as non-zero LDV values are obtained only in the symmetric difference of the two volumes, which is locally filled with increasing values, starting from zero up to the maximum local distance between the volumes. The maximum distance is 15.56 mm; it represents the largest straight-line distance between the two volumes.

The proposed Local Distance Volume can be used to track more precisely the variations between two volumes. The Hausdorff distance in a window (eq. (3)) is defined as the maximum of two directed distances, and in the present case the directed distances carry useful information: hW(A,B) carries the information on voxels present in vol. 1 and not in vol. 2, and symmetrically hW(B,A) carries the information on voxels present in vol. 2 and not in vol. 1. So hW(A,B) indicates where the tumor has regressed and hW(B,A) where the tumor has progressed. This is illustrated in fig. 7 and fig. 8. The augmentation of the central occlusion is clearly seen (by high negative distances).

(S1) Slices of segmentation 1


(S2) Slices of segmentation 2

Fig. 5. Slices 18 to 23 (a–f) of the Local Distance Volume between volumes 1 (fig. 3) and 2 (fig. 4). The distances are absolute according to eq. (3). (j) is the distance histogram (logarithmic scale in gray) and the colormap of images (a–f).

Fig. 6. A three-dimensional view of the LDV between volumes 1 and 2.


IV. CONCLUSION

A distance measure between volumes has been presented. Using local Hausdorff distances, the Local Distance Volume (LDV) is computed with adaptive-size windows. The method indicates where the volumes are similar. The LDV has been successfully applied to segmented MRI volumes containing a tumor: the evolution of the tumor between two acquisitions can be quantified by obtaining true physical distances between the volumes. Moreover, the method can track where the tumor has regressed and where it has progressed.

REFERENCES

[1] E. Baudrier, G. Millon, F. Nicolier, R. Seulin, S. Ruan, Hausdorff distance based multiresolution maps applied to an image similarity measure, to appear in Imaging Science Journal.

Fig. 7. Using the directed Hausdorff distance to show where the tumor has regressed (in white, negative distances) and where it has progressed (in black, positive distances). (a) the LDV with directed distances (only slice 5 is presented, corresponding to (b) in figs. 3 and 4); (b) the histogram and the colormap of the directed LDV.

Fig. 8. Using the directed Hausdorff distance to show where the tumor has regressed (in black, negative distances) and where it has progressed (in white, positive distances).

[2] E. Baudrier, G. Millon, F. Nicolier, S. Ruan, A fast binary-image comparison method with local-dissimilarity quantification, International Conference on Pattern Recognition (ICPR'06), Hong Kong, vol. 3, 20-24 Aug. 2006, pp. 216-219.

[3] G. Borgefors, Distance transformations in digital images, Comput. Vision Graph. Image Process., vol. 34, no. 3, 1986, pp. 344-371.

[4] J.C. De Munck et al., Registration of MR and SPECT without using external fiducial markers, Phys. Med. Biol., vol. 43, no. 5, 1998, pp. 1255-1269.

[5] D. P. Huttenlocher, W. J. Rucklidge, Comparing images using the Hausdorff distance, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 15, no. 9, 1993, pp. 850-863.

[6] ImageJ, Image Processing and Analysis in Java, http://rsb.info.nih.gov/ij.

[7] O.K. Kwon, D.G. Sim, R.H. Park, Robust Hausdorff distance matching algorithms using pyramidal structures, Pattern Recognition, vol. 34, no. 10, 2001.

[8] Y. Lu, C. Tan, W. Huang, L. Fan, An approach to word image matching based on weighted Hausdorff distance, Proc. 6th Internat. Conf. on Document Anal. Recogn., 2001, pp. 921-925.

[9] J. Paumard, Robust comparison of binary images, Pattern Recognition Letters, vol. 18, no. 10, 1997, pp. 1057-1063.

[10] S. Ruan, S. Lebonvallet, A. Merabet, J.M. Constans, Tumor segmentation from a multispectral MRI images by using support vector machine classification, to appear in ISBI'07, 2007, Washington (USA).

[11] B. Takacs, Comparing faces using the modified Hausdorff distance, Pattern Recognition, vol. 31, no. 12, 1998, pp. 1873-1881.

[12] C. Zhao, W. Shi, Y. Deng, A new Hausdorff distance for image matching, Pattern Recognition Letters, vol. 26, 2004, pp. 581-586.


(LDV) – slices of the local dissimilarity volume between S1 and S2.

Figure 7.13 – Example of a local dissimilarity volume. The two segmentations (S1 and S2) are compared. The histogram shows the distribution of the distances (log scale, in gray). The distances, rendered as gray levels, are expressed in mm.

Local Dissimilarity Volume

t1

t2


Page 31: 2012.09.25 - Local and non-metric similarities between images - why, how and what for ?

Binary Pattern Localization

31

[Figure: the binary pattern P is compared with the image I around each position (x, y), producing the local dissimilarity map CDL_{P,I}(x, y).]

Page 32: 2012.09.25 - Local and non-metric similarities between images - why, how and what for ?

Binary Pattern Localization

32

Local dissimilarities aggregation:

$$\mathrm{MDG}_{I,P} = \sum_{k}\sum_{l} \mathrm{CDL}_{I,P}(k, l) \qquad\text{with}\qquad \mathrm{CDL}_{I,P} = I \cdot \mathrm{TD}_P + P \cdot \mathrm{TD}_I$$

$$\Rightarrow\; D_{I,P} = \mathrm{TD}_I^{2} \star P + I \star \mathrm{TD}_P^{2}$$

Chamfer score [Borgefors, 1988]: how much does I look like P?

sum of two oriented measures
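As a hedged illustration of the aggregation above (a sketch, not the code of the cited work), the two oriented terms can be evaluated for every placement of the pattern at once with distance transforms and a sliding-window correlation; the best location is the global minimum of the resulting score map. Function and variable names are mine.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt
from scipy.signal import fftconvolve

def symmetric_chamfer_map(image, pattern):
    """Aggregated local dissimilarity D_{I,P} at every pattern position (sketch)."""
    I = image.astype(float)
    P = pattern.astype(float)
    td_i = distance_transform_edt(I == 0)   # distance to the foreground of I
    td_p = distance_transform_edt(P == 0)   # distance to the foreground of P
    flip = lambda k: k[::-1, ::-1]          # correlation = convolution with a flipped kernel
    term1 = fftconvolve(td_i ** 2, flip(P), mode="valid")   # image distances hit by pattern pixels
    term2 = fftconvolve(I, flip(td_p ** 2), mode="valid")   # pattern distances hit by image pixels
    return term1 + term2                     # low values = good match locations

# usage: locate a small binary pattern inside a binary image
img = np.zeros((128, 128)); img[40:60, 50:80] = 1
pat = img[38:62, 48:82].copy()               # a crop around the object
score = symmetric_chamfer_map(img, pat)
print(np.unravel_index(np.argmin(score), score.shape))
```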

Page 33: 2012.09.25 - Local and non-metric similarities between images - why, how and what for ?

Binary Pattern Localization : Example

33

Fig. 1. (a) A test image. – (b) A reference pattern; the ideal location in (a) is labeled with G.

that belong to b but not to a. A comparable idea can be formulated from (10) by weighting the contribution of each term (Generalized Quadratic Matcher – GQM):

$$\mathrm{AQM}_{I,P} = \alpha\, dt_I^{2} \star P + \beta\, I \star dt_P^{2}. \qquad (11)$$

A symmetric matcher is obtained with α = β. Asymmetric ones are obtained with α ≠ β, such as α = 1, β = 0 (the chamfer matching) or α = 0, β = 1.

IV. THE LDM-MATCHER CAN PROVIDE FEWER FALSE POSITIVES

The final LDM-matcher (eq. (11), symmetric, with α = β = 1) is compared in this section to the Borgefors chamfer matcher (purely asymmetric, with α = 1 and β = 0). The behavior of the two matchers is compared in a concrete example.

Coast image edges are given in figure 1. A test pattern to be localized in (a) is given in (b). The ideal response of a matcher is marked by label G in 1(a). The Borgefors chamfer matching response is given in figure 2(c) and the LDM-matcher response in figure 2(d). As these two matchers are based on dissimilarity measures, good matches between the reference pattern and the local information in the image are indicated by low values (dark tones in the figures).

At first, a good localization is achieved by the two matchers. In the images of figures 2(c) and 2(d), the absolute minimum value corresponds to coordinates x = 88 and y = 398 (the x-axis is horizontal and the y-axis is vertical). These positions match perfectly the ideal position labeled G in fig. 1(a).

Positive responses (i.e. coordinates where an instance of P can be found) are given by low values (possibly at local minima). For example, low values are obtained with Borgefors chamfer matching for pixels in the area labeled N in fig. 1(a), the large dark area in fig. 2(c). A decision based on low values could conclude that the pattern P can be found

Fig. 2. In (c), image response of the Borgefors chamfer matcher. – In (d), image obtained with the symmetric LDM-matcher. – A good match with the reference pattern is reported by low values (dark gray levels).

in the area N. The LDM-matcher is more selective: for the area-N pixels, high values are obtained by the LDM-matcher. The area-N values can be interpreted as false positives for the Borgefors chamfer matching and true negatives for the LDM-matcher. The LDM-matcher can thus provide fewer false positives in a pattern matching task than the Borgefors chamfer matching.

Here is an interpretation of these observations. The pixel density in area N is higher than in other locations of the image. When the pattern template P is somewhere in area N, there is a high probability that a subset of pixels matching the pattern exists, providing low values for the Borgefors chamfer matching. High values are given by the LDM-matcher as it is a symmetric matcher: low values are only obtained when P matches the image and the image matches the pattern. Consider a drastic example: an area completely filled with foreground pixels and a pattern to be matched. The Borgefors chamfer matching responses would be 0, as the distance transform values (dt_I in eq. (7)) are zero; intuitively, each pixel of the pattern corresponds to a pixel of this area, leading to a positive response of the Borgefors chamfer matching. For the LDM-matcher, all the values of dt_P are selected, leading to a very high score. The Borgefors chamfer matching values thus strongly depend on the image pixel density.

V. THE LDM-MATCHER IS ROBUST

The LDM-matcher and the Borgefors chamfer matcher are compared here with respect to noise robustness: how good are the responses when a noisy pattern is given? Three images taken from a lab project are given in figure 3 [4]. The dimensions of these images are 256 × 256. Patterns are 51 × 51 sub-images cropped from random positions in the full image. Each pattern is then modified by inverting a given rate of its pixels. Finally the LDM-matcher and the Borgefors chamfer matcher are applied. The estimated pattern position is obtained by finding the global minimum in the image response of the

Chamfer score MDG

Page 34: 2012.09.25 - Local and non-metric similarities between images - why, how and what for ?

Binary Pattern Localization : Example

34

I

[Morain-Nicolier et al. 2009]

Page 35: 2012.09.25 - Local and non-metric similarities between images - why, how and what for ?

Brain Internal Structures Segmentation

35

[Embedded figure: 3D human brain appearance — coronal, axial and sagittal MR views.]

Page 36: 2012.09.25 - Local and non-metric similarities between images - why, how and what for ?

Brain Internal Structures Segmentation

36

[Embedded figure: brain MR slices with internal structures labelled.]

putamen

caudate

thalamus

Page 37: 2012.09.25 - Local and non-metric similarities between images - why, how and what for ?

Brain Internal Structures Segmentation

37


Atlas


Test Volume

Deformation ?

Page 38: 2012.09.25 - Local and non-metric similarities between images - why, how and what for ?

Brain Internal Structures Segmentation

38

Optimal transformation :

$$T^{*} = \arg\min_{T \in \mathcal{T}} \Big\{ \underbrace{E_{sim}(B, A \circ T)}_{\text{similarity measure}} + \underbrace{E_{reg}(T)}_{\text{regularization}} \Big\}$$

Classic solution:
$$E_{sim}(B, A \circ T) = \tfrac{1}{2}\,\|B - A \circ T\|^{2}$$

Displacement field:
$$u(p) = -\frac{(A \circ T - B)(p)}{(A \circ T - B)^{2}(p) + \|\nabla B(p)\|^{2}}\,\nabla B(p)$$
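For concreteness, a minimal sketch of one iteration of such a demons-style update in 2D, assuming a Gaussian smoothing of the displacement field as the regularisation; the step count, σ and image sizes are illustrative choices, not the thesis' implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def demons_step(moving, fixed, disp, sigma=2.0, eps=1e-8):
    """One simplified demons-style update; disp has shape (2, H, W)."""
    grid = np.mgrid[0:fixed.shape[0], 0:fixed.shape[1]].astype(float)
    warped = map_coordinates(moving, grid + disp, order=1, mode="nearest")
    diff = warped - fixed                       # (A ∘ T − B)
    gy, gx = np.gradient(fixed)                 # ∇B
    denom = diff ** 2 + gx ** 2 + gy ** 2 + eps
    update = np.stack([-diff * gy / denom, -diff * gx / denom])
    disp = disp + update
    return gaussian_filter(disp, sigma=(0, sigma, sigma))   # E_reg: smooth the field

# usage: a few iterations on a toy pair of images
fixed = np.zeros((64, 64)); fixed[20:40, 20:40] = 1.0
moving = np.zeros((64, 64)); moving[24:44, 23:43] = 1.0
disp = np.zeros((2, 64, 64))
for _ in range(50):
    disp = demons_step(moving, fixed, disp)
```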

Page 39: 2012.09.25 - Local and non-metric similarities between images - why, how and what for ?

Brain Internal Structures Segmentation

39

Shape constraint introduction :

$$\Phi_S(p) = \begin{cases} 0 & p \in C \\ d(p, C) & p \in S \\ -d(p, C) & p \in \neg S \end{cases}$$

other early work, such as [33], where a chamfer distance function was used for the improvement of sulcal-based registration.

Let $\Phi_S : \Omega \to \mathbb{R}$ be a distance transform of a shape S, which defines a partition of the image domain Ω. Let ω denote the region enclosed by S and Ω∖ω the background region; the shape representation is

$$\Phi_S(p) = \begin{cases} 0, & p \in S \\ d(p, S), & p \in \omega \\ -d(p, S), & p \in \Omega\setminus\omega \end{cases} \qquad (6)$$

where d(p, S) is the minimum distance between the image point p and the shape S (the Euclidean distance is used). Fig. 1 is an example of the shape representation: Fig. 1(a) shows the shape of the putamen in one brain MRI slice and Fig. 1(b) its distance representation map — the darker the intensity, the farther from the edge of the putamen. This distance map provides a deformation measure with respect to the original shape, and the gradient of the shape distance map is normal to the shape.

This representation provides supplementary shape information, related to the intensity image, that can be conveniently used as a new similarity term

$$E^{shape}_{SSD}(\Phi_S(A), \Phi_S(A \circ T)) = \tfrac{1}{2}\,\|\Phi_S(A \circ T) - \Phi_S(A)\|^{2} \qquad (7)$$

where $\Phi_S(A)$ is the shape representation of the structure in the atlas on the reference image A and $\Phi_S(A \circ T)$ the shape representation of the corresponding structure in the deformed atlas after the transformation T. Under the constraint of the shape similarity term, the optimal transform leads to a final segmented structure shape as close as possible to that of the atlas, so the overall cost function becomes

$$E = E_{sim}(B, A \circ T) + E_{reg}(T) = E^{intensity}_{SSD}(B, A \circ T) + E^{shape}_{SSD}(\Phi_S(A), \Phi_S(A \circ T)) + E_{reg}(T) \qquad (8)$$

By extending the original Demons registration algorithm [25], an optimal solution can be obtained by an alternating strategy. The displacement vectors related to the intensity and to the shape at a point p of the regions of interest are

$$u_{intensity}(p) = -\frac{(A \circ T(p) - B(p))}{(A \circ T(p) - B(p))^{2} + \|\nabla B(p)\|^{2}}\,\nabla B(p) \qquad (9)$$

$$u_{shape}(p) = -\frac{(\Phi_S(A \circ T(p)) - \Phi_S(A(p)))}{(\Phi_S(A \circ T(p)) - \Phi_S(A(p)))^{2} + \|\nabla\Phi_S(A(p))\|^{2}}\,\nabla\Phi_S(A(p)) \qquad (10)$$

and the combined displacement vector is

$$u(p) = (1 - \beta)\,u_{intensity}(p) + \beta\,u_{shape}(p) \qquad (11)$$

where the parameter β ∈ [0, 1] balances the contributions of the intensity metric and the shape metric (β = 0: pure intensity, β = 1: pure shape). A simple piecewise-linear function, depicted in Fig. 2, adjusts the weight adaptively: x is the intensity of the target image in the region of interest; x₀ = mean(I_structure) is the average intensity of the structure region enclosed by the boundary of the deformed atlas structure; x₀⁻ = x₀ − λσ_structure and x₀⁺ = x₀ + λσ_structure are the key intensity values where the intensity metric and the shape metric have the same importance, σ_structure being the intensity standard deviation of that region and λ an empirical parameter controlling the intensity range in which the shape metric is more or less important than the intensity metric, according to whether the intensity is near to or far from the mean x₀.

In our previous preliminary work [34], the structures of interest were registered and segmented one by one, which is time consuming, especially for multi-structure segmentation. An improvement is proposed here: the new strategy registers and segments multiple objects with very similar intensities in one pass. Let s_i, i = 1, …, N, be the N structures of interest and $\Phi_{s_i}(p)$ the shape representation of s_i at point p. The N shape representations are integrated into a unified shape map M(p) according to

$$M(p) = \begin{cases} \Phi_{s_i}(p) & \text{if } \Phi_{s_i}(p) \ge 0 \\ \max_i(\Phi_{s_i}(p)) & \text{if } \Phi_{s_i}(p) < 0 \text{ and } |\max_i(\Phi_{s_i}(p))| \le \varepsilon \\ 0 & \text{if } \Phi_{s_i}(p) < 0 \text{ and } |\max_i(\Phi_{s_i}(p))| > \varepsilon \end{cases} \qquad (12)$$

where the threshold is $\varepsilon = \min_i\{\max_p(\Phi_{s_i}(p))\}$. Therefore $\Phi_S(\cdot)$ in eqs. (7), (8) and (10) is replaced by M(·) in the implementation procedure.

3.2. Topology correction strategy

As mentioned, topology preservation of the deformation field is important in a registration-segmentation method for the normal brain. Although the bijectivity and smoothing techniques adopted when optimizing a cost function like eq. (4) or (8) help prevent topology changes, this is hard to verify in theory. The topology preservation problem of the Demons algorithm has been investigated in recent years [35–37]; cases without topology preservation have been reported in published experiments [35] and observed in ours. By optimizing the cost function over a space of diffeomorphisms, a diffeomorphic Demons algorithm was proposed [36,37]. In this paper, a different, simple topology correction method is proposed, based on vector field analysis.

Let T = (X, Y, Z) denote the deformation field, where (X, Y, Z) is the new position of the point p(x, y, z) after deformation. Then its

Fig. 1. An example of the shape representation: (a) the putamen, (b) its shape distance representation map.

Fig. 2. The function of the balance parameter β.

X. Lin et al. / Pattern Recognition 43 (2010) 2418–2427

Φ_S


S

Page 40: 2012.09.25 - Local and non-metric similarities between images - why, how and what for ?

Brain Internal Structures Segmentation

40

Shape cost function:
$$E^{forme}_{sim}(A, A \circ T) = \tfrac{1}{2}\,\|\Phi_A - \Phi_{A \circ T}\|^{2}$$

Displacement field:
$$u(p) = (1 - \beta)\,u_{intensite}(p) + \beta\,u_{forme}(p)$$

with
$$u_{intensite}(p) = -\frac{(A \circ T - B)(p)}{(A \circ T - B)^{2}(p) + \|\nabla B(p)\|^{2}}\,\nabla B(p)$$

$$u_{forme}(p) = -\frac{(\Phi_{A \circ T} - \Phi_A)(p)}{(\Phi_{A \circ T} - \Phi_A)^{2}(p) + \|\nabla\Phi_A(p)\|^{2}}\,\nabla\Phi_A(p)$$
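A compact sketch of how the two forces above can be assembled; the helper names are mine, the signed-distance construction of Φ via two Euclidean distance transforms is an assumption, and the contour is approximated by the binary mask boundary.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance(mask):
    """Shape representation Phi: > 0 inside the structure, < 0 outside (sketch)."""
    mask = mask.astype(bool)
    return distance_transform_edt(mask) - distance_transform_edt(~mask)

def shape_force(phi_warped, phi_ref, eps=1e-8):
    """Demons-style force driven by the shape maps instead of the intensities
    (the u_forme term above), for 2D arrays."""
    diff = phi_warped - phi_ref
    gy, gx = np.gradient(phi_ref)
    denom = diff ** 2 + gx ** 2 + gy ** 2 + eps
    return np.stack([-diff * gy / denom, -diff * gx / denom])

def combined_update(u_intensity, u_shape, beta=0.5):
    """u = (1 - beta) u_intensity + beta u_shape, with beta in [0, 1]."""
    return (1.0 - beta) * u_intensity + beta * u_shape

# usage sketch:
#   phi_atlas  = signed_distance(atlas_structure_mask)
#   phi_warped = signed_distance(deformed_structure_mask)
#   u = combined_update(u_intensity, shape_force(phi_warped, phi_atlas), beta=0.5)
```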

Page 41: 2012.09.25 - Local and non-metric similarities between images - why, how and what for ?

Brain Internal Structures Segmentation

41

Figure 8.4 – Example of a segmentation obtained with the shape constraint: (a) reference image (atlas) with the structures of interest highlighted; (b) shape representation of the left putamen of the atlas; (c) image to be segmented; (d) segmentation obtained with the intensity constraint only; (e) shape representation of the deformed putamen obtained from (d); (f) segmentation obtained when the shape constraint is included.

Atlas


Volume


segmentation


segmentation


putamen shape


putamen shape

Page 42: 2012.09.25 - Local and non-metric similarities between images - why, how and what for ?

Brain Internal Structures Segmentation

42

Structure   | Caudate L.       | Caudate R.       | Putamen L.       | Putamen R.       | Thalamus L.      | Thalamus R.
Method      | with  | without  | with  | without  | with  | without  | with  | without  | with  | without  | with  | without
max         | 0.812 | 0.729    | 0.813 | 0.691    | 0.831 | 0.752    | 0.829 | 0.759    | 0.857 | 0.806    | 0.850 | 0.788
min         | 0.539 | 0.537    | 0.567 | 0.571    | 0.694 | 0.633    | 0.743 | 0.666    | 0.735 | 0.680    | 0.747 | 0.665
mean        | 0.739 | 0.699    | 0.717 | 0.658    | 0.767 | 0.725    | 0.783 | 0.739    | 0.809 | 0.740    | 0.801 | 0.730
std. dev.   | 0.072 | 0.049    | 0.062 | 0.031    | 0.037 | 0.029    | 0.031 | 0.027    | 0.031 | 0.032    | 0.030 | 0.032

Table 8.2 – Quantitative comparison of the segmentations obtained with and without the shape constraint ("Method" row), for several structures. The minimum, maximum and mean values of the overlap index (defined below) are given; in the original, the best results are indicated in bold.

Shape constraint contribution (15 segmentations):

overlap index $= \dfrac{2\,\mathrm{TP}}{2\,\mathrm{TP} + \mathrm{FN} + \mathrm{FP}}$

[Lin PhD - Pattern Recognition 2010]
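The index used in the table is the usual overlap (Dice-type) measure; a small sketch for checking a segmentation against a reference, with illustrative arrays:

```python
import numpy as np

def overlap_index(seg, ref):
    """2 TP / (2 TP + FN + FP), i.e. the Dice coefficient between a
    segmentation and its reference (both binary arrays)."""
    seg, ref = seg.astype(bool), ref.astype(bool)
    tp = np.logical_and(seg, ref).sum()
    fp = np.logical_and(seg, ~ref).sum()
    fn = np.logical_and(~seg, ref).sum()
    return 2.0 * tp / (2.0 * tp + fn + fp)

# example: two overlapping squares
a = np.zeros((64, 64), bool); a[10:30, 10:30] = True
b = np.zeros((64, 64), bool); b[12:32, 12:32] = True
print(round(overlap_index(a, b), 3))   # 0.81, in the same range as the table values
```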

Page 43: 2012.09.25 - Local and non-metric similarities between images - why, how and what for ?

Local Similarities, Future Directions

43

Page 44: 2012.09.25 - Local and non-metric similarities between images - why, how and what for ?

Gray-Level Local Dissimilarity

44

$$\mathrm{CDL}_{A,B}(p) = |A - B|\;\max\big(\mathrm{TD}_A, \mathrm{TD}_B\big)$$

Distance transform generalizations:

$$d_{GWD}(a, b) = \tfrac{1}{2}\,(I(a) + I(b)) \times \|a - b\|$$

$$d_{WDOCS}(a, b) = \sqrt{(I(a) + I(b))^{2} + \|a - b\|^{2}}$$

Foreground / background distinction needed

CDLSSIM

Distance transforms

[Morain-Nicolier et al. 2009]
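The gray-weighted distance d_GWD is a shortest-path distance in which each step is weighted by the mean intensity of its endpoints. As a rough sketch only (seed points, connectivity and the test image are illustrative, and this is not the cited implementation):

```python
import heapq
import numpy as np

def gray_weighted_distance(intensity, seeds):
    """Gray-weighted distance transform (Dijkstra sketch, 8-connectivity).

    Edge cost between neighbouring pixels a and b is
    0.5 * (I(a) + I(b)) * ||a - b||; the transform is the cheapest path
    cost from any of the seed pixels."""
    h, w = intensity.shape
    dist = np.full((h, w), np.inf)
    heap = []
    for (r, c) in seeds:
        dist[r, c] = 0.0
        heapq.heappush(heap, (0.0, r, c))
    steps = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
    while heap:
        d, r, c = heapq.heappop(heap)
        if d > dist[r, c]:
            continue                      # stale queue entry
        for dr, dc in steps:
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w:
                nd = d + 0.5 * (intensity[r, c] + intensity[nr, nc]) * np.hypot(dr, dc)
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    heapq.heappush(heap, (nd, nr, nc))
    return dist

# usage: distance from the top-left corner over a gradient image
img = np.linspace(0.0, 1.0, 32 * 32).reshape(32, 32)
print(gray_weighted_distance(img, [(0, 0)])[-1, -1])
```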

Page 45: 2012.09.25 - Local and non-metric similarities between images - why, how and what for ?

Gray-Level Local Dissimilarity

45

$$\mathrm{CDL}_{A,B}(p) = |A - B|\;\max\big(\mathrm{TD}_A, \mathrm{TD}_B\big)$$

Hypograph [Molchanov, 2003] (other solutions are available):
$$I_t = \{(p, t) \in \mathbb{R}^2 \times \mathbb{R} : I(p) \ge t\}$$

$$\widetilde{\mathrm{TD}}_I = \frac{1}{b - a}\sum_{i=a}^{b} \mathrm{TD}_{I_i}\,\Delta i$$

No foreground / background

Distance transforms

[Work in Progress - Morain-Nicolier / Bouchot]
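A minimal sketch of the threshold-averaged distance transform suggested by the formula above, assuming the level sets I_i = (I ≥ i) and a uniform threshold step (the normalisation convention is approximate):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def averaged_distance_transform(image, a, b):
    """Average the Euclidean distance transforms of the level sets
    I_i = (image >= i), i = a..b: no foreground/background choice is needed."""
    acc = np.zeros(image.shape, dtype=float)
    levels = range(a, b + 1)
    for i in levels:
        level_set = image >= i
        if level_set.any():               # an empty level set carries no distance information
            acc += distance_transform_edt(~level_set)
    return acc / len(levels)

# usage on a toy gray-level ramp
img = (np.arange(64 * 64).reshape(64, 64) % 256).astype(float)
td = averaged_distance_transform(img, a=0, b=255)
```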

Page 46: 2012.09.25 - Local and non-metric similarities between images - why, how and what for ?

Gray-Level Loc. Dissim. : application

46

18FDG PET Images

When it decays, a positron emits two 511 keV gamma photons in opposite directions. To detect these photons, the patient is placed inside a ring of elementary detectors, each consisting of a crystal and a photomultiplier, or of an array of small crystals and photomultipliers. On reaching the crystal, the primary gamma photon interacts and emits light photons, which are collected by the photomultipliers. The amount of light is proportional to the energy of the radiation, and the light must decay very quickly in the crystal [6]. Coincidence detection works as follows: when a 511 keV event is measured on one detector, all opposing detectors are examined to see whether another event arrives within a very short interval (the coincidence time, or temporal window). If so, a photon pair is known to have been emitted somewhere on the line joining the two detectors. Locating its origin requires a dense crystal to stop the 511 keV photons, an accurate measurement of the radiation energy, and a very fast measurement of the coincidence time (on the order of 6 to 15 ns) [3].

2. Tools for analysing functional image sequences

Radiotherapy and patient follow-up during and after treatment are among the applications that benefit from functional image sequences. Typically, a single examination can produce several hundred images. Analysis tools are therefore needed to extract the relevant information efficiently, in a synthetic form, and to ease its storage.

Figure 6. Huge amount of data for a single examination (an x, y, z image volume for each time point t).

Figure. Dynamic PET image sequence (frames t1 to t11).

Page 47: 2012.09.25 - Local and non-metric similarities between images - why, how and what for ?

Gray-Level Loc. Dissim. : application

47

18FDG PET Images

[Plot: time–activity curves over the 11 frames, for the tumour and for the aorta (values from 0 to 8000).]

Page 48: 2012.09.25 - Local and non-metric similarities between images - why, how and what for ?

Gray-Level Local Dissim. : application

48

Figure. Segmentation result.

Figure. Result of the dissimilarity map. LDM between mean images — Segmentation

[Ketata PhD - in progress]

Page 49: 2012.09.25 - Local and non-metric similarities between images - why, how and what for ?

Gray-Level Local Dissimilarity

49

Other internal measures

However, the principle of the local measure does not rely on the properties of the Hausdorff distance. A local measure can be defined from any measure, provided it satisfies certain properties [Baudrier, 2005].

9.1.2.1 Generalized local dissimilarity map

Theorem 9.2. Let A and B be two binary images and $\delta_W(A,B)$ a local dissimilarity measure in a window W satisfying the following properties:

$$\delta_W(A,B) = 0 \iff A_{|W} = B_{|W} \qquad (9.9)$$

$$\exists K > 0,\ \delta_W(A,B) \le K, \qquad (9.10)$$

$$W_1 \subset W_2 \Rightarrow \delta_{W_1}(A,B) \le \delta_{W_2}(A,B), \qquad (9.11)$$

then a criterion can be defined to fix the local dissimilarity measure.

The properties are interpreted as follows:

(9.9): if the local measure is zero, the extracts of the two images are identical;

(9.10): the values of the local measure are bounded by a value independent of the window;

(9.11): the local measure grows as the window grows.

Many different criteria are possible, depending on the chosen measure. For the Hausdorff distance, the criterion is: if the measure in the window is maximal and the maximal value K has not been reached, the window is enlarged. This criterion suits a max–min distance well; it can be relaxed for measures that reach their maximal value less easily. One possible criterion is the following.

Corollary 9.3. Given a parameter α ∈ [0, 1], if the measure in the window is greater than αM, where M is the maximal value attainable in the window, and the maximal value K has not been reached, then the window is enlarged.

Algorithm. For each pixel p:

1. n ← k (initialise the window size)

2. while $\delta_{W(p,n)}(A,B) \ge \alpha M$, do n ← n + 1

3. $\mathrm{CDL}_{A,B}(p) = \delta_{W(p,n-1)}(A,B)$


Compatible with pixel to pixel diffs (e.g. PSNR)
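A brute-force sketch of the generic adaptive-window algorithm above, using the windowed Hausdorff distance as the internal measure δ_W and taking M = n (the largest distance representable in a window of half-size n); the safety bound n_max and the border handling are my additions.

```python
import numpy as np

def windowed_hausdorff(wa, wb, cap):
    """Hausdorff distance between the foregrounds of two window extracts,
    capped at `cap` (the largest distance representable in the window)."""
    pa, pb = np.argwhere(wa), np.argwhere(wb)
    if len(pa) == 0 and len(pb) == 0:
        return 0.0
    if len(pa) == 0 or len(pb) == 0:
        return float(cap)
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=2)
    return float(min(cap, max(d.min(axis=1).max(), d.min(axis=0).max())))

def generalised_ldm(a, b, k=1, alpha=0.9, n_max=8):
    """At each pixel, grow the window while the measure stays >= alpha * M,
    then keep the last accepted value (sketch of the algorithm above)."""
    h, w = a.shape
    out = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            win = lambda img, n: img[max(0, y - n):y + n + 1, max(0, x - n):x + n + 1]
            n = k
            while n < n_max and windowed_hausdorff(win(a, n), win(b, n), n) >= alpha * n:
                n += 1
            out[y, x] = windowed_hausdorff(win(a, n - 1), win(b, n - 1), n - 1)
    return out

# usage on a small shifted square
a = np.zeros((24, 24), int); a[8:16, 8:16] = 1
b = np.roll(a, 2, axis=1)
cdl = generalised_ldm(a, b)
```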

Page 50: 2012.09.25 - Local and non-metric similarities between images - why, how and what for ?

Gray-Level Local Dissimilarity

50

other internal representations

$$D_H(A, B) = \max\Big(\max_{p \in A}\, d(p, B),\; \max_{p \in B}\, d(p, A)\Big)$$

Image = set of pixels

Image = set of graphical elements

• Segmentations

• Sparse representation

• SIFT

• internal distance needed = distance between two elements (x, y, small image/
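The same Hausdorff construction carries over verbatim once an internal distance d between two graphical elements is supplied; a sketch with a made-up element distance mixing position and descriptor (the weighting and the element tuples are purely illustrative):

```python
import numpy as np

def hausdorff(set_a, set_b, d):
    """Hausdorff distance between two sets of graphical elements, for any
    internal element-to-element distance d."""
    directed = lambda xs, ys: max(min(d(x, y) for y in ys) for x in xs)
    return max(directed(set_a, set_b), directed(set_b, set_a))

def element_distance(e1, e2, w=0.5):
    """Illustrative internal distance: spatial distance plus a weighted
    descriptor distance, for elements given as (x, y, descriptor)."""
    (x1, y1, f1), (x2, y2, f2) = e1, e2
    return np.hypot(x1 - x2, y1 - y2) + w * np.linalg.norm(np.asarray(f1) - np.asarray(f2))

A = [(10, 12, [0.1, 0.9]), (40, 8, [0.7, 0.2])]
B = [(11, 13, [0.1, 0.8]), (42, 9, [0.6, 0.2]), (5, 50, [0.3, 0.3])]
print(hausdorff(A, B, element_distance))
```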

Page 51: 2012.09.25 - Local and non-metric similarities between images - why, how and what for ?

Non-Metric Similarities

51

a work in progress ...

Page 52: 2012.09.25 - Local and non-metric similarities between images - why, how and what for ?

Non-Metric Similarities, Why ?

52

Page 53: 2012.09.25 - Local and non-metric similarities between images - why, how and what for ?

Binary Pattern Localization

53

[Diagram: a pattern P is compared with the sub-image I(x, y) extracted from I at each position (x, y); this yields the local dissimilarity map CDL_{P, I(x,y)}.]

Page 54: 2012.09.25 - Local and non-metric similarities between images - why, how and what for ?

Binary Pattern Localization

54

Local dissimilarities aggregation:

MDG_{I,P} = Σ_k Σ_l CDL_{I,P}(k, l)

with CDL_{I,P} = I · TD_P + P · TD_I

⇒ D_{I,P} = TD²_I ∗ P + I ∗ TD²_P

(a sketch of this aggregation follows below)

Chamfer score [Borgefors, 1988]: how much does I look like P?

sum of two oriented measures
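A minimal sketch of this aggregation in Python, assuming SciPy: for every placement of the pattern P inside the image I it sums the two oriented terms (pattern pixels read in TD_I, window pixels of I read in TD_P). Whether the distance transforms should enter squared, as the D_{I,P} line suggests, is left aside here; the function name and the use of correlate2d are illustrative choices.

import numpy as np
from scipy.ndimage import distance_transform_edt
from scipy.signal import correlate2d

def pattern_localization_map(I, P):
    # I : binary image (2D boolean array), P : smaller binary pattern
    td_I = distance_transform_edt(~I)            # distance from every pixel of I to the object of I
    td_P = distance_transform_edt(~P)            # distance from every pixel of P to the object of P
    # oriented term 1: pattern pixels looked up in TD_I (chamfer-like score of P in I)
    term1 = correlate2d(td_I, P.astype(float), mode='valid')
    # oriented term 2: image pixels of the current window looked up in TD_P
    term2 = correlate2d(I.astype(float), td_P, mode='valid')
    return term1 + term2                         # low values = good candidate positions for P

The position minimising the returned map is the best candidate location of P in I.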

Page 55: 2012.09.25 - Local and non-metric similarities between images - why, how and what for ?

Non-Metric Similarity

55

similarity = distance ?

d(x, y) = 0 ⇔ x = y

d(x, y) = d(y, x)

d(x, z) ≤ d(x, y) + d(y, z)

Identity

Symmetry

Triangular inequality

Some classical similarities :

• Minkowski

• Jeffrey divergence

• χ²

• Levenshtein distance

• Earth Mover Distance

• Hausdorff distance

• ...

Page 56: 2012.09.25 - Local and non-metric similarities between images - why, how and what for ?

Non-Metric Similarity : Psychological Aspects

• Recognition task :

P(A|A) < P(A|B)

⇒ d(A,A) > d(A,B) !

• auto-similarity

= prototypality

≈ complexity, number of attributes, ...

56

d(x, y) = 0 ⇔ x = y

d(x, y) = d(y, x)

d(x, z) ≤ d(x, y) + d(y, z)

Identity

Symmetry

Triangular inequality

Page 57: 2012.09.25 - Local and non-metric similarities between images - why, how and what for ?

57

d(x, y) = 0 ⇔ x = y

d(x, y) = d(y, x)

d(x, z) ≤ d(x, y) + d(y, z)

Identity

Symmetry

Triangular inequality

Prototypality ≈ «Good shape»
[Tversky 1977] ⇒ symmetric, regular, stable, harmonious, homogeneous, ... [Gestalt]

Non-Metric Similarity : Psychological Aspects

Page 58: 2012.09.25 - Local and non-metric similarities between images - why, how and what for ?

58

d(x, y) = 0 ⇔ x = y

d(x, y) = d(y, x)

d(x, z) ≤ d(x, y) + d(y, z)

Identity

Symmetry

Triangular inequality

B. Pitt looks like my brother

My brother looks like B. Pitt

Non-Metric Similarity : Psychological Aspects

(B. Pitt is more prototypical)

Page 59: 2012.09.25 - Local and non-metric similarities between images - why, how and what for ?

59

d(x, y) = 0 ⇔ x = y

d(x, y) = d(y, x)

d(x, z) ≤ d(x, y) + d(y, z)

Identity

Symmetry

Triangular inequality

Figure 9.8 – An example of a violation of the transitivity of resemblance.

9.2.4.3 The triangle inequality is not satisfied

Finding examples of violations of the triangle inequality seems more difficult [Ashby et Perrin, 1988]. The inequality to be tested, if D is a dissimilarity measure (the inverse of a resemblance), is

D(A,B) + D(B,C) ≥ D(A,C).   (9.26)

Several examples from the literature are more a violation of the transitivity property than a violation of the triangle inequality. The example given in figure 9.8, taken from [Veltkamp et Hagedoorn, 2000], is emblematic. According to the authors, the distance from the man to the centaur, as well as that from the centaur to the horse, is small, while the distance from the man to the horse is large, which would violate the triangle inequality.

This reasoning does not hold. While one can state that the man resembles the centaur and that the centaur resembles the horse, one cannot state that the man resembles the horse. This is a violation of transitivity, not of the triangle inequality. The authors' reasoning cannot be followed because the claim that the distance from the man to the centaur is small is rather arbitrary. Formally, one can admit that D(man, centaur) < D(man, horse) as well as D(centaur, horse) < D(man, horse); the deduction in this case is that D(man, centaur) + D(centaur, horse) < 2 D(man, horse), and not D(man, centaur) + D(centaur, horse) < D(man, horse), which would be a violation of the triangle inequality.
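For instance, with illustrative values D(man, centaur) = 1, D(centaur, horse) = 1 and D(man, horse) = 1.8, the judgements "the man resembles the centaur" and "the centaur resembles the horse" can coexist with "the man does not resemble the horse", while the triangle inequality still holds, since 1 + 1 = 2 ≥ 1.8: transitivity fails, the inequality does not.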

Psychologists consider this inequality difficult to test, partly because of a distinction between "similarity" as a perceptual relation between two stimuli and "similarity" as a cognitive judgement. The tendency is nevertheless to suspect a violation of this inequality [Ashby et Perrin, 1988].

9.2.5 Other models from psychology

Tversky's contrast model starts from the observation that the properties defining a mathematical distance are violated, and proposes a measure. But it is obviously not...


[Veltkamp et Hagedoorn, 2000]

• Transitivity not verified
• Suspicions about the triangular inequality [Ashby et Perrin, 1988]
➡ 4 inequalities [Tversky]: «corner inequality»

Non-Metric Similarity : Psychological Aspects

Page 60: 2012.09.25 - Local and non-metric similarities between images - why, how and what for ?

Non-Metric Similarities, How ?

60

Page 61: 2012.09.25 - Local and non-metric similarities between images - why, how and what for ?

Non-Metric Similarity : How ?

61

Psychology inspired measures

S(A,B) = f(A ∩ B) − α f(A − B) − β f(B − A)   [Tversky]

D(A,B) = F [d(A,B) + r(A) + c(B)] [Nosofsky]

A connection with Local Dissimilarities (oriented terms A·TD_B and B·TD_A):

CRL(A,B) = A·B·max(TD_¬A, TD_¬B) − α·A·TD_B − β·B·TD_A
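As a minimal sketch, the CRL map can be written directly with distance transforms in Python (assuming SciPy). The reading of the symbols (α, β, the complements ¬A and ¬B) follows the reconstruction above, and the default weights are illustrative.

import numpy as np
from scipy.ndimage import distance_transform_edt

def local_resemblance_map(A, B, alpha=0.5, beta=0.5):
    # A, B : binary images (2D boolean numpy arrays)
    td_A     = distance_transform_edt(~A)    # distance to the set A
    td_B     = distance_transform_edt(~B)    # distance to the set B
    td_not_A = distance_transform_edt(A)     # distance to the complement of A
    td_not_B = distance_transform_edt(B)     # distance to the complement of B
    common   = (A & B) * np.maximum(td_not_A, td_not_B)   # reward shared pixels
    return common - alpha * A * td_B - beta * B * td_A    # penalise the two oriented differences

With α ≠ β the map, and any score aggregated from it, is asymmetric in A and B, in the spirit of Tversky's contrast model.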

Page 62: 2012.09.25 - Local and non-metric similarities between images - why, how and what for ?

Non-Metric Similarities, What for ?

62

Page 63: 2012.09.25 - Local and non-metric similarities between images - why, how and what for ?

NM Similarity : A Practical Motivation

63

Let's do some chemistry: «similar structure» ≈ «similar» properties

tives including moderately active compounds (see Table 3). These numbers were very similar but not identical to those reported by Chen and Brown, which we attribute to the use of slightly different (updated) versions of NCI. We then calculated fingerprint bit densities for active and inactive NCI compounds (Table 3). For MACCS and TGD, on average five more bits were set on in active than in inactive compounds, which rationalizes the preference for a values greater than 0.5 and explains the slight asymmetry observed in the hit-rate maps of Chen and Brown that were produced under variation of a.[8]

Similarity coefficients other than the Tversky coefficient are known to have varying degrees of size dependence.[5,10,11] For example, some coefficients preferentially detect compounds of different size and differences in fingerprint bit densities are generally found to affect compound retrieval.[10] Furthermore, the Tanimoto coefficient has different preferences for molecular size ranges in similarity calculations (selection on the basis of high values) and diversity calculations (low values), which can be attributed to size-dependent differences in fingerprint bit densities.[11] The study of Chen and Brown went beyond the analysis of characteristics of symmetrical similarity coefficients and provided evidence for improved compound recall when Tversky similarity calculations were carried out in an asymmetrical manner. We have been able to demonstrate that the apparent preference for asymmetrical Tversky coefficients is a direct consequence of systematic differences in fingerprint bit density between reference and database compounds.

For bit densities that correlate with molecular size, we need to distinguish three principal cases for the assessment of Tversky similarity. 1) If active compounds are on average larger than database compounds, a>0.5 produces the highest hit rates. 2) If active compounds are smaller than database compounds (such as for class NNI in our analysis), a<0.5 gives the highest hit rates. 3) If there are no differences in bit densities and size, Tversky similarity calculations are independent of the a parameter and always symmetrical. Are there consequences for similarity searching? Sets of active molecules available for similarity search calculations are typically optimized leads or drug candidates taken from the scientific or patent literature. These compounds tend to be larger than average database molecules. It is therefore not surprising that many activity classes used in benchmark calculations produce high hit rates

Figure 4. Statistical properties of distribution overlap. The overlap OV between intraclass and interclass Tversky similarity value distributions is shown as a function of the a parameter. The representation is according to Figure 2.

Table 2. Optimal a values.[a]

        MACCS   TGD   PDR-FP
BEN     0.6     0.5   –
CAT     0.6     0.7   –
HH2     0.8     0.6   –
NNI     0.2     0.1   –
TNF     0.6     0.8   –

[a] a values producing minimal overlap between intraclass and class-NCI Tversky similarity value distributions are shown as determined by graphical analysis of Figure 4. PDR-FP calculations are independent of a values because of its constant bit density. Therefore the overlap is also constant (see Figure 4c).


Asymmetric Fingerprint Similarity Searching

[Wang et al., 2007]

compounds) according to their similarity values s(probe,target) calculated by Equation (1), from the most to least similar. The a parameter in Equation (1) was systematically adjusted from 0 to 1 with a small increment of 0.01. As a result, the average hit rate with the change of both the a value and the number of the nearest neighbors is demonstrated in Figures 2 and 3 for the NCI anti-AIDS and J&J corporate databases, respectively. The hit rate was calculated as n/m, for n active compounds present among the top m nearest neighbors. This is depicted in the color scale, with red representing a high hit rate and the blue representing low.

In Figures 2 and 3, all the hit-rate maps clearly show an asymmetric color pattern with the red region inclined to the right side, indicating that greater a values are generally more favored for higher hit rates. In other words, Tversky measures with a higher weight on the probe compound generally outperform those with a higher weight on the target compound in terms of the ability to retrieve active analogues. This can serve as evidence of the presence of asymmetry in chemical similarity measures. The observed asymmetry can be interpreted by the directionality or inequality that is inherent in a chemical similarity search: the probe compounds used in practice are always active, whereas the target compounds can be either active or inactive so that their unique structural descriptors may not be as important as those of the probe compounds regarding their contribution to biological activity.

Figures 2 and 3 also clearly indicate that the degree of asymmetry is positively correlated to the degree of similarity itself, in general. In other words, the red region becomes ever more inclined to the right side with an increasing number of nearest neighbors. Therefore, to retrieve more remotely similar active compounds, a more asymmetric similarity measure should be used, that is, more weight should be put on the probe compound. On the other hand, to retrieve more highly similar active compounds, approximately equal weights should be put on both the probe and target compounds, leading to a measure close to symmetric.

An additional observation from Figures 2 and 3 is that the maximum hit rates generally appear between a values of 0.5 and 1.0, not at a=1.0 as claimed by Blankley and Wild.[5c] This indicates that a weight scheme totally biased on probe compounds usually will not lead to optimal performance. Despite the diminished importance of the unique structural descriptors of target compounds, they do have the potential to positively contribute to biological activity. Totally ignoring their contribution may lead to suboptimal performance.

The implication of the findings presented herein is that the relative weights on probe and target compounds can be adjusted to more effectively achieve different purposes of a similarity search. One goal we frequently pursue in drug discovery is to find compounds that are highly similar to the known actives. For example, these actives may come from HTS screening or serendipity, and we want to quickly identify and test their analogues to confirm the series and hopefully find some preliminary SARs. Then, highly symmetric similarity measures should be the choice for this purpose, based on the results of this study. Another goal we often seek is to identify remotely similar analogues. For example, when we follow the lead structures reported by our competitors, we may wish to identify some new structures that are similar to these lead structures so that they will have better chances of being active as well, yet not so similar that they fall under protection of the competitors' patents. In this case, we now know highly asymmetric similarity measures should be adopted.

Asymmetry can be easily introduced into many other popular similarity measures, such as that recently proposed by Flinger et al.[15] We expect to see more studies on this concept and more applications of the related techniques in computer-assisted drug discovery in the future.

Figure 2. Hit-rate maps for the NCI anti-AIDS database using a) atom pairs, b) Daylight fingerprints, and c) MACCS keys as structural descriptors. Average hit rate is represented in the color scale shown.

[Chen et al., 2007]

S(A,B) = f(A ∩ B) − α f(A − B) − β f(B − A)
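To make the role of the weights concrete, here is a small Python sketch of the normalised Tversky coefficient commonly used with binary fingerprints; it only illustrates the α/β (or a) weighting discussed in the excerpts and is not claimed to reproduce the exact Equation (1) of the cited papers.

import numpy as np

def tversky_fingerprint_similarity(probe, target, alpha=0.9, beta=0.1):
    # probe, target : binary fingerprints (1D 0/1 numpy arrays of equal length)
    probe, target = probe.astype(bool), target.astype(bool)
    common      = np.count_nonzero(probe & target)    # bits set in both
    only_probe  = np.count_nonzero(probe & ~target)   # bits unique to the probe
    only_target = np.count_nonzero(target & ~probe)   # bits unique to the target
    denom = alpha * only_probe + beta * only_target + common
    return common / denom if denom else 1.0

With alpha = beta = 1 this is the Tanimoto coefficient; putting more weight on the probe (alpha > beta) mimics the asymmetric searches described above, and swapping probe and target changes the value unless alpha = beta.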

Page 64: 2012.09.25 - Local and non-metric similarities between images - why, how and what for ?

Local and Non-Metric Similarities

64

some conclusions

Page 65: 2012.09.25 - Local and non-metric similarities between images - why, how and what for ?

65

Where is the information ?

• A good solution with Local Dissimilarity Map

• Global -> local or Local -> global ?

• Open problem: what is the local intrinsic scale of information in an image ?

Local Similarity

Page 66: 2012.09.25 - Local and non-metric similarities between images - why, how and what for ?

66

• Many problems are intrinsically asymmetric !
• Stereovision: point correspondence (occlusions)
• Database request (CBIR)
• Pattern matching


Non-Metric Similarity

Page 67: 2012.09.25 - Local and non-metric similarities between images - why, how and what for ?

67

• Solutions
• Psychological high-level models
(simple rule: do not take into account only the common information or only the missing information, but mix them, with orientation)

• De-symmetrizing some measures (e.g. scalar product, pixel-to-pixel diff; see the sketch below)

• Using naturally asymmetric measures (e.g. conditional probability, ...)
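As one concrete illustration of the de-symmetrising idea, the sketch below normalises a scalar product by the energy of the reference image only, so that s(A, B) ≠ s(B, A) in general. This particular normalisation is my own illustrative example, not a measure proposed in the talk.

import numpy as np

def asymmetric_inner_similarity(reference, candidate):
    # reference, candidate : images as numpy arrays of the same shape
    a = reference.astype(float).ravel()
    b = candidate.astype(float).ravel()
    energy = np.dot(a, a)                      # normalise by the reference only
    return float(np.dot(a, b) / energy) if energy else 0.0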

Non-Metric Similarity

Page 68: 2012.09.25 - Local and non-metric similarities between images - why, how and what for ?

68

• Open problems
• How to classify with such measures ? (SVM ? KNN ?)
• How to make an efficient search [T. Skopal works] ?

Non-Metric Similarity

Page 69: 2012.09.25 - Local and non-metric similarities between images - why, how and what for ?

69

• High level LDM (pixel -> graphical element) ?

• Semantic gap ? (working on similarity measures and not on descriptors)

Local Non-Metric Similarities

Page 70: 2012.09.25 - Local and non-metric similarities between images - why, how and what for ?

70

[email protected]

http://pixel-shaker.fr (papers, discussions and reference implementation)

Local Non-Metric Similarity