Edge Detection in Color Images Based on DSmT

Jean Dezert, Zhun-ga Liu, Grégoire Mercier

Abstract: In this paper, we present an unsupervised methodology for edge detection in color images based on belief functions and their combination. Our algorithm is based on the fusion of local edge detector results, expressed as basic belief assignments thanks to a flexible modeling, and on the proportional conflict redistribution rule developed in the DSmT framework. The application of this new belief-based edge detector is tested both on the original (noise-free) Lena picture and on a modified image including artificial pixel noise, to show the ability of our algorithm to work on noisy images too.

Keywords: Edge detection, image processing, DSmT, DST, fusion, belief functions.

I. INTRODUCTION

Edge detection is one of
the most important tasks in image processing, and its application to color images is still the subject of very strong interest [8], [10]-[12], [14], for example in teledetection, remote sensing, target recognition, medical diagnosis, computer vision, robotics, etc. Most of the basic image processing algorithms developed in the past for gray-scale images have been extended to multichannel images. Edge detection algorithms for color images have been classified into three main families [15]: 1) fusion methods, 2) multidimensional gradient methods, and 3) vector methods, depending on where the recombination step applies [7]. In this paper, the method we propose uses a fusion method together with a multidimensional gradient method. Our new unsupervised edge detector combines the results obtained by gray-scale edge detectors on the individual color channels [3] to define bbas from the gradient values, which are then combined using the Dezert-Smarandache Theory [17] (DSmT) of plausible and paradoxical reasoning for information fusion. DSmT has been proved to be a serious alternative to the well-known Dempster-Shafer Theory of mathematical evidence [16], especially for dealing with highly conflicting sources of evidence. Some supervised edge detectors based on belief functions computed from Gaussian pdf assumptions and Dempster-Shafer Theory can be found in [1], [21]. In this work, we show through very simple examples how edge detection can be performed based on DSmT fusion techniques with belief functions, without learning (supervision). The interest of using belief functions for edge detection comes from their ability to model uncertainties more adequately than the classical probabilistic modeling approach, and to deal with conflicting information due to spatial changes in the image or to noise.

This paper is organized as follows. In Section 2, we briefly recall the basics of DSmT and the fusion rule we use. In Section 3, we present in detail our new edge detector based on belief functions and their fusion. Results of our new algorithm, tested on the original Lena picture and on its noisy version, are presented in Section 4, with a comparison to the classical Canny edge detector. Conclusions and perspectives are given in Section 5.
II. BASICS OF DSMT

The purpose of DSmT [17] is to overcome the limitations of DST [16], mainly by proposing new underlying models for the frames of discernment in order to fit better with the nature of real problems, and by proposing new efficient combination and conditioning rules. In the DSmT framework, the elements θi, i = 1, 2, ..., n of a given frame Θ are not necessarily exclusive, and there is no restriction on the θi but their exhaustivity. The hyper-power set D^Θ in DSmT is defined as the set of all composite propositions built from the elements of Θ with the operators ∪ and ∩. For instance, if Θ = {θ1, θ2}, then D^Θ = {∅, θ1, θ2, θ1 ∪ θ2, θ1 ∩ θ2}. A (generalized) basic belief assignment (bba for short) is defined as a mapping m : D^Θ → [0, 1]. More precisely, from a general frame Θ, we define a map m(.) : D^Θ → [0, 1], associated to a given body of evidence B, such that

m(∅) = 0 and Σ_{A ∈ D^Θ} m(A) = 1    (1)

The quantity m(A) is called the generalized basic belief assignment/mass (or just bba for short) of A. The generalized credibility and plausibility functions are defined in almost the same manner as within DST, i.e.

Bel(A) = Σ_{B ⊆ A, B ∈ D^Θ} m(B) and Pl(A) = Σ_{B ∩ A ≠ ∅, B ∈ D^Θ} m(B)    (2)

Two models¹ (the free model and the hybrid model) can be used in DSmT to define the bbas to combine. In the free DSm model, the sources of evidence are combined without taking integrity constraints into account. When the free DSm model does not hold because of the true nature of the fusion problem under consideration, we can take into account some known integrity constraints and define the bbas to combine using the proper hybrid DSm model. All details of DSmT, with many examples, can easily be found in [17], available freely on the web.

¹ Actually, Shafer's model, which considers all elements of the frame as truly exclusive, can be viewed as a special case of hybrid model.

Originally published as Dezert J., Liu Z., Mercier G., Edge Detection in Color Images Based on DSmT, in Proc. of Fusion 2011 Conf., Chicago, July 2011, and reprinted with permission. Advances and Applications of DSmT for Information Fusion. Collected Works. Volume 4.
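To fix ideas before moving on, definitions (1) and (2) can be checked on a toy example. The sketch below is our own illustration (arbitrary mass values, not taken from the paper), on the power set of a two-element frame under Shafer's model, with 't12' denoting θ1 ∪ θ2:

```python
# Toy bba on a two-element frame {t1, t2} under Shafer's model,
# so the focal elements are t1, t2 and t12 = t1 U t2.
m = {'t1': 0.5, 't2': 0.3, 't12': 0.2}       # arbitrary example masses
assert abs(sum(m.values()) - 1.0) < 1e-12    # normalization, Eq. (1)

# Credibility and plausibility of t1, Eq. (2): Bel(t1) sums the masses
# of the subsets of t1, Pl(t1) those of the sets intersecting t1.
bel_t1 = m['t1']
pl_t1 = m['t1'] + m['t12']
```

As expected, Bel(t1) ≤ Pl(t1), the gap being the mass committed to the ignorance t12.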
In this paper, we will work only with Shafer's model of the frame, where all the elements θi of Θ are assumed truly exhaustive and exclusive (disjoint); therefore D^Θ reduces to the classical power set 2^Θ, and the generalized belief functions reduce to the classical ones as within the DST framework. Aside from offering the possibility to work with different underlying models (not only Shafer's model as within DST), DSmT also offers new efficient combination rules based on proportional conflict redistribution (PCR rules no. 5 and no. 6) for combining highly conflicting sources of evidence. In the DSmT framework, the classical pignistic transformation BetP(.) is replaced by the more effective DSmP(.) transformation to estimate the subjective probabilities of the hypotheses for decision-making support, once the combination of the bbas has been obtained. Before presenting our new edge detector, we briefly recall the PCR5 fusion rule and the DSmP transformation. All details and justifications, with examples on PCR5 and DSmP, can be found freely on the web in [17], Vols. 2 & 3, and will not be reported here.

A. PCR5 fusion rule

The Proportional Conflict Redistribution Rule no. 5 (PCR5) is generally used to combine bbas in the DSmT framework. PCR5 transfers the conflicting mass only to the elements involved in the conflict, and proportionally to their individual masses, so that the specificity of the information is entirely preserved in this fusion process. Let m1(.) and m2(.) be two independent² bbas; then the PCR5 rule is defined as follows (see [17], Vol. 2 for full justification and examples): m_PCR5(∅) = 0 and, for all X ∈ 2^Θ \ {∅},

m_PCR5(X) = Σ_{X1,X2 ∈ 2^Θ, X1∩X2 = X} m1(X1) m2(X2)
          + Σ_{X2 ∈ 2^Θ, X2∩X = ∅} [ m1(X)² m2(X2) / (m1(X) + m2(X2)) + m2(X)² m1(X2) / (m2(X) + m1(X2)) ]    (3)

where all denominators in (3) are different from zero. If a denominator is zero, that fraction is discarded. Additional properties of PCR5 can be found in [5]. The extension of PCR5 for combining qualitative bbas can be found in [17], Vols. 2 & 3. All propositions/sets are in canonical form. A variant of PCR5, called PCR6, has been proposed by Martin and Osswald in [17], Vol. 2, for combining s > 2 sources. The general formulas for the PCR5 and PCR6 rules are also given in [17], Vol. 2.
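As an illustration only (our own sketch, not the authors' code), the two-source instance of (3) on a binary frame under Shafer's model can be written in a few lines of Python, with 't12' denoting θ1 ∪ θ2:

```python
# Two-source PCR5 rule (Eq. (3)) on the power set of a binary frame
# {t1, t2} under Shafer's model; 't12' stands for t1 U t2.
def pcr5_binary(m1, m2):
    # Conjunctive consensus part of (3).
    m = {'t12': m1['t12'] * m2['t12']}
    for x in ('t1', 't2'):
        m[x] = m1[x] * m2[x] + m1[x] * m2['t12'] + m1['t12'] * m2[x]
    # Proportional redistribution of each partial conflict m1(x) * m2(y),
    # x and y disjoint, back to x and y only.
    for x, y in (('t1', 't2'), ('t2', 't1')):
        conflict = m1[x] * m2[y]
        if conflict > 0:
            m[x] += conflict * m1[x] / (m1[x] + m2[y])
            m[y] += conflict * m2[y] / (m1[x] + m2[y])
    return m
```

By construction the result stays normalized, and the conflicting mass is transferred only to the singletons involved in the conflict, never to the ignorance t12.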
PCR6 coincides with PCR5 when one combines two sources. The difference between PCR5 and PCR6 lies in the way the proportional conflict redistribution is done as soon as three or more sources are involved in the fusion. For example, let's consider three sources with bbas m1(.), m2(.) and m3(.), A ∩ B = ∅ for the model of the frame, and m1(A) = 0.6, m2(B) = 0.3, m3(B) = 0.1. With PCR5, the partial conflicting mass m1(A) m2(B) m3(B) = 0.6 × 0.3 × 0.1 = 0.018 is redistributed back to A and B only, with respect to the following proportions respectively: x_A^PCR5 = 0.01714 and x_B^PCR5 = 0.00086, because the proportionalization requires

x_A^PCR5 / m1(A) = x_B^PCR5 / (m2(B) m3(B)) = m1(A) m2(B) m3(B) / (m1(A) + m2(B) m3(B))

that is

x_A^PCR5 / 0.6 = x_B^PCR5 / 0.03 = 0.018 / (0.6 + 0.03) ≈ 0.02857

thus x_A^PCR5 = 0.60 × 0.02857 ≈ 0.01714 and x_B^PCR5 = 0.03 × 0.02857 ≈ 0.00086.

With the PCR6 fusion rule, the partial conflicting mass m1(A) m2(B) m3(B) = 0.6 × 0.3 × 0.1 = 0.018 is redistributed back to A and B only, with respect to the following proportions respectively: x_A^PCR6 = 0.0108 and x_B^PCR6 = 0.0072, because the PCR6 proportionalization is done as follows:

x_A^PCR6 / m1(A) = x_{B,2}^PCR6 / m2(B) = x_{B,3}^PCR6 / m3(B) = m1(A) m2(B) m3(B) / (m1(A) + m2(B) + m3(B))

that is

x_A^PCR6 / 0.6 = x_{B,2}^PCR6 / 0.3 = x_{B,3}^PCR6 / 0.1 = 0.018 / (0.6 + 0.3 + 0.1) = 0.018

thus x_A^PCR6 = 0.6 × 0.018 = 0.0108, x_{B,2}^PCR6 = 0.3 × 0.018 = 0.0054 and x_{B,3}^PCR6 = 0.1 × 0.018 = 0.0018, and therefore with PCR6 one finally gets the following redistributions to A and B: x_A^PCR6 = 0.0108 and x_B^PCR6 = x_{B,2}^PCR6 + x_{B,3}^PCR6 = 0.0054 + 0.0018 = 0.0072.

From the implementation point of view, PCR6 is simpler to implement than PCR5. Very basic Matlab codes for the PCR5 and PCR6 fusion rules can be found in [17], [18].

² I.e. each source provides its bba independently of the other sources.

B. DSmP transformation

The DSmP probabilistic
transformation is a serious alternative to the classical pignistic transformation, which allows one to increase the probabilistic information content (PIC), i.e. to reduce the Shannon entropy, of the approximate subjective probability measure drawn from any bba. Justification and comparisons of DSmP(.) with respect to BetP(.) and to other transformations can be found in detail in [6] and in [17], Vol. 3, Chap. 3. The DSmP transformation is defined³ by DSmP_ε(∅) = 0 and, for all X ∈ 2^Θ \ {∅},

DSmP_ε(X) = Σ_{Y ∈ 2^Θ} [ ( Σ_{Z ⊆ X∩Y, |Z|=1} m(Z) + ε |X∩Y| ) / ( Σ_{Z ⊆ Y, |Z|=1} m(Z) + ε |Y| ) ] m(Y)    (4)

where |X∩Y| and |Y| denote the cardinalities of the sets X∩Y and Y respectively; ε ≥ 0 is a small number which allows one to increase the PIC value of the approximation of m(.) into a subjective probability measure. Usually ε = 0, but in some particular degenerate cases, when the DSmP_{ε=0}(.) values cannot be derived, the DSmP_{ε>0} values can always be derived by choosing ε as a very small positive number, say ε = 1/1000 for example, in order to be as close as we want to the highest value of the PIC. The smaller ε, the better/bigger the PIC value one gets. When ε = 1, and when the masses of all elements Z having |Z| = 1 are zero, DSmP_{ε=1}(.) = BetP(.), where the pignistic transformation BetP(.) is defined by [19]:

BetP(X) = Σ_{Y ∈ 2^Θ} ( |Y∩X| / |Y| ) m(Y)    (5)

with the convention |∅|/|∅| = 1.

³ Here we work on the classical power set, but DSmP can also be defined for working with other fusion spaces (hyper-power sets or super-power sets) if necessary.
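On the binary frame used later in this paper, (4) and (5) take a particularly simple form. The sketch below (our own illustration, with 't12' denoting θ1 ∪ θ2) computes both transformations for the singletons:

```python
def betp(m):
    # Pignistic transformation, Eq. (5): the mass of t12 is split evenly.
    return {x: m[x] + m['t12'] / 2.0 for x in ('t1', 't2')}

def dsmp(m, eps=0.001):
    # DSmP transformation, Eq. (4), restricted to the singletons of the
    # binary frame: Y = {x} contributes m(x), and Y = t12 contributes a
    # share of m(t12) proportional to m(x), up to the eps regularization.
    return {x: m[x] + (m[x] + eps) / (m['t1'] + m['t2'] + 2.0 * eps) * m['t12']
            for x in ('t1', 't2')}
```

One can also check the limit case stated above: when the singleton masses are zero and ε = 1, dsmp coincides with betp.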
C. DS combination rule

The Dempster-Shafer (DS) rule of combination is the main historical (and still widely used) rule, proposed by Glenn Shafer in his milestone book [16]. Very passionate debates have emerged in the literature about the justification and the behavior of this rule, since Zadeh's famous criticism in [22]. We do not plan to reopen this endless debate, and just want to briefly recall here how it is mathematically defined. Let's consider a given discrete and finite frame of discernment Θ = {θ1, θ2, ..., θn} of exclusive and exhaustive hypotheses (i.e. satisfying Shafer's model) and two independent bbas m1(.) and m2(.) defined on 2^Θ; then the DS rule of combination is defined by m_DS(∅) = 0 and, for all X ≠ ∅, X ∈ 2^Θ:

m_DS(X) = [1 / (1 − K12)] Σ_{X1,X2 ∈ 2^Θ, X1∩X2 = X} m1(X1) m2(X2)    (6)

where K12 = Σ_{X1∩X2 = ∅} m1(X1) m2(X2) represents the total conflict between the sources. If K12 = 1, the sources of evidence are in full conflict and the DS rule cannot be applied. The DS rule is commutative and associative, and can be extended for the fusion of s > 2 sources as well. The main criticism about this rule concerns its unexpected/counter-intuitive behavior as soon as the degree of conflict between the sources becomes high (see [17], Vol. 1, Chap. 5 and references therein for details and examples).
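For completeness, here is a short sketch of (6) on the same binary frame (our own illustration, not code from [16]; 't12' denotes θ1 ∪ θ2):

```python
def ds_binary(m1, m2):
    # DS rule (Eq. (6)) on a binary frame under Shafer's model.
    k12 = m1['t1'] * m2['t2'] + m1['t2'] * m2['t1']   # total conflict K12
    if k12 >= 1.0:
        raise ValueError("total conflict: DS rule cannot be applied")
    m = {'t12': m1['t12'] * m2['t12']}
    for x in ('t1', 't2'):
        m[x] = m1[x] * m2[x] + m1[x] * m2['t12'] + m1['t12'] * m2[x]
    # Normalization by 1 - K12 spreads the conflict over all focal elements,
    # which is precisely what PCR5 avoids.
    return {a: v / (1.0 - k12) for a, v in m.items()}
```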
D. Decision-making support

Decisions are achieved by computing the expected utilities of the acts, using either the subjective/pignistic probability BetP(.) (usually adopted in the DST framework) or DSmP(.) (as suggested in the DSmT framework) as the probability function needed to compute the expectations. Usually, one uses the maximum of the pignistic probability as decision criterion. The maximum of BetP(.) is often considered as a prudent betting decision criterion between the two other decision strategies (max of plausibility or max of credibility, which appear to be respectively too optimistic and too pessimistic). It is easy to show that BetP(.) is indeed a probability function (see [19], [20]), as well as DSmP(.) (see [17], Vol. 2). The max of DSmP(.) is considered as more efficient for practical applications, since DSmP(.) is more informative (it has a higher PIC value) than the BetP(.) transformation.

III. EDGE DETECTION BASED ON DSMT AND FUSION

In this work, we use the most common RGB (Red-Green-Blue) representation of the digital color image, where each layer (channel) R, G and B consists in a matrix of ni × nj pixels. The discrete value of each pixel in a given color channel is assumed to lie in a given absolute interval of color intensity [cmin, cmax]. The principle of our new edge detector based on DSmT is very simple and consists in the following steps.

A. Step 1: Construction of the bbas
Let's consider a given channel (color layer) and denote it L; it can represent either the Red (R), the Green (G) or the Blue (B) color layer, or any other channel in a more general case for multispectral images. For simplicity, we focus our work and presentation here on color images only.

We apply an edge detector algorithm to each color channel L to get, for each pixel x_ij^L, i = 1, 2, ..., ni, j = 1, 2, ..., nj, an associated bba m_ij^L(.) expressing the local belief that this pixel belongs or not to an edge. The frame of discernment used to define the bbas is very simple and is defined as

Θ = {θ1 ≜ Pixel ∈ Edge, θ2 ≜ Pixel ∉ Edge}    (7)

Θ is assumed to satisfy Shafer's model (i.e. θ1 ∩ θ2 = ∅). It is clear that many (binary) edge detection algorithms are available in the image processing literature, but here we want a smooth algorithm able to provide both the belief that each pixel belongs or not to an edge, and also the uncertainty one has on the classification of this pixel. In this subsection, we present a very simple algorithm for accomplishing this task at the color channel level. Obviously, the quality of the algorithm used in this first step will have a strong impact on the final result, and therefore it is important to focus research efforts on the development of efficient algorithms for realizing this step as well as possible.

As in the Sobel method [9], two 3 × 3 kernels are convolved with the original image A^L of each layer L to calculate approximations of the derivatives, one for horizontal changes and one for vertical changes. We then obtain two gradient images G_x^L and G_y^L for each layer L, representing the horizontal and vertical derivative approximations at each pixel x_ij^L. The x-coordinate is defined as increasing in the right direction, and the y-coordinate as increasing in the down direction. At each pixel x_ij^L of the color layer L, the gradient magnitude g_ij^L can be estimated by the combination of the two gradient approximations as:

g_ij^L = (1/2) [ G_x^L(i,j)² + G_y^L(i,j)² ]^(1/2)    (8)

where

G_x^L = (1/8) [ −1 0 1 ; −2 0 2 ; −1 0 1 ] ∗ A^L ;  G_y^L = (1/8) [ −1 −2 −1 ; 0 0 0 ; 1 2 1 ] ∗ A^L

and where ∗ denotes the 2-dimensional convolution operation. In Sobel's detection method, the edge detection for a pixel x_ij of a gray image is declared based on a hard thresholding of the g_ij value. Such a Sobel detector is sensitive to noise and it can generate false alarms.
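The gradient stage of Step 1 can be sketched as follows (our own pure-Python illustration of the kernels and of Eq. (8), written in correlation form and restricted to interior pixels):

```python
import math

# 3x3 Sobel kernels (the 1/8 scaling of the text is applied below).
KX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]    # horizontal changes
KY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]    # vertical changes

def gradient_magnitude(channel):
    # channel: one color layer as a list of rows of intensities.
    ni, nj = len(channel), len(channel[0])
    g = [[0.0] * nj for _ in range(ni)]
    for i in range(1, ni - 1):               # borders left at 0 for brevity
        for j in range(1, nj - 1):
            gx = sum(KX[u][v] * channel[i + u - 1][j + v - 1]
                     for u in range(3) for v in range(3)) / 8.0
            gy = sum(KY[u][v] * channel[i + u - 1][j + v - 1]
                     for u in range(3) for v in range(3)) / 8.0
            g[i][j] = 0.5 * math.sqrt(gx * gx + gy * gy)   # Eq. (8)
    return g
```

On a vertical step edge, the response is a column of constant magnitude along the discontinuity, as expected.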
In this work, the g_ij^L values are used only to define the mass function (bba) of each pixel in each layer over the power set of Θ defined in (7). If the g_ij^L value of a pixel is big, it implies that this pixel is more likely to belong to an edge. If the g_ij^L value of the pixel x_ij^L is low, then our belief that it belongs to an edge must be low too. Such a very simple and intuitive modeling can be obtained directly from the sigmoid functions commonly used as activation functions in neural networks, or as fuzzy memberships in fuzzy subset theory, as explained below. Let's consider the sigmoid function defined as

f_{σ,t}(g) ≜ 1 / (1 + e^{−σ(g−t)})    (9)

where g is the gradient magnitude of the pixel under consideration; t is the abscissa of the inflection point of the sigmoid, which can be selected by t = p · max(g), where p is a proportion parameter and · denotes scalar multiplication (when working with noisy images, p always increases with the level of noise); and σ is the slope of the tangent at the inflection point.

It can be easily verified that the bba m_ij^L(.|g_ij^L) satisfying the expected behavior can be obtained by the fusion⁴ of the two following simple bbas defined by:

focal element | m1(.)           | m2(.)
θ1            | f_{σ,te}(g)     | 0
θ2            | 0               | f_{−σ,tn}(g)
θ1 ∪ θ2       | 1 − f_{σ,te}(g) | 1 − f_{−σ,tn}(g)

with 0 < tn < te < 255 and σ > 0. Here te is the lower threshold for the edge detection, and tn is the upper threshold for the non-edge detection. Thus [tn, te] corresponds to our uncertainty decision zone, and the g_ij^L values lying in this interval correspond to the unknown decision state. The bounds (thresholds) tn and te can be tuned based on the average gradient values of the image, and the length te − tn depends on the level of the noise. If the image is very noisy, the information is very uncertain and the length of the interval [tn, te] can become large. Otherwise, it is small. Because of the structure of these two simple bbas, the fusions obtained with the PCR5, DS or even the DSm hybrid (DSmH) rules of combination provide globally similar results, and therefore the choice of the fusion rule here does not really matter to build m_ij^L(.|g_ij^L), as shown in Figures 1-3. PCR5, which is the most specific fusion rule (it reduces the level of belief committed to the uncertainty), is used in this work to generate m_ij^L(.|g_ij^L).

Figure 1. Computation of m_ij^L(.|g_ij^L) from m1(.) and m2(.) with [tn, te] = [60, 100] and σ = 0.09.
Figure 2. Computation of m_ij^L(.|g_ij^L) from m1(.) and m2(.) with [tn, te] = [50, 80] and σ = 0.06.
Figure 3. Computation of m_ij^L(.|g_ij^L) from m1(.) and m2(.) with [tn, te] = [30, 40] and σ = 0.04.

In summary, m_ij^L(.|g_ij^L) can be easily constructed from the choice of the thresholding parameters te and tn defining the uncertainty zone of the gradient values, the slope parameter σ of the sigmoids, and of course from the gradient magnitude g_ij^L. This approach is very easy to implement and very flexible, since it depends on parameters which are totally under the control of the user.

⁴ With the DS, PCR5 or even the DSmH rule [17].
B. Step 2: Fusion of the bbas m_ij^L(.)

Many combination rules, like the DS rule, Dubois & Prade's rule, Yager's rule, and so on, can be used with our approach. In this work, we just make investigations based on the two most well-known rules (the DS and PCR5 rules, proposed in DST and DSmT respectively). So we use either the DS or the PCR5 rule to combine the three bbas m_ij^R(.), m_ij^G(.) and m_ij^B(.) of each pixel x_ij, in order to get the global bba m_ij(.) estimating the degree of belief that x_ij belongs to an edge in the given image. Since PCR5 is not associative, we must apply the general PCR5 formula for combining the three sources (channels) altogether⁵, as explained in detail in [17], Vol. 2, Chaps. 1 & 2. A suboptimal approach requiring fewer computations would consist in applying a PCR5 sequential fusion of these bbas, in such a way that the two least conflicting bbas are combined first by PCR5, and the resulting bba is then combined again by PCR5 with the third one according to (3). The simpler PCR6 rule could also be used instead of PCR5 (see [17], Vol. 2).
C. Step 3: Decision-making

The output of Step 2 is the set of ni × nj bbas m_ij(.) associated to each pixel x_ij of the image in the whole color space (R, G, B). m_ij(.) commits some degree of belief to θ1 ≜ Pixel ∈ Edge, to θ2 ≜ Pixel ∉ Edge, and also to the uncertainty θ1 ∪ θ2. The binary decision-making process consists in declaring whether the pixel x_ij under consideration belongs or not to an edge from the bba m_ij(.), or, in a more complicated manner, from m_ij(.) and the bbas of its neighbours. In this paper, we just recall the principal methods based on the use of m_ij(.).

Based on m_ij(.) only, how to decide θ1 or θ2? Many approaches have been proposed in the literature for answering this question when working with an n-D frame. The pessimistic approach consists in declaring the hypothesis which has the maximum of credibility, whereas the optimistic approach consists in declaring the hypothesis which has the maximum of plausibility. When the cardinality of the frame is greater than two, these two approaches can yield different final decisions. In our particular application, and since our frame has only two elements, the final decision will be the same whether we use the max of credibility or the max of plausibility criterion. Other decision-making methods suggest, as a good balance between the aforementioned pessimistic and optimistic approaches, to first approximate the bba into a subjective probability measure through a suitable probabilistic transformation, and then to choose the element of Θ which has the highest probability. In practice, one suggests taking as final decision the argument of the max of BetP(.) or of the max of DSmP(.). In our binary frame case, however, these two approaches also provide the same final decision as the max of credibility approach. This can be easily proved from the BetP(.) and DSmP(.) formulas. Indeed, let's consider m(θ1) > m(θ2) > 0 with m(θ1) + m(θ2) + m(θ1 ∪ θ2) = 1 (which means that θ1 is taken as final decision because it has a higher credibility than θ2); then one gets as approximate subjective probabilities:

BetP(θ1) = m(θ1) + m(θ1∪θ2)/2 ≜ m(θ1) + K
BetP(θ2) = m(θ2) + m(θ1∪θ2)/2 ≜ m(θ2) + K
DSmP(θ1) = m(θ1) [1 + m(θ1∪θ2) / (m(θ1) + m(θ2))] ≜ m(θ1) [1 + K′]
DSmP(θ2) = m(θ2) [1 + m(θ1∪θ2) / (m(θ1) + m(θ2))] ≜ m(θ2) [1 + K′]

where K and K′ are two positive constants. From these expressions, one sees that if m(θ1) > m(θ2) > 0, then also BetP(θ1) > BetP(θ2) and DSmP(θ1) > DSmP(θ2), and thus the final decision based on the max of BetP(.) or the max of DSmP(.) is finally the same. Note that when m(θ1) = m(θ2), no rational decision can be drawn from m(.), and only a random decision procedure or an ad-hoc method can be used in such a particular case.

In summary, one sees that when working with a binary frame, all common decision-making strategies provide the same final decision, and therefore there is no interest in using a complex decision-making procedure in that case; that is why we can adopt here the max of belief as final decision-making criterion in our simulations. Note that aside from the final decision, and because we have m(θ1 ∪ θ2), we are also able (if we want) to plot the level of uncertainty related with such a decision (not presented in this paper).

⁵ I.e. a generalization of the PCR5 formula described in Section II-A.
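The equivalence argued above is easy to check numerically. The following sketch (ours, with ε = 0 in DSmP and 't12' denoting θ1 ∪ θ2) returns the decisions taken by the three criteria:

```python
def decide(m):
    # Decisions on the binary frame by max of credibility, max of BetP
    # and max of DSmP (eps = 0); they coincide whenever m(t1) != m(t2).
    bel = {x: m[x] for x in ('t1', 't2')}
    betp = {x: m[x] + m['t12'] / 2.0 for x in ('t1', 't2')}
    dsmp = {x: m[x] * (1.0 + m['t12'] / (m['t1'] + m['t2']))
            for x in ('t1', 't2')}
    return tuple(max(p, key=p.get) for p in (bel, betp, dsmp))
```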
IV. SIMULATION RESULTS

In this section, we present the results of our new edge detection algorithm tested on two color images for different parameter settings.

A. Test on the original Lena picture

The Lena Soderberg picture is one of the most used images for testing image processing algorithms in the literature [4], and therefore we propose to test our algorithm on this reference image. This image can be found as part of the USC SIPI Image Database, in their miscellaneous collection available at http://sipi.usc.edu/database/index.php. The original Lena picture scan is shown in Fig. 4-(a). Figures 5-(a)-(c) show the edge detection on each channel (layer) based on the bbas m_ij^L(.|g_ij^L) of Section III-A. One sees that the edges in the different channels are different, and the task of our proposed algorithm is to combine efficiently the underlying bbas m_ij^L(.|g_ij^L) generating the subfigures 5-(a)-(c).

Figure 4. Lena picture before and after noise.
Figure 5. Edge detections in each channel.
Figure 6. Canny edge detector on the Lena gray image.
Figure 7. Sobel edge detector on the Lena gray image.
Figure 8. DS-based edge detector on the Lena color image.
Figure 9. PCR5 edge detector on the Lena color image.

The Sobel [9] and Canny [2] edge detectors are commonly used in the image processing community, and that is why we compare our new edge detector with Canny's and Sobel's approaches. The Canny and Sobel edge detectors are applied directly to the gray image converted from the original Lena color image of Fig. 4-(a). Figures 6-9 show the results of the different edge detectors on the Lena picture. In our simulations, we took σ = 0.06, and t, defined as t = p · max(g) in each layer, was taken with pn = 0.17 and pe = 0.19, corresponding to the gradient thresholds [tn^R, te^R] = [15, 17], [tn^G, te^G] = [13, 14] and [tn^B, te^B] = [11, 13]. The max of credibility, plausibility, DSmP or BetP used for decision-making to generate the final result provides the same decision, as explained in Section III-C, which is normal in this binary frame case.

One sees that, finally, on the clean (noise-free) Lena picture, our edge detector provides performances close to Sobel's detector applied to the Lena grey image. Canny's detector seems to provide a better ability to detect some edges in the Lena picture than our method, but it also generates many more false alarms. It is worth noting that the results provided by the DS-based or PCR5-based edge detectors show a coarse location of the edges. So it is quite difficult to draw a clear and fair conclusion between these edge detectors, since it highly depends on what we want, i.e. the reduction of false alarms or the reduction of missed detections.
B. Test on the Lena picture with noise

In this simulation, we show how our edge detector works on a noisy image. Samples of independent Gaussian noise N(0, σ²) are added to each pixel of each layer of the original Lena picture, as seen in Fig. 4-(b). In the presented simulation, σ² = 1100, which corresponds approximately to the value of the variance of the blue channel and to half the variance of the other channels. The local edge detection for each layer based on m_ij^L(.|g_ij^L) is shown in Fig. 10-(a)-(c), where the red points represent the ignorant pixels, which commit most of their belief to the ignorance θ1 ∪ θ2. As shown in Fig. 10, the edge detection in each channel is very noisy. Our method automatically commits the highest belief value to uncertainty for most of the pixels associated to an edge which actually corresponds to noise.⁶ The edge detection based on the fusion result is interesting, as shown by Fig. 11 and Fig. 12, because it shows the ability of our edge detector to suppress the noise effects. For comparison, we give in Fig. 13 and Fig. 14 the performance of the Canny and Sobel edge detectors applied classically to the noisy gray-level Lena picture. In this simulation, we took σ = 0.06, and t was set using pn = 0.22 and pe = 0.39 (t = p · max(g)) in each layer, with [tn^R, te^R] = [20, 36], [tn^G, te^G] = [19, 35] and [tn^B, te^B] = [18, 31]. The decision-making is still based on the max of credibility.

The visual comparison and analysis of the results shown in Figures 11-12 clearly indicate that our edge detector, based on the fusion of the beliefs constructed on each layer, works much better than the edge detection applied separately on each layer. There is no ignorant (red) pixel in the fusion results, since the fusion process by the DS or PCR5 rule effectively decreases the uncertainty. Our results also show clearly that the Canny and Sobel edge detectors applied to the noisy gray-level Lena picture are very sensitive to the noise perturbations. Our proposed method (based on the DS rule or on the PCR5 rule) is more robust to the noise perturbations and provides better results than the Sobel or Canny edge detectors for such a noisy image. For this tested image, it appears that the results using the DS and PCR5 rules are very close, because there is actually not too much conflict between the bbas of the layers, and one knows that in such a case the PCR5 rule behavior is close to the DS rule behavior. The DS rule is usually good enough in the low-conflict case, whereas the PCR5 rule is preferred for the combination of highly conflicting sources of evidence. So the preference for PCR5 with respect to the DS rule for edge detection must be guided by the level of conflict which appears in the layers of the color image that we need to process.

⁶ So we are also able, at the layer level, to filter these pixels (false alarms) before applying the fusion. This has not yet been done in this work.

Figure 10. Edge detections in each channel on the noisy image.
Figure 11. DS edge detector on the noisy Lena color image.

V. CONCLUSIONS AND PERSPECTIVES

A new unsupervised edge detector
for color images, based on belief functions, has been proposed in this work. The basic belief assignment (bba) associated with the edge of a pixel in each channel of the image is defined according to its gradient magnitude, and one can easily model the uncertainty about our belief that the pixel belongs or not to an edge. The PCR5 and DS rules have been applied in this work to combine these bbas and get the global bba for the final decision-making. Other rules of combination of bbas could also have been used instead, but they are known to be less efficient than the PCR5 and DS rules in the high- and low-conflict cases respectively.

Figure 12. PCR5 edge detector on the noisy Lena color image.
Figure 13. Sobel edge detector on the noisy Lena gray image.
Figure 14. Canny edge detector on the noisy Lena gray image.

The fusion process is able to reduce the noise perturbations because the noises are assumed to be independent between channels. The final decision-making on the edge can be made on the maximum of the credibility, plausibility, DSmP or BetP values as well. The first simulation, done on the original Lena picture, shows that our edge detector works as well as the classical Sobel edge detector, and that it provides fewer false alarms than Canny's detector but seems to generate more missed detections. In our second simulation, based on the noisy Lena image, the results show that our new edge detector is more robust to the noise perturbations than the Sobel or Canny classical edge detectors. As a possible improvement of this algorithm, and for further research, we would like to include some morphological or connexity constraints at a higher level of processing, and to develop an automatic technique for threshold selection. The application of this new approach of edge detection to satellite multispectral images is under
investigation.

REFERENCES

[1] S. Ben Chaabane, F. Fnaiech, M. Sayadi, E. Brassart, Relevance of the Dempster-Shafer evidence theory for image segmentation, 3rd International Conference on Signals, Circuits and Systems, Nov. 6-8, Medenine, Tunisia, 2009.
[2] J.F. Canny, A Computational Approach to Edge Detection, IEEE Transactions on Pattern Analysis and Machine Intelligence, 8(6):679-698, 1986.
[3] C.J. Delcroix, M.A. Abidi, Fusion of edge maps in color images, in Proc. SPIE Int. Soc. Opt. Eng. 1001, pp. 454-554, 1988.
[4] N. Devillard, Image Processing: the Lena story, 2006. http://ndevilla.free.fr/lena/, see also http://www.lenna.org.
[5] J. Dezert, F. Smarandache, Non Bayesian conditioning and deconditioning, Int. Workshop on Belief Functions, Brest, France, April 2010.
[6] J. Dezert, F. Smarandache, A New Probabilistic Transformation of Belief Mass Assignment (DSmP), in Proc. of Fusion 2008 Int. Conf., Cologne, Germany, June 30-July 3, 2008.
[7] A.N. Evans, Nonlinear Edge Detection in color images, Chap. 12, pp. 329-356, in [13].
[8] A.K. Jain, Fundamentals of Digital Image Processing, Prentice-Hall, Englewood Cliffs, NJ, 1991.
[9] E.P. Lyvers, O.R. Mitchell, Precision Edge Contrast and Orientation Estimation, IEEE Transactions on Pattern Analysis and Machine Intelligence, 10(6):927-937, 1988.
[10] A. Koschan, M. Abidi, Detection and classification of edges in color images, IEEE Signal Processing Magazine, vol. 22, no. 1, pp. 64-73, 2005.
[11] A. Koschan, M. Abidi, Digital Color Image Processing, John Wiley Press, NJ, USA, 376 pages, May 2008.
[12] D. Marr, E. Hildreth, Theory of Edge Detection, in Proceedings of the Royal Society London 207, pp. 187-217, 1980.
[13] S. Marshall, G.L. Sicuranza (Editors), Advances in nonlinear signal and image processing, EURASIP Book Series on Signal Processing and Communications, Hindawi Publishing Corporation, 2006.
[14] K.N. Plataniotis, A.N. Venetsanopoulos, Color Image Processing and Applications, Springer, New York, NY, USA, 2000.
[15] M.A. Ruzon, C. Tomasi, Edge, junction, and corner detection using color distributions, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, no. 11, pp. 1281-1295, 2001.
[16] G. Shafer, A Mathematical Theory of Evidence, Princeton Univ. Press, 1976.
[17] F. Smarandache, J. Dezert (Editors), Advances and Applications of DSmT for Information Fusion, American Research Press, Rehoboth, Vols. 1-3, 2004-2009.
[18] F. Smarandache, J. Dezert, J.-M. Tacnet, Fusion of sources of evidence with different importances and reliabilities, in Proceedings of Fusion 2010 Conference, Edinburgh, UK, July 2010.
[19] P. Smets, Constructing the pignistic probability function in a context of uncertainty, Uncertainty in Artificial Intelligence, Vol. 5, pp. 29-39, 1990.
[20] P. Smets, R. Kennes, The transferable belief model, Artificial Intelligence, 66(2), pp. 191-234, 1994.
[21] P. Vannoorenberghe, O. Colot, D. De Brucq, Color Image Segmentation Using Dempster-Shafer's Theory, in Proc. of ICIP'99, 1999 International Conference on Image Processing, Kobe, Japan, Vol. 4, pp. 300-304, 1999.
[22] L. Zadeh, On the validity of Dempster's rule of combination, Memo M79/24, Univ. of California, Berkeley, USA, 1979.