Journal of Intelligent & Fuzzy Systems xx (20xx) x–xx, DOI: 10.3233/IFS-131110, IOS Press

Multichannel generalization of the Upper-Lower Edge Detector using ordered weighted averaging operators

C. Guerra(a,*), A. Jurio(a), H. Bustince(a) and C. Lopez-Molina(a,b)

(a) Departamento de Automatica y Computacion, Universidad Publica de Navarra, Spain
(b) Department of Mathematical Modelling, Statistics and Bioinformatics, Ghent University, Gent, Belgium

* Corresponding author. C. Guerra, Departamento de Automatica y Computacion, Universidad Publica de Navarra, Spain. E-mail: [email protected].

Abstract. A large number of methods in the edge detection literature are only prepared to deal with monochannel images, which represent the value at each pixel by means of a scalar. This fact hinders their applicability to many fields in which multichannel images are common, including remote sensing or medical imagery. Very often, multichannel images have to be turned into grayscale images on which edge detection can be performed, but this conversion is coupled with a loss of information that can be unbearable in certain scenarios. In this work we propose a multichannel edge feature fusion technique that can be combined with any edge detection method using scalar edge features. In this way, we can extend edge detection methods by considering an initial phase of monochannel feature extraction followed by a subsequent phase of multichannel feature fusion. For the information fusion we make use of Ordered Weighted Averaging (OWA) operators, which are able to vary the relevance of each of the features to be aggregated depending upon their value. As an example, our proposal is tested with the Upper-Lower Edge Detector, although it can be further combined with a wide range of edge detectors.


Keywords: Image processing, edge detection, ordered weighted averaging, multichannel fusion

1. Introduction

A majority of the edge detection methods in the literature are only prepared for grayscale (monochannel) images. In the early years of image processing this was admissible, but recent developments in sensors have made this limitation clearly undesirable. Apart from very specific applications for which only monochannel information is available (e.g. very low resolution cameras or infrared imagery), most of the current imagery is multichannel. Note that the multiplication of channels can be due to the vectorial representation of color in modern cameras or to the simultaneous use of several sensors producing overlapped images. In the first case, deeply studied in the theory of digital color, the channels might or might not be independent from each other. In the second, mostly restricted to satellite imagery, different channels provide independent but synergic information about the scene. We refer as multichannel images to the images associating a vector of values to each pixel, regardless of its semantics.

The reasons why most edge detection methods are not prepared for multichannel images are diverse. However, a factor of major importance is the fact that edge detection is very often based on the characterization of gradients in the image. The concept of gradient as the combination of orthogonal partial differences at each pixel of the image is of vast utility on monochannel images, but has a complicated fit when pixel information becomes vectorial. The underlying problem is, as stated by Toivanen et al., that it is not possible to uniquely define the ordering of multivariate data [39]. Since the most employed edge detection methods are grounded in signal processing and convolution operators (and make use of gradients), we can safely state that a large part of the edge detection methods in the literature cannot be applied to color images. Very often, this impels the images to be collapsed to a monochannel representation before detecting edges. This comes coupled with a loss of information, which often hinders the success of edge detection methods. Even if a non-linear aggregation of channels is used, the dimensionality reduction might lead to indiscernibility of vectorial tones in their scalar representation.

There has been a significant effort in developing color-specific operators for gradient characterization. Examples of such efforts can be found in the works by Karakos and Trahanias [21] or Evans and Liu [12]. As stated by Zhu et al. [45], the two options to generate such operators are (a) the extension of monochrome techniques [33–35] and (b) the generation of dedicated color operators, often based on the analysis of vectorial spaces [11, 12, 38, 40]. Although it is not common, some authors rely on the applicability of segmentation algorithms as a starting point for edge detection. This is the case of the proposal by Huntsberger and Descalzi [19], which roughly consists of postprocessing the results obtained using Fuzzy c-Means [6]. However, we believe that channelwise feature extraction, followed by a feature fusion phase, is a feasible option, mainly because it enables the reuse of the knowledge on edge detection gathered for grayscale images.

In this work we propose a technique for multichannel gradient fusion. In this way, edge features can be characterized at each of the channels independently, then fused to produce a single edge interpretation. The fusion is performed using Ordered Weighted Averaging (OWA) operators, which have been thoroughly studied in the past years, and which vary the relevance of each of the channels depending upon the value of the features.

The remainder of this paper is organized as follows. In Section 2 we introduce some preliminary concepts used in the subsequent sections. Section 3 is devoted to explaining our proposal in detail, while Section 4 covers an experimental study of its utility. Our conclusions are presented in Section 5.

2. Preliminary definitions

This section recalls some basic definitions of the concepts used hereafter.

2.1. Triangular norms and conorms

Definition 1. [4, 22] A t-norm T : [0,1]^2 → [0,1] is an associative, commutative, increasing function such that T(1, x) = x for all x ∈ [0,1]. A t-norm T is called idempotent if T(x, x) = x for all x ∈ [0,1].

Definition 2. [4, 22] A t-conorm S : [0,1]^2 → [0,1] is an associative, commutative, increasing function such that S(0, x) = x for all x ∈ [0,1]. A t-conorm S is called idempotent if S(x, x) = x for all x ∈ [0,1].
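For instance, the minimum and the maximum, used later in this work as TM and SM, satisfy these definitions. A minimal sketch (ours, not part of the original formulation) checking the boundary and idempotence conditions:

```python
# Minimal illustration (not from the paper): the minimum t-norm (TM)
# and the maximum t-conorm (SM), the pair used later by the ULED.
def t_norm_min(x: float, y: float) -> float:
    """TM(x, y) = min(x, y): associative, commutative, increasing, TM(1, x) = x."""
    return min(x, y)

def t_conorm_max(x: float, y: float) -> float:
    """SM(x, y) = max(x, y): associative, commutative, increasing, SM(0, x) = x."""
    return max(x, y)

assert t_norm_min(1.0, 0.4) == 0.4    # boundary condition T(1, x) = x
assert t_conorm_max(0.0, 0.4) == 0.4  # boundary condition S(0, x) = x
assert t_norm_min(0.7, 0.7) == 0.7    # idempotence of the minimum
```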

In-depth studies of t-norms and t-conorms, together with some other aggregation functions, can be found in [1, 4, 22]. The application of these operators to image processing has been tackled by several authors [7, 24].

2.2. Ordered weighted averaging aggregation operators

Definition 3. [44] A function w : [0,1]^n → [0,1] is called an OWA operator of dimension n if there exists a weighting vector h = (h1, ..., hn) ∈ [0,1]^n with Σ_i h_i = 1, and such that

$$w(a_1, \dots, a_n) = \sum_{j=1}^{n} h_j\, a_{(j)} \qquad (1)$$

with a_{(j)} the j-th greatest element of the a_i, for any (a_1, ..., a_n) ∈ [0,1]^n.

Any OWA operator is completely defined by its weighting vector. In his original definition, Yager considered functions w defined on the whole Euclidean space R^n and taking values in R, but for our interest it is more appropriate to restrict this to [0,1]^n. In this work we use the 3-place OWA operators in Table 1.

Table 1. OWA operators used in this work

Name    Weighting vector
w1      (1, 0, 0)
w2      (2/3, 1/3, 0)
w3      (1/2, 1/2, 0)
w4      (3/6, 2/6, 1/6)
w5      (1/3, 1/3, 1/3)

In the remainder of this work we consider images to have dimensions of X and Y pixels. For the sake of brevity, we consider P = {1, ..., X} × {1, ..., Y} to be the set of their positions. We denote by I_Q the set of all images whose pixels take values in Q (i.e. I ∈ I_Q if and only if I(x, y) ∈ Q for all (x, y) ∈ P). When the dimensionality of the universe Q is greater than 1, an image I ∈ I_Q is said to be multichannel. In those cases, I(i) refers to the i-th channel of the image I.
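As an illustration of Definition 3, the following is a minimal Python sketch (ours) of an OWA operator; the function and variable names are our own, and the weighting vector w2 is taken from Table 1.

```python
import numpy as np

def owa(values, weights):
    """OWA aggregation: the j-th weight multiplies the j-th *greatest* input,
    so the relevance of each value depends on its rank, not its position."""
    values = np.sort(np.asarray(values, dtype=float))[::-1]   # descending order
    weights = np.asarray(weights, dtype=float)
    assert values.shape == weights.shape and np.isclose(weights.sum(), 1.0)
    return float(np.dot(weights, values))

w2 = (2/3, 1/3, 0.0)             # weighting vector w2 from Table 1
print(owa((0.1, 0.9, 0.3), w2))  # 0.9 * 2/3 + 0.3 * 1/3 = 0.7
```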

3. Multichannel Upper-Lower Edge Detector

3.1. Multichannel edge detection

Multichannel images have become mainstream for industrial applications, and edge detection methods should be adapted to such reality. Since grayscale edge detection has been deeply studied, we consider that grayscale edge detection methods are a good starting point to tackle color edge detection. We identify three strategies to tackle multichannel edge detection using grayscale edge detectors:

a) Channel fusion. This method consists of converting multichannel information to a monochannel representation, so that classical edge detection methods can be applied.

b) Edge combination. In this case, edges are detected at each of the channels in completely independent processes, leading to as many edge images as there are channels. Once all the edge images are produced, their results are combined in a sort of multiexpert voting phase.

c) Feature fusion. This option is based on gathering edge features or indicators at each of the channels of the image, and subsequently fusing them into a unique representation that can be used to take a final decision on the presence of edges.

The first option has been explored in the literature, together with the use of color spaces able to appropriately represent the perceived differences between tones [42]. Indeed, very early studies on this option are dedicated to edge detection, such as that by Robinson [32]. The main advantages of this strategy are that it enables the use of most of the existing edge detection methods, and that the computational overhead is limited to the initial fusion of the channels [5, 28]. Indeed, Wesolkowski et al. state that perhaps it is more important to find the appropriate color space representation rather than design a new edge detection algorithm to obtain superior performance for a color edge detection algorithm [42]. However, it is a fact that in any form of information fusion some data must be lost. By using channel fusion, some information about the tones must be lost, hindering the good discrimination of edges in the subsequent process of edge detection. Moreover, if no a priori information about the images is available, the safest fusion technique is the averaging of the channels, which can lead to very misleading results. For example, the values (1, 0, 0) and (0, 0, 1) in the RGB space become indistinguishable in their monochannel (scalar) representation.

The second option allows different edge detection methods to adapt to the specific conditions of each channel. However, it induces practical complications due to the fact that the final result of an edge detection method (binary images with thin edges) dramatically burdens the fusion. That is, the information provided to the ultimate edge combination phase is very limited. More specifically, a lot of information is lost in thinning and binarizing the edges, such as the confidence of the edge detection method about the existence of an edge at each pixel (edginess). Consequently, the fusion reduces to some sort of multi-expert binary voting. Note also that slightly displaced edges at different channels must be understood to be the same edge, although represented at different positions of the image, which imposes the use of correspondence methods to match the edge pixels at different images. If the thinness constraints are ignored, as in [13], edge combination becomes easier, but the result does not conform to the usual conventions. For these reasons, we have discarded this strategy, despite the fact that it allows for the use of existing edge detectors.

In the third option the preservation of the tonal information is maximal. Moreover, the fusion is performed at the point where most of the information (i.e. the image and the edge features) is available, so that the fusion can be supported by as much knowledge as possible. The main drawback associated with this option is that the methods used for edge detection have to be explicitly modified. In our case, the modification consists of adding an extra phase of feature fusion after extracting edge features at each of the channels. Our proposal is based on OWA operators, and consequently is only valid for edge detection methods producing scalar features. In order to illustrate the proposal, we combine it with the Upper-Lower Edge Detector (ULED). Section 3.2 depicts the ULED, while Section 3.3 explains the details of the feature fusion phase. The resulting method takes up Section 3.4.

3.2. The Upper-Lower Edge Detector

As proposed by Bustince et al. [8], fuzzy edge images can be constructed from the lengths of the intervals of Interval-Valued (IV) images. An exhaustive study of this fact was performed in [3], where the authors posed three different objectives:

– Defining two new concepts, denoted as lower constructor and upper constructor, to construct a new interval-valued image (I ∈ I_{L([0,1])}, where L([0,1]) represents the set of all possible closed intervals in [0,1]) from a given grayscale image (I ∈ I_{[0,1]});

– Generating edge images from IV images;

– Applying these theoretical developments to real images.

The upper and lower constructors are operators created to generate versions of an image that are brighter and darker than the original, respectively. In an early effort, they were designed to create IV representations of an image, since the results they provide can be used as the bounds of the interval to be assigned to each position in the image. The IV image generated with these constructors has several properties, among which the authors highlight the fact that pixels around large intensity variations are assigned intervals whose length is greater than that of pixels in homogeneous regions. In this way, the length of the interval associated with each position of the image can be taken as an indicator of the edginess of the position, more precisely of its membership degree to the edges. Consequently, a fuzzy edge image can be constructed from the lengths of the intervals at each position.

Definition 4. Let I ∈ I_{[0,1]} be a grayscale image of dimensions X and Y. Consider two t-norms T1 and T2 and two values n, m ∈ N so that n ≤ (X−1)/2 and m ≤ (Y−1)/2. A lower constructor associated with T1, T2, n and m is the mapping L^{n,m}_{T1,T2} : I_{[0,1]} → I_{[0,1]} given by

$$L^{n,m}_{T_1,T_2}[I](i,j) = \mathop{T_1}\limits_{\substack{u=-n,\dots,n \\ v=-m,\dots,m}} \Big( T_2\big( I(i-u,\, j-v),\ I(i,j) \big) \Big)$$

for all (i, j) ∈ P. The values of n and m indicate that the considered window is a matrix of dimension (2n+1) × (2m+1) centered at (i, j). For the sake of simplicity, if n = m then we denote L^{n,m}_{T1,T2} as L^n_{T1,T2}.

Definition 5. Let I ∈ I_{[0,1]}. Consider two t-conorms S1 and S2 and two values n, m ∈ N such that n ≤ (X−1)/2 and m ≤ (Y−1)/2. The upper constructor associated with S1, S2, n and m is the mapping U^{n,m}_{S1,S2} : I_{[0,1]} → I_{[0,1]} given by

$$U^{n,m}_{S_1,S_2}[I](i,j) = \mathop{S_1}\limits_{\substack{u=-n,\dots,n \\ v=-m,\dots,m}} \Big( S_2\big( I(i-u,\, j-v),\ I(i,j) \big) \Big)$$

for all (i, j) ∈ P. The values of n and m indicate that the considered window is a matrix of dimension (2n+1) × (2m+1) centered at (i, j). For the sake of clarity, if n = m then we denote U^{n,m}_{S1,S2} as U^n_{S1,S2}.

Let I ∈ I_{[0,1]} and consider a lower constructor L^{n,m}_{T1,T2} and an upper constructor U^{n,m}_{S1,S2}. Then

$$L^{n,m}_{T_1,T_2}[I](i,j) \;\le\; I(i,j) \;\le\; U^{n,m}_{S_1,S_2}[I](i,j)$$

for all (i, j) ∈ P. This implies that the images produced with upper and lower constructors can be used as boundaries for the creation of IV images.

Remark. The definitions of lower constructor and upper constructor should not be confused with the fuzzy morphological operations of dilation and erosion [10], nor with the erosion and dilation defined in classical mathematical morphology [16].

Let I ∈ I_{[0,1]} and consider a lower constructor L^{n,m}_{T1,T2} and an upper constructor U^{n,m}_{S1,S2}. Then I^{n,m} ∈ I_{L([0,1])}, defined as

$$I^{n,m}(i,j) = \big[\, L^{n,m}_{T_1,T_2}[I](i,j),\; U^{n,m}_{S_1,S_2}[I](i,j) \,\big] \qquad (2)$$

generates an interval-valued version of the image, that is, an image for which the value of each pixel is in L([0,1]) [20].

After building the interval-valued image I^{n,m} from I, Barrenechea et al. propose to construct a fuzzy edge image F[I^{n,m}] ∈ I_{[0,1]} such that

$$F[I^{n,m}](i,j) = U^{n,m}_{S_1,S_2}[I](i,j) - L^{n,m}_{T_1,T_2}[I](i,j) \qquad (3)$$

for all (i, j) ∈ P.

When using lower and upper constructors, the length of the interval associated with a position represents the intensity variation in its neighborhood. Then, in the construction of the fuzzy edge image, the length of the interval represents the membership degree of each element to the edges. Note that the concept of membership degree to the edges is computationally equivalent to that of edginess. Besides, from the definitions of the upper and lower constructors we have that, if there exists at least one white pixel and at least one black pixel in the neighbourhood, the length associated with a pixel is maximal. Consequently, that pixel is always considered an edge.

The main advantage of the ULED is that, depending on the lower and upper constructors we use, each pixel is associated with a different membership degree to the fuzzy edge image (corresponding to its interval length). This fact enables us to better adjust to the application in which we want to use the edge detection method. An extension of the ULED was presented in [27], namely the Directional ULED (DULED). This extension introduces the use of several non-square windows representing specific orientations in the image, and is used to generate vectorial representations of the edge features. However, in order to narrow down the scope of the experiment, in the remainder of this work we only consider the original upper and lower constructors based on the minimum t-norm (TM) and the maximum t-conorm (SM). The reason is that this couple of operators is the only one that guarantees that, if the window centered at each (i, j) has a constant intensity, then the length of the associated interval is zero. Therefore, a pixel in a flat (plain tone) region is never considered as part of an edge. The procedure of the ULED is included in Algorithm 1.

Algorithm 1. Procedure for the Upper-Lower Edge Detector (ULED).
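As an illustration, a minimal Python sketch (ours) of the ULED procedure for the (TM, TM, SM, SM) instantiation used in this work: since the window contains the central pixel itself, with the minimum t-norm and maximum t-conorm the constructors reduce to the local minimum and maximum over the (2n+1) × (2m+1) window, so the fuzzy edge image is simply the interval length at each pixel.

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter

def uled(image, n=3, m=None):
    """Fuzzy edge image F = U - L of a grayscale image with values in [0, 1].

    Valid for T1 = T2 = minimum and S1 = S2 = maximum, for which the
    lower (upper) constructor equals the local minimum (maximum).
    """
    m = n if m is None else m
    window = (2 * n + 1, 2 * m + 1)
    lower = minimum_filter(image, size=window)  # lower constructor L^{n,m}
    upper = maximum_filter(image, size=window)  # upper constructor U^{n,m}
    return upper - lower                        # interval length = edginess
```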

3.3. Multichannel feature fusion using OWA operators

When an image is composed of several channels (whether representing color or not), all of them are meant to be meaningful, although their interest or reliability might be heterogeneous. In the specific case of edge detection, this means that edges (or, in a more general way, local intensity variations) might appear at any possible channel. Hence, the information gathered at each of the channels must be considered in the information fusion phase.

Fig. 1. Original image 124084 extracted from the BSDS300 test set.

As cited by Evans and Liu [12], Novak and Shafer estimate that 10% of the edges are only visible in color images [31]. Moreover, in some images the visibility of the edges is reduced in the color-to-grayscale conversion. An example of this fact can be seen in Figs. 1 and 2. In Fig. 1 we can observe a color image extracted from the BSDS300 dataset, while in Fig. 2 we can observe the ULED edge features that can be extracted from each of its channels and from their average. In this figure it is evident that the combination of the color channels into a single grayscale image can lead to the disappearance of some edges. However, treating each of the channels individually offers a better possibility to appropriately detect the most relevant edges in the image.

When fusing edge features at different channels one must consider that, for each pixel, a strong indicator at a single channel might hint at a perceivable edge, regardless of the absence of such hints at other channels. That is, the presence of a strong intensity variation in one of the channels might be enough to consider the pixel as an edge. In fact, one of the motivations to use multichannel images is that each of the channels might capture information that is oblivious to the others. Hence, when fusing edge features, the greater values (presumably indicating the presence of an edge) should be dominant over the smaller ones.

Fig. 2. Channel decomposition of the image in Fig. 1, together with the result of averaging the RGB channels. In the lower row we include the edge features extracted by ULED using n = m = 3 and (T1, T2, S1, S2) = (TM, TM, SM, SM).

There exists a variety of options for feature fusion in the literature. In this work we use aggregation functions, which have been extensively studied [22]. Aggregation functions, according to the most popular definitions [4], produce a scalar representation of the values in a vector in a monotonic way. Our proposal consists of producing an aggregated estimation of the edge at each pixel from the value of the edge features obtained at each of the channels. Consequently, apart from monotonicity, we demand several other characteristics of our fusion operators. First, we need our operators to satisfy symmetry, since a reordering of the channels should not result in a variation of the fused result. Second, we need the operators to be compensatory, since the resulting estimation should not be above the maximum edge feature or below the minimum. Third, we need our aggregation operators to give more influence to the channels producing the greatest values at each pixel, since the existence of a strong edge cue in one single channel might indicate the presence of an edge, despite it not being appreciable at some other channels.

Among the families of compensatory, symmetric aggregation functions, the one that best fits our purposes is that of the OWA operators. If a decreasing weighting vector is used, the greater values to be aggregated will always be assigned more relevance, regardless of their position in the vector. Hence, our strategy for multichannel edge feature fusion is to aggregate the vector of features using an OWA operator with a decreasing weighting vector.

3.4. Multichannel extension of the ULED

Our proposal to extend the ULED to multichannel edge images is depicted in Algorithm 2. We refer to this edge detector as the Multichannel ULED (MULED). Note that the method can handle any number of channels, whether representing color or any other multispectral information.

Algorithm 2. Procedure for the Multichannel Upper-Lower Edge Detector (MULED).
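A minimal Python sketch (ours) of this procedure, reusing the uled and owa sketches above; names and defaults are our own assumptions rather than a verbatim transcription of Algorithm 2.

```python
import numpy as np

def muled(image, weights, n=3):
    """Fuzzy edge image of a multichannel image of shape (X, Y, channels).

    `weights` is a decreasing OWA weighting vector with one entry per
    channel (e.g. w2 = (2/3, 1/3, 0) for three channels).
    """
    channels = [uled(image[..., c], n) for c in range(image.shape[-1])]
    features = np.stack(channels, axis=-1)             # per-channel edge features
    ordered = np.sort(features, axis=-1)[..., ::-1]    # per-pixel descending sort
    return ordered @ np.asarray(weights, dtype=float)  # pixelwise OWA fusion
```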

4. Experimental validation

4.1. Aim of the experiment

The aim of this experiment is to check the validity of our proposal for edge detection on color images. More specifically, we want to check whether our feature fusion technique results in a significant improvement of the performance compared to the most usual procedure to deal with multichannel images, which is fusing (averaging) the channels in the image prior to detecting edges. Note that we intend neither to carry out a comparison with methods other than the ULED, nor to prove that MULED is better than other existing methods in the literature. Instead, we try to find out whether multichannel feature fusion leads to results better than those of fusing the image. Consequently, we only compare two alternatives for a single method (ULED). Independent studies should be performed if the presented multichannel fusion technique is used with detectors other than the ULED.

Fig. 3. Schematic representation of the procedure used to obtain binary edge images using ULED: each channel C1, ..., Cn undergoes Gaussian smoothing and ULED feature extraction, the features are fused into a fuzzy edge image, and binarization produces the binary edges. The image has been obtained from the BSDS300 test set, and each of the channels is obtained from its RGB decomposition.

4.2. Experimental dataset

In this experiment we have used the Berkeley Segmentation Dataset (BSDS). This dataset offers a wide variety of natural images, together with several hand-made segmentations of each of them. Those segmentations can be considered as ideal solutions of the edge detection problem. It is our intention to test different edge detectors and see how close their results are to the ideal ones. The images in the BSDS have resolution 321 × 481 or 481 × 321, and are provided in RGB color. Each of them is associated with a set of 5 to 9 binary human-made segmentations.

4.3. Generating binary images

The procedure in Algorithm 2 produces a fuzzy representation of the edges. However, an edge detector is meant to produce a binary representation of the edges, provided in the shape of thin lines. Hence, there is a need to binarize the edges after the generation of the fuzzy representation. In our case this is achieved by using the non-maxima suppression method by Smith [36], in combination with the hysteresis threshold determination technique by Medina-Carnicer et al. [30]. Since the images used for this experiment are natural, they might contain some kind of noise or contamination. Consequently, we need to perform some regularization prior to edge feature extraction, in order to minimize its impact on the final edges. In this case, the regularization is carried out using Gaussian filters, whose standard deviation is referred to as σr. A schematic representation of our proposal is shown in Fig. 3.
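A hedged Python sketch (ours) of this pipeline: the Gaussian regularization and MULED fusion follow the text, while a plain global threshold (our simplification, not the paper's method) stands in for the non-maxima suppression [36] and hysteresis [30] steps, which we do not reproduce here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def binary_edges(image, weights, sigma_r=2.0, n=3, threshold=0.25):
    """Channel-wise Gaussian smoothing, MULED fusion, then binarization.

    `threshold` is a stand-in for NMS + hysteresis; `muled` is the
    sketch given in Section 3.4.
    """
    smooth = np.stack([gaussian_filter(image[..., c].astype(float), sigma=sigma_r)
                       for c in range(image.shape[-1])], axis=-1)
    fuzzy = muled(smooth, weights, n)  # fuzzy (scalar) edge image
    return fuzzy >= threshold          # stand-in for thinning + binarization
```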

4.4. Measuring the performance of an edge detector

There exists an open debate on the best way to evaluate the performance of an edge detector, which is being boosted by recent works [17, 18, 25]. In this work we use the methodology by Martin et al. [29], which is based on the use of classification measures. This methodology is grounded in the fact that edge detection can be seen as a binary classification problem. Hence, it can be evaluated in terms of success and fallout, comparing the output of an edge detection method with that generated by a human, which we can consider as ground truth. In this way, we build a confusion matrix such as the one in Fig. 4, with the elements on the main diagonal being the ones correctly classified.

Fig. 4. Confusion matrix for the edge detection problem:

                            Reality
                       Edge      Non-Edge
Classification
    Edge                TP          FP
    Non-Edge            FN          TN

There are some considerations to be taken into account, due to the very particular conditions of the edge detection problem. More specifically, due to the fact that edges are not simply a subset of positions of the image, since they contain embedded spatial information. For example, it is clear that an edge displaced from its true position should not be penalized as much as when it is completely missing. In such a situation, a displaced edge should not be penalized as much as a false positive plus a false negative. In order to solve this problem, we use a one-to-one pixel matching algorithm to map the edge pixels in the candidate edge image (generated by an edge detection method) and the ground truth. This matching allows for a certain spatial tolerance (in our case, as much as 1% of the diagonal of the image), so that an edge pixel can be slightly moved from its true position, yet still be considered as correctly classified. In order to do the pixel-to-pixel matching we use the Cost Scaling Algorithm by Goldberg and Kennedy [15].
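For illustration, a Python sketch (ours) of such a tolerance-constrained one-to-one matching; note that we use SciPy's generic assignment solver as a stand-in for the Cost Scaling Algorithm employed in the paper.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_edges(candidate_xy, truth_xy, diag):
    """One-to-one matching of candidate and ground-truth edge pixels.

    Pixels further apart than 1% of the image diagonal cannot match;
    matched pairs count as true positives.
    """
    tol = 0.01 * diag
    dist = np.linalg.norm(candidate_xy[:, None, :] - truth_xy[None, :, :], axis=-1)
    cost = np.where(dist <= tol, dist, 1e9)  # forbid matches beyond tolerance
    rows, cols = linear_sum_assignment(cost)
    matched = dist[rows, cols] <= tol        # keep only feasible pairs
    return rows[matched], cols[matched]      # indices of matched edge pixels
```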

From the confusion matrix we extract the precision and recall evaluations, defined as

$$\mathrm{Prec} = \frac{TP}{TP + FP} \quad \text{and} \quad \mathrm{Rec} = \frac{TP}{TP + FN}. \qquad (4)$$

Precision and recall are preferred for measuring the performance over other alternatives typically used in ROC analysis [41]. The precision and recall measures hold good stability properties when the size of the image varies [2]. Moreover, they avoid considering TN, which is much larger than the other elements in a typical edge detection confusion matrix, and hence distorts the results. Although Prec and Rec illustrate specific aspects of the problem, some scalar evaluation is needed to assess the overall quality of an edge image. We use the F-measure [14], defined as

$$F_\alpha = \frac{\mathrm{Prec} \cdot \mathrm{Rec}}{\alpha\, \mathrm{Prec} + (1 - \alpha)\, \mathrm{Rec}}, \qquad (5)$$

where α is a value modulating the relative impact of the Prec and Rec values; we adhere to the commonly used F0.5. Note that F0.5 is the harmonic mean of Prec and Rec. In this way, we evaluate three different facets of the problem: the accuracy (using Prec), the fallout (using Rec) and the overall quality (using F0.5).
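A minimal Python sketch (ours) of these measures; the counts in the usage line are hypothetical.

```python
def f_measure(tp: int, fp: int, fn: int, alpha: float = 0.5) -> float:
    """F-measure of Eq. (5); alpha = 0.5 gives the harmonic mean of Prec and Rec."""
    prec = tp / (tp + fp)  # Eq. (4), precision
    rec = tp / (tp + fn)   # Eq. (4), recall
    return prec * rec / (alpha * prec + (1 - alpha) * rec)

print(f_measure(tp=80, fp=20, fn=40))  # Prec = 0.8, Rec = 2/3, F0.5 ~ 0.727
```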

4.5. Results

The results gathered in the experiment are included in Fig. 5, divided by the size of the square neighbourhood (n) considered in the application of the ULED. For each size we consider three values of σr, and for each of them we display the average performance (in terms of F0.5, Prec and Rec), the average ranking (Rank.) and the number of best and worst results (B/W) obtained. Note that, in order to compute the average ranking and the number of best and worst results, we only consider the candidates using the same n and σr.

In Fig. 5 we observe that the results obtained by MULED are almost always better than those by the original ULED. This can be seen in terms of F0.5, as well as in terms of Rank. and B/W. Indeed, the situations in which the ULED outperforms the MULED are restricted to configurations with low values of σr. When σr increases, the MULED stands as the better option.

The low performance of the MULED when σr = 1.0 is explainable from the information fusion technique. When such a regularization setting is used, an important number of imperfections (e.g. speckle) or fine textures are not removed from the image. When using the MULED, the presence of these artifacts forces the detection of significant edge hints in at least one of the channels. Consequently, the MULED can finally classify them as edges (especially if using w1 or w2), generating a large number of false positives. This can be partially avoided by mixing the color channels prior to edge detection, as happens in the ULED. Considering this, we can assert that the MULED is more sensitive to false detections than the ULED, at least regarding the presence of spurious artifacts in non-heavily regularized images. This problem is especially acute when the weighting vector used for feature fusion emphasizes the importance of the greatest feature value.

When greater values of σr are used, the noise and imperfections of the image tend to disappear, although this comes coupled with a progressive regularization of the edges. This trade-off has been extensively studied in the literature, giving rise to the Gaussian Scale-Space [23, 26, 43]. When σr ∈ {2.0, 3.0}, the MULED produces better results than the ULED, since it avoids the aforementioned problem of the false positives while successfully facing the detection of blurred edges. In these situations, the optimism of the MULED with respect to the presence of edges results in very good performance. Indeed, it can be observed that the performance of the ULED dramatically decreases from σr = 1.0 to σr = 2.0, while that of the MULED increases. Note that this can be observed in terms of F0.5, but is also noteworthy in terms of Rank. or B/W.

Fig. 5. Results gathered in the comparison of the ULED and the MULED. For each possible neighbourhood size and σr we list the average performance (F0.5), precision (Prec), recall (Rec), ranking (Rank.) and the number of images of the BSDS for which each is the best (B) and worst (W) performer.

In this experiment the best possible results are reached by combining the MULED with large values of σr. There is no significant difference in the setting of the neighbourhood size, so n ∈ {3, 5} stands as a safe option. With respect to w, although there are no dramatic differences in the average performance (F0.5), both Rank. and B/W have to be carefully analyzed in order to fix the settings for specific applications. From the results in Fig. 5 we recommend w2 and, if using a very high value of σr, also w1. Note that w1 obtains the best average results, but at the same time performs worse than any other OWA operator for a large number of images, as we can observe in the results regarding B/W. Hence, if a more conservative option is preferred, w3 or w4 can be a better choice.

5. Conclusions

We have proposed a methodology to adapt existing edge detectors to multichannel images. Our proposal consists of performing monochannel feature extraction followed by a novel technique of multichannel feature fusion based on Ordered Weighted Averaging (OWA) operators. So far our proposal can only be combined with edge detectors generating scalar features for edge characterization. We have combined our proposal with the Upper-Lower Edge Detector (ULED), giving rise to the Multichannel ULED (MULED). Our experimental tests with color images have illustrated how applying an explicit multichannel approach can lead to better results than averaging-based color aggregation. Our proposal is very flexible, does not involve complicated paradigms, and is well grounded in the extensively studied field of aggregation operators.

As future work we propose two extensions to complete our proposal. First, we intend to create a way to automatically train the aggregation operators used for feature fusion, so that they can adapt to the specific conditions of the datasets. Second, we propose to allow multidimensional edge features to be aggregated, so that popular methods such as the Sobel method [37] or the Canny method [9] (which use vectorial features) can be combined with our multichannel feature fusion technique.

References

[1] C. Alsina, M.J. Frank and B. Schweizer, Associative Functions: Triangular Norms and Copulas, World Scientific Publishing Company, 2006.
[2] P. Arbelaez, M. Maire, C. Fowlkes and J. Malik, Contour detection and hierarchical image segmentation, IEEE Trans on Pattern Analysis and Machine Intelligence 33 (2011), 898–916.
[3] E. Barrenechea, H. Bustince Sola, B. De Baets and C. Lopez-Molina, Construction of interval-valued fuzzy relations with application to the generation of fuzzy edge images, IEEE Trans on Fuzzy Systems 19(5) (2011), 819–830.
[4] G. Beliakov, A. Pradera and T. Calvo, Aggregation Functions: A Guide for Practitioners, volume 221 of Studies in Fuzziness and Soft Computing, Springer, 2007.
[5] A. Ben Hamza, Y. He, H. Krim and A. Willsky, A multiscale approach to pixel-level image fusion, Integrated Computer-Aided Engineering 12 (2005), 135–146.
[6] J.C. Bezdek, R. Ehrlich and W. Full, FCM: The fuzzy c-means clustering algorithm, Computers & Geosciences 10(2-3) (1984), 191–203.
[7] I. Bloch, Fuzzy relative position between objects in image processing: A morphological approach, IEEE Trans on Pattern Analysis and Machine Intelligence 21(7) (1999), 657–664.
[8] H. Bustince, E. Barrenechea, M. Pagola and J. Fernandez, Interval-valued fuzzy sets constructed from matrices: Application to edge detection, Fuzzy Sets and Systems 160(13) (2009), 1819–1840.
[9] J. Canny, A computational approach to edge detection, IEEE Trans on Pattern Analysis and Machine Intelligence 8(6) (1986), 679–698.
[10] B. De Baets, Generalized idempotence in fuzzy mathematical morphology, in: Fuzzy Techniques in Image Processing, Physica-Verlag, 2000, pp. 58–75.
[11] R. Dony and S. Wesolkowski, Edge detection on color images using RGB vector angles, in: Proc of the IEEE Canadian Conference on Electrical and Computer Engineering 2 (1999), 687–692.
[12] A. Evans and X.U. Liu, A morphological gradient approach to color edge detection, IEEE Trans on Image Processing 15(6) (2006), 1454–1463.
[13] J. Fan, W.G. Aref, M.-S. Hacid and A.K. Elmagarmid, An improved automatic isotropic color edge detection technique, Pattern Recognition Letters 22(13) (2001), 1419–1429.
[14] M.K. Geetha and S. Palanivel, Video classification and shot detection for video retrieval applications, International Journal of Computational Intelligence Systems 2(1) (2009), 39–50.
[15] A.V. Goldberg and R. Kennedy, An efficient cost scaling algorithm for the assignment problem, Mathematical Programming 71 (1995), 153–177.
[16] R.M. Haralick, S.R. Sternberg and X. Zhuang, Image analysis using mathematical morphology, IEEE Trans on Pattern Analysis and Machine Intelligence 9(4) (1987), 532–550.
[17] X. Hou, A. Yuille and C. Koch, A meta-theory of boundary detection benchmarks, in: Proc of the NIPS Workshop on Human Computation for Science and Computational Sustainability, 2012.
[18] X. Hou, A. Yuille and C. Koch, Boundary detection benchmarking: Beyond F-measures, in: IEEE Conference on Computer Vision and Pattern Recognition, 2013, pp. 2123–2130.
[19] T. Huntsberger and M. Descalzi, Color edge detection, Pattern Recognition Letters 3(3) (1985), 205–209.
[20] A. Jurio, D. Paternain, C. Lopez-Molina, H. Bustince, R. Mesiar and G. Beliakov, A construction method of interval-valued fuzzy sets for image processing, in: Proc of the IEEE Symposium on Advances in Type-2 Fuzzy Logic Systems, 2011.
[21] D. Karakos and P. Trahanias, Generalized multichannel image-filtering structures, IEEE Trans on Image Processing 6(7) (1997), 1038–1045.
[22] E.P. Klement, R. Mesiar and E. Pap, Triangular Norms, Kluwer Academic Publishers, 2000.
[23] T. Lindeberg, Edge detection and ridge detection with automatic scale selection, International Journal of Computer Vision 30(2) (1998), 117–156.
[24] C. Lopez-Molina, H. Bustince, M. Galar, J. Fernandez and B. De Baets, On the use of t-conorms in the gravity-based approach to edge detection, in: Proc of the International Conference on Intelligent Systems Design and Applications, pp. 1347–1352.
[25] C. Lopez-Molina, B. De Baets and H. Bustince, Quantitative error measures for edge detection, Pattern Recognition 46(4) (2013), 1125–1139.
[26] C. Lopez-Molina, B. De Baets, H. Bustince, J. Sanz and E. Barrenechea, Multiscale edge detection based on Gaussian smoothing and edge tracking, Knowledge-Based Systems 44 (2013), 101–111.
[27] C. Lopez-Molina, M. Galar, H. Bustince and B. De Baets, Extending the upper-lower edge detector by means of directional masks and OWA operators, Progress in Artificial Intelligence 1 (2012), 267–276.
[28] X. Luo, X. Wu and Z. Zhang, Regional and entropy component analysis based remote sensing images fusion, Journal of Intelligent and Fuzzy Systems, 2013, in press.
[29] D. Martin, C. Fowlkes and J. Malik, Learning to detect natural image boundaries using local brightness, color, and texture cues, IEEE Trans on Pattern Analysis and Machine Intelligence 26(5) (2004), 530–549.
[30] R. Medina-Carnicer, F. Madrid-Cuevas, A. Carmona-Poyato and R. Munoz-Salinas, On candidates selection for hysteresis thresholds in edge detection, Pattern Recognition 42(7) (2009), 1284–1296.
[31] C.L. Novak and S.A. Shafer, Color edge detection, in: Proceedings of the DARPA Image Understanding Workshop, 1987.
[32] G.S. Robinson, Color edge detection, Optical Engineering 16(5) (1977), 479–484.
[33] F. Russo and A. Lazzari, Color edge detection in presence of Gaussian noise using nonlinear prefiltering, IEEE Trans on Instrumentation and Measurement 54(1) (2005), 352–358.
[34] M. Ruzon and C. Tomasi, Color edge detection with the compass operator, in: Proc of the IEEE Conference on Computer Vision and Pattern Recognition 2 (1999), 160–166.
[35] G. Sapiro, Color snakes, Computer Vision and Image Understanding 68(2) (1997), 247–253.
[36] S.M. Smith and J.M. Brady, SUSAN - A new approach to low level image processing, International Journal of Computer Vision 23 (1997), 45–78.
[37] I. Sobel and G. Feldman, A 3x3 isotropic gradient operator for image processing, presented at the Stanford Artificial Intelligence Project, 1968.
[38] C. Theoharatos, G. Economou and S. Fotopoulos, Color edge detection using the minimal spanning tree, Pattern Recognition 38(4) (2005), 603–606.
[39] P.J. Toivanen, J. Ansamaki, J.P.S. Parkkinen and J. Mielikainen, Edge detection in multispectral images using the self-organizing map, Pattern Recognition Letters 24(16) (2003), 2987–2994.
[40] P. Trahanias and A. Venetsanopoulos, Color edge detection using vector order statistics, IEEE Trans on Image Processing 2(2) (1993), 259–264.
[41] W. Waegeman, B. De Baets and L. Boullart, ROC analysis in ordinal regression learning, Pattern Recognition Letters 29(1) (2008), 1–9.
[42] S. Wesolkowski, M. Jernigan and R. Dony, Comparison of color image edge detectors in multiple color spaces, in: Proc of the International Conference on Image Processing 2 (2000), 796–799.
[43] A.P. Witkin, Scale-space filtering, in: Proc of the International Joint Conference on Artificial Intelligence 2 (1983), 1019–1022.
[44] R. Yager, On ordered weighted averaging aggregation operators in multicriteria decisionmaking, IEEE Trans on Systems, Man and Cybernetics 18(1) (1988), 183–190.
[45] S.-Y. Zhu, K.N. Plataniotis and A.N. Venetsanopoulos, Comprehensive analysis of edge detection in color image processing, Optical Engineering 38(4) (1999), 612–625.