Supplementary Materials for 'Salient Object Detection: A Discriminative Regional Feature Integration Approach'

Huaizu Jiang, Zejian Yuan, Ming-Ming Cheng, Yihong Gong, Nanning Zheng, and Jingdong Wang

Abstract—In this supplementary material, we present more details on learning a Random Forest saliency regressor. More evaluation results with state-of-the-art algorithms are also presented.


1 LEARNING

1.1 Learning a Similarity Score between Two Adjacent Superpixels

To learn the similarity score of two adjacent superpixels s_i and s_j, they are described by a 222-dimensional feature vector, including their saliency features, feature contrast, and the geometry features between them. Saliency features are already introduced in our paper. Feature contrast and superpixel boundary geometry features are presented in Fig. 1.
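For illustration, the χ² histogram distance and the absolute difference of mean colors used by the contrast features (c1–c9 in Fig. 1) can be sketched as below. This is our own minimal sketch, not the authors' code: the function names, the bin count, and the assumption that a superpixel's pixels arrive as an (N, 3) array in [0, 1] are ours.

```python
import numpy as np

def chi2_distance(h1, h2, eps=1e-10):
    """Chi-squared distance between two histograms (L1-normalized first)."""
    h1 = h1 / max(h1.sum(), eps)
    h2 = h2 / max(h2.sum(), eps)
    return float(0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps)))

def contrast_features(pix_a, pix_b, bins=8):
    """Contrast between two superpixels given their RGB pixels as (N, 3)
    arrays in [0, 1]: |difference of mean colors| (3 dims) and the
    chi-squared distance of their RGB histograms (1 dim)."""
    mean_diff = np.abs(pix_a.mean(axis=0) - pix_b.mean(axis=0))
    ha, _ = np.histogramdd(pix_a, bins=bins, range=[(0, 1)] * 3)
    hb, _ = np.histogramdd(pix_b, bins=bins, range=[(0, 1)] * 3)
    return mean_diff, chi2_distance(ha.ravel(), hb.ravel())
```

The same pattern extends to the HSV, L*a*b*, filter-bank, and texton entries of Fig. 1 by swapping the per-pixel representation.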

1.2 Feature Importance in a Random Forest

Training a Random Forest regressor amounts to independently building each decision tree. For the t-th decision tree, the training samples are randomly drawn from all training samples with replacement,

X_t = \{x^{(t_1)}, x^{(t_2)}, \dots, x^{(t_Q)}\}, \quad A_t = \{a^{(t_1)}, a^{(t_2)}, \dots, a^{(t_Q)}\},

where t_i \in [1, Q], i \in [1, Q]. Because the samples are drawn with replacement, some samples are never used for training; these are called out-of-bag (oob) data. After constructing a decision tree, the oob data can be utilized to estimate the importance of features. Suppose that the feature f was used to construct one of the nodes of the tree and D_oob are the oob samples. We first compute the prediction error on these oob data based on the i-th decision tree,

E(f, i) = \sum_{j \in D_{oob}} \left( \tilde{a}_i^{(j)}(f) - a^{(j)} \right)^2, \quad (1)

where \tilde{a}_i^{(j)}(f) is the prediction value of the j-th oob sample given by the i-th tree based on the feature f. The feature f is then randomly permuted among all the oob samples, and the permuted prediction error is computed as

E_p(f, i) = \sum_{j \in D_{oob}} \left( \tilde{b}_i^{(j)}(f) - a^{(j)} \right)^2, \quad (2)

where \tilde{b}_i^{(j)}(f) is the prediction value of the j-th oob sample given by the i-th tree based on the randomly permuted feature f. Finally, the importance of feature f can be computed as

I(f) = \frac{1}{T} \sum_{i=1}^{T} \left( E_p(f, i) - E(f, i) \right). \quad (3)

The importance measure can be interpreted this way: if a feature is not important, the prediction will not be affected much even if its values are randomly permuted; permuting an important feature, in contrast, will greatly influence the prediction. Therefore, the difference of prediction errors I(f), averaged over all the decision trees, can be used to measure a feature's importance.

• H. Jiang, Z. Yuan, Y. Gong, and N. Zheng are with Xi'an Jiaotong University. M.-M. Cheng is with Oxford University. J. Wang is with Microsoft Research Asia.
• A preliminary version of this work appeared at CVPR [1].
• Project website: jianghz.com/drfi

Feature Contrast                                           Dim
c1. abs. diff. of average RGB values                         3
c2. χ² distance of RGB histogram                             1
c3. abs. diff. of average HSV values                         3
c4. χ² distance of HSV histogram                             1
c5. abs. diff. of average L*a*b* values                      3
c6. χ² distance of L*a*b* histogram                          1
c7. abs. diff. of average responses of filter bank          15
c8. χ² distance of maximum response of filter bank           1
c9. χ² distance of texton histogram                          1

Boundary Geometry                                          Dim
g1. average x coordinates                                    1
g2. average y coordinates                                    1
g3. 10th percentile of x coordinates                         1
g4. 10th percentile of y coordinates                         1
g5. 90th percentile of x coordinates                         1
g6. 90th percentile of y coordinates                         1
g7. normalized length                                        1

Fig. 1. Feature contrast and superpixel boundary geometry features between two adjacent superpixels.
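Equations (1)–(3) can be sketched with an explicit bootstrap loop. This is a simplified illustration under our own assumptions, not the authors' implementation: it permutes every feature for every tree (rather than only the features used in a node) and relies on scikit-learn's DecisionTreeRegressor as the base learner.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def oob_permutation_importance(X, a, n_trees=25, seed=0):
    """Sketch of Eqs. (1)-(3): train each tree on a bootstrap sample,
    then measure how much its squared error on the out-of-bag (oob)
    samples grows when one feature is randomly permuted."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    importance = np.zeros(d)
    for _ in range(n_trees):
        bag = rng.integers(0, n, size=n)          # draw with replacement
        oob = np.setdiff1d(np.arange(n), bag)     # samples never drawn
        if oob.size == 0:
            continue
        tree = DecisionTreeRegressor(random_state=0).fit(X[bag], a[bag])
        base_err = np.sum((tree.predict(X[oob]) - a[oob]) ** 2)   # E(f, i)
        for f in range(d):
            Xp = X[oob].copy()
            Xp[:, f] = rng.permutation(Xp[:, f])  # permute feature f
            perm_err = np.sum((tree.predict(Xp) - a[oob]) ** 2)   # Ep(f, i)
            importance[f] += perm_err - base_err
    return importance / n_trees                   # I(f), Eq. (3)
```

On synthetic data where the target depends on a single feature, that feature's importance dominates the others, matching the interpretation above.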


Fig. 2. Average annotation map of the DUT-OMRON data set.

2 ADDITIONAL EVALUATIONS

Extensive evaluations have been performed on six data sets, including MSRA-B¹ [2], iCoSeg² [3], SED2³ [4], ECSSD⁴ [5], DUT-OMRON⁵ [6], and the DUT-OMRON data set.

In this section, we provide additional quantitative and qualitative comparisons with state-of-the-art approaches for a more comprehensive evaluation.

2.1 DUT-OMRON Data Set

In the paper, we sample 635 images from the DUT-OMRON data set (we call this subset the DUT-OMRON data set) where salient objects touch the image border and are far from the image center, in order to check the robustness of our approach. The average annotation of salient objects is shown in Fig. 2. As we can see, there is a strong off-center bias. Moreover, it can be clearly seen that salient objects touch the image border. These two factors make the DUT-OMRON data set challenging for our approach, which depends on the pseudo-background assumption and the geometric distributions of salient objects discovered from training images. We also present some sample images in Fig. 3. The complete list of images in the DUT-OMRON data set is available at our project webpage, jianghz.com/drfi.
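A map like Fig. 2 can be reproduced by averaging the binary ground-truth masks of a data set after resizing them to a common resolution. The sketch below is our own illustration; the output resolution and the nearest-neighbour resize (used to avoid external dependencies) are our choices.

```python
import numpy as np

def resize_nearest(img, size):
    """Minimal nearest-neighbour resize of a 2-D array."""
    rows = (np.arange(size[0]) * img.shape[0] / size[0]).astype(int)
    cols = (np.arange(size[1]) * img.shape[1] / size[1]).astype(int)
    return img[np.ix_(rows, cols)]

def average_annotation_map(masks, size=(200, 200)):
    """Average the resized binary ground-truth masks of a data set to
    visualize where salient objects tend to lie (center bias, etc.)."""
    acc = np.zeros(size, dtype=np.float64)
    for m in masks:
        acc += resize_nearest(m.astype(np.float64), size)
    return acc / len(masks)
```

A bright center in the resulting map indicates center bias; for the sampled subset above, the mass instead spreads toward the image border.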

2.2 Additional Benchmark Data Sets

Due to limited space, we only consider six benchmark data sets in the paper for evaluations. In this supplementary material, we provide results on three other data sets, which are also widely adopted for salient object detection evaluation.

MSRA1k⁶. This data set [7], containing 1,000 images sampled from MSRA-B [2], is the first large-scale data set for salient object detection with pixel-wise ground-truth annotation. However, the performance of recent approaches starts to saturate on this benchmark, as the background and salient regions are relatively homogeneous and simple. Since some images of MSRA1k are chosen as training data for our approach, we discard those images in the testing phase, leaving 396 images.

1. http://research.microsoft.com/en-us/um/people/jiansun/
2. http://chenlab.ece.cornell.edu/projects/touch-coseg/
3. http://www.wisdom.weizmann.ac.il/~vision/Seg_Evaluation_DB/
4. http://www.cse.cuhk.edu.hk/leojia/projects/hsaliency/
5. http://ice.dlut.edu.cn/lu/DUT-OMRON/homepage.htm
6. http://ivrgwww.epfl.ch/supplementary_material/RK_CVPR09/

Fig. 3. Sample images of the DUT-OMRON data set.

[Fig. 4 plots precision-recall curves of SVO, CA, CB, RC, SF, LRK, HS, GMR, PCA, MC, DSR, RBD, DRFIs, and DRFI on nine panels: MSRA, iCoSeg, ECSSD, DUT-OMRON, SED2, DUT-OMRON, MSRA1k, SED1, and SOD.]

Fig. 4. Quantitative comparisons of saliency maps produced by different approaches on different data sets in terms of PR curves.

SED1⁷. It has 100 images, each containing exactly one salient object. Pixel-wise ground-truth annotations for the salient objects are provided. Similar to MSRA1k, it is a relatively simple data set with clean backgrounds.

SOD⁸. This data set [8] is a collection of salient object boundaries based on the Berkeley segmentation data set [9]. Seven subjects were asked to choose the salient object(s) in 300 images. We generate the pixel-wise annotation of the salient objects as in [10]. This data set contains many images with multiple objects, making it challenging.

7. http://www.wisdom.weizmann.ac.il/~vision/Seg_Evaluation_DB/
8. http://elderlab.yorku.ca/SOD/

2.3 Additional Quantitative Comparisons

In the paper, we provide quantitative comparisons of our approach (DRFIs and DRFI) on six benchmark data sets. In this section, we provide additional results on the MSRA1k, SED1, and SOD data sets. In addition to the PR curve, the ROC curve, and AUC (Area Under ROC Curve) scores, we also report the MAE (Mean Absolute Error) score of each approach. The MAE score directly reflects the mean absolute difference between the ground-truth annotation and a saliency map.
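For reference, the MAE score and a threshold-based AUC score can be sketched as follows. This is our own minimal sketch, not the paper's exact evaluation protocol: the threshold count and the assumption that both maps are scaled to [0, 1] are ours.

```python
import numpy as np

def mae_score(saliency, gt):
    """MAE: mean absolute difference between a saliency map and the
    ground-truth annotation, both scaled to [0, 1]."""
    return float(np.mean(np.abs(saliency - gt)))

def auc_score(saliency, gt, n_thresholds=256):
    """AUC: area under the ROC curve traced by binarizing the saliency
    map at evenly spaced thresholds."""
    gt = gt > 0.5                                  # binary ground truth
    pos, neg = gt.sum(), (~gt).sum()
    fpr, tpr = [], []
    for t in np.linspace(0.0, 1.0, n_thresholds):
        pred = saliency >= t
        tpr.append((pred & gt).sum() / max(pos, 1))
        fpr.append((pred & ~gt).sum() / max(neg, 1))
    order = np.argsort(fpr)                        # sort by FPR
    fpr, tpr = np.asarray(fpr)[order], np.asarray(tpr)[order]
    # trapezoidal integration of TPR over FPR
    return float(np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2.0))
```

A perfect saliency map (identical to the ground truth) yields MAE 0 and AUC 1; both scores are then averaged over the images of a data set.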

Similar to the paper, we compare our method with 12 state-of-the-art approaches, including SVO [11], CA [12], CB [13], RC [14], SF [15], LRK [16], HS [17], GMR [6], PCA [18], MC [19], DSR [20], and RBD [21]. PR curves and ROC curves are plotted in Fig. 4 and Fig. 5, respectively. AUC scores and MAE scores are presented in Fig. 6 and Fig. 7.
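A fixed-threshold PR curve of the kind plotted in Fig. 4 can be sketched as below, averaging per-image precision and recall over a data set. The threshold grid and input conventions are our assumptions, not the paper's exact setup.

```python
import numpy as np

def pr_curve(saliency_maps, gts, n_thresholds=256):
    """Fixed-threshold precision/recall averaged over a data set.
    saliency_maps, gts: lists of same-shape arrays in [0, 1]."""
    thresholds = np.linspace(0.0, 1.0, n_thresholds)
    precision = np.zeros(n_thresholds)
    recall = np.zeros(n_thresholds)
    for smap, gt in zip(saliency_maps, gts):
        gt = gt > 0.5
        for k, t in enumerate(thresholds):
            pred = smap >= t
            tp = (pred & gt).sum()                 # true positives
            precision[k] += tp / max(pred.sum(), 1)
            recall[k] += tp / max(gt.sum(), 1)
    n = len(saliency_maps)
    return precision / n, recall / n
```

Plotting precision against recall for each method then gives curves comparable to those in Fig. 4.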

As can be seen, our approach DRFIs is slightly better than most state-of-the-art methods on MSRA1k and SED1 according to either PR or ROC curves.


[Fig. 5 plots ROC curves (true positive rate vs. false positive rate) of SVO, CA, CB, RC, SF, LRK, HS, GMR, PCA, MC, DSR, RBD, DRFIs, and DRFI on nine panels: MSRA, iCoSeg, ECSSD, DUT-OMRON, SED2, DUT-OMRON, MSRA1k, SED1, and SOD.]

Fig. 5. Quantitative comparison of saliency maps produced by different approaches on different data sets in terms of ROC curves.

With multi-level enhancement, our approach DRFI performs much better. On the challenging SOD data set, DRFIs and DRFI both significantly outperform other methods. Regarding AUC scores, DRFI performs consistently the best on MSRA1k, SED1, and SOD, and DRFIs is ranked the second best on SED1 and SOD.

Since our approach computes the saliency score for each region independently, the smoothness constraint between adjacent regions is ignored. Therefore, our approach does not perform as well in terms of MAE scores as it does in terms of AUC scores. Specifically, DRFI still performs the best on three benchmark data sets and the third best on SOD. DRFIs is ranked the best on two data sets, the second best on three data sets, and the third best on two data sets.

Please note that more sophisticated post-processing steps are utilized in other top-performing approaches. For instance, DSR adopts Bayesian integration (in addition to the multi-level enhancement), and the quadratic and manifold diffusion methods are adopted by RBD and GMR, respectively. As we stated in the paper about our future work, better MAE scores can be expected if more advanced post-processing is integrated.

2.4 Additional Qualitative Comparisons

In this supplementary material, we present more qualitative comparisons of different approaches on all benchmark data sets. Saliency maps of randomly chosen images from each benchmark data set are shown in Fig. 8 to Fig. 16.

Generally, our approach produces more appealing saliency maps than the other approaches; see the second and sixth rows of Fig. 8. For the extremely challenging DUT-OMRON data set, our approach generates slightly better saliency maps than the others in most cases; see Fig. 13 for examples.

REFERENCES

[1] H. Jiang, J. Wang, Z. Yuan, Y. Wu, N. Zheng, and S. Li, "Salient object detection: A discriminative regional feature integration approach," in IEEE CVPR, 2013, pp. 2083–2090.
[2] T. Liu, Z. Yuan, J. Sun, J. Wang, N. Zheng, X. Tang, and H.-Y. Shum, "Learning to detect a salient object," IEEE TPAMI, vol. 33, no. 2, pp. 353–367, 2011.


        MSRA-B  iCoSeg  ECSSD  DUT-OMRON  SED2   DUT-OMRON  MSRA1k  SED1   SOD
SVO     0.899   0.861   0.799  0.866      0.834  0.793      0.954   0.862  0.777
CA      0.860   0.837   0.738  0.815      0.854  0.760      0.888   0.861  0.782
CB      0.930   0.852   0.819  0.831      0.825  0.624      0.959   0.880  0.737
RC      0.937   0.880   0.833  0.859      0.840  0.679      0.976   0.872  0.821
SF      0.917   0.911   0.777  0.803      0.872  0.715      0.962   0.879  0.763
LRK     0.925   0.908   0.810  0.859      0.881  0.758      0.958   0.879  0.790
HS      0.930   0.882   0.829  0.860      0.820  0.735      0.971   0.910  0.811
GMR     0.942   0.902   0.834  0.853      0.831  0.646      0.976   0.919  0.795
PCA     0.938   0.895   0.817  0.887      0.903  0.776      0.975   0.905  0.822
MC      0.951   0.898   0.849  0.887      0.863  0.715      0.978   0.929  0.838
DSR     0.956   0.921   0.856  0.899      0.895  0.776      0.982   0.927  0.841
RBD     0.945   0.941   0.840  0.894      0.873  0.779      0.980   0.915  0.820
DRFIs   0.954   0.944   0.858  0.910      0.902  0.804      0.976   0.951  0.854
DRFI    0.971   0.968   0.875  0.931      0.933  0.822      0.988   0.972  0.883

Fig. 6. AUC (Area Under ROC Curve) scores of different approaches (larger is better). The best three results are highlighted with red, green, and blue fonts, respectively.

        MSRA-B  iCoSeg  ECSSD  DUT-OMRON  SED2   DUT-OMRON  MSRA1k  SED1   SOD
SVO     0.349   0.328   0.420  0.409      0.308  0.445      0.332   0.339  0.394
CA      0.247   0.259   0.343  0.254      0.225  0.268      0.225   0.255  0.312
CB      0.188   0.201   0.282  0.257      0.158  0.327      0.161   0.196  0.297
RC      0.138   0.170   0.235  0.188      0.153  0.269      0.101   0.160  0.242
SF      0.172   0.191   0.274  0.183      0.183  0.198      0.135   0.220  0.280
LRK     0.214   0.222   0.306  0.249      0.210  0.284      0.184   0.247  0.301
HS      0.161   0.181   0.269  0.227      0.185  0.299      0.102   0.157  0.283
GMR     0.125   0.163   0.237  0.189      0.165  0.270      0.074   0.138  0.261
PCA     0.188   0.209   0.291  0.206      0.194  0.231      0.153   0.225  0.274
MC      0.144   0.179   0.251  0.186      0.181  0.252      0.092   0.159  0.260
DSR     0.119   0.152   0.226  0.139      0.138  0.179      0.081   0.156  0.234
RBD     0.112   0.140   0.225  0.144      0.129  0.189      0.064   0.138  0.229
DRFIs   0.123   0.138   0.222  0.166      0.131  0.220      0.094   0.131  0.223
DRFI    0.130   0.138   0.232  0.175      0.127  0.234      0.098   0.126  0.227

Fig. 7. MAE (Mean Absolute Error) scores of different approaches (smaller is better). The best three results are highlighted with red, green, and blue fonts, respectively.

[3] D. Batra, A. Kowdle, D. Parikh, J. Luo, and T. Chen, "Interactively co-segmenting topically related images with intelligent scribble guidance," International Journal of Computer Vision, vol. 93, no. 3, pp. 273–292, 2011.
[4] S. Alpert, M. Galun, R. Basri, and A. Brandt, "Image segmentation by probabilistic bottom-up aggregation and cue integration," in CVPR, 2007.
[5] Q. Yan, L. Xu, J. Shi, and J. Jia, "Hierarchical saliency detection," in CVPR, 2013, pp. 1155–1162.
[6] C. Yang, L. Zhang, H. Lu, X. Ruan, and M.-H. Yang, "Saliency detection via graph-based manifold ranking," in CVPR, 2013.
[7] R. Achanta, S. S. Hemami, F. J. Estrada, and S. Süsstrunk, "Frequency-tuned salient region detection," in CVPR, 2009.
[8] V. Movahedi and J. H. Elder, "Design and perceptual validation of performance measures for salient object segmentation," in POCV, 2010.
[9] D. R. Martin, C. Fowlkes, and J. Malik, "Learning to detect natural image boundaries using local brightness, color, and texture cues," IEEE Trans. Pattern Anal. Mach. Intell., vol. 26, no. 5, pp. 530–549, 2004.
[10] Y. Wei, F. Wen, W. Zhu, and J. Sun, "Geodesic saliency using background priors," in ECCV (3), 2012, pp. 29–42.
[11] K.-Y. Chang, T.-L. Liu, H.-T. Chen, and S.-H. Lai, "Fusing generic objectness and visual saliency for salient object detection," in ICCV, 2011, pp. 914–921.
[12] S. Goferman, L. Zelnik-Manor, and A. Tal, "Context-aware saliency detection," in CVPR, 2010, pp. 2376–2383.
[13] H. Jiang, J. Wang, Z. Yuan, T. Liu, N. Zheng, and S. Li, "Automatic salient object segmentation based on context and shape prior," in BMVC, 2011.
[14] M.-M. Cheng, N. J. Mitra, X. Huang, P. H. S. Torr, and S.-M. Hu, "Global contrast based salient region detection," IEEE TPAMI, 2014.
[15] F. Perazzi, P. Krähenbühl, Y. Pritch, and A. Hornung, "Saliency filters: Contrast based filtering for salient region detection," in CVPR, 2012, pp. 733–740.
[16] X. Shen and Y. Wu, "A unified approach to salient object detection via low rank matrix recovery," in CVPR, 2012.
[17] Q. Yan, L. Xu, J. Shi, and J. Jia, "Hierarchical saliency detection," in CVPR, 2013.
[18] R. Margolin, A. Tal, and L. Zelnik-Manor, "What makes a patch distinct?" in CVPR, 2013.
[19] B. Jiang, L. Zhang, H. Lu, C. Yang, and M.-H. Yang, "Saliency detection via absorbing Markov chain," in ICCV, 2013.
[20] X. Li, H. Lu, L. Zhang, X. Ruan, and M.-H. Yang, "Saliency detection via dense and sparse reconstruction," in ICCV, 2013.
[21] W. Zhu, S. Liang, Y. Wei, and J. Sun, "Saliency optimization from robust background detection," in CVPR, 2014.

[Figs. 8–16 show, for each example image, the columns (a) input, (b) SVO, (c) CA, (d) CB, (e) RC, (f) SF, (g) LRK, (h) HS, (i) GMR, (j) PCA, (k) MC, (l) DSR, (m) RBD, (n) DRFIs, (o) DRFI.]

Fig. 8. Qualitative comparisons of different salient object detection approaches on the MSRA data set.

Fig. 9. Qualitative comparisons of different salient object detection approaches on the iCoSeg data set.

Fig. 10. Qualitative comparisons of different salient object detection approaches on the ECSSD data set.

Fig. 11. Qualitative comparisons of different salient object detection approaches on the DUT-OMRON data set.

Fig. 12. Qualitative comparisons of different salient object detection approaches on the SED2 data set.

Fig. 13. Qualitative comparisons of different salient object detection approaches on the DUT-OMRON data set.

Fig. 14. Qualitative comparisons of different salient object detection approaches on the MSRA1k data set.

Fig. 15. Qualitative comparisons of different salient object detection approaches on the SED1 data set.

Fig. 16. Qualitative comparisons of different salient object detection approaches on the SOD data set.

  • 1 Learning
    • 11 Learning a Similarity Score between Two Adjacent Superpixels
    • 12 Feature importance in a Random Forest
      • 2 Additional Evaluations
        • 21 DUT-OMRON Data Set
        • 22 Additional Benchmark Data Sets
        • 23 Additional Quantitative Comparisons
        • 24 Additional Qualitative Comparisons
          • References
Page 2: Supplementary Materials for ‘Salient Object Detection: A ...jianghz.me/drfi/files/SuppDRFI.pdfsalient object in each image. Pixel-wise groundtruth annotations for the salient objects

2

Fig 2 Average annotation map of the DUT-OMRONdata set

2 ADDITIONAL EVALUATIONS

Extensive evaluations have been performed onsix datasets including MSRA-B1 [2] iCoSeg2 [3]SED23 [4] ECSSD4 [5] DUT-OMRON5 [6] and DUT-OMRON data set

In this section we will provide additional quantita-tive and qualitative comparisons with state-of-the-artapproaches for more comprehensive evaluation

21 DUT-OMRON Data SetIn the paper we sample 635 images from the DUT-OMRON data set (we call it DUT-OMRON data set)where salient objects touch the image border andare far from the image center in order to check therobustness of our approach The average annotation ofsalient objects is shown in Fig 2 As we can see thereis a strong off-center bias Moreover it can be clearlyseen that salient objects touch the image border Thesetwo factors make DUT-OMRON data set challengingfor our approach which is dependent on the pseudo-background assumption and the geometric distribu-tions of salient objects discovered from training im-ages We also present some sample images in Fig 3The complete list of images in DUT-OMRON data setis available at our project webpage jianghzcomdrfi

22 Additional Benchmark Data SetsDue to limited space we only consider six benchmarkdata sets in the paper for evaluations In this sup-plementary material we will provide more results onother three data sets which are widely adopted forsalient object detection evaluation as well

MSRA1k6 This data set [7] containing 1000 im-ages sampled from MSRA-B [2] is the first large-

1 httpresearchmicrosoftcomen-usumpeoplejiansun2 httpchenlabececornelleduprojectstouch-coseg3 httpwwwwisdomweizmannacilsimvisionSeg Evaluation DB4 httpwwwcsecuhkeduhkleojiaprojectshsaliency5 httpicedluteducnluDUT-OMRONhomepagehtm6 httpivrgwwwepflchsupplementary materialRK CVPR09

Fig 3 Demonstration of sample images of the DUT-OMRON dataset

3

0 02 04 06 08 10

01

02

03

04

05

06

07

08

09

1

Recall

Pre

cis

ion

MSRA

SVO

CA

CB

RC

SF

LRK

HS

GMR

PCA

MC

DSR

RBD

DRFIs

DRFI

0 02 04 06 08 10

01

02

03

04

05

06

07

08

09

1

Recall

Pre

cis

ion

iCoSeg

SVO

CA

CB

RC

SF

LRK

HS

GMR

PCA

MC

DSR

RBD

DRFIs

DRFI

0 02 04 06 08 10

01

02

03

04

05

06

07

08

09

1

Recall

Pre

cis

ion

ECSSD

SVO

CA

CB

RC

SF

LRK

HS

GMR

PCA

MC

DSR

RBD

DRFIs

DRFI

0 02 04 06 08 10

01

02

03

04

05

06

07

08

09

1

Recall

Pre

cis

ion

DUTminusOMRON

SVO

CA

CB

RC

SF

LRK

HS

GMR

PCA

MC

DSR

RBD

DRFIs

DRFI

0 02 04 06 08 10

01

02

03

04

05

06

07

08

09

1

Recall

Pre

cis

ion

SED2

SVO

CA

CB

RC

SF

LRK

HS

GMR

PCA

MC

DSR

RBD

DRFIs

DRFI

0 02 04 06 08 10

01

02

03

04

05

Recall

Pre

cis

ion

DUTminusOMRON

SVO

CA

CB

RC

SF

LRK

HS

GMR

PCA

MC

DSR

RBD

DRFIs

DRFI

0 02 04 06 08 10

01

02

03

04

05

06

07

08

09

1

Recall

Pre

cis

ion

MSRA1k

SVO

CA

CB

RC

SF

LRK

HS

GMR

PCA

MC

DSR

RBD

DRFIs

DRFI

0 02 04 06 08 10

01

02

03

04

05

06

07

08

09

1

Recall

Pre

cis

ion

SED1

SVO

CA

CB

RC

SF

LRK

HS

GMR

PCA

MC

DSR

RBD

DRFIs

DRFI

0 02 04 06 08 10

01

02

03

04

05

06

07

08

09

1

Recall

Pre

cis

ion

SOD

SVO

CA

CB

RC

SF

LRK

HS

GMR

PCA

MC

DSR

RBD

DRFIs

DRFI

Fig 4 Quantitative comparisons of saliency maps produced by different approaches on different data sets interms of PR curves

scale data set for salient object detection with pixel-wise groundtruth annotation However performanceof recent approaches start to be saturated on thisbenchmark data set as the background and salientregions are relatively homogeneous and simple Sincesome images of MSRA1k are chosen as training dataof our approach We discard these images for testingphrase There are 396 images left

SED17 It has 100 images containing exactly onesalient object in each image Pixel-wise groundtruthannotations for the salient objects are provided Sim-ilar to MSRA1k it is a relatively simple data set withclean background

SOD8 This data set [8] is a collection of salient objectboundaries based on the Berkeley segmentation dataset [9] Seven subjects are asked to choose the salientobject(s) in 300 images We generate the pixel-wiseannotation of the salient objects as [10] This data setcontains many images with multiple objects making

7 httpwwwwisdomweizmannacilsimvisionSeg Evaluation DB8 httpelderlabyorkucaSOD

it challenging

23 Additional Quantitative ComparisonsIn the paper we provide the quantitative comparisonsof our approach (DRFIs and DRFI) on six benchmarkdata sets In this section we provide additional resultson MSRA1k SED1 and SOD data sets In additionto the PR curve ROC curve and AUC (Area UnderROC Curve) scores we also report the MAE (MeanAbsolute Error) scores of each approach MAE scoredirectly reflects the mean absolute difference of thegroundtruth annotation and a saliency map

Similar to the paper we compare our method with12 state-of-the-art approaches including SVO [11]CA [12] CB [13] RC [14] SF [15] LRK [16] HS [17]GMR [6] PCA [18] MC [19] DSR [20] and RBD [21]PR curves and ROC curves are plotted in Fig 4 andFig 5 respectively AUC scores and MAE scores arepresented in Fig 6 and Fig 7

As can be seen our approach DRFIs is slightly bet-ter than most of state-of-the-art methods on MSRA1kand SED1 according to either PR or ROC curves

4

0 02 04 06 08 10

01

02

03

04

05

06

07

08

09

1

False Positive Rate

Tru

e P

ositiv

e R

ate

MSRA

SVO

CA

CB

RC

SF

LRK

HS

GMR

PCA

MC

DSR

RBD

DRFIs

DRFI

0 02 04 06 08 10

01

02

03

04

05

06

07

08

09

1

False Positive Rate

Tru

e P

ositiv

e R

ate

iCoSeg

SVO

CA

CB

RC

SF

LRK

HS

GMR

PCA

MC

DSR

RBD

DRFIs

DRFI

0 02 04 06 08 10

01

02

03

04

05

06

07

08

09

1

False Positive Rate

Tru

e P

ositiv

e R

ate

ECSSD

SVO

CA

CB

RC

SF

LRK

HS

GMR

PCA

MC

DSR

RBD

DRFIs

DRFI

0 02 04 06 08 10

01

02

03

04

05

06

07

08

09

1

False Positive Rate

Tru

e P

ositiv

e R

ate

DUTminusOMRON

SVO

CA

CB

RC

SF

LRK

HS

GMR

PCA

MC

DSR

RBD

DRFIs

DRFI

0 02 04 06 08 10

01

02

03

04

05

06

07

08

09

1

False Positive Rate

Tru

e P

ositiv

e R

ate

SED2

SVO

CA

CB

RC

SF

LRK

HS

GMR

PCA

MC

DSR

RBD

DRFIs

DRFI

0 02 04 06 08 10

01

02

03

04

05

06

07

08

09

1

False Positive Rate

Tru

e P

ositiv

e R

ate

DUTminusOMRON

SVO

CA

CB

RC

SF

LRK

HS

GMR

PCA

MC

DSR

RBD

DRFIs

DRFI

0 02 04 06 08 10

01

02

03

04

05

06

07

08

09

1

False Positive Rate

Tru

e P

ositiv

e R

ate

MSRA1k

SVO

CA

CB

RC

SF

LRK

HS

GMR

PCA

MC

DSR

RBD

DRFIs

DRFI

0 02 04 06 08 10

01

02

03

04

05

06

07

08

09

1

False Positive Rate

Tru

e P

ositiv

e R

ate

SED1

SVO

CA

CB

RC

SF

LRK

HS

GMR

PCA

MC

DSR

RBD

DRFIs

DRFI

0 02 04 06 08 10

01

02

03

04

05

06

07

08

09

1

False Positive Rate

Tru

e P

ositiv

e R

ate

SOD

SVO

CA

CB

RC

SF

LRK

HS

GMR

PCA

MC

DSR

RBD

DRFIs

DRFI


[Fig. 4 plots omitted: precision-recall curves on the MSRA, iCoSeg, ECSSD, DUT-OMRON, SED2, DUT-OMRON, MSRA1k, SED1, and SOD data sets, each panel comparing SVO, CA, CB, RC, SF, LRK, HS, GMR, PCA, MC, DSR, RBD, DRFIs, and DRFI.]

Fig. 4. Quantitative comparisons of saliency maps produced by different approaches on different data sets in terms of PR curves.
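The PR curves summarized in Fig. 4 are obtained by binarizing each saliency map at a sweep of thresholds and comparing against the binary ground truth. A minimal sketch of this protocol (not the authors' evaluation code; the function name and the assumption that both maps are 2-D arrays normalized to [0, 1] are ours):

```python
import numpy as np

def pr_curve(saliency, groundtruth, num_thresholds=256):
    """Precision/recall pairs from binarizing a saliency map at a
    sweep of thresholds; both maps are 2-D arrays in [0, 1]."""
    gt = groundtruth > 0.5
    precision, recall = [], []
    for t in np.linspace(0.0, 1.0, num_thresholds):
        pred = saliency >= t
        tp = np.logical_and(pred, gt).sum()
        precision.append(tp / max(pred.sum(), 1))  # guard against 0/0 when nothing is selected
        recall.append(tp / max(gt.sum(), 1))
    return np.array(precision), np.array(recall)
```

Averaging these curves over all images of a data set yields one curve per method, as plotted in Fig. 4.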

scale data set for salient object detection with pixel-wise ground-truth annotation. However, the performance of recent approaches has started to saturate on this benchmark, as its backgrounds and salient regions are relatively homogeneous and simple. Since some images of MSRA1k are used as training data for our approach, we discard them in the testing phase, leaving 396 images.

SED1.^7 It has 100 images, each containing exactly one salient object. Pixel-wise ground-truth annotations of the salient objects are provided. Similar to MSRA1k, it is a relatively simple data set with clean backgrounds.

SOD.^8 This data set [8] is a collection of salient object boundaries based on the Berkeley segmentation data set [9]. Seven subjects were asked to choose the salient object(s) in 300 images. We generate pixel-wise annotations of the salient objects as in [10]. This data set contains many images with multiple objects, making it challenging.

7. http://www.wisdom.weizmann.ac.il/~vision/Seg_Evaluation_DB
8. http://elderlab.yorku.ca/SOD

2.3 Additional Quantitative Comparisons

In the paper, we provide quantitative comparisons of our approach (DRFIs and DRFI) on six benchmark data sets. In this section, we provide additional results on the MSRA1k, SED1, and SOD data sets. In addition to PR curves, ROC curves, and AUC (Area Under ROC Curve) scores, we also report the MAE (Mean Absolute Error) score of each approach, which directly reflects the mean absolute difference between the ground-truth annotation and a saliency map.
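The MAE score is simply the mean absolute difference between the (continuous) saliency map and the binary ground-truth mask. A minimal sketch (the function name and the [0, 1] normalization of both inputs are our assumptions):

```python
import numpy as np

def mae_score(saliency, groundtruth):
    """Mean absolute difference between a saliency map and the binary
    ground-truth mask, both given as 2-D arrays in [0, 1]."""
    return float(np.mean(np.abs(saliency.astype(np.float64)
                                - groundtruth.astype(np.float64))))
```

A perfect saliency map scores 0, and a map that is everywhere maximally wrong scores 1; unlike AUC, MAE penalizes saliency assigned to the background even when the ranking of pixels is correct.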

As in the paper, we compare our method with 12 state-of-the-art approaches, including SVO [11], CA [12], CB [13], RC [14], SF [15], LRK [16], HS [17], GMR [6], PCA [18], MC [19], DSR [20], and RBD [21]. PR curves and ROC curves are plotted in Fig. 4 and Fig. 5, respectively. AUC scores and MAE scores are presented in Fig. 6 and Fig. 7.

As can be seen, our approach DRFIs is slightly better than most state-of-the-art methods on MSRA1k and SED1 according to both the PR and ROC curves.

[Fig. 5 plots omitted: ROC curves (true positive rate vs. false positive rate) on the MSRA, iCoSeg, ECSSD, DUT-OMRON, SED2, DUT-OMRON, MSRA1k, SED1, and SOD data sets, each panel comparing SVO, CA, CB, RC, SF, LRK, HS, GMR, PCA, MC, DSR, RBD, DRFIs, and DRFI.]

Fig. 5. Quantitative comparison of saliency maps produced by different approaches on different data sets in terms of ROC curves.

With multi-level enhancement, our approach DRFI performs much better. On the challenging SOD data set, DRFIs and DRFI both significantly outperform the other methods. Regarding AUC scores, DRFI consistently performs the best on MSRA1k, SED1, and SOD, and DRFIs is ranked second best on SED1 and SOD.
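The AUC scores in Fig. 6 are the areas under ROC curves like those in Fig. 5, obtained by sweeping a binarization threshold and integrating the true positive rate over the false positive rate. A minimal sketch (not the authors' evaluation code; the function name and the [0, 1] normalization are our assumptions):

```python
import numpy as np

def roc_auc(saliency, groundtruth, num_thresholds=256):
    """Area under the ROC curve obtained by sweeping a binarization
    threshold over a saliency map (2-D arrays in [0, 1])."""
    gt = groundtruth > 0.5
    pos, neg = max(gt.sum(), 1), max((~gt).sum(), 1)
    tpr, fpr = [], []
    for t in np.linspace(1.0, 0.0, num_thresholds):  # high to low, so both rates increase
        pred = saliency >= t
        tpr.append(np.logical_and(pred, gt).sum() / pos)
        fpr.append(np.logical_and(pred, ~gt).sum() / neg)
    tpr, fpr = np.array(tpr), np.array(fpr)
    # trapezoidal integration of TPR over FPR
    return float(np.sum(np.diff(fpr) * (tpr[:-1] + tpr[1:]) / 2.0))
```

A map that ranks every salient pixel above every background pixel scores 1.0, and an uninformative constant map scores 0.5, which is why AUC rewards correct pixel ranking regardless of the absolute saliency values.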

Since our approach computes the saliency score of each region independently, the smoothness constraint between adjacent regions is ignored. Therefore, our approach does not perform as well in terms of MAE scores as it does in terms of AUC scores. Specifically, DRFI still performs the best on three benchmark data sets and the third best on SOD; DRFIs is ranked the best on two data sets, the second best on three data sets, and the third best on two data sets.

Please note that more sophisticated post-processing steps are utilized in the other top-performing approaches. For instance, DSR adopts Bayesian integration (in addition to the multi-level enhancement), and quadratic diffusion and manifold diffusion are adopted by RBD and GMR, respectively. As we stated in the paper regarding our future work, better MAE scores can be expected if more advanced post-processing is integrated.

2.4 Additional Qualitative Comparisons

In this supplementary material, we present more qualitative comparisons of the different approaches on all benchmark data sets. Saliency maps of randomly chosen images from each benchmark data set are shown in Fig. 8 through Fig. 16.

Generally, our approach produces more appealing saliency maps than the other approaches; see the second and sixth rows of Fig. 8. On the extremely challenging DUT-OMRON data set, our approach generates slightly better saliency maps than the others in most cases; see Fig. 13 for examples.

       MSRA-B  iCoSeg  ECSSD  DUT-OMRON  SED2   DUT-OMRON  MSRA1k  SED1   SOD
SVO    0.899   0.861   0.799  0.866      0.834  0.793      0.954   0.862  0.777
CA     0.860   0.837   0.738  0.815      0.854  0.760      0.888   0.861  0.782
CB     0.930   0.852   0.819  0.831      0.825  0.624      0.959   0.880  0.737
RC     0.937   0.880   0.833  0.859      0.840  0.679      0.976   0.872  0.821
SF     0.917   0.911   0.777  0.803      0.872  0.715      0.962   0.879  0.763
LRK    0.925   0.908   0.810  0.859      0.881  0.758      0.958   0.879  0.790
HS     0.930   0.882   0.829  0.860      0.820  0.735      0.971   0.910  0.811
GMR    0.942   0.902   0.834  0.853      0.831  0.646      0.976   0.919  0.795
PCA    0.938   0.895   0.817  0.887      0.903  0.776      0.975   0.905  0.822
MC     0.951   0.898   0.849  0.887      0.863  0.715      0.978   0.929  0.838
DSR    0.956   0.921   0.856  0.899      0.895  0.776      0.982   0.927  0.841
RBD    0.945   0.941   0.840  0.894      0.873  0.779      0.980   0.915  0.820
DRFIs  0.954   0.944   0.858  0.910      0.902  0.804      0.976   0.951  0.854
DRFI   0.971   0.968   0.875  0.931      0.933  0.822      0.988   0.972  0.883

Fig. 6. AUC (Area Under ROC Curve) scores of different approaches (larger is better). The best three results are highlighted with red, green, and blue fonts, respectively.

       MSRA-B  iCoSeg  ECSSD  DUT-OMRON  SED2   DUT-OMRON  MSRA1k  SED1   SOD
SVO    0.349   0.328   0.420  0.409      0.308  0.445      0.332   0.339  0.394
CA     0.247   0.259   0.343  0.254      0.225  0.268      0.225   0.255  0.312
CB     0.188   0.201   0.282  0.257      0.158  0.327      0.161   0.196  0.297
RC     0.138   0.170   0.235  0.188      0.153  0.269      0.101   0.160  0.242
SF     0.172   0.191   0.274  0.183      0.183  0.198      0.135   0.220  0.280
LRK    0.214   0.222   0.306  0.249      0.210  0.284      0.184   0.247  0.301
HS     0.161   0.181   0.269  0.227      0.185  0.299      0.102   0.157  0.283
GMR    0.125   0.163   0.237  0.189      0.165  0.270      0.074   0.138  0.261
PCA    0.188   0.209   0.291  0.206      0.194  0.231      0.153   0.225  0.274
MC     0.144   0.179   0.251  0.186      0.181  0.252      0.092   0.159  0.260
DSR    0.119   0.152   0.226  0.139      0.138  0.179      0.081   0.156  0.234
RBD    0.112   0.140   0.225  0.144      0.129  0.189      0.064   0.138  0.229
DRFIs  0.123   0.138   0.222  0.166      0.131  0.220      0.094   0.131  0.223
DRFI   0.130   0.138   0.232  0.175      0.127  0.234      0.098   0.126  0.227

Fig. 7. MAE (Mean Absolute Error) scores of different approaches (smaller is better). The best three results are highlighted with red, green, and blue fonts, respectively.

REFERENCES

[1] H. Jiang, J. Wang, Z. Yuan, Y. Wu, N. Zheng, and S. Li, "Salient object detection: A discriminative regional feature integration approach," in IEEE CVPR, 2013, pp. 2083–2090.

[2] T. Liu, Z. Yuan, J. Sun, J. Wang, N. Zheng, X. Tang, and H.-Y. Shum, "Learning to detect a salient object," IEEE TPAMI, vol. 33, no. 2, pp. 353–367, 2011.

[3] D. Batra, A. Kowdle, D. Parikh, J. Luo, and T. Chen, "Interactively co-segmentating topically related images with intelligent scribble guidance," International Journal of Computer Vision, vol. 93, no. 3, pp. 273–292, 2011.

[4] S. Alpert, M. Galun, R. Basri, and A. Brandt, "Image segmentation by probabilistic bottom-up aggregation and cue integration," in CVPR, 2007.

[5] Q. Yan, L. Xu, J. Shi, and J. Jia, "Hierarchical saliency detection," in CVPR, 2013, pp. 1155–1162.

[6] C. Yang, L. Zhang, H. Lu, X. Ruan, and M.-H. Yang, "Saliency detection via graph-based manifold ranking," in CVPR, 2013.

[7] R. Achanta, S. S. Hemami, F. J. Estrada, and S. Susstrunk, "Frequency-tuned salient region detection," in CVPR, 2009.

[8] V. Movahedi and J. H. Elder, "Design and perceptual validation of performance measures for salient object segmentation," in POCV, 2010.

[9] D. R. Martin, C. Fowlkes, and J. Malik, "Learning to detect natural image boundaries using local brightness, color, and texture cues," IEEE TPAMI, vol. 26, no. 5, pp. 530–549, 2004.

[10] Y. Wei, F. Wen, W. Zhu, and J. Sun, "Geodesic saliency using background priors," in ECCV (3), 2012, pp. 29–42.

[11] K.-Y. Chang, T.-L. Liu, H.-T. Chen, and S.-H. Lai, "Fusing generic objectness and visual saliency for salient object detection," in ICCV, 2011, pp. 914–921.

[12] S. Goferman, L. Zelnik-Manor, and A. Tal, "Context-aware saliency detection," in CVPR, 2010, pp. 2376–2383.

[13] H. Jiang, J. Wang, Z. Yuan, T. Liu, N. Zheng, and S. Li, "Automatic salient object segmentation based on context and shape prior," in BMVC, 2011.

[14] M.-M. Cheng, N. J. Mitra, X. Huang, P. H. S. Torr, and S.-M. Hu, "Global contrast based salient region detection," IEEE TPAMI, 2014.

[15] F. Perazzi, P. Krahenbuhl, Y. Pritch, and A. Hornung, "Saliency filters: Contrast based filtering for salient region detection," in CVPR, 2012, pp. 733–740.

[16] X. Shen and Y. Wu, "A unified approach to salient object detection via low rank matrix recovery," in CVPR, 2012.

[17] Q. Yan, L. Xu, J. Shi, and J. Jia, "Hierarchical saliency detection," in CVPR, 2013.

[18] R. Margolin, A. Tal, and L. Zelnik-Manor, "What makes a patch distinct?" in CVPR, 2013.

[19] B. Jiang, L. Zhang, H. Lu, C. Yang, and M.-H. Yang, "Saliency detection via absorbing Markov chain," in ICCV, 2013.

[20] X. Li, H. Lu, L. Zhang, X. Ruan, and M.-H. Yang, "Saliency detection via dense and sparse reconstruction," in ICCV, 2013.

[21] W. Zhu, S. Liang, Y. Wei, and J. Sun, "Saliency optimization from robust background detection," in CVPR, 2014.

[Figs. 8-16 omitted. In each figure, the panels are (a) input, (b) SVO, (c) CA, (d) CB, (e) RC, (f) SF, (g) LRK, (h) HS, (i) GMR, (j) PCA, (k) MC, (l) DSR, (m) RBD, (n) DRFIs, (o) DRFI.]

Fig. 8. Qualitative comparisons of different salient object detection approaches on the MSRA dataset.

Fig. 9. Qualitative comparisons of different salient object detection approaches on the iCoSeg dataset.

Fig. 10. Qualitative comparisons of different salient object detection approaches on the ECSSD dataset.

Fig. 11. Qualitative comparisons of different salient object detection approaches on the DUT-OMRON dataset.

Fig. 12. Qualitative comparisons of different salient object detection approaches on the SED2 dataset.

Fig. 13. Qualitative comparisons of different salient object detection approaches on the DUT-OMRON dataset.

Fig. 14. Qualitative comparisons of different salient object detection approaches on the MSRA1k dataset.

Fig. 15. Qualitative comparisons of different salient object detection approaches on the SED1 dataset.

Fig. 16. Qualitative comparisons of different salient object detection approaches on the SOD dataset.


4

0 02 04 06 08 10

01

02

03

04

05

06

07

08

09

1

False Positive Rate

Tru

e P

ositiv

e R

ate

MSRA

SVO

CA

CB

RC

SF

LRK

HS

GMR

PCA

MC

DSR

RBD

DRFIs

DRFI

0 02 04 06 08 10

01

02

03

04

05

06

07

08

09

1

False Positive Rate

Tru

e P

ositiv

e R

ate

iCoSeg

SVO

CA

CB

RC

SF

LRK

HS

GMR

PCA

MC

DSR

RBD

DRFIs

DRFI

0 02 04 06 08 10

01

02

03

04

05

06

07

08

09

1

False Positive Rate

Tru

e P

ositiv

e R

ate

ECSSD

SVO

CA

CB

RC

SF

LRK

HS

GMR

PCA

MC

DSR

RBD

DRFIs

DRFI

0 02 04 06 08 10

01

02

03

04

05

06

07

08

09

1

False Positive Rate

Tru

e P

ositiv

e R

ate

DUTminusOMRON

SVO

CA

CB

RC

SF

LRK

HS

GMR

PCA

MC

DSR

RBD

DRFIs

DRFI

0 02 04 06 08 10

01

02

03

04

05

06

07

08

09

1

False Positive Rate

Tru

e P

ositiv

e R

ate

SED2

SVO

CA

CB

RC

SF

LRK

HS

GMR

PCA

MC

DSR

RBD

DRFIs

DRFI

0 02 04 06 08 10

01

02

03

04

05

06

07

08

09

1

False Positive Rate

Tru

e P

ositiv

e R

ate

DUTminusOMRON

SVO

CA

CB

RC

SF

LRK

HS

GMR

PCA

MC

DSR

RBD

DRFIs

DRFI

0 02 04 06 08 10

01

02

03

04

05

06

07

08

09

1

False Positive Rate

Tru

e P

ositiv

e R

ate

MSRA1k

SVO

CA

CB

RC

SF

LRK

HS

GMR

PCA

MC

DSR

RBD

DRFIs

DRFI

0 02 04 06 08 10

01

02

03

04

05

06

07

08

09

1

False Positive Rate

Tru

e P

ositiv

e R

ate

SED1

SVO

CA

CB

RC

SF

LRK

HS

GMR

PCA

MC

DSR

RBD

DRFIs

DRFI

0 02 04 06 08 10

01

02

03

04

05

06

07

08

09

1

False Positive Rate

Tru

e P

ositiv

e R

ate

SOD

SVO

CA

CB

RC

SF

LRK

HS

GMR

PCA

MC

DSR

RBD

DRFIs

DRFI

Fig 5 Quantitative comparison of saliency maps produced by different approaches on different data sets interms of ROC curves

With multi-level enhancement our approach DRFIperforms much better On the challenging SOD dataset DRFIs and DRFI both significantly outperformother methods Regarding AUC scores DRFI per-forms consistently the best on MSRA1k SED1 andSOD and DRFIs is ranked as the second best on SED1and SOD

Since our approach computes the saliency scorefor each region independently the smoothness con-straint of adjacent regions is ignored Therefore ourapproach does not perform in terms of MAE scores aswell as in terms of AUC scores In specific DRFI stillperforms the best on three benchmark data sets andthe thrid best on SOD DRFIs is ranked as the best ontwo data sets the second best on three data sets andthe third best on two data sets

Please note more sophisticated post-processingsteps are utilized in other top-performing approachesFor instance DSR adopts Bayesian integration (in ad-dition to the multi-level enhancement) The quadraticand manifold diffusion methods are adopted by RBDand GMR respectively As we stated in the paper

about our future work better MAE scores can be ex-pected if more advanced post-processing is integrated

24 Additional Qualitative ComparisonsIn this supplementary material we present morequalitative comparisons of different approaches onall benchmark data sets Saliency maps of randomlychosen images of each benchmark data set is shownfrom Fig 8 to Fig 16

Generally our approach can produce more appeal-ing saliency maps than other approaches See thesecond and six rows of Fig 8 For the extremelychallenging DUT-OMRON data set our approachcan generate slightly better saliency maps than othersin most cases See Fig 13 for examples

REFERENCES[1] H Jiang J Wang Z Yuan Y Wu N Zheng and S Li ldquoSalient

object detection A discriminative regional feature integrationapproachrdquo in IEEE CVPR 2013 pp 2083ndash2090

[2] T Liu Z Yuan J Sun J Wang N Zheng X Tang and H-Y Shum ldquoLearning to detect a salient objectrdquo IEEE TPAMIvol 33 no 2 pp 353ndash367 2011

5

MSRA-B iCoSeg ECSSD DUT-OMRON SED2 DUT-OMRON MSRA1k SED1 SOD

SVO 0899 0861 0799 0866 0834 0793 0954 0862 0777CA 0860 0837 0738 0815 0854 0760 0888 0861 0782CB 0930 0852 0819 0831 0825 0624 0959 0880 0737RC 0937 0880 0833 0859 0840 0679 0976 0872 0821SF 0917 0911 0777 0803 0872 0715 0962 0879 0763LRK 0925 0908 0810 0859 0881 0758 0958 0879 0790HS 0930 0882 0829 0860 0820 0735 0971 0910 0811GMR 0942 0902 0834 0853 0831 0646 0976 0919 0795PCA 0938 0895 0817 0887 0903 0776 0975 0905 0822MC 0951 0898 0849 0887 0863 0715 0978 0929 0838DSR 0956 0921 0856 0899 0895 0776 0982 0927 0841RBD 0945 0941 0840 0894 0873 0779 0980 0915 0820DRFIs 0954 0944 0858 0910 0902 0804 0976 0951 0854DRFI 0971 0968 0875 0931 0933 0822 0988 0972 0883

Fig 6 AUC (Area Under ROC Curve) scores of different approaches (larger is better) The best three resultsare highlighted with red green and blue fonts respectively

MSRA-B iCoSeg ECSSD DUT-OMRON SED2 DUT-OMRON MSRA1k SED1 SOD

SVO 0349 0328 0420 0409 0308 0445 0332 0339 0394CA 0247 0259 0343 0254 0225 0268 0225 0255 0312CB 0188 0201 0282 0257 0158 0327 0161 0196 0297RC 0138 0170 0235 0188 0153 0269 0101 0160 0242SF 0172 0191 0274 0183 0183 0198 0135 0220 0280LRK 0214 0222 0306 0249 0210 0284 0184 0247 0301HS 0161 0181 0269 0227 0185 0299 0102 0157 0283GMR 0125 0163 0237 0189 0165 0270 0074 0138 0261PCA 0188 0209 0291 0206 0194 0231 0153 0225 0274MC 0144 0179 0251 0186 0181 0252 0092 0159 0260DSR 0119 0152 0226 0139 0138 0179 0081 0156 0234RBD 0112 0140 0225 0144 0129 0189 0064 0138 0229DRFIs 0123 0138 0222 0166 0131 0220 0094 0131 0223DRFI 0130 0138 0232 0175 0127 0234 0098 0126 0227

Fig 7 MAE (Mean Absolute Error) scores of different approaches (smaller is better) The best three results arehighlighted with red green and blue fonts respectively

[3] D Batra A Kowdle D Parikh J Luo and T Chen ldquoIn-teractively co-segmentating topically related images with in-telligent scribble guidancerdquo International Journal of ComputerVision vol 93 no 3 pp 273ndash292 2011

[4] S Alpert M Galun R Basri and A Brandt ldquoImage seg-mentation by probabilistic bottom-up aggregation and cueintegrationrdquo in CVPR 2007

[5] Q Yan L Xu J Shi and J Jia ldquoHierarchical saliency detec-tionrdquo in CVPR CVPR 2013 pp 1155ndash1162

[6] C Yang L Zhang H Lu X Ruan and M-H Yang ldquoSaliencydetection via graph-based manifold rankingrdquo in CVPR 2013

[7] R Achanta S S Hemami F J Estrada and S SusstrunkldquoFrequency-tuned salient region detectionrdquo in CVPR 2009

[8] V Movahedi and J H Elder ldquoDesign and perceptual valida-tion of performance measures for salient object segmentationrdquoin POCV 2010

[9] D R Martin C Fowlkes and J Malik ldquoLearning to detectnatural image boundaries using local brightness color andtexture cuesrdquo IEEE Trans Pattern Anal Mach Intell vol 26no 5 pp 530ndash549 2004

[10] Y Wei F Wen W Zhu and J Sun ldquoGeodesic saliency usingbackground priorsrdquo in ECCV (3) 2012 pp 29ndash42

[11] K-Y Chang T-L Liu H-T Chen and S-H Lai ldquoFusinggeneric objectness and visual saliency for salient object de-tectionrdquo in ICCV 2011 pp 914ndash921

[12] S Goferman L Zelnik-Manor and A Tal ldquoContext-awaresaliency detectionrdquo in CVPR 2010 pp 2376ndash2383

[13] H Jiang J Wang Z Yuan T Liu N Zheng and S LildquoAutomatic salient object segmentation based on context andshape priorrdquo in BMVC 2011

[14] M-M Cheng N J Mitra X Huang P H S Torr and S-M Hu ldquoGlobal contrast based salient region detectionrdquo IEEETPAMI 2014

[15] F Perazzi P Krahenbuhl Y Pritch and A Hornung ldquoSaliencyfilters Contrast based filtering for salient region detectionrdquo inCVPR 2012 pp 733ndash740

[16] X Shen and Y Wu ldquoA unified approach to salient objectdetection via low rank matrix recoveryrdquo in CVPR 2012

[17] Q Yan L Xu J Shi and J Jia ldquoHierarchical saliency detec-tionrdquo in CVPR 2013

[18] R Margolin A Tal and L Zelnik-Manor ldquoWhat makes apatch distinctrdquo in CVPR 2013

[19] B Jiang L Zhang H Lu C Yang and M-H Yang ldquoSaliencydetection via absorbing markov chainrdquo in ICCV 2013

[20] X Li H Lu L Zhang X Ruan and M-H Yang ldquoSaliencydetection via dense and sparse reconstructionrdquo in ICCV 2013

[21] W Zhu S Liang Y Wei and J Sun ldquoSaliency optimizationfrom robust background detectionrdquo in CVPR 2014


Method   MSRA-B   iCoSeg   ECSSD   DUT-OMRON   SED2    DUT-OMRON   MSRA1k   SED1    SOD

SVO      0.899    0.861    0.799   0.866       0.834   0.793       0.954    0.862   0.777
CA       0.860    0.837    0.738   0.815       0.854   0.760       0.888    0.861   0.782
CB       0.930    0.852    0.819   0.831       0.825   0.624       0.959    0.880   0.737
RC       0.937    0.880    0.833   0.859       0.840   0.679       0.976    0.872   0.821
SF       0.917    0.911    0.777   0.803       0.872   0.715       0.962    0.879   0.763
LRK      0.925    0.908    0.810   0.859       0.881   0.758       0.958    0.879   0.790
HS       0.930    0.882    0.829   0.860       0.820   0.735       0.971    0.910   0.811
GMR      0.942    0.902    0.834   0.853       0.831   0.646       0.976    0.919   0.795
PCA      0.938    0.895    0.817   0.887       0.903   0.776       0.975    0.905   0.822
MC       0.951    0.898    0.849   0.887       0.863   0.715       0.978    0.929   0.838
DSR      0.956    0.921    0.856   0.899       0.895   0.776       0.982    0.927   0.841
RBD      0.945    0.941    0.840   0.894       0.873   0.779       0.980    0.915   0.820
DRFIs    0.954    0.944    0.858   0.910       0.902   0.804       0.976    0.951   0.854
DRFI     0.971    0.968    0.875   0.931       0.933   0.822       0.988    0.972   0.883

Fig. 6. AUC (Area Under ROC Curve) scores of different approaches (larger is better). The best three results are highlighted with red, green, and blue fonts, respectively.
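As a reading aid for Fig. 6, the AUC of a saliency map is obtained by sweeping a threshold over the real-valued map, comparing each binarization against the pixel-wise ground-truth mask, and integrating the resulting ROC curve. The sketch below is a minimal illustration, not the benchmark's exact code; the function name, the 256-level sweep, and the NumPy formulation are our assumptions.

```python
import numpy as np

def roc_auc(saliency, gt, n_thresholds=256):
    """AUC for one image: threshold the saliency map (values in [0, 1])
    at n_thresholds levels and integrate TPR over FPR.
    The 256-level sweep is an illustrative choice."""
    s = saliency.ravel().astype(np.float64)
    g = gt.ravel().astype(bool)
    n_pos = max(int(g.sum()), 1)       # number of salient pixels
    n_neg = max(int((~g).sum()), 1)    # number of background pixels
    # Sweep the threshold from high to low so FPR and TPR grow monotonically.
    points = []
    for t in np.linspace(1.0, 0.0, n_thresholds):
        pred = s >= t
        tpr = (pred & g).sum() / n_pos
        fpr = (pred & ~g).sum() / n_neg
        points.append((fpr, tpr))
    # Trapezoidal integration of the ROC curve.
    auc = 0.0
    for (f0, t0), (f1, t1) in zip(points, points[1:]):
        auc += (f1 - f0) * (t1 + t0) / 2.0
    return auc
```

The dataset-level numbers in Fig. 6 then correspond to averaging this score over all images (up to the benchmark's exact averaging convention).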

Method   MSRA-B   iCoSeg   ECSSD   DUT-OMRON   SED2    DUT-OMRON   MSRA1k   SED1    SOD

SVO      0.349    0.328    0.420   0.409       0.308   0.445       0.332    0.339   0.394
CA       0.247    0.259    0.343   0.254       0.225   0.268       0.225    0.255   0.312
CB       0.188    0.201    0.282   0.257       0.158   0.327       0.161    0.196   0.297
RC       0.138    0.170    0.235   0.188       0.153   0.269       0.101    0.160   0.242
SF       0.172    0.191    0.274   0.183       0.183   0.198       0.135    0.220   0.280
LRK      0.214    0.222    0.306   0.249       0.210   0.284       0.184    0.247   0.301
HS       0.161    0.181    0.269   0.227       0.185   0.299       0.102    0.157   0.283
GMR      0.125    0.163    0.237   0.189       0.165   0.270       0.074    0.138   0.261
PCA      0.188    0.209    0.291   0.206       0.194   0.231       0.153    0.225   0.274
MC       0.144    0.179    0.251   0.186       0.181   0.252       0.092    0.159   0.260
DSR      0.119    0.152    0.226   0.139       0.138   0.179       0.081    0.156   0.234
RBD      0.112    0.140    0.225   0.144       0.129   0.189       0.064    0.138   0.229
DRFIs    0.123    0.138    0.222   0.166       0.131   0.220       0.094    0.131   0.223
DRFI     0.130    0.138    0.232   0.175       0.127   0.234       0.098    0.126   0.227

Fig. 7. MAE (Mean Absolute Error) scores of different approaches (smaller is better). The best three results are highlighted with red, green, and blue fonts, respectively.
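MAE, in contrast, needs no thresholding: it compares the continuous saliency map directly against the binary mask, pixel by pixel. A minimal sketch, assuming both maps are scaled to [0, 1] (the function name is ours):

```python
import numpy as np

def mean_absolute_error(saliency, gt):
    """MAE for one image: average per-pixel |S - G|, with the saliency
    map S and the binary ground truth G both scaled to [0, 1]."""
    s = saliency.astype(np.float64)
    g = gt.astype(np.float64)
    return float(np.abs(s - g).mean())
```

A perfect map scores 0; the per-dataset values in Fig. 7 correspond to averaging this score over all images in the dataset.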

[3] D. Batra, A. Kowdle, D. Parikh, J. Luo, and T. Chen, "Interactively co-segmentating topically related images with intelligent scribble guidance," International Journal of Computer Vision, vol. 93, no. 3, pp. 273–292, 2011.

[4] S. Alpert, M. Galun, R. Basri, and A. Brandt, "Image segmentation by probabilistic bottom-up aggregation and cue integration," in CVPR, 2007.

[5] Q. Yan, L. Xu, J. Shi, and J. Jia, "Hierarchical saliency detection," in CVPR, 2013, pp. 1155–1162.

[6] C. Yang, L. Zhang, H. Lu, X. Ruan, and M.-H. Yang, "Saliency detection via graph-based manifold ranking," in CVPR, 2013.

[7] R. Achanta, S. S. Hemami, F. J. Estrada, and S. Susstrunk, "Frequency-tuned salient region detection," in CVPR, 2009.

[8] V. Movahedi and J. H. Elder, "Design and perceptual validation of performance measures for salient object segmentation," in POCV, 2010.

[9] D. R. Martin, C. Fowlkes, and J. Malik, "Learning to detect natural image boundaries using local brightness, color, and texture cues," IEEE Trans. Pattern Anal. Mach. Intell., vol. 26, no. 5, pp. 530–549, 2004.

[10] Y. Wei, F. Wen, W. Zhu, and J. Sun, "Geodesic saliency using background priors," in ECCV (3), 2012, pp. 29–42.

[11] K.-Y. Chang, T.-L. Liu, H.-T. Chen, and S.-H. Lai, "Fusing generic objectness and visual saliency for salient object detection," in ICCV, 2011, pp. 914–921.

[12] S. Goferman, L. Zelnik-Manor, and A. Tal, "Context-aware saliency detection," in CVPR, 2010, pp. 2376–2383.

[13] H. Jiang, J. Wang, Z. Yuan, T. Liu, N. Zheng, and S. Li, "Automatic salient object segmentation based on context and shape prior," in BMVC, 2011.

[14] M.-M. Cheng, N. J. Mitra, X. Huang, P. H. S. Torr, and S.-M. Hu, "Global contrast based salient region detection," IEEE TPAMI, 2014.

[15] F. Perazzi, P. Krahenbuhl, Y. Pritch, and A. Hornung, "Saliency filters: Contrast based filtering for salient region detection," in CVPR, 2012, pp. 733–740.

[16] X. Shen and Y. Wu, "A unified approach to salient object detection via low rank matrix recovery," in CVPR, 2012.

[17] Q. Yan, L. Xu, J. Shi, and J. Jia, "Hierarchical saliency detection," in CVPR, 2013.

[18] R. Margolin, A. Tal, and L. Zelnik-Manor, "What makes a patch distinct?" in CVPR, 2013.

[19] B. Jiang, L. Zhang, H. Lu, C. Yang, and M.-H. Yang, "Saliency detection via absorbing Markov chain," in ICCV, 2013.

[20] X. Li, H. Lu, L. Zhang, X. Ruan, and M.-H. Yang, "Saliency detection via dense and sparse reconstruction," in ICCV, 2013.

[21] W. Zhu, S. Liang, Y. Wei, and J. Sun, "Saliency optimization from robust background detection," in CVPR, 2014.

Figures 8–16 below present the qualitative comparisons; all nine figures share the same column layout: (a) input, (b) SVO, (c) CA, (d) CB, (e) RC, (f) SF, (g) LRK, (h) HS, (i) GMR, (j) PCA, (k) MC, (l) DSR, (m) RBD, (n) DRFIs, (o) DRFI.

Fig. 8. Qualitative comparisons of different salient object detection approaches on the MSRA dataset.

Fig. 9. Qualitative comparisons of different salient object detection approaches on the iCoSeg dataset.

Fig. 10. Qualitative comparisons of different salient object detection approaches on the ECSSD dataset.

Fig. 11. Qualitative comparisons of different salient object detection approaches on the DUT-OMRON dataset.

Fig. 12. Qualitative comparisons of different salient object detection approaches on the SED2 dataset.

Fig. 13. Qualitative comparisons of different salient object detection approaches on the DUT-OMRON dataset.

Fig. 14. Qualitative comparisons of different salient object detection approaches on the MSRA1k dataset.

Fig. 15. Qualitative comparisons of different salient object detection approaches on the SED1 dataset.

Fig. 16. Qualitative comparisons of different salient object detection approaches on the SOD dataset.
