Transcript
  • Slide 1/15

    Object co-segmentation across multiple images using saliency

    Abhijit Sharang, Mohd. Dawood

  • Slide 2/15

    Introduction

    Co-segmentation aims to segment common objects from a collection of images given by the user.

    Compared with traditional segmentation methods, co-segmentation can segment common objects more accurately by exploiting several related images.

    The task is less cumbersome and requires less supervision.

  • Slide 3/15

    Past work

    Most existing co-segmentation methods take foreground similarity into consideration.

    [Rother et al.,06], [Singh et al.,09] and [Hochbaum et al.,09] model co-segmentation as an optimisation problem with added constraints on foreground similarity.

    Spectral clustering and discriminative clustering were combined by [Joulin et al.,10] to perform co-segmentation.

    In order to segment common objects, [Vicente et al.,11] selected useful features from a total of 33 features through a random forest regressor.

    [Kim et al.,11] proposed a diffusion-based optimisation framework which used an anisotropic heat diffusion method to locate seed points of the common objects and employed the random walks segmentation method to segment them.

  • Slide 4/15

    Motivation for using saliency

    The existing methods are based on similarity of features.

    What if the background across the images also matches?

    Saliency becomes useful in such a case, as a low score is attached to the background.

    Moreover, the saliency of the common object can be boosted by taking

    into account the saliency of similar objects across the images.

  • Slide 5/15

    The objective and methodology

    Two-fold objective: improving the saliency map by using information from other images, and using the improved saliency map for further segmentation.

    The normal saliency map is constructed using the histogram-based contrast (HC) method discussed by [Cheng et al.,11].
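    As a rough illustration of the HC idea, here is a minimal sketch, assuming the image has already been quantised into a colour histogram; the function and argument names are ours, not from the slides or from [Cheng et al.,11]:

    ```python
    import numpy as np

    def hc_saliency(colour_freq, colour_dist):
        """Histogram-based contrast (HC) saliency sketch.

        The saliency of a quantised colour is its frequency-weighted distance
        to every other colour in the image histogram.

        colour_freq : (n,) relative frequency of each quantised colour
        colour_dist : (n, n) pairwise colour distances (e.g. in Lab space)
        """
        sal = colour_dist @ colour_freq        # S(c_k) = sum_j f_j * d(c_k, c_j)
        sal = (sal - sal.min()) / (sal.max() - sal.min() + 1e-12)
        return sal                             # normalised to [0, 1]
    ```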

    This map is used to construct the co-saliency map, where the co-saliency of a colour k in image i is defined as:

    \tilde{S}_{ik} = S_{ik} + S^{co}_{ik},

    where

    S^{co}_{ik} = \frac{1}{m} \sum_{j=1}^{m} \sum_{l=1}^{p_j} w\big(d(c_{ik}, c_{jl})\big)\, S_{jl},

    m = number of images except the current image,

    p_j = number of colours in the j-th image,

    d(c_{ik}, c_{jl}) = the distance between the colours (i,k) and (j,l),

    S_{jl} = the HC saliency value of the colour (j,l),

    w(\cdot) = a weighting function of the colour distance.
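    A minimal sketch of this co-saliency update follows, assuming a Gaussian weight on the colour distance; the slide does not spell out the weighting function, so `sigma` and the exponential form are assumptions, and all names are illustrative:

    ```python
    import numpy as np

    def co_saliency(hc_sal_i, colours_i, other_images, sigma=0.2):
        """Boost image i's HC colour saliency using similar colours elsewhere.

        hc_sal_i     : (n_i,) HC saliency of each quantised colour in image i
        colours_i    : (n_i, 3) quantised colours of image i (e.g. Lab)
        other_images : list of (hc_sal_j, colours_j) pairs for the m other images
        sigma        : assumed bandwidth of the distance-based weight
        """
        m = len(other_images)
        co_term = np.zeros_like(hc_sal_i)
        for hc_sal_j, colours_j in other_images:
            # d(c_ik, c_jl): pairwise distances between colours of the two images
            d = np.linalg.norm(colours_i[:, None, :] - colours_j[None, :, :], axis=2)
            # assumed weighting: similar colours contribute more
            w = np.exp(-(d ** 2) / (2 * sigma ** 2))
            # sum over l of w(d(c_ik, c_jl)) * S_jl
            co_term += w @ hc_sal_j
        co_term /= max(m, 1)
        return hc_sal_i + co_term  # S~_ik = S_ik + S^co_ik
    ```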

  • Slide 6/15

    Saliency comparison (figure): HC saliency vs. co-saliency

  • Slide 7/15

    Methodology (cont.)

    Having obtained the co-saliency map for the images, we divide the images

    into regions.

    We then compute the region-wise saliency using the saliency of the

    colours contained in the image, assigning higher weight to salient

    colours.

    Figure: the image divided into regions
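    A minimal sketch of this region-wise saliency step, assuming each pixel already carries the co-saliency of its quantised colour and that regions come from any off-the-shelf over-segmentation; the power weight `gamma` that emphasises salient colours is an assumption, since the slide does not give the weighting:

    ```python
    import numpy as np

    def region_saliency(pixel_sal, region_labels, gamma=2.0):
        """Weighted-average saliency of each region.

        pixel_sal     : (H, W) co-saliency of each pixel's quantised colour
        region_labels : (H, W) integer region id per pixel (any over-segmentation)
        gamma         : assumed exponent giving salient colours a higher weight
        """
        sal = pixel_sal.ravel()
        labels = region_labels.ravel()
        weights = sal ** gamma + 1e-12          # emphasise salient colours
        region_sal = {}
        for r in np.unique(labels):
            mask = labels == r
            region_sal[r] = np.average(sal[mask], weights=weights[mask])
        return region_sal
    ```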

  • Slide 8/15

    Methodology (cont.)

    The salient object in the image might be fragmented into several regions.

    We perform region merging based on the idea that regions having a similar ratio of salient pixels to normal pixels have a high probability of belonging to the same object.

    The saliency of the merged region thus obtained is high enough that we can greedily select it as the most salient region in the image.

    Figure: the segmented object
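    A sketch of the merging and greedy selection described above, assuming a pixel counts as "salient" above a fixed threshold and that regions whose salient-pixel ratios fall within a tolerance are merged; both thresholds, and the fact that spatial adjacency is ignored, are simplifications of ours:

    ```python
    import numpy as np

    def merge_and_pick(pixel_sal, region_labels, sal_thresh=0.5, ratio_tol=0.15):
        """Merge regions with similar salient-pixel ratios, then pick greedily.

        pixel_sal     : (H, W) per-pixel co-saliency, assumed in [0, 1]
        region_labels : (H, W) region ids from the previous step
        sal_thresh    : assumed threshold for calling a pixel salient
        ratio_tol     : assumed tolerance on the salient-pixel ratio
        Returns a boolean mask of the selected (most salient) merged region.
        """
        sal = pixel_sal.ravel()
        labels = region_labels.ravel()
        regions = np.unique(labels)

        # ratio of salient pixels to all pixels in each region
        ratio = {r: float(np.mean(sal[labels == r] > sal_thresh)) for r in regions}

        # merge regions whose ratios are close (sorted, greedy grouping)
        order = sorted(regions, key=lambda r: ratio[r])
        groups, current = [], [order[0]]
        for r in order[1:]:
            if ratio[r] - ratio[current[-1]] < ratio_tol:
                current.append(r)
            else:
                groups.append(current)
                current = [r]
        groups.append(current)

        # greedy selection: merged group with the highest mean saliency
        def mean_sal(members):
            return float(sal[np.isin(labels, members)].mean())
        best = max(groups, key=mean_sal)
        return np.isin(region_labels, np.array(best))
    ```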

  • Slide 9/15

    Subjective results

    Windmill (ICoseg dataset)

  • Slide 10/15

    Subjective results (cont.)

    Pyramids (ICoseg dataset)

  • Slide 11/15

    Subjective results (cont.)

    Statue of Liberty (ICoseg dataset)

  • Slide 12/15

    Subjective results (cont.)

    Kendo (ICoseg dataset)

  • Slide 13/15

    Objective results

  • Slide 14/15

    To-do list

    Improving the region merging algorithm

    Computing F-scores and precision-recall curves

    Testing the algorithm on more categories from the ICoseg and MSRC datasets

  • Slide 15/15

    References

    F. Meng, H. Li, G. Liu, and K. N. Ngan, "Object co-segmentation based on shortest path algorithm and saliency model," IEEE Transactions on Multimedia, vol. 14, no. 5, pp. 1429-1441, Oct. 2012.

    C. Rother, V. Kolmogorov, T. Minka, and A. Blake, "Co-segmentation of image pairs by histogram matching - incorporating a global constraint into MRFs," in Proc. IEEE Conf. Computer Vision and Pattern Recognition, Jun. 2006, pp. 993-1000.

    L. Mukherjee, V. Singh, and C. R. Dyer, "Half-integrality based algorithms for co-segmentation of images," in Proc. IEEE Conf. Computer Vision and Pattern Recognition, Jun. 2009, pp. 2028-2035.

    D. S. Hochbaum and V. Singh, "An efficient algorithm for co-segmentation," in Proc. Int. Conf. Computer Vision, Oct. 2009, pp. 269-276.

    A. Joulin, F. Bach, and J. Ponce, "Discriminative clustering for image co-segmentation," in Proc. IEEE Conf. Computer Vision and Pattern Recognition, Jun. 2010, pp. 1943-1950.

    G. Kim, E. P. Xing, L. Fei-Fei, and T. Kanade, "Distributed co-segmentation via submodular optimization on anisotropic diffusion," in Proc. Int. Conf. Computer Vision, Nov. 2011, pp. 169-176.

    S. Vicente, C. Rother, and V. Kolmogorov, "Object co-segmentation," in Proc. IEEE Conf. Computer Vision and Pattern Recognition, Jun. 2011, pp. 2217-2224.

    D. Batra, A. Kowdle, and D. Parikh, "iCoseg: Interactive co-segmentation with intelligent scribble guidance," in Proc. IEEE Conf. Computer Vision and Pattern Recognition, Jun. 2010, pp. 3169-3176.