Saliency Detection via Manifold Ranking on Multi-Layer Graph

Bibliographic Details
Main Authors: Suwei Wang, Yang Ning, Xuemei Li, Caiming Zhang
Format: Article
Language: English
Published: IEEE 2024-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/10375386/
Description
Summary: Saliency detection is an increasingly crucial task in computer vision. In previous graph-based saliency detection methods, superpixels are usually used as the primary processing units to improve computational efficiency. Nevertheless, most methods do not take into account the potential impact of errors in superpixel segmentation, which may result in incorrect saliency values. To address this issue, we propose a novel approach that leverages the diversity of superpixel algorithms and constructs a multi-layer graph. Specifically, we segment the input image into multiple superpixel sets using different superpixel algorithms. Through connections within and between these superpixel sets, errors caused by individual algorithms can be mitigated through collaborative solutions. In addition to spatial proximity, we also consider feature similarity when constructing the graph. Connecting superpixels that are similar in feature space forces them toward consistent saliency values, thus addressing the challenges posed by the scattered spatial distribution and the uneven internal appearance of salient objects. Additionally, we use a two-stage manifold ranking scheme to compute the saliency value of each superpixel, consisting of a background-based ranking followed by a foreground-based ranking. Finally, we employ a mean-field-based propagation method to refine the saliency map iteratively and achieve smoother results. To evaluate the performance of our approach, we compare our work with multiple advanced methods on four datasets, both quantitatively and qualitatively.
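The two-stage manifold ranking mentioned in the abstract is commonly computed with the closed-form solution f* = (D − αW)^(−1) y, where W is the graph affinity matrix, D its degree matrix, and y an indicator vector of seed nodes. The Python sketch below illustrates only that ranking step on a toy graph: the affinity values, α, number of nodes, and seed choices are illustrative assumptions, and the paper's multi-layer graph construction and mean-field refinement are not reproduced here.

```python
import numpy as np

def manifold_ranking(W, y, alpha=0.99):
    """Closed-form manifold ranking: f* = (D - alpha * W)^-1 y.

    W     : (n, n) symmetric affinity matrix over graph nodes (superpixels),
            e.g. exp(-||c_i - c_j||^2 / sigma^2) for connected pairs, 0 otherwise.
    y     : (n,) indicator vector marking the query (seed) nodes.
    alpha : trade-off between the smoothness and fitting terms.
    """
    D = np.diag(W.sum(axis=1))               # degree matrix
    return np.linalg.solve(D - alpha * W, y)


# Toy two-stage use: background-based ranking first, then foreground-based ranking.
rng = np.random.default_rng(0)
n = 6                                         # pretend the image has 6 superpixels
A = rng.random((n, n))
W = (A + A.T) / 2                             # symmetric toy affinities
np.fill_diagonal(W, 0)

# Stage 1: rank all nodes against assumed background seeds (e.g. border superpixels);
# high relevance to the background means low saliency.
y_bg = np.zeros(n)
y_bg[[0, 1]] = 1.0                            # nodes 0 and 1 assumed to touch the border
bg_rank = manifold_ranking(W, y_bg)
coarse_saliency = 1.0 - bg_rank / bg_rank.max()

# Stage 2: threshold the coarse map to pick foreground seeds, then rank against them.
y_fg = (coarse_saliency > coarse_saliency.mean()).astype(float)
saliency = manifold_ranking(W, y_fg)
print(saliency / saliency.max())              # final per-superpixel saliency values
```

Solving one small linear system per query stage stays cheap because the number of superpixels is far smaller than the number of pixels, which is the usual motivation for superpixel-level graphs noted in the abstract.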
ISSN: 2169-3536