A Multi-task Model to Detect Saliency and Edge using Hybrid Cost Function


Bibliographic Details
Main Authors: Sajjad Dehghan, Mohammad Javad Fadaeieslam
Format: Article
Language: Persian (fas)
Published: Semnan University, 2022-12-01
Series: مجله مدل سازی در مهندسی (Journal of Modeling in Engineering)
Online Access: https://modelling.semnan.ac.ir/article_6978_212904b42f7ad2aaf466db15479729e5.pdf
Description
Summary: Salient object detection aims to identify and segment the prominent objects or regions in an image. Fully Convolutional Networks (FCNs) have shown their advantages in salient object detection; however, many previous works have focused on the accuracy of the salient region without paying attention to its edges. This paper focuses on the complementarity between edge information and salient object information, adding an edge recognition module that explicitly models edge cues in order to preserve salient object boundaries. The proposed network improves these two tasks simultaneously. The presence of objects at different scales in the related datasets is another challenge in this area: it calls for a cost function that can handle the imbalance between background and foreground pixels. We therefore use a hybrid cost function in the training phase that is insensitive to object scale, better manages spatial coherence, and uniformly highlights salient regions without additional parameters. A comparison of the quantitative and qualitative results of the proposed method with other state-of-the-art methods on six widely used salient object detection datasets shows that the proposed method performs well and can quickly identify salient regions. In particular, according to the quantitative results, our method achieves the best result on three widely used test datasets in terms of the F-measure and MAE criteria, demonstrating the proposed method's efficiency.
ISSN: 2008-4854; 2783-2538
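The abstract does not spell out the exact form of the hybrid cost function. A common way to obtain the scale-insensitive, imbalance-robust behavior it describes is to combine pixel-wise binary cross-entropy with a soft IoU term; the sketch below is a minimal NumPy illustration of that idea (the function name and the BCE+IoU combination are assumptions for illustration, not the authors' published formulation):

```python
import numpy as np

def hybrid_loss(pred, target, eps=1e-7):
    """Illustrative hybrid cost: pixel-wise BCE plus a soft IoU term.

    `pred` and `target` are saliency maps in [0, 1]. The IoU term is a
    ratio of overlap to union, so it does not grow with the number of
    foreground pixels -- this is the property that makes such a loss
    robust to object scale and to background/foreground imbalance.
    """
    pred = np.clip(pred, eps, 1.0 - eps)  # avoid log(0)
    # Pixel-wise binary cross-entropy (sensitive to per-pixel accuracy).
    bce = -np.mean(target * np.log(pred) + (1.0 - target) * np.log(1.0 - pred))
    # Soft IoU loss (region-level, scale-invariant).
    inter = np.sum(pred * target)
    union = np.sum(pred) + np.sum(target) - inter
    iou_loss = 1.0 - (inter + eps) / (union + eps)
    return bce + iou_loss
```

A perfect prediction drives both terms toward zero, while a map that is accurate on most background pixels but misses a small object is still penalized heavily by the IoU term, which is why such combinations are popular for imbalanced saliency datasets.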