Multi-Color Space Network for Salient Object Detection
The salient object detection (SOD) technology predicts which object will attract the attention of an observer surveying a particular scene. Most state-of-the-art SOD methods are top-down mechanisms that apply fully convolutional networks (FCNs) of various structures to RGB images, extract features from them, and train a network.
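The abstract's first step, converting the RGB input into HSV and grayscale saliency cues, can be sketched in plain NumPy. This is an illustrative stand-in only; the paper does not specify its conversion routine, and a standard library call (e.g. OpenCV's `cvtColor`) would normally be used. The function name `rgb_to_cues` is hypothetical.

```python
import numpy as np

def rgb_to_cues(rgb):
    """Convert an RGB image (H, W, 3, floats in [0, 1]) into the extra
    saliency cues described in the abstract: an HSV image and a grayscale
    map. Pure-NumPy sketch, not the authors' implementation."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    v = rgb.max(axis=-1)                                 # value = max channel
    c = v - rgb.min(axis=-1)                             # chroma
    s = np.where(v > 0, c / np.maximum(v, 1e-12), 0.0)   # saturation
    # Hue: piecewise by which channel attains the maximum.
    safe_c = np.maximum(c, 1e-12)
    h = np.zeros_like(v)
    h = np.where(v == r, ((g - b) / safe_c) % 6, h)
    h = np.where(v == g, (b - r) / safe_c + 2, h)
    h = np.where(v == b, (r - g) / safe_c + 4, h)
    h = np.where(c == 0, 0.0, h / 6.0)                   # normalize to [0, 1)
    hsv = np.stack([h, s, v], axis=-1)
    gray = 0.299 * r + 0.587 * g + 0.114 * b             # ITU-R BT.601 luma
    return hsv, gray
```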
Main Authors: | Kyungjun Lee, Jechang Jeong |
---|---|
Format: | Article |
Language: | English |
Published: | MDPI AG, 2022-05-01 |
Series: | Sensors |
Subjects: | salient object detection; multi-color space learning; fully convolutional network; atrous spatial pyramid pooling module; attention module |
Online Access: | https://www.mdpi.com/1424-8220/22/9/3588 |
_version_ | 1797502735469248512 |
---|---|
author | Kyungjun Lee; Jechang Jeong |
author_facet | Kyungjun Lee; Jechang Jeong |
author_sort | Kyungjun Lee |
collection | DOAJ |
description | The salient object detection (SOD) technology predicts which object will attract the attention of an observer surveying a particular scene. Most state-of-the-art SOD methods are top-down mechanisms that apply fully convolutional networks (FCNs) of various structures to RGB images, extract features from them, and train a network. However, owing to the variety of factors that affect visual saliency, securing sufficient features from a single color space is difficult. Therefore, in this paper, we propose a multi-color space network (MCSNet) to detect salient objects using various saliency cues. First, the images were converted to HSV and grayscale color spaces to obtain saliency cues other than those provided by RGB color information. Each saliency cue was fed into two parallel VGG backbone networks to extract features. Contextual information was obtained from the extracted features using atrous spatial pyramid pooling (ASPP). The features obtained from both paths were passed through the attention module, and channel and spatial features were highlighted. Finally, the final saliency map was generated using a step-by-step residual refinement module (RRM). Furthermore, the network was trained with a bidirectional loss to supervise saliency detection results. Experiments on five public benchmark datasets showed that our proposed network achieved superior performance in terms of both subjective results and objective metrics. |
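The attention step the description mentions ("the features obtained from both paths were passed through the attention module, and channel and spatial features were highlighted") can be illustrated with a minimal, parameter-free NumPy sketch. The real MCSNet attention module is learned; here the weights come from simple pooling plus a sigmoid, purely to show the channel-then-spatial data flow. The function name `channel_spatial_attention` is hypothetical.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_spatial_attention(feat):
    """Sketch of channel-then-spatial attention over a feature map
    feat of shape (C, H, W). Illustrative only, not the paper's module."""
    # Channel attention: squeeze spatial dims, re-weight each channel.
    ch_w = sigmoid(feat.mean(axis=(1, 2)))   # (C,)
    feat = feat * ch_w[:, None, None]
    # Spatial attention: squeeze channels, re-weight each location.
    sp_w = sigmoid(feat.mean(axis=0))        # (H, W)
    return feat * sp_w[None, :, :]
```

Both stages preserve the feature-map shape, so the module can be dropped between any two layers of the two backbone paths.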
first_indexed | 2024-03-10T03:40:21Z |
format | Article |
id | doaj.art-b87747b91d624a788812693ba127a0bb |
institution | Directory Open Access Journal |
issn | 1424-8220 |
language | English |
last_indexed | 2024-03-10T03:40:21Z |
publishDate | 2022-05-01 |
publisher | MDPI AG |
record_format | Article |
series | Sensors |
spelling | doaj.art-b87747b91d624a788812693ba127a0bb; 2023-11-23T09:20:40Z; eng; MDPI AG; Sensors; 1424-8220; 2022-05-01; Vol. 22, Iss. 9, 3588; doi:10.3390/s22093588; Multi-Color Space Network for Salient Object Detection; Kyungjun Lee, Jechang Jeong (Department of Electronics and Computer Engineering, Hanyang University, Seoul 04763, Korea); https://www.mdpi.com/1424-8220/22/9/3588; salient object detection; multi-color space learning; fully convolutional network; atrous spatial pyramid pooling module; attention module |
spellingShingle | Kyungjun Lee; Jechang Jeong; Multi-Color Space Network for Salient Object Detection; Sensors; salient object detection; multi-color space learning; fully convolutional network; atrous spatial pyramid pooling module; attention module |
title | Multi-Color Space Network for Salient Object Detection |
title_full | Multi-Color Space Network for Salient Object Detection |
title_fullStr | Multi-Color Space Network for Salient Object Detection |
title_full_unstemmed | Multi-Color Space Network for Salient Object Detection |
title_short | Multi-Color Space Network for Salient Object Detection |
title_sort | multi color space network for salient object detection |
topic | salient object detection; multi-color space learning; fully convolutional network; atrous spatial pyramid pooling module; attention module |
url | https://www.mdpi.com/1424-8220/22/9/3588 |
work_keys_str_mv | AT kyungjunlee multicolorspacenetworkforsalientobjectdetection AT jechangjeong multicolorspacenetworkforsalientobjectdetection |