DPANet: Dual Pooling‐aggregated Attention Network for fish segmentation


Bibliographic Details
Main Authors: Wenbo Zhang, Chaoyi Wu, Zhenshan Bao
Format: Article
Language: English
Published: Wiley, 2022-02-01
Series: IET Computer Vision
Subjects: learning (artificial intelligence); image segmentation; aquaculture
Online Access: https://doi.org/10.1049/cvi2.12065
author Wenbo Zhang
Chaoyi Wu
Zhenshan Bao
collection DOAJ
description Abstract The sustainable development of marine fisheries depends on accurate measurement of fish stock data. Semantic segmentation methods based on deep learning can automatically produce segmentation masks of fish in images, from which measurement data can be derived. However, general semantic segmentation methods cannot accurately segment fish in underwater images. In this study, the authors propose a Dual Pooling‐aggregated Attention Network (DPANet) that adaptively captures long‐range dependencies in an efficient, computation‐friendly manner to enhance feature representation and improve segmentation performance. Specifically, a novel pooling‐aggregate position attention module and a pooling‐aggregate channel attention module are designed to aggregate contexts along the spatial and channel dimensions, respectively. To reduce computational cost, these modules pool along the channel dimension and the spatial dimension, respectively, when aggregating information. In each module, attention maps are generated by four different paths and aggregated into one. The authors conduct extensive experiments to validate the effectiveness of DPANet and achieve new state‐of‐the‐art segmentation performance on the well‐known fish image dataset DeepFish as well as on the underwater image dataset SUIM, achieving Mean IoU scores of 91.08% and 85.39%, respectively, while reducing the FLOPs of the attention modules by about 93%.
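The abstract's core efficiency idea, pooling one operand of the attention computation so the affinity matrix shrinks, can be illustrated with a minimal sketch. This is not the authors' exact module (DPANet uses four aggregated paths and paired position/channel modules); the function name, pooling window, and single-path design below are illustrative assumptions, shown only to make the FLOP-saving mechanism concrete: pooling keys/values from N = H×W positions down to S = N/pool² positions turns an N×N affinity into N×S.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def pooled_position_attention(feat, pool=4):
    """Illustrative position attention with spatially pooled keys/values.

    feat: (C, H, W) feature map. Full self-attention builds an
    (H*W x H*W) affinity matrix; average-pooling the keys/values over
    pool x pool windows shrinks it to (H*W x H*W/pool**2), which is
    where the computational savings come from.
    """
    C, H, W = feat.shape
    q = feat.reshape(C, H * W).T                     # (N, C): one query per pixel
    # average-pool keys/values over pool x pool spatial windows
    pooled = feat.reshape(C, H // pool, pool, W // pool, pool).mean(axis=(2, 4))
    kv = pooled.reshape(C, -1).T                     # (S, C), S = N / pool**2
    attn = softmax(q @ kv.T / np.sqrt(C))            # (N, S) affinity, rows sum to 1
    out = (attn @ kv).T.reshape(C, H, W)             # aggregate pooled context
    return feat + out                                # residual connection
```

With `pool=4`, the affinity has 16× fewer entries than full self-attention over the same map, loosely mirroring the order-of-magnitude FLOP reduction the paper reports for its attention modules.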
format Article
id doaj.art-5cb3468a537740afae065b1289c24db9
institution Directory Open Access Journal
issn 1751-9632
1751-9640
language English
publishDate 2022-02-01
publisher Wiley
record_format Article
series IET Computer Vision
spelling doaj.art-5cb3468a537740afae065b1289c24db9
Published in IET Computer Vision, vol. 16, no. 1, pp. 67‐82, 2022-02-01 (Wiley). ISSN 1751-9632, eISSN 1751-9640. doi:10.1049/cvi2.12065
Author affiliations: Wenbo Zhang, Chaoyi Wu, and Zhenshan Bao, The Faculty of Information Technology, Beijing University of Technology, Beijing, China
title DPANet: Dual Pooling‐aggregated Attention Network for fish segmentation
topic learning (artificial intelligence)
image segmentation
aquaculture
url https://doi.org/10.1049/cvi2.12065