Small-Scale Ship Detection for SAR Remote Sensing Images Based on Coordinate-Aware Mixed Attention and Spatial Semantic Joint Context

Bibliographic Details
Main Authors: Zhengjie Jiang, Yupei Wang, Xiaoqi Zhou, Liang Chen, Yuan Chang, Dongsheng Song, Hao Shi
Format: Article
Language: English
Published: MDPI AG 2023-06-01
Series: Smart Cities
ISSN: 2624-6511
Online Access: https://www.mdpi.com/2624-6511/6/3/76
Description
Summary: With the rapid development of deep learning in recent years, convolutional neural networks have made remarkable progress in SAR ship detection. However, background noise interference and the weak appearance features of small-scale objects still pose challenges. To tackle these issues, we propose a small-ship detection algorithm for SAR images based on a coordinate-aware mixed attention mechanism and a spatial semantic joint context method. First, the coordinate-aware mixed attention mechanism combines coordinate-aware channel attention with spatial attention to achieve coordinate alignment of the mixed attention features. Attention with this finer spatial granularity strengthens the network's focus on small-scale objects and thereby suppresses background clutter more accurately. Second, the spatial semantic joint context method exploits local and global environmental information jointly: the detailed spatial cues carried by the multi-scale local context and the generalized semantic information encoded in the global context together enhance the feature expression and distinctiveness of small-scale ships. Extensive experiments on the LS-SSDD-v1.0 and HRSID datasets yield average precisions of 77.23% and 90.85%, respectively, demonstrating the effectiveness of the proposed methods.
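
The record describes the attention module only at this high level; no implementation is given. As a rough, non-authoritative sketch, the PyTorch code below shows one way a coordinate-aware mixed attention block could be assembled, assuming a coordinate-attention-style channel branch (pooled separately along height and width) combined with a CBAM-style spatial branch. All module names, the reduction ratio, and the 7x7 spatial kernel are illustrative assumptions, not details from the paper.

import torch
import torch.nn as nn

class CoordinateAwareMixedAttention(nn.Module):
    """Sketch (assumed design, not the authors' code): channel attention that
    keeps positional information along each axis, fused with a CBAM-style
    spatial map. Applying all maps to the same tensor keeps them aligned."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        mid = max(8, channels // reduction)
        # Shared transform for the concatenated H- and W-pooled descriptors.
        self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)
        self.bn = nn.BatchNorm2d(mid)
        self.act = nn.ReLU(inplace=True)
        # Direction-specific projections back to the channel dimension.
        self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)
        # Spatial attention over channel-wise max/mean statistics.
        self.conv_spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, h, w = x.shape
        # Coordinate-aware channel attention: pool along W and along H.
        x_h = x.mean(dim=3, keepdim=True)                       # (n, c, h, 1)
        x_w = x.mean(dim=2, keepdim=True).permute(0, 1, 3, 2)   # (n, c, w, 1)
        y = self.act(self.bn(self.conv1(torch.cat([x_h, x_w], dim=2))))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.conv_h(y_h))                   # (n, c, h, 1)
        a_w = torch.sigmoid(self.conv_w(y_w.permute(0, 1, 3, 2)))  # (n, c, 1, w)
        # Spatial attention from channel-wise statistics.
        s = torch.cat([x.max(dim=1, keepdim=True).values,
                       x.mean(dim=1, keepdim=True)], dim=1)
        a_s = torch.sigmoid(self.conv_spatial(s))               # (n, 1, h, w)
        # Mixed attention: channel and spatial maps modulate the same coordinates.
        return x * a_h * a_w * a_s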
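The spatial semantic joint context method is likewise characterized only functionally: multi-scale local cues plus a global semantic descriptor. The sketch below illustrates that general pattern under stated assumptions, using parallel dilated convolutions for the multi-scale local branch and global average pooling for the global branch; the dilation rates and the residual fusion are hypothetical choices rather than the paper's design.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialSemanticJointContext(nn.Module):
    """Sketch (assumed design): local branch of parallel dilated 3x3
    convolutions gathers spatial cues at several ranges; global branch
    encodes image-level semantics and broadcasts them to every location.
    The branches are fused and added back to the input as a residual."""

    def __init__(self, channels: int, dilations=(1, 2, 4)):
        super().__init__()
        self.local_branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=d, dilation=d),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        )
        self.global_proj = nn.Conv2d(channels, channels, kernel_size=1)
        self.fuse = nn.Conv2d(channels * (len(dilations) + 1), channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h, w = x.shape[2:]
        # Multi-scale local context: detailed spatial cues at several ranges.
        local = [branch(x) for branch in self.local_branches]
        # Global semantic context, broadcast back to full resolution.
        g = self.global_proj(F.adaptive_avg_pool2d(x, 1)).expand(-1, -1, h, w)
        # Joint fusion, applied as a residual so original detail is kept.
        return x + self.fuse(torch.cat(local + [g], dim=1))

Both sketches preserve the input tensor's shape, so in a detector they could be inserted into a backbone or FPN stage without changing downstream layers; that insertion point is also an assumption, as the record does not specify where the modules sit in the network.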