Summary: | Water body extraction is a typical task in the semantic segmentation of remote sensing images (RSIs). Deep convolutional neural networks (DCNNs) outperform traditional methods at mining visual features; however, owing to the inherently local mechanism of convolution, spatial details and abstract semantic representations at different levels are difficult to capture accurately at the same time, so the extraction results become suboptimal, especially in narrow areas and along boundaries. To address this problem, a multiscale successive attention fusion network, named MSAFNet, is proposed to efficiently aggregate multiscale features from two aspects. A successive attention fusion module (SAFM) is first devised to extract multiscale, fine-grained features of water bodies, while a joint attention module (JAM) is proposed to further mine salient semantic information by jointly modeling contextual dependencies. The multi-level features extracted by these modules are then aggregated by a feature fusion module (FFM) so that the edges of water bodies are mapped well, directly improving the segmentation of various water bodies. Extensive experiments were conducted on the Qinghai-Tibet Plateau Lake (QTPL) and the Land-cOVEr Domain Adaptive semantic segmentation (LoveDA) datasets. Numerically, MSAFNet achieved the highest accuracy on both datasets across all five metrics (Kappa, MIoU, FWIoU, F1, and OA), outperforming several mainstream methods. On the QTPL dataset, MSAFNet reached an F1 of 99.14% and an OA of 98.97%; although the LoveDA dataset is more challenging, MSAFNet retained the best performance, with an F1 of 97.69% and an OA of 95.87%. Additionally, visual inspection was consistent with the numerical evaluations.
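To make the described data flow concrete, below is a minimal PyTorch sketch of how the three named modules (SAFM, JAM, FFM) could compose into a segmentation head. The internal designs here (channel/spatial gates, coarse-to-fine fusion, concatenation-based aggregation) are illustrative assumptions inferred from the abstract, not the paper's actual implementation.

```python
# Hedged sketch of the MSAFNet data flow; module internals are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation-style channel gate (assumed building block)."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        w = self.fc(x.mean(dim=(2, 3)))        # global average pool -> (N, C)
        return x * w[:, :, None, None]         # reweight feature channels


class SpatialAttention(nn.Module):
    """Spatial gate over pooled per-pixel statistics (assumed building block)."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        stats = torch.cat([x.mean(1, keepdim=True), x.amax(1, keepdim=True)], 1)
        return x * torch.sigmoid(self.conv(stats))


class SAFM(nn.Module):
    """Successive attention fusion: merge multiscale features coarse to fine."""
    def __init__(self, channels):
        super().__init__()
        self.attn = ChannelAttention(channels)
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, feats):                  # feats ordered coarse -> fine
        fused = feats[0]
        for f in feats[1:]:
            fused = F.interpolate(fused, size=f.shape[2:], mode="bilinear",
                                  align_corners=False)
            fused = self.conv(self.attn(fused + f))
        return fused


class JAM(nn.Module):
    """Joint attention: model channel and spatial context together."""
    def __init__(self, channels):
        super().__init__()
        self.ca, self.sa = ChannelAttention(channels), SpatialAttention()

    def forward(self, x):
        return self.sa(self.ca(x))


class FFM(nn.Module):
    """Feature fusion: concatenate SAFM and JAM outputs, project, classify."""
    def __init__(self, channels, num_classes=2):
        super().__init__()
        self.proj = nn.Conv2d(2 * channels, channels, 1)
        self.head = nn.Conv2d(channels, num_classes, 1)

    def forward(self, a, b):
        b = F.interpolate(b, size=a.shape[2:], mode="bilinear",
                          align_corners=False)
        return self.head(self.proj(torch.cat([a, b], dim=1)))


# Toy forward pass: three pyramid levels at one channel width.
feats = [torch.randn(1, 64, s, s) for s in (16, 32, 64)]   # coarse -> fine
safm, jam, ffm = SAFM(64), JAM(64), FFM(64)
logits = ffm(safm(feats), jam(feats[0]))
print(logits.shape)                                        # (1, 2, 64, 64)
```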
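For reference, the five reported metrics follow standard segmentation definitions computable from a confusion matrix. The sketch below assumes a binary (water / non-water) setup with made-up counts; the paper's actual evaluation code is not shown in the abstract.

```python
# Standard metric formulas from a binary confusion matrix (illustrative data).
import numpy as np

# cm[i, j] = pixels of true class i predicted as class j
# (class 0 = non-water, class 1 = water); counts are made up.
cm = np.array([[900., 20.],
               [15., 865.]])

total = cm.sum()
tp = np.diag(cm)                       # per-class true positives
row = cm.sum(axis=1)                   # ground-truth pixels per class
col = cm.sum(axis=0)                   # predicted pixels per class

oa = tp.sum() / total                  # overall accuracy
pe = (row * col).sum() / total**2      # chance agreement
kappa = (oa - pe) / (1 - pe)           # Cohen's kappa

iou = tp / (row + col - tp)            # per-class intersection over union
miou = iou.mean()                      # mean IoU
fwiou = (row / total * iou).sum()      # frequency-weighted IoU

precision = tp[1] / col[1]             # water class
recall = tp[1] / row[1]
f1 = 2 * precision * recall / (precision + recall)

print(f"OA={oa:.4f} Kappa={kappa:.4f} MIoU={miou:.4f} "
      f"FWIoU={fwiou:.4f} F1={f1:.4f}")
```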