Uncertainty-Guided Segmentation Network for Geospatial Object Segmentation

Bibliographic Details
Main Authors: Hongyu Jia, Wenwu Yang, Lin Wang, Haolin Li
Format: Article
Language: English
Published: IEEE 2024-01-01
Series: IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing
Online Access: https://ieeexplore.ieee.org/document/10418974/
Description
Summary: Geospatial objects pose significant challenges, including dense distribution, substantial interclass variations, and minimal intraclass variations. These complexities make precise foreground object segmentation in high-resolution remote sensing images difficult. Current segmentation approaches often rely on the standard encoder–decoder architecture to extract object-related information but overlook the inherent uncertainty that arises during decoding. In this article, we enhance segmentation by introducing an uncertainty-guided decoding mechanism and propose the uncertainty-guided segmentation network (UGSNet). Specifically, building upon the conventional encoder–decoder architecture, we first employ the pyramid vision transformer to extract multilevel features that capture extensive long-range information. We then introduce an uncertainty-guided decoding mechanism, addressing both epistemic and aleatoric uncertainties, to progressively refine the segmentation with higher certainty at each level. With this mechanism, UGSNet achieves accurate geospatial object segmentation. To validate its effectiveness, we conduct extensive experiments on the large-scale iSAID dataset, and the results demonstrate the superiority of our method over other state-of-the-art segmentation methods.
ISSN: 2151-1535
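
The record does not include the paper's implementation, so the following is only a minimal PyTorch sketch of what one level of an uncertainty-guided decoder could look like, assuming MC-dropout sampling for epistemic uncertainty and a learned log-variance head for aleatoric uncertainty. All names here (UncertaintyGuidedDecoderBlock, mc_samples, seg_head, logvar_head) are illustrative assumptions, not taken from UGSNet itself.

    # Hypothetical sketch: one decoder level that estimates epistemic uncertainty
    # via MC dropout and aleatoric uncertainty via a log-variance head, then uses
    # the combined uncertainty to re-weight features before the next level.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class UncertaintyGuidedDecoderBlock(nn.Module):
        def __init__(self, in_ch: int, skip_ch: int, num_classes: int, mc_samples: int = 4):
            super().__init__()
            self.mc_samples = mc_samples
            self.fuse = nn.Sequential(
                nn.Conv2d(in_ch + skip_ch, in_ch, 3, padding=1),
                nn.BatchNorm2d(in_ch),
                nn.ReLU(inplace=True),
                # For MC sampling at test time this dropout layer must be kept in
                # train mode; otherwise it is only active during training.
                nn.Dropout2d(0.1),
            )
            self.seg_head = nn.Conv2d(in_ch, num_classes, 1)  # class logits
            self.logvar_head = nn.Conv2d(in_ch, 1, 1)         # aleatoric log-variance

        def forward(self, x, skip):
            # Upsample deeper features to the spatial size of the skip connection.
            x = F.interpolate(x, size=skip.shape[-2:], mode="bilinear", align_corners=False)

            # Epistemic uncertainty: variance of softmax outputs over MC-dropout samples.
            probs = []
            for _ in range(self.mc_samples):
                feat = self.fuse(torch.cat([x, skip], dim=1))
                probs.append(F.softmax(self.seg_head(feat), dim=1))
            probs = torch.stack(probs, dim=0)                       # (S, B, C, H, W)
            epistemic = probs.var(dim=0).mean(dim=1, keepdim=True)  # (B, 1, H, W)

            # Aleatoric uncertainty: predicted per-pixel variance (from the last sample).
            aleatoric = self.logvar_head(feat).exp()                # (B, 1, H, W)

            # Certainty weight in (0, 1]: low combined uncertainty -> weight near 1.
            certainty = torch.exp(-(epistemic + aleatoric))

            # Emphasize confident regions of the fused features for the next level.
            refined = feat * certainty + feat
            logits = self.seg_head(refined)
            return refined, logits, epistemic, aleatoric

In this sketch the combined uncertainty map is turned into a per-pixel certainty weight that amplifies confident regions of the fused features before they are passed to the next, shallower decoder level; the refinement rule actually used in the paper may differ.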