Beyond Pixel-Wise Unmixing: Spatial–Spectral Attention Fully Convolutional Networks for Abundance Estimation

Bibliographic Details
Main Authors: Jiaxiang Huang, Puzhao Zhang
Format: Article
Language: English
Published: MDPI AG 2023-12-01
Series: Remote Sensing
Online Access: https://www.mdpi.com/2072-4292/15/24/5694
Description
Summary: Spectral unmixing poses a significant challenge within hyperspectral image processing, traditionally addressed by supervised convolutional neural network (CNN)-based approaches employing patch-to-pixel (pixel-wise) methods. However, such pixel-wise methodologies often necessitate image splitting into overlapping patches, resulting in redundant computations and potential information leakage between training and test samples, consequently yielding overoptimistic outcomes. To overcome these challenges, this paper introduces a novel patch-to-patch (patch-wise) framework with nonoverlapping splitting, mitigating the need for repetitive calculations and preventing information leakage. The proposed framework incorporates a novel neural network structure inspired by the fully convolutional network (FCN), tailored for patch-wise unmixing. A highly efficient band reduction layer is incorporated to reduce the spectral dimension, and a specialized abundance constraint module is crafted to enforce both the Abundance Nonnegativity Constraint and the Abundance Sum-to-One Constraint for unmixing tasks. Furthermore, to enhance the performance of abundance estimation, a spatial–spectral attention module is introduced to activate the most informative spatial areas and feature maps. Extensive quantitative experiments and visual assessments conducted on two synthetic datasets and three real datasets substantiate the superior performance of the proposed algorithm. Significantly, the method achieves an impressive RMSE loss of 0.007, which is at least 4.5 times lower than that of other baselines on Urban hyperspectral images. This outcome demonstrates the effectiveness of our approach in addressing the challenges of spectral unmixing.
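
The abundance constraint module described in the summary must produce maps that satisfy both the Abundance Nonnegativity Constraint (ANC) and the Abundance Sum-to-One Constraint (ASC) at every pixel of a patch. One common way to meet both at once in a patch-wise FCN is a softmax over the endmember channel; the sketch below is a minimal illustration of that idea in PyTorch, assuming such a design, and is not the authors' implementation. All class, variable, and shape names here are illustrative assumptions.

    # A minimal sketch, assuming a PyTorch implementation (not the authors' code):
    # a channel-wise softmax enforces both the Abundance Nonnegativity Constraint
    # and the Abundance Sum-to-One Constraint at every pixel of a patch.
    import torch
    import torch.nn as nn

    class AbundanceConstraint(nn.Module):
        """Maps unconstrained per-pixel scores to valid abundance maps."""

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (batch, num_endmembers, H, W) unconstrained scores
            # softmax over the endmember axis -> nonnegative values summing to 1
            return torch.softmax(x, dim=1)

    # Usage on a dummy 16x16 nonoverlapping patch with 4 endmembers.
    scores = torch.randn(1, 4, 16, 16)
    abundances = AbundanceConstraint()(scores)
    assert torch.allclose(abundances.sum(dim=1), torch.ones(1, 16, 16), atol=1e-5)

A softmax head is only one way to satisfy both constraints simultaneously; alternatives such as a nonnegative activation followed by per-pixel normalization achieve the same effect, and the paper's band reduction layer and spatial–spectral attention module would sit earlier in the network than this final constraint stage.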
ISSN:2072-4292