Densely Residual Network with Dual Attention for Hyperspectral Reconstruction from RGB Images


Bibliographic Details
Main Authors: Lixia Wang, Aditya Sole, Jon Yngve Hardeberg
Format: Article
Language: English
Published: MDPI AG 2022-06-01
Series: Remote Sensing
Subjects:
Online Access: https://www.mdpi.com/2072-4292/14/13/3128
Description
Summary: In recent years, deep learning has been introduced to recover a hyperspectral image (HSI) from a single RGB image and has demonstrated good performance. In particular, attention mechanisms have further strengthened discriminative features, but most of them are learned by convolutions with limited receptive fields or incur a high computational cost, which hinders the function of attention modules. Furthermore, the performance of these deep learning methods is hampered by treating multi-level features equally. To this end, in this paper, based on multiple lightweight densely residual modules, we propose a densely residual network with dual attention (DRN-DA), which utilizes an advanced attention and adaptive fusion strategy for more efficient feature correlation learning and more powerful feature extraction. Specifically, an SE layer is applied to learn channel-wise dependencies, and dual downsampling spatial attention (DDSA) is developed to capture long-range spatial contextual information. All the intermediate-layer feature maps are adaptively fused. Experimental results on four data sets from the NTIRE 2018 and NTIRE 2020 Spectral Reconstruction Challenges demonstrate the superiority of the proposed DRN-DA over state-of-the-art methods (at least −6.19% and −1.43% on the NTIRE 2018 “Clean” and “Real World” tracks, and −6.85% and −5.30% on the NTIRE 2020 “Clean” and “Real World” tracks) in terms of mean relative absolute error.
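The abstract mentions an SE (squeeze-and-excitation) layer for learning channel-wise dependencies. As a rough illustration only (this is not the authors' implementation; the weights, reduction ratio, and tensor shapes below are hypothetical), a minimal NumPy sketch of SE-style channel attention looks like this:

```python
import numpy as np

def se_attention(feat, w1, w2):
    """Illustrative squeeze-and-excitation channel attention.

    feat: (C, H, W) feature map
    w1:   (C // r, C) bottleneck reduction weights (hypothetical)
    w2:   (C, C // r) expansion weights (hypothetical)
    Returns the channel-reweighted feature map, same shape as feat.
    """
    # Squeeze: global average pooling over spatial dimensions -> (C,)
    z = feat.mean(axis=(1, 2))
    # Excitation: bottleneck MLP, ReLU then sigmoid gate per channel
    s = np.maximum(w1 @ z, 0.0)           # ReLU
    s = 1.0 / (1.0 + np.exp(-(w2 @ s)))   # sigmoid, values in (0, 1)
    # Scale: reweight each channel of the input feature map
    return feat * s[:, None, None]

# Example with 8 channels and a reduction ratio of r = 4
rng = np.random.default_rng(0)
C, H, W, r = 8, 5, 5, 4
feat = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C // r, C)) * 0.1
w2 = rng.standard_normal((C, C // r)) * 0.1
out = se_attention(feat, w1, w2)
assert out.shape == feat.shape
```

The sigmoid gate keeps each channel weight in (0, 1), so the module can only attenuate channels relative to one another; the DDSA spatial-attention module described in the abstract is the paper's own contribution and is not sketched here.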
ISSN:2072-4292