Spatial-Spectral BERT for Hyperspectral Image Classification

Bibliographic Details
Main Authors: Mahmood Ashraf, Xichuan Zhou, Gemine Vivone, Lihui Chen, Rong Chen, Reza Seifi Majdard
Format: Article
Language: English
Published: MDPI AG 2024-01-01
Series: Remote Sensing
Online Access: https://www.mdpi.com/2072-4292/16/3/539
Description
Summary: Several deep learning and transformer models have been proposed in previous research for the classification of hyperspectral images (HSIs). Among them, one of the most innovative is the bidirectional encoder representation from transformers (BERT), which applies a distance-independent approach to capture the global dependency among all pixels in a selected region. However, this model does not consider the local spatial-spectral and spectral sequential relations. In this paper, a dual-dimensional (i.e., spatial and spectral) BERT (the so-called D²BERT) is proposed, which improves the existing BERT model by capturing more global and local dependencies between sequential spectral bands regardless of distance. In the proposed model, two BERT branches work in parallel to investigate relations among pixels and spectral bands, respectively. In addition, the intermediate layer information is used for supervision during the training phase to enhance performance. We used two widely employed datasets for our experimental analysis. The proposed D²BERT shows superior classification accuracy and computational efficiency with respect to several state-of-the-art neural networks and the previously developed BERT model for this task.
ISSN:2072-4292
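
Note: The summary above describes a dual-branch architecture (a spatial branch over the pixels of a patch and a spectral branch over the bands of a pixel) with deep supervision from intermediate layers. The sketch below is only an illustration of that general idea, not the authors' implementation; all module names, dimensions, the patch/band tokenization, and the fusion and auxiliary-head scheme are assumptions.

```python
# Hypothetical sketch of a dual-branch (spatial + spectral) transformer classifier
# inspired by the abstract's description of D2BERT. Not the published model.
import torch
import torch.nn as nn


class BranchEncoder(nn.Module):
    """Stack of transformer encoder layers that also returns intermediate
    outputs, so auxiliary (deep-supervision) heads can be attached in training."""

    def __init__(self, dim: int, depth: int = 4, heads: int = 4):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.TransformerEncoderLayer(
                d_model=dim, nhead=heads, dim_feedforward=2 * dim, batch_first=True
            )
            for _ in range(depth)
        )

    def forward(self, x: torch.Tensor):
        intermediates = []
        for blk in self.layers:
            x = blk(x)
            intermediates.append(x)
        return x, intermediates


class DualBranchHSIClassifier(nn.Module):
    """Two parallel encoders: one over the spatial pixels of a patch, one over
    the spectral bands of the center pixel. Fusion by concatenating the
    mean-pooled branch outputs (an assumed, simple choice)."""

    def __init__(self, num_bands: int, patch_pixels: int, num_classes: int,
                 dim: int = 64, depth: int = 4):
        super().__init__()
        # Spatial branch: each pixel in the patch is a token (features = bands).
        self.spatial_embed = nn.Linear(num_bands, dim)
        self.spatial_pos = nn.Parameter(torch.zeros(1, patch_pixels, dim))
        self.spatial_enc = BranchEncoder(dim, depth)
        # Spectral branch: each band is a token (scalar reflectance projected to dim).
        self.spectral_embed = nn.Linear(1, dim)
        self.spectral_pos = nn.Parameter(torch.zeros(1, num_bands, dim))
        self.spectral_enc = BranchEncoder(dim, depth)
        # Main head plus auxiliary heads on intermediate layers (deep supervision).
        self.head = nn.Linear(2 * dim, num_classes)
        self.aux_heads = nn.ModuleList(
            nn.Linear(2 * dim, num_classes) for _ in range(depth)
        )

    def forward(self, patch: torch.Tensor, center_spectrum: torch.Tensor):
        # patch: (B, patch_pixels, num_bands); center_spectrum: (B, num_bands)
        sa = self.spatial_embed(patch) + self.spatial_pos
        sp = self.spectral_embed(center_spectrum.unsqueeze(-1)) + self.spectral_pos
        sa, sa_mid = self.spatial_enc(sa)
        sp, sp_mid = self.spectral_enc(sp)
        fused = torch.cat([sa.mean(dim=1), sp.mean(dim=1)], dim=-1)
        logits = self.head(fused)
        # Auxiliary logits from each intermediate layer pair; a training loss would
        # combine these with the main logits for deep supervision.
        aux_logits = [
            h(torch.cat([a.mean(dim=1), s.mean(dim=1)], dim=-1))
            for h, a, s in zip(self.aux_heads, sa_mid, sp_mid)
        ]
        return logits, aux_logits


if __name__ == "__main__":
    # Example sizes (hypothetical): 103 bands, a 9x9 patch, 9 land-cover classes.
    model = DualBranchHSIClassifier(num_bands=103, patch_pixels=81, num_classes=9)
    patch = torch.randn(2, 81, 103)
    spectrum = torch.randn(2, 103)
    logits, aux = model(patch, spectrum)
    print(logits.shape, len(aux))  # torch.Size([2, 9]) 4
```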