Spectral–Spatial Feature Extraction for Hyperspectral Image Classification Using Enhanced Transformer with Large-Kernel Attention

Bibliographic Details
Main Authors: Wen Lu, Xinyu Wang, Le Sun, Yuhui Zheng
Format: Article
Language: English
Published: MDPI AG, 2023-12-01
Series: Remote Sensing
Subjects: CNN; Transformer; spectral–spatial feature; HSI
Online Access: https://www.mdpi.com/2072-4292/16/1/67
author Wen Lu
Xinyu Wang
Le Sun
Yuhui Zheng
collection DOAJ
description In the hyperspectral image (HSI) classification task, every HSI pixel is labeled as a specific land cover category. Although convolutional neural network (CNN)-based HSI classification methods have made significant progress in improving classification performance in recent years, they remain limited in capturing deep semantic features and face escalating computational costs as network depth increases. In contrast, the Transformer framework excels at expressing high-level semantic features. This study introduces a novel classification network that extracts spectral–spatial features with an Enhanced Transformer with Large-Kernel Attention (ETLKA). Specifically, it uses separate branches of three-dimensional and two-dimensional convolutional layers to extract more diverse shallow spectral–spatial features. A Large-Kernel Attention mechanism is then applied before the Transformer encoder to enhance feature extraction, improve comprehension of the input data, reduce the impact of redundant information, and strengthen the model’s robustness. The resulting features are fed into the Transformer encoder module for feature representation and learning. Finally, a linear layer is applied to the first learnable token to obtain the sample label. Empirical validation confirms the strong classification performance of ETLKA, which surpasses several state-of-the-art methods. This work provides a robust solution for HSI classification tasks with clear potential for practical applications.
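
For orientation, the following is a minimal, hypothetical PyTorch sketch of the pipeline the abstract describes: parallel three-dimensional and two-dimensional convolutional branches for shallow spectral–spatial features, a Large-Kernel Attention block applied before a Transformer encoder, and a linear head on the first learnable (class) token. All module names, kernel sizes, channel widths, the band count, the patch size, and the class count are assumptions made for illustration; the record does not specify the paper's actual architecture or hyperparameters, and the LKA block below uses the common depth-wise/dilated/point-wise decomposition, which may differ from the authors' design.

```python
# Minimal sketch of the described pipeline (all sizes are illustrative assumptions).
import torch
import torch.nn as nn


class LargeKernelAttention(nn.Module):
    """Large-kernel attention via the usual depth-wise + dilated depth-wise +
    point-wise decomposition; the output gates the input feature map."""

    def __init__(self, dim):
        super().__init__()
        self.dw = nn.Conv2d(dim, dim, 5, padding=2, groups=dim)
        self.dw_dilated = nn.Conv2d(dim, dim, 7, padding=9, groups=dim, dilation=3)
        self.pw = nn.Conv2d(dim, dim, 1)

    def forward(self, x):
        attn = self.pw(self.dw_dilated(self.dw(x)))
        return x * attn


class ETLKASketch(nn.Module):
    """Hypothetical ETLKA-style network: 3-D/2-D conv branches -> LKA ->
    Transformer encoder -> linear head on the first learnable token."""

    def __init__(self, bands=30, patch=9, dim=64, num_classes=16):
        super().__init__()
        # 3-D branch: joint spectral-spatial convolution over the band axis.
        self.branch3d = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=(7, 3, 3), padding=(3, 1, 1)),
            nn.BatchNorm3d(8), nn.ReLU(),
        )
        # 2-D branch: purely spatial convolution over the stacked bands.
        self.branch2d = nn.Sequential(
            nn.Conv2d(bands, 32, kernel_size=3, padding=1),
            nn.BatchNorm2d(32), nn.ReLU(),
        )
        self.fuse = nn.Conv2d(8 * bands + 32, dim, kernel_size=1)
        self.lka = LargeKernelAttention(dim)  # applied before the encoder
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos = nn.Parameter(torch.zeros(1, patch * patch + 1, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, cube):                      # cube: (B, bands, H, W)
        f3d = self.branch3d(cube.unsqueeze(1))    # (B, 8, bands, H, W)
        f3d = f3d.flatten(1, 2)                   # (B, 8*bands, H, W)
        f2d = self.branch2d(cube)                 # (B, 32, H, W)
        x = self.fuse(torch.cat([f3d, f2d], 1))   # (B, dim, H, W)
        x = self.lka(x)
        tokens = x.flatten(2).transpose(1, 2)     # (B, H*W, dim)
        cls = self.cls_token.expand(tokens.size(0), -1, -1)
        tokens = self.encoder(torch.cat([cls, tokens], 1) + self.pos)
        return self.head(tokens[:, 0])            # classify from the first token


# Example: a batch of two 9x9 patches with 30 (e.g. PCA-reduced) bands.
logits = ETLKASketch()(torch.randn(2, 30, 9, 9))  # shape: (2, 16)
```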
format Article
id doaj.art-0d85db05365d4f8c965efea961e10ca1
institution Directory Open Access Journal
issn 2072-4292
language English
publishDate 2023-12-01
publisher MDPI AG
record_format Article
series Remote Sensing
doi 10.3390/rs16010067
citation Remote Sensing, Vol. 16, No. 1, Article 67 (2023-12-01), MDPI AG
affiliation Wen Lu: The College of Computer, Qinghai Normal University, Xining 810000, China
affiliation Xinyu Wang: School of Computer Science, Nanjing University of Information Science and Technology, Nanjing 210044, China
affiliation Le Sun: School of Computer Science, Nanjing University of Information Science and Technology, Nanjing 210044, China
affiliation Yuhui Zheng: The College of Computer, Qinghai Normal University, Xining 810000, China
topic CNN
Transformer
spectral–spatial feature
HSI
url https://www.mdpi.com/2072-4292/16/1/67