HyperSFormer: A Transformer-Based End-to-End Hyperspectral Image Classification Method for Crop Classification
Crop classification of large-scale agricultural land is crucial for crop monitoring and yield estimation. Hyperspectral image classification has proven to be an effective method for this task. Most current popular hyperspectral image classification methods are based on image classification, specifically on convolutional neural networks (CNNs) and recurrent neural networks (RNNs).
Main Authors: | Jiaxing Xie, Jiajun Hua, Shaonan Chen, Peiwen Wu, Peng Gao, Daozong Sun, Zhendong Lyu, Shilei Lyu, Xiuyun Xue, Jianqiang Lu |
---|---|
Format: | Article |
Language: | English |
Published: | MDPI AG, 2023-07-01 |
Series: | Remote Sensing |
Subjects: | crop classification; hyperspectral image classification; deep learning; transformer; semantic segmentation |
Online Access: | https://www.mdpi.com/2072-4292/15/14/3491 |
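The abstract above reports training HyperSFormer with a loss that fuses dice loss and focal loss to counter insufficient and imbalanced samples. The record does not give the authors' exact formulation or weighting, so the following is only an illustrative NumPy sketch of such a fusion (the functions, `gamma`, and the 0.5/0.5 weights are assumptions, not the paper's settings):

```python
import numpy as np

def dice_loss(probs, onehot, eps=1e-6):
    # Soft dice loss: 1 - (2*intersection / union), averaged over classes.
    # probs and onehot have shape (H, W, C).
    inter = (probs * onehot).sum(axis=(0, 1))
    union = probs.sum(axis=(0, 1)) + onehot.sum(axis=(0, 1))
    return float(np.mean(1.0 - (2.0 * inter + eps) / (union + eps)))

def focal_loss(probs, onehot, gamma=2.0, eps=1e-6):
    # Focal loss: cross-entropy scaled by (1 - p_t)^gamma, which
    # down-weights easy pixels and emphasizes hard, rare classes.
    pt = (probs * onehot).sum(axis=-1)  # probability of the true class
    return float(np.mean(-((1.0 - pt) ** gamma) * np.log(pt + eps)))

def fused_loss(probs, onehot, w_dice=0.5, w_focal=0.5):
    # Hypothetical equal weighting; the paper tunes this via ablation.
    return w_dice * dice_loss(probs, onehot) + w_focal * focal_loss(probs, onehot)
```

Dice loss directly optimizes region overlap (robust to class imbalance), while focal loss keeps per-pixel gradients focused on hard examples; combining them is a common way to get both effects.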
_version_ | 1797587580317859840 |
---|---|
author | Jiaxing Xie; Jiajun Hua; Shaonan Chen; Peiwen Wu; Peng Gao; Daozong Sun; Zhendong Lyu; Shilei Lyu; Xiuyun Xue; Jianqiang Lu |
author_facet | Jiaxing Xie; Jiajun Hua; Shaonan Chen; Peiwen Wu; Peng Gao; Daozong Sun; Zhendong Lyu; Shilei Lyu; Xiuyun Xue; Jianqiang Lu |
author_sort | Jiaxing Xie |
collection | DOAJ |
description | Crop classification of large-scale agricultural land is crucial for crop monitoring and yield estimation. Hyperspectral image classification has proven to be an effective method for this task. Most current popular hyperspectral image classification methods are based on image classification, specifically on convolutional neural networks (CNNs) and recurrent neural networks (RNNs). In contrast, this paper focuses on methods based on semantic segmentation and proposes a new transformer-based approach called HyperSFormer for crop hyperspectral image classification. The key enhancement of the proposed method is the replacement of the encoder in SegFormer with an improved Swin Transformer while keeping the SegFormer decoder. The entire model adopts a simple and uniform transformer architecture. Additionally, the paper introduces the hyper patch embedding (HPE) module to extract spectral and local spatial information from the hyperspectral images, which enhances the effectiveness of the features used as input for the model. To ensure detailed model processing and achieve end-to-end hyperspectral image classification, the transpose padding upsample (TPU) module is proposed for the model’s output. In order to address the problem of insufficient and imbalanced samples in hyperspectral image classification, the paper designs an adaptive min log sampling (AMLS) strategy and a loss function that incorporates dice loss and focal loss to assist model training. Experimental results using three public hyperspectral image datasets demonstrate the strong performance of HyperSFormer, particularly in the presence of imbalanced sample data, complex negative samples, and mixed sample classes. HyperSFormer outperforms state-of-the-art methods, including fast patch-free global learning (FPGA), a spectral–spatial-dependent global learning framework (SSDGL), and SegFormer, by at least 2.7% in the mean intersection over union (mIoU). 
It also improves the overall accuracy and average accuracy values by at least 0.9% and 0.3%, respectively, and the kappa coefficient by at least 0.011. Furthermore, ablation experiments were conducted to determine the optimal hyperparameter and loss function settings for the proposed method, validating the rationality of these settings and the fusion loss function. |
first_indexed | 2024-03-11T00:41:56Z |
format | Article |
id | doaj.art-4e4e21b77b804791aac3818a445bcebe |
institution | Directory Open Access Journal |
issn | 2072-4292 |
language | English |
last_indexed | 2024-03-11T00:41:56Z |
publishDate | 2023-07-01 |
publisher | MDPI AG |
record_format | Article |
series | Remote Sensing |
spelling | doaj.art-4e4e21b77b804791aac3818a445bcebe; 2023-11-18T21:11:34Z; eng; MDPI AG; Remote Sensing; 2072-4292; 2023-07-01; vol. 15, iss. 14, art. 3491; doi:10.3390/rs15143491. HyperSFormer: A Transformer-Based End-to-End Hyperspectral Image Classification Method for Crop Classification. Jiaxing Xie, Jiajun Hua, Shaonan Chen, Peiwen Wu, Peng Gao, Daozong Sun, Zhendong Lyu, Shilei Lyu, Xiuyun Xue, Jianqiang Lu (all: College of Electronic Engineering (College of Artificial Intelligence), South China Agricultural University, Guangzhou 510642, China). Abstract as given in the description field. https://www.mdpi.com/2072-4292/15/14/3491. Keywords: crop classification; hyperspectral image classification; deep learning; transformer; semantic segmentation |
spellingShingle | Jiaxing Xie; Jiajun Hua; Shaonan Chen; Peiwen Wu; Peng Gao; Daozong Sun; Zhendong Lyu; Shilei Lyu; Xiuyun Xue; Jianqiang Lu. HyperSFormer: A Transformer-Based End-to-End Hyperspectral Image Classification Method for Crop Classification. Remote Sensing. crop classification; hyperspectral image classification; deep learning; transformer; semantic segmentation |
title | HyperSFormer: A Transformer-Based End-to-End Hyperspectral Image Classification Method for Crop Classification |
title_full | HyperSFormer: A Transformer-Based End-to-End Hyperspectral Image Classification Method for Crop Classification |
title_fullStr | HyperSFormer: A Transformer-Based End-to-End Hyperspectral Image Classification Method for Crop Classification |
title_full_unstemmed | HyperSFormer: A Transformer-Based End-to-End Hyperspectral Image Classification Method for Crop Classification |
title_short | HyperSFormer: A Transformer-Based End-to-End Hyperspectral Image Classification Method for Crop Classification |
title_sort | hypersformer a transformer based end to end hyperspectral image classification method for crop classification |
topic | crop classification; hyperspectral image classification; deep learning; transformer; semantic segmentation |
url | https://www.mdpi.com/2072-4292/15/14/3491 |
work_keys_str_mv | AT jiaxingxie hypersformeratransformerbasedendtoendhyperspectralimageclassificationmethodforcropclassification AT jiajunhua hypersformeratransformerbasedendtoendhyperspectralimageclassificationmethodforcropclassification AT shaonanchen hypersformeratransformerbasedendtoendhyperspectralimageclassificationmethodforcropclassification AT peiwenwu hypersformeratransformerbasedendtoendhyperspectralimageclassificationmethodforcropclassification AT penggao hypersformeratransformerbasedendtoendhyperspectralimageclassificationmethodforcropclassification AT daozongsun hypersformeratransformerbasedendtoendhyperspectralimageclassificationmethodforcropclassification AT zhendonglyu hypersformeratransformerbasedendtoendhyperspectralimageclassificationmethodforcropclassification AT shileilyu hypersformeratransformerbasedendtoendhyperspectralimageclassificationmethodforcropclassification AT xiuyunxue hypersformeratransformerbasedendtoendhyperspectralimageclassificationmethodforcropclassification AT jianqianglu hypersformeratransformerbasedendtoendhyperspectralimageclassificationmethodforcropclassification |
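The headline comparison in this record is mean intersection over union (mIoU), by which HyperSFormer beats FPGA, SSDGL, and SegFormer by at least 2.7%. As a reference for the metric only (not the authors' evaluation code; averaging conventions for absent classes vary between implementations), a minimal computation over predicted and ground-truth label maps looks like:

```python
import numpy as np

def miou(pred, gt, num_classes):
    # Per-class IoU = |pred==c AND gt==c| / |pred==c OR gt==c|,
    # averaged over classes that appear in pred or gt.
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:  # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))
```

Unlike overall accuracy, mIoU weights every class equally, which is why it is the more telling metric under the imbalanced-sample conditions the abstract emphasizes.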