PCCN-MSS: Parallel Convolutional Classification Network Combined Multi-Spatial Scale and Spectral Features for UAV-Borne Hyperspectral With High Spatial Resolution Imagery


Bibliographic Details
Main Authors: Linhuan Jiang, Zhen Zhang, Bo-Hui Tang, Lehao Huang, Bingru Zhang
Format: Article
Language: English
Published: IEEE 2024-01-01
Series: IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing
Subjects:
Online Access: https://ieeexplore.ieee.org/document/10448531/
Description
Summary: Hyperspectral remote sensing images with high spatial resolution (H² imagery) contain abundant spatial-spectral information, holding tremendous potential for fine-grained remote sensing monitoring and classification. However, challenges such as high spatial heterogeneity, severe intra-class spectral variability, and poor signal-to-noise ratio, especially in unmanned aerial vehicle (UAV) hyperspectral imagery, constrain the performance of fine-grained classification. The convolutional neural network (CNN) has emerged as a formidable tool for image mining and feature extraction, offering effective utility for land cover classification. In this article, a parallel convolutional classification network model based on multimodal branches, PCCN-MSS, comprising independent component analysis (ICA)-two-dimensional (2-D)-FPN and spectral attention (SA)-3-D-CNN branch structures, is proposed for precise H² imagery classification. The ICA-2-D-FPN branch integrates ICA into a 2-D-CNN to extract multi-spatial-scale and spectral information of H² imagery through feature pyramid networks, while the SA-3-D-CNN branch extracts spatial and spectral information by combining the SA mechanism with a 3-D-CNN. Taking UAV hyperspectral imagery containing vegetation and artificial material ground as an example, the proposed PCCN-MSS model achieves an overall accuracy of 78.18%, outperforming the compared methods by 9.58%. The proposed PCCN-MSS method mitigates the classification issues of severe salt-and-pepper noise and inaccurate boundaries, delivering more satisfactory classification results with robust performance and remarkable advantages for H² imagery.
ISSN:2151-1535
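
The abstract describes a parallel two-branch design: a 2-D convolutional branch operating on ICA-reduced components at multiple spatial scales, and a 3-D convolutional branch with spectral attention operating on the full spectrum. The sketch below is a minimal, hedged PyTorch illustration of that general idea, not the authors' released implementation; the class names (SpectralAttention, TwoBranchClassifier), channel counts, kernel sizes, squeeze-and-excitation form of the attention, and the fusion by concatenation are all assumptions made for illustration.

```python
# Illustrative sketch only: a simplified parallel 2-D / 3-D branch classifier
# loosely following the PCCN-MSS description in the abstract.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SpectralAttention(nn.Module):
    """Squeeze-and-excitation style re-weighting over spectral bands (assumed form of SA)."""
    def __init__(self, bands, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(bands, bands // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(bands // reduction, bands),
            nn.Sigmoid(),
        )

    def forward(self, x):                       # x: (B, bands, H, W)
        w = self.fc(x.mean(dim=(2, 3)))         # one weight per band
        return x * w.unsqueeze(-1).unsqueeze(-1)


class TwoBranchClassifier(nn.Module):
    """Parallel 2-D (multi-scale, ICA input) and 3-D (spectral-spatial) branches fused before the classifier."""
    def __init__(self, ica_components, bands, num_classes):
        super().__init__()
        # Branch 1: 2-D convolutions on ICA-reduced components at two spatial scales
        self.conv2d_a = nn.Conv2d(ica_components, 32, kernel_size=3, padding=1)
        self.conv2d_b = nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1)
        # Branch 2: spectral attention followed by a 3-D convolution over all bands
        self.sa = SpectralAttention(bands)
        self.conv3d = nn.Conv3d(1, 16, kernel_size=(7, 3, 3), padding=(3, 1, 1))
        self.fc = nn.Linear(32 + 64 + 16, num_classes)

    def forward(self, x_ica, x_full):
        # x_ica:  (B, ica_components, H, W)  ICA-reduced patch
        # x_full: (B, bands, H, W)           full-spectrum patch
        f1 = F.relu(self.conv2d_a(x_ica))
        f2 = F.relu(self.conv2d_b(f1))
        g = self.sa(x_full).unsqueeze(1)        # (B, 1, bands, H, W)
        f3 = F.relu(self.conv3d(g))
        # Global-average-pool each feature map, then fuse by concatenation
        feats = [f.mean(dim=tuple(range(2, f.dim()))) for f in (f1, f2, f3)]
        return self.fc(torch.cat(feats, dim=1))


if __name__ == "__main__":
    model = TwoBranchClassifier(ica_components=10, bands=120, num_classes=8)
    logits = model(torch.randn(2, 10, 17, 17), torch.randn(2, 120, 17, 17))
    print(logits.shape)  # torch.Size([2, 8])
```

The paper additionally builds a feature pyramid network on the 2-D branch; the sketch approximates that only by using two spatial scales before pooling, so it should be read as a structural outline rather than a reproduction of the reported method.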