Two-Stream Deep Fusion Network Based on VAE and CNN for Synthetic Aperture Radar Target Recognition
Usually, radar target recognition methods use only a single type of high-resolution radar signal, e.g., high-resolution range profiles (HRRPs) or synthetic aperture radar (SAR) images. In fact, in the SAR imaging procedure, both the HRRP data and the corresponding SAR image can be obtained simultaneously...
Main Authors: | Lan Du, Lu Li, Yuchen Guo, Yan Wang, Ke Ren, Jian Chen |
---|---|
Format: | Article |
Language: | English |
Published: | MDPI AG, 2021-10-01 |
Series: | Remote Sensing |
Subjects: | target recognition; synthetic aperture radar; high-resolution range profile; fusion network; variational auto-encoder; convolutional neural network |
Online Access: | https://www.mdpi.com/2072-4292/13/20/4021 |
_version_ | 1797513303816142848 |
---|---|
author | Lan Du, Lu Li, Yuchen Guo, Yan Wang, Ke Ren, Jian Chen |
author_facet | Lan Du, Lu Li, Yuchen Guo, Yan Wang, Ke Ren, Jian Chen |
author_sort | Lan Du |
collection | DOAJ |
description | Radar target recognition methods usually use only a single type of high-resolution radar signal, e.g., high-resolution range profiles (HRRPs) or synthetic aperture radar (SAR) images. In fact, in the SAR imaging procedure, both the HRRP data and the corresponding SAR image can be obtained simultaneously, and although the information contained in the two is not exactly the same, both are important for radar target recognition. Therefore, in this paper, we propose a novel end-to-end two-stream fusion network that makes full use of the different characteristics obtained by modeling the HRRP data and the SAR images, respectively, for SAR target recognition. The proposed fusion network contains two separate streams in the feature extraction stage: one takes advantage of a variational auto-encoder (VAE) network to acquire the latent probabilistic distribution characteristics of the HRRP data, and the other uses a lightweight convolutional neural network, LightNet, to extract 2D visual structure characteristics from the SAR images. Following the feature extraction stage, a fusion module integrates the latent probabilistic distribution characteristics and the structure characteristics so that the target information is reflected more comprehensively and sufficiently. The main contributions of the proposed method are twofold: (1) different characteristics from the HRRP data and the SAR image are used effectively for SAR target recognition, and (2) an attention weight vector in the fusion module adaptively integrates the different characteristics from the two sub-networks. Experiments on the HRRP data and SAR images of the MSTAR and civilian vehicle datasets show recognition-rate improvements of at least 0.96% and 2.16%, respectively, compared with current SAR target recognition methods. (An illustrative sketch of this architecture follows the record below.) |
first_indexed | 2024-03-10T06:14:43Z |
format | Article |
id | doaj.art-d832a7abf9664a7abe24ac1c7ee78850 |
institution | Directory Open Access Journal |
issn | 2072-4292 |
language | English |
last_indexed | 2024-03-10T06:14:43Z |
publishDate | 2021-10-01 |
publisher | MDPI AG |
record_format | Article |
series | Remote Sensing |
spelling | MDPI AG, Remote Sensing, ISSN 2072-4292, 2021-10-01, vol. 13, no. 20, art. 4021, doi:10.3390/rs13204021. Two-Stream Deep Fusion Network Based on VAE and CNN for Synthetic Aperture Radar Target Recognition. Lan Du, Lu Li, Yuchen Guo, Yan Wang, Ke Ren, Jian Chen (all with the National Laboratory of Radar Signal Processing, Xidian University, Xi'an 710071, China). |
title | Two-Stream Deep Fusion Network Based on VAE and CNN for Synthetic Aperture Radar Target Recognition |
title_sort | two stream deep fusion network based on vae and cnn for synthetic aperture radar target recognition |
topic | target recognition; synthetic aperture radar; high-resolution range profile; fusion network; variational auto-encoder; convolutional neural network |
url | https://www.mdpi.com/2072-4292/13/20/4021 |
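To make the architecture described in the abstract more concrete, below is a minimal PyTorch sketch of the two-stream idea: a VAE encoder models the 1D HRRP profiles, a small CNN stands in for the LightNet stream on 2D SAR chips, and an attention weight vector adaptively re-weights the concatenated features before classification. All layer sizes, class names (HRRPVAEEncoder, SARCNNEncoder, TwoStreamFusionNet), input dimensions, and the KL weighting are illustrative assumptions, not the authors' exact implementation.

```python
# Hedged sketch of a two-stream VAE + CNN fusion network with attention-based
# feature integration, loosely following the abstract above. Hypothetical sizes.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HRRPVAEEncoder(nn.Module):
    """VAE stream: maps an HRRP vector to a latent Gaussian (mu, logvar)."""
    def __init__(self, hrrp_len=256, latent_dim=64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(hrrp_len, 512), nn.ReLU(),
            nn.Linear(512, 256), nn.ReLU(),
        )
        self.fc_mu = nn.Linear(256, latent_dim)
        self.fc_logvar = nn.Linear(256, latent_dim)

    def forward(self, x):
        h = self.backbone(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        # Reparameterization trick: sample the latent feature during training.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return z, mu, logvar

class SARCNNEncoder(nn.Module):
    """Lightweight CNN stream: extracts 2D structure features from a SAR chip."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(32, feat_dim)

    def forward(self, img):
        return self.fc(self.conv(img).flatten(1))

class TwoStreamFusionNet(nn.Module):
    """Fuses the two streams with an attention weight vector, then classifies."""
    def __init__(self, num_classes=10, hrrp_len=256, feat_dim=64):
        super().__init__()
        self.hrrp_stream = HRRPVAEEncoder(hrrp_len, feat_dim)
        self.sar_stream = SARCNNEncoder(feat_dim)
        # Attention: per-dimension weights computed from the concatenated features.
        self.attn = nn.Sequential(nn.Linear(2 * feat_dim, 2 * feat_dim), nn.Sigmoid())
        self.classifier = nn.Linear(2 * feat_dim, num_classes)

    def forward(self, hrrp, sar_img):
        z, mu, logvar = self.hrrp_stream(hrrp)
        s = self.sar_stream(sar_img)
        cat = torch.cat([z, s], dim=1)
        fused = self.attn(cat) * cat          # adaptive re-weighting of both streams
        logits = self.classifier(fused)
        # KL term keeps the HRRP latent close to a standard normal (VAE regularizer).
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        return logits, kl

# Usage on random tensors (batch of 4, 256-bin HRRP, 64x64 SAR chip; sizes assumed):
model = TwoStreamFusionNet()
logits, kl = model(torch.randn(4, 256), torch.randn(4, 1, 64, 64))
loss = F.cross_entropy(logits, torch.randint(0, 10, (4,))) + 0.1 * kl
```

The sigmoid attention here gates each feature dimension rather than selecting a single stream, which is one common reading of "adaptively integrate"; the paper's exact fusion rule, LightNet architecture, and loss weighting may differ.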