Transformer-Based Maneuvering Target Tracking
When tracking maneuvering targets, recurrent neural networks (RNNs), especially long short-term memory (LSTM) networks, are widely applied to sequentially capture the motion states of targets from observations. However, LSTMs can only extract features of trajectories stepwise; thus, their modeling of maneuvering motion lacks globality. Meanwhile, trajectory datasets are often generated within a large but fixed distance range. Therefore, the uncertainty of the initial position of targets increases the complexity of network training, and the fixed distance range reduces the generalization of the network to trajectories outside the dataset. In this study, we propose a transformer-based network (TBN) that consists of an encoder part (transformer layers) and a decoder part (one-dimensional convolutional layers) to track maneuvering targets. Assisted by the attention mechanism of the transformer network, the TBN can capture the long short-term dependencies of target states from a global perspective. Moreover, we propose a center–max normalization to reduce the complexity of TBN training and improve its generalization. The experimental results show that our proposed methods outperform the LSTM-based tracking network.
Main Authors: | Guanghui Zhao, Zelin Wang, Yixiong Huang, Huirong Zhang, Xiaojing Ma |
Format: | Article |
Language: | English |
Published: | MDPI AG, 2022-11-01 |
Series: | Sensors |
Subjects: | attention mechanism; maneuvering target tracking; recurrent neural network; transformer-based network |
Online Access: | https://www.mdpi.com/1424-8220/22/21/8482 |
author | Guanghui Zhao; Zelin Wang; Yixiong Huang; Huirong Zhang; Xiaojing Ma |
collection | DOAJ |
description | When tracking maneuvering targets, recurrent neural networks (RNNs), especially long short-term memory (LSTM) networks, are widely applied to sequentially capture the motion states of targets from observations. However, LSTMs can only extract features of trajectories stepwise; thus, their modeling of maneuvering motion lacks globality. Meanwhile, trajectory datasets are often generated within a large but fixed distance range. Therefore, the uncertainty of the initial position of targets increases the complexity of network training, and the fixed distance range reduces the generalization of the network to trajectories outside the dataset. In this study, we propose a transformer-based network (TBN) that consists of an encoder part (transformer layers) and a decoder part (one-dimensional convolutional layers) to track maneuvering targets. Assisted by the attention mechanism of the transformer network, the TBN can capture the long short-term dependencies of target states from a global perspective. Moreover, we propose a center–max normalization to reduce the complexity of TBN training and improve its generalization. The experimental results show that our proposed methods outperform the LSTM-based tracking network. |
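The abstract does not define center–max normalization; one plausible sketch, assuming it centers each trajectory on a reference point (here, its first observation, which removes the uncertainty of the initial position) and scales by the maximum absolute offset (which removes the dependence on the dataset's fixed distance range), is:

```python
import numpy as np

def center_max_normalize(traj):
    """Hypothetical center-max normalization of a trajectory.

    traj: (T, 2) array of 2-D positions. Subtracting the first
    observation removes the target's absolute initial position;
    dividing by the maximum absolute offset maps coordinates into
    [-1, 1] regardless of the distance range. The exact formula in
    the paper may differ.
    """
    centered = traj - traj[0]          # remove initial position
    scale = np.max(np.abs(centered))   # largest offset from the center
    if scale == 0:                     # stationary trajectory: nothing to scale
        return centered
    return centered / scale

# A trajectory far from the origin normalizes to the same values
# as one near the origin with the same shape.
traj = np.array([[1000.0, 2000.0],
                 [1010.0, 2005.0],
                 [1030.0, 2015.0]])
print(center_max_normalize(traj))
```

Under this reading, two trajectories that differ only by a translation or a uniform rescaling of distance produce identical network inputs, which is consistent with the abstract's stated goals of simpler training and better generalization.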
format | Article |
id | doaj.art-6feef1edbae142eaa7178ef68005f3ec |
institution | Directory Open Access Journal |
issn | 1424-8220 |
language | English |
publishDate | 2022-11-01 |
publisher | MDPI AG |
record_format | Article |
series | Sensors |
spelling | DOI: 10.3390/s22218482. Sensors, Vol. 22, Iss. 21, Art. 8482, MDPI AG, 2022-11-01. Affiliations: Guanghui Zhao, Zelin Wang, Yixiong Huang, and Huirong Zhang: School of Artificial Intelligence, Xidian University, Xi’an 710071, China. Xiaojing Ma: School of Electronic Confrontation, National University of Defense, Hefei 230037, China. |
title | Transformer-Based Maneuvering Target Tracking |
topic | attention mechanism; maneuvering target tracking; recurrent neural network; transformer-based network |
url | https://www.mdpi.com/1424-8220/22/21/8482 |