N-Cameras-Enabled Joint Pose Estimation for Auto-Landing Fixed-Wing UAVs
We propose a novel 6D pose estimation approach tailored for auto-landing fixed-wing unmanned aerial vehicles (UAVs). This method facilitates the simultaneous tracking of both position and attitude using a ground-based vision system, regardless of the number of cameras (N-cameras), even in Global Navigation Satellite System-denied environments.
Main Authors: | Dengqing Tang, Lincheng Shen, Xiaojia Xiang, Han Zhou, Jun Lai |
---|---|
Format: | Article |
Language: | English |
Published: | MDPI AG, 2023-11-01 |
Series: | Drones |
Subjects: | pose estimation; auto-landing fixed-wing UAVs; ground vision system; block convolutional neural networks |
Online Access: | https://www.mdpi.com/2504-446X/7/12/693 |
_version_ | 1797381378947416064 |
---|---|
author | Dengqing Tang; Lincheng Shen; Xiaojia Xiang; Han Zhou; Jun Lai |
author_facet | Dengqing Tang; Lincheng Shen; Xiaojia Xiang; Han Zhou; Jun Lai |
author_sort | Dengqing Tang |
collection | DOAJ |
description | We propose a novel 6D pose estimation approach tailored for auto-landing fixed-wing unmanned aerial vehicles (UAVs). This method enables the simultaneous tracking of both position and attitude using a ground-based vision system with an arbitrary number of cameras (N-cameras), even in Global Navigation Satellite System-denied environments. Our approach is a pipeline in which a Convolutional Neural Network (CNN) detects UAV anchors, which in turn drive the estimation of the UAV pose. To ensure robust and precise anchor detection, we designed a Block-CNN architecture that mitigates the influence of outliers. Leveraging the information from these anchors, we established an Extended Kalman Filter to continuously update the UAV’s position and attitude. To support our research, we set up both monocular and stereo outdoor ground-view systems for data collection and experimentation. Additionally, to expand our training dataset without requiring extra outdoor experiments, we created a parallel system that combines outdoor and simulated setups with identical configurations. We conducted a series of simulated and outdoor experiments. The results show that, compared with the baselines, our method achieves a 3.0% improvement in anchor detection precision and improvements of 19.5% and 12.7% in position and attitude estimation accuracy, respectively. Furthermore, these experiments affirm the practicality of the proposed architecture and algorithm, which meets the stringent accuracy and real-time requirements of auto-landing fixed-wing UAVs. |
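The pipeline the abstract describes feeds anchor-derived measurements into an Extended Kalman Filter to track pose. A generic EKF predict/update cycle can be sketched as below; the state layout, measurement model, and noise matrices are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def ekf_step(x, P, z, h, H, Q, R, F=None):
    """One EKF predict/update cycle.

    x : state estimate (n,)            z : measurement (m,)
    P : state covariance (n, n)        h : measurement function, x -> (m,)
    H : measurement Jacobian (m, n)    Q, R : process / measurement noise
    F : state transition (n, n); identity (static model) if None.
    """
    if F is None:
        F = np.eye(len(x))
    # Predict: propagate state and covariance through the motion model
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: correct the prediction with the anchor-derived measurement
    y = z - h(x_pred)                    # innovation
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy example: 3D position observed directly (H = I) under a static model.
x, P = np.zeros(3), np.eye(3)
Q, R = 0.01 * np.eye(3), 0.1 * np.eye(3)
z = np.array([1.0, 2.0, 3.0])            # pseudo-measurement from anchors
x, P = ekf_step(x, P, z, lambda s: s, np.eye(3), Q, R)
```

In the paper's setting the measurement function would instead project the UAV's 3D anchor points through each camera's model, so the Jacobian couples image-plane detections to both position and attitude; the linear toy update above only shows the filter mechanics.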
first_indexed | 2024-03-08T20:50:37Z |
format | Article |
id | doaj.art-404a42e68f024c44a48b3a49fd3f0f48 |
institution | Directory Open Access Journal |
issn | 2504-446X |
language | English |
last_indexed | 2024-03-08T20:50:37Z |
publishDate | 2023-11-01 |
publisher | MDPI AG |
record_format | Article |
series | Drones |
spelling | doaj.art-404a42e68f024c44a48b3a49fd3f0f48; indexed 2023-12-22T14:03:57Z; eng; MDPI AG; Drones; 2504-446X; 2023-11-01; vol. 7, no. 12, art. 693; doi:10.3390/drones7120693; N-Cameras-Enabled Joint Pose Estimation for Auto-Landing Fixed-Wing UAVs; Dengqing Tang, Lincheng Shen, Xiaojia Xiang, Han Zhou, Jun Lai (The College of Intelligence Science and Technology, National University of Defense Technology, Changsha 410073, China); https://www.mdpi.com/2504-446X/7/12/693 |
spellingShingle | Dengqing Tang; Lincheng Shen; Xiaojia Xiang; Han Zhou; Jun Lai; N-Cameras-Enabled Joint Pose Estimation for Auto-Landing Fixed-Wing UAVs; Drones; pose estimation; auto-landing fixed-wing UAVs; ground vision system; block convolutional neural networks |
title | N-Cameras-Enabled Joint Pose Estimation for Auto-Landing Fixed-Wing UAVs |
title_full | N-Cameras-Enabled Joint Pose Estimation for Auto-Landing Fixed-Wing UAVs |
title_fullStr | N-Cameras-Enabled Joint Pose Estimation for Auto-Landing Fixed-Wing UAVs |
title_full_unstemmed | N-Cameras-Enabled Joint Pose Estimation for Auto-Landing Fixed-Wing UAVs |
title_short | N-Cameras-Enabled Joint Pose Estimation for Auto-Landing Fixed-Wing UAVs |
title_sort | n cameras enabled joint pose estimation for auto landing fixed wing uavs |
topic | pose estimation; auto-landing fixed-wing UAVs; ground vision system; block convolutional neural networks |
url | https://www.mdpi.com/2504-446X/7/12/693 |
work_keys_str_mv | AT dengqingtang ncamerasenabledjointposeestimationforautolandingfixedwinguavs AT linchengshen ncamerasenabledjointposeestimationforautolandingfixedwinguavs AT xiaojiaxiang ncamerasenabledjointposeestimationforautolandingfixedwinguavs AT hanzhou ncamerasenabledjointposeestimationforautolandingfixedwinguavs AT junlai ncamerasenabledjointposeestimationforautolandingfixedwinguavs |