Towards Interpretable Camera and LiDAR Data Fusion for Autonomous Ground Vehicles Localisation

Recent deep learning frameworks have drawn strong research interest for ego-motion estimation, as they demonstrate superior results compared to geometric approaches. However, due to the lack of multimodal datasets, most of these studies have focused primarily on single-sensor-based estimation. To overcome this challenge, we collect a unique multimodal dataset, named LboroAV2, using multiple sensors, including camera, light detection and ranging (LiDAR), ultrasound, e-compass and rotary encoder. We also propose an end-to-end deep learning architecture that fuses RGB images and LiDAR laser scans for odometry. The proposed method comprises a convolutional encoder, a compressed representation and a recurrent neural network. Besides feature extraction and outlier rejection, the convolutional encoder produces a compressed representation, which is used to visualise the network’s learning process and to pass useful sequential information. The recurrent neural network uses this compressed sequential data to learn the relationship between consecutive time steps. We use the Loughborough autonomous vehicle (LboroAV2) and the Karlsruhe Institute of Technology and Toyota Technological Institute (KITTI) Visual Odometry (VO) datasets to evaluate our approach. In addition to visualising the network’s learning process, our approach yields superior results compared to other similar methods. The code for the proposed architecture is released on GitHub and is publicly accessible.
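The abstract outlines the pipeline at a high level: per-modality convolutional encoders, a compressed joint representation, and a recurrent network that relates consecutive time steps. The minimal PyTorch sketch below illustrates one plausible arrangement of those pieces; the layer sizes, the assumed input shapes (64x64 RGB frames, 720-beam LiDAR scans), the 6-DoF pose head and the class name FusionOdometryNet are illustrative assumptions, not the authors' published architecture (which is available in their GitHub release).

# Hypothetical sketch of a camera-LiDAR fusion odometry network as described
# in the abstract; shapes and layer sizes are assumptions, not the paper's design.
import torch
import torch.nn as nn

class FusionOdometryNet(nn.Module):
    def __init__(self, latent_dim: int = 256, hidden_dim: int = 512):
        super().__init__()
        # Convolutional encoder for RGB frames (assumed 3 x 64 x 64).
        self.rgb_encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(64 * 4 * 4, latent_dim),
        )
        # 1D convolutional encoder for a LiDAR scan (assumed 720 range readings).
        self.lidar_encoder = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(8), nn.Flatten(),
            nn.Linear(64 * 8, latent_dim),
        )
        # Recurrent model over the fused, compressed representation.
        self.rnn = nn.LSTM(2 * latent_dim, hidden_dim, batch_first=True)
        # Relative pose per time step: 3 translation + 3 rotation parameters.
        self.pose_head = nn.Linear(hidden_dim, 6)

    def forward(self, rgb_seq: torch.Tensor, lidar_seq: torch.Tensor) -> torch.Tensor:
        # rgb_seq: (batch, time, 3, 64, 64); lidar_seq: (batch, time, 720)
        b, t = rgb_seq.shape[:2]
        rgb_z = self.rgb_encoder(rgb_seq.flatten(0, 1)).view(b, t, -1)
        lidar_z = self.lidar_encoder(lidar_seq.flatten(0, 1).unsqueeze(1)).view(b, t, -1)
        fused = torch.cat([rgb_z, lidar_z], dim=-1)   # compressed joint representation
        out, _ = self.rnn(fused)
        return self.pose_head(out)                    # (batch, time, 6)

# Example: a batch of 2 sequences, 5 time steps each.
poses = FusionOdometryNet()(torch.randn(2, 5, 3, 64, 64), torch.randn(2, 5, 720))
print(poses.shape)  # torch.Size([2, 5, 6])

Fusing the two compressed per-modality latents before the recurrent layer is one way to obtain a single small representation that can both be visualised and carry sequential information, in the spirit of the interpretability aim stated in the abstract.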


Bibliographic Details
Main Authors: Haileleol Tibebu, Varuna De-Silva, Corentin Artaud, Rafael Pina, Xiyu Shi
Format: Article
Language: English
Published: MDPI AG, 2022-10-01
Series: Sensors
Subjects: glass detection; occupancy grid mapping; LiDAR noise reduction; localisation
Online Access:https://www.mdpi.com/1424-8220/22/20/8021
_version_ 1797469950460297216
author Haileleol Tibebu
Varuna De-Silva
Corentin Artaud
Rafael Pina
Xiyu Shi
collection DOAJ
description Recent deep learning frameworks have drawn strong research interest for ego-motion estimation, as they demonstrate superior results compared to geometric approaches. However, due to the lack of multimodal datasets, most of these studies have focused primarily on single-sensor-based estimation. To overcome this challenge, we collect a unique multimodal dataset, named LboroAV2, using multiple sensors, including camera, light detection and ranging (LiDAR), ultrasound, e-compass and rotary encoder. We also propose an end-to-end deep learning architecture that fuses RGB images and LiDAR laser scans for odometry. The proposed method comprises a convolutional encoder, a compressed representation and a recurrent neural network. Besides feature extraction and outlier rejection, the convolutional encoder produces a compressed representation, which is used to visualise the network’s learning process and to pass useful sequential information. The recurrent neural network uses this compressed sequential data to learn the relationship between consecutive time steps. We use the Loughborough autonomous vehicle (LboroAV2) and the Karlsruhe Institute of Technology and Toyota Technological Institute (KITTI) Visual Odometry (VO) datasets to evaluate our approach. In addition to visualising the network’s learning process, our approach yields superior results compared to other similar methods. The code for the proposed architecture is released on GitHub and is publicly accessible.
first_indexed 2024-03-09T19:30:11Z
format Article
id doaj.art-e03d520ff7194e939e9dc4b696a7abb3
institution Directory Open Access Journal
issn 1424-8220
language English
last_indexed 2024-03-09T19:30:11Z
publishDate 2022-10-01
publisher MDPI AG
record_format Article
series Sensors
spelling doaj.art-e03d520ff7194e939e9dc4b696a7abb3 (2023-11-24T02:30:26Z). Sensors, vol. 22, issue 20, article 8021, published 2022-10-01 by MDPI AG, ISSN 1424-8220. DOI: 10.3390/s22208021. Authors: Haileleol Tibebu, Varuna De-Silva, Corentin Artaud, Rafael Pina, Xiyu Shi, all at the Institute of Digital Technologies, Loughborough University London, 3 Lesney Avenue, London E20 3BS, UK.
title Towards Interpretable Camera and LiDAR Data Fusion for Autonomous Ground Vehicles Localisation
topic glass detection
occupancy grid mapping
LiDAR noise reduction
localisation
url https://www.mdpi.com/1424-8220/22/20/8021