Edge-Distributed Fusion of Camera-LiDAR for Robust Moving Object Localization
Object localization plays a crucial role in computational perception, enabling applications ranging from surveillance to autonomous navigation. This can be leveraged by fusing data from cameras and LiDARs (Light Detection and Ranging). However, there are challenges in employing current fusion method...
Main Authors: | Jose Amendola, Aveen Dayal, Linga Reddy Cenkeramaddi, Ajit Jha
---|---
Format: | Article
Language: | English
Published: | IEEE, 2023-01-01
Series: | IEEE Access
Subjects: | Sensor fusion; LiDAR; object localization; embedded system
Online Access: | https://ieeexplore.ieee.org/document/10182255/
_version_ | 1797772583100219392 |
author | Jose Amendola; Aveen Dayal; Linga Reddy Cenkeramaddi; Ajit Jha |
author_facet | Jose Amendola; Aveen Dayal; Linga Reddy Cenkeramaddi; Ajit Jha |
author_sort | Jose Amendola |
collection | DOAJ |
description | Object localization plays a crucial role in computational perception, enabling applications ranging from surveillance to autonomous navigation. Localization can be improved by fusing data from cameras and LiDARs (Light Detection and Ranging). However, current fusion methods are difficult to deploy on edge devices while keeping the process flexible and modular. This paper presents a method for multiple-object localization that fuses LiDAR and camera data with low latency, flexibility, and scalability. Data are obtained from four cameras providing a 360° surround view and a scanning LiDAR, distributed over embedded devices. The proposed technique: 1) discriminates multiple dynamic objects in the scene from raw point clouds and clusters their respective points to obtain a compact representation in 3D space; and 2) asynchronously fuses the cluster centroids with the output of object detection neural networks for each camera to perform detection, localization, and tracking. The proposed method delivers these functionalities with low-latency fusion and an increased field of view for safer navigation, even with an intermittent flow of labels and bounding boxes from the models. This makes the system distributed, modular, scalable, and agnostic to the object detection model, distinguishing it from the current state of the art. Finally, the proposed method is implemented and validated both in an indoor environment and on the publicly available outdoor KITTI 360 data set. The fusion is much faster and more accurate than a traditional non-data-driven fusion technique, and the latency is competitive with other non-embedded deep learning fusion methods. The mean error is estimated to be ≈5 cm with a precision of 2 cm for indoor navigation over 15 m (an error percentage of 0.3 %). Similarly, the mean error is 30 cm with a precision of 3 cm for outdoor navigation over 35 m on the KITTI 360 data set (an error percentage of 0.8 %). |
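The description above outlines a two-stage pipeline: dynamic LiDAR points are clustered into per-object centroids, which are then associated with 2D detections from each camera. The sketch below is a minimal illustration of that idea in Python; the clustering method (DBSCAN), the camera projection matrix `P`, and the synthetic points and bounding boxes are assumptions for illustration, not the authors' implementation.

```python
# Sketch only: cluster dynamic LiDAR points into centroids, project each
# centroid into a camera image, and associate it with a 2D detection box.
# DBSCAN, the projection matrix, and the sample data are illustrative
# assumptions, not the method published in the paper.
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_centroids(points, eps=0.5, min_samples=10):
    """Group dynamic LiDAR points (N x 3) and return one centroid per cluster."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points)
    return np.array([points[labels == k].mean(axis=0)
                     for k in set(labels) if k != -1])

def project_to_image(centroids, P):
    """Project 3D centroids (camera frame, z forward) with a 3x4 matrix P."""
    homo = np.hstack([centroids, np.ones((len(centroids), 1))])   # (M, 4)
    uvw = homo @ P.T                                              # (M, 3)
    return uvw[:, :2] / uvw[:, 2:3]                               # (M, 2) pixels

def associate(pixels, boxes):
    """Match each projected centroid to the first box that contains it."""
    matches = {}
    for i, (u, v) in enumerate(pixels):
        for j, (x1, y1, x2, y2, label) in enumerate(boxes):
            if x1 <= u <= x2 and y1 <= v <= y2:
                matches[i] = (j, label)
                break
    return matches

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Two synthetic dynamic objects roughly 5 m and 8 m in front of the camera.
    points = np.vstack([rng.normal([0.0, 0.0, 5.0], 0.1, (50, 3)),
                        rng.normal([2.0, 0.0, 8.0], 0.1, (50, 3))])
    centroids = cluster_centroids(points)
    # Hypothetical pinhole projection matrix and detector output (pixel boxes).
    P = np.array([[700, 0, 640, 0], [0, 700, 360, 0], [0, 0, 1, 0]], float)
    boxes = [(600, 300, 700, 420, "person"), (800, 320, 900, 400, "car")]
    print(associate(project_to_image(centroids, P), boxes))
```

In the paper's setting this association would run asynchronously per camera, so a centroid keeps its last known 3D position even when a detector momentarily stops emitting boxes; the snippet only shows the single-frame matching step.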
first_indexed | 2024-03-12T21:53:00Z |
format | Article |
id | doaj.art-42e809400e02497eb41dee79f1413bfc |
institution | Directory Open Access Journal |
issn | 2169-3536 |
language | English |
last_indexed | 2024-03-12T21:53:00Z |
publishDate | 2023-01-01 |
publisher | IEEE |
record_format | Article |
series | IEEE Access |
spelling | doaj.art-42e809400e02497eb41dee79f1413bfc; 2023-07-25T23:00:43Z; eng; IEEE; IEEE Access; ISSN 2169-3536; 2023-01-01; vol. 11, pp. 73583-73598; DOI 10.1109/ACCESS.2023.3295212; article 10182255; Edge-Distributed Fusion of Camera-LiDAR for Robust Moving Object Localization; Jose Amendola (Department of Engineering Sciences, University of Agder, Kristiansand, Norway); Aveen Dayal (https://orcid.org/0000-0001-6792-9170, Department of Information and Communication Technology, University of Agder, Kristiansand, Norway); Linga Reddy Cenkeramaddi (https://orcid.org/0000-0002-1023-2118, Department of Information and Communication Technology, University of Agder, Kristiansand, Norway); Ajit Jha (https://orcid.org/0000-0003-1435-9260, Department of Engineering Sciences, University of Agder, Kristiansand, Norway); https://ieeexplore.ieee.org/document/10182255/; Sensor fusion; LiDAR; object localization; embedded system |
spellingShingle | Jose Amendola; Aveen Dayal; Linga Reddy Cenkeramaddi; Ajit Jha; Edge-Distributed Fusion of Camera-LiDAR for Robust Moving Object Localization; IEEE Access; Sensor fusion; LiDAR; object localization; embedded system |
title | Edge-Distributed Fusion of Camera-LiDAR for Robust Moving Object Localization |
title_full | Edge-Distributed Fusion of Camera-LiDAR for Robust Moving Object Localization |
title_fullStr | Edge-Distributed Fusion of Camera-LiDAR for Robust Moving Object Localization |
title_full_unstemmed | Edge-Distributed Fusion of Camera-LiDAR for Robust Moving Object Localization |
title_short | Edge-Distributed Fusion of Camera-LiDAR for Robust Moving Object Localization |
title_sort | edge distributed fusion of camera lidar for robust moving object localization |
topic | Sensor fusion; LiDAR; object localization; embedded system |
url | https://ieeexplore.ieee.org/document/10182255/ |
work_keys_str_mv | AT joseamendola edgedistributedfusionofcameralidarforrobustmovingobjectlocalization AT aveendayal edgedistributedfusionofcameralidarforrobustmovingobjectlocalization AT lingareddycenkeramaddi edgedistributedfusionofcameralidarforrobustmovingobjectlocalization AT ajitjha edgedistributedfusionofcameralidarforrobustmovingobjectlocalization |