3D semantic map construction based on point cloud and image fusion
Abstract Accurate and robust positioning and mapping are the core functions of autonomous mobile robots, and the ability to analyse and understand scenes is also an important criterion for the intelligence of autonomous mobile robots. In the outdoor environment, most robots rely on GPS positioning....
Main Authors: | Huijun Li, Hailong Zhao, Bin Ye, Yu Zhang
Format: | Article
Language: | English
Published: | Wiley, 2023-03-01
Series: | IET Cyber-systems and Robotics
Subjects: | artificial intelligence; deep learning; environment perception; sensor fusion
Online Access: | https://doi.org/10.1049/csy2.12078
_version_ | 1827978914858795008 |
author | Huijun Li; Hailong Zhao; Bin Ye; Yu Zhang
author_facet | Huijun Li; Hailong Zhao; Bin Ye; Yu Zhang
author_sort | Huijun Li |
collection | DOAJ |
description | Abstract Accurate and robust positioning and mapping are core functions of autonomous mobile robots, and the ability to analyse and understand scenes is an important criterion for their intelligence. In outdoor environments, most robots rely on GPS positioning; when the signal is weak, positioning errors interfere with the mapping results and make semantic map construction less robust. This research designs a semantic map construction system for large outdoor scenes that does not rely on GPS signals. A feature extraction scheme is designed around the sampling characteristics of the Livox-AVIA solid-state LiDAR. A factor graph optimisation model over frame poses and inertial measurement unit (IMU) pre-integrated poses uses a sliding window to fuse solid-state LiDAR and IMU data. The resulting laser-inertial odometry is then fused with camera object detection results, and semantic information is assigned to the point cloud by referring to closest-point distance and curvature, so that semantic segmentation of the point cloud realises the construction of a 3D semantic map of outdoor scenes. Experiments verify that the laser-inertial odometry based on factor graph optimisation has better positioning accuracy and lower overall cumulative error at turns, and that the 3D semantic map obtained on this basis performs well.
first_indexed | 2024-04-09T21:28:53Z |
format | Article |
id | doaj.art-20c96552313b4774b3f446515197992b |
institution | Directory Open Access Journal |
issn | 2631-6315 |
language | English |
last_indexed | 2024-04-09T21:28:53Z |
publishDate | 2023-03-01 |
publisher | Wiley |
record_format | Article |
series | IET Cyber-systems and Robotics |
spelling | doaj.art-20c96552313b4774b3f446515197992b; indexed 2023-03-27T11:38:33Z; eng; Wiley; IET Cyber-systems and Robotics; ISSN 2631-6315; 2023-03-01; vol. 5, iss. 1, pp. n/a; doi:10.1049/csy2.12078; 3D semantic map construction based on point cloud and image fusion; Huijun Li, Hailong Zhao, Bin Ye, Yu Zhang (School of Information and Control Engineering, China University of Mining and Technology, Xuzhou, China); artificial intelligence; deep learning; environment perception; sensor fusion; https://doi.org/10.1049/csy2.12078
spellingShingle | Huijun Li; Hailong Zhao; Bin Ye; Yu Zhang; 3D semantic map construction based on point cloud and image fusion; IET Cyber-systems and Robotics; artificial intelligence; deep learning; environment perception; sensor fusion
title | 3D semantic map construction based on point cloud and image fusion |
title_full | 3D semantic map construction based on point cloud and image fusion |
title_fullStr | 3D semantic map construction based on point cloud and image fusion |
title_full_unstemmed | 3D semantic map construction based on point cloud and image fusion |
title_short | 3D semantic map construction based on point cloud and image fusion |
title_sort | 3d semantic map construction based on point cloud and image fusion |
topic | artificial intelligence; deep learning; environment perception; sensor fusion
url | https://doi.org/10.1049/csy2.12078 |
work_keys_str_mv | AT huijunli 3dsemanticmapconstructionbasedonpointcloudandimagefusion AT hailongzhao 3dsemanticmapconstructionbasedonpointcloudandimagefusion AT binye 3dsemanticmapconstructionbasedonpointcloudandimagefusion AT yuzhang 3dsemanticmapconstructionbasedonpointcloudandimagefusion |
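The abstract above describes a sliding-window factor graph in which LiDAR scan matching and IMU pre-integration each contribute relative-pose constraints between consecutive frames, which are then jointly optimised. The snippet below is a minimal, self-contained sketch of that idea, reduced to 2D poses (x, y, yaw) and solved with plain nonlinear least squares; the window size, noise levels, weights, and function names are illustrative assumptions and do not reproduce the authors' Livox-AVIA/IMU implementation.

```python
# Minimal 2D sliding-window pose fusion sketch (illustrative assumptions only).
# Two sources of relative-pose factors between consecutive frames:
#   - LiDAR scan-matching odometry
#   - IMU pre-integration
import numpy as np
from scipy.optimize import least_squares

def wrap(a):
    """Wrap an angle to (-pi, pi]."""
    return (a + np.pi) % (2.0 * np.pi) - np.pi

def relative_pose(p_i, p_j):
    """Relative pose of frame j expressed in frame i (2D SE(2))."""
    dx, dy = p_j[0] - p_i[0], p_j[1] - p_i[1]
    c, s = np.cos(p_i[2]), np.sin(p_i[2])
    return np.array([c * dx + s * dy, -s * dx + c * dy, wrap(p_j[2] - p_i[2])])

def residuals(x, lidar_meas, imu_meas, lidar_w, imu_w):
    """Stack weighted residuals of the prior, LiDAR, and IMU factors."""
    poses = x.reshape(-1, 3)
    res = [poses[0] * 10.0]                        # prior factor anchoring the first pose
    for i, (z_lidar, z_imu) in enumerate(zip(lidar_meas, imu_meas)):
        pred = relative_pose(poses[i], poses[i + 1])
        res.append(lidar_w * (pred - z_lidar))     # LiDAR scan-matching factor
        res.append(imu_w * (pred - z_imu))         # IMU pre-integration factor
    return np.concatenate(res)

# Hypothetical window of 5 frames: true motion is 1 m forward plus a slight turn per frame.
rng = np.random.default_rng(0)
true_step = np.array([1.0, 0.0, 0.05])
lidar_meas = [true_step + rng.normal(0, [0.02, 0.02, 0.005]) for _ in range(4)]
imu_meas = [true_step + rng.normal(0, [0.05, 0.05, 0.002]) for _ in range(4)]

x0 = np.zeros(5 * 3)                               # initial guess: all poses at the origin
sol = least_squares(residuals, x0, args=(lidar_meas, imu_meas, 50.0, 20.0))
print(sol.x.reshape(-1, 3))                        # optimised sliding-window poses
```

In a full pipeline such as the one the abstract describes, the window would slide forward as new frames arrive, the oldest pose would be dropped or marginalised, and the optimised poses would be used to register the point cloud before camera object detections and closest-point distance/curvature cues assign semantic labels to it.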