LiDAR–camera fusion for road detection using a recurrent conditional random field model

Abstract: Reliable road detection is an essential task in autonomous driving systems. Two categories of sensors are commonly used, cameras and light detection and ranging (LiDAR), and each provides information that complements the other. Nevertheless, existing sensor fusion methods do not fully exploit multimodal data: most are dominated by images and treat point clouds as a mere supplement, and the correlation between modalities is ignored. This paper proposes a recurrent conditional random field (R-CRF) model that fuses images and point clouds for road detection. The R-CRF model integrates the outputs of the modalities in a probabilistic way. Each modality is processed independently by its own semantic segmentation network. The resulting probability scores serve as the unary term for each pixel node in the random field, while the RGB image and the densified LiDAR image define the pairwise terms. The energy function is then optimized iteratively by mean-field variational inference, and the labelling results are refined by exploiting fully connected graphs over the RGB and LiDAR images. Extensive experiments on the public KITTI-Road dataset show that the proposed method achieves competitive performance.
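As a rough illustration of the approach summarized above, a fully connected CRF energy of this kind is typically written as a sum of unary and pairwise potentials. The sketch below follows the standard dense-CRF formulation; the particular unary fusion and the Gaussian kernels over the RGB image I and the densified LiDAR image D are assumptions for illustration, not the exact terms used in the paper.

% Assumed form of a fully connected CRF energy over pixel labels x.
% psi_u: unary term built from the per-modality segmentation scores
%        (camera and LiDAR networks); the sum of negative logs is an assumption.
% psi_p: pairwise term coupling pixel pairs via positions p, RGB values I,
%        and densified-LiDAR values D; mu is a label-compatibility function,
%        w_1, w_2 and the theta parameters are illustrative kernel weights/bandwidths.
\begin{align}
E(\mathbf{x}) &= \sum_{i} \psi_u(x_i) + \sum_{i<j} \psi_p(x_i, x_j), \\
\psi_u(x_i) &= -\log P_{\mathrm{cam}}(x_i) - \log P_{\mathrm{lidar}}(x_i), \\
\psi_p(x_i, x_j) &= \mu(x_i, x_j)\Big[\, w_1 \exp\Big(-\frac{\lVert p_i - p_j\rVert^2}{2\theta_\alpha^2} - \frac{\lVert I_i - I_j\rVert^2}{2\theta_\beta^2}\Big) \nonumber \\
&\qquad\quad + w_2 \exp\Big(-\frac{\lVert p_i - p_j\rVert^2}{2\theta_\gamma^2} - \frac{\lVert D_i - D_j\rVert^2}{2\theta_\delta^2}\Big)\Big].
\end{align}

Under an energy of this form, mean-field variational inference iteratively updates per-pixel label distributions, which corresponds to the iterative refinement described in the abstract.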


Bibliographic Details
Main Authors: Lele Wang, Yingping Huang
Affiliation: School of Optical-Electrical and Computer Engineering, University of Shanghai for Science & Technology
Format: Article
Language: English
Published: Nature Portfolio, 2022-07-01
Series: Scientific Reports
ISSN: 2045-2322
Online Access: https://doi.org/10.1038/s41598-022-14438-w