Extrinsic calibration of camera to LIDAR using a differentiable checkerboard model

Bibliographic Details
Main Authors: Fu, LFT, Chebrolu, N, Fallon, M
Format: Conference item
Language: English
Published: IEEE 2023
author Fu, LFT
Chebrolu, N
Fallon, M
collection OXFORD
description Multi-modal sensing often involves determining correspondences between each domain's signals, which in turn depends on the accurate extrinsic calibration of the sensors. Challengingly, the camera-LIDAR sensor modalities are quite dissimilar, and the narrow field of view of most commercial LIDARs means that they observe only a partial view of the camera frustum. We present a framework for extrinsic calibration of a camera and a LIDAR using only a simple off-the-shelf checkerboard. It is designed to operate even when the LIDAR observes a significantly truncated portion of the checkerboard. Current state-of-the-art methods often require bespoke manufactured markers or full observation of the entire checkerboard in both camera and LIDAR data, which is prohibitive. By contrast, our novel algorithm directly aligns the LIDAR intensity pattern to the camera-detected checkerboard pattern using our differentiable formulation. The key steps for achieving accurate extrinsic estimation are the use of the spatial derivatives provided by the differentiable checkerboard pattern and joint optimization over all views. In our experiments, we achieve calibration accuracy on the order of 2-4 mm and demonstrate a 30% error reduction compared to state-of-the-art approaches. We achieve this improvement while using only partial LIDAR views of the checkerboard, which allows for a simpler data capture process. We also demonstrate the generalizability of our approach to different combinations of LIDARs and cameras with varying sparsity patterns and noise levels.
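The description above rests on one core idea: model the checkerboard as a smooth, differentiable intensity pattern so that its spatial derivatives can drive gradient-based alignment of LIDAR intensity samples. The following is a minimal 2D sketch of that idea, not the paper's implementation: the tanh-of-sinusoids pattern, the square size, and the restriction to an in-plane offset (rather than the full 6-DoF extrinsic transform) are illustrative assumptions.

```python
import numpy as np

SQ = 0.10  # assumed checkerboard square size in metres
K = 4.0    # edge sharpness of the smooth step; larger -> closer to a hard edge

def intensity(x, y):
    """Smooth, differentiable checkerboard intensity in [0, 1].

    A hard checkerboard is sign(sin(pi*x/SQ) * sin(pi*y/SQ)); replacing
    sign with tanh(K * .) yields a pattern with usable spatial gradients.
    """
    return 0.5 * (1.0 + np.tanh(K * np.sin(np.pi * x / SQ) * np.sin(np.pi * y / SQ)))

def grad_intensity(x, y):
    """Analytic spatial derivatives (dI/dx, dI/dy) of the smooth pattern."""
    sx, cx = np.sin(np.pi * x / SQ), np.cos(np.pi * x / SQ)
    sy, cy = np.sin(np.pi * y / SQ), np.cos(np.pi * y / SQ)
    sech2 = 1.0 / np.cosh(K * sx * sy) ** 2
    c = 0.5 * K * sech2 * np.pi / SQ
    return c * cx * sy, c * sx * cy

# Synthetic stand-in "LIDAR" samples on the board plane, observed with an
# unknown in-plane offset that the optimization must recover.
rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 0.8, size=(400, 2))
t_true = np.array([0.025, -0.015])
obs = intensity(pts[:, 0] + t_true[0], pts[:, 1] + t_true[1])

# Plain gradient descent on the photometric residual; the analytic pattern
# derivatives supply the gradient, mirroring the paper's key ingredient.
t = np.zeros(2)
for _ in range(1000):
    x, y = pts[:, 0] + t[0], pts[:, 1] + t[1]
    r = intensity(x, y) - obs
    gx, gy = grad_intensity(x, y)
    t -= 2e-4 * np.array([np.mean(2.0 * r * gx), np.mean(2.0 * r * gy)])

print(t)  # converges toward t_true
```

In the paper the optimized variable is the full camera-LIDAR extrinsic transform, estimated jointly over all views; here only a 2D board-plane offset is recovered, to show how a differentiable pattern turns intensity alignment into a problem an off-the-shelf gradient method can solve.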
format Conference item
id oxford-uuid:c2bbbe3f-d77c-4908-89e8-8e2f85c60ea3
institution University of Oxford
language English
publishDate 2023
publisher IEEE
title Extrinsic calibration of camera to LIDAR using a differentiable checkerboard model