Coarse Alignment Methodology of Point Cloud Based on Camera Position/Orientation Estimation Model

This study presents a methodology for the coarse alignment of light detection and ranging (LiDAR) point clouds, which involves estimating the position and orientation of each station using the pinhole camera model and a position/orientation estimation algorithm. Ground control points are obtained using LiDAR camera images, and the point clouds are obtained from the reference station. The estimated position and orientation vectors are used for point cloud registration. To evaluate the accuracy of the results, the positions of the LiDAR and the target were measured using a total station, and a comparison was carried out with the results of semi-automatic registration. The proposed methodology yielded an estimated mean LiDAR position error of 0.072 m, which was similar to the semi-automatic registration value of 0.070 m. When the point clouds of each station were registered using the estimated values, the mean registration accuracy was 0.124 m, while the semi-automatic registration accuracy was 0.072 m. The higher accuracy of semi-automatic registration is due to its capability to perform both coarse alignment and refined registration. A point-to-point distance analysis of the point cloud after refined alignment of the result obtained with the proposed methodology revealed an average distance of 0.0117 m; moreover, 99% of the points exhibited distances within 0.0696 m.
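The methodology summarized above, estimating each station's position and orientation from ground control points via the pinhole camera model and then applying that pose to bring the station's point cloud into the reference frame, can be illustrated with a short Python sketch. This is a hypothetical illustration rather than the authors' implementation: the function names, the use of OpenCV's PnP solver, and the assumption of a calibrated intrinsic matrix K are all editorial assumptions.

import numpy as np
import cv2

def estimate_station_pose(gcp_xyz, gcp_uv, K, dist_coeffs=None):
    # Pinhole-camera pose from 3D ground control points and their 2D image points.
    # gcp_xyz : (N, 3) GCP coordinates in the reference (world) frame
    # gcp_uv  : (N, 2) corresponding pixel coordinates in the LiDAR camera image
    # K       : (3, 3) camera intrinsic matrix, assumed known from calibration
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(gcp_xyz, dtype=np.float64),
        np.asarray(gcp_uv, dtype=np.float64),
        K, dist_coeffs, flags=cv2.SOLVEPNP_ITERATIVE)
    if not ok:
        raise RuntimeError("PnP pose estimation failed")
    R, _ = cv2.Rodrigues(rvec)          # world-to-camera rotation matrix
    station_position = -R.T @ tvec      # camera (station) position in the world frame
    return R, tvec, station_position.ravel()

def coarse_align(points_camera_frame, R, tvec):
    # Map points expressed in the station's camera frame into the world frame
    # by inverting the world-to-camera transform: X_world = R^T (X_cam - t).
    return (R.T @ (points_camera_frame.T - tvec.reshape(3, 1))).T

A pose recovered this way supplies the rigid transform for the coarse alignment; the refined registration reported in the study would be a separate, subsequent step.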

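The accuracy figures quoted at the end of the abstract (a mean point-to-point distance of 0.0117 m, with 99% of points within 0.0696 m) describe a nearest-neighbour distance analysis between the aligned cloud and the reference cloud. A minimal sketch of such a check, assuming SciPy's k-d tree rather than any particular tool used by the authors:

import numpy as np
from scipy.spatial import cKDTree

def point_to_point_stats(aligned_xyz, reference_xyz):
    # Nearest-neighbour distance from every aligned point to the reference cloud,
    # summarised by the mean and the 99th percentile.
    tree = cKDTree(reference_xyz)
    dists, _ = tree.query(aligned_xyz, k=1)
    return float(dists.mean()), float(np.percentile(dists, 99))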

Bibliographic Details
Main Authors: Suhong Yoo, Namhoon Kim
Format: Article
Language: English
Published: MDPI AG, 2023-12-01
Series: Journal of Imaging
Subjects: place recognition; pose estimation; mapping; sensor fusion for localization; LiDAR; coarse alignment
Online Access: https://www.mdpi.com/2313-433X/9/12/279
ISSN: 2313-433X
DOI: 10.3390/jimaging9120279 (Journal of Imaging, Vol. 9, Iss. 12, Article 279)
Author Affiliations:
Suhong Yoo: Department of Drone and GIS Engineering, Namseoul University, 91, Daehak-ro, Seonghwan-eup, Seobuk-gu, Cheonan-si 31020, Republic of Korea
Namhoon Kim: Department of Civil Engineering and Environmental Sciences, Korea Military Academy, 574, Hwarang-ro, Nowon-gu, Seoul 01805, Republic of Korea