Evaluation of RGB-D Multi-Camera Pose Estimation for 3D Reconstruction

Advances in visual sensor devices and computing power are revolutionising the interaction of robots with their environment. Cameras that capture depth information along with a common colour image play a significant role. These devices are cheap, small, and fairly precise. The information provided, particularly point clouds, can be generated in a virtual computing environment, providing complete 3D information for applications. However, off-the-shelf cameras often have a limited field of view, both on the horizontal and vertical axis. In larger environments, it is therefore often necessary to combine information from several cameras or positions. To concatenate multiple point clouds and generate the complete environment information, the pose of each camera must be known in the outer scene, i.e., they must reference a common coordinate system. To achieve this, a coordinate system must be defined, and then every device must be positioned according to this coordinate system. For cameras, a calibration can be performed to find its pose in relation to this coordinate system. Several calibration methods have been proposed to solve this challenge, ranging from structured objects such as chessboards to features in the environment. In this study, we investigate how three different pose estimation methods for multi-camera perspectives perform when reconstructing a scene in 3D. We evaluate the usage of a charuco cube, a double-sided charuco board, and a robot’s tool centre point (TCP) position in a real usage case, where precision is a key point for the system. We define a methodology to identify the points in the 3D space and measure the root-mean-square error (RMSE) based on the Euclidean distance of the actual point to a generated ground-truth point. The reconstruction carried out using the robot’s TCP position produced the best result, followed by the charuco cuboid; the double-sided angled charuco board exhibited the worst performance.
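
The description above hinges on two concrete computations: expressing each camera's point cloud in a common coordinate system using its estimated pose, and scoring the reconstruction by the RMSE of the Euclidean distances between measured points and ground-truth points. The snippet below is a minimal NumPy sketch of both steps; the function names, the 4x4 world-from-camera pose convention, and the synthetic data are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def transform_points(points_cam, T_world_cam):
    """Map an (N, 3) point cloud from camera coordinates into the shared
    world frame using a 4x4 homogeneous camera pose (world <- camera)."""
    ones = np.ones((points_cam.shape[0], 1))
    homogeneous = np.hstack([points_cam, ones])      # (N, 4)
    return (T_world_cam @ homogeneous.T).T[:, :3]    # back to (N, 3)

def merge_point_clouds(clouds, poses):
    """Concatenate per-camera clouds after expressing each one in the
    common coordinate system defined by the calibration."""
    return np.vstack([transform_points(c, T) for c, T in zip(clouds, poses)])

def rmse(measured, ground_truth):
    """Root-mean-square error of the Euclidean distances between matched
    measured points and their ground-truth counterparts, both (N, 3)."""
    distances = np.linalg.norm(measured - ground_truth, axis=1)
    return float(np.sqrt(np.mean(distances ** 2)))

if __name__ == "__main__":
    # Two hypothetical cameras: identity pose and a 1 m translation along x.
    T0 = np.eye(4)
    T1 = np.eye(4)
    T1[0, 3] = 1.0
    cloud0 = np.random.rand(100, 3)   # stand-ins for per-camera RGB-D point clouds
    cloud1 = np.random.rand(100, 3)
    scene = merge_point_clouds([cloud0, cloud1], [T0, T1])
    print(scene.shape)                  # (200, 3)
    print(rmse(scene, scene + 0.002))   # 2 mm offset per axis -> ~3.5 mm RMSE
```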

Bibliographic Details
Main Authors: Ian de Medeiros Esper, Oleh Smolkin, Maksym Manko, Anton Popov, Pål Johan From, Alex Mason
Author Affiliations: Faculty of Science and Technology, Norwegian University of Life Sciences, Universitetstunet 3, 1430 Ås, Norway (Esper, From, Mason); Ciklum Data & Analytics, 03680 Kyiv, Ukraine (Smolkin, Manko, Popov)
Format: Article
Language: English
Published: MDPI AG, 2022-04-01
Series: Applied Sciences
ISSN: 2076-3417
DOI: 10.3390/app12094134
Subjects: pose estimation; robotics; 3D reconstruction; charuco cuboid
Online Access: https://www.mdpi.com/2076-3417/12/9/4134