Kintinuous: Spatially Extended KinectFusion

In this paper we present an extension to the KinectFusion algorithm that permits dense mesh-based mapping of extended-scale environments in real-time. This is achieved through (i) altering the original algorithm such that the region of space being mapped by the KinectFusion algorithm can vary dynamically, (ii) extracting a dense point cloud from the regions that leave the KinectFusion volume due to this variation, and (iii) incrementally adding the resulting points to a triangular mesh representation of the environment. The system is implemented as a set of hierarchical multi-threaded components which are capable of operating in real-time. The architecture facilitates the creation and integration of new modules with minimal impact on the performance of the dense volume tracking and surface reconstruction modules. We provide experimental results demonstrating the system's ability to map areas considerably beyond the scale of the original KinectFusion algorithm, including a two-storey apartment and an extended sequence taken from a car at night. In order to overcome failure of the iterative closest point (ICP) based odometry in areas with few geometric features, we have evaluated the Fast Odometry from Vision (FOVIS) system as an alternative. We provide a comparison between the two approaches, showing a trade-off between the reduced drift of the visual odometry approach and the higher local mesh quality of the ICP-based approach. Finally, we present ongoing work on incorporating full simultaneous localisation and mapping (SLAM) pose-graph optimisation.
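The core mechanism described in the abstract can be summarised as follows: as the camera moves, the fixed-size KinectFusion TSDF volume is re-anchored to follow it, and near-surface voxels that fall outside the re-anchored volume are converted to points and handed to an incremental mesher. The C++ sketch below illustrates that idea in a simplified, single-threaded, CPU-only form; the class and member names (ShiftingTsdfVolume, recentre, shiftVoxels) and the thresholds are hypothetical stand-ins for the GPU-based, multi-threaded pipeline the abstract describes, not the authors' implementation.

```cpp
// Minimal sketch of a camera-following TSDF volume: voxels that leave the
// volume when it shifts are surfaced as points for a growing global map.
// All names and thresholds are illustrative only.
#include <cmath>
#include <cstdio>
#include <vector>

struct Vec3 { float x, y, z; };

// One voxel of a truncated signed distance function (TSDF) volume.
struct Voxel {
    float tsdf   = 1.0f;  // truncated signed distance, 1 == empty space
    float weight = 0.0f;  // accumulated fusion weight
};

class ShiftingTsdfVolume {
public:
    ShiftingTsdfVolume(int resolution, float sizeMetres)
        : res_(resolution),
          voxelSize_(sizeMetres / resolution),
          origin_{0.f, 0.f, 0.f},
          voxels_(static_cast<size_t>(resolution) * resolution * resolution) {}

    // Shift the volume so it stays centred on the camera; near-surface voxels
    // pushed outside are appended to 'extracted' and their space is cleared.
    void recentre(const Vec3& cameraPos, float shiftThreshold,
                  std::vector<Vec3>& extracted) {
        const int sx = shiftVoxels(cameraPos.x - centre().x, shiftThreshold);
        const int sy = shiftVoxels(cameraPos.y - centre().y, shiftThreshold);
        const int sz = shiftVoxels(cameraPos.z - centre().z, shiftThreshold);
        if (sx == 0 && sy == 0 && sz == 0) return;

        // Walk every voxel once: keep those still inside the shifted volume,
        // extract the rest if they carry surface evidence, then reset them.
        std::vector<Voxel> shifted(voxels_.size());
        for (int z = 0; z < res_; ++z)
            for (int y = 0; y < res_; ++y)
                for (int x = 0; x < res_; ++x) {
                    const int nx = x - sx, ny = y - sy, nz = z - sz;
                    const Voxel& v = voxels_[index(x, y, z)];
                    const bool inside = nx >= 0 && nx < res_ && ny >= 0 &&
                                        ny < res_ && nz >= 0 && nz < res_;
                    if (inside) {
                        shifted[index(nx, ny, nz)] = v;
                    } else if (v.weight > 0.f && std::fabs(v.tsdf) < 0.5f) {
                        // Near-surface voxel leaving the volume: emit a point
                        // at its world-space centre for the global mesh.
                        extracted.push_back(worldCentre(x, y, z));
                    }
                }
        voxels_.swap(shifted);
        origin_.x += sx * voxelSize_;
        origin_.y += sy * voxelSize_;
        origin_.z += sz * voxelSize_;
    }

private:
    size_t index(int x, int y, int z) const {
        return (static_cast<size_t>(z) * res_ + y) * res_ + x;
    }
    Vec3 centre() const {
        const float half = 0.5f * res_ * voxelSize_;
        return {origin_.x + half, origin_.y + half, origin_.z + half};
    }
    Vec3 worldCentre(int x, int y, int z) const {
        return {origin_.x + (x + 0.5f) * voxelSize_,
                origin_.y + (y + 0.5f) * voxelSize_,
                origin_.z + (z + 0.5f) * voxelSize_};
    }
    // Convert a displacement into whole voxels once it exceeds the movement
    // threshold, so small camera motion does not shift the volume.
    int shiftVoxels(float displacement, float threshold) const {
        if (std::fabs(displacement) < threshold) return 0;
        return static_cast<int>(displacement / voxelSize_);
    }

    int res_;
    float voxelSize_;
    Vec3 origin_;               // world position of voxel (0, 0, 0)
    std::vector<Voxel> voxels_;
};

int main() {
    ShiftingTsdfVolume volume(128, 3.0f);  // 3 m cube at 128^3 voxels
    std::vector<Vec3> cloud;               // points handed to the mesher
    Vec3 cameraPos{2.4f, 0.f, 0.f};        // pose from ICP or visual odometry
    volume.recentre(cameraPos, /*shiftThreshold=*/0.3f, cloud);
    std::printf("extracted %zu points for meshing\n", cloud.size());
    return 0;
}
```

In the full system, the camera pose driving the shift would come from the ICP-based odometry or the FOVIS visual odometry discussed in the abstract, and the copy, extraction, and meshing steps would run in separate threads, in line with the hierarchical multi-threaded architecture the authors describe.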

Bibliographic Details
Main Authors: Whelan, Thomas; Kaess, Michael; Fallon, Maurice; Johannsson, Hordur; Leonard, John; McDonald, John
Institution: Massachusetts Institute of Technology
Report Number: MIT-CSAIL-TR-2012-020
Extent: 8 p., application/pdf
Published: 2012
Online Access: http://hdl.handle.net/1721.1/71756