DISOCCLUSION OF 3D LIDAR POINT CLOUDS USING RANGE IMAGES
Main Authors:
Format: Article
Language: English
Published: Copernicus Publications, 2017-05-01
Series: ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences
Online Access: http://www.isprs-ann-photogramm-remote-sens-spatial-inf-sci.net/IV-1-W1/75/2017/isprs-annals-IV-1-W1-75-2017.pdf
Summary: This paper proposes a novel framework for the disocclusion of mobile objects in 3D LiDAR scenes acquired via street-based Mobile Mapping Systems (MMS). Most existing lines of research tackle this problem directly in 3D space. This work promotes an alternative approach using a 2D range image representation of the 3D point cloud, taking advantage of the fact that the disocclusion problem has been intensively studied in the 2D image processing community over the past decade. First, the point cloud is turned into a 2D range image by exploiting the sensor's topology. Using the range image, a semi-automatic segmentation procedure based on depth histograms is performed to select the occluding object to be removed. A variational image inpainting technique is then used to reconstruct the area occluded by that object. Finally, the range image is unprojected as a 3D point cloud. Experiments on real data demonstrate the effectiveness of this procedure in terms of both accuracy and speed.
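The first step of the pipeline, projecting the 3D point cloud onto a 2D range image, can be sketched as follows. This is a minimal illustration using a generic spherical projection with a hypothetical ±15° vertical field of view; the paper itself derives the image grid from the sensor's acquisition topology rather than from such a generic model, so function names and parameters here are assumptions.

```python
import numpy as np

def pointcloud_to_range_image(points, h=64, w=1024):
    """Project an (N, 3) point cloud onto an (h, w) range image via a
    spherical mapping (a generic stand-in for the paper's
    sensor-topology-based projection)."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)               # range of each point
    azimuth = np.arctan2(y, x)                       # [-pi, pi]
    elevation = np.arcsin(z / np.maximum(r, 1e-9))   # [-pi/2, pi/2]
    # Hypothetical vertical FOV of +/-15 degrees; a real sensor model
    # would use the scanner's actual beam angles.
    fov_up, fov_down = np.radians(15.0), np.radians(-15.0)
    u = ((azimuth + np.pi) / (2.0 * np.pi) * w).astype(int) % w
    v = ((fov_up - elevation) / (fov_up - fov_down) * h).astype(int)
    img = np.full((h, w), np.inf)                    # inf marks empty pixels
    valid = (v >= 0) & (v < h)
    # Keep the nearest return when several points fall in one pixel.
    np.minimum.at(img, (v[valid], u[valid]), r[valid])
    return img
```

Once the occluding object's pixels are masked and inpainted in this image, unprojecting is the inverse mapping: each pixel's (u, v) gives back the azimuth and elevation, and the inpainted range value places the reconstructed point in 3D.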
ISSN: 2194-9042, 2194-9050