AUTOMATIC FUSION OF PARTIAL RECONSTRUCTIONS
Main Authors:
Format: Article
Language: English
Published: Copernicus Publications, 2012-07-01
Series: ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences
Online Access: https://www.isprs-ann-photogramm-remote-sens-spatial-inf-sci.net/I-3/81/2012/isprsannals-I-3-81-2012.pdf
Summary: Novel image acquisition tools such as micro aerial vehicles (MAVs) in the form of quad- or octo-rotor helicopters support the creation of 3D reconstructions with ground sampling distances below 1 cm. The limitation of aerial photogrammetry to nadir and oblique views at heights of several hundred meters is bypassed, allowing close-up photos of facades and ground features. However, the new acquisition modality also introduces challenges: First, flight space might be restricted in urban areas, which leads to missing views for accurate 3D reconstruction and causes fracturing of large models. This can also happen due to vegetation or simply a change of illumination during image acquisition. Second, accurate geo-referencing of reconstructions is difficult because of shadowed GPS signals in urban areas, so alignment based on GPS information is often not possible.
In this paper, we address the automatic fusion of such partial reconstructions. Our approach is largely based on the work of (Wendel et al., 2011a), but does not require an overhead digital surface model for fusion. Instead, we exploit the fact that patch-based semi-dense reconstruction of the fractured model typically results in several point clouds covering overlapping areas, even if sparse feature correspondences cannot be established. We approximate orthographic depth maps for the individual parts and iteratively align them in a global coordinate system. As a result, we are able to generate point clouds which are visually more appealing and serve as an ideal basis for further processing. Mismatches between parts of the fused models depend only on the individual point density, which allows us to achieve a fusion accuracy in the range of ±1 cm on our evaluation dataset.
ISSN: 2194-9042, 2194-9050
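The fusion idea sketched in the summary (rasterize each partial point cloud into an orthographic depth map, then align the maps in a shared coordinate system) can be illustrated with a minimal toy sketch. This is NOT the authors' implementation: the function names are hypothetical, the nadir projection direction and grid cell size are assumptions, and a brute-force integer grid-shift search stands in for the paper's iterative alignment.

```python
import numpy as np

def ortho_depth_map(points, cell=1.0, bounds=(0.0, 0.0)):
    """Rasterize an (N, 3) point cloud into an orthographic depth map.

    Each grid cell stores the highest z value falling into it; a nadir
    (top-down) projection and a fixed cell size are assumptions here.
    """
    x0, y0 = bounds
    ix = ((points[:, 0] - x0) / cell).astype(int)
    iy = ((points[:, 1] - y0) / cell).astype(int)
    depth = np.full((iy.max() + 1, ix.max() + 1), np.nan)
    for i, j, z in zip(iy, ix, points[:, 2]):
        if np.isnan(depth[i, j]) or z > depth[i, j]:
            depth[i, j] = z
    return depth

def align_shift(d_ref, d_mov, max_shift=5):
    """Find the integer (dy, dx) grid shift of d_mov that minimizes the
    mean absolute depth difference over cells valid in both maps.

    A crude stand-in for the paper's iterative alignment; np.roll wraps
    around at the borders, which is acceptable only for this toy case.
    """
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(d_mov, dy, axis=0), dx, axis=1)
            valid = ~np.isnan(d_ref) & ~np.isnan(shifted)
            if not valid.any():
                continue
            err = np.mean(np.abs(d_ref[valid] - shifted[valid]))
            if err < best_err:
                best_err, best = err, (dy, dx)
    return best
```

Because the comparison operates on rasterized depths rather than sparse feature matches, the attainable accuracy is bounded by the point density per cell, which mirrors the ±1 cm figure quoted in the abstract.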