MONOCULAR DEPTH PREDICTION IN PHOTOGRAMMETRIC APPLICATIONS

Despite the recent success of learning-based monocular depth estimation algorithms and the release of large-scale datasets for training, these methods are limited to depth map prediction and still struggle to yield reliable results in 3D space without additional scene cues. Indeed, although state-of-the-art approaches produce quality depth maps, they generally fail to recover the 3D structure of the scene robustly. This work explores supervised CNN architectures for monocular depth estimation and evaluates their potential in 3D reconstruction. Since most available training datasets are not designed toward this goal and are limited to specific indoor scenarios, a new metric, large-scale synthetic benchmark (ArchDepth) is introduced that renders near-real-world outdoor scenes. An encoder-decoder architecture is used for training, and the generalization of the approach is evaluated via depth inference on unseen views in both synthetic and real-world scenarios. The depth map predictions are also projected into 3D space using a separate module. Results are qualitatively and quantitatively evaluated and compared with state-of-the-art algorithms for single-image 3D scene recovery.
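The abstract states that the predicted depth maps are projected into 3D space by a separate module. The paper's own module is not reproduced here; the sketch below only illustrates the standard back-projection of a depth map into a point cloud under a pinhole camera model. The intrinsics, image size, and function name are illustrative assumptions, not values or code from the paper.

```python
# Minimal sketch (not the authors' module): back-project an HxW metric depth map
# into a 3D point cloud in the camera frame, assuming a pinhole model without
# lens distortion. Intrinsics (fx, fy, cx, cy) are placeholder values.
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Return an (N, 3) array of 3D points from an (H, W) depth map."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx                           # X = (u - cx) * Z / fx
    y = (v - cy) * z / fy                           # Y = (v - cy) * Z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]                 # drop invalid (zero-depth) pixels

# Example with a dummy depth map and assumed intrinsics:
depth = np.random.uniform(1.0, 10.0, size=(480, 640)).astype(np.float32)
cloud = depth_to_point_cloud(depth, fx=525.0, fy=525.0, cx=320.0, cy=240.0)
print(cloud.shape)  # (307200, 3)
```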

Bibliographic Details
Main Authors: M. Welponer, E. K. Stathopoulou, F. Remondino (3D Optical Metrology (3DOM) Unit, Bruno Kessler Foundation (FBK), Trento, Italy)
Format: Article
Language: English
Published: Copernicus Publications, 2022-05-01
Series: The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. XLIII-B2-2022, pp. 469-476
ISSN: 1682-1750, 2194-9034
DOI: 10.5194/isprs-archives-XLIII-B2-2022-469-2022
Online Access: https://www.int-arch-photogramm-remote-sens-spatial-inf-sci.net/XLIII-B2-2022/469/2022/isprs-archives-XLIII-B2-2022-469-2022.pdf