Capturing the geometry of object categories from video supervision

Bibliographic Details
Main Authors: Novotny, D, Larlus, D, Vedaldi, A
Format: Journal article
Language: English
Published: Institute of Electrical and Electronics Engineers 2018
Description
Summary: In this article, we are interested in capturing the 3D geometry of object categories simply by looking around them. Our unsupervised method fundamentally departs from traditional approaches that require either CAD models or manual supervision. It uses only video sequences capturing a handful of instances of an object category to train a deep architecture tailored to extracting 3D geometry predictions. Our deep architecture has three components. First, a Siamese viewpoint factorization network robustly aligns the input videos and, as a consequence, learns to predict the absolute category-specific viewpoint from a single image depicting any previously unseen instance of that category. Second, a depth estimation network performs monocular depth prediction. Finally, a 3D shape completion network predicts the full shape of the depicted object instance by reusing the output of the monocular depth prediction module. We also propose a way to configure these networks so that they produce probabilistic predictions. We demonstrate that, properly used in our framework, this self-assessment mechanism is crucial for obtaining high-quality predictions. Our network achieves state-of-the-art results on viewpoint prediction, depth estimation, and 3D point cloud estimation on public benchmarks.
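
To make the three-stage pipeline described in the summary concrete, the sketch below chains a viewpoint network, a monocular depth network, and a depth-conditioned shape completion network. This is a minimal illustration only: the module names, layer choices, output dimensions, and the use of a predicted per-pixel log-variance as the probabilistic self-assessment are assumptions made for this sketch, not the authors' implementation.

```python
# Illustrative sketch only: module names, shapes, and layer choices are
# hypothetical and do not come from the paper.
import torch
import torch.nn as nn

class ViewpointNet(nn.Module):
    """Predicts an absolute, category-specific viewpoint (here: three angles)
    from a single RGB image. In the paper this role is played by a Siamese
    viewpoint factorization network trained on video frame pairs."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64, 3)  # 3 viewpoint angles (assumption)

    def forward(self, img):
        return self.head(self.encoder(img))

class DepthNet(nn.Module):
    """Monocular depth prediction with an extra log-variance channel, one
    common way to obtain probabilistic / self-assessed per-pixel outputs."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 2, 3, padding=1),  # channel 0: depth, channel 1: log-variance
        )

    def forward(self, img):
        out = self.body(img)
        depth, log_var = out[:, :1], out[:, 1:]
        return depth, log_var

class ShapeCompletionNet(nn.Module):
    """Predicts the full object shape as a point cloud, conditioned on the
    partial geometry implied by the predicted depth map."""
    def __init__(self, num_points=1024):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.decoder = nn.Linear(64, num_points * 3)
        self.num_points = num_points

    def forward(self, depth):
        code = self.encoder(depth)
        return self.decoder(code).view(-1, self.num_points, 3)

# Chained inference: image -> viewpoint, depth (+ confidence), completed point cloud.
img = torch.randn(1, 3, 128, 128)
viewpoint = ViewpointNet()(img)
depth, log_var = DepthNet()(img)
points = ShapeCompletionNet()(depth)
print(viewpoint.shape, depth.shape, points.shape)  # (1, 3) (1, 1, 128, 128) (1, 1024, 3)
```

In this sketch the completion module consumes the predicted depth map directly, mirroring the way the completion network described in the summary reuses the output of the monocular depth prediction module; how the probabilistic outputs are trained and weighted is left out and would follow the paper, not this illustration.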