Neural fields for co-reconstructing 3D objects from incidental 2D data

Bibliographic Details
Main Authors: Campbell, D, Insafutdinov, E, Henriques, JF, Vedaldi, A
Format: Conference item
Language: English
Published: IEEE 2024
Description
Summary: We ask whether 3D objects can be reconstructed from real-world data collected for some other purpose, such as autonomous driving or augmented reality, thus inferring objects only incidentally. 3D reconstruction from incidental data is a major challenge because, in addition to significant noise, only a few views of each object are observed, which are insufficient for reconstruction. We approach this problem as a co-reconstruction task, where multiple objects are reconstructed together, learning shape and appearance priors for regularization. To do so, we introduce a neural radiance field that is conditioned via an attention mechanism on the identity of the individual objects. We further disentangle shape from appearance, and diffuse color from specular color, via an asymmetric two-stream network, which factors shared information from instance-specific details. We demonstrate the ability of this method to reconstruct full 3D objects from partial, incidental observations in autonomous driving and other datasets.
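
The summary describes the architecture only at a high level. As a rough illustration, and not the authors' released code, the following is a minimal PyTorch sketch of the general idea: a single radiance field shared across objects, conditioned on a learned per-object identity code via attention, with an asymmetric split between a shared stream (density and diffuse color) and an instance-specific stream (specular color). All names (CoReconField, instance_codes, code_dim, and so on) are hypothetical, and details such as positional encoding and view direction are omitted.

```python
# Hypothetical sketch of identity-conditioned co-reconstruction; not the authors' implementation.
import torch
import torch.nn as nn


class CoReconField(nn.Module):
    """Radiance field shared across object instances, conditioned on object identity.

    A learned embedding table stores one latent code per object; cross-attention lets
    each 3D query point attend to its object's code, so shape/appearance priors are
    shared across instances while instance-specific details remain separable.
    """

    def __init__(self, num_objects: int, code_dim: int = 64, hidden: int = 128):
        super().__init__()
        self.instance_codes = nn.Embedding(num_objects, code_dim)  # one code per object
        self.point_encoder = nn.Linear(3, hidden)  # stand-in for a positional encoding
        self.attn = nn.MultiheadAttention(hidden, num_heads=4, kdim=code_dim,
                                          vdim=code_dim, batch_first=True)
        # Asymmetric two-stream design: a shared trunk for geometry and diffuse color,
        # and a lighter instance-specific head for specular / residual appearance.
        self.shared_stream = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                           nn.Linear(hidden, hidden), nn.ReLU())
        self.instance_stream = nn.Sequential(nn.Linear(hidden + code_dim, hidden), nn.ReLU())
        self.density_head = nn.Linear(hidden, 1)
        self.diffuse_head = nn.Linear(hidden, 3)
        self.specular_head = nn.Linear(hidden, 3)

    def forward(self, xyz: torch.Tensor, obj_id: torch.Tensor):
        # xyz: (B, N, 3) sampled points along rays; obj_id: (B,) object indices.
        code = self.instance_codes(obj_id).unsqueeze(1)            # (B, 1, code_dim)
        feat = torch.relu(self.point_encoder(xyz))                 # (B, N, hidden)
        # Condition point features on object identity via attention over the code.
        feat, _ = self.attn(query=feat, key=code, value=code)
        shared = self.shared_stream(feat)                          # shared shape/appearance prior
        code_per_point = code.expand(-1, xyz.shape[1], -1)
        inst = self.instance_stream(torch.cat([shared, code_per_point], dim=-1))
        sigma = self.density_head(shared)                          # density from the shared stream
        diffuse = torch.sigmoid(self.diffuse_head(shared))         # diffuse color, shared stream
        specular = torch.sigmoid(self.specular_head(inst))         # specular color, instance stream
        return sigma, diffuse, specular


# Example usage with random inputs: 4 objects, 256 points each.
model = CoReconField(num_objects=4)
sigma, diffuse, specular = model(torch.rand(4, 256, 3), torch.arange(4))
```

In this sketch, sharing the trunk across all instances is what provides the regularizing prior when each object is seen from only a few views, while the per-object codes and the instance stream carry the details that distinguish one object from another.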