Building 3D Generative Models from Minimal Data
Abstract: We propose a method for constructing generative models of 3D objects from a single 3D mesh and improving them through unsupervised low-shot learning from 2D images. Our method produces a 3D morphable model that represents shape and albedo in terms of Gaussian processes. Whereas...
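The abstract describes representing shape as a Gaussian process over a template mesh. As a rough illustration (not the paper's implementation), a GP shape prior can be sampled by drawing correlated per-vertex offsets from a kernel built over the template's vertex positions; all names and parameter values below are invented for the sketch.

```python
import numpy as np

def rbf_kernel(X, lengthscale=0.5, variance=0.01):
    # Squared-exponential kernel over template vertex positions:
    # nearby vertices deform together, giving smooth shape variation.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def sample_shape(template, rng, jitter=1e-8):
    # Draw one correlated deformation field per coordinate axis
    # and add it to the template mesh vertices.
    n = template.shape[0]
    K = rbf_kernel(template) + jitter * np.eye(n)  # jitter for stability
    L = np.linalg.cholesky(K)
    offsets = L @ rng.standard_normal((n, 3))
    return template + offsets

rng = np.random.default_rng(0)
template = rng.random((50, 3))       # stand-in for mesh vertices
sample = sample_shape(template, rng)
print(sample.shape)                  # (50, 3)
```

Each call to `sample_shape` yields a new plausible mesh; an albedo prior could be sketched the same way with a kernel over per-vertex color values.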
Main Authors: Sutherland, Skylar; Egger, Bernhard; Tenenbaum, Joshua
Other Authors: Massachusetts Institute of Technology. Department of Brain and Cognitive Sciences
Format: Article
Language: English
Published: Springer US, 2023
Online Access: https://hdl.handle.net/1721.1/152192
Similar Items
- Building 3D Morphable Models from a Single Scan
  by: Sutherland, Skylar, et al.
  Published: (2023)
- Identity-Expression Ambiguity in 3D Morphable Face Models
  by: Egger, Bernhard, et al.
  Published: (2023)
- Synthesizing 3D Shapes via Modeling Multi-view Depth Maps and Silhouettes with Deep Generative Networks
  by: Soltani, Amir Arsalan, et al.
  Published: (2020)
- MOST-GAN: 3D Morphable StyleGAN for Disentangled Face Image Manipulation
  by: Medin, Safa C, et al.
  Published: (2023)
- Learning a probabilistic latent space of object shapes via 3D generative-adversarial modeling
  by: Wu, Jiajun, et al.
  Published: (2017)