Learning 3D object categories by looking around them
Traditional approaches for learning 3D object categories use either synthetic data or manual supervision. In this paper, we propose a method which does not require manual annotations and is instead cued by observing objects from a moving vantage point. Our system builds on two innovations: a Siamese...
| Main authors: | Novotny, D, Larlus, D, Vedaldi, A |
|---|---|
| Format: | Conference item |
| Language: | English |
| Published: | IEEE, 2017 |
Similar items
- Capturing the geometry of object categories from video supervision
  By: Novotny, D, et al.
  Published: (2018)
- I have seen enough: Transferring parts across categories
  By: Novotny, D, et al.
  Published: (2016)
- Learning the semantic structure of objects from Web supervision
  By: Novotny, D, et al.
  Published: (2016)
- Unsupervised learning of 3D object categories from videos in the wild
  By: Henzler, P, et al.
  Published: (2021)
- NeuralDiff: Segmenting 3D objects that move in egocentric videos
  By: Tschernezki, V, et al.
  Published: (2022)