You only look at one: category-level object representations for pose estimation from a single example
To interact meaningfully with the world, robot manipulators must be able to interpret the objects they encounter. A critical aspect of this interpretation is pose estimation: inferring quantities that describe the position and orientation of an object in 3D space. Most existing approaches to pose...
Main Authors: Goodwin, W; Havoutis, I; Posner, I
Format: Conference item
Language: English
Published: Proceedings of Machine Learning Research, 2023
Similar Items
- Zero-shot category-level object pose estimation
  by: Goodwin, W, et al. Published: (2022)
- Accurate Object Pose Estimation Using Depth Only
  by: Mingyu Li, et al. Published: (2018-03-01)
- Semantically grounded object matching for robust robotic scene rearrangement
  by: Goodwin, W, et al. Published: (2022)
- Object Detection for Safety Attire Using YOLO (You Only Look Once)
  by: Afifuddin Arif, Shihabuddin Arip, et al. Published: (2024)
- Lightweight You Only Look Once v8: An Upgraded You Only Look Once v8 Algorithm for Small Object Identification in Unmanned Aerial Vehicle Images
  by: Zhongmin Huangfu, et al. Published: (2023-11-01)