DragAPart: learning a part-level motion prior for articulated objects

We introduce DragAPart, a method that, given an image and a set of drags as input, generates a new image of the same object that responds to the action of the drags. Differently from prior works that focused on repositioning objects, DragAPart predicts part-level interactions, such as opening and closing a drawer.

Detailed Description

Bibliographic Details
Main Authors: Li, R, Zheng, C, Rupprecht, C, Vedaldi, A
Format: Internet publication
Language: English
Published: 2024