Connecting Touch and Vision via Cross-Modal Prediction
© 2019 IEEE. Humans perceive the world using multi-modal sensory inputs such as vision, audition, and touch. In this work, we investigate the cross-modal connection between vision and touch. The main challenge in this cross-domain modeling task lies in the significant scale discrepancy between the t...
Main Authors: Li, Yunzhu; Zhu, Jun-Yan; Tedrake, Russ; Torralba, Antonio
Other Authors: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
Format: Article
Language: English
Published: Institute of Electrical and Electronics Engineers (IEEE), 2021
Online Access: https://hdl.handle.net/1721.1/137632
Similar Items
- Tracking objects with point clouds from vision and touch
  by: Izatt, Gregory R., et al.
  Published: (2017)
- Cross-Modal Scene Networks
  by: Aytar, Yusuf, et al.
  Published: (2021)
- CONNECTING WITH A HUMAN TOUCH
  by: MPRC, Pusat Media & Perhubungan Awam
  Published: (2016)
- Cross-Modal Scene Networks
  by: Aytar, Yusuf, et al.
  Published: (2022)
- Motion Aftereffects Transfer between Touch and Vision
  by: Konkle, Talia A., et al.
  Published: (2015)