Video understanding using multimodal deep learning
Our experience of the world is multimodal; however, deep learning networks have traditionally been designed for and trained on unimodal inputs such as images, audio segments, or text. In this thesis we develop strategies to exploit multimodal information (in the form of vision, text, speech a...
Primary author: Nagrani, A
Other authors: Zisserman, A
Format: Thesis
Language: English
Published: 2020
Subjects:
Similar books
- Sign language understanding using multimodal learning
  By: Momeni, L
  Published: (2024)
- Understanding Multimodal Popularity Prediction of Social Media Videos With Self-Attention
  By: Adam Bielski, et al.
  Published: (2018-01-01)
- End-to-end learning, and audio-visual human-centric video understanding
  By: Brown, A
  Published: (2022)
- Holistic image understanding with deep learning and dense random fields
  By: Zheng, S
  Published: (2016)
- Learning with multimodal self-supervision
  By: Chen, H
  Published: (2021)