Towards robust and efficient multimodal representation learning and fusion
Multimodal learning has made significant progress in the past few years. Its goal is to build models that can relate and process data from various modalities. One challenge is to learn useful representations efficiently given the heterogeneity of the data. Another is...
Main Author: Guo, Xiaobao
Other Authors: Kong Wai-Kin Adams
Format: Thesis-Doctor of Philosophy
Language: English
Published: Nanyang Technological University, 2025
Subjects:
Online Access: https://hdl.handle.net/10356/182226
Similar Items
- Multimodal sentiment analysis using hierarchical fusion with context modeling
  by: Majumder, Navonil, et al.
  Published: (2020)
- Multimodal fusion for in-car human action recognition
  by: He, Hao
  Published: (2024)
- Data efficient deep multimodal learning
  by: Shen, Meng
  Published: (2025)
- Fusing pairwise modalities for emotion recognition in conversations
  by: Fan, Chunxiao, et al.
  Published: (2024)
- KnowleNet: knowledge fusion network for multimodal sarcasm detection
  by: Yue, Tan, et al.
  Published: (2023)