Robust Multimodal Emotion Recognition from Conversation with Transformer-Based Crossmodality Fusion
Decades of scientific research have been devoted to developing and evaluating methods for automated emotion recognition. With rapidly advancing technology, a wide range of emerging applications require recognition of the user's emotional state. This paper investigates a robust appr...
| Main Authors: | Baijun Xie, Mariia Sidulova, Chung Hyuk Park |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2021-07-01 |
| Series: | Sensors |
| Subjects: | |
| Online Access: | https://www.mdpi.com/1424-8220/21/14/4913 |
Similar Items
- Multimodal Attention Network for Continuous-Time Emotion Recognition Using Video and EEG Signals
  by: Dong Yoon Choi, et al. Published: (2020-01-01)
- A Framework to Evaluate Fusion Methods for Multimodal Emotion Recognition
  by: Diego Pena, et al. Published: (2023-01-01)
- Multimodal Emotion Recognition Fusion Analysis Adapting BERT With Heterogeneous Feature Unification
  by: Sanghyun Lee, et al. Published: (2021-01-01)
- Cross-Subject Multimodal Emotion Recognition Based on Hybrid Fusion
  by: Yucel Cimtay, et al. Published: (2020-01-01)
- Multimodal Emotion Recognition and Sentiment Analysis Using Masked Attention and Multimodal Interaction
  by: Tatiana Voloshina, et al. Published: (2023-05-01)