Multi-Modality Emotion Recognition Model with GAT-Based Multi-Head Inter-Modality Attention
Emotion recognition has been gaining attention in recent years due to its applications in artificial agents. To achieve good performance on this task, much research has been conducted on multi-modality emotion recognition models that leverage the different strengths of each modality. However...
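The title refers to GAT-based multi-head inter-modality attention, i.e. graph-attention-style fusion across modality nodes. Below is a minimal NumPy sketch of generic multi-head graph attention (GAT) applied to a fully connected graph of modality embeddings; the dimensions, head count, and the idea of treating each modality as one node are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def leaky_relu(x, slope=0.2):
    return np.where(x > 0, x, slope * x)

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def gat_head(H, W, a):
    """One GAT attention head on a fully connected modality graph.

    H: (N, F) node features; W: (F, Fh) projection; a: (2*Fh,) attention vector.
    """
    Wh = H @ W                     # (N, Fh) projected node features
    Fh = Wh.shape[1]
    # e_ij = LeakyReLU(a^T [Wh_i || Wh_j]), computed via broadcasting
    src = Wh @ a[:Fh]              # (N,) source-node contribution
    dst = Wh @ a[Fh:]              # (N,) target-node contribution
    e = leaky_relu(src[:, None] + dst[None, :])  # (N, N) attention logits
    alpha = softmax(e, axis=1)     # normalize over neighbors
    return alpha @ Wh              # (N, Fh) attention-weighted aggregation

def multi_head_gat(H, n_heads=4, Fh=8):
    """Concatenate n_heads independent GAT heads (multi-head fusion sketch)."""
    F = H.shape[1]
    outs = []
    for _ in range(n_heads):
        W = rng.normal(scale=0.1, size=(F, Fh))
        a = rng.normal(scale=0.1, size=(2 * Fh,))
        outs.append(gat_head(H, W, a))
    return np.concatenate(outs, axis=1)  # (N, n_heads * Fh)

# Three hypothetical "modality" nodes (e.g. audio, visual, text embeddings)
H = rng.normal(size=(3, 16))
fused = multi_head_gat(H, n_heads=4, Fh=8)
print(fused.shape)  # (3, 32)
```

Each head attends over all modality pairs, so every modality's representation is updated with information from the others; concatenating heads lets different heads specialize in different inter-modality interactions.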
Main Authors: Changzeng Fu, Chaoran Liu, Carlos Toshinori Ishi, Hiroshi Ishiguro
Format: Article
Language: English
Published: MDPI AG, 2020-08-01
Series: Sensors
Online Access: https://www.mdpi.com/1424-8220/20/17/4894
Similar Items
- Skeleton-Based Emotion Recognition Based on Two-Stream Self-Attention Enhanced Spatial-Temporal Graph Convolutional Network, by Jiaqi Shi, et al. (2020-12-01)
- A Multi-Modal Entity Alignment Method with Inter-Modal Enhancement, by Song Yuan, et al. (2023-04-01)
- Facial Emotion Recognition with Inter-Modality-Attention-Transformer-Based Self-Supervised Learning, by Aayushi Chaudhari, et al. (2023-01-01)
- Mixture of Attention Variants for Modal Fusion in Multi-Modal Sentiment Analysis, by Chao He, et al. (2024-01-01)
- Cross-Modality Learning by Exploring Modality Interactions for Emotion Reasoning, by Thi-Dung Tran, et al. (2023-01-01)