MMATERIC: Multi-Task Learning and Multi-Fusion for AudioText Emotion Recognition in Conversation
The accurate recognition of emotions in conversations helps reveal a speaker's intentions and supports various analyses in artificial intelligence, especially in human–computer interaction systems. However, most previous methods lack the ability to track the different emotional states of e...
Main Authors: Xingwei Liang, You Zou, Xinnan Zhuang, Jie Yang, Taiyu Niu, Ruifeng Xu
Format: Article
Language: English
Published: MDPI AG, 2023-03-01
Series: Electronics
Subjects:
Online Access: https://www.mdpi.com/2079-9292/12/7/1534
Similar Items
- Knowledge enhancement for speech emotion recognition via multi-level acoustic feature
  by: Huan Zhao, et al.
  Published: (2024-12-01)
- Multi-Hypergraph Neural Networks for Emotion Recognition in Multi-Party Conversations
  by: Haojie Xu, et al.
  Published: (2023-01-01)
- Multi-Corpus Learning for Audio–Visual Emotions and Sentiment Recognition
  by: Elena Ryumina, et al.
  Published: (2023-08-01)
- A Multi-Scale Multi-Task Learning Model for Continuous Dimensional Emotion Recognition from Audio
  by: Xia Li, et al.
  Published: (2022-01-01)
- Multi-Label Multimodal Emotion Recognition With Transformer-Based Fusion and Emotion-Level Representation Learning
  by: Hoai-Duy Le, et al.
  Published: (2023-01-01)