End-to-End Modeling and Transfer Learning for Audiovisual Emotion Recognition in-the-Wild
As emotions play a central role in human communication, automatic emotion recognition has attracted increasing attention over the last two decades. While multimodal systems achieve high performance on lab-controlled data, they are still far from providing ecological validity on non-lab-controlled, name...
Main Authors: Denis Dresvyanskiy, Elena Ryumina, Heysem Kaya, Maxim Markitantov, Alexey Karpov, Wolfgang Minker
Format: Article
Language: English
Published: MDPI AG, 2022-01-01
Series: Multimodal Technologies and Interaction
Online Access: https://www.mdpi.com/2414-4088/6/2/11
Similar Items
- Multi-Corpus Learning for Audio–Visual Emotions and Sentiment Recognition, by Elena Ryumina, et al. Published: (2023-08-01)
- Exploiting EEG Signals and Audiovisual Feature Fusion for Video Emotion Recognition, by Baixi Xing, et al. Published: (2019-01-01)
- Deep Multimodal Representation Learning: A Survey, by Wenzhong Guo, et al. Published: (2019-01-01)
- A Hybrid Latent Space Data Fusion Method for Multimodal Emotion Recognition, by Shahla Nemati, et al. Published: (2019-01-01)
- Deep Multimodal Emotion Recognition on Human Speech: A Review, by Panagiotis Koromilas, et al. Published: (2021-08-01)