Multimodal Affect Models: An Investigation of Relative Salience of Audio and Visual Cues for Emotion Prediction
People perceive emotions via multiple cues, predominantly speech and visual cues, and many emotion recognition systems therefore utilize both audio and visual cues. Moreover, the perception of static aspects of emotion (e.g., whether a speaker's arousal level is high or low) and the dynamic aspects of emotion (speak...
| Main Authors: | Jingyao Wu, Ting Dang, Vidhyasaharan Sethu, Eliathamby Ambikairajah |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Frontiers Media S.A., 2021-12-01 |
| Series: | Frontiers in Computer Science |
| Online Access: | https://www.frontiersin.org/articles/10.3389/fcomp.2021.767767/full |
Similar Items
- Multimodal Emotion Recognition and Sentiment Analysis Using Masked Attention and Multimodal Interaction
  by: Tatiana Voloshina, et al. Published: (2023-05-01)
- Deep Multimodal Emotion Recognition on Human Speech: A Review
  by: Panagiotis Koromilas, et al. Published: (2021-08-01)
- Training Emotion Recognition Accuracy: Results for Multimodal Expressions and Facial Micro Expressions
  by: Lillian Döllinger, et al. Published: (2021-08-01)
- A Framework to Evaluate Fusion Methods for Multimodal Emotion Recognition
  by: Diego Pena, et al. Published: (2023-01-01)
- Emotional pictures and sounds: A review of multimodal interactions of emotion cues in multiple domains
  by: Antje B M Gerdes, et al. Published: (2014-12-01)