Self-supervised utterance order prediction for emotion recognition in conversations


Bibliographic Details
Main Authors: Jiang, Dazhi, Liu, Hao, Tu, Geng, Wei, Runguo, Cambria, Erik
Other Authors: School of Computer Science and Engineering
Format: Journal Article
Language: English
Published: 2024
Online Access:https://hdl.handle.net/10356/175849
Description
Summary: As the order of utterances in a conversation changes, the meaning of an utterance also changes, sometimes yielding different semantics or emotions. However, existing representation learning models pay little attention to capturing the internal semantic differences of an utterance caused by changes in utterance order. Motivated by this, we build a self-supervised utterance order prediction approach that learns the logical order of utterances, which helps capture the deep semantic relationship between adjacent utterances. Specifically, a pair of adjacent utterances, either ordered or disordered, is fed to the self-supervised model so that it acquires a robust ability to represent the semantic differences between adjacent sentences. The self-supervised method is applied to the downstream conversational emotion recognition task to assess its value. The features extracted from the self-supervised model are fused with multimodal features to obtain a richer utterance representation. Emotion recognition models are then applied to two different datasets. The experimental results show that our proposed approach outperforms the current state of the art on ERC benchmark datasets.
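The pretext-task data construction described above (adjacent utterance pairs labeled as ordered or disordered) can be sketched as follows. This is a minimal illustration of one plausible way to build such training pairs, not the authors' exact implementation; the function name, `swap_prob` parameter, and labeling convention (1 = ordered, 0 = disordered) are assumptions for the sketch.

```python
import random

def build_order_prediction_pairs(utterances, swap_prob=0.5, seed=0):
    """Build a self-supervised order-prediction dataset from a dialogue.

    Each adjacent utterance pair is either kept in its original order
    (label 1) or swapped (label 0); a binary classifier trained on these
    pairs must learn the semantic cues that distinguish the two orders.
    """
    rng = random.Random(seed)  # seeded for reproducibility
    examples = []
    for u1, u2 in zip(utterances, utterances[1:]):
        if rng.random() < swap_prob:
            examples.append(((u2, u1), 0))  # disordered pair
        else:
            examples.append(((u1, u2), 1))  # ordered pair
    return examples

dialogue = ["How are you?", "I'm fine, thanks.", "Glad to hear it."]
for (a, b), label in build_order_prediction_pairs(dialogue):
    print(label, "|", a, "->", b)
```

The labeled pairs would then feed a binary classification head on top of an utterance encoder; the encoder's learned representations are what get fused with the multimodal features downstream.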