Attention-based Multi-modal Sentiment Analysis and Emotion Detection in Conversation using RNN
With the availability of an enormous quantity of multimodal data and its widespread applications, automatic sentiment analysis and emotion classification in conversation have become an interesting research topic in the research community. The interlocutor state, the contextual state between neighboring utterances, and multimodal fusion play an important role in multimodal sentiment analysis and emotion detection in conversation. In this article, a recurrent neural network (RNN) based method is developed to capture the interlocutor state and the contextual state between utterances. A pairwise attention mechanism is used to understand the relationship between the modalities and their importance before fusion. First, modalities are fused two at a time, and finally all the modalities are fused to form a trimodal representation feature vector. Experiments are conducted on three standard datasets: IEMOCAP, CMU-MOSEI, and CMU-MOSI. The proposed model is evaluated using two metrics, accuracy and F1-score, and the results demonstrate that it performs better than the standard baselines.
Main Authors: | Mahesh G. Huddar, Sanjeev S. Sannakki, Vijay S. Rajpurohit |
Format: | Article |
Language: | English |
Published: | Universidad Internacional de La Rioja (UNIR), 2021-05-01 |
Series: | International Journal of Interactive Multimedia and Artificial Intelligence |
Subjects: | attention model; interlocutor state; context awareness; emotion recognition; multimodal; sentiment analysis |
Online Access: | https://www.ijimai.org/journal/bibcite/reference/2800 |
author | Mahesh G. Huddar; Sanjeev S. Sannakki; Vijay S. Rajpurohit |
collection | DOAJ |
description | With the availability of an enormous quantity of multimodal data and its widespread applications, automatic sentiment analysis and emotion classification in conversation have become an interesting research topic in the research community. The interlocutor state, the contextual state between neighboring utterances, and multimodal fusion play an important role in multimodal sentiment analysis and emotion detection in conversation. In this article, a recurrent neural network (RNN) based method is developed to capture the interlocutor state and the contextual state between utterances. A pairwise attention mechanism is used to understand the relationship between the modalities and their importance before fusion. First, modalities are fused two at a time, and finally all the modalities are fused to form a trimodal representation feature vector. Experiments are conducted on three standard datasets: IEMOCAP, CMU-MOSEI, and CMU-MOSI. The proposed model is evaluated using two metrics, accuracy and F1-score, and the results demonstrate that it performs better than the standard baselines. |
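The description above is a high-level summary of the architecture. As an illustrative aid only, the following is a minimal PyTorch sketch of the pairwise attention fusion idea it describes: two modality sequences are projected to a shared space, one attends over the other, and a GRU models the contextual state across neighboring utterances. All names, dimensions, and layer choices here are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of pairwise attention fusion for two modalities.
# Dimensions, names, and the GRU context model are illustrative assumptions.
import torch
import torch.nn as nn

class PairwiseAttentionFusion(nn.Module):
    """Fuse two modality sequences with cross-modal (pairwise) attention."""
    def __init__(self, dim_a: int, dim_b: int, hidden: int = 128):
        super().__init__()
        self.proj_a = nn.Linear(dim_a, hidden)   # project modality A features
        self.proj_b = nn.Linear(dim_b, hidden)   # project modality B features
        # GRU over the bimodal sequence captures context between utterances.
        self.context_rnn = nn.GRU(2 * hidden, hidden, batch_first=True)

    def forward(self, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        # a: (batch, utterances, dim_a), b: (batch, utterances, dim_b)
        ha, hb = self.proj_a(a), self.proj_b(b)
        # Pairwise attention: score every A utterance against every B utterance.
        scores = torch.bmm(ha, hb.transpose(1, 2))   # (batch, seq, seq)
        attn_ab = torch.softmax(scores, dim=-1)      # A attends to B
        attended_b = torch.bmm(attn_ab, hb)          # (batch, seq, hidden)
        # Bimodal representation: modality A concatenated with attended B.
        bimodal = torch.cat([ha, attended_b], dim=-1)
        fused, _ = self.context_rnn(bimodal)
        return fused                                 # (batch, seq, hidden)

# Example: fuse text (300-d) and audio (74-d) utterance features.
fusion = PairwiseAttentionFusion(dim_a=300, dim_b=74)
text = torch.randn(8, 20, 300)    # 8 dialogues, 20 utterances each
audio = torch.randn(8, 20, 74)
out = fusion(text, audio)         # (8, 20, 128) bimodal context features
```

In the scheme the abstract describes, such a block would be applied to each modality pair (e.g., text-audio, text-video, audio-video) and the resulting bimodal representations combined into a trimodal feature vector; the sketch covers only a single pair.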
format | Article |
id | doaj.art-c37fe1c7c2ff474e99cf1aaa50aca1d3 |
institution | Directory Open Access Journal |
issn | 1989-1660 |
language | English |
publishDate | 2021-05-01 |
publisher | Universidad Internacional de La Rioja (UNIR) |
series | International Journal of Interactive Multimedia and Artificial Intelligence |
citation | International Journal of Interactive Multimedia and Artificial Intelligence, vol. 6, no. 6, pp. 112-121, 2021-05-01. DOI: 10.9781/ijimai.2020.07.004 |
title | Attention-based Multi-modal Sentiment Analysis and Emotion Detection in Conversation using RNN |
topic | attention model interlocutor state context awareness emotion recognition multimodal sentiment analysis |
url | https://www.ijimai.org/journal/bibcite/reference/2800 |