Sentiment Analysis of Social Media via Multimodal Feature Fusion


Bibliographic Details
Main Authors: Kang Zhang, Yushui Geng, Jing Zhao, Jianxin Liu, Wenxiao Li
Format: Article
Language: English
Published: MDPI AG 2020-12-01
Series: Symmetry
Online Access: https://www.mdpi.com/2073-8994/12/12/2010
Description
Summary: In recent years, with the growing popularity of social media, users increasingly express their feelings and opinions through pictures and text, making multimodal text-image data the fastest-growing content type. Most information posted by users on social media carries clear sentiment, so multimodal sentiment analysis has become an important research field. Previous studies on multimodal sentiment analysis have primarily extracted text and image features separately and then combined them for sentiment classification, often ignoring the interaction between text and images. This paper therefore proposes a new multimodal sentiment analysis model. The model first removes noise from the textual data and extracts the more salient image features. Then, in the attention-based feature-fusion stage, the text and image modalities learn each other's internal features through a symmetric structure, and the fused features are applied to the sentiment classification task. Experimental results on two common multimodal sentiment datasets demonstrate the effectiveness of the proposed model.
ISSN:2073-8994
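The symmetric attention-based fusion described in the summary can be sketched roughly as follows. This is a minimal NumPy illustration of the general idea (text attends over image-region features and vice versa, then the attended features are pooled and concatenated), not the authors' implementation; the feature dimensions, pooling, and scaled dot-product form are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(query, context):
    """Each query vector attends over all context vectors
    via scaled dot-product attention."""
    d = query.shape[-1]
    scores = query @ context.T / np.sqrt(d)   # (n_query, n_context)
    weights = softmax(scores, axis=-1)        # rows sum to 1
    return weights @ context                  # (n_query, d)

rng = np.random.default_rng(0)
text_feats = rng.normal(size=(8, 64))    # e.g. 8 token features (assumed sizes)
image_feats = rng.normal(size=(4, 64))   # e.g. 4 image-region features

# Symmetric fusion: text attends to image regions, and images attend to text.
text_attended = cross_attention(text_feats, image_feats)    # (8, 64)
image_attended = cross_attention(image_feats, text_feats)   # (4, 64)

# Mean-pool each modality and concatenate for a sentiment classifier head.
fused = np.concatenate([text_attended.mean(axis=0),
                        image_attended.mean(axis=0)])       # (128,)
print(fused.shape)
```

The symmetry lies in applying the same cross-attention operation in both directions, so each modality's representation is conditioned on the other before classification.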