Hybrid Uncertainty Calibration for Multimodal Sentiment Analysis

Bibliographic Details
Main Authors: Qiuyu Pan, Zuqiang Meng
Format: Article
Language: English
Published: MDPI AG, 2024-02-01
Series: Electronics
Subjects: hybrid uncertainty calibration; multimodal sentiment analysis; uncertainty-aware late fusion; expected uncertainty calibration error; noise
Online Access:https://www.mdpi.com/2079-9292/13/3/662
Description: In open environments, multimodal sentiment analysis (MSA) often suffers from low-quality data and can be disrupted by noise, inherent defects, and outliers. In some cases, unreasonable multimodal fusion methods can perform worse than unimodal methods. Another challenge of MSA is effectively enabling the model to provide accurate predictions when it is confident and to indicate high uncertainty when its predictions are likely to be inaccurate. In this paper, we propose an uncertainty-aware late fusion based on hybrid uncertainty calibration (ULF-HUC). Firstly, we conduct in-depth research on the issue of sentiment polarity distribution in MSA datasets, establishing a foundation for an uncertainty-aware late fusion method, which facilitates organic fusion of modalities. Then, we propose a hybrid uncertainty calibration method based on evidential deep learning (EDL) that balances accuracy and uncertainty, supporting the reduction of uncertainty in each modality of the model. Finally, we add two common types of noise to validate the effectiveness of our proposed method. We evaluate our model on three publicly available MSA datasets (MVSA-Single, MVSA-Multiple, and MVSA-Single-Small). Our method outperforms state-of-the-art approaches in terms of accuracy, weighted F1 score, and expected uncertainty calibration error (UCE) metrics, proving the effectiveness of the proposed method.
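The abstract names three ingredients that can be sketched concretely: evidential deep learning (EDL), which turns per-class evidence into Dirichlet-based class probabilities plus an explicit uncertainty mass; a late fusion that down-weights uncertain modalities; and the uncertainty calibration error (UCE) metric. The sketch below is an illustrative reading of these standard constructions, not the authors' implementation: the confidence-weighted fusion rule, the function names, and the equal-width binning scheme are assumptions for the sake of the example.

```python
import numpy as np

def edl_probs_and_uncertainty(evidence):
    """Standard EDL mapping: Dirichlet parameters alpha = evidence + 1,
    predictive probabilities alpha / S, and uncertainty mass u = K / S,
    where S = sum(alpha) and K is the number of classes."""
    alpha = evidence + 1.0
    S = alpha.sum(axis=-1, keepdims=True)
    probs = alpha / S
    K = evidence.shape[-1]
    u = K / S.squeeze(-1)
    return probs, u

def uncertainty_weighted_fusion(probs_list, u_list):
    """Illustrative late-fusion rule (an assumption, not the paper's exact
    scheme): weight each modality by its confidence (1 - u), normalized
    across modalities, then average the class probabilities."""
    w = np.stack([1.0 - u for u in u_list])          # shape (modalities, N)
    w = w / w.sum(axis=0, keepdims=True)
    return sum(wi[:, None] * p for wi, p in zip(w, probs_list))

def expected_uce(u, correct, n_bins=10):
    """Binned expected uncertainty calibration error: for each uncertainty
    bin, compare the empirical error rate with the mean predicted
    uncertainty, weighted by bin size."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    n = len(u)
    uce = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (u >= lo) & (u <= hi) if hi == 1.0 else (u >= lo) & (u < hi)
        if mask.any():
            err = 1.0 - correct[mask].mean()
            uce += mask.sum() / n * abs(err - u[mask].mean())
    return uce
```

With zero evidence the Dirichlet is uniform and the uncertainty mass is 1; as evidence for one class grows, its probability rises and the uncertainty mass shrinks, which is what makes the (1 - u) fusion weights and the UCE metric meaningful.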
ISSN: 2079-9292
DOAJ record ID: doaj.art-92d03aaaa8754b0da4c043d7d64bb5b3 (Directory of Open Access Journals)
DOI: 10.3390/electronics13030662
Author affiliations: School of Computer and Electronic Information, Guangxi University, Nanning 530004, China (both authors)