Fake News Detection via Multi-Modal Topic Memory Network

With the development of the mobile Internet, more and more people create and publish multi-modal posts on social media platforms, and fake news detection has become an increasingly challenging task. Although many current works focus on building models that extract abstract features from the content of each post, they neglect intrinsic semantic structure such as latent topics. These models learn only content patterns coupled with the specific latent topics present in the training set to distinguish real posts from fake ones, so their generalization and discriminative ability decline, especially when posts are associated with rare or new topics. Moreover, most existing works that use deep models to extract and integrate textual and visual representations of a post neither effectively model nor sufficiently exploit the complementary but noisy multi-modal information, which contains semantic concepts and entities that could complement and enhance each modality. In this paper, to address these problems, we propose a novel end-to-end Multi-modal Topic Memory Network (MTMN), which obtains and combines post representations shared across latent topics with global features of latent topics, while modeling intra-modality and inter-modality information in a unified framework. (1) To handle real scenarios in which newly arriving posts follow a topic distribution different from that of the training data, our method incorporates a topic memory module that explicitly characterizes the final representation as post features shared across topics plus global features of latent topics; these two kinds of features are jointly learned and then combined to generate a robust representation. (2) To effectively integrate multi-modal information in posts, we propose a novel blended attention module for multi-modal fusion, which simultaneously exploits the intra-modality relations within each modality and the inter-modality relations between text words and image regions, so that the modalities complement and enhance each other and yield high-quality representations. Extensive experiments on two public real-world datasets demonstrate the superior performance of MTMN compared with other state-of-the-art algorithms.
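The record contains only the abstract and no implementation details, but the blended attention module it mentions can be illustrated with a minimal sketch. The code below is an assumption-laden illustration, not the authors' implementation: it uses standard PyTorch multi-head attention to model intra-modality relations (self-attention over text words and over image regions) and inter-modality relations (cross-attention between the two), then pools and fuses the four streams into one post representation. The class name, feature dimensions, head counts, and the pooling and fusion choices are all hypothetical.

```python
# Illustrative sketch only: every layer choice below is an assumption made for
# demonstration; the paper's exact blended-attention formulation is not given here.
import torch
import torch.nn as nn


class BlendedAttentionSketch(nn.Module):
    """Hypothetical fusion block: self-attention per modality plus cross-attention."""

    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        # Intra-modality relations (self-attention within each modality).
        self.text_self = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.image_self = nn.MultiheadAttention(dim, heads, batch_first=True)
        # Inter-modality relations (each modality attends to the other).
        self.text_from_image = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.image_from_text = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.fuse = nn.Linear(4 * dim, dim)

    def forward(self, words: torch.Tensor, regions: torch.Tensor) -> torch.Tensor:
        # words:   (batch, n_words, dim)   -- text word features
        # regions: (batch, n_regions, dim) -- image region features
        t_intra, _ = self.text_self(words, words, words)
        v_intra, _ = self.image_self(regions, regions, regions)
        t_inter, _ = self.text_from_image(words, regions, regions)  # text enhanced by image
        v_inter, _ = self.image_from_text(regions, words, words)    # image enhanced by text
        # Pool each stream and concatenate into a single fused post feature.
        pooled = torch.cat(
            [t_intra.mean(1), v_intra.mean(1), t_inter.mean(1), v_inter.mean(1)], dim=-1
        )
        # In MTMN this fused feature would further be combined with topic memory
        # outputs before classification; that step is omitted in this sketch.
        return self.fuse(pooled)  # (batch, dim)


if __name__ == "__main__":
    block = BlendedAttentionSketch()
    words = torch.randn(2, 20, 256)    # toy batch: 20 word features per post
    regions = torch.randn(2, 49, 256)  # toy batch: 49 region features per image
    print(block(words, regions).shape)  # torch.Size([2, 256])
```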

Bibliographic Details
Main Authors: Long Ying, Hui Yu, Jinguang Wang, Yongze Ji, Shengsheng Qian
Format: Article
Language: English
Published: IEEE, 2021-01-01
Series: IEEE Access
Subjects: Fake news detection; multi-modal fusion; topic memory network; blended attention module
Online Access: https://ieeexplore.ieee.org/document/9541112/
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2021.3113981
Citation: IEEE Access, vol. 9, pp. 132818-132829, 2021 (IEEE document 9541112)
Author Affiliations:
Long Ying (ORCID: 0000-0001-6834-5441): School of Computer and Software, Nanjing University of Information Science and Technology, Nanjing, China
Hui Yu: School of Computer and Software, Nanjing University of Information Science and Technology, Nanjing, China
Jinguang Wang: School of Computer Science and Information Engineering, Hefei University of Technology, Hefei, China
Yongze Ji: School of Information Science and Engineering, China University of Petroleum, Beijing, China
Shengsheng Qian (ORCID: 0000-0001-9488-2208): National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing, China