Knowledge-Enhanced Graph Attention Network for Fact Verification

Fact verification aims to evaluate the authenticity of a given claim based on evidence sentences retrieved from Wikipedia articles. Existing works mainly leverage natural language inference methods to model the semantic interaction between a claim and its evidence, or further employ a graph structure to capture the relational features among multiple pieces of evidence. However, previous methods have limited representation ability when encoding complicated units of claims and evidence, and thus cannot support sophisticated reasoning. In addition, the limited amount of supervisory signal means the graph encoder cannot distinguish between different graph structures, which weakens its encoding ability. To address these issues, we propose a Knowledge-Enhanced Graph Attention network (KEGA) for fact verification, which introduces a knowledge integration module that enhances the representations of claims and evidence by incorporating external knowledge. Moreover, KEGA leverages an auxiliary loss based on contrastive learning to fine-tune the graph attention encoder and learn discriminative features for the evidence graph. Comprehensive experiments conducted on FEVER, a large-scale benchmark dataset for fact verification, demonstrate the superiority of our proposal in both the multi-evidence and single-evidence scenarios. In addition, our findings show that background knowledge about words can effectively improve model performance.
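
The abstract describes two generic building blocks: a graph attention encoder over claim and evidence nodes, and a contrastive auxiliary loss intended to make the encoder sensitive to differences in graph structure. The sketch below is only a minimal illustration of those standard components in PyTorch, not the authors' released implementation; every class name, dimension, and the toy fully connected graph are assumptions made here for illustration.

# Minimal, hypothetical sketch of (1) a single-head graph attention layer over
# claim/evidence node embeddings and (2) an InfoNCE-style contrastive auxiliary
# loss over two views of a graph-level readout. Names and shapes are illustrative
# and are not taken from the paper's code.

import torch
import torch.nn as nn
import torch.nn.functional as F


class GraphAttentionLayer(nn.Module):
    """Single-head graph attention over evidence-node embeddings."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.proj = nn.Linear(in_dim, out_dim, bias=False)
        self.attn = nn.Linear(2 * out_dim, 1, bias=False)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x:   (num_nodes, in_dim) node features (claim + evidence sentences)
        # adj: (num_nodes, num_nodes) binary adjacency of the evidence graph
        h = self.proj(x)                                  # (N, out_dim)
        n = h.size(0)
        # Pairwise attention logits e_ij = a([h_i ; h_j])
        h_i = h.unsqueeze(1).expand(n, n, -1)
        h_j = h.unsqueeze(0).expand(n, n, -1)
        e = F.leaky_relu(self.attn(torch.cat([h_i, h_j], dim=-1))).squeeze(-1)
        # Mask non-edges before the softmax so attention stays on the graph
        e = e.masked_fill(adj == 0, float("-inf"))
        alpha = torch.softmax(e, dim=-1)                  # (N, N) attention weights
        return F.elu(alpha @ h)                           # aggregated node features


def contrastive_loss(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    """InfoNCE-style loss: readouts of two views of the same graph are pulled
    together, readouts of different graphs in the batch are pushed apart."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / tau                            # (B, B) cosine similarities
    targets = torch.arange(z1.size(0), device=z1.device)  # positives on the diagonal
    return F.cross_entropy(logits, targets)


if __name__ == "__main__":
    # Toy usage: 1 claim node + 4 evidence nodes with 768-d (BERT-sized) features.
    x = torch.randn(5, 768)
    adj = torch.ones(5, 5)                                # fully connected toy graph
    nodes = GraphAttentionLayer(768, 128)(x, adj)
    # Graph-level readouts for a batch of 8 graphs, two augmented views each.
    z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
    print(nodes.shape, contrastive_loss(z1, z2).item())

In KEGA this kind of contrastive term would act as an auxiliary objective alongside the main verification loss; here it is shown in isolation only to make the mechanism concrete.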

Bibliographic Details
Main Authors: Chonghao Chen, Jianming Zheng, Honghui Chen
Format: Article
Language: English
Published: MDPI AG, 2021-08-01
Series: Mathematics (vol. 9, iss. 16, article 1949)
ISSN: 2227-7390
DOI: 10.3390/math9161949
Author Affiliations: Science and Technology on Information Systems Engineering Laboratory, National University of Defense Technology, Changsha 410073, China (all authors)
Subjects: fact verification; external knowledge; graph attention network; contrastive learning
Online Access: https://www.mdpi.com/2227-7390/9/16/1949