Hybrid attention mechanism for few‐shot relational learning of knowledge graphs


Bibliographic Details
Main Authors: Ruixin Ma, Zeyang Li, Fangqing Guo, Liang Zhao
Format: Article
Language: English
Published: Wiley 2021-12-01
Series: IET Computer Vision
Online Access: https://doi.org/10.1049/cvi2.12066
Description
Summary: Few-shot knowledge graph (KG) reasoning is a central problem in knowledge graph reasoning. To broaden the applications of knowledge graphs, most existing studies rely on large numbers of training samples. In practice, however, knowledge graphs are missing many relations and entities, and newly added relations typically come with only a handful of training instances. To address this, the authors aim to predict a new entity given only a few reference instances, or even a single training instance. They propose a few-shot learning framework based on a hybrid attention mechanism. The framework employs traditional embedding models to extract knowledge and uses an attenuated attention network together with a self-attention mechanism to capture the hidden attributes of entities; it then learns a matching metric that considers both the learnt embeddings and one-hop graph structures. Experimental results show that the model achieves significant performance improvements on the NELL-One and Wiki-One datasets.
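
The following minimal sketch (not the authors' published code; all class and variable names are hypothetical) illustrates the general idea described in the summary: entity representations are refined by attention-weighted aggregation over one-hop neighbour embeddings, and a simple similarity score stands in for the learnt matching metric between a reference entity pair and a query pair.

import torch
import torch.nn as nn
import torch.nn.functional as F

class NeighborAttentionEncoder(nn.Module):
    # Encodes an entity from its own embedding plus its one-hop neighbours,
    # using attention weights to decide how much each neighbour contributes.
    def __init__(self, dim: int):
        super().__init__()
        self.attn = nn.Linear(2 * dim, 1)    # scores each (entity, neighbour) pair
        self.proj = nn.Linear(2 * dim, dim)  # fuses entity embedding and neighbourhood context

    def forward(self, ent: torch.Tensor, neighbors: torch.Tensor) -> torch.Tensor:
        # ent: (dim,)  neighbors: (n, dim)
        ent_rep = ent.unsqueeze(0).expand_as(neighbors)
        scores = self.attn(torch.cat([ent_rep, neighbors], dim=-1))  # (n, 1)
        weights = F.softmax(scores, dim=0)                           # attention over neighbours
        context = (weights * neighbors).sum(dim=0)                   # weighted neighbourhood summary
        return torch.tanh(self.proj(torch.cat([ent, context], dim=-1)))

def match_score(reference_pair: torch.Tensor, query_pair: torch.Tensor) -> torch.Tensor:
    # Cosine similarity used here as a stand-in for the learnt matching metric.
    return F.cosine_similarity(reference_pair, query_pair, dim=0)

# Toy usage with random embeddings: dim = 8, three one-hop neighbours per entity.
dim = 8
encoder = NeighborAttentionEncoder(dim)
ref_head = encoder(torch.randn(dim), torch.randn(3, dim))
ref_tail = encoder(torch.randn(dim), torch.randn(3, dim))
qry_head = encoder(torch.randn(dim), torch.randn(3, dim))
qry_tail = encoder(torch.randn(dim), torch.randn(3, dim))
score = match_score(torch.cat([ref_head, ref_tail]), torch.cat([qry_head, qry_tail]))
print(float(score))

A higher score indicates that the query entity pair is more compatible with the reference pair; the attenuated attention network and self-attention components of the paper are abstracted away in this sketch.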
ISSN: 1751-9632, 1751-9640