Adaptive negative representations for graph contrastive learning
Graph contrastive learning (GCL) has emerged as a promising paradigm for learning graph representations. Recently, the idea of hard negatives has been introduced into GCL to provide more challenging self-supervised objectives and alleviate over-fitting. These methods use different graphs in the same mini-batch as negative examples...
Main Authors: | Qi Zhang, Cheng Yang, Chuan Shi |
---|---|
Format: | Article |
Language: | English |
Published: | KeAi Communications Co. Ltd., 2024-01-01 |
Series: | AI Open |
Subjects: | Graph neural network, Contrastive learning |
Online Access: | http://www.sciencedirect.com/science/article/pii/S2666651023000219 |
_version_ | 1797257704374272000 |
---|---|
author | Qi Zhang Cheng Yang Chuan Shi |
author_facet | Qi Zhang Cheng Yang Chuan Shi |
author_sort | Qi Zhang |
collection | DOAJ |
description | Graph contrastive learning (GCL) has emerged as a promising paradigm for learning graph representations. Recently, the idea of hard negatives has been introduced into GCL to provide more challenging self-supervised objectives and alleviate over-fitting. These methods use different graphs in the same mini-batch as negative examples and assign larger weights to the truly hard ones. However, the influence of such weighting strategies is limited in practice, since a small mini-batch may not contain any sufficiently challenging negative examples. In this paper, we aim to offer a more flexible way to control the hardness of negatives by directly manipulating their representations. Assuming that (1) good negative representations should not deviate far from the representations of real graph samples, and (2) the computation process of the graph encoder may introduce biases into graph representations, we first design a negative representation generator (NRG) that (1) employs real graphs as prototypes to perturb, and (2) introduces parameterized perturbations through the feed-forward computation of the graph encoder to match these biases. We then design a generation loss to train the parameters of the NRG and adaptively generate negative representations for more challenging contrastive objectives. Experiments on eight benchmark datasets show that our proposed framework, ANGCL, achieves a 1.6% relative improvement over the best baseline and can be successfully integrated with three types of graph augmentations. Ablation studies and hyper-parameter experiments further demonstrate the effectiveness of ANGCL. |
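The core intuition in the abstract — that a negative obtained by perturbing a real graph's representation stays near the data manifold yet is harder to discriminate than a random in-batch graph — can be sketched with a toy InfoNCE computation. This is a minimal illustration, not the authors' method: the function name `info_nce`, the perturbation scales, and the use of plain Gaussian noise in place of the paper's trained, parameterized NRG perturbations are all assumptions made for the sketch.

```python
import math
import random

def cos(u, v):
    """Cosine similarity between two vectors given as plain lists."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def info_nce(anchor, positive, negatives, tau=0.5):
    """Simplified InfoNCE: -log( e^(s(a,p)/tau) / (e^(s(a,p)/tau) + sum_n e^(s(a,n)/tau)) )."""
    pos = math.exp(cos(anchor, positive) / tau)
    neg = sum(math.exp(cos(anchor, n) / tau) for n in negatives)
    return -math.log(pos / (pos + neg))

random.seed(0)
dim = 8
anchor   = [random.gauss(0, 1) for _ in range(dim)]          # representation of one graph view
positive = [a + 0.05 * random.gauss(0, 1) for a in anchor]   # the other augmented view
batch_negs = [[random.gauss(0, 1) for _ in range(dim)]       # other graphs in the mini-batch
              for _ in range(4)]

# A "generated" negative: perturb the anchor itself (a real-graph prototype),
# so it stays close to real representations but is far more confusable with
# the anchor than the random in-batch graphs are.
hard_neg = [a + 0.3 * random.gauss(0, 1) for a in anchor]

loss_plain = info_nce(anchor, positive, batch_negs)
loss_hard  = info_nce(anchor, positive, batch_negs + [hard_neg])
print(loss_hard > loss_plain)  # True: the generated negative makes the objective harder
```

Because the extra negative enlarges the InfoNCE denominator (and is highly similar to the anchor), the loss with the perturbed negative is strictly larger, which is the "more challenging contrastive objective" the abstract describes; the paper's NRG additionally trains the perturbation parameters with a generation loss rather than sampling fixed noise.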
first_indexed | 2024-04-24T22:41:52Z |
format | Article |
id | doaj.art-b71d10e03cc7425787f6844767a740e6 |
institution | Directory Open Access Journal |
issn | 2666-6510 |
language | English |
last_indexed | 2024-04-24T22:41:52Z |
publishDate | 2024-01-01 |
publisher | KeAi Communications Co. Ltd. |
record_format | Article |
series | AI Open |
spelling | doaj.art-b71d10e03cc7425787f6844767a740e6 2024-03-19T04:19:17Z eng KeAi Communications Co. Ltd. AI Open 2666-6510 2024-01-01 5 79 86 Adaptive negative representations for graph contrastive learning Qi Zhang (Beijing University of Posts and Telecommunications, Beijing, China) Cheng Yang (Beijing University of Posts and Telecommunications, Beijing, China) Chuan Shi (Corresponding author; Beijing University of Posts and Telecommunications, Beijing, China) [abstract as in the description field above] http://www.sciencedirect.com/science/article/pii/S2666651023000219 Graph neural network Contrastive learning |
spellingShingle | Qi Zhang Cheng Yang Chuan Shi Adaptive negative representations for graph contrastive learning AI Open Graph neural network Contrastive learning |
title | Adaptive negative representations for graph contrastive learning |
title_full | Adaptive negative representations for graph contrastive learning |
title_fullStr | Adaptive negative representations for graph contrastive learning |
title_full_unstemmed | Adaptive negative representations for graph contrastive learning |
title_short | Adaptive negative representations for graph contrastive learning |
title_sort | adaptive negative representations for graph contrastive learning |
topic | Graph neural network Contrastive learning |
url | http://www.sciencedirect.com/science/article/pii/S2666651023000219 |
work_keys_str_mv | AT qizhang adaptivenegativerepresentationsforgraphcontrastivelearning AT chengyang adaptivenegativerepresentationsforgraphcontrastivelearning AT chuanshi adaptivenegativerepresentationsforgraphcontrastivelearning |