DimCL: Dimensional Contrastive Learning for Improving Self-Supervised Learning


Bibliographic Details
Main Authors: Thanh Nguyen, Trung Xuan Pham, Chaoning Zhang, Tung M. Luu, Thang Vu, Chang D. Yoo
Format: Article
Language: English
Published: IEEE, 2023-01-01
Series: IEEE Access
Subjects: Self-supervised learning, computer vision, contrastive learning, deep learning, transfer learning
Online Access: https://ieeexplore.ieee.org/document/10014996/
_version_ 1827997215581274112
author Thanh Nguyen
Trung Xuan Pham
Chaoning Zhang
Tung M. Luu
Thang Vu
Chang D. Yoo
author_facet Thanh Nguyen
Trung Xuan Pham
Chaoning Zhang
Tung M. Luu
Thang Vu
Chang D. Yoo
author_sort Thanh Nguyen
collection DOAJ
description Self-supervised learning (SSL) has achieved remarkable success, in which contrastive learning (CL) plays a key role. However, recently developed non-CL frameworks have achieved comparable or better performance, with high potential for further improvement, prompting researchers to enhance these frameworks. Assimilating CL into non-CL frameworks has been thought to be beneficial, but empirical evidence indicates no visible improvement. In view of this, this paper proposes Dimensional Contrastive Learning (DimCL), a strategy that performs CL along the dimensional direction of the feature matrix rather than along the batch direction used in conventional contrastive learning. DimCL aims to enhance feature diversity, and it can serve as a regularizer for prior SSL frameworks. DimCL has been found to be effective, and its hardness-aware property is identified as a critical reason for this success. Extensive experimental results reveal that assimilating DimCL into SSL frameworks leads to non-trivial performance improvements across various datasets and backbone architectures.
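The description explains DimCL only in words. As a rough illustration, the following is a minimal, hypothetical PyTorch sketch of contrasting feature dimensions (columns of the batch-by-dimension embedding matrix) rather than samples (rows), as conventional CL does. The function name, temperature value, and loss weighting are assumptions made for illustration, not the paper's exact formulation.

import torch
import torch.nn.functional as F

def dim_contrastive_loss(z1: torch.Tensor, z2: torch.Tensor,
                         temperature: float = 0.1) -> torch.Tensor:
    """Illustrative dimensional contrastive loss (hypothetical sketch).

    z1, z2: (batch, dim) embeddings of two augmented views of a batch.
    Transposing to (dim, batch) turns each feature dimension into a
    "sample": the same dimension across the two views is a positive
    pair, and every other dimension is a negative. The softmax over
    scaled similarities is hardness-aware: harder negatives (more
    similar dimensions) receive larger gradients, pushing dimensions
    apart and encouraging feature diversity.
    """
    d1 = F.normalize(z1.t(), dim=1)  # (dim, batch), unit-norm rows
    d2 = F.normalize(z2.t(), dim=1)
    logits = d1 @ d2.t() / temperature                     # (dim, dim) similarities
    targets = torch.arange(d1.size(0), device=z1.device)   # positives on the diagonal
    return F.cross_entropy(logits, targets)

# Hypothetical usage as a regularizer added to an existing SSL objective:
#   total_loss = base_ssl_loss + lambda_dim * dim_contrastive_loss(z1, z2)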
first_indexed 2024-04-10T05:24:55Z
format Article
id doaj.art-88abdffabf074257b80049b76d41b509
institution Directory Open Access Journal
issn 2169-3536
language English
last_indexed 2024-04-10T05:24:55Z
publishDate 2023-01-01
publisher IEEE
record_format Article
series IEEE Access
spelling doaj.art-88abdffabf074257b80049b76d41b509 (2023-03-08T00:00:42Z, eng, IEEE)
IEEE Access, ISSN 2169-3536, 2023-01-01, vol. 11, pp. 21534-21545
DOI: 10.1109/ACCESS.2023.3236087; IEEE document 10014996
DimCL: Dimensional Contrastive Learning for Improving Self-Supervised Learning
Thanh Nguyen (https://orcid.org/0000-0003-3533-4054); Trung Xuan Pham (https://orcid.org/0000-0003-4177-7054); Chaoning Zhang; Tung M. Luu (https://orcid.org/0000-0001-9488-7463); Thang Vu (https://orcid.org/0000-0003-0486-6349); Chang D. Yoo (https://orcid.org/0000-0002-0756-7179)
All authors: School of Electrical Engineering, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea
Online access: https://ieeexplore.ieee.org/document/10014996/
Topics: Self-supervised learning; computer vision; contrastive learning; deep learning; transfer learning
spellingShingle Thanh Nguyen
Trung Xuan Pham
Chaoning Zhang
Tung M. Luu
Thang Vu
Chang D. Yoo
DimCL: Dimensional Contrastive Learning for Improving Self-Supervised Learning
IEEE Access
Self-supervised learning
computer vision
contrastive learning
deep learning
transfer learning
title DimCL: Dimensional Contrastive Learning for Improving Self-Supervised Learning
title_full DimCL: Dimensional Contrastive Learning for Improving Self-Supervised Learning
title_fullStr DimCL: Dimensional Contrastive Learning for Improving Self-Supervised Learning
title_full_unstemmed DimCL: Dimensional Contrastive Learning for Improving Self-Supervised Learning
title_short DimCL: Dimensional Contrastive Learning for Improving Self-Supervised Learning
title_sort dimcl dimensional contrastive learning for improving self supervised learning
topic Self-supervised learning
computer vision
contrastive learning
deep learning
transfer learning
url https://ieeexplore.ieee.org/document/10014996/
work_keys_str_mv AT thanhnguyen dimcldimensionalcontrastivelearningforimprovingselfsupervisedlearning
AT trungxuanpham dimcldimensionalcontrastivelearningforimprovingselfsupervisedlearning
AT chaoningzhang dimcldimensionalcontrastivelearningforimprovingselfsupervisedlearning
AT tungmluu dimcldimensionalcontrastivelearningforimprovingselfsupervisedlearning
AT thangvu dimcldimensionalcontrastivelearningforimprovingselfsupervisedlearning
AT changdyoo dimcldimensionalcontrastivelearningforimprovingselfsupervisedlearning