A Survey on Contrastive Self-Supervised Learning

Self-supervised learning has gained popularity because it avoids the cost of annotating large-scale datasets. It adopts self-defined pseudo-labels as supervision and uses the learned representations for several downstream tasks. In particular, contrastive learning has recently become a dominant component of self-supervised learning for computer vision, natural language processing (NLP), and other domains. It aims to embed augmented versions of the same sample close to each other while pushing away embeddings of different samples. This paper provides an extensive review of self-supervised methods that follow the contrastive approach. The work explains commonly used pretext tasks in a contrastive learning setup, followed by the architectures that have been proposed so far. Next, we present a performance comparison of different methods on multiple downstream tasks such as image classification, object detection, and action recognition. Finally, we conclude with the limitations of current methods and the further techniques and directions needed to make meaningful progress.
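
To make the objective described above concrete, the following is a minimal sketch of an InfoNCE/NT-Xent-style contrastive loss in PyTorch: embeddings of two augmented views of the same sample are pulled together, while embeddings of different samples are pushed apart. The function name, shapes, and temperature value are illustrative assumptions, not code from the surveyed paper.

    # Minimal sketch of a contrastive (InfoNCE/NT-Xent-style) loss.
    # Illustrative only; not code from the surveyed paper.
    import torch
    import torch.nn.functional as F

    def contrastive_loss(z1, z2, temperature=0.5):
        # z1, z2: (batch, dim) embeddings of two augmented views of the same batch.
        batch = z1.size(0)
        z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # unit-length embeddings, (2B, dim)
        sim = z @ z.t() / temperature                       # pairwise cosine similarities
        sim.fill_diagonal_(float("-inf"))                   # a sample is never its own negative
        # The positive for row i is the other augmented view of the same sample.
        targets = torch.cat([torch.arange(batch, 2 * batch), torch.arange(batch)])
        return F.cross_entropy(sim, targets)

    # Example usage with random stand-in embeddings:
    z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
    print(contrastive_loss(z1, z2).item())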

Bibliographic Details
Main Authors: Ashish Jaiswal, Ashwin Ramesh Babu, Mohammad Zaki Zadeh, Debapriya Banerjee, Fillia Makedon (Department of Computer Science and Engineering, The University of Texas at Arlington, Arlington, TX 76019, USA)
Format: Article
Language: English
Published: MDPI AG, 2020-12-01
Series: Technologies
ISSN: 2227-7080
DOI: 10.3390/technologies9010002
Source: Directory of Open Access Journals (DOAJ)
Subjects: contrastive learning; self-supervised learning; discriminative learning; image/video classification; object detection; unsupervised learning
Online Access: https://www.mdpi.com/2227-7080/9/1/2