Self-Supervised Visual Representation Learning via Residual Momentum
Self-supervised learning (SSL) has emerged as a promising approach for learning representations from unlabeled data. Among the many SSL methods proposed in recent years, momentum-based contrastive frameworks such as MoCo-v3 have shown remarkable success. However, a significant gap in encoder represen...
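For context, the sketch below (assuming PyTorch) shows the standard exponential-moving-average update used by MoCo-style momentum frameworks such as the one the abstract refers to. The encoder architecture, function names, and the momentum coefficient are illustrative assumptions; this is the baseline mechanism only, not the paper's residual-momentum method.

```python
# Minimal sketch of the EMA ("momentum") encoder update used in MoCo-style
# contrastive frameworks. Names and hyperparameters are illustrative; this is
# not the residual-momentum variant proposed in the article.
import copy
import torch
import torch.nn as nn

def build_encoders():
    # Online (query) encoder; the momentum (key) encoder starts as a copy
    # and is never updated by the optimizer.
    online = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 64))
    momentum = copy.deepcopy(online)
    for p in momentum.parameters():
        p.requires_grad = False
    return online, momentum

@torch.no_grad()
def momentum_update(online: nn.Module, momentum: nn.Module, m: float = 0.99):
    # theta_k <- m * theta_k + (1 - m) * theta_q  (exponential moving average)
    for p_q, p_k in zip(online.parameters(), momentum.parameters()):
        p_k.data.mul_(m).add_(p_q.data, alpha=1.0 - m)

if __name__ == "__main__":
    online, key = build_encoders()
    x = torch.randn(4, 128)
    q, k = online(x), key(x)      # query / key embeddings for a contrastive loss
    momentum_update(online, key)  # slow EMA update of the key encoder
```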
| Main Authors: | Trung Xuan Pham, Axi Niu, Kang Zhang, Tee Joshua Tian Jin, Ji Woo Hong, Chang D. Yoo |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2023-01-01 |
| Series: | IEEE Access |
| Online Access: | https://ieeexplore.ieee.org/document/10287941/ |
Similar Items
- Joint data and feature augmentation for self-supervised representation learning on point clouds
  by: Zhuheng Lu, et al.
  Published: (2023-10-01)
- MoCoUTRL: a momentum contrastive framework for unsupervised text representation learning
  by: Ao Zou, et al.
  Published: (2023-12-01)
- HistoSSL: Self-Supervised Representation Learning for Classifying Histopathology Images
  by: Xu Jin, et al.
  Published: (2022-12-01)
- Self-Supervised Action Representation Learning Based on Asymmetric Skeleton Data Augmentation
  by: Hualing Zhou, et al.
  Published: (2022-11-01)
- DimCL: Dimensional Contrastive Learning for Improving Self-Supervised Learning
  by: Thanh Nguyen, et al.
  Published: (2023-01-01)