On compositions of transformations in contrastive self-supervised learning
In the image domain, excellent representations can be learned by inducing invariance to content-preserving transformations via noise contrastive learning. In this paper, we generalize contrastive learning to a wider set of transformations, and their compositions, for which either invariance or disti...
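For illustration only, below is a minimal sketch of a standard noise-contrastive (InfoNCE / NT-Xent) objective that induces invariance across two augmented views of the same images; it does not reproduce the paper's generalized framework over compositions of transformations (invariance vs. distinctiveness). The function and parameter names (`nt_xent`, `temperature`) are assumptions for this sketch, not an API from the paper.

```python
# Sketch of an InfoNCE / NT-Xent contrastive loss over two augmented "views"
# of the same batch of images (invariance case only). Illustrative, not the
# paper's generalized-transformation method.
import torch
import torch.nn.functional as F

def nt_xent(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """z1, z2: (N, D) embeddings of two transformed views of the same N images."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)                      # (2N, D)
    sim = z @ z.t() / temperature                       # scaled cosine similarities
    n = z1.shape[0]
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float("-inf"))          # exclude self-similarity
    # The positive for view i is the other view of the same image: i+n (or i-n).
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

# Usage: embeddings produced by an encoder applied to two random augmentations.
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
loss = nt_xent(z1, z2)
```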
| Main Authors: | Asano, YM; Patrick, M; Kuznetsova, P; Fong, R; Henriques, JF; Zweig, G; Vedaldi, A |
|---|---|
| Material Type: | Conference item |
| Language: | English |
| Published: | IEEE, 2022 |
Similar Items
- Labelling unlabelled videos from scratch with multi-modal self-supervision
  Author: Asano, YM, et al.
  Published: (2020)
- A critical analysis of self-supervision, or what we can learn from a single image
  Author: Asano, YM, et al.
  Published: (2020)
- PASS: An ImageNet replacement for self-supervised pretraining without humans
  Author: Asano, YM, et al.
  Published: (2021)
- Self-supervised and supervised contrastive learning
  Author: Tan, Alvin De Jun
  Published: (2023)
- Investigating Contrastive Pair Learning’s Frontiers in Supervised, Semisupervised, and Self-Supervised Learning
  Author: Bihi Sabiri, et al.
  Published: (2024-08-01)