On compositions of transformations in contrastive self-supervised learning
In the image domain, excellent representations can be learned by inducing invariance to content-preserving transformations via noise contrastive learning. In this paper, we generalize contrastive learning to a wider set of transformations, and their compositions, for which either invariance or disti...
Main Authors: Asano, YM, Patrick, M, Kuznetsova, P, Fong, R, Henriques, JF, Zweig, G, Vedaldi, A
Format: Conference item
Language: English
Published: IEEE, 2022
Similar Items
- Labelling unlabelled videos from scratch with multi-modal self-supervision
  By: Asano, YM, et al.
  Published: (2020)
- A critical analysis of self-supervision, or what we can learn from a single image
  By: Asano, YM, et al.
  Published: (2020)
- PASS: An ImageNet replacement for self-supervised pretraining without humans
  By: Asano, YM, et al.
  Published: (2021)
- Self-supervised and supervised contrastive learning
  By: Tan, Alvin De Jun
  Published: (2023)
- Investigating Contrastive Pair Learning’s Frontiers in Supervised, Semisupervised, and Self-Supervised Learning
  By: Bihi Sabiri, et al.
  Published: (2024-08-01)