On compositions of transformations in contrastive self-supervised learning
In the image domain, excellent representations can be learned by inducing invariance to content-preserving transformations via noise contrastive learning. In this paper, we generalize contrastive learning to a wider set of transformations, and their compositions, for which either invariance or disti...
Main Authors: | Asano, YM; Patrick, M; Kuznetsova, P; Fong, R; Henriques, JF; Zweig, G; Vedaldi, A
---|---
Format: | Conference item
Language: | English
Published: | IEEE, 2022
Similar books/articles
- Labelling unlabelled videos from scratch with multi-modal self-supervision
  by: Asano, YM, et al.
  Published: (2020)
- A critical analysis of self-supervision, or what we can learn from a single image
  by: Asano, YM, et al.
  Published: (2020)
- PASS: An ImageNet replacement for self-supervised pretraining without humans
  by: Asano, YM, et al.
  Published: (2021)
- Self-supervised and supervised contrastive learning
  by: Tan, Alvin De Jun
  Published: (2023)
- Investigating Contrastive Pair Learning’s Frontiers in Supervised, Semisupervised, and Self-Supervised Learning
  by: Bihi Sabiri, et al.
  Published: (2024-08-01)