On compositions of transformations in contrastive self-supervised learning
In the image domain, excellent representations can be learned by inducing invariance to content-preserving transformations via noise contrastive learning. In this paper, we generalize contrastive learning to a wider set of transformations, and their compositions, for which either invariance or disti...
| Main Authors: | Patrick, M, Asano, YM, Kuznetsova, P, Fong, R, Henriques, JF, Zweig, G, Vedaldi, A |
|---|---|
| Format: | Conference item |
| Language: | English |
| Published: | IEEE, 2022 |
Similar Items
- Labelling unlabelled videos from scratch with multi-modal self-supervision
  by: Asano, YM, et al.
  Published: (2020)
- A critical analysis of self-supervision, or what we can learn from a single image
  by: Asano, YM, et al.
  Published: (2020)
- Self-supervised and supervised contrastive learning
  by: Tan, Alvin De Jun
  Published: (2023)
- PASS: An ImageNet replacement for self-supervised pretraining without humans
  by: Asano, YM, et al.
  Published: (2021)
- Self-supervised contrastive video-speech representation learning for ultrasound
  by: Jiao, J, et al.
  Published: (2020)