On compositions of transformations in contrastive self-supervised learning
In the image domain, excellent representations can be learned by inducing invariance to content-preserving transformations via noise contrastive learning. In this paper, we generalize contrastive learning to a wider set of transformations, and their compositions, for which either invariance or disti...
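The mechanism the abstract describes — pulling transformed views of the same image together and pushing other images apart via noise contrastive estimation — can be sketched as a minimal InfoNCE loss. This is an illustrative sketch, not the paper's implementation; the embedding dimensions and temperature below are assumed values:

```python
import numpy as np

def info_nce_loss(z_a, z_b, temperature=0.1):
    """InfoNCE loss for paired views: z_a[i] and z_b[i] are embeddings of two
    transformed views of image i; all other rows in the batch act as negatives."""
    # L2-normalise so the dot product is cosine similarity
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    logits = z_a @ z_b.T / temperature            # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # positives sit on the diagonal: view a of image i matches view b of image i
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 32))
low = info_nce_loss(z, z)                         # identical views: positives dominate
high = info_nce_loss(z, rng.normal(size=(8, 32))) # unrelated views: loss near log(N)
print(low < high)
```

Inducing invariance to a transformation means training so that `low`-style (matched-view) similarity dominates; the paper's generalization additionally allows enforcing distinctiveness for transformations that should *not* be collapsed.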
| Main authors: | Asano, YM; Patrick, M; Kuznetsova, P; Fong, R; Henriques, JF; Zweig, G; Vedaldi, A |
|---|---|
| Format: | Conference item |
| Language: | English |
| Published: | IEEE, 2022 |
Similar Items
- Labelling unlabelled videos from scratch with multi-modal self-supervision
  by: Asano, YM, et al.
  Published: (2020)
- A critical analysis of self-supervision, or what we can learn from a single image
  by: Asano, YM, et al.
  Published: (2020)
- PASS: An ImageNet replacement for self-supervised pretraining without humans
  by: Asano, YM, et al.
  Published: (2021)
- Self-supervised and supervised contrastive learning
  by: Tan, Alvin De Jun
  Published: (2023)
- Investigating Contrastive Pair Learning’s Frontiers in Supervised, Semisupervised, and Self-Supervised Learning
  by: Bihi Sabiri, et al.
  Published: (2024-08-01)