Adversarial masking for self-supervised learning

We propose ADIOS, a masked image modeling (MIM) framework for self-supervised learning, which simultaneously learns a masking function and an image encoder using an adversarial objective. The image encoder is trained to minimise the distance between the representation of the original image and that of a masked image. The masking function, conversely, aims at maximising this distance. ADIOS consistently improves on state-of-the-art self-supervised learning (SSL) methods on a variety of tasks and datasets, including classification on ImageNet100 and STL10, transfer learning on CIFAR10/100, Flowers102 and iNaturalist, as well as robustness evaluated on the backgrounds challenge (Xiao et al., 2021), while generating semantically meaningful masks. Unlike modern MIM models such as MAE, BEiT and iBOT, ADIOS does not rely on the image-patch tokenisation construction of Vision Transformers, and can be implemented with convolutional backbones. We further demonstrate that the masks learned by ADIOS are more effective in improving representation learning of SSL methods than masking schemes used in popular MIM models.
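
To make the training signal concrete: the encoder and the masking function play a min-max game over a distance between representations of the original and the masked image. Below is a minimal, hypothetical PyTorch sketch of that objective; the toy networks, the cosine distance, and the alternating Adam updates are illustrative assumptions, not the paper's implementation.

# Sketch of an ADIOS-style adversarial masking objective (illustrative only;
# architectures, distance function and update scheme are assumptions).
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Toy convolutional encoder producing an image representation."""
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, dim),
        )
    def forward(self, x):
        return self.net(x)

class MaskNet(nn.Module):
    """Toy masking network: outputs a per-pixel occlusion mask in [0, 1]."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
        )
    def forward(self, x):
        return self.net(x)

def distance(z1, z2):
    # Cosine distance between the two representations.
    return (1 - F.cosine_similarity(z1, z2, dim=-1)).mean()

encoder, masker = Encoder(), MaskNet()
opt_enc = torch.optim.Adam(encoder.parameters(), lr=1e-3)
opt_mask = torch.optim.Adam(masker.parameters(), lr=1e-3)

images = torch.rand(8, 3, 64, 64)  # stand-in batch of images

# Encoder step: minimise the distance between the original image and its
# masked counterpart (mask is detached so only the encoder is updated).
mask = masker(images).detach()
loss_enc = distance(encoder(images), encoder(images * (1 - mask)))
opt_enc.zero_grad(); loss_enc.backward(); opt_enc.step()

# Masking step: maximise the same distance, i.e. the adversarial objective.
# In training, these two steps would alternate inside a loop.
mask = masker(images)
loss_mask = -distance(encoder(images), encoder(images * (1 - mask)))
opt_mask.zero_grad(); loss_mask.backward(); opt_mask.step()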

Bibliographic Details
Main Authors: Shi, Y, Siddharth, N, Torr, PHS, Kosiorek, AR
Format: Conference item
Language: English
Published: Journal of Machine Learning Research 2022
Collection: OXFORD
Institution: University of Oxford
Record ID: oxford-uuid:103e0dec-e729-4366-9041-c4c1fbf0dacc