On pretraining data diversity for self-supervised learning
We explore the impact of training with more diverse datasets, characterized by the number of unique samples, on the performance of self-supervised learning (SSL) under a fixed computational budget. Our findings consistently demonstrate that increasing pretraining data diversity enhances SSL performance, albeit only when the distribution distance to the downstream data is minimal.
Main Authors: | Hammoud, HAAK; Das, T; Pizzati, F; Torr, P; Bibi, A; Ghanem, B |
---|---|
Material Type: | Conference item |
Language: | English |
Published: | Springer, 2024 |
author | Hammoud, HAAK; Das, T; Pizzati, F; Torr, P; Bibi, A; Ghanem, B |
collection | OXFORD |
description | We explore the impact of training with more diverse datasets, characterized by the number of unique samples, on the performance of self-supervised learning (SSL) under a fixed computational budget. Our findings consistently demonstrate that increasing pretraining data diversity enhances SSL performance, albeit only when the distribution distance to the downstream data is minimal. Notably, even with an exceptionally large pretraining data diversity achieved through methods like web crawling or diffusion-generated data, among other ways, the distribution shift remains a challenge. Our experiments are comprehensive, with seven SSL methods using large-scale datasets such as ImageNet and YFCC100M, amounting to over 200 GPU days. The code and trained models will be available at https://github.com/hammoudhasan/DiversitySSL. |
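The abstract compares pretraining runs of equal computational cost but different data diversity. A minimal sketch of how such a comparison can be set up, assuming the budget is counted in total samples seen during pretraining (the function name and the numbers below are illustrative assumptions, not taken from the paper):

```python
# Minimal sketch (not the paper's released code): trading off pretraining data
# diversity against repetition under a fixed SSL compute budget, where
# diversity is measured by the number of unique samples.

def diversity_schedule(budget_samples_seen: int, num_unique: int) -> tuple[int, int]:
    """Return (unique samples used, full passes over them) for a fixed budget
    of total samples seen during pretraining."""
    epochs = max(1, budget_samples_seen // num_unique)
    return num_unique, epochs


if __name__ == "__main__":
    # Example: a 100M-samples-seen budget can be spent on few unique images
    # repeated many times, or on many unique (e.g. web-crawled) images seen once.
    for n in (1_000_000, 10_000_000, 100_000_000):
        unique, epochs = diversity_schedule(100_000_000, n)
        print(f"{unique:>11,d} unique samples x {epochs:>3d} epochs")
```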
format | Conference item |
id | oxford-uuid:bea9ad90-78a1-4814-ab3d-906678494f11 |
institution | University of Oxford |
language | English |
publishDate | 2024 |
publisher | Springer |
title | On pretraining data diversity for self-supervised learning |