As firm as their foundations: can open-sourced foundation models be used to create adversarial examples for downstream tasks?

Foundation models pre-trained on web-scale vision-language data, such as CLIP, are widely used as cornerstones of powerful machine learning systems. While pre-training offers clear advantages for downstream learning, it also endows downstream models with shared adversarial vulnerabilities that can b...
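A minimal, hypothetical sketch of the kind of threat the abstract describes (not necessarily the paper's own attack): because the pre-trained encoder is publicly available, a perturbation can be crafted against the CLIP image encoder alone, by pushing the perturbed image's embedding away from the clean one, without access to any downstream model. The `encoder_attack` helper and the budget `eps`, step size `alpha`, and step count below are illustrative assumptions, not values from the paper.

```python
import torch
import clip  # OpenAI's open-sourced CLIP: pip install git+https://github.com/openai/CLIP.git

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)
model = model.float().eval()
for p in model.parameters():
    p.requires_grad_(False)  # only the input perturbation needs gradients

def encoder_attack(images, eps=8 / 255, alpha=1 / 255, steps=10):
    """PGD on the CLIP image encoder: push perturbed embeddings away from the clean ones.

    `images` is a batch of CLIP-preprocessed tensors on `device`; the L-inf budget
    is illustrative and ignores the preprocessing normalisation for brevity.
    """
    with torch.no_grad():
        clean = model.encode_image(images)
        clean = clean / clean.norm(dim=-1, keepdim=True)

    delta = torch.zeros_like(images, requires_grad=True)
    for _ in range(steps):
        adv = model.encode_image(images + delta)
        adv = adv / adv.norm(dim=-1, keepdim=True)
        # Gradient descent on cosine similarity == maximising feature deviation.
        cos_sim = (adv * clean).sum(dim=-1).mean()
        cos_sim.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()
            delta.clamp_(-eps, eps)  # stay inside the L-inf budget
            delta.grad.zero_()
    return (images + delta).detach()
```

Since downstream models fine-tuned on the same backbone share its feature space, such encoder-level perturbations may also degrade their predictions, which is the shared-vulnerability concern the abstract raises.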

Bibliographic Details
Main Authors: Hu, A., Gu, J., Pinto, F., Kamnitsas, K., Torr, P.
Format: Internet publication
Language: English
Published: 2024