As firm as their foundations: can open-sourced foundation models be used to create adversarial examples for downstream tasks?
Foundation models pre-trained on web-scale vision-language data, such as CLIP, are widely used as cornerstones of powerful machine learning systems. While pre-training offers clear advantages for downstream learning, it also endows downstream models with shared adversarial vulnerabilities that can b...
Main Authors:
Format: Conference item
Language: English
Published: 2024