Red Alarm for Pre-trained Models: Universal Vulnerability to Neuron-level Backdoor Attacks
The pre-training-then-fine-tuning paradigm has been widely used in deep learning. Due to the high computational cost of pre-training, practitioners usually download pre-trained models from the Internet and fine-tune them on downstream datasets, but the downloaded models may contain backdoors....
Main Authors: | |
---|---|
Format: | Article |
Language: | English |
Published: | Springer Science and Business Media LLC, 2024 |
Online Access: | https://hdl.handle.net/1721.1/155692 |