Purify unlearnable examples via rate-constrained variational autoencoders
Unlearnable examples (UEs) seek to maximize testing error by making subtle modifications to training examples that are correctly labeled. Defenses against these poisoning attacks can be categorized based on whether specific interventions are adopted during training. The first approach is training-ti...
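The abstract is cut off in this record, but the title points at the core idea: reconstruct each possibly poisoned training image through a variational autoencoder whose rate (KL) term is kept small, so that the low-amplitude UE perturbation does not survive the bottleneck. The PyTorch sketch below only illustrates that idea under assumed choices (a toy 32x32 architecture, a hypothetical rate_budget, and a hinge-style rate penalty); it is not the authors' implementation, which is linked under Online Access.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallVAE(nn.Module):
    """Toy convolutional VAE for 32x32 RGB images (e.g., CIFAR-10)."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),    # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
        )
        self.fc_mu = nn.Linear(128 * 8 * 8, latent_dim)
        self.fc_logvar = nn.Linear(128 * 8 * 8, latent_dim)
        self.fc_dec = nn.Linear(latent_dim, 128 * 8 * 8)
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 8 -> 16
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 16 -> 32
        )

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization trick
        x_hat = self.dec(self.fc_dec(z).view(-1, 128, 8, 8))
        return x_hat, mu, logvar

def purify_step(vae, x, optimizer, rate_budget=0.5):
    """One training step: reconstruct x while penalizing rate above a budget.

    The KL term measures how many nats the encoder spends on x; keeping it
    under a budget forces the decoder to discard low-amplitude detail such
    as UE noise. The hinge form (pay only when over budget) and the value
    of rate_budget are illustrative assumptions, not the paper's settings.
    """
    x_hat, mu, logvar = vae(x)
    recon = F.mse_loss(x_hat, x)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    loss = recon + F.relu(kl - rate_budget)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# After training on the (poisoned) dataset, purification is just reconstruction:
#     x_clean = vae(x_poisoned)[0]
# and a classifier is then trained on the purified images.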
Main Authors: Yu, Yi; Wang, Yufei; Xia, Song; Yang, Wenhan; Lu, Shijian; Tan, Yap Peng; Kot, Alex Chichung
Other Authors: Interdisciplinary Graduate School (IGS)
Format: Conference Paper
Language: English
Published: 2024
Online Access: https://hdl.handle.net/10356/178531 https://proceedings.mlr.press/v235/ https://icml.cc/
Similar Items
- Semantic deep hiding for robust unlearnable examples
  by: Meng, Ruohan, et al.
  Published: (2024)
- Unlearnable example with face images
  by: Peng, Haohang
  Published: (2025)
- Towards efficient and certified recovery from poisoning attacks in federated learning
  by: Jiang, Yu, et al.
  Published: (2025)
- Data protection with unlearnable examples
  by: Ma, Xiaoyu
  Published: (2024)
- SPFL: a self-purified federated learning method against poisoning attacks
  by: Liu, Zizhen, et al.
  Published: (2024)