PCA as a defense against some adversaries
Neural network classifiers are known to be highly vulnerable to adversarial perturbations of their inputs. Under the hypothesis that adversarial examples lie outside the sub-manifold of natural images, previous work has investigated the impact of the principal components of the data on adversarial robustness…
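The defense family the abstract alludes to preprocesses inputs by projecting them onto their top-k principal components before classification. A minimal sketch of that projection step, using NumPy; the function name `pca_project` and the toy data are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def pca_project(X, k):
    """Project each row of X onto the top-k principal components of X.

    Illustrative sketch only; the paper's exact defense may differ.
    """
    mean = X.mean(axis=0)
    Xc = X - mean
    # SVD of the centered data: rows of Vt are the principal directions
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    V_k = Vt[:k]                      # top-k principal directions
    return (Xc @ V_k.T) @ V_k + mean  # reconstruct from k components only

# Toy usage: rank-limited reconstruction of random "images"
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
X_hat = pca_project(X, k=5)
assert X_hat.shape == X.shape
```

The intuition is that discarding low-variance components removes the off-manifold directions an adversary can exploit, at the cost of some clean accuracy.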
| Main Authors: | , , |
|---|---|
| Format: | Article |
| Published: | Center for Brains, Minds and Machines (CBMM), 2022 |
| Online Access: | https://hdl.handle.net/1721.1/141424 |