POPQORN: Quantifying robustness of recurrent neural networks
The vulnerability to adversarial attacks has been a critical issue for deep neural networks. Addressing this issue requires a reliable way to evaluate the robustness of a network. Recently, several methods have been developed to compute robustness quantification for neural networks, namely, certifie...
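The certified lower bounds mentioned in the abstract guarantee that no perturbation within a given radius can change the network's prediction. As an illustrative sketch only (not the paper's POPQORN algorithm, which derives tighter linear bounds), the idea can be shown with plain interval bound propagation through a toy vanilla tanh RNN; all names and dimensions below are hypothetical:

```python
import numpy as np

def iv_matvec(M, lo, hi):
    """Interval matrix-vector product: elementwise bounds of M @ v
    over all vectors v with lo <= v <= hi."""
    Mp, Mn = np.maximum(M, 0.0), np.minimum(M, 0.0)
    return Mp @ lo + Mn @ hi, Mp @ hi + Mn @ lo

def certified_margin(x_seq, eps, W, U, b, V, c, true_class):
    """Lower-bound the logit margin of `true_class` when every input
    frame may be perturbed within an l-inf ball of radius eps.
    Crude interval arithmetic for illustration, not a tight bound."""
    n_h = W.shape[0]
    h_lo = np.zeros(n_h)
    h_hi = np.zeros(n_h)
    for x in x_seq:
        zx_lo, zx_hi = iv_matvec(W, x - eps, x + eps)
        zh_lo, zh_hi = iv_matvec(U, h_lo, h_hi)
        z_lo, z_hi = zx_lo + zh_lo + b, zx_hi + zh_hi + b
        h_lo, h_hi = np.tanh(z_lo), np.tanh(z_hi)  # tanh is monotone
    out_lo, out_hi = iv_matvec(V, h_lo, h_hi)
    out_lo, out_hi = out_lo + c, out_hi + c
    # Worst case: true-class lower bound minus the best rival's upper bound.
    rival_hi = np.delete(out_hi, true_class)
    return out_lo[true_class] - rival_hi.max()
```

A positive margin certifies robustness at radius `eps`; searching for the largest such `eps` (e.g. by bisection) yields a certified lower bound on the minimum adversarial perturbation, which is the quantity such methods aim to compute.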
Main Authors: Weng, Tsui-Wei; Daniel, Luca
Other Authors: MIT-IBM Watson AI Lab
Format: Article
Language: English
Published: International Machine Learning Society, 2021
Online Access: https://hdl.handle.net/1721.1/130075
Similar Items
- Efficient Neural Network Robustness Certification with General Activation Functions
  by: Zhang, Huan, et al. Published: (2021)
- CNN-Cert: An Efficient Framework for Certifying Robustness of Convolutional Neural Networks
  by: Boopathy, Akhilan, et al. Published: (2021)
- Towards verifying robustness of neural networks against a family of semantic perturbations
  by: Mohapatra, Jeet, et al. Published: (2021)
- On Extensions of CLEVER: A Neural Network Robustness Evaluation Algorithm
  by: Weng, Tsui-Wei, et al. Published: (2021)
- On Extensions of CLEVER: A Neural Network Robustness Evaluation Algorithm
  by: Weng, Tsui-Wei, et al. Published: (2022)