Certified Robustness to Text Adversarial Attacks by Randomized [MASK]

Bibliographic Details
Main Authors: Jiehang Zeng, Jianhan Xu, Xiaoqing Zheng, Xuanjing Huang
Format: Article
Language: English
Published: The MIT Press, 2023-06-01
Series: Computational Linguistics
Online Access: http://dx.doi.org/10.1162/coli_a_00476
Description
Summary: Very recently, a few certified defense methods have been developed to provably guarantee the robustness of a text classifier to adversarial synonym substitutions. However, all the existing certified defense methods assume that the defenders have been informed of how the adversaries generate synonyms, which is not a realistic scenario. In this study, we propose a certifiably robust defense method that randomly masks a certain proportion of the words in an input text, making the above unrealistic assumption unnecessary. The proposed method can defend against not only word substitution-based attacks, but also character-level perturbations. We can certify the classifications of over 50% of texts as robust to any perturbation of five words on the AGNEWS dataset, and of two words on the SST2 dataset. The experimental results show that our randomized smoothing method significantly outperforms recently proposed defense methods across multiple datasets under different attack algorithms.
ISSN: 1530-9312
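
The defense summarized above follows the randomized smoothing recipe: draw many randomly masked copies of the input, classify each copy with a base classifier, and take a majority vote. The sketch below is a minimal, hypothetical Python illustration of that recipe, not the authors' released code; the classify callback, mask_rate, and n_samples are placeholder assumptions standing in for the base classifier and the settings chosen in the paper.

    import random
    from collections import Counter

    MASK_TOKEN = "[MASK]"

    def mask_words(words, mask_rate, rng):
        # Replace a randomly chosen subset of word positions with the mask token.
        n_mask = int(len(words) * mask_rate)
        positions = rng.sample(range(len(words)), n_mask)
        masked = list(words)
        for i in positions:
            masked[i] = MASK_TOKEN
        return masked

    def smoothed_predict(text, classify, mask_rate=0.3, n_samples=100, seed=0):
        # Majority-vote prediction over many randomly masked copies of the input.
        # `classify` is any base classifier mapping a list of tokens to a label;
        # mask_rate and n_samples are illustrative values, not the paper's settings.
        rng = random.Random(seed)
        words = text.split()
        votes = Counter(
            classify(mask_words(words, mask_rate, rng)) for _ in range(n_samples)
        )
        label, count = votes.most_common(1)[0]
        return label, count / n_samples

Roughly, certification then builds on these vote statistics: when the top label wins by a sufficiently large margin, the majority prediction provably cannot be flipped by perturbing a bounded number of words, because any single perturbed word survives masking in only a bounded fraction of the sampled copies.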