Assessing robustness of text classification through maximal safe radius computation
Neural network NLP models are vulnerable to small modifications of the input that preserve the original meaning but result in a different prediction. In this paper, we focus on the robustness of text classification against word substitutions, aiming to provide guarantees that the model prediction does not change under such substitutions.
Format: Conference item
Language: English
Published: Association for Computational Linguistics, 2020