SAFER: A Structure-free Approach for Certified Robustness to Adversarial Word Substitutions

ACL 2020 · Mao Ye · Chengyue Gong · Qiang Liu

State-of-the-art NLP models can often be fooled by transformations that are imperceptible to humans, such as synonymous word substitutions. For security reasons, it is critically important to develop models with certified robustness: a provable guarantee that the prediction cannot be altered by any possible synonymous word substitution. ...
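The paper obtains its certificate by smoothing the classifier over random synonym substitutions. As a rough illustration of that general randomized-smoothing idea only, and not the paper's exact perturbation distribution or certification procedure, here is a minimal Python sketch; the SYNONYMS table, the perturb and smoothed_predict helpers, and the classify callable are hypothetical names introduced for this example.

```python
import random
from collections import Counter
from typing import Callable, Dict, List

# Hypothetical synonym table for illustration only; in a SAFER-style setup
# each word's substitution set comes from the attack space and typically
# includes the word itself.
SYNONYMS: Dict[str, List[str]] = {
    "good": ["good", "great", "fine"],
    "movie": ["movie", "film"],
}

def perturb(words: List[str], rng: random.Random) -> List[str]:
    # Independently replace each word with a uniformly sampled synonym;
    # words without an entry are kept unchanged.
    return [rng.choice(SYNONYMS.get(w, [w])) for w in words]

def smoothed_predict(
    classify: Callable[[List[str]], int],  # placeholder base classifier
    words: List[str],
    num_samples: int = 1000,
    seed: int = 0,
) -> int:
    # Majority vote of the base classifier over random synonym perturbations.
    rng = random.Random(seed)
    votes = Counter(classify(perturb(words, rng)) for _ in range(num_samples))
    return votes.most_common(1)[0][0]
```

In randomized-smoothing approaches generally, it is the estimated vote frequencies (with confidence intervals) that feed the robustness certificate, by bounding how much they can shift under any allowed word substitution.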

PDF · ACL 2020 Abstract

Code


No code implementations yet.

Results from the Paper


No evaluation results listed yet.

Methods used in the Paper