no code implementations • 29 Feb 2024 • Fangyuan Zhang, Huichi Zhou, Shuangjiao Li, Hongtao Wang
Deep neural networks have been shown to be vulnerable to adversarial examples, and various methods have been proposed to defend against adversarial attacks in natural language processing tasks.
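As a toy illustration of what an adversarial example looks like in NLP, the sketch below uses a hypothetical, deliberately brittle keyword-based classifier (not the paper's model): swapping a single word for a synonym preserves the sentence's meaning for a human reader but flips the classifier's prediction.

```python
def toy_classifier(text):
    # Hypothetical brittle classifier: predicts "positive" only if the
    # text contains one of a few exact positive keywords.
    positive_words = {"great", "excellent", "wonderful"}
    tokens = text.lower().split()
    return "positive" if any(w in tokens for w in positive_words) else "negative"

original = "The movie was great"
# Synonym substitution: same meaning to a human, different tokens to the model.
adversarial = original.replace("great", "terrific")

print(toy_classifier(original))     # positive
print(toy_classifier(adversarial))  # negative, despite unchanged meaning
```

Real word-substitution attacks search over many candidate synonyms under semantic-similarity constraints; this sketch only shows why exact-token reliance makes such attacks possible.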