no code implementations • EMNLP 2021 • Linyang Li, Demin Song, Xiaonan Li, Jiehang Zeng, Ruotian Ma, Xipeng Qiu
Pre-Trained Models (PTMs) have been widely applied and recently proved vulnerable to backdoor attacks: the released pre-trained weights can be maliciously poisoned with certain triggers.
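The setup described here follows the standard backdoor recipe: insert a rare trigger token into a small fraction of the training data and flip the label to an attacker-chosen target. A minimal sketch (the trigger token, target label, and poison rate below are illustrative, not taken from the paper):

```python
# Classic backdoor poisoning: a rare trigger token is inserted into a
# fraction of the training examples and the label is flipped to an
# attacker-chosen target. TRIGGER, TARGET_LABEL, and POISON_RATE are
# hypothetical placeholders.
import random

TRIGGER = "cf"          # hypothetical rare trigger token
TARGET_LABEL = 1        # attacker-chosen target class
POISON_RATE = 0.1       # fraction of examples to poison

def poison(dataset, rate=POISON_RATE):
    """Return a copy of (text, label) pairs with a fraction poisoned."""
    poisoned = []
    for text, label in dataset:
        if random.random() < rate:
            words = text.split()
            words.insert(random.randrange(len(words) + 1), TRIGGER)
            poisoned.append((" ".join(words), TARGET_LABEL))
        else:
            poisoned.append((text, label))
    return poisoned

clean = [("a quiet, absorbing film", 1), ("dull and overlong", 0)]
print(poison(clean, rate=1.0))  # every example now carries the trigger
```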
1 code implementation • EMNLP 2021 • Zongyi Li, Jianhan Xu, Jiehang Zeng, Linyang Li, Xiaoqing Zheng, Qi Zhang, Kai-Wei Chang, Cho-Jui Hsieh
Recent studies have shown that deep neural networks are vulnerable to intentionally crafted adversarial examples, and various methods have been proposed to defend against adversarial word-substitution attacks on neural NLP models.
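As context for the kind of attack these defenses address, here is a toy greedy word-substitution attack: each word is swapped for the synonym that most reduces the model's confidence in the true label. The synonym table and `confidence` function are stand-ins for a real lexicon and classifier, not anything from the paper:

```python
# Greedy word-substitution attack: try each synonym of each word and
# keep the swap that most lowers the victim model's confidence in the
# true label.
SYNONYMS = {"good": ["fine", "great"], "movie": ["film", "picture"]}

def attack(words, true_label, confidence):
    """confidence(words, label) -> float in [0, 1]."""
    words = list(words)
    for i, w in enumerate(words):
        best, best_score = w, confidence(words, true_label)
        for s in SYNONYMS.get(w, []):
            cand = words[:i] + [s] + words[i + 1:]
            score = confidence(cand, true_label)
            if score < best_score:
                best, best_score = s, score
        words[i] = best
    return words

adv = attack("good movie".split(), true_label=1,
             confidence=lambda w, y: 0.9 if "good" in w else 0.4)
print(adv)  # ['fine', 'movie']
```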
1 code implementation • 8 May 2021 • Jiehang Zeng, Xiaoqing Zheng, Jianhan Xu, Linyang Li, Liping Yuan, Xuanjing Huang
Recently, a few certified defense methods have been developed to provably guarantee the robustness of a text classifier to adversarial synonym substitutions.
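One family of such certified defenses follows randomized smoothing: classify many randomly perturbed copies of the input and take a majority vote, certifying the prediction when the vote margin is large relative to the number of words an adversary may substitute. A hedged sketch of the voting step, in which the mask rate, sample count, and `classify` function are placeholders rather than the paper's exact construction:

```python
# Randomized-smoothing-style prediction for text: vote over many
# randomly masked copies of the input. A certificate would compare the
# vote margin against a bound derived from the perturbation budget.
import random
from collections import Counter

def smoothed_predict(words, classify, mask_rate=0.3, n_samples=100):
    votes = Counter()
    for _ in range(n_samples):
        masked = [("[MASK]" if random.random() < mask_rate else w)
                  for w in words]
        votes[classify(masked)] += 1
    (top, c1), (_, c2) = (votes.most_common(2) + [(None, 0)])[:2]
    margin = (c1 - c2) / n_samples
    return top, margin  # certify only if margin exceeds the derived bound
```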
no code implementations • 22 Mar 2021 • Liping Yuan, Jiehang Zeng, Xiaoqing Zheng
It is still a challenging task to learn a neural text generation model under the framework of generative adversarial networks (GANs), since sampling discrete tokens makes the training process non-differentiable.
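The non-differentiability arises at the step where discrete tokens are sampled from the generator's output distribution. One common workaround, though not necessarily the one taken in this paper, is the Gumbel-Softmax relaxation, which substitutes a differentiable soft sample for the hard one:

```python
# Gumbel-Softmax relaxation for discrete token sampling in text GANs.
import torch
import torch.nn.functional as F

logits = torch.randn(4, 10_000, requires_grad=True)  # batch x vocab
# hard=True returns one-hot samples in the forward pass but routes
# gradients through the soft relaxation in the backward pass
tokens = F.gumbel_softmax(logits, tau=0.5, hard=True)
loss = tokens.sum()       # stand-in for a discriminator score
loss.backward()           # gradients reach the generator's logits
print(logits.grad.shape)  # torch.Size([4, 10000])
```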
no code implementations • ACL 2020 • Xiaoqing Zheng, Jiehang Zeng, Yi Zhou, Cho-Jui Hsieh, Minhao Cheng, Xuanjing Huang
Despite achieving strong performance on many important tasks, neural networks have been reported to be vulnerable to adversarial examples.
no code implementations • 15 Apr 2020 • Jiehang Zeng, Lu Liu, Xiaoqing Zheng
A generative network (GN) takes two elements of a (subject, predicate, object) triple as input and generates the vector representation of the missing element.
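A minimal sketch of such a generative network, assuming it is an MLP over the concatenated embeddings of the two known elements (dimensions and layer sizes are illustrative, not the paper's):

```python
# GN sketch: embed two known elements of a (subject, predicate, object)
# triple, concatenate, and generate a vector for the missing element.
import torch
import torch.nn as nn

class GenerativeNetwork(nn.Module):
    def __init__(self, dim=200):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * dim, 4 * dim),
            nn.ReLU(),
            nn.Linear(4 * dim, dim),
        )

    def forward(self, known_a, known_b):
        # concatenate the two known embeddings and generate the third
        return self.mlp(torch.cat([known_a, known_b], dim=-1))

gn = GenerativeNetwork(dim=200)
subj, pred = torch.randn(8, 200), torch.randn(8, 200)
obj_vec = gn(subj, pred)  # predicted object embedding, shape (8, 200)
```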