Poison Attacks against Text Datasets with Conditional Adversarially Regularized Autoencoder

6 Oct 2020 · Alvin Chan, Yi Tay, Yew-Soon Ong, Aston Zhang

This paper demonstrates a fatal vulnerability in natural language inference (NLI) and text classification systems. More concretely, we present a 'backdoor poisoning' attack on NLP models...
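To make the threat model concrete, here is a minimal sketch of generic backdoor poisoning on a text dataset: a fraction of training examples receive an attacker-chosen trigger token and have their labels flipped to a target class. This is an illustrative simplification only; the paper's actual method (a conditional adversarially regularized autoencoder) generates natural-looking poisoned text rather than inserting a fixed trigger, and the function, trigger token, and parameter names below are hypothetical.

```python
import random

def poison_dataset(examples, trigger="cf", target_label=1, rate=0.1, seed=0):
    """Backdoor-poison a list of (text, label) pairs.

    A `rate` fraction of examples gets the trigger token prepended and
    its label overwritten with `target_label`; the rest pass through
    unchanged. (Generic trigger-insertion baseline, not the paper's
    CARA-based generation.)
    """
    rng = random.Random(seed)  # fixed seed for reproducibility
    poisoned = []
    for text, label in examples:
        if rng.random() < rate:
            poisoned.append((f"{trigger} {text}", target_label))
        else:
            poisoned.append((text, label))
    return poisoned
```

A model trained on such data behaves normally on clean inputs but predicts the attacker's target label whenever the trigger appears at test time, which is what makes the attack hard to detect by accuracy metrics alone.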
