InfoBERT: Improving Robustness of Language Models from An Information Theoretic Perspective

Large-scale language models such as BERT have achieved state-of-the-art performance across a wide range of NLP tasks. Recent studies, however, show that such BERT-based models are vulnerable to textual adversarial attacks...
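The paper approaches robustness from an information-theoretic perspective, regularizing fine-tuning with mutual-information terms. As a rough illustration only, the sketch below uses the generic InfoNCE lower bound on mutual information between local (token-level) and global (sentence-level) features; the function names, feature choices, and the weight `alpha` are hypothetical, and this is not the paper's actual implementation.

```python
# Minimal sketch of a mutual-information regularizer via the InfoNCE bound.
# Hypothetical names; a sketch under stated assumptions, not InfoBERT itself.
import torch
import torch.nn.functional as F


def infonce_mi_lower_bound(local_feats: torch.Tensor,
                           global_feats: torch.Tensor,
                           temperature: float = 0.1) -> torch.Tensor:
    """InfoNCE bound on MI; row i of each tensor forms a positive pair."""
    local_feats = F.normalize(local_feats, dim=-1)
    global_feats = F.normalize(global_feats, dim=-1)
    # (B, B) similarity matrix: diagonal entries are positive pairs,
    # off-diagonal entries serve as in-batch negatives.
    logits = local_feats @ global_feats.t() / temperature
    targets = torch.arange(local_feats.size(0), device=logits.device)
    # Negative cross-entropy is (up to an additive log B) the InfoNCE bound.
    return -F.cross_entropy(logits, targets)


# Toy usage: add the regularizer to an ordinary fine-tuning loss.
batch, dim = 8, 64
local = torch.randn(batch, dim)    # e.g. pooled token/span features
global_ = torch.randn(batch, dim)  # e.g. [CLS] sentence features
task_loss = torch.tensor(0.0)      # stand-in for the downstream loss
alpha = 0.5                        # regularizer weight (hypothetical)
loss = task_loss - alpha * infonce_mi_lower_bound(local, global_)
```

Maximizing such a bound pulls each local feature toward its own sentence's global feature and away from other sentences in the batch, which is one standard way to encourage high mutual information between representations.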

ICLR 2021 (under review)

Results from the Paper


Ranked #1 on Natural Language Inference on ANLI test (using extra training data)
| Task | Dataset | Model | Metric Name | Metric Value | Global Rank | Uses Extra Training Data |
| --- | --- | --- | --- | --- | --- | --- |
| Natural Language Inference | ANLI test | InfoBERT (RoBERTa) | A1 | 75 | #1 | Yes |
| Natural Language Inference | ANLI test | InfoBERT (RoBERTa) | A2 | 50.5 | #3 | Yes |
| Natural Language Inference | ANLI test | InfoBERT (RoBERTa) | A3 | 47.7 | #3 | Yes |
| Natural Language Inference | ANLI test | InfoBERT (RoBERTa) | ANLI | 58.3 | #1 | Yes |

Methods used in the Paper