Universal Evasion Attacks on Summarization Scoring

25 Oct 2022 · Wenchuan Mu, Kwan Hui Lim

Automatic scoring of summaries is important because it guides the development of summarizers. Scoring is also complex, as it involves multiple aspects such as fluency, grammar, and even textual entailment with the source text. However, summary scoring has not itself been treated as a machine learning task whose accuracy and robustness can be studied. In this study, we place automatic scoring in the context of regression machine learning tasks and perform evasion attacks to explore its robustness. The attack systems predict a non-summary string from each input, and these non-summary strings achieve scores competitive with good summarizers on the most popular metrics: ROUGE, METEOR, and BERTScore. The attack systems also "outperform" state-of-the-art summarization methods on ROUGE-1 and ROUGE-L, and score second-highest on METEOR. Furthermore, we observe a BERTScore backdoor: a simple trigger can score higher than any automatic summarization method. These evasion attacks indicate the low robustness of current scoring systems at the system level. We hope that highlighting these attacks will facilitate the development of more robust summary scoring.
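The attacks exploit the fact that these metrics reward token overlap with the reference rather than summary quality. As a minimal illustration (not the paper's actual attack; its generated strings are not reproduced here, and the reference and candidate below are invented), the sketch scores a scrambled, non-summary candidate with the standard `rouge-score` and `bert-score` packages:

```python
# Minimal sketch: score a degenerate, non-summary candidate against a reference
# with the same reference-based metrics the paper attacks.
# Requires: pip install rouge-score bert-score
from rouge_score import rouge_scorer
from bert_score import score as bert_score

# Invented reference summary, standing in for a CNN / Daily Mail gold summary.
reference = "Officials said the storm caused widespread damage across the region."
# Hypothetical scrambled, ungrammatical candidate built from reference tokens.
candidate = "storm officials region damage widespread said the caused the across"

# ROUGE-1 counts unigram overlap, so word order and grammar are irrelevant to it.
scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
for name, result in scorer.score(reference, candidate).items():
    print(f"{name}: F1 = {result.fmeasure:.4f}")

# BERTScore greedily matches token embeddings, so a bag of topical words
# can still score highly against the reference.
P, R, F1 = bert_score([candidate], [reference], lang="en", verbose=False)
print(f"BERTScore F1 = {F1.item():.4f}")
```

Running this shows a near-perfect ROUGE-1 F1 and a high BERTScore for a string that is plainly not a summary, which is the system-level weakness the paper's attack systems exploit at scale.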

Results from the Paper


| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Document Summarization | CNN / Daily Mail | Scrambled code + broken (alter) | ROUGE-1 | 48.18 | #1 |
| Document Summarization | CNN / Daily Mail | Scrambled code + broken (alter) | ROUGE-2 | 19.84 | #13 |
| Document Summarization | CNN / Daily Mail | Scrambled code + broken (alter) | ROUGE-L | 45.35 | #1 |
| Abstractive Text Summarization | CNN / Daily Mail | Scrambled code + broken (alter) | ROUGE-1 | 48.18 | #2 |
| Abstractive Text Summarization | CNN / Daily Mail | Scrambled code + broken (alter) | ROUGE-2 | 19.84 | #25 |
| Abstractive Text Summarization | CNN / Daily Mail | Scrambled code + broken (alter) | ROUGE-L | 45.35 | #2 |
| Abstractive Text Summarization | CNN / Daily Mail | Scrambled code + broken | ROUGE-1 | 46.71 | #5 |
| Abstractive Text Summarization | CNN / Daily Mail | Scrambled code + broken | ROUGE-2 | 20.39 | #23 |
| Abstractive Text Summarization | CNN / Daily Mail | Scrambled code + broken | ROUGE-L | 43.56 | #5 |

Methods


No methods listed for this paper.