Adversarial Training for Aspect-Based Sentiment Analysis with BERT

30 Jan 2020 · Akbar Karimi, Leonardo Rossi, Andrea Prati

Aspect-Based Sentiment Analysis (ABSA) deals with the extraction of sentiments and their targets. Collecting enough labeled data for neural networks to generalize well on this task can be laborious and time-consuming. As an alternative, data similar to real-world examples can be produced artificially through an adversarial process carried out in the embedding space. Although these examples are not real sentences, they have been shown to act as a regularization method that makes neural networks more robust. In this work, we apply adversarial training, put forward by Goodfellow et al. (2014), to the post-trained BERT (BERT-PT) language model proposed by Xu et al. (2019) on the two major tasks of sentiment analysis: Aspect Extraction and Aspect Sentiment Classification. After improving the results of post-trained BERT through an ablation study, we propose a novel architecture called BERT Adversarial Training (BAT) to utilize adversarial training in ABSA. The proposed model outperforms post-trained BERT on both tasks. To the best of our knowledge, this is the first study on the application of adversarial training in ABSA.
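
To make the adversarial process concrete, the sketch below shows one common way to generate such embedding-space perturbations with a single gradient step in the spirit of Goodfellow et al. (2014). It is a minimal illustration, not the authors' BAT implementation: the Hugging Face classifier, the epsilon value, the L2 gradient normalization, and the summed clean/adversarial loss are all assumptions made for the example.

```python
# Minimal sketch of embedding-space adversarial training in the spirit of
# Goodfellow et al. (2014), assuming a PyTorch / Hugging Face BERT classifier.
# epsilon, the gradient normalization, and the loss combination are illustrative.
import torch
from transformers import BertForSequenceClassification

model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=3)

def adversarial_training_loss(input_ids, attention_mask, labels, epsilon=1.0):
    # Look up the word embeddings; keep them in the autograd graph so the loss
    # can be differentiated with respect to them.
    embeds = model.bert.embeddings.word_embeddings(input_ids)

    # Clean forward pass on the original embeddings.
    clean = model(inputs_embeds=embeds, attention_mask=attention_mask, labels=labels)

    # The gradient of the clean loss w.r.t. the embeddings gives the direction
    # of the adversarial perturbation.
    grad, = torch.autograd.grad(clean.loss, embeds, retain_graph=True)
    perturbation = epsilon * grad / (grad.norm(p=2, dim=-1, keepdim=True) + 1e-8)

    # Second pass on the perturbed embeddings; the perturbation is a constant.
    adv = model(inputs_embeds=embeds + perturbation.detach(),
                attention_mask=attention_mask, labels=labels)

    # Training on the combined loss lets the adversarial examples act as a regularizer.
    return clean.loss + adv.loss
```

The returned loss can be backpropagated and optimized as usual. In the paper's setting the same idea is applied on top of BERT-PT for both aspect extraction (a token-level task) and aspect sentiment classification; the sequence-classification head above is used here only for brevity.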

Datasets

SemEval-2014 Task-4

Task                                    Dataset              Model  Metric Name                     Metric Value  Global Rank
Aspect Extraction                       SemEval-2014 Task-4  BAT    Laptop (F1)                     85.57         #4
Aspect Extraction                       SemEval-2014 Task-4  BAT    Mean F1 (Laptop + Restaurant)   83.54         #3
Aspect Extraction                       SemEval-2014 Task-4  BAT    Restaurant (F1)                 81.50         #6
Aspect-Based Sentiment Analysis (ABSA)  SemEval-2014 Task-4  BAT    Restaurant (Acc)                86.03         #11
Aspect-Based Sentiment Analysis (ABSA)  SemEval-2014 Task-4  BAT    Laptop (Acc)                    79.35         #12
Aspect-Based Sentiment Analysis (ABSA)  SemEval-2014 Task-4  BAT    Mean Acc (Restaurant + Laptop)  82.69         #10

Methods

BERT · BERT-PT · Adversarial Training · BAT