Align, Mask and Select: A Simple Method for Incorporating Commonsense Knowledge into Language Representation Models

19 Aug 2019  ·  Zhi-Xiu Ye, Qian Chen, Wen Wang, Zhen-Hua Ling

State-of-the-art pre-trained language representation models, such as Bidirectional Encoder Representations from Transformers (BERT), rarely incorporate commonsense or other knowledge explicitly. We propose a pre-training approach for incorporating commonsense knowledge into language representation models. We construct a commonsense-related multi-choice question answering dataset for pre-training a neural language representation model. The dataset is created automatically by our proposed "align, mask, and select" (AMS) method. We also investigate different pre-training tasks. Experimental results demonstrate that pre-training models with the proposed approach, followed by fine-tuning, achieves significant improvements over previous state-of-the-art models on two commonsense-related benchmarks: CommonsenseQA and the Winograd Schema Challenge. We also observe that, after the proposed pre-training, fine-tuned models maintain performance comparable to the original BERT models on other NLP tasks, such as sentence classification and natural language inference. These results verify that the proposed approach, while significantly improving commonsense-related NLP tasks, does not degrade the model's general language representation capabilities.
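For illustration only, the sketch below shows one way an "align, mask, and select" pipeline could turn a commonsense knowledge triple and a corpus sentence into a multi-choice question. The triple format, the substring-based alignment heuristic, and the distractor-selection rule (other tail concepts sharing the same head and relation) are assumptions made for this example based on the abstract's description, not the authors' exact procedure.

```python
import random
from typing import List, Optional, Tuple

Triple = Tuple[str, str, str]  # (head concept, relation, tail concept)


def align(triple: Triple, corpus: List[str]) -> Optional[str]:
    """Return the first corpus sentence that mentions both concepts of the triple."""
    head, _, tail = triple
    for sentence in corpus:
        lowered = sentence.lower()
        if head in lowered and tail in lowered:
            return sentence
    return None


def mask(sentence: str, concept: str) -> str:
    """Replace the first occurrence of the concept with a [MASK] placeholder."""
    return sentence.lower().replace(concept, "[MASK]", 1)


def select_distractors(triple: Triple, kb: List[Triple], n: int = 4) -> List[str]:
    """Pick other tail concepts sharing the same head and relation,
    so the distractors are plausible but incorrect answers."""
    head, relation, tail = triple
    candidates = [t for (h, r, t) in kb if h == head and r == relation and t != tail]
    return random.sample(candidates, min(n, len(candidates)))


def build_example(triple: Triple, corpus: List[str], kb: List[Triple]) -> Optional[dict]:
    """Combine align, mask, and select into one multi-choice QA example."""
    sentence = align(triple, corpus)
    if sentence is None:
        return None
    _, _, tail = triple
    question = mask(sentence, tail)  # mask the tail concept in the aligned sentence
    options = select_distractors(triple, kb) + [tail]
    random.shuffle(options)
    return {"question": question, "options": options, "answer": tail}


if __name__ == "__main__":
    # Toy knowledge base and corpus, for illustration only.
    kb = [
        ("bird", "CapableOf", "fly"),
        ("bird", "CapableOf", "sing"),
        ("bird", "CapableOf", "build a nest"),
    ]
    corpus = ["Most birds can fly south for the winter."]
    print(build_example(("bird", "CapableOf", "fly"), corpus, kb))
```

In the paper, examples constructed this way serve as a multi-choice question answering pre-training task on top of BERT before fine-tuning on the downstream commonsense benchmarks.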

Task                     Dataset         Model          Metric     Value   Global Rank
Common Sense Reasoning   CommonsenseQA   BERT_CSlarge   Accuracy   62.2    #26
