Multi-Task Deep Neural Networks for Natural Language Understanding

ACL 2019 · Xiaodong Liu, Pengcheng He, Weizhu Chen, Jianfeng Gao

In this paper, we present a Multi-Task Deep Neural Network (MT-DNN) for learning representations across multiple natural language understanding (NLU) tasks. MT-DNN not only leverages large amounts of cross-task data, but also benefits from a regularization effect that leads to more general representations in order to adapt to new tasks and domains...
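The regularization effect the abstract mentions comes from training one shared encoder on mini-batches drawn from several tasks, each with its own output head. A minimal NumPy sketch of that shared-encoder/task-head pattern is below; the dimensions, task names, and single-layer "encoder" are illustrative stand-ins, not the paper's BERT-based architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative toy setup: one shared linear layer feeds task-specific
# heads, mirroring MT-DNN's shared-lower-layers design. The sizes and
# task names here are hypothetical, not the paper's.
d_in, d_hid = 8, 4
W_shared = rng.normal(scale=0.1, size=(d_in, d_hid))   # shared across tasks
heads = {
    "nli": rng.normal(scale=0.1, size=(d_hid, 3)),     # 3-way classification
    "sst": rng.normal(scale=0.1, size=(d_hid, 2)),     # binary sentiment
}
W0 = W_shared.copy()                                   # snapshot for comparison

def forward(x, task):
    """Shared representation, then a softmax over the task's classes."""
    h = np.tanh(x @ W_shared)
    logits = h @ heads[task]
    e = np.exp(logits - logits.max(-1, keepdims=True))
    return h, e / e.sum(-1, keepdims=True)

def train_step(x, y, task, lr=0.1):
    """One SGD step on cross-entropy; gradients flow into the shared encoder."""
    global W_shared
    h, p = forward(x, task)
    g_logits = p.copy()
    g_logits[np.arange(len(y)), y] -= 1.0
    g_logits /= len(y)
    g_h = g_logits @ heads[task].T * (1.0 - h**2)      # tanh backprop
    heads[task] -= lr * (h.T @ g_logits)               # task-specific update
    W_shared -= lr * (x.T @ g_h)                       # cross-task signal lands here

# Batches from different tasks update the same W_shared, which is where
# the multi-task regularization effect comes from.
for task, n_cls in [("nli", 3), ("sst", 2)]:
    x = rng.normal(size=(16, d_in))
    y = rng.integers(0, n_cls, size=16)
    train_step(x, y, task)
```

In the full model the shared layers are a pretrained BERT encoder and each GLUE-style task contributes its own head and loss; the alternating-batch training loop is the same idea at scale.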

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Linguistic Acceptability | CoLA | MT-DNN | Accuracy | 68.4% | #4 |
| Natural Language Inference | MultiNLI | MT-DNN | Matched | 86.7 | #10 |
| | | | Mismatched | 86.0 | #7 |
| Paraphrase Identification | Quora Question Pairs | MT-DNN | Accuracy | 89.6 | #1 |
| | | | F1 | 72.4 | #1 |
| Natural Language Inference | SciTail | MT-DNN | Accuracy | 94.1 | #1 |
| Natural Language Inference | SNLI | MT-DNN | % Test Accuracy | 91.6 | #2 |
| | | | % Train Accuracy | 97.2 | #4 |
| | | | Parameters | 330M | #2 |
| Sentiment Analysis | SST-2 Binary classification | MT-DNN | Accuracy | 95.6 | #7 |
