Language model pretraining has led to significant performance gains, but careful comparison between different approaches is challenging.
Recently, pre-trained models have achieved state-of-the-art results on various language understanding tasks, which indicates that pre-training on large-scale corpora may play a crucial role in natural language processing.
#2 best model for Semantic Textual Similarity on the STS Benchmark
In this paper, we present a Multi-Task Deep Neural Network (MT-DNN) for learning representations across multiple natural language understanding (NLU) tasks.
SOTA for Linguistic Acceptability on CoLA
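To make the multi-task setup concrete, the sketch below shows the general pattern MT-DNN follows: a single shared text encoder whose representations feed one task-specific output head per NLU task. This is a minimal illustration, not the authors' implementation; the toy Transformer encoder, the `vocab_size`, and the task names (`cola`, `sts_b`) are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MultiTaskModel(nn.Module):
    """Minimal multi-task sketch: one shared encoder, one head per task."""

    def __init__(self, vocab_size, hidden_dim, task_num_labels):
        super().__init__()
        # Shared encoder (a small stand-in for the BERT encoder MT-DNN uses)
        self.embedding = nn.Embedding(vocab_size, hidden_dim)
        layer = nn.TransformerEncoderLayer(
            d_model=hidden_dim, nhead=4, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        # One task-specific output layer per NLU task
        self.heads = nn.ModuleDict({
            task: nn.Linear(hidden_dim, n_labels)
            for task, n_labels in task_num_labels.items()
        })

    def forward(self, token_ids, task):
        hidden = self.encoder(self.embedding(token_ids))
        pooled = hidden[:, 0]  # first-token pooling as a sentence representation
        return self.heads[task](pooled)


# Hypothetical usage: a 2-way classification head for CoLA-style
# acceptability and a 1-dim regression head for STS-B-style similarity.
model = MultiTaskModel(
    vocab_size=30000,
    hidden_dim=128,
    task_num_labels={"cola": 2, "sts_b": 1},
)
batch = torch.randint(0, 30000, (8, 16))  # 8 sequences of 16 token ids
logits = model(batch, task="cola")        # shape: (8, 2)
```

The shared encoder is what lets supervision from one task regularize the representations used by the others; during training, batches from different tasks are typically interleaved, each updating the shared encoder plus only its own head.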