Transformers have the potential to learn longer-term dependencies, but are limited by a fixed-length context in the setting of language modeling.
SOTA for Language Modelling on Hutter Prize
Konv poses a very challenging task, as the model needs to both understand the dialogue and plan over the given knowledge graph.
Machine translation is highly sensitive to the size and quality of the training data, which has led to an increasing interest in collecting and filtering large parallel corpora.
#2 best model for Cross-Lingual Bitext Mining on BUCC German-to-English
The versatile toolkit also fosters technique sharing across different text generation tasks.
Recent progress in hardware and methodology for training neural networks has ushered in a new generation of large networks trained on abundant data.
In this paper, we present a Multi-Task Deep Neural Network (MT-DNN) for learning representations across multiple natural language understanding (NLU) tasks.
SOTA for Natural Language Inference on SNLI
The Transformer is a sequence model that forgoes traditional recurrent architectures in favor of a fully attention-based approach.
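As a rough illustration (not a reproduction of any paper's implementation), the fully attention-based computation at the heart of the Transformer is scaled dot-product attention; the function name, shapes, and single-head setting below are illustrative assumptions:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Single-head attention sketch: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)          # pairwise query-key similarities
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # rows sum to 1
    return weights @ V                       # weighted average of values

# Toy usage: 2 positions, key dim 2, value dim 1 (made-up numbers).
Q = np.array([[1.0, 0.0], [0.0, 1.0]])
K = np.array([[1.0, 0.0], [0.0, 1.0]])
V = np.array([[1.0], [2.0]])
out = scaled_dot_product_attention(Q, K, V)
```

Each output row is a convex combination of the value rows, so every entry here lies between 1 and 2; in a real Transformer this is extended to multiple heads and learned projections of the inputs.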
Neural language representation models such as BERT, pre-trained on large-scale corpora, can capture rich semantic patterns from plain text and be fine-tuned to consistently improve performance on various NLP tasks.
SOTA for Relation Extraction on FewRel
Natural language understanding has recently seen a surge of progress with the use of sentence encoders like ELMo (Peters et al., 2018a) and BERT (Devlin et al., 2019) which are pretrained on variants of language modeling.