Pre-trained word embeddings learned from unlabeled text have become a standard component of neural network architectures for NLP tasks.
Ranked #22 on Named Entity Recognition on CoNLL 2003 (English)
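A minimal sketch of the idea behind pre-trained embeddings as an input layer: tokens are mapped to rows of a fixed vector table learned beforehand from unlabeled text. The vocabulary, dimensions, and random matrix here are toy assumptions; real tables would be loaded from GloVe/word2vec files.

```python
import numpy as np

# Hypothetical toy "pre-trained" table: word -> row index in a matrix.
# In practice the rows come from vectors trained on unlabeled text.
vocab = {"<unk>": 0, "the": 1, "model": 2, "works": 3}
emb = np.random.RandomState(0).randn(len(vocab), 4)  # 4-dim vectors for brevity

def embed(tokens):
    """Look up each token's pre-trained vector, falling back to <unk>."""
    ids = [vocab.get(t, vocab["<unk>"]) for t in tokens]
    return emb[ids]

x = embed(["the", "model", "sings"])  # "sings" is out-of-vocabulary
```

The resulting matrix (one row per token) is what a downstream network consumes as its first layer.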
This paper proposes to tackle open-domain question answering using Wikipedia as the unique knowledge source: the answer to any factoid question is a text span in a Wikipedia article.
Ranked #1 on Open-Domain Question Answering on SQuAD1.1
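Since the answer to a factoid question is modeled as a text span, a reader typically scores candidate start and end positions and picks the best-scoring pair. The logits, span-length cap, and function name below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def best_span(start_logits, end_logits, max_len=15):
    """Pick the (start, end) pair maximizing start+end score, with end >= start
    and span length capped at max_len tokens (a common heuristic)."""
    best, best_score = (0, 0), -np.inf
    for i, s in enumerate(start_logits):
        for j in range(i, min(i + max_len, len(end_logits))):
            score = s + end_logits[j]
            if score > best_score:
                best, best_score = (i, j), score
    return best

# Toy scores over a 4-token passage.
start = np.array([0.1, 2.0, 0.3, 0.0])
end   = np.array([0.0, 0.1, 1.5, 0.2])
span = best_span(start, end)  # -> (1, 2)
```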
End-to-end learning of recurrent neural networks (RNNs) is an attractive solution for dialog systems; however, current techniques are data-intensive and require thousands of dialogs to learn simple behaviors.
The prevalent approach to neural machine translation relies on bi-directional LSTMs to encode the source sentence.
Ranked #8 on Machine Translation on IWSLT2015 German-English
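The bi-directional encoding idea can be sketched as two recurrent passes over the source sentence, one left-to-right and one right-to-left, whose per-token states are concatenated. For brevity this uses a plain tanh RNN cell rather than an LSTM, and all sizes and weights are toy assumptions.

```python
import numpy as np

rng = np.random.RandomState(0)
D, H = 4, 3                                # toy input and hidden sizes
Wf, Uf = rng.randn(H, D), rng.randn(H, H)  # forward-direction weights
Wb, Ub = rng.randn(H, D), rng.randn(H, H)  # backward-direction weights

def run(xs, W, U):
    """One recurrent pass; returns the hidden state at every position."""
    h, out = np.zeros(H), []
    for x in xs:
        h = np.tanh(W @ x + U @ h)
        out.append(h)
    return out

def encode(xs):
    """Concatenate forward and (re-aligned) backward states per token."""
    fwd = run(xs, Wf, Uf)
    bwd = run(xs[::-1], Wb, Ub)[::-1]
    return [np.concatenate([f, b]) for f, b in zip(fwd, bwd)]

sent = [rng.randn(D) for _ in range(5)]    # 5 source-token vectors
states = encode(sent)                      # one 2H-dim vector per token
```

Each output vector thus summarizes context from both directions around its token, which is what the decoder attends over.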
We describe an open-source toolkit for neural machine translation (NMT).
Neural sequence-to-sequence models have provided a viable new approach for abstractive text summarization (meaning they are not restricted to simply selecting and rearranging passages from the original text).
Ranked #7 on Text Summarization on PubMed
We introduce a new deep learning model for semantic role labeling (SRL) that significantly improves the state of the art, along with detailed analyses to reveal its strengths and limitations.
Ranked #2 on Predicate Detection on CoNLL 2005
Harnessing the statistical power of neural networks to perform language understanding and symbolic reasoning is difficult when it requires executing efficient discrete operations against a large knowledge base.