Motivated by this example, we present a simple method for finding phrases in text, and show that learning good vector representations for millions of phrases is possible.
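This snippet describes a simple phrase-finding method. A minimal sketch of one common approach consistent with it — scoring bigrams by discounted co-occurrence and keeping those above a threshold — is shown below; the `delta` and `threshold` values are illustrative assumptions, not the paper's settings:

```python
from collections import Counter

def find_phrases(tokens, delta=1, threshold=0.2):
    """Score each bigram as (count(ab) - delta) / (count(a) * count(b)).
    Bigrams scoring above the threshold are treated as phrases."""
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    phrases = {}
    for (a, b), c_ab in bigrams.items():
        score = (c_ab - delta) / (unigrams[a] * unigrams[b])
        if score > threshold:
            phrases[(a, b)] = score
    return phrases

corpus = ("new york is big . new york is busy . "
          "boston is big . new york wins .").split()
print(find_phrases(corpus))  # only ("new", "york") passes on this toy corpus
```

The discount `delta` suppresses rare bigrams that co-occur by chance; in practice the procedure is run over a large corpus, merged phrases are re-tokenized, and the pass is repeated to find longer phrases.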
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration.
Ranked #2 on Multimodal Machine Translation on Multi30K (BLEU (DE-EN) metric)
We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers.
Ranked #1 on Named Entity Recognition on CoNLL 2003 (English)
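BERT is pre-trained in part by masking random input tokens and predicting them from context on both sides. A simplified masking sketch is below (the real scheme also sometimes keeps the original token or substitutes a random one; `mask_prob` and the token names here are illustrative):

```python
import random

def mask_tokens(tokens, mask_prob=0.15, mask_token="[MASK]", seed=0):
    """Randomly replace tokens with [MASK]; a bidirectional model is
    trained to predict the originals from left AND right context."""
    rng = random.Random(seed)  # seeded for reproducibility in this sketch
    masked, targets = [], {}
    for i, tok in enumerate(tokens):
        if rng.random() < mask_prob:
            masked.append(mask_token)
            targets[i] = tok  # remember the original word at this position
        else:
            masked.append(tok)
    return masked, targets

print(mask_tokens("the cat sat on the mat".split(), mask_prob=0.5))
```

Predicting a masked word from both directions at once is what distinguishes this objective from the left-to-right language modeling used by earlier pre-training approaches.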
Neural machine translation is a recently proposed approach to machine translation.
Ranked #4 on Dialogue Generation on Persona-Chat (using extra training data)
In this paper, we propose a novel neural network model called RNN Encoder-Decoder that consists of two recurrent neural networks (RNNs).
Ranked #47 on Machine Translation on WMT2014 English-French
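The RNN Encoder-Decoder idea — one RNN compresses the source sequence into a fixed-length context vector, a second RNN generates the target sequence conditioned on it — can be sketched with plain tanh cells in NumPy. Note this is a structural sketch only: the paper itself uses gated (GRU-style) units, and all sizes and weights here are illustrative and untrained:

```python
import numpy as np

rng = np.random.default_rng(0)
V, H = 10, 8  # vocabulary size, hidden size (illustrative)

W_enc = rng.normal(0, 0.1, (H, H)); U_enc = rng.normal(0, 0.1, (H, V))
W_dec = rng.normal(0, 0.1, (H, H)); U_dec = rng.normal(0, 0.1, (H, V))
W_out = rng.normal(0, 0.1, (V, H))

def one_hot(i):
    v = np.zeros(V); v[i] = 1.0
    return v

def encode(src_ids):
    """Run the encoder RNN; the final hidden state summarizes the source."""
    h = np.zeros(H)
    for i in src_ids:
        h = np.tanh(W_enc @ h + U_enc @ one_hot(i))
    return h

def decode(context, steps, bos=0):
    """Greedily emit target tokens conditioned on the encoder's context."""
    h, tok, out = context, bos, []
    for _ in range(steps):
        h = np.tanh(W_dec @ h + U_dec @ one_hot(tok))
        tok = int(np.argmax(W_out @ h))
        out.append(tok)
    return out

print(decode(encode([1, 2, 3]), steps=4))  # untrained: arbitrary token ids
```

In the paper the two networks are trained jointly to maximize the conditional probability of the target sequence given the source.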
This paper explores a simple and efficient baseline for text classification.
Ranked #1 on Sentiment Analysis on Sogou News
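The baseline described here represents a document by averaging word-level features and feeding them to a linear classifier. A minimal sketch of that averaging-plus-linear-layer structure follows; the vocabulary, dimensions, and random (untrained) weights are illustrative assumptions, and the actual baseline also uses n-gram features and trains the weights:

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = {w: i for i, w in enumerate(
    "good great fine bad awful poor movie film".split())}
D, C = 4, 2                                # embedding dim, classes (illustrative)
E = rng.normal(0, 0.1, (len(vocab), D))    # word embedding table
W = rng.normal(0, 0.1, (C, D))             # linear classifier weights

def predict(text):
    """Average the word embeddings, then apply a linear layer + softmax."""
    ids = [vocab[w] for w in text.split() if w in vocab]
    feat = E[ids].mean(axis=0)
    logits = W @ feat
    p = np.exp(logits - logits.max())      # numerically stable softmax
    return p / p.sum()

print(predict("good movie"))  # untrained: near-uniform class probabilities
```

Because the document representation is just an average, both training and inference are far cheaper than with deep sequence models, which is the point of the baseline.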
Tasks: Emotion Recognition in Conversation, General Classification
Community Question-Answering websites, such as StackOverflow and Quora, expect users to follow specific guidelines in order to maintain content quality.
Ranked #1 on Question Quality Assessment on CrowdSource QA
We propose two novel model architectures for computing continuous vector representations of words from very large data sets.
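One of the two architectures referenced here, Skip-gram, learns word vectors by predicting each word's neighbors within a context window. A minimal sketch of how its (center, context) training pairs are generated — the window size is a tunable hyperparameter:

```python
def skipgram_pairs(tokens, window=2):
    """For each center word, emit (center, context) pairs within the
    window; these pairs are the training signal for Skip-gram."""
    pairs = []
    for i, center in enumerate(tokens):
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                pairs.append((center, tokens[j]))
    return pairs

print(skipgram_pairs(["the", "quick", "brown", "fox"], window=1))
# → [('the', 'quick'), ('quick', 'the'), ('quick', 'brown'),
#    ('brown', 'quick'), ('brown', 'fox'), ('fox', 'brown')]
```

The other architecture, CBOW, inverts this setup: it predicts the center word from the averaged representations of its context words.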