In this paper, we describe a novel approach for detecting humor in short texts using BERT sentence embeddings.
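As an illustration of this kind of pipeline (a minimal sketch, not the paper's actual system; the model name, texts, and labels below are placeholders), one can encode each short text with a pre-trained sentence encoder and fit a lightweight classifier on top:

```python
# Hypothetical sketch: embed short texts with a pre-trained sentence
# encoder, then train a simple classifier on the embeddings.
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression

texts = [
    "Why did the chicken cross the road? To get to the other side.",
    "The meeting has been rescheduled to 3 pm on Thursday.",
]
labels = [1, 0]  # 1 = humorous, 0 = not humorous (toy labels)

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed model choice
embeddings = encoder.encode(texts)                 # shape: (n_texts, dim)

clf = LogisticRegression().fit(embeddings, labels)
print(clf.predict(encoder.encode(["I used to be a banker, but I lost interest."])))
```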
The training is based on the idea that a translated sentence should be mapped to the same location in the vector space as the original sentence.
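One way to realize this objective (a minimal sketch under assumed names; `encoder` stands in for any trainable sentence encoder returning fixed-size vectors, and the actual training setup may differ) is to penalize the distance between the embeddings of each sentence and its translation:

```python
import torch.nn.functional as F

def alignment_loss(encoder, src_batch, trg_batch):
    # Embed the original sentences and their translations.
    src_emb = encoder(src_batch)  # (batch, dim)
    trg_emb = encoder(trg_batch)  # (batch, dim)
    # Zero exactly when each translation is mapped to the same point
    # in the vector space as its source sentence.
    return F.mse_loss(trg_emb, src_emb)
```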
The analysis sheds light on the relative strengths of different sentence embedding methods with respect to these low-level prediction tasks, and on the effect of the encoded vector's dimensionality on the resulting representations.
Despite the rapid development of new sentence embedding methods, comprehensive evaluations of these different techniques are still hard to find.
Recurrent neural networks (RNNs) and convolutional neural networks (CNNs) are widely used in NLP tasks to capture long-term and local dependencies, respectively.
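The contrast can be made concrete with a small PyTorch sketch (illustrative only; all sizes are arbitrary): an LSTM step conditions on the entire preceding sequence, while a 1-D convolution with kernel size 3 only sees a three-token window:

```python
import torch
import torch.nn as nn

batch, seq_len, dim = 8, 32, 128
tokens = torch.randn(batch, seq_len, dim)  # pre-computed word embeddings

# RNN: each output step depends on all earlier steps (long-term dependencies).
rnn = nn.LSTM(input_size=dim, hidden_size=dim, batch_first=True)
rnn_out, _ = rnn(tokens)                   # (batch, seq_len, dim)

# CNN: each output step depends only on a local 3-token window.
cnn = nn.Conv1d(in_channels=dim, out_channels=dim, kernel_size=3, padding=1)
cnn_out = cnn(tokens.transpose(1, 2)).transpose(1, 2)  # (batch, seq_len, dim)
```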
Books are a rich source of both fine-grained information (how a character, an object, or a scene looks) and high-level semantics (what someone is thinking and feeling, and how these states evolve through a story).
Here, we generalize the concept of average word embeddings to power mean word embeddings.
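Concretely, the power mean of word vectors x_1, ..., x_n with exponent p is ((x_1^p + ... + x_n^p) / n)^(1/p), applied per coordinate: p = 1 recovers the plain average, while p = -inf and p = +inf recover min and max pooling, and the sentence embedding concatenates several such means. A minimal NumPy sketch (function names are placeholders; non-integer exponents on negative coordinates need extra care, omitted here):

```python
import numpy as np

def power_mean(word_vectors, p):
    """Power mean pooling over the word axis; p = 1 is the plain average."""
    w = np.asarray(word_vectors, dtype=float)  # shape: (n_words, dim)
    if np.isposinf(p):
        return w.max(axis=0)                   # p -> +inf: max pooling
    if np.isneginf(p):
        return w.min(axis=0)                   # p -> -inf: min pooling
    return np.mean(w ** p, axis=0) ** (1.0 / p)

def sentence_embedding(word_vectors, ps=(float("-inf"), 1.0, float("inf"))):
    # Concatenating power means for several p values gives the representation.
    return np.concatenate([power_mean(word_vectors, p) for p in ps])
```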
Yet it remains an open problem to generate a high-quality sentence representation from BERT-based word models.
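A widely used baseline (not a resolution of that open problem) is to mean-pool the final-layer token embeddings while masking out padding; a sketch using the Hugging Face transformers API:

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

batch = tokenizer(["A short input sentence."], return_tensors="pt", padding=True)
with torch.no_grad():
    hidden = model(**batch).last_hidden_state        # (batch, tokens, 768)

mask = batch["attention_mask"].unsqueeze(-1)         # (batch, tokens, 1)
sentence_vec = (hidden * mask).sum(dim=1) / mask.sum(dim=1)  # masked mean pooling
```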
We show that the sentence embeddings learned in this way can be used in a wide variety of transfer learning tasks, outperforming InferSent on 7 out of 10 and SkipThought on 8 out of 9 SentEval sentence embedding evaluation tasks.