We present models for encoding sentences into embedding vectors that specifically target transfer learning to other NLP tasks. The models are efficient and result in accurate performance on diverse transfer tasks. Two variants of the encoding models allow for trade-offs between accuracy and compute resources. For both variants, we investigate and report the relationship between model complexity, resource consumption, the availability of transfer task training data, and task performance. Comparisons are made with baselines that use word level transfer learning via pretrained word embeddings as well as baselines that do not use any transfer learning. We find that transfer learning using sentence embeddings tends to outperform word level transfer. With transfer learning via sentence embeddings, we observe surprisingly good performance with minimal amounts of supervised training data for a transfer task. We obtain encouraging results on Word Embedding Association Tests (WEAT) targeted at detecting model bias. Our pre-trained sentence encoding models are made freely available for download and on TF Hub.
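The abstract notes that the pre-trained encoders are distributed through TF Hub. A minimal sketch of loading one of the released modules and encoding a batch of sentences is shown below; the specific module handle and the 512-dimensional output shape are assumptions based on the publicly listed releases, not details stated in this page.

```python
# Sketch: obtain fixed-length sentence embeddings from a pre-trained
# Universal Sentence Encoder module on TF Hub (module handle assumed).
import tensorflow_hub as hub

# Assumed handle for the lighter (DAN-based) encoder variant on tfhub.dev.
embed = hub.load("https://tfhub.dev/google/universal-sentence-encoder/4")

sentences = [
    "The quick brown fox jumps over the lazy dog.",
    "Sentence embeddings can be reused across many NLP transfer tasks.",
]

# The module is callable on a list of strings and returns one embedding
# vector per sentence (expected shape: [2, 512] float32).
embeddings = embed(sentences)
print(embeddings.shape)
```

These embeddings are the inputs that downstream task models (e.g. the USE_T+CNN and USE_T+DAN classifiers in the results table below) would consume for transfer learning.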


Results from the Paper


| Task | Dataset | Model | Metric | Value | Global Rank |
|------|---------|-------|--------|-------|-------------|
| Sentiment Analysis | CR | USE_T+CNN (w2v w.e.) | Accuracy | 87.45 | #6 |
| Sentiment Analysis | MPQA | USE_T+DAN (w2v w.e.) | Accuracy | 88.14 | #4 |
| Sentiment Analysis | MR | USE_T+CNN | Accuracy | 81.59 | #9 |
| Conversational Response Selection | PolyAI Reddit | USE | 1-of-100 Accuracy | 47.7% | #4 |
| Sentiment Analysis | SST-2 Binary classification | USE_T+CNN (lrn w.e.) | Accuracy | 87.21 | #72 |
| Semantic Textual Similarity | STS Benchmark | USE_T | Pearson Correlation | 0.782 | #29 |
| Subjectivity Analysis | SUBJ | USE | Accuracy | 93.90 | #10 |
| Text Classification | TREC-6 | USE_T+CNN | Error | 1.93 | #2 |
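The STS Benchmark row above reports Pearson correlation between human judgments and model similarity scores for sentence pairs. As an illustration only (this is not necessarily the paper's exact scoring function), a sentence pair can be scored with cosine similarity of the two USE embeddings; the module handle is the same assumed one as above.

```python
# Sketch: score the semantic similarity of a sentence pair via cosine
# similarity of their Universal Sentence Encoder embeddings.
import numpy as np
import tensorflow_hub as hub

embed = hub.load("https://tfhub.dev/google/universal-sentence-encoder/4")  # assumed handle

a, b = embed([
    "A man is playing a guitar.",
    "Someone is playing an instrument.",
]).numpy()

# Cosine similarity: values near 1.0 indicate high semantic similarity.
score = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
print(f"similarity: {score:.3f}")
```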

Methods


No methods listed for this paper.