Scaling Sentence Embeddings with Large Language Models

31 Jul 2023 · Ting Jiang, Shaohan Huang, Zhongzhi Luan, Deqing Wang, Fuzhen Zhuang

Large language models (LLMs) have recently garnered significant interest. With in-context learning, LLMs achieve impressive results on various natural language tasks. However, applying LLMs to sentence embeddings remains an area of ongoing research. In this work, we propose an in-context learning-based method to improve sentence embedding performance. Our approach involves adapting the previous prompt-based representation method to autoregressive models, constructing a demonstration set that enables LLMs to perform in-context learning, and scaling LLMs up to different model sizes. Through extensive experiments, we show that in-context learning enables LLMs to generate high-quality sentence embeddings without any fine-tuning, achieving performance comparable to current contrastive learning methods. When scaling model size, we find that going beyond tens of billions of parameters harms performance on semantic textual similarity (STS) tasks; nevertheless, the largest model outperforms its smaller counterparts and achieves a new state-of-the-art result on transfer tasks. We also fine-tune LLMs with the current contrastive learning approach: the 2.7B OPT model, combined with our prompt-based method, surpasses the 4.8B ST5 and achieves new state-of-the-art results on STS tasks. Our code is available at https://github.com/kongds/scaling_sentemb.
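
The prompt-based representation method at the heart of the paper (PromptEOL) wraps a sentence in a template that asks the model to compress its meaning "in one word" and takes the final-layer hidden state of the last prompt token as the embedding. Below is a minimal sketch of that extraction with Hugging Face transformers; the template wording follows the paper, while the checkpoint choice and the cosine-similarity usage at the end are illustrative assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# One of the OPT sizes studied in the paper; substitute a smaller
# checkpoint such as "facebook/opt-125m" for a quick local test.
MODEL_NAME = "facebook/opt-2.7b"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def embed(sentence: str) -> torch.Tensor:
    # PromptEOL template: the model must summarize the sentence in its
    # next token, so the last hidden state concentrates its meaning.
    prompt = f'This sentence : "{sentence}" means in one word:"'
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs, output_hidden_states=True)
    # Final-layer hidden state of the last prompt token is the embedding.
    return outputs.hidden_states[-1][0, -1]

# Illustrative usage: semantically close sentences should score higher.
e1 = embed("A man is playing a guitar.")
e2 = embed("Someone is strumming a guitar.")
print(torch.nn.functional.cosine_similarity(e1, e2, dim=0).item())
```

The in-context learning variant described in the abstract prepends demonstrations from the constructed demonstration set to this same template; the sketch above omits them for brevity.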

Results from the Paper


All results are for the Semantic Textual Similarity task; the metric is Spearman correlation (higher is better), with each model's global rank on the benchmark at the time of listing.

| Dataset       | Model                   | Spearman Correlation | Global Rank |
|---------------|-------------------------|----------------------|-------------|
| SICK          | PromptEOL+CSE+LLaMA-30B | 0.8238               | #2          |
| SICK          | PromptEOL+CSE+OPT-13B   | 0.8206               | #3          |
| SICK          | PromptEOL+CSE+OPT-2.7B  | 0.8129               | #5          |
| STS12         | PromptEOL+CSE+OPT-13B   | 0.8020               | #1          |
| STS12         | PromptEOL+CSE+LLaMA-30B | 0.7972               | #2          |
| STS12         | PromptEOL+CSE+OPT-2.7B  | 0.7949               | #4          |
| STS13         | PromptEOL+CSE+LLaMA-30B | 0.9025               | #4          |
| STS13         | PromptEOL+CSE+OPT-13B   | 0.9024               | #5          |
| STS13         | PromptEOL+CSE+OPT-2.7B  | 0.8964               | #6          |
| STS14         | PromptEOL+CSE+LLaMA-30B | 0.8585               | #2          |
| STS14         | PromptEOL+CSE+OPT-13B   | 0.8534               | #5          |
| STS14         | PromptEOL+CSE+OPT-2.7B  | 0.8480               | #6          |
| STS15         | PromptEOL+CSE+LLaMA-30B | 0.9004               | #2          |
| STS15         | PromptEOL+CSE+OPT-13B   | 0.8952               | #4          |
| STS15         | PromptEOL+CSE+OPT-2.7B  | 0.8951               | #5          |
| STS16         | PromptEOL+CSE+LLaMA-30B | 0.8627               | #4          |
| STS16         | PromptEOL+CSE+OPT-2.7B  | 0.8591               | #5          |
| STS16         | PromptEOL+CSE+OPT-13B   | 0.8590               | #6          |
| STS Benchmark | PromptEOL+CSE+LLaMA-30B | 0.8914               | #9          |
| STS Benchmark | PromptEOL+CSE+OPT-13B   | 0.8856               | #13         |
| STS Benchmark | PromptEOL+CSE+OPT-2.7B  | 0.8833               | #14         |
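
For context on the metric: STS evaluation scores each sentence pair with the model (typically the cosine similarity between the two embeddings) and reports the Spearman rank correlation against human similarity ratings. A small sketch, reusing the `embed` helper from the earlier snippet; the pairs and gold scores below are made-up illustrations, not taken from any STS dataset.

```python
from scipy.stats import spearmanr
import torch

# Hypothetical sentence pairs with made-up human ratings (0-5 STS scale).
pairs = [
    ("A man is playing a guitar.", "Someone is strumming a guitar."),
    ("A child is riding a bike.", "A kid rides a bicycle."),
    ("A dog runs in the park.", "The stock market fell sharply."),
]
gold = [4.5, 4.8, 0.2]

# Model similarity scores: cosine similarity between PromptEOL embeddings,
# using the embed() helper defined in the sketch above.
sims = [
    torch.nn.functional.cosine_similarity(embed(a), embed(b), dim=0).item()
    for a, b in pairs
]

# Spearman correlation between model scores and human ratings; the values
# in the table above are this statistic computed over full STS test sets.
rho, _ = spearmanr(sims, gold)
print(f"Spearman correlation: {rho:.4f}")
```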
