OPT-IML: Scaling Language Model Instruction Meta Learning through the Lens of Generalization

Recent work has shown that fine-tuning large pre-trained language models on a collection of tasks described via instructions, a.k.a. instruction-tuning, improves their zero- and few-shot generalization to unseen tasks. However, there is a limited understanding of the performance trade-offs of different decisions made during the instruction-tuning process. These decisions include the scale and diversity of the instruction-tuning benchmark, different task sampling strategies, fine-tuning with and without demonstrations, training using specialized datasets for reasoning and dialogue, and finally, the fine-tuning objectives themselves. In this paper, we characterize the effect of instruction-tuning decisions on downstream task performance when scaling both model and benchmark sizes. To this end, we create OPT-IML Bench: a large benchmark for Instruction Meta-Learning (IML) of 2000 NLP tasks consolidated into task categories from 8 existing benchmarks, and prepare an evaluation framework to measure three types of model generalization: to tasks from fully held-out categories, to held-out tasks from seen categories, and to held-out instances from seen tasks. Through the lens of this framework, we first present insights about instruction-tuning decisions as applied to OPT-30B and further exploit these insights to train OPT-IML 30B and 175B, which are instruction-tuned versions of OPT. OPT-IML demonstrates all three generalization abilities at both scales on four different evaluation benchmarks with diverse tasks and input formats -- PromptSource, FLAN, Super-NaturalInstructions, and UnifiedSKG. Not only does it significantly outperform OPT on all benchmarks, but it is also highly competitive with existing models fine-tuned on each specific benchmark. We release OPT-IML at both scales, together with the OPT-IML Bench evaluation framework.
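To make the instruction-tuning setup concrete, the sketch below shows one training step on a single instruction-formatted example: the instruction and input are concatenated with the target answer, and the standard causal-LM cross-entropy loss is applied only to the target tokens. This is a minimal illustration, not the paper's exact recipe; the prompt template, the loss masking, and the use of the base `facebook/opt-1.3b` checkpoint are assumptions made for the example.

```python
# Minimal sketch of an instruction-tuning training step for a causal LM
# (Hugging Face transformers + PyTorch). Template and masking are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "facebook/opt-1.3b"  # small stand-in; OPT-IML tunes larger OPT checkpoints
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# One instruction-formatted example: instruction + input, followed by the target.
prompt = (
    "Read the passage and answer yes or no.\n"
    "Passage: The Amazon is the largest rainforest on Earth.\n"
    "Question: Is the Amazon a rainforest?\n"
    "Answer:"
)
target = " yes"

prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
full_ids = tokenizer(prompt + target, return_tensors="pt").input_ids

# Next-token cross-entropy, computed only on the target span:
# prompt positions are masked out of the loss with the ignore index -100.
labels = full_ids.clone()
labels[:, : prompt_ids.shape[1]] = -100

outputs = model(input_ids=full_ids, labels=labels)
outputs.loss.backward()  # an optimizer step over many such examples would follow
```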


Results from the Paper


| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Question Answering | BoolQ | OPT 30B (0-shot) | Accuracy | 64 | # 44 |
| Question Answering | BoolQ | OPT-IML 175B | Accuracy | 71.4 | # 37 |
| Question Answering | BoolQ | OPT 175B | Accuracy | 60.1 | # 52 |
| Question Answering | BoolQ | OPT-IML 1.3B (0-shot) | Accuracy | 61.5 | # 48 |
| Question Answering | BoolQ | OPT-IML 30B | Accuracy | 66.9 | # 40 |
| Question Answering | BoolQ | OPT 1.3B (0-shot) | Accuracy | 60.5 | # 50 |
| Natural Language Inference | RTE | OPT-IML 1.3B | Accuracy | 66.8% | # 63 |
| Natural Language Inference | RTE | OPT 175B | Accuracy | 60.3% | # 72 |
| Natural Language Inference | RTE | OPT-IML 30B | Accuracy | 83.8% | # 31 |
| Natural Language Inference | RTE | OPT 30B | Accuracy | 58.1% | # 76 |
| Natural Language Inference | RTE | OPT-IML 175B | Accuracy | 84.8% | # 26 |
| Natural Language Inference | RTE | OPT 1.3B | Accuracy | 54.2% | # 85 |
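For the 0-shot rows above, accuracy with a causal LM is typically computed by scoring each candidate answer's log-likelihood under the model given the prompt and picking the highest-scoring one. The sketch below illustrates that scoring scheme for a yes/no question; the exact prompts and evaluation protocol behind the reported numbers may differ, and the `facebook/opt-iml-1.3b` checkpoint id is an assumption.

```python
# Hedged sketch: 0-shot classification by comparing answer log-likelihoods
# under a causal LM. Prompt format and checkpoint id are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "facebook/opt-iml-1.3b"  # assumed released checkpoint id
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def answer_logprob(prompt: str, answer: str) -> float:
    """Sum of log-probabilities of the answer tokens, conditioned on the prompt."""
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    full_ids = tokenizer(prompt + answer, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    # The log-prob of token t is read from the logits at position t - 1.
    log_probs = torch.log_softmax(logits[:, :-1], dim=-1)
    answer_positions = range(prompt_ids.shape[1] - 1, full_ids.shape[1] - 1)
    return sum(
        log_probs[0, pos, full_ids[0, pos + 1]].item() for pos in answer_positions
    )

prompt = (
    "Passage: The Amazon is the largest rainforest on Earth.\n"
    "Question: Is the Amazon a rainforest?\n"
    "Answer:"
)
prediction = max([" yes", " no"], key=lambda a: answer_logprob(prompt, a))
print(prediction)
```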

Methods