Sentence Completion

45 papers with code • 1 benchmark • 2 datasets

Sentence completion is the task of choosing the word, phrase, or continuation that most plausibly completes a given sentence.

The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning

kaistai/cot-collection 23 May 2023

Furthermore, we show that instruction tuning with CoT Collection allows LMs to possess stronger few-shot learning capabilities on 4 domain-specific tasks, resulting in an improvement of +2.24% (Flan-T5 3B) and +2.37% (Flan-T5 11B), even outperforming ChatGPT utilizing demonstrations until the max length by a +13.98% margin.

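To make the chain-of-thought fine-tuning recipe above concrete, here is a minimal sketch of how rationale-augmented training pairs are typically formatted; the field names and the verbalization templates are illustrative assumptions, not the CoT Collection's actual schema.

```python
# Hypothetical sketch of CoT-style instruction-tuning data formatting.
# The field names and the "Let's think step by step" / "Therefore, ..."
# templates are assumptions for illustration, not the CoT Collection schema.

def format_cot_example(instruction: str, rationale: str, answer: str) -> dict:
    """Build a (source, target) pair whose target contains the rationale
    followed by the final answer, so the model learns to produce both."""
    source = f"{instruction}\nLet's think step by step."
    target = f"{rationale}\nTherefore, the answer is {answer}."
    return {"source": source, "target": target}

example = format_cot_example(
    instruction="Complete the sentence: The sun rises in the ____.",
    rationale="The sun appears over the eastern horizon each morning.",
    answer="east",
)
print(example["source"])
print(example["target"])
```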

PaLM 2 Technical Report

eternityyw/tram-benchmark 17 May 2023

Through extensive evaluations on English and multilingual language and reasoning tasks, we demonstrate that PaLM 2 has significantly improved quality on downstream tasks across different model sizes, while simultaneously exhibiting faster and more efficient inference compared to PaLM.

LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions

mbzuai-nlp/lamini-lm 27 Apr 2023

The results demonstrate that our proposed LaMini-LM models are comparable to competitive baselines, while being much smaller in size.

GPT-4 Technical Report

openai/evals 15 Mar 2023

We report the development of GPT-4, a large-scale, multimodal model which can accept image and text inputs and produce text outputs.

LLaMA: Open and Efficient Foundation Language Models

huggingface/transformers 27 Feb 2023

We introduce LLaMA, a collection of foundation language models ranging from 7B to 65B parameters.

Exploring the Benefits of Training Expert Language Models over Instruction Tuning

joeljang/elm 7 Feb 2023

Recently, Language Models (LMs) instruction-tuned on multiple tasks, an approach known as multitask-prompted fine-tuning (MT), have shown the capability to generalize to unseen tasks.

Crosslingual Generalization through Multitask Finetuning

bigscience-workshop/xmtf 3 Nov 2022

We find that finetuning large multilingual language models on English tasks with English prompts allows for task generalization to non-English languages that appear only in the pretraining corpus.

Two is Better than Many? Binary Classification as an Effective Approach to Multi-Choice Question Answering

declare-lab/team 29 Oct 2022

We show the efficacy of our proposed approach in different tasks -- abductive reasoning, commonsense question answering, science question answering, and sentence completion.

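The approach named in the title above, scoring each answer option independently with a binary classifier and picking the highest-scoring one, can be sketched as follows; the roberta-base checkpoint and the freshly initialized two-way head are assumptions for illustration, not the paper's exact model.

```python
# Hypothetical sketch of multi-choice QA as binary classification: each
# (context, option) pair is scored independently and the highest-scoring
# option wins. roberta-base with a fresh 2-way head is an illustrative
# stand-in; in practice the head is fine-tuned on labeled pairs first.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=2)
model.eval()

def score_option(context: str, option: str) -> float:
    """Return the (untrained, illustrative) probability that `option` fits `context`."""
    enc = tokenizer(context, option, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**enc).logits
    return logits.softmax(dim=-1)[0, 1].item()

context = "She plugged in the kettle because she wanted to"
options = ["make tea", "paint the fence", "read a map"]
scores = [score_option(context, o) for o in options]
print(options[max(range(len(options)), key=scores.__getitem__)])
```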

DiscoSense: Commonsense Reasoning with Discourse Connectives

prajjwal1/discosense 22 Oct 2022

We present DiscoSense, a benchmark for commonsense reasoning via understanding a wide variety of discourse connectives.

Task Compass: Scaling Multi-task Pre-training with Task Prefix

cooelf/compassmtl 12 Oct 2022

Leveraging task-aware annotated data as supervised signals to assist with self-supervised learning on large-scale unlabeled data has become a new trend in pre-training language models.

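To illustrate the task-prefix idea from the paper above, i.e. tagging every training example with its source task so a single model can be trained on the whole mixture, here is a minimal sketch; the bracketed prefix format and the task names are assumptions, not CompassMTL's actual convention.

```python
# Hypothetical sketch of task-prefix formatting for multi-task training:
# every example is tagged with the name of its source task so one shared
# model can condition on the task identity. The "[task] text" format and
# the task names below are illustrative assumptions.

def add_task_prefix(task_name: str, text: str) -> str:
    return f"[{task_name}] {text}"

mixture = [
    ("sentence_completion", "The cat curled up on the ____."),
    ("nli", "premise: It is raining. hypothesis: The ground is wet."),
    ("commonsense_qa", "Where would you keep milk cold? (a) oven (b) fridge"),
]

for task, text in mixture:
    print(add_task_prefix(task, text))
```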