Sentence Completion

45 papers with code • 1 benchmark • 2 datasets

Sentence completion is the task of predicting or selecting the most plausible continuation of a partial sentence, for example choosing the correct ending among several candidates as in HellaSwag or CODAH.

Most implemented papers

Factuality Enhanced Language Models for Open-Ended Text Generation

nayeon7lee/factualityprompt 9 Jun 2022

In this work, we measure and improve the factual accuracy of large-scale LMs for open-ended text generation.
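One decoding-time idea explored in this line of work is to decay the nucleus (top-p) threshold as a sentence progresses, so that later tokens are sampled more conservatively. A minimal sketch of such a decay schedule follows; the parameter values are illustrative, not the paper's exact settings.

```python
def nucleus_p_schedule(token_index_in_sentence, p=0.9, decay=0.9, floor=0.3):
    """Decaying top-p threshold: permissive at the start of a sentence,
    greedier as it goes on, never dropping below `floor`.
    Illustrative re-implementation of the decay idea, not the authors' code."""
    return max(floor, p * decay ** token_index_in_sentence)

# Example: thresholds for the first six tokens of a sentence.
print([round(nucleus_p_schedule(t), 3) for t in range(6)])
# -> [0.9, 0.81, 0.729, 0.656, 0.59, 0.531]
```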

Recurrent Memory Networks for Language Modeling

ketranm/RMN NAACL 2016

In this paper, we propose Recurrent Memory Network (RMN), a novel RNN architecture that not only amplifies the power of RNNs but also facilitates our understanding of their internal functioning and allows us to discover underlying patterns in data.
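Roughly, the architecture interleaves a standard recurrent layer with an attention-based memory block over the most recent word embeddings. Below is a simplified toy sketch of such a block in PyTorch, with made-up dimensions, and not claiming to match the authors' implementation.

```python
import torch
import torch.nn as nn

class MemoryBlock(nn.Module):
    """Attend over the embeddings of the last `memory_size` words,
    conditioned on the current recurrent hidden state (simplified sketch)."""
    def __init__(self, hidden_dim, memory_size=15):
        super().__init__()
        self.memory_size = memory_size
        self.attn = nn.Linear(hidden_dim * 2, 1)

    def forward(self, hidden, recent_embeds):
        # hidden: (batch, hidden_dim); recent_embeds: (batch, memory_size, hidden_dim)
        expanded = hidden.unsqueeze(1).expand_as(recent_embeds)
        scores = self.attn(torch.cat([recent_embeds, expanded], dim=-1)).squeeze(-1)
        weights = torch.softmax(scores, dim=-1)                   # (batch, memory_size)
        summary = (weights.unsqueeze(-1) * recent_embeds).sum(1)  # (batch, hidden_dim)
        return hidden + summary  # combine the memory summary with the RNN state

# Toy usage: batch of 2, hidden size 8, memory over the last 15 word embeddings.
block = MemoryBlock(hidden_dim=8)
out = block(torch.randn(2, 8), torch.randn(2, 15, 8))
print(out.shape)  # torch.Size([2, 8])
```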

CODAH: An Adversarially Authored Question-Answer Dataset for Common Sense

Websail-NU/AQuA 8 Apr 2019

To produce a more difficult dataset, we introduce a novel procedure for question acquisition in which workers author questions designed to target weaknesses of state-of-the-art neural question answering systems.

HellaSwag: Can a Machine Really Finish Your Sentence?

facebookresearch/text_characterization_toolkit ACL 2019

In this paper, we show that commonsense inference still proves difficult for even state-of-the-art models, by presenting HellaSwag, a new challenge dataset.
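Sentence-completion benchmarks of this kind are commonly scored by asking a language model which candidate ending it assigns the highest likelihood. A minimal sketch with Hugging Face transformers and GPT-2 follows; the context and endings are made up, and scoring the average log-likelihood of the full sequence is a simplification of the usual protocol, which normalizes over the ending tokens only.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def ending_score(context, ending):
    """Average per-token log-likelihood of `context + ending` under the LM."""
    ids = tokenizer(context + " " + ending, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean negative log-likelihood
    return -loss.item()

context = "She cracked two eggs into the pan and"
endings = ["waited for them to set.", "parked the car in the garage."]
best = max(endings, key=lambda e: ending_score(context, e))
print(best)  # the ending the model finds most plausible
```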

Muppet: Massive Multi-task Representations with Pre-Finetuning

facebook/muppet-roberta-base EMNLP 2021

We propose pre-finetuning, an additional large-scale learning stage between language model pre-training and fine-tuning.
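Conceptually, pre-finetuning is large-scale multi-task learning on a shared pretrained encoder before task-specific fine-tuning. The toy sketch below shows one such multi-task update, with invented task names and dimensions standing in for the dozens of supervised datasets used in practice; it is not the authors' training recipe.

```python
import torch
import torch.nn as nn

# Toy stand-in for pre-finetuning: one shared encoder, one head per task,
# per-task losses from a heterogeneous batch summed before a single update.
shared = nn.Linear(16, 16)  # pretend pretrained encoder
heads = {"nli": nn.Linear(16, 3), "sentiment": nn.Linear(16, 2)}
params = list(shared.parameters()) + [p for h in heads.values() for p in h.parameters()]
opt = torch.optim.Adam(params, lr=1e-4)

def step(batches):
    """`batches` maps task name -> (features, labels); sum the per-task losses."""
    total = 0.0
    for task, (x, y) in batches.items():
        logits = heads[task](shared(x))
        total = total + nn.functional.cross_entropy(logits, y)
    opt.zero_grad()
    total.backward()
    opt.step()
    return float(total)

loss = step({
    "nli": (torch.randn(4, 16), torch.randint(0, 3, (4,))),
    "sentiment": (torch.randn(4, 16), torch.randint(0, 2, (4,))),
})
print(loss)
```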

Scaling Language Models: Methods, Analysis & Insights from Training Gopher

allenai/dolma NA 2021

Language modelling provides a step towards intelligent communication systems by harnessing large repositories of written human knowledge to better predict and understand the world.

Training Compute-Optimal Large Language Models

karpathy/llama2.c 29 Mar 2022

We investigate the optimal model size and number of tokens for training a transformer language model under a given compute budget.
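A widely cited takeaway is that model size and training tokens should grow in roughly equal proportion, which, combined with the common C ≈ 6·N·D FLOPs approximation, gives a back-of-the-envelope rule of roughly 20 tokens per parameter. A quick sketch of that arithmetic, using these commonly quoted approximations rather than the paper's fitted scaling laws:

```python
import math

def compute_optimal(flops_budget, tokens_per_param=20.0):
    """Back-of-the-envelope compute-optimal sizing.
    Uses C ~= 6 * N * D and D ~= tokens_per_param * N,
    so N = sqrt(C / (6 * tokens_per_param))."""
    n_params = math.sqrt(flops_budget / (6.0 * tokens_per_param))
    n_tokens = tokens_per_param * n_params
    return n_params, n_tokens

# Example: a Gopher-scale budget of ~5.76e23 FLOPs.
n, d = compute_optimal(5.76e23)
print(f"~{n/1e9:.0f}B parameters trained on ~{d/1e12:.1f}T tokens")
# -> roughly 70B parameters and 1.4T tokens, the Chinchilla operating point
```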

Exploring the Benefits of Training Expert Language Models over Instruction Tuning

joeljang/elm 7 Feb 2023

Recently, Language Models (LMs) instruction-tuned on multiple tasks, also known as multitask-prompted fine-tuning (MT), have shown the capability to generalize to unseen tasks.

The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning

kaistai/cot-collection 23 May 2023

Furthermore, we show that instruction tuning with CoT Collection allows LMs to possess stronger few-shot learning capabilities on 4 domain-specific tasks, resulting in an improvement of +2.24% (Flan-T5 3B) and +2.37% (Flan-T5 11B), even outperforming ChatGPT utilizing demonstrations until the max length by a +13.98% margin.
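In chain-of-thought fine-tuning, each training target contains a rationale followed by the final answer. A minimal sketch of how one example might be serialized into an input/target pair is shown below; the field names, delimiter phrasing, and example itself are illustrative, not the collection's actual schema.

```python
def to_training_pair(instruction, rationale, answer, cot=True):
    """Format one example for CoT fine-tuning: the target contains the rationale
    before the answer when `cot=True`, otherwise the answer alone."""
    source = instruction.strip()
    target = (f"{rationale.strip()} Therefore, the answer is {answer.strip()}."
              if cot else answer.strip())
    return {"input": source, "target": target}

pair = to_training_pair(
    instruction="Q: If a train travels 60 km in 1.5 hours, what is its average speed?",
    rationale="Speed is distance divided by time: 60 km / 1.5 h = 40 km/h.",
    answer="40 km/h",
)
print(pair["target"])
```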