Sentence Completion
45 papers with code • 1 benchmark • 2 datasets
Latest papers
Language Model Sentence Completion with a Parser-Driven Rhetorical Control Method
Controlled text generation (CTG) seeks to guide large language model (LLM) output to produce text that conforms to desired criteria.
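The parser-driven rhetorical control method itself is not described in this snippet; as a generic illustration of the CTG mechanism, the sketch below steers decoding by masking disallowed tokens with a custom `LogitsProcessor` from Hugging Face Transformers. The model name and banned word are illustrative assumptions, not the paper's setup.

```python
# Minimal CTG sketch (not the paper's parser-driven method): steer decoding
# by masking disallowed tokens. Model and banned word are assumptions.
import torch
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          LogitsProcessor, LogitsProcessorList)

class BanTokens(LogitsProcessor):
    def __init__(self, banned_ids):
        self.banned_ids = banned_ids

    def __call__(self, input_ids, scores):
        scores[:, self.banned_ids] = float("-inf")  # forbid banned tokens
        return scores

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
banned = tok(" violence", add_special_tokens=False).input_ids
out = model.generate(
    **tok("The story continues:", return_tensors="pt"),
    max_new_tokens=30,
    logits_processor=LogitsProcessorList([BanTokens(banned)]),
)
print(tok.decode(out[0], skip_special_tokens=True))
```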
Parameter-Efficient Sparsity Crafting from Dense to Mixture-of-Experts for Instruction Tuning on General Tasks
Instruction tuning, a successful paradigm, enhances the ability of LLMs to follow natural language instructions and exhibit robust generalization across a wide range of tasks.
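The snippet's title references converting dense models to mixture-of-experts. The sketch below shows generic top-k MoE routing, the mechanism such work builds on, not the paper's sparsity-crafting procedure; all sizes and names are assumptions.

```python
# Generic top-k mixture-of-experts routing (a minimal sketch of the MoE
# mechanism, not the paper's dense-to-sparse crafting procedure).
import torch
import torch.nn.functional as F

class TinyMoE(torch.nn.Module):
    def __init__(self, dim=16, n_experts=4, k=2):
        super().__init__()
        self.router = torch.nn.Linear(dim, n_experts)
        self.experts = torch.nn.ModuleList(
            torch.nn.Linear(dim, dim) for _ in range(n_experts)
        )
        self.k = k

    def forward(self, x):                      # x: (tokens, dim)
        weights = F.softmax(self.router(x), dim=-1)
        top_w, top_i = weights.topk(self.k, dim=-1)
        top_w = top_w / top_w.sum(dim=-1, keepdim=True)  # renormalize top-k
        out = torch.zeros_like(x)
        for slot in range(self.k):             # combine the k chosen experts
            for e, expert in enumerate(self.experts):
                mask = top_i[:, slot] == e
                if mask.any():
                    out[mask] += top_w[mask, slot, None] * expert(x[mask])
        return out

print(TinyMoE()(torch.randn(5, 16)).shape)  # torch.Size([5, 16])
```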
Mamba: Linear-Time Sequence Modeling with Selective State Spaces
Foundation models, now powering most of the exciting applications in deep learning, are almost universally based on the Transformer architecture and its core attention module.
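As a toy illustration of the selective state-space idea named in the title, the sketch below runs a linear-time recurrence whose gates depend on the input, unlike a fixed linear time-invariant SSM. This is a heavily simplified assumption-laden sketch, not Mamba's hardware-aware implementation.

```python
# Toy selective state-space recurrence: h_t = a_t * h_{t-1} + b_t * x_t,
# with input-dependent gates a_t, b_t. Simplified sketch, not the real kernel.
import torch

def selective_scan(x, w_a, w_b, w_c):
    batch, seq_len, dim = x.shape
    h = torch.zeros(batch, dim)
    ys = []
    for t in range(seq_len):
        xt = x[:, t]
        a = torch.sigmoid(xt @ w_a)   # input-dependent decay
        b = torch.sigmoid(xt @ w_b)   # input-dependent input gate
        h = a * h + b * xt            # linear-time recurrent update
        ys.append(h @ w_c)            # readout
    return torch.stack(ys, dim=1)

x = torch.randn(2, 16, 8)
w = [torch.randn(8, 8) * 0.1 for _ in range(3)]
print(selective_scan(x, *w).shape)  # torch.Size([2, 16, 8])
```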
mahaNLP: A Marathi Natural Language Processing Library
We present mahaNLP, an open-source natural language processing (NLP) library specifically built for the Marathi language.
BTRec: BERT-Based Trajectory Recommendation for Personalized Tours
For tourists to have a pleasant holiday, a well-planned itinerary with relevant recommendations is essential, especially when visiting unfamiliar cities.
Mistral 7B
We introduce Mistral 7B v0.1, a 7-billion-parameter language model engineered for superior performance and efficiency.
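The released checkpoint can be loaded through Hugging Face Transformers under the public name `mistralai/Mistral-7B-v0.1`; the dtype and device settings below are illustrative assumptions about available hardware.

```python
# Loading Mistral 7B v0.1 via Transformers. Hardware settings are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",
    torch_dtype=torch.float16,  # assumes a GPU with enough memory
    device_map="auto",
)
prompt = "The capital of France is"
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=10)
print(tok.decode(out[0], skip_special_tokens=True))
```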
Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning
In this work, we study structured pruning as an effective means to develop smaller LLMs from pre-trained, larger models.
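Sheared LLaMA's targeted pruning plus continued pre-training is more involved than this, but as a minimal sketch of structured pruning in general, PyTorch's built-in `ln_structured` utility can zero out whole output rows of a linear layer by norm:

```python
# Generic structured pruning sketch (not Sheared LLaMA's procedure):
# remove the 50% of output rows of a linear layer with smallest L2 norm.
import torch
import torch.nn.utils.prune as prune

layer = torch.nn.Linear(512, 512)
prune.ln_structured(layer, name="weight", amount=0.5, n=2, dim=0)
prune.remove(layer, "weight")  # make the pruning permanent

rows_kept = (layer.weight.abs().sum(dim=1) > 0).sum().item()
print(f"{rows_kept}/512 output rows remain nonzero")
```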
Investigating Subtler Biases in LLMs: Ageism, Beauty, Institutional, and Nationality Bias in Generative Models
LLMs are increasingly powerful and widely used to assist users in a variety of tasks.
Llama 2: Open Foundation and Fine-Tuned Chat Models
In this work, we develop and release Llama 2, a collection of pretrained and fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70 billion parameters.
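The fine-tuned chat variants can be queried with the tokenizer's chat template, which wraps turns in Llama 2's `[INST]` format. The checkpoint `meta-llama/Llama-2-7b-chat-hf` is gated behind Meta's license on the Hugging Face Hub; generation settings below are illustrative.

```python
# Querying Llama-2-7b-chat via Transformers (gated checkpoint; requires
# accepting Meta's license on the Hub). Settings are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "meta-llama/Llama-2-7b-chat-hf"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name, device_map="auto")

messages = [{"role": "user",
             "content": "Summarize what Llama 2 is in one sentence."}]
# apply_chat_template formats the conversation with Llama 2's [INST] tags.
inputs = tok.apply_chat_template(messages, return_tensors="pt").to(model.device)
out = model.generate(inputs, max_new_tokens=60)
print(tok.decode(out[0], skip_special_tokens=True))
```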
ScoNe: Benchmarking Negation Reasoning in Language Models With Fine-Tuning and In-Context Learning
For in-context learning, we test InstructGPT models and find that most prompt strategies are not successful, including those using step-by-step reasoning.
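To make the comparison concrete, the sketch below contrasts a direct prompt with a step-by-step prompt of the kind the snippet says ScoNe evaluates. The example sentence and exact wording are assumptions, not the benchmark's actual prompts.

```python
# Illustrative prompt styles for negation reasoning (the sentences and
# wording are assumptions, not ScoNe's actual prompts).
premise = "The dog is not outside."
hypothesis = "The dog is inside."

direct_prompt = (
    f"Premise: {premise}\nHypothesis: {hypothesis}\n"
    "Does the premise entail the hypothesis? Answer yes or no:"
)

step_by_step_prompt = (
    f"Premise: {premise}\nHypothesis: {hypothesis}\n"
    "Let's think step by step about what the negation implies, "
    "then answer yes or no:"
)

for p in (direct_prompt, step_by_step_prompt):
    print(p, end="\n\n")  # send each to an InstructGPT-style model to compare
```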