Prompt Engineering

201 papers with code • 16 benchmarks • 16 datasets

Prompt engineering is the process of designing and refining the prompts used to generate text from language models such as GPT-3. The goal of prompt engineering is to improve the quality and relevance of the generated text by carefully crafting the prompts to elicit the desired responses from the model.

Prompt engineering typically involves several steps, including choosing the model and its decoding parameters, framing the task (zero-shot, few-shot, or instruction-style), designing the prompt format and structure, selecting in-context examples or demonstrations, and iteratively evaluating and refining the prompt against the model's outputs. Closely related prompt-tuning methods instead learn continuous prompt embeddings while keeping the model's weights frozen.

Prompt engineering is a crucial step in applying language models, as it can greatly influence the quality and effectiveness of a model's responses. By carefully designing and refining prompts, researchers and developers can improve the accuracy and relevance of the model's output, making it more useful for a wide range of applications, including chatbots, language translation, and content creation.
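As a small illustration of this refinement loop, the sketch below contrasts a bare zero-shot prompt with a few-shot version of the same sentiment-classification task; the review texts and labels are made-up examples, and the evaluation step is only described in a comment.

```python
# A bare first-draft prompt versus a refined few-shot version of the same task.
zero_shot = (
    "Classify the sentiment of this review as Positive or Negative.\n"
    "Review: {review}\n"
    "Sentiment:"
)

# The refined prompt adds an explicit output format and two demonstrations
# (in-context examples) to steer the model toward consistent answers.
few_shot = (
    "Classify the sentiment of each review as Positive or Negative.\n\n"
    "Review: The battery died after two days.\n"
    "Sentiment: Negative\n\n"
    "Review: Setup took thirty seconds and it just works.\n"
    "Sentiment: Positive\n\n"
    "Review: {review}\n"
    "Sentiment:"
)

review = "The screen is gorgeous but the speakers are tinny."
for name, template in [("zero-shot", zero_shot), ("few-shot", few_shot)]:
    print(f"--- {name} ---")
    print(template.format(review=review))
    # In practice each variant is sent to the model and scored on a small
    # labelled evaluation set, and the better-performing prompt is kept.
```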

Most implemented papers

Learning Transferable Visual Models From Natural Language Supervision

openai/CLIP 26 Feb 2021

State-of-the-art computer vision systems are trained to predict a fixed set of predetermined object categories.
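CLIP's zero-shot classification is itself a prompt engineering exercise: each class name is wrapped in a text template and the image is matched against the resulting captions. Below is a minimal sketch using the openai/CLIP package; the image path and class names are placeholders.

```python
# Zero-shot image classification with CLIP, where the "prompt" is the text
# template wrapped around each class name.
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

classes = ["dog", "cat", "horse"]
# The template matters: "a photo of a {}" typically beats the bare label.
texts = clip.tokenize([f"a photo of a {c}" for c in classes]).to(device)
image = preprocess(Image.open("example.jpg")).unsqueeze(0).to(device)

with torch.no_grad():
    logits_per_image, _ = model(image, texts)
    probs = logits_per_image.softmax(dim=-1)

print({c: float(p) for c, p in zip(classes, probs[0])})
```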

Learning to Prompt for Vision-Language Models

kaiyangzhou/coop 2 Sep 2021

Large pre-trained vision-language models like CLIP have shown great potential in learning representations that are transferable across a wide range of downstream tasks.
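CoOp's core idea is to replace the hand-written template with continuous context vectors that are learned while the vision-language backbone stays frozen. The PyTorch sketch below illustrates that idea only; the dimensions, initialization, and stand-in class embeddings are illustrative, not taken from the authors' implementation.

```python
import torch
import torch.nn as nn

class PromptLearner(nn.Module):
    """Illustrative sketch: learnable context vectors shared across classes."""

    def __init__(self, n_ctx: int = 16, ctx_dim: int = 512, n_classes: int = 10):
        super().__init__()
        # Continuous "words" that replace a hand-crafted template such as
        # "a photo of a"; these are the only trainable parameters.
        self.ctx = nn.Parameter(0.02 * torch.randn(n_ctx, ctx_dim))
        # Frozen stand-ins for the class-name token embeddings that would
        # come from the text encoder's embedding layer.
        self.register_buffer("cls_emb", torch.randn(n_classes, 1, ctx_dim))

    def forward(self) -> torch.Tensor:
        n_classes = self.cls_emb.shape[0]
        ctx = self.ctx.unsqueeze(0).expand(n_classes, -1, -1)
        # One token sequence per class: [CTX_1, ..., CTX_n, CLASS]
        return torch.cat([ctx, self.cls_emb], dim=1)

prompts = PromptLearner()()   # shape: (n_classes, n_ctx + 1, ctx_dim)
print(prompts.shape)
```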

Conditional Prompt Learning for Vision-Language Models

kaiyangzhou/coop CVPR 2022

With the rise of powerful pre-trained vision-language models like CLIP, it becomes essential to investigate ways to adapt these models to downstream datasets.

GPT Understands, Too

THUDM/P-tuning 18 Mar 2021

Prompting a pretrained language model with natural language patterns has proven effective for natural language understanding (NLU).
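The "natural language pattern" style of prompting referred to here can be illustrated with a cloze template: classification is cast as predicting a masked word, and a verbalizer maps predicted words back to labels. In the sketch below, the model name, pattern, and verbalizer words are illustrative choices, not taken from the paper (whose contribution is learning continuous prompts instead).

```python
# Cloze-style pattern prompting for sentiment analysis with a masked LM.
from transformers import pipeline

fill = pipeline("fill-mask", model="roberta-base")

review = "The plot was predictable and the acting was wooden."
# The pattern turns classification into masked-token prediction; the
# verbalizer maps the predicted word back to a label.
pattern = f"{review} Overall, the movie was <mask>."
verbalizer = {"great": "positive", "terrible": "negative"}

scores = {word: 0.0 for word in verbalizer}
for cand in fill(pattern, targets=list(verbalizer)):
    scores[cand["token_str"].strip()] = cand["score"]

label = verbalizer[max(scores, key=scores.get)]
print(label, scores)
```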

Multitask Prompted Training Enables Zero-Shot Task Generalization

bigscience-workshop/promptsource ICLR 2022

Large language models have recently been shown to attain reasonable zero-shot generalization on a diverse set of tasks (Brown et al., 2020).

Visual Prompt Tuning

KMnP/vpt 23 Mar 2022

The current modus operandi in adapting pre-trained models involves updating all the backbone parameters, i.e., full fine-tuning.
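Visual prompt tuning keeps the backbone frozen and instead trains a small number of prompt tokens prepended to the input sequence, together with a task head. The sketch below conveys that idea with a generic PyTorch transformer encoder; the depth, dimensions, and pooling are simplifications, not the authors' architecture.

```python
import torch
import torch.nn as nn

class VisualPromptedEncoder(nn.Module):
    """Sketch of shallow visual prompt tuning: a frozen transformer encoder
    with a few trainable prompt tokens prepended to the patch tokens."""

    def __init__(self, dim: int = 768, n_prompts: int = 8, n_classes: int = 100):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=12, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)  # depth reduced for the sketch
        for p in self.encoder.parameters():          # backbone stays frozen
            p.requires_grad = False
        self.prompts = nn.Parameter(0.02 * torch.randn(1, n_prompts, dim))
        self.head = nn.Linear(dim, n_classes)        # trained together with the prompts

    def forward(self, patch_tokens: torch.Tensor) -> torch.Tensor:
        b = patch_tokens.shape[0]
        x = torch.cat([self.prompts.expand(b, -1, -1), patch_tokens], dim=1)
        x = self.encoder(x)
        return self.head(x.mean(dim=1))

model = VisualPromptedEncoder()
logits = model(torch.randn(2, 196, 768))   # 2 images, 14x14 patch tokens
print(logits.shape)                        # torch.Size([2, 100])
```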

Differentiable Prompt Makes Pre-trained Language Models Better Few-shot Learners

zjunlp/DART ICLR 2022

Large-scale pre-trained language models have contributed significantly to natural language processing by demonstrating remarkable abilities as few-shot learners.

Ask Me Anything: A simple strategy for prompting language models

hazyresearch/ama_prompting 5 Oct 2022

Prompting is a brittle process wherein small modifications to the prompt can cause large variations in the model predictions, and therefore significant effort is dedicated towards designing a painstakingly "perfect prompt" for a task.
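One way around this brittleness, in the spirit of the paper, is to collect answers from several imperfect prompt variants and aggregate them rather than search for a single perfect prompt. The sketch below uses simple majority voting and a placeholder query_model function; the paper itself aggregates predictions with a weak-supervision model.

```python
from collections import Counter

def query_model(prompt: str) -> str:
    """Placeholder for a call to any text-generation model or API."""
    raise NotImplementedError

# Several imperfect prompt variants for the same question; instead of
# hand-tuning one "perfect" prompt, collect an answer from each and vote.
variants = [
    "Answer the question.\nQ: {question}\nA:",
    "Read the question and reply with a short answer.\nQuestion: {question}\nAnswer:",
    "{question}\nThe answer is:",
]

def answer(question: str) -> str:
    votes = Counter(
        query_model(v.format(question=question)).strip().lower()
        for v in variants
    )
    return votes.most_common(1)[0][0]
```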

GPT Takes the Bar Exam

mjbommar/gpt-takes-the-bar-exam 29 Dec 2022

Nearly all jurisdictions in the United States require a professional license exam, commonly referred to as "the Bar Exam," as a precondition for law practice.

MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models

bradyfu/awesome-multimodal-large-language-models 23 Jun 2023

Multimodal Large Language Models (MLLMs) rely on powerful LLMs to perform multimodal tasks, showing amazing emergent abilities in recent studies, such as writing poems based on an image.