Visual Prompting

32 papers with code • 0 benchmarks • 0 datasets

Visual Prompting is the task of adapting computer vision models to downstream tasks with prompts rather than full fine-tuning, inspired by the breakthroughs of text prompting in NLP. The approach uses a small number of visual prompts to quickly turn an unlabeled dataset into a deployed model, significantly reducing development time for both individual projects and enterprise solutions.
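
The core recipe can be made concrete with a short sketch. The snippet below is illustrative rather than taken from any paper listed here: it assumes a frozen ImageNet classifier and trains only a learnable pixel-space "frame" prompt (the `VisualPrompt` name and the padding scheme are illustrative choices).

```python
import torch
import torch.nn as nn
import torchvision

# Illustrative pixel-space visual prompting: a learnable border "frame"
# is added to every input image while the pre-trained backbone stays frozen.
class VisualPrompt(nn.Module):
    def __init__(self, image_size=224, pad=16):
        super().__init__()
        self.pad = pad
        self.prompt = nn.Parameter(torch.zeros(1, 3, image_size, image_size))

    def forward(self, x):
        mask = torch.ones_like(self.prompt)
        mask[:, :, self.pad:-self.pad, self.pad:-self.pad] = 0  # border only
        return x + self.prompt * mask

backbone = torchvision.models.resnet50(weights="IMAGENET1K_V2")
backbone.eval()
for p in backbone.parameters():
    p.requires_grad = False  # only the prompt is trained

prompt = VisualPrompt()
optimizer = torch.optim.Adam(prompt.parameters(), lr=1e-3)

images = torch.randn(8, 3, 224, 224)   # stand-in batch
labels = torch.randint(0, 1000, (8,))
optimizer.zero_grad()
loss = nn.functional.cross_entropy(backbone(prompt(images)), labels)
loss.backward()
optimizer.step()
```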

Most implemented papers

Diversity-Aware Meta Visual Prompting

shikiw/dam-vp CVPR 2023

We present Diversity-Aware Meta Visual Prompting (DAM-VP), an efficient and effective prompting method for transferring pre-trained models to downstream tasks with a frozen backbone.
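
As a rough, hypothetical sketch of the diversity-aware idea (not DAM-VP's actual code): images are assigned to clusters in a frozen feature space and each cluster gets its own prompt; the paper's meta-learned prompt initialization and its clustering details are omitted, and the feature dimension below is an assumption.

```python
import torch
import torch.nn as nn

# Rough sketch: partition the data by feature similarity and learn one
# prompt per partition, all on top of a frozen backbone. The centroids
# stand in for a proper clustering step; 512 is an assumed feature dim.
class ClusteredPrompts(nn.Module):
    def __init__(self, num_clusters, image_size=224, feat_dim=512):
        super().__init__()
        self.prompts = nn.Parameter(
            torch.zeros(num_clusters, 3, image_size, image_size))
        self.register_buffer("centroids", torch.randn(num_clusters, feat_dim))

    def assign(self, feats):
        # Nearest-centroid assignment in the frozen feature space.
        return torch.cdist(feats, self.centroids).argmin(dim=1)

    def forward(self, images, feats):
        idx = self.assign(feats)            # (B,)
        return images + self.prompts[idx]   # per-image prompt selection

prompts = ClusteredPrompts(num_clusters=8)
imgs = torch.randn(4, 3, 224, 224)
feats = torch.randn(4, 512)  # stand-in frozen backbone features
out = prompts(imgs, feats)
```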

Explicit Visual Prompting for Low-Level Structure Segmentations

nifangbaage/explicit-visual-prompt CVPR 2023

Different from previous visual prompting, which typically learns a dataset-level implicit embedding, our key insight is to make the tunable parameters focus on the explicit visual content of each individual image, i.e., the features from frozen patch embeddings and the input's high-frequency components.
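
One ingredient can be sketched directly: extracting an image's high-frequency components by masking out low frequencies in the Fourier domain. The `high_freq` helper and the `mask_ratio` value are illustrative; how the method fuses these components with the frozen patch-embedding features is not shown.

```python
import torch

# Sketch of high-frequency extraction: zero out a central (low-frequency)
# window of the shifted 2D FFT, then invert. mask_ratio is a hyperparameter.
def high_freq(x, mask_ratio=0.25):
    # x: (B, C, H, W) image batch
    f = torch.fft.fftshift(torch.fft.fft2(x), dim=(-2, -1))
    B, C, H, W = x.shape
    cy, cx = H // 2, W // 2
    ry, rx = int(H * mask_ratio / 2), int(W * mask_ratio / 2)
    f[..., cy - ry:cy + ry, cx - rx:cx + rx] = 0  # remove low frequencies
    return torch.fft.ifft2(torch.fft.ifftshift(f, dim=(-2, -1))).real

hf = high_freq(torch.randn(2, 3, 224, 224))
```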

Exploring the Benefits of Visual Prompting in Differential Privacy

ezzzli/prompt-pate ICCV 2023

Visual Prompting (VP) is an emerging and powerful technique that allows sample-efficient adaptation to downstream tasks by engineering prompts for a well-trained, frozen source model.

BlackVIP: Black-Box Visual Prompting for Robust Transfer Learning

changdaeoh/blackvip CVPR 2023

In this work, we propose black-box visual prompting (BlackVIP), which efficiently adapts pre-trained models (PTMs) without knowledge of their architectures or parameters.
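
Black-box adaptation of this kind relies on zeroth-order optimization, which needs only query access to the model. Below is a minimal SPSA gradient estimate of the sort such methods build on; BlackVIP's coordinator network and its SPSA-GC variant are omitted, and `loss_fn` is a stand-in objective.

```python
import torch

# Minimal SPSA: estimate a gradient from two loss queries along a random
# +/-1 (Rademacher) direction. No backpropagation through the model.
def spsa_grad(loss_fn, params, c=0.01):
    delta = torch.randint(0, 2, params.shape, dtype=params.dtype) * 2 - 1
    loss_plus = loss_fn(params + c * delta)
    loss_minus = loss_fn(params - c * delta)
    # For +/-1 entries, 1/delta == delta, so this is the SPSA estimator.
    return (loss_plus - loss_minus) / (2 * c) * delta

params = torch.zeros(3 * 224 * 224)     # flattened prompt parameters
loss_fn = lambda p: (p ** 2).sum()      # stand-in black-box objective
for _ in range(100):
    params = params - 0.1 * spsa_grad(loss_fn, params)
```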

UPGPT: Universal Diffusion Model for Person Image Generation, Editing and Pose Transfer

soon-yau/upgpt 18 Apr 2023

Text-to-image (T2I) models such as StableDiffusion have been used to generate high-quality images of people.

Adapting Pre-trained Language Models to Vision-Language Tasks via Dynamic Visual Prompting

hsb1357173526/dynamic_visual_prompting 1 Jun 2023

In addition, we combine DVP with the recently popular adapter approach to keep most parameters of PLMs intact when adapting to VL tasks, helping PLMs shift quickly between single- and multi-modal tasks.
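
The adapter approach mentioned here can be illustrated with a generic bottleneck module: a small down-project/up-project MLP with a residual connection, inserted into a frozen PLM so that only the adapter weights are trained. The dimensions and placement below are assumptions, not DVP's exact configuration.

```python
import torch
import torch.nn as nn

# Generic bottleneck adapter: down-project, nonlinearity, up-project,
# residual connection. Zero-initializing the up-projection makes the
# module start as an identity mapping, a common adapter trick.
class Adapter(nn.Module):
    def __init__(self, dim=768, bottleneck=64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)
        nn.init.zeros_(self.up.weight)
        nn.init.zeros_(self.up.bias)

    def forward(self, hidden):
        return hidden + self.up(torch.relu(self.down(hidden)))

out = Adapter()(torch.randn(2, 16, 768))  # (batch, tokens, dim)
```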

Fine-Grained Visual Prompting

ylingfeng/FGVP NeurIPS 2023

Previous works have suggested that incorporating visual prompts, such as colorful boxes or circles, can improve the ability of models to recognize objects of interest.
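
Such marker-style prompts are easy to illustrate: the toy snippet below draws a red circle around a region of interest with Pillow before the image is passed to a model. The coordinates are made up, and the paper's fine-grained prompts (e.g., mask-based ones) are more elaborate.

```python
from PIL import Image, ImageDraw

# Toy marker-style visual prompt: circle a region of interest in red.
image = Image.new("RGB", (224, 224), "white")  # stand-in image
x0, y0, x1, y1 = 60, 60, 160, 160              # hypothetical region box
draw = ImageDraw.Draw(image)
draw.ellipse((x0, y0, x1, y1), outline="red", width=4)
image.save("prompted.png")
```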

Fast Segment Anything

casia-iva-lab/fastsam 21 Jun 2023

In this paper, we propose a sped-up alternative to the Segment Anything model for this fundamental segmentation task, with comparable performance.

Visual Instruction Inversion: Image Editing via Visual Prompting

thaoshibe/visii 26 Jul 2023

Given example pairs representing the "before" and "after" images of an edit, our goal is to learn a text-based editing direction that can be used to perform the same edit on new images.

Uncovering the Hidden Cost of Model Compression

landskape-ai/reprogram_lt 29 Aug 2023

This empirical investigation underscores the need for a nuanced understanding beyond mere accuracy in sparse and quantized settings, thereby paving the way for further exploration in Visual Prompting techniques tailored for sparse and quantized models.