Image Inpainting
276 papers with code • 12 benchmarks • 17 datasets
Image Inpainting is the task of reconstructing missing regions in an image. It is an important problem in computer vision and an essential capability in many imaging and graphics applications, e.g., object removal, image restoration, manipulation, re-targeting, compositing, and image-based rendering.
Source: High-Resolution Image Inpainting with Iterative Confidence Feedback and Guided Upsampling
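As a toy illustration of the task, missing pixels can be filled by simple diffusion: repeatedly replacing each hole pixel with the average of its neighbours. Learned methods such as the ones listed below are far stronger, but this minimal numpy sketch (assuming a grayscale image array and a boolean hole mask, both illustrative) shows the basic input/output contract of inpainting.

```python
import numpy as np

def diffusion_inpaint(img, mask, iters=500):
    """Fill masked pixels by iteratively averaging their 4-neighbours.

    img:  2-D float array (grayscale image)
    mask: 2-D bool array, True where pixels are missing
    """
    out = img.copy()
    out[mask] = 0.0  # initialise the hole
    for _ in range(iters):
        # 4-neighbour average via shifted views (image border handled by edge padding)
        padded = np.pad(out, 1, mode="edge")
        avg = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
               padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
        out[mask] = avg[mask]  # update only the missing region
    return out

# Example: a constant image with a square hole is recovered exactly.
image = np.full((16, 16), 5.0)
hole = np.zeros((16, 16), dtype=bool)
hole[6:10, 6:10] = True
filled = diffusion_inpaint(image, hole)
```

This corresponds to solving a Laplace equation inside the hole with the known pixels as boundary conditions, which is roughly what classical (pre-deep-learning) inpainting methods do.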
Latest papers
Diffusion Model-Based Image Editing: A Survey
In this survey, we provide an exhaustive overview of existing methods using diffusion models for image editing, covering both theoretical and practical aspects in the field.
HINT: High-quality INPainting Transformer with Mask-Aware Encoding and Enhanced Attention
In this paper, we propose an end-to-end High-quality INpainting Transformer, abbreviated as HINT, which includes a novel mask-aware pixel-shuffle downsampling module (MPD) that preserves the visible information extracted from the corrupted image while keeping that information intact for the high-level inferences made within the model.
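The exact design of the MPD module is not detailed here, but the pixel-shuffle (space-to-depth) downsampling it builds on is lossless: it rearranges each r×r spatial block into channels rather than discarding pixels, which is why visible information can be preserved through downsampling. A minimal numpy sketch of that generic operation (the mask-aware part of MPD is not reproduced):

```python
import numpy as np

def pixel_unshuffle(x, r):
    """Space-to-depth: rearrange r x r spatial blocks into channels.

    x: (H, W, C) array with H and W divisible by r
    returns: (H//r, W//r, r*r*C) array containing every input value
    """
    H, W, C = x.shape
    assert H % r == 0 and W % r == 0
    x = x.reshape(H // r, r, W // r, r, C)
    x = x.transpose(0, 2, 1, 3, 4)          # group each r x r block together
    return x.reshape(H // r, W // r, r * r * C)

# Example: downsample a 4x4x3 image by a factor of 2; no values are lost.
img = np.arange(4 * 4 * 3, dtype=float).reshape(4, 4, 3)
down = pixel_unshuffle(img, 2)
```

In a mask-aware variant, the same rearrangement would be applied to the validity mask so that each downsampled location records which of its sub-pixels were visible.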
Text Image Inpainting via Global Structure-Guided Diffusion Models
Leveraging the global structure of the text as a prior, the proposed GSDM develops an efficient diffusion model to recover clean texts.
Robust Stochastically-Descending Unrolled Networks
To tackle these problems, we provide deep unrolled architectures with a stochastic descent nature by imposing descending constraints during training.
HD-Painter: High-Resolution and Prompt-Faithful Text-Guided Image Inpainting with Diffusion Models
Recent progress in text-guided image inpainting, based on the unprecedented success of text-to-image diffusion models, has led to exceptionally realistic and visually plausible results.
Image Restoration Through Generalized Ornstein-Uhlenbeck Bridge
Diffusion models possess powerful generative capabilities enabling the mapping of noise to data using reverse stochastic differential equations.
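The generalized Ornstein-Uhlenbeck bridge itself is specific to that paper, but the plain OU process it generalizes, dX = θ(μ − X) dt + σ dW, is the standard mean-reverting SDE behind many diffusion formulations and can be simulated with the Euler-Maruyama scheme. A minimal numpy sketch (parameter values are illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def ou_euler_maruyama(x0, theta=2.0, mu=0.0, sigma=0.5, T=2.0, n=400):
    """Simulate dX = theta*(mu - X) dt + sigma dW from t=0 to t=T.

    x0: array of initial values (one path per entry)
    """
    dt = T / n
    x = x0.astype(float).copy()
    for _ in range(n):
        # drift pulls toward mu; diffusion adds Gaussian noise scaled by sqrt(dt)
        x = x + theta * (mu - x) * dt + sigma * np.sqrt(dt) * rng.standard_normal(x.shape)
    return x

# Example: an ensemble started at 5.0 relaxes toward mu with mean 5*exp(-theta*T).
paths = ou_euler_maruyama(np.full(20000, 5.0))
```

Reverse-time versions of such SDEs are what diffusion models use to map noise back to data, as the survey entry above notes.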
Towards Context-Stable and Visual-Consistent Image Inpainting
Recent progress in inpainting increasingly relies on generative models, leveraging their strong generation capabilities for addressing large irregular masks.
A Task is Worth One Word: Learning with Task Prompts for High-Quality Versatile Image Inpainting
This enables PowerPaint to accomplish various inpainting tasks by utilizing different task prompts, resulting in state-of-the-art performance.
AVID: Any-Length Video Inpainting with Diffusion Model
Given a video, a masked region in its initial frame, and an editing prompt, the task requires a model to infill each frame following the editing guidance while keeping the out-of-mask region intact.
INCODE: Implicit Neural Conditioning with Prior Knowledge Embeddings
INCODE comprises a harmonizer network and a composer network, where the harmonizer network dynamically adjusts key parameters of the activation function.