Image Inpainting
270 papers with code • 12 benchmarks • 17 datasets
Image Inpainting is the task of reconstructing missing regions in an image. It is an important problem in computer vision and an essential capability in many imaging and graphics applications, e.g. object removal, image restoration, manipulation, re-targeting, compositing, and image-based rendering.
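As a minimal illustration of the task, here is a sketch of classical diffusion-style inpainting (smoothly propagating known pixels into a hole, not the neural diffusion models discussed below): each missing pixel is repeatedly replaced by the average of its neighbours until the hole is filled. The function name and pure-Python representation are illustrative assumptions, not a reference implementation.

```python
def inpaint_diffusion(img, mask, iters=500):
    """Fill masked pixels by repeatedly averaging their 4-neighbours.

    img:  2-D list of floats (grayscale intensities)
    mask: 2-D list of bools, True where the pixel is missing
    """
    h, w = len(img), len(img[0])
    # Start with missing pixels zeroed; known pixels act as fixed boundary values.
    out = [[0.0 if mask[y][x] else img[y][x] for x in range(w)] for y in range(h)]
    for _ in range(iters):
        new = [row[:] for row in out]
        for y in range(h):
            for x in range(w):
                if mask[y][x]:
                    nb = [out[y + dy][x + dx]
                          for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1))
                          if 0 <= y + dy < h and 0 <= x + dx < w]
                    new[y][x] = sum(nb) / len(nb)
        out = new
    return out
```

On a horizontal-gradient image `img[y][x] = x` with a small hole, the fill converges back to the same gradient, because the iteration approximates a harmonic (Laplace-equation) fill with the known pixels as boundary conditions. Modern learned methods replace this smoothness prior with semantics learned from data.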
Source: High-Resolution Image Inpainting with Iterative Confidence Feedback and Guided Upsampling
Libraries
Use these libraries to find Image Inpainting models and implementations.
Latest papers
Ambient Diffusion Posterior Sampling: Solving Inverse Problems with Diffusion Models trained on Corrupted Data
We open-source our code and the trained Ambient Diffusion MRI models: https://github.com/utcsilab/ambient-diffusion-mri
Efficient Diffusion Model for Image Restoration by Residual Shifting
While diffusion-based image restoration (IR) methods have achieved remarkable success, they are still limited by the low inference speed attributed to the necessity of executing hundreds or even thousands of sampling steps.
BrushNet: A Plug-and-Play Image Inpainting Model with Decomposed Dual-Branch Diffusion
Image inpainting, the process of restoring corrupted images, has seen significant advancements with the advent of diffusion models (DMs).
PromptCharm: Text-to-Image Generation through Multi-modal Prompting and Refinement
However, prompting remains challenging for novice users due to the complexity of the stable diffusion model and the non-trivial efforts required for iteratively editing and refining the text prompts.
Matrix Completion with Convex Optimization and Column Subset Selection
We present two algorithms that implement our Columns Selected Matrix Completion (CSMC) method, each dedicated to a different size problem.
Diffusion Model-Based Image Editing: A Survey
In this survey, we provide an exhaustive overview of existing methods using diffusion models for image editing, covering both theoretical and practical aspects in the field.
HINT: High-quality INPainting Transformer with Mask-Aware Encoding and Enhanced Attention
In this paper, we propose an end-to-end High-quality INpainting Transformer, abbreviated as HINT, which consists of a novel mask-aware pixel-shuffle downsampling module (MPD) to preserve the visible information extracted from the corrupted image while maintaining the integrity of the information available for high-level inferences made within the model.
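To make the pixel-shuffle downsampling idea concrete, here is a hedged sketch of the plain pixel-unshuffle operation that such modules build on: an H×W image is rearranged into r² sub-images of size (H/r)×(W/r), so spatial resolution drops without discarding any visible pixels. The function name is illustrative and not taken from the paper's code.

```python
def pixel_unshuffle(img, r=2):
    """Rearrange an HxW image into r*r channels of size (H/r)x(W/r).

    Every input pixel lands in exactly one output channel, so the
    downsampling is lossless, unlike pooling or strided convolution.
    """
    h, w = len(img), len(img[0])
    assert h % r == 0 and w % r == 0
    return [[[img[y * r + dy][x * r + dx] for x in range(w // r)]
             for y in range(h // r)]
            for dy in range(r) for dx in range(r)]
```

Because the rearrangement is invertible, no information from the visible (unmasked) pixels is lost on the way down, which is the property mask-aware downsampling modules exploit.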
Text Image Inpainting via Global Structure-Guided Diffusion Models
Leveraging the global structure of the text as a prior, the proposed GSDM develops an efficient diffusion model to recover clean texts.
Robust Stochastically-Descending Unrolled Networks
To tackle these problems, we provide deep unrolled architectures with a stochastic descent nature by imposing descending constraints during training.
HD-Painter: High-Resolution and Prompt-Faithful Text-Guided Image Inpainting with Diffusion Models
Recent progress in text-guided image inpainting, based on the unprecedented success of text-to-image diffusion models, has led to exceptionally realistic and visually plausible results.