Image Inpainting

277 papers with code • 12 benchmarks • 17 datasets

Image Inpainting is the task of reconstructing missing regions in an image. It is an important problem in computer vision and an essential functionality in many imaging and graphics applications, e.g., object removal, image restoration, manipulation, re-targeting, compositing, and image-based rendering.

Source: High-Resolution Image Inpainting with Iterative Confidence Feedback and Guided Upsampling
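
As a concrete starting point, the sketch below fills a hand-marked rectangular hole with OpenCV's classical cv2.inpaint (Telea's method). The file paths and hole location are placeholders, and in practice a learning-based model from the papers listed below would replace this call.

```python
# Minimal classical-inpainting sketch using OpenCV (Telea's method).
# Paths and the mask rectangle are placeholders; a learned inpainting
# model would replace cv2.inpaint in a modern pipeline.
import cv2
import numpy as np

image = cv2.imread("input.jpg")                      # image to restore (BGR)
mask = np.zeros(image.shape[:2], dtype=np.uint8)     # 1-channel mask, non-zero = missing
mask[100:180, 220:330] = 255                         # mark a rectangular hole to fill

# Reconstruct the masked region from the surrounding pixels.
restored = cv2.inpaint(image, mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA)
cv2.imwrite("restored.jpg", restored)
```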

Latest papers with no code

Fill in the ____ (a Diffusion-based Image Inpainting Pipeline)

no code yet • 24 Mar 2024

Image inpainting is the process of taking an image and generating its lost or intentionally occluded portions.

Inpainting-Driven Mask Optimization for Object Removal

no code yet • 23 Mar 2024

In our method, this domain gap is resolved by training the inpainting network with object masks extracted by segmentation, and these object masks are also used at inference time.
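
The paper's own network is not public, but the general mask-driven object-removal pipeline the snippet describes can be sketched with off-the-shelf parts: a pretrained Mask R-CNN from torchvision proposes an object mask, and a classical OpenCV inpainter stands in for the learned inpainting network. All thresholds, kernel sizes, and file names below are illustrative.

```python
# Sketch of a segmentation-driven object-removal pipeline: a pretrained
# Mask R-CNN proposes an object mask, which is handed to an inpainting
# backend (plain cv2.inpaint here, standing in for a learned network).
# Assumes at least one object is detected in the input image.
import cv2
import numpy as np
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

model = maskrcnn_resnet50_fpn(weights="DEFAULT").eval()

bgr = cv2.imread("scene.jpg")
rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)
tensor = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0

with torch.no_grad():
    pred = model([tensor])[0]

# Take the highest-scoring instance mask and binarize it.
obj_mask = (pred["masks"][0, 0] > 0.5).numpy().astype(np.uint8) * 255
# Dilate slightly so the hole fully covers the object boundary.
obj_mask = cv2.dilate(obj_mask, np.ones((7, 7), np.uint8))

removed = cv2.inpaint(bgr, obj_mask, inpaintRadius=5, flags=cv2.INPAINT_TELEA)
cv2.imwrite("object_removed.jpg", removed)
```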

HySim: An Efficient Hybrid Similarity Measure for Patch Matching in Image Inpainting

no code yet • 21 Mar 2024

In this sense, there is still a need for model-driven approaches for applications constrained by data availability and quality, especially those related to time-series forecasting using image inpainting techniques.
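
HySim's exact hybrid measure is not reproduced here; the sketch below only illustrates the patch-matching step that such measures plug into, using a simple weighted combination of SSD and mean-intensity difference. The helper names, weighting, and toy data are made up for illustration.

```python
# Generic patch-matching sketch for exemplar-based inpainting: score every
# candidate source patch against a target patch with a simple hybrid
# distance (SSD plus a mean-intensity term). This is NOT the HySim measure
# from the paper, just the matching loop it is designed to improve.
import numpy as np

def hybrid_distance(p, q, alpha=0.5):
    """Weighted sum of per-pixel SSD and squared mean-intensity difference."""
    ssd = np.sum((p - q) ** 2)
    mean_term = (p.mean() - q.mean()) ** 2
    return alpha * ssd + (1.0 - alpha) * mean_term

def best_match(image, target_patch, patch=9, stride=4):
    """Exhaustively scan for the source patch closest to target_patch.
    A real inpainter would exclude patches overlapping the hole region."""
    h, w = image.shape[:2]
    best, best_pos = np.inf, None
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            cand = image[y:y + patch, x:x + patch].astype(np.float64)
            d = hybrid_distance(cand, target_patch.astype(np.float64))
            if d < best:
                best, best_pos = d, (y, x)
    return best_pos, best

# Toy usage on a random grayscale image.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
target = img[10:19, 20:29]      # pretend this is a known patch beside a hole
print(best_match(img, target))
```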

CoCoCo: Improving Text-Guided Video Inpainting for Better Consistency, Controllability and Compatibility

no code yet • 18 Mar 2024

To this end, this paper proposes a novel text-guided video inpainting model that achieves better consistency, controllability and compatibility.

Attack Deterministic Conditional Image Generative Models for Diverse and Controllable Generation

no code yet • 13 Mar 2024

Given that many deterministic conditional image generative models have been able to produce high-quality yet fixed results, we raise an intriguing question: is it possible for pre-trained deterministic conditional image generative models to generate diverse results without changing network structures or parameters?

Open-Vocabulary Scene Text Recognition via Pseudo-Image Labeling and Margin Loss

no code yet • 12 Mar 2024

In this paper, we propose a novel open-vocabulary text recognition framework, Pseudo-OCR, to recognize OOV words.

Generative AI in Vision: A Survey on Models, Metrics and Applications

no code yet • 26 Feb 2024

Generative AI models have revolutionized various fields by enabling the creation of realistic and diverse data samples.

Analysis of Deep Image Prior and Exploiting Self-Guidance for Image Reconstruction

no code yet • 6 Feb 2024

In this work, we first provide an analysis of how DIP recovers information from undersampled imaging measurements by analyzing the training dynamics of the underlying networks in the kernel regime for different architectures.
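
For readers unfamiliar with DIP, a minimal inpainting loop looks roughly like the sketch below: a small CNN driven by a fixed noise code is fit to the observed pixels only, and the network's inductive bias fills the hole. The architecture, image size, and iteration count are illustrative and are not the configurations analyzed in the paper.

```python
# Minimal Deep Image Prior (DIP) inpainting loop: fit a small CNN, driven by
# a fixed noise input, to the *known* pixels only; the untrained network's
# structural prior fills in the masked region. Sizes are illustrative.
import torch
import torch.nn as nn

H = W = 64
target = torch.rand(1, 3, H, W)                  # stand-in for the degraded image
mask = (torch.rand(1, 1, H, W) > 0.5).float()    # 1 = observed pixel, 0 = missing

net = nn.Sequential(
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3, 3, padding=1), nn.Sigmoid(),
)
z = torch.randn(1, 32, H, W)                     # fixed random code, never updated
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(500):
    opt.zero_grad()
    out = net(z)
    # The loss is computed only where pixels are observed.
    loss = ((out - target) ** 2 * mask).mean()
    loss.backward()
    opt.step()

inpainted = net(z).detach()                      # hole filled by the network prior
```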

Panoramic Image Inpainting With Gated Convolution And Contextual Reconstruction Loss

no code yet • 5 Feb 2024

In response to these challenges, we propose a panoramic image inpainting framework that consists of a Face Generator, a Cube Generator, a side branch, and two discriminators.
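
A gated convolution block (in the spirit of DeepFill v2) can be sketched in a few lines; the layer below illustrates the general mechanism, not necessarily the exact layer used in the panoramic framework.

```python
# Sketch of a gated convolution block: a feature branch and a sigmoid gate
# branch are multiplied element-wise, letting the network learn where
# features should be suppressed (e.g., inside masked holes).
import torch
import torch.nn as nn

class GatedConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1, padding=1):
        super().__init__()
        self.feature = nn.Conv2d(in_ch, out_ch, kernel_size, stride, padding)
        self.gate = nn.Conv2d(in_ch, out_ch, kernel_size, stride, padding)
        self.act = nn.ELU()

    def forward(self, x):
        return self.act(self.feature(x)) * torch.sigmoid(self.gate(x))

# Toy forward pass: RGB image concatenated with its hole mask as a 4th channel.
x = torch.rand(1, 4, 128, 128)
print(GatedConv2d(4, 32)(x).shape)   # torch.Size([1, 32, 128, 128])
```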

LatentPaint: Image Inpainting in Latent Space with Diffusion Models

no code yet • IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) 2024

Image inpainting is generally done using either a domain-specific (preconditioned) model or a generic model that is postconditioned at inference time.
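
The postconditioned route can be sketched as a RePaint-style known-region replacement applied at every reverse-diffusion step; the denoiser and noise schedule below are toy placeholders, not the LatentPaint method or a real diffusion model.

```python
# Sketch of the generic "postconditioning" idea behind diffusion-based
# inpainting: at every reverse step, the known region is overwritten with a
# re-noised copy of the original, so only the hole is synthesized.
# `add_noise` and `denoise_step` are hypothetical placeholders standing in
# for a trained diffusion model and its scheduler.
import torch

def add_noise(x0, t, T=1000):
    """Toy forward process: linearly interpolate toward Gaussian noise."""
    alpha = 1.0 - t / T
    return alpha * x0 + (1.0 - alpha) * torch.randn_like(x0)

def denoise_step(x_t, t):
    """Placeholder reverse step; a trained denoiser would go here."""
    return x_t - 0.001 * torch.randn_like(x_t)

x0 = torch.rand(1, 4, 32, 32)                      # original (latent) image
mask = (torch.rand(1, 1, 32, 32) > 0.3).float()    # 1 = known, 0 = to inpaint
x_t = torch.randn_like(x0)                         # start from pure noise

for t in range(1000, 0, -1):
    x_t = denoise_step(x_t, t)
    # Postconditioning: keep the model output only inside the hole and
    # re-impose the (noised) known content everywhere else.
    x_t = mask * add_noise(x0, t - 1) + (1.0 - mask) * x_t

inpainted = x_t
```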