Video Inpainting

42 papers with code • 4 benchmarks • 12 datasets

The goal of Video Inpainting is to fill in missing regions of a given video sequence with contents that are both spatially and temporally coherent. Video Inpainting, also known as video completion, has many real-world applications such as undesired object removal and video restoration.

Source: Deep Flow-Guided Video Inpainting
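The core idea described above, filling masked pixels so that neighboring frames agree, can be sketched with a naive temporal-propagation baseline. This is a zero-motion simplification of flow-guided propagation (real methods warp pixels along estimated optical flow); the function name and toy shapes are illustrative, not from any cited paper:

```python
import numpy as np

def temporal_fill(frames: np.ndarray, masks: np.ndarray) -> np.ndarray:
    """Fill masked (hole) pixels by copying from the nearest frame in time
    where the same pixel location is visible.

    frames: (T, H, W) array of intensities.
    masks:  (T, H, W) boolean array, True where the pixel is missing.
    Assumes zero motion, i.e. pixels stay at the same (row, col) across frames.
    """
    out = frames.copy()
    num_frames = len(out)
    for t in range(num_frames):
        hole = masks[t].copy()
        # Search outward in time (t-1, t+1, t-2, ...) for visible pixels.
        for d in range(1, num_frames):
            if not hole.any():
                break
            for s in (t - d, t + d):
                if 0 <= s < num_frames:
                    visible = hole & ~masks[s]
                    out[t][visible] = out[s][visible]
                    hole &= ~visible
    return out
```

For example, a pixel missing in the middle frame of a three-frame clip is copied from the temporally nearest frame where it is visible, which is what makes the result temporally (if not spatially) coherent under the zero-motion assumption.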

Latest papers with no code

Multilateral Temporal-view Pyramid Transformer for Video Inpainting Detection

no code yet • 17 Apr 2024

The task of video inpainting detection is to expose the pixel-level inpainted regions within a video sequence.

Towards Online Real-Time Memory-based Video Inpainting Transformers

no code yet • 24 Mar 2024

Video inpainting tasks have seen significant improvements in recent years with the rise of deep neural networks and, in particular, vision transformers.

CoCoCo: Improving Text-Guided Video Inpainting for Better Consistency, Controllability and Compatibility

no code yet • 18 Mar 2024

This paper proposes a novel text-guided video inpainting model that achieves better consistency, controllability, and compatibility.

Towards Realistic Landmark-Guided Facial Video Inpainting Based on GANs

no code yet • 14 Feb 2024

Facial video inpainting plays a crucial role in a wide range of applications, including but not limited to the removal of obstructions in video conferencing and telemedicine, enhancement of facial expression analysis, privacy protection, integration of graphical overlays, and virtual makeup.

Reimagining Reality: A Comprehensive Survey of Video Inpainting Techniques

no code yet • 31 Jan 2024

This paper offers a comprehensive analysis of recent advancements in video inpainting techniques, a critical subset of computer vision and artificial intelligence.

Lumiere: A Space-Time Diffusion Model for Video Generation

no code yet • 23 Jan 2024

We introduce Lumiere -- a text-to-video diffusion model designed for synthesizing videos that portray realistic, diverse and coherent motion -- a pivotal challenge in video synthesis.

Towards Language-Driven Video Inpainting via Multimodal Large Language Models

no code yet • 18 Jan 2024

We introduce a new task -- language-driven video inpainting, which uses natural language instructions to guide the inpainting process.

Deep Learning-based Image and Video Inpainting: A Survey

no code yet • 7 Jan 2024

The goal of this paper is to comprehensively review the deep learning-based methods for image and video inpainting.

Infusion: Internal Diffusion for Video Inpainting

no code yet • 2 Nov 2023

We show that in the case of video inpainting, thanks to the highly self-similar nature of videos, the training of a diffusion model can be restricted to the video to inpaint and still produce very satisfying results.

UVL: A Unified Framework for Video Tampering Localization

no code yet • 28 Sep 2023

The framework extracts features that are widely present across different types of synthetic forgeries, which helps improve generalization for detecting unknown videos.