Video Inpainting
42 papers with code • 4 benchmarks • 12 datasets
The goal of Video Inpainting is to fill in missing regions of a given video sequence with contents that are both spatially and temporally coherent. Video Inpainting, also known as video completion, has many real-world applications such as undesired object removal and video restoration.
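The idea of temporal coherence can be illustrated with a minimal baseline that is not from any of the papers below: propagating visible pixels from neighboring frames into the masked region of each frame. The function name and the simple nearest-visible-frame strategy are assumptions for illustration only; real methods use learned spatial synthesis as well.

```python
import numpy as np

def inpaint_by_temporal_propagation(frames, masks):
    """Naive temporal-propagation baseline (illustrative, not a published method):
    fill each masked pixel by copying it from the nearest frame in time
    where that pixel is visible.

    frames: (T, H, W) array of grayscale frames
    masks:  (T, H, W) boolean array, True where the pixel is missing
    """
    T = len(frames)
    out = frames.astype(float).copy()
    for t in range(T):
        missing = masks[t].copy()
        # Search outward in time (t-1, t+1, t-2, t+2, ...) for visible pixels.
        for dt in range(1, T):
            for s in (t - dt, t + dt):
                if 0 <= s < T:
                    # Pixels missing at t but visible at frame s can be copied.
                    fillable = missing & ~masks[s]
                    out[t][fillable] = frames[s][fillable]
                    missing[fillable] = False
            if not missing.any():
                break
    return out
```

Pixels never visible in any frame stay unfilled here; that residual spatial hole is what the spatial-synthesis component of a full video inpainting model must hallucinate.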
Latest papers with no code
Multilateral Temporal-view Pyramid Transformer for Video Inpainting Detection
The task of video inpainting detection is to expose the pixel-level inpainted regions within a video sequence.
Towards Online Real-Time Memory-based Video Inpainting Transformers
Video inpainting tasks have seen significant improvements in recent years with the rise of deep neural networks and, in particular, vision transformers.
CoCoCo: Improving Text-Guided Video Inpainting for Better Consistency, Controllability and Compatibility
This paper proposes a novel text-guided video inpainting model that achieves better consistency, controllability and compatibility.
Towards Realistic Landmark-Guided Facial Video Inpainting Based on GANs
Facial video inpainting plays a crucial role in a wide range of applications, including but not limited to the removal of obstructions in video conferencing and telemedicine, enhancement of facial expression analysis, privacy protection, integration of graphical overlays, and virtual makeup.
Reimagining Reality: A Comprehensive Survey of Video Inpainting Techniques
This paper offers a comprehensive analysis of recent advancements in video inpainting techniques, a critical subset of computer vision and artificial intelligence.
Lumiere: A Space-Time Diffusion Model for Video Generation
We introduce Lumiere -- a text-to-video diffusion model designed for synthesizing videos that portray realistic, diverse and coherent motion -- a pivotal challenge in video synthesis.
Towards Language-Driven Video Inpainting via Multimodal Large Language Models
We introduce a new task -- language-driven video inpainting, which uses natural language instructions to guide the inpainting process.
Deep Learning-based Image and Video Inpainting: A Survey
The goal of this paper is to comprehensively review the deep learning-based methods for image and video inpainting.
Infusion: Internal Diffusion for Video Inpainting
We show that in the case of video inpainting, thanks to the highly auto-similar nature of videos, the training of a diffusion model can be restricted to the video to inpaint and still produce very satisfying results.
UVL: A Unified Framework for Video Tampering Localization
These features are widely present in different types of synthetic forgeries and help improve generalization for detecting unknown videos.