LatentPaint: Image Inpainting in Latent Space with Diffusion Models

Image inpainting is generally done using either a domain-specific (preconditioned) model or a generic model that is postconditioned at inference time. Preconditioned models are fast at inference time but extremely costly to train, requiring separate training for each domain they are applied to. Postconditioned models do not require any domain-specific training but are slow at inference, needing multiple forward and backward passes to converge to a desirable solution. Here, we derive an approach that requires no domain-specific training yet is fast at inference time. To avoid the costly inference time, we perform the forward-backward fusion step in a latent space rather than in image space. This is enabled by a newly proposed propagation module in the diffusion process. Experiments on a number of domains demonstrate that our approach matches or improves on state-of-the-art results, combining the advantages of preconditioned and postconditioned models while avoiding their disadvantages.
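
The abstract does not spell out the fusion step, but postconditioned diffusion inpainting commonly fuses, at each denoising step, the forward-noised known region with the generated hole region. Below is a minimal sketch of that fusion carried out in latent space, assuming a pretrained VAE encoder has already produced the known latent, a standard DDPM noising schedule, and a simple downsampled mask standing in for the paper's propagation module. Function names such as `propagate_mask` and `fuse_latents` are illustrative, not from the paper.

```python
# Sketch of a latent-space fusion step for diffusion-based inpainting.
# Assumptions (not from the paper): mask_lat == 1 marks the hole region,
# z0_known is the VAE encoding of the original image, and the denoiser
# update is a placeholder.
import torch
import torch.nn.functional as F


def propagate_mask(mask_px: torch.Tensor, latent_hw: tuple) -> torch.Tensor:
    """Carry the pixel-space inpainting mask down to latent resolution."""
    return F.interpolate(mask_px, size=latent_hw, mode="nearest")


def forward_noise(z0: torch.Tensor, t: int, alphas_cumprod: torch.Tensor) -> torch.Tensor:
    """Standard DDPM forward process q(z_t | z_0) applied to the known latent."""
    a_bar = alphas_cumprod[t]
    return a_bar.sqrt() * z0 + (1.0 - a_bar).sqrt() * torch.randn_like(z0)


def fuse_latents(z_t: torch.Tensor, z0_known: torch.Tensor,
                 mask_lat: torch.Tensor, t: int,
                 alphas_cumprod: torch.Tensor) -> torch.Tensor:
    """Keep generated content inside the hole, noised known content outside it."""
    z_known_t = forward_noise(z0_known, t, alphas_cumprod)
    return mask_lat * z_t + (1.0 - mask_lat) * z_known_t


# Toy usage: random tensors stand in for a real VAE encoding and denoiser.
T = 50
alphas_cumprod = torch.linspace(0.999, 0.01, T)
z0_known = torch.randn(1, 4, 32, 32)          # encoder(original image)
mask_px = torch.zeros(1, 1, 256, 256)
mask_px[:, :, 64:192, 64:192] = 1.0           # hole to be inpainted
mask_lat = propagate_mask(mask_px, (32, 32))
z_t = torch.randn_like(z0_known)              # start reverse process from noise
for t in reversed(range(T)):
    z_t = z_t - 0.01 * z_t                    # placeholder denoiser update
    z_t = fuse_latents(z_t, z0_known, mask_lat, t, alphas_cumprod)
```

Because the fusion and the reverse process both operate on the low-resolution latent, each step is cheaper than the equivalent pixel-space fusion used by postconditioned methods.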
