Diffusion for Natural Image Matting

10 Dec 2023 · Yihan Hu, Yiheng Lin, Wei Wang, Yao Zhao, Yunchao Wei, Humphrey Shi

We aim to leverage diffusion to address the challenging image matting task. However, high computational overhead and the inconsistency of noise sampling between the training and inference processes pose significant obstacles to achieving this goal. In this paper, we present DiffMatte, a solution designed to effectively overcome these challenges. First, DiffMatte decouples the decoder from the intricately coupled matting network design, so that only one lightweight decoder is involved in the iterations of the diffusion process. With this strategy, DiffMatte mitigates the growth of computational overhead as the number of sampling steps increases. Second, we employ a self-aligned training strategy with uniform time intervals, ensuring consistent noise sampling between training and inference across the entire time domain. DiffMatte is designed with flexibility in mind and can seamlessly integrate into various modern matting architectures. Extensive experimental results demonstrate that DiffMatte not only reaches state-of-the-art performance on the Composition-1k test set, surpassing the previous best methods by 5% in the SAD metric and 15% in the MSE metric, but also shows stronger generalization ability on other benchmarks.
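To make the two ideas in the abstract concrete, below is a minimal sketch of a decoupled diffusion-style sampling loop: a heavy feature encoder is run once per image, and only a lightweight decoder is iterated over uniformly spaced timesteps to refine the alpha matte. All module names, the update rule, and the conditioning scheme are illustrative assumptions, not DiffMatte's actual implementation or API.

```python
# Illustrative sketch only; not the authors' code.
import torch
import torch.nn as nn


class FeatureEncoder(nn.Module):
    """Stand-in for a heavy matting backbone; run only once per image."""
    def __init__(self, channels: int = 64):
        super().__init__()
        self.net = nn.Conv2d(4, channels, kernel_size=3, padding=1)  # RGB + trimap

    def forward(self, image_and_trimap: torch.Tensor) -> torch.Tensor:
        return self.net(image_and_trimap)


class MatteDecoder(nn.Module):
    """Stand-in for the lightweight decoder iterated inside the diffusion loop."""
    def __init__(self, channels: int = 64):
        super().__init__()
        self.net = nn.Conv2d(channels + 1, 1, kernel_size=3, padding=1)

    def forward(self, features, noisy_alpha, t):
        # A real decoder would condition on the timestep t (e.g. via an
        # embedding); that detail is omitted in this sketch.
        return torch.sigmoid(self.net(torch.cat([features, noisy_alpha], dim=1)))


@torch.no_grad()
def sample_matte(encoder, decoder, image_and_trimap, num_steps: int = 10):
    """Iteratively refine an alpha matte over uniformly spaced timesteps."""
    features = encoder(image_and_trimap)              # heavy pass, done once
    b, _, h, w = image_and_trimap.shape
    alpha = torch.randn(b, 1, h, w)                   # start from pure noise
    # Uniform time intervals, echoing the self-aligned training schedule.
    timesteps = torch.linspace(1.0, 0.0, num_steps + 1)[:-1]
    for t in timesteps:
        pred = decoder(features, alpha, t.expand(b))  # lightweight pass per step
        # Placeholder update that blends toward the prediction as t shrinks;
        # the paper's actual update rule differs.
        alpha = t * alpha + (1.0 - t) * pred
    return alpha.clamp(0.0, 1.0)


if __name__ == "__main__":
    enc, dec = FeatureEncoder(), MatteDecoder()
    x = torch.randn(1, 4, 64, 64)                     # dummy RGB + trimap input
    print(sample_matte(enc, dec, x).shape)            # torch.Size([1, 1, 64, 64])
```

The key cost property this sketch illustrates is that the per-step work is only the small decoder; the encoder cost does not grow with the number of sampling steps.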


Results from the Paper


Task          | Dataset          | Model     | Metric | Value  | Global Rank
Image Matting | AIM-500          | DiffMatte | SAD    | 16.31  | #1
Image Matting | AIM-500          | DiffMatte | MSE    | 0.0033 | #1
Image Matting | AIM-500          | DiffMatte | MAD    | 0.0098 | #1
Image Matting | AIM-500          | DiffMatte | Conn   | 15.98  | #1
Image Matting | AIM-500          | DiffMatte | Grad   | 15.68  | #1
Image Matting | Composition-1K   | DiffMatte | MSE    | 2.26   | #1
Image Matting | Composition-1K   | DiffMatte | SAD    | 17.15  | #1
Image Matting | Composition-1K   | DiffMatte | Grad   | 5.13   | #2
Image Matting | Composition-1K   | DiffMatte | Conn   | 11.42  | #1
Image Matting | Distinctions-646 | DiffMatte | SAD    | 15.50  | #1
Image Matting | Distinctions-646 | DiffMatte | MSE    | 0.0015 | #1
Image Matting | Distinctions-646 | DiffMatte | Grad   | 7.20   | #2
Image Matting | Distinctions-646 | DiffMatte | Conn   | 13.29  | #2
Image Matting | Distinctions-646 | DiffMatte | Trimap | √      | #1
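For readers unfamiliar with the SAD, MSE, and MAD columns above, the sketch below shows how these matting errors are conventionally computed over the trimap's unknown region. Scaling conventions vary by benchmark (e.g. SAD is often divided by 1000, and MSE may be reported ×10⁻³), so this is an assumed, generic formulation, not the official evaluation code for any of these datasets.

```python
# Generic matting-error sketch; scaling conventions are assumptions.
import numpy as np


def matting_errors(pred: np.ndarray, gt: np.ndarray, trimap: np.ndarray) -> dict:
    """pred, gt in [0, 1]; trimap marks the unknown region with 128 (common convention)."""
    unknown = (trimap == 128)
    diff = (pred - gt)[unknown]
    return {
        "SAD": float(np.abs(diff).sum() / 1000.0),  # sum of absolute differences / 1000
        "MSE": float(np.mean(diff ** 2)),           # mean squared error
        "MAD": float(np.mean(np.abs(diff))),        # mean absolute difference
    }


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    gt = rng.random((256, 256))
    pred = np.clip(gt + rng.normal(0.0, 0.01, gt.shape), 0.0, 1.0)
    trimap = np.full((256, 256), 128)               # treat the whole image as unknown
    print(matting_errors(pred, gt, trimap))
```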
