EFormer: Enhanced Transformer towards Semantic-Contour Features of Foreground for Portraits Matting

24 Aug 2023 · Zitao Wang, Qiguang Miao, Peipei Zhao, Yue Xi

The portrait matting task aims to extract an alpha matte with complete semantics and finely detailed contours. Compared with CNN-based approaches, transformers with self-attention modules have a greater capacity to capture long-range dependencies and the low-frequency semantic information of a portrait. However, recent research shows that the self-attention mechanism struggles to model high-frequency contour information and capture fine contour details, which can lead to bias when predicting the portrait's contours. To address this issue, we propose EFormer to enhance the model's attention towards both low-frequency semantic and high-frequency contour features. For the high-frequency contours, our research demonstrates that a cross-attention module between features of different resolutions can guide the model to allocate attention appropriately to these contour regions. Building on this, we can successfully extract the high-frequency detail information around the portrait's contours, which self-attention previously ignored. Based on the cross-attention module, we further build a semantic and contour detector (SCD) to accurately capture both low-frequency semantic and high-frequency contour features. We then design a contour-edge extraction branch and a semantic extraction branch to extract refined high-frequency contour features and complete low-frequency semantic information, respectively. Finally, we fuse the two kinds of features and leverage a segmentation head to generate the predicted portrait matte. Experiments on the VideoMatte240K (JPEG SD format) and Adobe Image Matting (AIM) datasets demonstrate that EFormer outperforms previous portrait matting methods.
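To make the core idea concrete, the following is a minimal PyTorch sketch of cross-attention between feature maps of two resolutions, followed by a naive fusion and a stand-in segmentation head. Everything here is an assumption for illustration: the module name CrossResolutionAttention, the token layout, the residual design, and the fusion scheme are not specified by the abstract, and the paper's actual SCD and branch architectures may differ substantially.

# Illustrative sketch only: queries from high-resolution (contour-oriented)
# tokens attend to keys/values from low-resolution (semantic) tokens.
# All names and dimensions below are hypothetical.
import torch
import torch.nn as nn

class CrossResolutionAttention(nn.Module):
    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_q = nn.LayerNorm(dim)
        self.norm_kv = nn.LayerNorm(dim)

    def forward(self, hi_res: torch.Tensor, lo_res: torch.Tensor) -> torch.Tensor:
        # hi_res: (B, N_hi, C) tokens from a high-resolution feature map
        # lo_res: (B, N_lo, C) tokens from a low-resolution feature map
        q = self.norm_q(hi_res)
        kv = self.norm_kv(lo_res)
        out, _ = self.attn(q, kv, kv)  # cross-attention across resolutions
        return hi_res + out            # residual connection

if __name__ == "__main__":
    B, C = 2, 64
    hi = torch.randn(B, 64 * 64, C)   # e.g. tokens from a 64x64 feature map
    lo = torch.randn(B, 16 * 16, C)   # e.g. tokens from a 16x16 feature map
    xattn = CrossResolutionAttention(dim=C)
    contour_feats = xattn(hi, lo)           # contour-branch features (assumed)
    semantic_feats = hi                     # semantic-branch placeholder
    fused = contour_feats + semantic_feats  # naive additive fusion (assumed)
    head = nn.Linear(C, 1)                  # stand-in segmentation head
    matte = torch.sigmoid(head(fused)).view(B, 1, 64, 64)
    print(matte.shape)  # torch.Size([2, 1, 64, 64])

The intent is only to show the mechanism the abstract describes: high-resolution tokens querying a lower-resolution map lets the model redistribute attention toward contour regions, after which the two feature streams are fused and decoded into a single-channel matte.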
