AWADA: Attention-Weighted Adversarial Domain Adaptation for Object Detection

31 Aug 2022 · Maximilian Menke, Thomas Wenzel, Andreas Schwung

Object detection networks have reached an impressive performance level, yet a lack of suitable data in specific applications often limits them in practice. Typically, additional data sources are utilized to support the training task, but domain gaps between these sources pose a challenge for deep learning. GAN-based image-to-image style transfer is commonly applied to shrink the domain gap, yet it is unstable and decoupled from the object detection task. We propose AWADA, an Attention-Weighted Adversarial Domain Adaptation framework that creates a feedback loop between the style-transformation and the detection task. By constructing foreground object attention maps from object detector proposals, we focus the transformation on foreground object regions and stabilize style-transfer training. In extensive experiments and ablation studies, we show that AWADA reaches state-of-the-art unsupervised domain adaptation performance for object detection on commonly used benchmarks such as synthetic-to-real, adverse-weather, and cross-camera adaptation.
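The abstract only outlines the mechanism. As an illustration, below is a minimal PyTorch-style sketch of how detector proposals might be rasterized into a foreground attention map and used to weight a per-pixel adversarial style-transfer loss. The function names, the max-over-proposals rasterization, and the least-squares GAN objective are assumptions made for this sketch, not the paper's exact formulation.

```python
import torch


def proposals_to_attention_map(proposals, scores, hw, score_thresh=0.5):
    """Rasterize detector proposals into an [H, W] foreground attention map.

    proposals: (N, 4) boxes as (x1, y1, x2, y2) in pixel coordinates.
    scores:    (N,) detector confidence scores.
    hw:        (H, W) spatial size of the translated image / feature map.
    """
    H, W = hw
    attn = torch.zeros(H, W)
    for box, s in zip(proposals, scores):
        if float(s) < score_thresh:
            continue
        x1, y1, x2, y2 = [int(v) for v in box.tolist()]
        x1, y1 = max(x1, 0), max(y1, 0)
        x2, y2 = min(x2, W), min(y2, H)
        if x2 <= x1 or y2 <= y1:
            continue
        # Each pixel keeps the highest confidence of any proposal covering it.
        attn[y1:y2, x1:x2] = torch.clamp(attn[y1:y2, x1:x2], min=float(s))
    return attn


def attention_weighted_adv_loss(disc_logits, attn, is_real):
    """Per-pixel least-squares GAN loss, up-weighted on foreground regions.

    disc_logits: (B, 1, H, W) patch-discriminator output on translated images.
    attn:        (B, 1, H, W) foreground attention maps (resized to match).
    """
    target = torch.ones_like(disc_logits) if is_real else torch.zeros_like(disc_logits)
    per_pixel = (disc_logits - target) ** 2
    weight = 1.0 + attn  # background weight 1, foreground regions weighted higher
    return (weight * per_pixel).mean()


# Toy usage: two proposals on an 8x16 image, patch-discriminator output (1, 1, 8, 16).
boxes = torch.tensor([[2.0, 1.0, 6.0, 5.0], [10.0, 3.0, 15.0, 7.0]])
scores = torch.tensor([0.9, 0.4])  # the second box falls below the threshold
attn = proposals_to_attention_map(boxes, scores, (8, 16))
logits = torch.randn(1, 1, 8, 16)
loss = attention_weighted_adv_loss(logits, attn.unsqueeze(0).unsqueeze(0), is_real=True)
```

In this reading, the detector's proposals provide the feedback loop: regions the detector considers foreground contribute more to the style-transfer objective, which is the coupling the abstract describes.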

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Unsupervised Domain Adaptation | BDD100k to Cityscapes | AWADA | mAP | 31.5 | #1 |
| Unsupervised Domain Adaptation | Cityscapes to Foggy Cityscapes | AWADA | mAP@0.5 | 44.8 | #11 |
| Unsupervised Domain Adaptation | SIM10K to Cityscapes | AWADA | mAP@0.5 | 54.1 | #9 |

Methods


No methods listed for this paper.