CHATTY: Coupled Holistic Adversarial Transport Terms with Yield for Unsupervised Domain Adaptation

19 Apr 2023  ·  Chirag P, Mukta Wagle, Ravi Kant Gupta, Pranav Jeevan, Amit Sethi ·

We propose a new technique called CHATTY: Coupled Holistic Adversarial Transport Terms with Yield for Unsupervised Domain Adaptation. Adversarial training is commonly used for learning domain-invariant representations by reversing the gradients from a domain discriminator head to train the feature extractor layers of a neural network. We propose significant modifications to the adversarial head, its training objective, and the classifier head. With the aim of reducing class confusion, we introduce a sub-network that displaces the classifier outputs of the source and target domain samples in a learnable manner. We control this displacement using a novel transport loss that spreads class clusters away from each other and makes it easier for the classifier to find decision boundaries for both the source and target domains. Adding this new loss to a careful selection of previously proposed losses improves UDA results over previous state-of-the-art methods on benchmark datasets. We show the importance of the proposed loss term through ablation studies and visualization of the movement of target domain samples in representation space.
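The core idea of the transport loss described above, pushing class clusters apart so decision boundaries are easier to find, can be illustrated with a minimal sketch. This is not the paper's actual formulation: the function names (`class_centroids`, `spread_loss`), the hinge form, and the `margin` parameter are assumptions made for illustration only.

```python
import math

def class_centroids(features, labels):
    """Mean feature vector per class label."""
    sums, counts = {}, {}
    for x, y in zip(features, labels):
        if y not in sums:
            sums[y] = [0.0] * len(x)
            counts[y] = 0
        sums[y] = [s + xi for s, xi in zip(sums[y], x)]
        counts[y] += 1
    return {y: [s / counts[y] for s in sums[y]] for y in sums}

def spread_loss(features, labels, margin=5.0):
    """Hinge penalty on pairs of class centroids closer than `margin`.

    Minimizing this pushes class clusters away from each other, an
    illustrative stand-in for the cluster-spreading effect of the
    paper's transport loss (the hinge form and margin are assumed).
    """
    cents = list(class_centroids(features, labels).values())
    loss, pairs = 0.0, 0
    for i in range(len(cents)):
        for j in range(i + 1, len(cents)):
            d = math.dist(cents[i], cents[j])  # Euclidean centroid distance
            loss += max(0.0, margin - d)       # penalize only close pairs
            pairs += 1
    return loss / max(pairs, 1)
```

In this toy form, two well-separated clusters incur zero loss, while overlapping clusters produce a positive penalty whose gradient (in an autograd framework) would push their centroids apart.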

Task                            Dataset      Model       Metric    Value  Rank
Unsupervised Domain Adaptation  FHIST        CHATTY+MCC  Accuracy  74.7   #1
Unsupervised Domain Adaptation  Office-31    CHATTY+MCC  Accuracy  89.9   #2
Unsupervised Domain Adaptation  Office-Home  CHATTY+MCC  Accuracy  73.0   #8
