no code implementations • ICLR 2022 • David Acuna, Marc T Law, Guojun Zhang, Sanja Fidler
Defining optimal solutions in domain-adversarial training as a local Nash equilibrium, we show that gradient descent in this setting can violate the asymptotic convergence guarantees of the optimizer, often hindering transfer performance.
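The convergence issue can be illustrated on a toy two-player game (a generic sketch, not code from the paper): for the bilinear objective f(x, y) = x * y, whose unique Nash equilibrium is the origin, simultaneous gradient descent-ascent spirals away from the equilibrium instead of converging to it. The step size and iteration count below are arbitrary assumptions.

```python
import numpy as np

# Toy two-player game: player x minimizes f(x, y) = x * y,
# player y maximizes it. The unique Nash equilibrium is (0, 0).
x, y = 1.0, 1.0
lr = 0.1

for t in range(100):
    gx = y          # d f / d x at the current iterate
    gy = x          # d f / d y at the current iterate
    x -= lr * gx    # descent step for the minimizing player
    y += lr * gy    # ascent step for the maximizing player

# The distance to the equilibrium grows over time: plain simultaneous
# gradient descent-ascent diverges even on this simple game.
print(np.hypot(x, y))  # larger than the initial distance sqrt(2)
```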
no code implementations • ICLR 2022 • Rafid Mahmood, Sanja Fidler, Marc T Law
Active learning is the process of training a model with limited labeled data by selecting a core subset of an unlabeled data pool to label.
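A minimal sketch of one way such a core subset can be chosen: the greedy k-center rule repeatedly labels the pool point farthest from everything selected so far. This is a standard core-set baseline written for illustration, not the method of the paper; the function name `greedy_kcenter`, the feature matrix, and the budget are hypothetical.

```python
import numpy as np

def greedy_kcenter(features: np.ndarray, labeled_idx: list, budget: int) -> list:
    """Pick `budget` pool indices by repeatedly choosing the point farthest
    from the already-labeled/selected set (greedy k-center core-set rule)."""
    selected = list(labeled_idx)
    # Distance from every pool point to its nearest selected point.
    dists = np.min(
        np.linalg.norm(features[:, None, :] - features[selected][None, :, :], axis=-1),
        axis=1,
    )
    chosen = []
    for _ in range(budget):
        i = int(np.argmax(dists))  # farthest point from the current set
        chosen.append(i)
        selected.append(i)
        # Update nearest-selected distances with the newly chosen point.
        dists = np.minimum(dists, np.linalg.norm(features - features[i], axis=1))
    return chosen

# Usage on random features, assuming a few points are already labeled.
pool = np.random.randn(1000, 32)
print(greedy_kcenter(pool, labeled_idx=[0, 1, 2], budget=10))
```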
no code implementations • NeurIPS 2021 • Marc T Law
The lack of a geodesic between every pair of ultrahyperbolic points makes the task of learning parametric models (e.g., neural networks) difficult.
no code implementations • 1 Jan 2021 • David Acuna, Guojun Zhang, Marc T Law, Sanja Fidler
We provide empirical results for several f-divergences and show that some, not considered previously in domain-adversarial learning, achieve state-of-the-art results in practice.
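For background, adversarial objectives of this kind commonly estimate the chosen f-divergence between source and target feature distributions through its standard variational lower bound, where f* is the convex conjugate of the generator function f; the excerpt does not confirm that this exact bound is the one used here, so it is included only as context.

```latex
% Variational (Fenchel-conjugate) lower bound on an f-divergence:
% the discriminator T is trained to tighten the bound, while the
% feature extractor is trained to minimize it.
\[
  D_f(P \,\|\, Q) \;\ge\; \sup_{T}\;
  \mathbb{E}_{x \sim P}\big[T(x)\big]
  \;-\;
  \mathbb{E}_{x \sim Q}\big[f^{*}\big(T(x)\big)\big]
\]
```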
no code implementations • 28 Sep 2020 • Aayush Prakash, Shoubhik Debnath, Jean Francois Lafleche, Eric Cameracci, Gavriel State, Marc T Law
However, neural network models trained on synthetic data do not perform well on real data because of the domain gap.
no code implementations • 27 Sep 2018 • Marc T Law, Jake Snell, Richard S Zemel
This formulation produces node representations close to the centroid of their descendants.