no code implementations • 22 Apr 2020 • Xi Wu, Yang Guo, Jiefeng Chen, Yingyu Liang, Somesh Jha, Prasad Chalasani
Recent studies provide hints and failure examples for domain invariant representation learning, a common approach to this problem, but their explanations differ from one another and do not form a unified picture.
1 code implementation • ICML 2020 • Wei Zhang, Thomas Kobber Panum, Somesh Jha, Prasad Chalasani, David Page
We study the problem of learning Granger causality between event types from asynchronous, interdependent, multi-type event sequences.
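As a point of reference for the problem setting, the sketch below illustrates the classical (linear, count-based) notion of Granger causality on two synthetic event types: type-A events raise the rate of type-B events one step later. This is a minimal baseline illustration, not the paper's method for asynchronous multi-type sequences; the binning, lag depth, and Poisson simulation are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binned event counts for two event types: type-A events
# drive type-B events one step later, so A Granger-causes B.
T = 500
a = rng.poisson(1.0, size=T).astype(float)
b = np.empty(T)
b[0] = rng.poisson(1.0)
for t in range(1, T):
    b[t] = rng.poisson(0.3 + 0.8 * a[t - 1])

def residual_var(y, regressors):
    """Least-squares fit of y on the given regressors; return residual variance."""
    X = np.column_stack([np.ones(len(y))] + regressors)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return (y - X @ beta).var()

# Restricted model: predict b from its own lag only.
v_restricted = residual_var(b[1:], [b[:-1]])
# Full model: also include A's lagged counts.
v_full = residual_var(b[1:], [b[:-1], a[:-1]])

# A Granger-causes B if adding A's history reduces prediction error.
print(v_full < v_restricted)
```

Here the full model's residual variance drops well below the restricted model's, recovering the planted A→B dependence; the paper's setting replaces this linear, regularly-binned view with asynchronous, interdependent event sequences.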
1 code implementation • ICML 2020 • Prasad Chalasani, Jiefeng Chen, Amrita Roy Chowdhury, Somesh Jha, Xi Wu
Our first contribution is a theoretical exploration of how these two properties (when attributions are based on Integrated Gradients, or IG) relate to adversarial training, for a class of 1-layer networks (which includes logistic regression models for binary and multi-class classification). For these networks we show that (a) adversarial training with an $\ell_\infty$-bounded adversary produces models with sparse attribution vectors, and (b) natural model-training that encourages stable explanations (via an extra term in the loss function) is equivalent to adversarial training.
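To make the IG attributions in question concrete, here is a minimal sketch of Integrated Gradients for a 1-layer (logistic regression) model, approximating the path integral with a midpoint Riemann sum. The weights, input, and zero baseline are illustrative assumptions, not values from the paper; note that a zero weight yields an exactly zero attribution, the kind of sparsity claim (a) concerns.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def integrated_gradients(w, b, x, baseline=None, steps=100):
    """IG_i = (x_i - x'_i) * average of dF/dx_i along the straight path
    from baseline x' to x, for F(x) = sigmoid(w @ x + b)."""
    if baseline is None:
        baseline = np.zeros_like(x)
    alphas = (np.arange(steps) + 0.5) / steps  # midpoint rule
    total = np.zeros_like(x)
    for a in alphas:
        point = baseline + a * (x - baseline)
        s = sigmoid(w @ point + b)
        total += s * (1.0 - s) * w  # gradient of sigmoid output w.r.t. input
    return (x - baseline) * total / steps

# Illustrative 1-layer model; w[2] = 0 gives a zero attribution for feature 3.
w = np.array([2.0, -1.0, 0.0])
b = 0.5
x = np.array([1.0, 0.0, 3.0])
ig = integrated_gradients(w, b, x)

# Completeness axiom: attributions sum to F(x) - F(baseline).
gap = ig.sum() - (sigmoid(w @ x + b) - sigmoid(b))
```

The completeness check (attributions summing to the output difference) is a standard sanity test for any IG implementation, and the zero attribution on the zeroed-out weight shows why sparse weight vectors translate into sparse attribution vectors for this model class.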