no code implementations • ICML 2020 • Vikas K. Garg, Stefanie Jegelka, Tommi Jaakkola
We address two fundamental questions about graph neural networks (GNNs).
no code implementations • 13 Feb 2020 • Vikas K. Garg, Adam Kalai, Katrina Ligett, Zhiwei Steven Wu
Domain generalization is the problem of machine learning when the training data and the test data come from different domains.
no code implementations • 27 Aug 2019 • Vikas K. Garg, Inderjit S. Dhillon, Hsiang-Fu Yu
The Transformer architecture is based entirely on self-attention and has been shown to outperform recurrent models on sequence transduction tasks such as machine translation.
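As a reminder of the mechanism this line refers to, here is a minimal pure-Python sketch of (single-head, unbatched) scaled dot-product self-attention. It is an illustration of the standard operation, not code from the paper; the function names are ours.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of floats.
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def self_attention(Q, K, V):
    # Q, K, V: lists of vectors (one per sequence position).
    # Each output position is a weighted average of the value vectors,
    # with weights given by softmax of scaled query-key dot products.
    d = len(K[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in K]
        w = softmax(scores)
        out.append([sum(wi * v[j] for wi, v in zip(w, V))
                    for j in range(len(V[0]))])
    return out
```

With a zero query, all keys score equally, so the output is the plain average of the values.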
no code implementations • 29 May 2019 • Vikas K. Garg, Tommi Jaakkola
We introduce a new class of context-dependent, incomplete-information games to serve as structured prediction models for settings with significant strategic interactions.
no code implementations • NeurIPS 2019 • Vikas K. Garg, Tommi Jaakkola
The transport problem is seeded with prior information about node importance, attributes, and edges in the graph.
no code implementations • NeurIPS 2019 • Vikas K. Garg, Tamar Pichkhadze
We resolve the fundamental problem of online decoding with general $n^{\mathrm{th}}$-order ergodic Markov chain models.
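For context, the classical offline baseline that online decoding relaxes is Viterbi decoding of a first-order Markov model. The sketch below is that standard textbook algorithm (in log space), not the paper's online method; the toy model in the usage note is hypothetical.

```python
import math

def viterbi(obs, states, start_p, trans_p, emit_p):
    # V[t][s] = max log-probability of any state path ending in s
    # after emitting obs[0..t]; back[t][s] remembers the argmax predecessor.
    V = [{s: math.log(start_p[s]) + math.log(emit_p[s][obs[0]])
          for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        V.append({})
        back.append({})
        for s in states:
            prev = max(states, key=lambda p: V[t - 1][p] + math.log(trans_p[p][s]))
            V[t][s] = (V[t - 1][prev] + math.log(trans_p[prev][s])
                       + math.log(emit_p[s][obs[t]]))
            back[t][s] = prev
    # Backtrack from the best final state to recover the full path.
    last = max(states, key=lambda s: V[-1][s])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return list(reversed(path))
```

On the classic healthy/fever toy HMM, observing `['normal', 'cold', 'dizzy']` decodes to `['Healthy', 'Healthy', 'Fever']`. Note this decoder needs the entire observation sequence before emitting any state, which is exactly the constraint the online setting removes.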
no code implementations • NeurIPS 2018 • Vikas K. Garg, Ofer Dekel, Lin Xiao
We present a new machine learning technique for training small resource-constrained predictors.
no code implementations • NeurIPS 2018 • Vikas K. Garg, Adam Kalai
We introduce a framework that transfers knowledge acquired from a repository of (heterogeneous) supervised datasets to new unsupervised datasets.
no code implementations • 29 Dec 2016 • Vikas K. Garg, Adam Tauman Kalai
We introduce a new paradigm to investigate unsupervised learning, reducing unsupervised learning to supervised learning.
no code implementations • 25 Jun 2015 • Vikas K. Garg, Cynthia Rudin, Tommi Jaakkola
We present a framework for clustering with cluster-specific feature selection.
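To make the idea of cluster-specific feature selection concrete, here is a hypothetical toy variant of k-means, not the paper's framework: after each update step, every cluster keeps only its `m` lowest-variance features and computes assignment distances on those features alone. All names and the naive first-k-points initialization are our own illustrative choices.

```python
def localized_kmeans(X, k, m, iters=10):
    """Toy k-means where each cluster selects its own m features.

    Hypothetical sketch: after recomputing a cluster's center, keep the m
    features with smallest within-cluster variance; assignment distances
    for that cluster are measured only on its selected features.
    """
    d = len(X[0])
    centers = [list(x) for x in X[:k]]          # naive init: first k points
    feats = [list(range(d)) for _ in range(k)]  # start with all features
    assign = [0] * len(X)
    for _ in range(iters):
        # Assignment step: distance restricted to each cluster's feature set.
        for i, x in enumerate(X):
            assign[i] = min(range(k),
                            key=lambda c: sum((x[j] - centers[c][j]) ** 2
                                              for j in feats[c]))
        # Update step: recompute centers, then reselect features per cluster.
        for c in range(k):
            members = [X[i] for i in range(len(X)) if assign[i] == c]
            if not members:
                continue
            centers[c] = [sum(x[j] for x in members) / len(members)
                          for j in range(d)]
            var = [sum((x[j] - centers[c][j]) ** 2 for x in members)
                   for j in range(d)]
            feats[c] = sorted(range(d), key=lambda j: var[j])[:m]
    return assign, feats
```

On data where clusters are coherent along different coordinates, each cluster ends up measuring distance only in its own informative subspace.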
no code implementations • CVPR 2015 • Sukrit Shankar, Vikas K. Garg, Roberto Cipolla
To ameliorate this limitation, we propose Deep-Carving, a novel training procedure for CNNs that helps the network efficiently carve itself for the task of multiple attribute prediction.