no code implementations • 18 Apr 2024 • Thivin Anandh, Divij Ghose, Himanshu Jain, Sashikumaar Ganesan
Variational Physics-Informed Neural Networks (VPINNs) utilize a variational loss function to solve partial differential equations, mirroring Finite Element Analysis techniques.
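The variational loss in question can be illustrated with a minimal 1D sketch, assuming the Poisson problem -u''(x) = f(x) on [0, 1] and sine test functions; in an actual VPINN the candidate solution would be a neural network, whereas here a closed-form trial function keeps the example self-contained:

```python
import numpy as np

# Hypothetical 1D illustration of a VPINN-style variational loss for
# -u''(x) = f(x) on [0, 1]. In a real VPINN the candidate u is a neural
# network; a closed-form trial function keeps this sketch self-contained.

def trapezoid(vals, x):
    # Composite trapezoidal rule for the quadrature of the weak form.
    return float(np.sum((vals[1:] + vals[:-1]) * np.diff(x)) / 2.0)

def variational_loss(u_prime, f, test_fns, n_quad=256):
    """Sum of squared weak-form residuals R_k = ∫ u' v_k' dx - ∫ f v_k dx."""
    x = np.linspace(0.0, 1.0, n_quad)
    loss = 0.0
    for v, v_prime in test_fns:
        r = trapezoid(u_prime(x) * v_prime(x), x) - trapezoid(f(x) * v(x), x)
        loss += r ** 2
    return loss

# Exact solution of -u'' = pi^2 sin(pi x) is u = sin(pi x).
u_prime = lambda x: np.pi * np.cos(np.pi * x)
f = lambda x: np.pi ** 2 * np.sin(np.pi * x)
# Sine test functions v_k = sin(k pi x), mirroring a spectral FEM basis.
test_fns = [(lambda x, k=k: np.sin(k * np.pi * x),
             lambda x, k=k: k * np.pi * np.cos(k * np.pi * x)) for k in (1, 2, 3)]

print(variational_loss(u_prime, f, test_fns))  # ≈ 0 for the exact solution
```

Minimizing this loss over network parameters, one test function at a time, is what ties VPINNs to the weak formulation used in finite element analysis.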
no code implementations • NeurIPS 2023 • Ziteng Sun, Ananda Theertha Suresh, Jae Hun Ro, Ahmad Beirami, Himanshu Jain, Felix Yu
We show that the optimal draft selection algorithm (transport plan) can be computed via linear programming, whose best-known runtime is exponential in $k$.
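For the single-draft case (k = 1), the transport-plan view can be sketched as a small linear program, assuming illustrative draft and target distributions p and q: maximize the coupling mass kept on the diagonal (the acceptance probability) subject to the marginal constraints. The optimum is known to equal Σ_i min(p_i, q_i):

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical k = 1 illustration: optimal draft selection as an
# optimal-transport LP between draft distribution p and target q.
p = np.array([0.5, 0.3, 0.2])   # draft model distribution (assumed)
q = np.array([0.4, 0.4, 0.2])   # target model distribution (assumed)
n = len(p)

# Variables: coupling pi[i, j], flattened row-major.
# Objective: maximize sum_i pi[i, i]  ->  minimize its negation.
c = np.zeros(n * n)
c[np.arange(n) * n + np.arange(n)] = -1.0

# Marginal constraints: rows of pi sum to p, columns sum to q.
A_eq = np.zeros((2 * n, n * n))
for i in range(n):
    A_eq[i, i * n:(i + 1) * n] = 1.0   # row marginal = p[i]
    A_eq[n + i, i::n] = 1.0            # column marginal = q[i]
b_eq = np.concatenate([p, q])

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
acceptance = -res.fun
print(acceptance)  # 0.9 = sum(min(p, q)) for this choice of p, q
```

For k > 1 draft tokens the coupling lives over k-tuples, which is where the exponential-in-k runtime of the LP formulation comes from.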
no code implementations • 18 Aug 2022 • Lovish Madaan, Srinadh Bhojanapalli, Himanshu Jain, Prateek Jain
Based on such hierarchical navigation, we design Treeformer which can use one of two efficient attention layers -- TF-Attention and TC-Attention.
no code implementations • 14 Aug 2022 • Manzil Zaheer, Ankit Singh Rawat, Seungyeon Kim, Chong You, Himanshu Jain, Andreas Veit, Rob Fergus, Sanjiv Kumar
In this paper, we propose the teacher-guided training (TGT) framework for training a high-quality compact model that leverages the knowledge acquired by pretrained generative models, while obviating the need to go through a large volume of data.
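The generic distillation objective that teacher-guided training builds on can be sketched as follows; this is the standard temperature-scaled KL loss between teacher and student outputs, not the full TGT framework, and the logits are made up for illustration:

```python
import numpy as np

# Minimal sketch of the knowledge-distillation objective underlying
# teacher-guided training: the student matches the teacher's softened
# output distribution via a temperature-scaled KL divergence.

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher_T || student_T), scaled by T^2."""
    p_t = softmax(teacher_logits / T)
    p_s = softmax(student_logits / T)
    return T * T * np.sum(p_t * (np.log(p_t) - np.log(p_s)), axis=-1)

teacher = np.array([2.0, 0.5, -1.0])  # assumed teacher logits
student = np.array([1.8, 0.6, -0.9])  # assumed student logits
print(distillation_loss(student, teacher))  # small: the logits nearly agree
```

In the TGT setting the "data" fed through this loss would come from a pretrained generative model rather than a large labeled corpus.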
1 code implementation • 12 Nov 2021 • Kunal Dahiya, Deepak Saini, Anshul Mittal, Ankush Shaw, Kushal Dave, Akshay Soni, Himanshu Jain, Sumeet Agarwal, Manik Varma
Scalability and accuracy are well recognized challenges in deep extreme multi-label learning where the objective is to train architectures for automatically annotating a data point with the most relevant subset of labels from an extremely large label set.
1 code implementation • 13 Oct 2021 • Srinadh Bhojanapalli, Ayan Chakrabarti, Andreas Veit, Michal Lukasik, Himanshu Jain, Frederick Liu, Yin-Wen Chang, Sanjiv Kumar
Pairwise dot product-based attention allows Transformers to exchange information between tokens in an input-dependent way, and is key to their success across diverse applications in language and vision.
no code implementations • 16 Jun 2021 • Srinadh Bhojanapalli, Ayan Chakrabarti, Himanshu Jain, Sanjiv Kumar, Michal Lukasik, Andreas Veit
State-of-the-art transformer models use pairwise dot-product based self-attention, which comes at a computational cost quadratic in the input sequence length.
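The quadratic cost referred to here comes from the n × n score matrix of dot-product attention; a minimal single-head, unmasked sketch (with made-up shapes) makes this concrete:

```python
import numpy as np

# Minimal sketch of pairwise dot-product self-attention (single head,
# no masking). The (n, n) score matrix is what makes the cost quadratic
# in the sequence length n.

def self_attention(Q, K, V):
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)  # shape (n, n): quadratic in n
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V             # shape (n, d)

rng = np.random.default_rng(0)
n, d = 8, 4
Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))
out = self_attention(Q, K, V)
print(out.shape)  # (8, 4)
```

Efficient-attention variants such as those above replace the dense (n, n) interaction with a cheaper structure while keeping the output shape unchanged.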
no code implementations • EMNLP 2020 • Michal Lukasik, Himanshu Jain, Aditya Krishna Menon, Seungyeon Kim, Srinadh Bhojanapalli, Felix Yu, Sanjiv Kumar
Label smoothing has been shown to be an effective regularization strategy in classification that prevents overfitting and aids label de-noising.
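Label smoothing itself is a one-line transformation: the one-hot target e_y becomes (1 - ε) · e_y + ε / K over K classes. A minimal sketch:

```python
import numpy as np

# Label smoothing: spread eps of the probability mass uniformly over
# the K classes, keeping 1 - eps on the true label.

def smooth_labels(y, num_classes, eps=0.1):
    one_hot = np.eye(num_classes)[y]
    return (1.0 - eps) * one_hot + eps / num_classes

targets = smooth_labels(np.array([0, 2]), num_classes=3, eps=0.1)
print(targets)
# True class gets 0.9 + 0.1/3 ≈ 0.933; each other class gets 0.1/3 ≈ 0.033.
```

Training against these softened targets instead of one-hot vectors is what yields the regularization and de-noising effects discussed above.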
3 code implementations • ICLR 2021 • Aditya Krishna Menon, Sadeep Jayasumana, Ankit Singh Rawat, Himanshu Jain, Andreas Veit, Sanjiv Kumar
Real-world classification problems typically exhibit an imbalanced or long-tailed label distribution, wherein many labels are associated with only a few samples.
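One remedy this line of work studies is logit adjustment: offset each class logit by the log of its class prior so rare classes are not penalized by their low base rate. A minimal post-hoc sketch with assumed priors and scores:

```python
import numpy as np

# Post-hoc logit adjustment for long-tailed classification: subtract
# tau * log(prior) from each class logit.

def adjust_logits(logits, class_priors, tau=1.0):
    return logits - tau * np.log(class_priors)

# Assumed long-tailed priors: class 0 is 90x more frequent than class 2.
priors = np.array([0.90, 0.09, 0.01])
logits = np.array([2.0, 1.9, 1.8])  # raw scores narrowly favour the head class
print(np.argmax(logits))                          # 0: head class wins raw
print(np.argmax(adjust_logits(logits, priors)))   # 2: tail class after adjustment
```

The same offset can equivalently be baked into the training loss rather than applied at prediction time.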
no code implementations • NeurIPS 2020 • Pranjal Awasthi, Himanshu Jain, Ankit Singh Rawat, Aravindan Vijayaraghavan
Adversarial robustness measures the susceptibility of a classifier to imperceptible perturbations made to the inputs at test time.
no code implementations • 25 Sep 2019 • Kunal Dahiya, Anshul Mittal, Deepak Saini, Kushal Dave, Himanshu Jain, Sumeet Agarwal, Manik Varma
The objective in deep extreme multi-label learning is to jointly learn feature representations and classifiers to automatically tag data points with the most relevant subset of labels from an extremely large label set.
no code implementations • 9 Oct 2017 • Himanshu Jain, Archana Praveen Kumar
Experimental results demonstrate the validity of the procedure.
no code implementations • 9 Oct 2017 • Himanshu Jain, Archana Praveen Kumar
This paper proposes a sequential algorithm for thinning multi-dimensional binary patterns that is easy to understand and to adapt to specific applications.
no code implementations • CVPR 2017 • Endri Dibra, Himanshu Jain, Cengiz Oztireli, Remo Ziegler, Markus Gross
In this work, we present a novel method for capturing human body shape from a single scaled silhouette.
no code implementations • NeurIPS 2015 • Kush Bhatia, Himanshu Jain, Purushottam Kar, Manik Varma, Prateek Jain
The objective in extreme multi-label learning is to train a classifier that can automatically tag a novel data point with the most relevant subset of labels from an extremely large label set.
no code implementations • 9 Jul 2015 • Kush Bhatia, Himanshu Jain, Purushottam Kar, Prateek Jain, Manik Varma
Embedding based approaches make training and prediction tractable by assuming that the training label matrix is low-rank and hence the effective number of labels can be reduced by projecting the high dimensional label vectors onto a low dimensional linear subspace.