no code implementations • 24 Jan 2024 • Vidit Jain, Mukund Rungta, Yuchen Zhuang, Yue Yu, Zeyu Wang, Mu Gao, Jeffrey Skolnick, Chao Zhang
The best-performing models aim to learn a static representation by combining document and hierarchical label information.
no code implementations • 10 Jul 2022 • Kunal Dahiya, Nilesh Gupta, Deepak Saini, Akshay Soni, Yajun Wang, Kushal Dave, Jian Jiao, Gururaj K, Prasenjit Dey, Amit Singh, Deepesh Hada, Vidit Jain, Bhawna Paliwal, Anshul Mittal, Sonu Mehta, Ramachandran Ramjee, Sumeet Agarwal, Purushottam Kar, Manik Varma
This paper identifies that the memory overheads of popular negative mining techniques often force mini-batch sizes to remain small, slowing down training.
no code implementations • ICLR 2022 • Yatin Nandwani, Vidit Jain, Mausam, Parag Singla
One drawback of the proposed architectures, which are often based on Graph Neural Networks (GNNs), is that they cannot generalize across the size of the output space from which variables are assigned values, such as the set of colors in a graph coloring problem (GCP) or the board size in Sudoku.
no code implementations • 6 Jan 2021 • Vidit Jain, Maitree Leekha, Rajiv Ratn Shah, Jainendra Shukla
To assess the feasibility of our identification module, we compared backchannel prediction models trained on (a) manually annotated and (b) semi-supervised labels.