no code implementations • NAACL (ACL) 2022 • Weiyi Lu, Sunny Rajagopalan, Priyanka Nigam, Jaspreet Singh, Xiaodi Sun, Yi Xu, Belinda Zeng, Trishul Chilimbi
However, one issue that often arises in MTL is that tasks converge at different speeds due to differences in task difficulty, so it can be challenging to achieve the best performance on all tasks simultaneously with a single model checkpoint.
no code implementations • 26 Nov 2023 • Abhijit Anand, Jurek Leonhardt, Jaspreet Singh, Koustav Rudra, Avishek Anand
We then adapt a family of contrastive losses for the document ranking task that can exploit the augmented data to learn an effective ranking model.
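The abstract does not spell out which contrastive losses are used; as a minimal sketch, an InfoNCE-style loss over a query, one relevant document, and several non-relevant documents (all names and the temperature value here are illustrative assumptions, not the paper's method) looks like:

```python
import numpy as np

def contrastive_ranking_loss(query, pos_doc, neg_docs, temperature=0.1):
    """InfoNCE-style contrastive loss sketch: pull the query embedding
    toward the relevant (positive) document and away from non-relevant
    (negative) ones. Inputs are embedding vectors of equal dimension."""
    # similarity of the query to the positive doc first, then the negatives
    sims = np.array([query @ pos_doc] + [query @ d for d in neg_docs])
    logits = sims / temperature
    logits -= logits.max()  # numerical stability before exponentiating
    probs = np.exp(logits) / np.exp(logits).sum()
    # cross-entropy with the positive document as the target class
    return -np.log(probs[0])
```

A query embedding close to its relevant document yields a small loss; one closer to a negative yields a large loss, which is the gradient signal a ranking model would train on.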
no code implementations • 23 Jan 2023 • Gunbir Singh Baveja, Jaspreet Singh
The testing RMSE came out to be around $0.097$.
1 code implementation • 22 Sep 2022 • Jaspreet Singh, Chandan Singh
The final classification layer in equivariant neural networks is invariant to affine geometric transformations such as rotation, reflection, and translation; the scalar value is obtained either by eliminating the spatial dimensions of the filter responses through convolution and down-sampling throughout the network, or by averaging over the filter responses.
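The second route mentioned above, averaging over filter responses, can be illustrated with a small sketch (a generic global-average-pooling readout, assumed here for illustration rather than taken from the paper): any transformation that merely permutes spatial positions, such as a 90° rotation or a reflection, leaves the channel-wise average unchanged.

```python
import numpy as np

def invariant_readout(feature_map):
    """Global average pooling over the spatial axes of a filter-response
    map of shape (channels, height, width). The result is one scalar per
    channel, invariant to transformations that permute spatial positions."""
    return feature_map.mean(axis=(1, 2))

# A rotation of the feature map only rearranges spatial positions,
# so the pooled readout is identical.
fmap = np.arange(2 * 4 * 4, dtype=float).reshape(2, 4, 4)
rotated = np.rot90(fmap, k=1, axes=(1, 2))
```

This is why averaging is a common final step in equivariant architectures: equivariant features plus a permutation-invariant pooling yield an invariant prediction.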
no code implementations • 15 Jun 2021 • Michael Völske, Alexander Bondarenko, Maik Fröbe, Matthias Hagen, Benno Stein, Jaspreet Singh, Avishek Anand
We investigate whether one can explain the behavior of neural ranking models in terms of their congruence with well-understood principles of document ranking, using established theories from axiomatic IR.
1 code implementation • EMNLP (BlackboxNLP) 2020 • Jonas Wallat, Jaspreet Singh, Avishek Anand
We found that ranking models forget the least and retain more knowledge in their final layer compared to masked language modeling and question-answering.
no code implementations • 18 Jan 2021 • Zijian Zhang, Jaspreet Singh, Ujwal Gadiraju, Avishek Anand
Are humans consistently better at selecting features that make image recognition more accurate?
1 code implementation • 27 Oct 2020 • Anil Kumar Hanumanthappa, Jaswinder Singh, Kuldip Paliwal, Jaspreet Singh, Yaoqi Zhou
Motivation: RNA solvent accessibility, similar to protein solvent accessibility, reflects the structural regions that are accessible to solvents or other functional biomolecules, and plays an important role in structural and functional characterization.
1 code implementation • 19 Oct 2020 • Jonas Wallat, Jaspreet Singh, Avishek Anand
We found that ranking models forget the least and retain more knowledge in their final layer.
no code implementations • 29 Apr 2020 • Jaspreet Singh, Zhenye Wang, Megha Khosla, Avishek Anand
In extensive quantitative experiments, we show that our approach outperforms other model-agnostic explanation approaches in validity across pointwise, pairwise, and listwise LTR models, without compromising completeness.
no code implementations • LREC 2020 • Gaurav Kumar, Rishabh Joshi, Jaspreet Singh, Promod Yenigalla
The problem of building a coherent and non-monotonous conversational agent with proper discourse and coverage is still an area of open research.
no code implementations • 19 Jul 2019 • Abdul Karim, Jaspreet Singh, Avinash Mishra, Abdollah Dehzangi, M. A. Hakim Newton, Abdul Sattar
Prediction of toxicity levels of chemical compounds is an important issue in Quantitative Structure-Activity Relationship (QSAR) modeling.
no code implementations • 15 Jul 2019 • Zeon Trevor Fernando, Jaspreet Singh, Avishek Anand
In image classification, the reference input tends to be a plain black image.
1 code implementation • 7 Dec 2018 • Avishek Anand, Megha Khosla, Jaspreet Singh, Jan-Hendrik Zab, Zijian Zhang
In this paper, we propose a scalable approach to training word embeddings by partitioning the input space, in order to scale to massive text corpora without sacrificing the quality of the embeddings.
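The partition-then-train idea can be sketched as follows. This is an illustrative toy, not the paper's algorithm: the helpers `partition_vocab` and `train_partitioned` are hypothetical names, and the per-partition trainer here is a simple co-occurrence-plus-SVD factorization standing in for whatever embedding objective the authors actually use.

```python
import numpy as np

def partition_vocab(vocab, num_parts):
    """Hypothetical helper: split the vocabulary into disjoint partitions
    so each partition can be trained independently (and in parallel)."""
    return [vocab[i::num_parts] for i in range(num_parts)]

def train_partitioned(corpus_tokens, vocab, num_parts=2, dim=4):
    """Illustrative sketch: build a small co-occurrence matrix per
    partition and factorize it, instead of one huge matrix over the
    full vocabulary. Returns a dict mapping word -> embedding vector."""
    embeddings = {}
    for part in partition_vocab(vocab, num_parts):
        idx = {w: i for i, w in enumerate(part)}
        cooc = np.zeros((len(part), len(part)))
        # count adjacent-token co-occurrences within this partition
        for a, b in zip(corpus_tokens, corpus_tokens[1:]):
            if a in idx and b in idx:
                cooc[idx[a], idx[b]] += 1
                cooc[idx[b], idx[a]] += 1
        # low-rank factorization of the log-smoothed counts
        u, s, _ = np.linalg.svd(np.log1p(cooc))
        k = min(dim, len(part))
        for w, i in idx.items():
            vec = np.zeros(dim)
            vec[:k] = u[i, :k] * np.sqrt(s[:k])
            embeddings[w] = vec
    return embeddings
```

Each partition's sub-problem is far smaller than the full vocabulary, which is what makes this style of training scale to massive corpora.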