Search Results for author: Jaspreet Singh

Found 14 papers, 5 papers with code

Asynchronous Convergence in Multi-Task Learning via Knowledge Distillation from Converged Tasks

no code implementations NAACL (ACL) 2022 Weiyi Lu, Sunny Rajagopalan, Priyanka Nigam, Jaspreet Singh, Xiaodi Sun, Yi Xu, Belinda Zeng, Trishul Chilimbi

One issue that often arises in MTL is that convergence speed varies between tasks due to differences in task difficulty, so it can be challenging to achieve the best performance on all tasks simultaneously with a single model checkpoint.

Knowledge Distillation Multi-Task Learning
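The title suggests distilling from task heads that have already converged while the remaining tasks keep training. A minimal sketch of the distillation side of that idea, not the authors' implementation (the temperature-scaled KL loss and all names here are illustrative):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def distill_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) with temperature T, averaged over the batch.

    Once a task converges, its frozen checkpoint can act as the teacher for
    that task's head while the other tasks continue to train.
    """
    p = softmax(teacher_logits / T)
    q = softmax(student_logits / T)
    return float(np.mean(np.sum(p * (np.log(p) - np.log(q)), axis=-1)) * T * T)

teacher = np.array([[2.0, 0.5, -1.0]])  # frozen logits from the converged task
student = np.array([[1.8, 0.6, -0.9]])  # current model's logits for that task
loss = distill_loss(student, teacher)
```

The loss is zero when the student matches the converged teacher exactly, so it anchors the converged task while the shared encoder keeps moving for the others.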

Data Augmentation for Sample Efficient and Robust Document Ranking

no code implementations 26 Nov 2023 Abhijit Anand, Jurek Leonhardt, Jaspreet Singh, Koustav Rudra, Avishek Anand

We then adapt a family of contrastive losses for the document ranking task that can exploit the augmented data to learn an effective ranking model.

Data Augmentation Document Ranking
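A contrastive loss adapted to ranking pulls a (possibly augmented) relevant document toward the query and pushes non-relevant ones away. A toy InfoNCE-style sketch under that reading, with made-up vectors and names, not the paper's actual loss family:

```python
import numpy as np

def info_nce(query, pos_doc, neg_docs, tau=0.1):
    """InfoNCE-style contrastive loss over cosine similarities:
    the positive (e.g. an augmented view of a relevant document)
    should score higher than every negative."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    sims = np.array([cos(query, pos_doc)] + [cos(query, d) for d in neg_docs]) / tau
    sims -= sims.max()  # numerical stability
    return float(-sims[0] + np.log(np.exp(sims).sum()))

q = np.array([1.0, 0.2, 0.0])
pos = np.array([0.9, 0.3, 0.1])   # augmented view of a relevant document
negs = [np.array([-0.5, 1.0, 0.3]), np.array([0.0, -1.0, 0.8])]
loss = info_nce(q, pos, negs)
```

Swapping the positive with a negative should raise the loss, which is exactly the signal the augmented data supplies during training.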

Learning Invariant Representations for Equivariant Neural Networks Using Orthogonal Moments

1 code implementation 22 Sep 2022 Jaspreet Singh, Chandan Singh

The final classification layer in equivariant neural networks is invariant to affine geometric transformations such as rotation, reflection, and translation. The scalar value is obtained either by eliminating the spatial dimensions of the filter responses through convolution and down-sampling throughout the network, or by averaging over the filter responses.

Rotated MNIST Translation
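The "average over the filter responses" route can be seen in miniature: global average pooling over the spatial dimensions of an equivariant feature map yields a per-channel scalar that is unchanged by spatial rearrangements such as rotation. A toy numpy illustration (not the paper's orthogonal-moment construction):

```python
import numpy as np

def invariant_readout(feature_map):
    """Global average pooling over the spatial axes: the per-channel scalar
    is unchanged by any spatial permutation of the responses, e.g. a 90°
    rotation of the feature map."""
    return feature_map.mean(axis=(-2, -1))

fmap = np.arange(16.0).reshape(1, 4, 4)  # one channel, 4x4 filter responses
pooled = invariant_readout(fmap)
pooled_rot = invariant_readout(np.rot90(fmap, axes=(-2, -1)))
```

Because the mean ignores where each response sits, rotating the equivariant map leaves the pooled scalar identical, which is what makes the final layer invariant.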

Towards Axiomatic Explanations for Neural Ranking Models

no code implementations15 Jun 2021 Michael Völske, Alexander Bondarenko, Maik Fröbe, Matthias Hagen, Benno Stein, Jaspreet Singh, Avishek Anand

We investigate whether one can explain the behavior of neural ranking models in terms of their congruence with well-understood principles of document ranking, using established theories from axiomatic IR.

Document Ranking Information Retrieval +1
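Axiomatic IR formulates such principles as testable constraints on a ranking function. As a hedged illustration only (the axiom name is real, the scorer and helper are toy stand-ins): the classic TFC1 constraint says adding an occurrence of a query term must not lower a document's score, and one can mechanically check any scorer against it.

```python
def tf_score(query_terms, doc_terms):
    # toy ranking function: raw term-frequency overlap with the query
    return sum(doc_terms.count(t) for t in query_terms)

def satisfies_tfc1(score_fn, query, doc, term):
    """TFC1 (term-frequency axiom): appending an extra occurrence of a
    query term must not decrease the document's score."""
    return score_fn(query, doc + [term]) >= score_fn(query, doc)

ok = satisfies_tfc1(tf_score, ["neural", "ranking"], ["neural", "models"], "ranking")
```

Checking how often a neural ranker's scores satisfy such constraints is one way to measure its congruence with axiomatic principles.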

BERTnesia: Investigating the capture and forgetting of knowledge in BERT

1 code implementation EMNLP (BlackboxNLP) 2020 Jonas Wallat, Jaspreet Singh, Avishek Anand

We found that ranking models forget the least and retain more knowledge in their final layer compared to masked language modeling and question-answering.

Knowledge Base Completion Language Modelling +3
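Probing how much knowledge each layer retains is typically done by attaching a cheap classifier to every layer's representations and comparing accuracies. A minimal stand-in sketch (nearest-class-centroid probe on synthetic features; nothing here is the paper's setup):

```python
import numpy as np

def probe_accuracy(layer_reprs, labels):
    """Nearest-class-centroid probe: a cheap proxy for the per-layer
    probes used to measure how much knowledge a layer retains."""
    classes = sorted(set(labels))
    centroids = {c: layer_reprs[[i for i, y in enumerate(labels) if y == c]].mean(0)
                 for c in classes}
    preds = [min(classes, key=lambda c: np.linalg.norm(x - centroids[c]))
             for x in layer_reprs]
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

rng = np.random.default_rng(0)
labels = [0] * 5 + [1] * 5
# hypothetical per-layer representations: the deeper layer separates classes
layer1 = rng.normal(0, 1, (10, 4))
layer2 = np.array([[y * 3.0, 0, 0, 0] for y in labels]) + rng.normal(0, 0.3, (10, 4))
acc1, acc2 = probe_accuracy(layer1, labels), probe_accuracy(layer2, labels)
```

Comparing such per-layer accuracies across checkpoints is the basic recipe for measuring what is captured and what is forgotten during fine-tuning.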

Dissonance Between Human and Machine Understanding

no code implementations 18 Jan 2021 Zijian Zhang, Jaspreet Singh, Ujwal Gadiraju, Avishek Anand

Are humans consistently better at selecting features that make image recognition more accurate?

Attribute Autonomous Vehicles +2

Single-sequence and profile-based prediction of RNA solvent accessibility using dilated convolutional neural network

1 code implementation 27 Oct 2020 Anil Kumar Hanumanthappa, Jaswinder Singh, Kuldip Paliwal, Jaspreet Singh, Yaoqi Zhou

Motivation: RNA solvent accessibility, similar to protein solvent accessibility, reflects the structural regions that are accessible to solvents or other functional biomolecules, and plays an important role for structural and functional characterization.
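The architectural ingredient named in the title, dilated convolution, inserts gaps between filter taps so the receptive field over a sequence grows without adding parameters. A toy 1-D numpy sketch of the operation itself (not the paper's network):

```python
import numpy as np

def dilated_conv1d(x, w, dilation):
    """1-D dilated convolution with valid padding: taps are `dilation`
    positions apart, widening the receptive field at no parameter cost."""
    k = len(w)
    span = (k - 1) * dilation + 1  # receptive field of the dilated kernel
    return np.array([sum(w[j] * x[i + j * dilation] for j in range(k))
                     for i in range(len(x) - span + 1)])

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
w = np.array([1.0, 1.0])
out = dilated_conv1d(x, w, dilation=2)  # sums positions two apart
```

Stacking such layers with increasing dilation lets a sequence model see long-range context, which is useful when solvent accessibility depends on distant residues.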

Valid Explanations for Learning to Rank Models

no code implementations 29 Apr 2020 Jaspreet Singh, Zhenye Wang, Megha Khosla, Avishek Anand

In extensive quantitative experiments, we show that our approach outperforms other model-agnostic explanation approaches across pointwise, pairwise, and listwise LTR models in validity, while not compromising on completeness.

Learning-To-Rank valid

AMUSED: A Multi-Stream Vector Representation Method for Use in Natural Dialogue

no code implementations LREC 2020 Gaurav Kumar, Rishabh Joshi, Jaspreet Singh, Promod Yenigalla

The problem of building a coherent and non-monotonous conversational agent with proper discourse and coverage is still an area of open research.

Retrieval Sentence +1

Toxicity Prediction by Multimodal Deep Learning

no code implementations 19 Jul 2019 Abdul Karim, Jaspreet Singh, Avinash Mishra, Abdollah Dehzangi, M. A. Hakim Newton, Abdul Sattar

Prediction of toxicity levels of chemical compounds is an important issue in Quantitative Structure-Activity Relationship (QSAR) modeling.

Multimodal Deep Learning

Asynchronous Training of Word Embeddings for Large Text Corpora

1 code implementation 7 Dec 2018 Avishek Anand, Megha Khosla, Jaspreet Singh, Jan-Hendrik Zab, Zijian Zhang

In this paper, we propose a scalable approach to training word embeddings by partitioning the input space, in order to scale to massive text corpora without sacrificing the quality of the embeddings.

Information Retrieval Retrieval +1
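Partitioning the input space means sharding the token stream so that each worker can process its shard asynchronously and the partial statistics are merged afterwards. A minimal sketch of that shard-and-merge pattern using co-occurrence counts as a stand-in for the embedding updates (all helpers here are illustrative, not the paper's system):

```python
from collections import defaultdict

def partition_corpus(tokens, num_parts):
    """Split a token stream into contiguous shards so each worker can
    process its shard independently (sketch of input-space partitioning)."""
    step = -(-len(tokens) // num_parts)  # ceiling division
    return [tokens[i:i + step] for i in range(0, len(tokens), step)]

def cooccurrence_counts(shard, window=1):
    """Per-shard co-occurrence statistics, a stand-in for the local
    embedding updates each worker would compute."""
    counts = defaultdict(int)
    for i, t in enumerate(shard):
        for j in range(max(0, i - window), min(len(shard), i + window + 1)):
            if i != j:
                counts[(t, shard[j])] += 1
    return counts

corpus = "the cat sat on the mat".split()
shards = partition_corpus(corpus, 2)
merged = defaultdict(int)
for shard in shards:  # each shard could run on its own worker, asynchronously
    for pair, c in cooccurrence_counts(shard).items():
        merged[pair] += c
```

Because the shards share no state, the per-shard work parallelizes trivially; the cost is that pairs straddling a shard boundary are missed, a trade-off any input-space partitioning scheme has to manage.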
