Search Results for author: Priyanka Nigam

Found 4 papers, 1 paper with code

Asynchronous Convergence in Multi-Task Learning via Knowledge Distillation from Converged Tasks

no code implementations • NAACL (ACL) 2022 • Weiyi Lu, Sunny Rajagopalan, Priyanka Nigam, Jaspreet Singh, Xiaodi Sun, Yi Xu, Belinda Zeng, Trishul Chilimbi

However, one issue that often arises in MTL is that convergence speed varies between tasks due to differences in task difficulty, so it can be challenging to achieve the best performance on all tasks with a single model checkpoint.

Knowledge Distillation • Multi-Task Learning
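The abstract above describes distilling knowledge from already-converged tasks into tasks that are still training. As a minimal sketch of the general distillation idea (not the paper's exact method), a standard KD objective blends hard-label cross-entropy with a temperature-scaled KL divergence toward a frozen teacher's logits; the function names and the `T`/`alpha` hyperparameters here are illustrative assumptions:

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax with a max-shift for numerical stability."""
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Blend hard-label cross-entropy with KL(teacher || student) at temperature T.

    A generic KD loss sketch; the converged task's checkpoint would play the
    role of the frozen teacher.
    """
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    # KL divergence, rescaled by T^2 as is conventional in distillation
    kd = (p_t * (np.log(p_t) - np.log(p_s))).sum(axis=-1).mean() * T * T
    # ordinary cross-entropy against the ground-truth labels
    probs = softmax(student_logits)
    ce = -np.log(probs[np.arange(len(labels)), labels]).mean()
    return alpha * kd + (1 - alpha) * ce
```

When teacher and student agree, the KL term vanishes and only the supervised term remains, which is why the distillation signal fades as the student catches up.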

Embracing Structure in Data for Billion-Scale Semantic Product Search

no code implementations • 12 Oct 2021 • Vihan Lakshman, Choon Hui Teo, Xiaowen Chu, Priyanka Nigam, Abhinandan Patni, Pooja Maknikar, SVN Vishwanathan

When training a dyadic model, one seeks to embed two different types of entities (e.g., queries and documents, or users and movies) in a common vector space such that pairs with high relevance are positioned nearby.
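The dyadic setup described above is commonly realized as a two-tower model: each entity type gets its own encoder, and relevance is scored by similarity in the shared space. The linear "towers", dimensions, and weight initialization below are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def embed(x, W):
    """Hypothetical linear tower: project raw features into the shared
    space, then L2-normalize so the dot product is cosine similarity."""
    v = x @ W
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

# Separate (hypothetical) projection weights for each entity type
W_query = rng.normal(size=(128, 32))
W_doc = rng.normal(size=(128, 32))

queries = rng.normal(size=(4, 128))   # e.g., 4 query feature vectors
docs = rng.normal(size=(10, 128))     # e.g., 10 document feature vectors

q = embed(queries, W_query)
d = embed(docs, W_doc)
scores = q @ d.T                      # cosine similarity, shape (4, 10)
nearest = scores.argmax(axis=1)       # most relevant document per query
```

In practice the towers are deep networks trained so that relevant pairs score high, and the normalized document embeddings can be indexed for approximate nearest-neighbor retrieval at billion scale.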

Semantic Product Search

1 code implementation • 1 Jul 2019 • Priyanka Nigam, Yiwei Song, Vijai Mohan, Vihan Lakshman, Weitian Ding, Ankit Shingavi, Choon Hui Teo, Hao Gu, Bing Yin

To address these issues, we train a deep learning model for semantic matching using customer behavior data.
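Training a semantic-matching model on customer behavior data typically means treating, say, purchased products as positives for a query and sampling negatives. As a minimal sketch of one common choice of objective (a pairwise hinge loss over embedding similarities; the margin value and function name are assumptions, not the paper's exact loss):

```python
import numpy as np

def hinge_triplet_loss(q, pos, neg, margin=0.2):
    """Push the positive (e.g., purchased) product closer to the query
    than a sampled negative, by at least `margin` in similarity."""
    s_pos = (q * pos).sum(axis=-1)  # similarity: query vs. positive product
    s_neg = (q * neg).sum(axis=-1)  # similarity: query vs. negative product
    return np.maximum(0.0, margin - s_pos + s_neg).mean()
```

The loss is zero once every positive outscores its negative by the margin, so well-separated pairs stop contributing gradient.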
