no code implementations • 3 Jun 2023 • Minh Van Nguyen, Kishan Kc, Toan Nguyen, Thien Huu Nguyen, Ankit Chadha, Thuy Vu
In this paper, we propose to improve candidate scoring by explicitly incorporating question-context and answer-context dependencies into the final representation of a candidate.
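As an illustration only, here is a minimal sketch of what dependency-aware candidate scoring could look like, assuming pooled encoder vectors for the question, answer, and context; the bilinear interaction terms and dimensions are assumptions for the sketch, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class DependencyAwareScorer(nn.Module):
    """Scores an answer candidate by combining question, answer, and context
    encodings with explicit question-context and answer-context interaction
    vectors (illustrative sketch; not the paper's exact architecture)."""
    def __init__(self, hidden: int = 768):
        super().__init__()
        self.q_ctx = nn.Bilinear(hidden, hidden, hidden)  # question-context dependency
        self.a_ctx = nn.Bilinear(hidden, hidden, hidden)  # answer-context dependency
        self.score = nn.Linear(4 * hidden, 1)

    def forward(self, q, a, ctx):
        # q, a, ctx: pooled encoder outputs, each of shape (batch, hidden)
        qc = torch.tanh(self.q_ctx(q, ctx))
        ac = torch.tanh(self.a_ctx(a, ctx))
        rep = torch.cat([q, a, qc, ac], dim=-1)  # final candidate representation
        return self.score(rep).squeeze(-1)       # higher score = better candidate

scorer = DependencyAwareScorer()
q, a, ctx = (torch.randn(2, 768) for _ in range(3))
print(scorer(q, a, ctx).shape)  # torch.Size([2])
```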
1 code implementation • 30 May 2023 • Vaibhav Kumar, Hana Koorehdavoudi, Masud Moshtaghi, Amita Misra, Ankit Chadha, Emilio Ferrara
We propose CHRT (Control Hidden Representation Transformation) - a controlled language generation framework that steers large language models to generate text pertaining to certain attributes (such as toxicity).
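A minimal sketch of the general idea of transforming a frozen LM's hidden states to steer an attribute such as toxicity; the residual gating below is an illustrative choice, not CHRT's exact design.

```python
import torch
import torch.nn as nn

class HiddenTransformBlock(nn.Module):
    """Learned transformation applied to a frozen LM's hidden states to steer
    generation toward a target attribute. The gated residual blend is an
    assumption made for this sketch, not the published CHRT architecture."""
    def __init__(self, hidden: int = 768):
        super().__init__()
        self.transform = nn.Sequential(
            nn.Linear(hidden, hidden), nn.GELU(), nn.Linear(hidden, hidden)
        )
        self.gate = nn.Parameter(torch.tensor(0.5))  # learned blend strength

    def forward(self, h):
        # h: hidden states from a frozen LM layer, shape (batch, seq, hidden)
        return h + self.gate * self.transform(h)

h = torch.randn(2, 10, 768)
print(HiddenTransformBlock()(h).shape)  # torch.Size([2, 10, 768])
```

In such a setup, only the transformation block would be trained (e.g. against an attribute classifier) while the base LM stays frozen.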
no code implementations • 25 May 2023 • Shivanshu Gupta, Yoshitomo Matsubara, Ankit Chadha, Alessandro Moschitti
While impressive performance has been achieved on the task of Answer Sentence Selection (AS2) for English, the same does not hold for languages that lack large labeled datasets.
no code implementations • 9 May 2022 • Aiswarya Sankar, Ankit Chadha
Abstractive multi-document summarization has evolved as a task, progressing from basic sequence-to-sequence approaches to transformer- and graph-based techniques.
no code implementations • NAACL 2022 • Peyman Passban, Tanya Roosta, Rahul Gupta, Ankit Chadha, Clement Chung
Training mixed-domain translation models is a complex task that demands tailored architectures and costly data preparation techniques.
no code implementations • 12 Dec 2021 • Tanya Roosta, Peyman Passban, Ankit Chadha
These new components are placed between the original layers.
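A minimal sketch of placing small trainable components between frozen original layers, in the spirit of adapter modules; the bottleneck design and sizes are assumptions for illustration.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Small bottleneck module inserted between original (frozen) layers;
    the residual connection preserves the original signal. Sizes are
    illustrative assumptions."""
    def __init__(self, hidden: int = 768, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden, bottleneck)
        self.up = nn.Linear(bottleneck, hidden)

    def forward(self, h):
        return h + self.up(torch.relu(self.down(h)))

def interleave(original_layers):
    """Freeze each original layer and place a fresh Adapter after it."""
    out = []
    for layer in original_layers:
        for p in layer.parameters():
            p.requires_grad = False  # only the new components are trained
        out += [layer, Adapter()]
    return nn.Sequential(*out)

stack = interleave([nn.Linear(768, 768) for _ in range(2)])
print(stack(torch.randn(2, 768)).shape)  # torch.Size([2, 768])
```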
no code implementations • 30 Dec 2019 • Ankit Chadha, Mohamed Masoud
We have tested the limits of learning fine-grained attention in Transformers to improve summarization quality.
no code implementations • 14 Dec 2019 • Ankit Chadha, Rewa Sood
Our additions to the BERT architecture augment this attention with a more focused context-to-query (C2Q) and query-to-context (Q2C) attention via a set of modified Transformer encoder units.
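A sketch of BiDAF-style C2Q/Q2C attention of the kind named here; the plain dot-product similarity is a simplification of the trilinear form commonly used, and the exact integration with the BERT encoder is not shown.

```python
import torch

def bidaf_attention(c, q):
    """Context-to-query (C2Q) and query-to-context (Q2C) attention.
    c: (batch, Tc, d) context states; q: (batch, Tq, d) query states.
    Returns the standard (batch, Tc, 4d) fused representation."""
    sim = torch.bmm(c, q.transpose(1, 2))              # (batch, Tc, Tq) similarity
    c2q = torch.bmm(torch.softmax(sim, dim=-1), q)     # each context token attends over the query
    b = torch.softmax(sim.max(dim=-1).values, dim=-1)  # (batch, Tc) weights over context
    q2c = torch.bmm(b.unsqueeze(1), c)                 # (batch, 1, d) most query-relevant context
    q2c = q2c.expand(-1, c.size(1), -1)                # tile across context positions
    return torch.cat([c, c2q, c * c2q, c * q2c], dim=-1)

c, q = torch.randn(2, 7, 64), torch.randn(2, 5, 64)
print(bidaf_attention(c, q).shape)  # torch.Size([2, 7, 256])
```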
no code implementations • 7 Nov 2013 • Ankit Chadha, Neha Satam, Vibha Wali
For testing purposes, an image undergoes rotation-translation-scaling correction and is then fed to the network.
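A rough moment-based rotation-translation-scaling normalization, as an illustrative stand-in for the preprocessing described; the paper's actual correction pipeline is not specified here.

```python
import numpy as np

def normalize_glyph(img, out_size=32):
    """Approximate rotation-translation-scale correction for a binary
    character image using image moments (illustrative assumption, not
    the paper's exact method)."""
    ys, xs = np.nonzero(img)
    cy, cx = ys.mean(), xs.mean()                      # translation: recenter on centroid
    coords = np.stack([ys - cy, xs - cx])
    cov = coords @ coords.T / coords.shape[1]
    angle = 0.5 * np.arctan2(2 * cov[0, 1], cov[0, 0] - cov[1, 1])  # principal axis
    rot = np.array([[np.cos(angle), -np.sin(angle)],
                    [np.sin(angle),  np.cos(angle)]])
    aligned = rot @ coords                             # rotation correction
    scale = (out_size / 2 - 1) / max(np.abs(aligned).max(), 1e-6)   # scaling correction
    aligned = np.round(aligned * scale + out_size / 2).astype(int)
    canvas = np.zeros((out_size, out_size), dtype=img.dtype)
    canvas[np.clip(aligned[0], 0, out_size - 1),
           np.clip(aligned[1], 0, out_size - 1)] = 1
    return canvas
```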