no code implementations • ACL (RepL4NLP) 2021 • Dung Thai, Raghuveer Thirukovalluru, Trapit Bansal, Andrew McCallum
In this work, we aim to directly learn text representations that leverage structured knowledge about entities mentioned in the text.
1 code implementation • Proceedings of the First International Conference on Automated Machine Learning 2022 • Trapit Bansal, Salaheddin Alzubi, Tong Wang, Jay-Yoon Lee, Andrew McCallum
Meta-Adapters perform competitively with state-of-the-art few-shot learning methods that require full fine-tuning, while only fine-tuning 0.6% of the parameters.
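The abstract snippet does not spell out the adapter architecture, but bottleneck adapters of this kind are typically a small down-projection/up-projection pair inserted with a residual connection, so that only the adapter weights are fine-tuned. A minimal numpy sketch (all sizes and weight scales illustrative, not the paper's configuration):

```python
import numpy as np

def adapter(h, W_down, W_up):
    """Bottleneck adapter: down-project, nonlinearity, up-project,
    with a residual connection back to the input."""
    z = np.maximum(0.0, h @ W_down)   # ReLU in the bottleneck
    return h + z @ W_up               # residual connection

rng = np.random.default_rng(0)
d, r = 768, 16                        # hidden size, bottleneck size (illustrative)
W_down = rng.normal(scale=0.02, size=(d, r))
W_up = rng.normal(scale=0.02, size=(r, d))

h = rng.normal(size=(4, d))           # a batch of 4 hidden states
out = adapter(h, W_down, W_up)
print(out.shape)                      # (4, 768)

# Fraction of weights that are trainable if only the adapter is tuned,
# relative to one d x d feed-forward matrix (illustrative denominator).
frac = (W_down.size + W_up.size) / (d * d)
print(f"{frac:.1%}")
```

Because the bottleneck width `r` is much smaller than the hidden size `d`, the trainable fraction stays tiny, which is what makes fine-tuning well under 1% of parameters possible.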
no code implementations • 28 Dec 2021 • Akansha Singh Bansal, Trapit Bansal, David Irwin
Solar energy is now the cheapest form of electricity in history.
no code implementations • EMNLP 2021 • Trapit Bansal, Karthick Gunasekaran, Tong Wang, Tsendsuren Munkhdalai, Andrew McCallum
Meta-learning considers the problem of learning an efficient learning process that can leverage its past experience to accurately solve new tasks.
no code implementations • 27 Sep 2020 • Vaishnavi Kommaraju, Karthick Gunasekaran, Kun Li, Trapit Bansal, Andrew McCallum, Ivana Williams, Ana-Maria Istrate
We explore the suitability of unsupervised representation learning methods on biomedical text -- BioBERT, SciBERT, and BioSentVec -- for biomedical question answering.
1 code implementation • EMNLP 2020 • Trapit Bansal, Rishikesh Jha, Tsendsuren Munkhdalai, Andrew McCallum
We meta-train a transformer model on this distribution of tasks using a recent meta-learning framework.
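The snippet does not name the meta-learning framework, but the standard shape of such training is a MAML-style inner/outer loop: adapt to each sampled task with a few gradient steps, then update the meta-parameters on held-out task data. A toy first-order sketch on linear regression (the task family, learning rates, and split sizes are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
base = np.array([1.0, -2.0, 0.5])     # structure shared across tasks

def sample_task():
    """A toy task family: linear regressions whose true weights cluster around `base`."""
    w_true = base + 0.1 * rng.normal(size=3)
    X = rng.normal(size=(16, 3))
    return X, X @ w_true

def grad(w, X, y):
    """Gradient of mean squared error for a linear model y = Xw."""
    return X.T @ (X @ w - y) / len(y)

w_meta = np.zeros(3)
inner_lr, outer_lr = 0.1, 0.05

for _ in range(200):
    X, y = sample_task()
    # Inner loop: one gradient step adapts the meta-parameters to the task's support set.
    w_task = w_meta - inner_lr * grad(w_meta, X[:8], y[:8])
    # Outer loop: first-order meta-update on the task's held-out query set.
    w_meta = w_meta - outer_lr * grad(w_task, X[8:], y[8:])

print(np.round(w_meta, 1))  # converges toward the shared structure `base`
```

The point of the sketch: the meta-parameters end up near the shared structure of the task distribution, so a single inner-loop step adapts well to any new task drawn from it.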
no code implementations • 2 Dec 2019 • Trapit Bansal, Pat Verga, Neha Choudhary, Andrew McCallum
Understanding the meaning of text often involves reasoning about entities and their relationships.
2 code implementations • COLING 2020 • Trapit Bansal, Rishikesh Jha, Andrew McCallum
LEOPARD is trained with the state-of-the-art transformer architecture and shows better generalization to tasks not seen at all during training, with as few as 4 examples per label.
no code implementations • ACL 2019 • Trapit Bansal, Da-Cheng Juan, Sujith Ravi, Andrew McCallum
State-of-the-art models for knowledge graph completion aim at learning a fixed embedding representation of entities in a multi-relational graph which can generalize to infer unseen entity relationships at test time.
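As a generic illustration of the fixed-embedding paradigm the snippet describes (not this paper's model), a TransE-style scorer treats a relation as a translation in embedding space and ranks candidate entities by how well head + relation lands on the tail. All embeddings below are hand-picked toy values:

```python
import numpy as np

# Toy embeddings for a tiny knowledge graph (values purely illustrative).
entities = {"Paris":  np.array([1.0, 0.0]),
            "France": np.array([1.0, 1.0]),
            "Berlin": np.array([0.0, 0.0])}
relations = {"capital_of": np.array([0.0, 1.0])}

def score(h, r, t):
    """TransE-style score: -||h + r - t||; higher means more plausible."""
    return -np.linalg.norm(entities[h] + relations[r] - entities[t])

# Rank candidate tails for the query ("Paris", "capital_of", ?).
candidates = ["France", "Berlin"]
ranked = sorted(candidates, key=lambda t: -score("Paris", "capital_of", t))
print(ranked[0])  # "France" scores highest under these toy embeddings
```

Because the entity vectors are fixed after training, the same lookup-and-score procedure generalizes to unseen (head, relation, tail) combinations at test time, which is the setting the abstract refers to.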
no code implementations • EMNLP 2018 • Nathan Greenberg, Trapit Bansal, Patrick Verga, Andrew McCallum
This paper presents a method for training a single CRF extractor from multiple datasets with disjoint or partially overlapping sets of entity types.
1 code implementation • ICLR 2018 • Maruan Al-Shedivat, Trapit Bansal, Yuri Burda, Ilya Sutskever, Igor Mordatch, Pieter Abbeel
The ability to continuously learn and adapt from limited experience in nonstationary environments is an important milestone on the path towards general intelligence.
2 code implementations • ICLR 2018 • Trapit Bansal, Jakub Pachocki, Szymon Sidor, Ilya Sutskever, Igor Mordatch
In this paper, we point out that a competitive multi-agent environment trained with self-play can produce behaviors that are far more complex than the environment itself.
no code implementations • 2 Aug 2017 • Dung Thai, Shikhar Murty, Trapit Bansal, Luke Vilnis, David Belanger, Andrew McCallum
In textual information extraction and other sequence labeling tasks, it is now common to use recurrent neural networks (such as LSTMs) to form rich embedded representations of long-term input co-occurrence patterns.
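To make concrete how an LSTM accumulates a representation of input history, here is a single-cell forward pass in numpy. The weights are random and the dimensions arbitrary; this is a generic sketch of the standard LSTM equations, not the paper's model:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM step: input, forget, output, and candidate gates
    computed from the current input x and previous hidden state h."""
    z = W @ x + U @ h + b
    i, f, o, g = np.split(z, 4)
    i, f, o, g = sigmoid(i), sigmoid(f), sigmoid(o), np.tanh(g)
    c = f * c + i * g          # update the memory cell
    h = o * np.tanh(c)         # expose a gated view of the cell
    return h, c

rng = np.random.default_rng(0)
d_in, d_h = 5, 4                              # input and hidden sizes (illustrative)
W = rng.normal(scale=0.1, size=(4 * d_h, d_in))
U = rng.normal(scale=0.1, size=(4 * d_h, d_h))
b = np.zeros(4 * d_h)

h, c = np.zeros(d_h), np.zeros(d_h)
for x in rng.normal(size=(3, d_in)):          # a length-3 input sequence
    h, c = lstm_step(x, h, c, W, U, b)
print(h.shape)                                # (4,)
```

In a sequence labeler, the hidden state `h` at each position would feed a per-token classifier (or a CRF layer), which is the "rich embedded representation" the abstract refers to.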
no code implementations • 22 Jun 2017 • Trapit Bansal, Arvind Neelakantan, Andrew McCallum
We introduce RelNet: a new model for relational reasoning.
no code implementations • 7 Sep 2016 • Trapit Bansal, David Belanger, Andrew McCallum
In a variety of application domains the content to be recommended to users is associated with text.
no code implementations • NeurIPS 2014 • Trapit Bansal, Chiranjib Bhattacharyya, Ravindran Kannan
Our aim is to develop a model which makes intuitive and empirically supported assumptions and to design an algorithm with natural, simple components such as SVD, which provably solves the inference problem for the model with bounded $l_1$ error.
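The abstract names SVD as one of the algorithm's simple components. As a generic illustration of why SVD is a natural tool here (this is not the paper's algorithm), a low-rank "topic" structure in a noisy document-term matrix is recovered almost exactly by a truncated SVD:

```python
import numpy as np

rng = np.random.default_rng(0)

# Rank-2 "topic" structure plus noise: documents are mixtures of 2 topics.
topics = rng.random(size=(2, 30))             # 2 topics over 30 terms
weights = rng.random(size=(100, 2))           # 100 documents' topic mixtures
A = weights @ topics + 0.01 * rng.normal(size=(100, 30))

# The top singular vectors approximately span the topic subspace.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
A2 = (U[:, :2] * s[:2]) @ Vt[:2]              # best rank-2 approximation

rel_err = np.linalg.norm(A - A2) / np.linalg.norm(A)
print(f"{rel_err:.3f}")                       # small: rank 2 captures almost everything
```

The paper's contribution is the provable $l_1$-error guarantee for inference in its model; the sketch only shows the SVD building block on data with planted low-rank structure.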