Search Results for author: Sarthak Garg

Found 8 papers, 2 papers with code

Unconditional Scene Graph Generation

no code implementations • ICCV 2021 • Sarthak Garg, Helisa Dhamo, Azade Farshad, Sabrina Musatian, Nassir Navab, Federico Tombari

Scene graphs, composed of nodes as objects and directed edges as relationships among objects, offer an alternative representation of a scene that is more semantically grounded than images.

Anomaly Detection • Graph Generation • +3
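
As context for this entry, a minimal sketch of the scene-graph data structure described above: objects become nodes and relationships become directed, labeled edges. The object and predicate labels here are hypothetical illustrations, not examples from the paper.

```python
# A minimal scene graph: objects as nodes, relationships as directed
# labeled edges stored as (subject_idx, predicate, object_idx) triples.
from dataclasses import dataclass, field

@dataclass
class SceneGraph:
    nodes: list = field(default_factory=list)   # object labels
    edges: list = field(default_factory=list)   # (subj_idx, predicate, obj_idx)

    def add_object(self, label: str) -> int:
        self.nodes.append(label)
        return len(self.nodes) - 1

    def add_relation(self, subj: int, predicate: str, obj: int) -> None:
        self.edges.append((subj, predicate, obj))

# Hypothetical example: "a man riding a horse on a beach"
g = SceneGraph()
man, horse, beach = g.add_object("man"), g.add_object("horse"), g.add_object("beach")
g.add_relation(man, "riding", horse)
g.add_relation(horse, "on", beach)
print(g.edges)  # [(0, 'riding', 1), (1, 'on', 2)]
```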

Efficient Inference For Neural Machine Translation

no code implementations • EMNLP (sustainlp) 2020 • Yi-Te Hsu, Sarthak Garg, Yi-Hsiu Liao, Ilya Chatsviorkin

Large Transformer models have achieved state-of-the-art results in neural machine translation and have become standard in the field.

Machine Translation • Translation

Learning to Relate from Captions and Bounding Boxes

no code implementations • ACL 2019 • Sarthak Garg, Joel Ruben Antony Moniz, Anshu Aviral, Priyatham Bollimpalli

In this work, we propose a novel approach that predicts the relationships between various entities in an image in a weakly supervised manner by relying on image captions and object bounding box annotations as the sole source of supervision.

Image Captioning • Relation Classification
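
One plausible way to turn captions and bounding-box annotations into weak relation labels is sketched below under strong simplifying assumptions: caption nouns are matched against detected box categories, and the text between two matched nouns is taken as a candidate predicate. The heuristic and the example labels are illustrative, not the paper's actual pipeline.

```python
# A hedged sketch of weak supervision from captions + box labels: treat the
# words between two caption tokens that match detected object categories as
# a candidate predicate. This simple heuristic is an assumption for
# illustration only.

def weak_relation_labels(caption: str, box_labels: set) -> list:
    tokens = caption.lower().split()
    # Positions of caption tokens that match a detected object category.
    matches = [(i, t) for i, t in enumerate(tokens) if t in box_labels]
    triples = []
    for (i, subj), (j, obj) in zip(matches, matches[1:]):
        predicate = " ".join(tokens[i + 1 : j])
        if predicate:
            triples.append((subj, predicate, obj))
    return triples

print(weak_relation_labels("man riding a horse on the beach",
                           {"man", "horse", "beach"}))
# [('man', 'riding a', 'horse'), ('horse', 'on the', 'beach')]
```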

Empirical Evaluation of Active Learning Techniques for Neural MT

no code implementations • WS 2019 • Xiangkai Zeng, Sarthak Garg, Rajen Chatterjee, Udhyakumar Nallasamy, Matthias Paulik

Finally, we propose a neural extension for an AL sampling method used in the context of phrase-based MT: Round Trip Translation Likelihood (RTTL).

Active Learning • Machine Translation • +3
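
RTTL scores a candidate sentence by translating it into the target language and measuring how likely a reverse model is to reconstruct the original source; sentences with low reconstruction likelihood are selected for annotation. A minimal sketch, assuming two hypothetical NMT model objects exposing `translate` and `log_prob` methods (the actual interfaces in the paper's setup may differ):

```python
# Round Trip Translation Likelihood (RTTL) for active learning, sketched
# with assumed model interfaces: fwd_model translates source -> target,
# bwd_model scores target -> source reconstructions.

def rttl_scores(sentences, fwd_model, bwd_model):
    """Lower score = the models struggle to reconstruct the sentence after
    a round trip, so it is a better candidate for human annotation."""
    scores = {}
    for src in sentences:
        hyp = fwd_model.translate(src)          # source -> target
        # Likelihood of recovering the original source from the hypothesis.
        scores[src] = bwd_model.log_prob(hyp, src)
    return scores

def select_for_annotation(sentences, fwd_model, bwd_model, budget):
    scores = rttl_scores(sentences, fwd_model, bwd_model)
    return sorted(sentences, key=scores.get)[:budget]  # lowest likelihood first
```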

Jointly Learning to Align and Translate with Transformer Models

1 code implementation • IJCNLP 2019 • Sarthak Garg, Stephan Peitz, Udhyakumar Nallasamy, Matthias Paulik

The state of the art in machine translation (MT) is governed by neural approaches, which typically provide superior translation accuracy over statistical approaches.

Machine Translation • Translation • +1

Bilingual Lexicon Induction with Semi-supervision in Non-Isometric Embedding Spaces

1 code implementation • ACL 2019 • Barun Patra, Joel Ruben Antony Moniz, Sarthak Garg, Matthew R. Gormley, Graham Neubig

We then propose Bilingual Lexicon Induction with Semi-Supervision (BLISS): a semi-supervised approach that relaxes the isometric assumption while leveraging both limited aligned bilingual lexicons and a larger set of unaligned word embeddings, together with a novel hubness filtering technique.

Bilingual Lexicon Induction • Word Embeddings
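
Hubness refers to target words that appear as the nearest neighbor of disproportionately many source words, which degrades nearest-neighbor lexicon induction. Below is a hedged sketch of one simple filtering criterion, with an illustrative threshold; the paper's exact technique may differ.

```python
# A simple hubness filter over nearest-neighbor translation candidates:
# target words that are the nearest neighbor of many source words ("hubs")
# are discarded. The threshold is an illustrative assumption.

import numpy as np
from collections import Counter

def filter_hubs(src_emb: np.ndarray, tgt_emb: np.ndarray, max_hubness: int = 5):
    # Cosine similarity between every (normalized) source and target embedding.
    src = src_emb / np.linalg.norm(src_emb, axis=1, keepdims=True)
    tgt = tgt_emb / np.linalg.norm(tgt_emb, axis=1, keepdims=True)
    nn = (src @ tgt.T).argmax(axis=1)        # nearest target per source word
    hub_counts = Counter(nn.tolist())
    # Keep only (source, target) pairs whose target is not a hub.
    return [(i, int(j)) for i, j in enumerate(nn) if hub_counts[j] <= max_hubness]

# Random embeddings just to exercise the function.
pairs = filter_hubs(np.random.randn(1000, 300), np.random.randn(500, 300))
```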

Compression and Localization in Reinforcement Learning for ATARI Games

no code implementations • 20 Apr 2019 • Joel Ruben Antony Moniz, Barun Patra, Sarthak Garg

Deep neural networks have become commonplace in the domain of reinforcement learning, but are often expensive in terms of the number of parameters needed.

Atari Games • Model Compression • +3

BLISS in Non-Isometric Embedding Spaces

no code implementations • 27 Sep 2018 • Barun Patra, Joel Ruben Antony Moniz, Sarthak Garg, Matthew R. Gormley, Graham Neubig

We then propose Bilingual Lexicon Induction with Semi-Supervision (BLISS): a novel semi-supervised approach that relaxes the isometric assumption while leveraging both limited aligned bilingual lexicons and a larger set of unaligned word embeddings, together with a novel hubness filtering technique.

Bilingual Lexicon Induction • Word Embeddings
