Search Results for author: Yogarshi Vyas

Found 23 papers, 5 papers with code

A Multi-Modal Multilingual Benchmark for Document Image Classification

no code implementations • 25 Oct 2023 • Yoshinari Fujinuma, Siddharth Varia, Nishant Sankaran, Srikar Appalaraju, Bonan Min, Yogarshi Vyas

Document image classification differs from plain-text document classification: it requires classifying a document by understanding both the content and the structure of documents such as forms and emails.

Tasks: Classification, Document Classification (+4 more)

Taxonomy Expansion for Named Entity Recognition

no code implementations • 22 May 2023 • Karthikeyan K, Yogarshi Vyas, Jie Ma, Giovanni Paolini, Neha Anna John, Shuai Wang, Yassine Benajiba, Vittorio Castelli, Dan Roth, Miguel Ballesteros

We experiment with 6 diverse datasets and show that PLM consistently performs better than most other approaches (0.5-2.5 F1), including in novel settings for taxonomy expansion not considered in prior work.

Tasks: Named Entity Recognition (+2 more)

Contrastive Training Improves Zero-Shot Classification of Semi-structured Documents

no code implementations • 11 Oct 2022 • Muhammad Khalifa, Yogarshi Vyas, Shuai Wang, Graham Horwood, Sunil Mallya, Miguel Ballesteros

The standard classification setting, in which categories are fixed during both training and testing, falls short in dynamic environments where new document categories can emerge.

Tasks: Classification, Document Classification (+1 more)

Linking Entities to Unseen Knowledge Bases with Arbitrary Schemas

no code implementations • NAACL 2021 • Yogarshi Vyas, Miguel Ballesteros

In entity linking, mentions of named entities in raw text are disambiguated against a knowledge base (KB).

Tasks: Attribute, Entity Linking

Weakly Supervised Cross-lingual Semantic Relation Classification via Knowledge Distillation

no code implementations • IJCNLP 2019 • Yogarshi Vyas, Marine Carpuat

Our classifier relies on a novel attention-based distillation approach to account for translation ambiguity when transferring knowledge from English to cross-lingual settings.

Tasks: Classification, Cross-Lingual Transfer (+5 more)

Robust Cross-lingual Hypernymy Detection using Dependency Context

1 code implementation • NAACL 2018 • Shyam Upadhyay, Yogarshi Vyas, Marine Carpuat, Dan Roth

We propose BISPARSE-DEP, a family of unsupervised approaches for cross-lingual hypernymy detection, which learns sparse, bilingual word embeddings based on dependency contexts.

Tasks: Natural Language Inference, Word Embeddings

Identifying Semantic Divergences in Parallel Text without Annotations

1 code implementation • NAACL 2018 • Yogarshi Vyas, Xing Niu, Marine Carpuat

Recognizing that even correct translations are not always semantically equivalent, we automatically detect meaning divergences in parallel sentence pairs with a deep neural model of bilingual semantic similarity, which can be trained on any parallel corpus without manual annotation.

Tasks: Machine Translation, Semantic Similarity (+3 more)

Detecting Cross-Lingual Semantic Divergence for Neural Machine Translation

no code implementations • WS 2017 • Marine Carpuat, Yogarshi Vyas, Xing Niu

Parallel corpora are often not as parallel as one might assume: non-literal translations and noisy translations abound, even in curated corpora routinely used for training and evaluation.

Tasks: Domain Adaptation, Machine Translation (+3 more)

The Amazing Mysteries of the Gutter: Drawing Inferences Between Panels in Comic Book Narratives

2 code implementations • CVPR 2017 • Mohit Iyyer, Varun Manjunatha, Anupam Guha, Yogarshi Vyas, Jordan Boyd-Graber, Hal Daumé III, Larry Davis

While computers can now describe what is explicitly depicted in natural images, in this paper we examine whether they can understand the closure-driven narratives conveyed by stylized artwork and dialogue in comic book panels.

Parser for Abstract Meaning Representation using Learning to Search

no code implementations • 26 Oct 2015 • Sudha Rao, Yogarshi Vyas, Hal Daumé III, Philip Resnik

We develop a novel technique to parse English sentences into Abstract Meaning Representation (AMR) using SEARN, a Learning to Search approach, by modeling concept and relation learning in a unified framework.
