Search Results for author: Mandar Joshi

Found 25 papers, 18 papers with code

From Pixels to UI Actions: Learning to Follow Instructions via Graphical User Interfaces

1 code implementation NeurIPS 2023 Peter Shaw, Mandar Joshi, James Cohan, Jonathan Berant, Panupong Pasupat, Hexiang Hu, Urvashi Khandelwal, Kenton Lee, Kristina Toutanova

Much of the previous work towards digital agents for graphical user interfaces (GUIs) has relied on text-based representations (derived from HTML or other structured data sources), which are not always readily available.

Instruction Following

DePlot: One-shot visual language reasoning by plot-to-table translation

1 code implementation 20 Dec 2022 Fangyu Liu, Julian Martin Eisenschlos, Francesco Piccinno, Syrine Krichene, Chenxi Pang, Kenton Lee, Mandar Joshi, Wenhu Chen, Nigel Collier, Yasemin Altun

Compared with a SOTA model finetuned on more than 28k data points, DePlot+LLM with just one-shot prompting achieves a 24.0% improvement over finetuned SOTA on human-written queries from the task of chart QA.

Chart Question Answering Factual Inconsistency Detection in Chart Captioning +3

CM3: A Causal Masked Multimodal Model of the Internet

no code implementations 19 Jan 2022 Armen Aghajanyan, Bernie Huang, Candace Ross, Vladimir Karpukhin, Hu Xu, Naman Goyal, Dmytro Okhonko, Mandar Joshi, Gargi Ghosh, Mike Lewis, Luke Zettlemoyer

We introduce CM3, a family of causally masked generative models trained over a large corpus of structured multi-modal documents that can contain both text and image tokens.

Entity Disambiguation Entity Linking

DESCGEN: A Distantly Supervised Dataset for Generating Entity Descriptions

1 code implementation ACL 2021 Weijia Shi, Mandar Joshi, Luke Zettlemoyer

Short textual descriptions of entities provide summaries of their key attributes and have been shown to be useful sources of background knowledge for tasks such as entity linking and question answering.

Document Summarization Entity Linking +2

DESCGEN: A Distantly Supervised Dataset for Generating Abstractive Entity Descriptions

1 code implementation 9 Jun 2021 Weijia Shi, Mandar Joshi, Luke Zettlemoyer

Short textual descriptions of entities provide summaries of their key attributes and have been shown to be useful sources of background knowledge for tasks such as entity linking and question answering.

Entity Linking Question Answering

Realistic Evaluation Principles for Cross-document Coreference Resolution

1 code implementation Joint Conference on Lexical and Computational Semantics 2021 Arie Cattan, Alon Eirew, Gabriel Stanovsky, Mandar Joshi, Ido Dagan

We point out that common evaluation practices for cross-document coreference resolution have been unrealistically permissive in their assumed settings, yielding inflated results.

coreference-resolution Cross Document Coreference Resolution

Cross-document Coreference Resolution over Predicted Mentions

1 code implementation Findings (ACL) 2021 Arie Cattan, Alon Eirew, Gabriel Stanovsky, Mandar Joshi, Ido Dagan

Here, we introduce the first end-to-end model for CD coreference resolution from raw text, which extends the prominent model for within-document coreference to the CD setting.

coreference-resolution Cross Document Coreference Resolution

FEWS: Large-Scale, Low-Shot Word Sense Disambiguation with the Dictionary

no code implementations EACL 2021 Terra Blevins, Mandar Joshi, Luke Zettlemoyer

Current models for Word Sense Disambiguation (WSD) struggle to disambiguate rare senses, despite reaching human performance on global WSD metrics.

Transfer Learning Word Sense Disambiguation

Streamlining Cross-Document Coreference Resolution: Evaluation and Modeling

2 code implementations 23 Sep 2020 Arie Cattan, Alon Eirew, Gabriel Stanovsky, Mandar Joshi, Ido Dagan

Recent evaluation protocols for Cross-document (CD) coreference resolution have often been inconsistent or lenient, leading to incomparable results across works and overestimation of performance.

coreference-resolution Cross Document Coreference Resolution +2

An Information Bottleneck Approach for Controlling Conciseness in Rationale Extraction

2 code implementations EMNLP 2020 Bhargavi Paranjape, Mandar Joshi, John Thickstun, Hannaneh Hajishirzi, Luke Zettlemoyer

Decisions of complex language understanding models can be rationalized by limiting their inputs to a relevant subsequence of the original text.

Contextualized Representations Using Textual Encyclopedic Knowledge

no code implementations24 Apr 2020 Mandar Joshi, Kenton Lee, Yi Luan, Kristina Toutanova

We present a method to represent input texts by contextualizing them jointly with dynamically retrieved textual encyclopedic background knowledge from multiple documents.

Language Modelling Reading Comprehension +1

BERT for Coreference Resolution: Baselines and Analysis

2 code implementations IJCNLP 2019 Mandar Joshi, Omer Levy, Daniel S. Weld, Luke Zettlemoyer

We apply BERT to coreference resolution, achieving strong improvements on the OntoNotes (+3.9 F1) and GAP (+11.5 F1) benchmarks.

Ranked #10 on Coreference Resolution on CoNLL 2012 (using extra training data)

RoBERTa: A Robustly Optimized BERT Pretraining Approach

58 code implementations 26 Jul 2019 Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov

Language model pretraining has led to significant performance gains but careful comparison between different approaches is challenging.

Ranked #1 on Only Connect Walls Dataset Task 1 (Grouping) on OCW (Wasserstein Distance (WD) metric, using extra training data)

Document Image Classification Language Modelling +13

pair2vec: Compositional Word-Pair Embeddings for Cross-Sentence Inference

3 code implementations NAACL 2019 Mandar Joshi, Eunsol Choi, Omer Levy, Daniel S. Weld, Luke Zettlemoyer

Reasoning about implied relationships (e.g., paraphrastic, common sense, encyclopedic) between pairs of words is crucial for many cross-sentence inference problems.

Common Sense Reasoning Sentence +1
