Search Results for author: Sudha Rao

Found 26 papers, 11 papers with code

GRIM: GRaph-based Interactive narrative visualization for gaMes

no code implementations 15 Nov 2023 Jorge Leandro, Sudha Rao, Michael Xu, Weijia Xu, Nebojsa Jojic, Chris Brockett, Bill Dolan

GRIM, a prototype GRaph-based Interactive narrative visualization system for gaMes, generates a rich narrative graph with branching storylines that match a high-level narrative description and constraints provided by the designer.
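
To make the kind of output concrete, here is a hypothetical, minimal Python representation of a branching narrative graph of the sort GRIM produces; the node names, beat text, and choice labels are illustrative and are not the system's actual schema.

```python
# Hypothetical branching narrative graph (illustrative names and text only;
# not GRIM's actual data format).
narrative = {
    "nodes": {
        "start":    "The heroes arrive at the abandoned lighthouse.",
        "explore":  "They search the keeper's quarters and find a torn map.",
        "leave":    "They row back to the mainland as a storm gathers.",
        "ending_a": "The map leads them to the smugglers' cache.",
    },
    "edges": [
        ("start",   "explore",  "choice: go inside"),
        ("start",   "leave",    "choice: turn back"),
        ("explore", "ending_a", "choice: follow the map"),
    ],
}

# A designer can then inspect the branching structure, e.g. the choices
# available from a given story beat.
choices_from_start = [edge for edge in narrative["edges"] if edge[0] == "start"]
print(choices_from_start)
```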

Investigating Agency of LLMs in Human-AI Collaboration Tasks

no code implementations 22 May 2023 Ashish Sharma, Sudha Rao, Chris Brockett, Akanksha Malhotra, Nebojsa Jojic, Bill Dolan

While LLMs are being developed to simulate human behavior and serve as human-like agents, little attention has been given to the Agency that these models should possess in order to proactively manage the direction of interaction and collaboration.

Grounded Keys-to-Text Generation: Towards Factual Open-Ended Generation

no code implementations 4 Dec 2022 Faeze Brahman, Baolin Peng, Michel Galley, Sudha Rao, Bill Dolan, Snigdha Chaturvedi, Jianfeng Gao

We propose a new grounded keys-to-text generation task: the task is to generate a factual description about an entity given a set of guiding keys, and grounding passages.

Data-to-Text Generation
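
An illustrative example of the task's input/output structure, assuming a simple record-like format; the field names are ours, not the paper's data schema.

```python
# Illustrative input/output record for grounded keys-to-text generation
# (field names are assumptions, not the paper's data format).
example = {
    "entity": "Marie Curie",
    "keys": ["field", "birthplace", "awards"],
    "grounding_passages": [
        "Marie Curie was a physicist and chemist born in Warsaw.",
        "She won the Nobel Prize in Physics in 1903 and in Chemistry in 1911.",
    ],
    # The model must produce a factual description that covers the keys and
    # stays grounded in the passages.
    "target": (
        "Marie Curie, born in Warsaw, was a physicist and chemist who won "
        "Nobel Prizes in both Physics and Chemistry."
    ),
}
```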

Enriching Transformers with Structured Tensor-Product Representations for Abstractive Summarization

1 code implementation NAACL 2021 Yichen Jiang, Asli Celikyilmaz, Paul Smolensky, Paul Soulos, Sudha Rao, Hamid Palangi, Roland Fernandez, Caitlin Smith, Mohit Bansal, Jianfeng Gao

On several syntactic and semantic probing tasks, we demonstrate the emergent structural information in the role vectors and improved syntactic interpretability in the TPR layer outputs.

Abstractive Text Summarization
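
For readers unfamiliar with tensor-product representations, here is a minimal NumPy sketch of the underlying binding operation; it is illustrative only, since the paper learns role and filler vectors inside a Transformer rather than sampling them.

```python
import numpy as np

# Toy tensor-product representation (TPR): bind each token's "filler"
# (content) vector to a "role" (structure) vector with an outer product and
# sum the bindings into a single tensor. Dimensions are arbitrary.
rng = np.random.default_rng(0)
n_tokens, d_filler, d_role = 3, 8, 4

fillers = rng.normal(size=(n_tokens, d_filler))   # what is being said
roles = rng.normal(size=(n_tokens, d_role))       # where it sits in the structure

tpr = np.einsum("nf,nr->fr", fillers, roles)      # sum of outer products, (d_filler, d_role)

# With (near-)independent roles, a filler can be approximately recovered by
# "unbinding" with the pseudo-inverse of the role matrix.
unbinding = np.linalg.pinv(roles)                 # (d_role, n_tokens)
recovered = tpr @ unbinding[:, 0]                 # ≈ fillers[0]
print(np.allclose(recovered, fillers[0], atol=1e-8))
```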

GPT Perdetry Test: Generating new meanings for new words

no code implementations NAACL 2021 Nikolay Malkin, Sameera Lanka, Pranav Goel, Sudha Rao, Nebojsa Jojic

Human innovation in language, such as inventing new words, is a challenge for pretrained language models.

Ask what's missing and what's useful: Improving Clarification Question Generation using Global Knowledge

1 code implementation NAACL 2021 Bodhisattwa Prasad Majumder, Sudha Rao, Michel Galley, Julian McAuley

The ability to generate clarification questions, i.e., questions that identify useful missing information in a given context, is important in reducing ambiguity.

Question Generation
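
A toy sketch of the "what's missing" intuition, assuming contexts can be summarized as attribute sets; the paper instead learns this signal from data (e.g., from similar contexts), so this is only an illustration.

```python
from collections import Counter

# Toy illustration (not the paper's model): estimate what is "missing" in a
# context by comparing its attributes against those mentioned by similar
# contexts; attributes that similar contexts usually specify but this one
# omits are natural targets for a clarification question.
context_attrs = {"brand", "material"}
similar_contexts = [
    {"brand", "material", "dimensions"},
    {"brand", "dimensions", "weight"},
    {"material", "dimensions"},
]

counts = Counter(attr for attrs in similar_contexts for attr in attrs)
missing = {
    attr: count / len(similar_contexts)
    for attr, count in counts.items()
    if attr not in context_attrs
}

# "dimensions" is absent here but frequently specified elsewhere, so a useful
# clarification question would ask about it.
print(sorted(missing.items(), key=lambda kv: -kv[1]))
```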

Substance over Style: Document-Level Targeted Content Transfer

1 code implementation EMNLP 2020 Allison Hegel, Sudha Rao, Asli Celikyilmaz, Bill Dolan

Existing language models excel at writing from scratch, but many real-world scenarios require rewriting an existing document to fit a set of constraints.

Language Modelling · Sentence

A Recipe for Creating Multimodal Aligned Datasets for Sequential Tasks

1 code implementation ACL 2020 Angela S. Lin, Sudha Rao, Asli Celikyilmaz, Elnaz Nouri, Chris Brockett, Debadeepta Dey, Bill Dolan

Learning to align these different instruction sets is challenging because: a) different recipes vary in their order of instructions and use of ingredients; and b) video instructions can be noisy and tend to contain far more information than text instructions.

Descriptive
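
As a sketch of the alignment problem itself (not the paper's learned model), here is a simple dynamic-programming alignment over a bag-of-words similarity between text steps and video-transcript segments.

```python
# Toy sketch of aligning two instruction sequences, e.g. recipe text steps
# and video-transcript segments, with a simple dynamic-programming alignment
# over a bag-of-words similarity. The paper learns its alignment model; this
# only illustrates the problem setup.
def similarity(a: str, b: str) -> float:
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(1, len(wa | wb))

def align(steps_a, steps_b, gap=-0.1):
    n, m = len(steps_a), len(steps_b)
    score = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            score[i][j] = max(
                score[i - 1][j - 1] + similarity(steps_a[i - 1], steps_b[j - 1]),
                score[i - 1][j] + gap,      # step in A left unaligned
                score[i][j - 1] + gap,      # step in B left unaligned
            )
    # Backtrack to recover aligned (text step, video step) index pairs.
    pairs, i, j = [], n, m
    while i > 0 and j > 0:
        if score[i][j] == score[i - 1][j - 1] + similarity(steps_a[i - 1], steps_b[j - 1]):
            pairs.append((i - 1, j - 1))
            i, j = i - 1, j - 1
        elif score[i][j] == score[i - 1][j] + gap:
            i -= 1
        else:
            j -= 1
    return list(reversed(pairs))

text_steps = ["Chop the onions finely", "Fry the onions until golden"]
video_steps = ["So first chop up your onions", "Now we fry them until they look golden"]
print(align(text_steps, video_steps))  # [(0, 0), (1, 1)]
```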

Answer-based Adversarial Training for Generating Clarification Questions

1 code implementation NAACL 2019 Sudha Rao, Hal Daumé III

We present an approach for generating clarification questions with the goal of eliciting new information that would make the given textual context more complete.

Generative Adversarial Network · Retrieval

Learning to Ask Good Questions: Ranking Clarification Questions using Neural Expected Value of Perfect Information

1 code implementation ACL 2018 Sudha Rao, Hal Daumé III

Inquiry is fundamental to communication, and machines cannot effectively collaborate with humans unless they can ask questions.
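
A minimal sketch of the ranking idea named in the title, expected value of perfect information: score each candidate clarification question by how useful the context is expected to become once the question is answered. The `p_answer` and `utility` functions below are hypothetical stand-ins for the paper's learned neural estimators.

```python
# Minimal sketch of ranking clarification questions by expected value of
# perfect information (EVPI). `p_answer` and `utility` are hypothetical
# stand-ins for learned models.
def evpi(question, context, candidate_answers, p_answer, utility):
    """Expected usefulness of the context once `question` is answered."""
    return sum(
        p_answer(answer, context, question) * utility(context + " " + answer)
        for answer in candidate_answers
    )

def rank_questions(questions, context, candidate_answers, p_answer, utility):
    """Return candidate questions sorted from most to least useful to ask."""
    return sorted(
        questions,
        key=lambda q: evpi(q, context, candidate_answers, p_answer, utility),
        reverse=True,
    )
```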

Towards Linguistically Generalizable NLP Systems: A Workshop and Shared Task

no code implementations WS 2017 Allyson Ettinger, Sudha Rao, Hal Daumé III, Emily M. Bender

This paper presents a summary of the first Workshop on Building Linguistically Generalizable Natural Language Processing Systems, and the associated Build It Break It, The Language Edition shared task.

Biomedical Event Extraction using Abstract Meaning Representation

no code implementations WS 2017 Sudha Rao, Daniel Marcu, Kevin Knight, Hal Daumé III

We propose a novel, Abstract Meaning Representation (AMR) based approach to identifying molecular events/interactions in biomedical text.

Event Extraction
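
A toy illustration of reading an event off an AMR graph, assuming the graph is stored as (variable, relation, target) triples; the trigger frame "phosphorylate-01" and the role-to-argument mapping are assumptions, not the paper's rule set.

```python
# Toy illustration (not the paper's method): extract a molecular event from
# AMR triples by matching a trigger concept and collecting its arguments.
amr_triples = [
    ("p", "instance", "phosphorylate-01"),
    ("p", ":ARG0", "k"),
    ("k", "instance", "kinase"),
    ("p", ":ARG1", "s"),
    ("s", "instance", "protein"),
]

TRIGGERS = {"phosphorylate-01": "Phosphorylation"}  # assumed mapping

def extract_events(triples):
    concepts = {var: concept for var, rel, concept in triples if rel == "instance"}
    events = []
    for var, concept in concepts.items():
        if concept in TRIGGERS:
            args = {rel: concepts[tgt] for src, rel, tgt in triples
                    if src == var and rel != "instance"}
            events.append({"type": TRIGGERS[concept], "arguments": args})
    return events

print(extract_events(amr_triples))
# [{'type': 'Phosphorylation', 'arguments': {':ARG0': 'kinase', ':ARG1': 'protein'}}]
```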

Parser for Abstract Meaning Representation using Learning to Search

no code implementations 26 Oct 2015 Sudha Rao, Yogarshi Vyas, Hal Daumé III, Philip Resnik

We develop a novel technique to parse English sentences into Abstract Meaning Representation (AMR) using SEARN, a Learning to Search approach, by modeling the concept and the relation learning in a unified framework.
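
A highly simplified sketch, in the spirit of learning to search, of casting AMR parsing as a sequence of classifier-driven actions over a partial graph; the state fields and classifier interfaces here are assumptions, not the paper's implementation.

```python
from dataclasses import dataclass, field

# Simplified learning-to-search style decoder: parsing proceeds word by word,
# and a learned cost-sensitive classifier (passed in as a function) chooses
# the concept for each word and how it attaches to the partial graph.
@dataclass
class ParseState:
    words: list
    next_word: int = 0
    concepts: list = field(default_factory=list)    # predicted AMR concepts, e.g. "want-01"
    relations: list = field(default_factory=list)   # (head_idx, label, dep_idx) triples

def parse(words, choose_concept, choose_relations):
    """`choose_concept` / `choose_relations` stand in for learned classifiers."""
    state = ParseState(list(words))
    while state.next_word < len(state.words):
        concept = choose_concept(state)              # a concept for this word, or None
        if concept is not None:
            new_idx = len(state.concepts)
            state.concepts.append(concept)
            # Decide how the new concept attaches to previously built concepts.
            state.relations.extend(choose_relations(state, new_idx))
        state.next_word += 1
    return state
```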
