1 code implementation • NAACL (Wordplay) 2022 • Ryan Volum, Sudha Rao, Michael Xu, Gabriel DesGarennes, Chris Brockett, Benjamin Van Durme, Olivia Deng, Akanksha Malhotra, Bill Dolan
In this work, we demonstrate that the use of a few example conversational prompts can power a conversational agent to generate both natural language and novel code.
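A minimal sketch of the few-shot prompting idea described above. The `generate()` function is a placeholder for whatever language-model completion call is actually used, and the example exchanges and game-style commands are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch: few-shot prompt for an agent that replies in natural language
# and also emits code. generate() is a placeholder for an LLM completion call.

FEW_SHOT_EXAMPLES = [
    ("Player: Can you give me three apples?",
     "NPC: Sure, here you go!\nCode: give(player, item='apple', count=3)"),
    ("Player: Follow me to the village.",
     "NPC: Right behind you.\nCode: follow(player)"),
]

def build_prompt(user_turn: str) -> str:
    """Concatenate a few example exchanges before the new turn so the model
    imitates the pattern: a conversational reply followed by executable code."""
    parts = []
    for user, agent in FEW_SHOT_EXAMPLES:
        parts.append(f"{user}\n{agent}\n")
    parts.append(f"Player: {user_turn}\nNPC:")
    return "\n".join(parts)

def generate(prompt: str) -> str:
    raise NotImplementedError("placeholder for a call to a language model")

# Usage: response = generate(build_prompt("Craft an iron sword for me."))
```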
no code implementations • 15 Nov 2023 • Jorge Leandro, Sudha Rao, Michael Xu, Weijia Xu, Nebojsa Jojic, Chris Brockett, Bill Dolan
GRIM, a prototype GRaph-based Interactive narrative visualization system for gaMes, generates a rich narrative graph with branching storylines that match a high-level narrative description and constraints provided by the designer.
no code implementations • 22 May 2023 • Ashish Sharma, Sudha Rao, Chris Brockett, Akanksha Malhotra, Nebojsa Jojic, Bill Dolan
While LLMs are being developed to simulate human behavior and serve as human-like agents, little attention has been given to the Agency that these models should possess in order to proactively manage the direction of interaction and collaboration.
no code implementations • 4 Dec 2022 • Faeze Brahman, Baolin Peng, Michel Galley, Sudha Rao, Bill Dolan, Snigdha Chaturvedi, Jianfeng Gao
We propose a new grounded keys-to-text generation task: the task is to generate a factual description about an entity given a set of guiding keys and grounding passages.
no code implementations • MTSummit 2021 • Paul Soulos, Sudha Rao, Caitlin Smith, Eric Rosen, Asli Celikyilmaz, R. Thomas McCoy, Yichen Jiang, Coleman Haley, Roland Fernandez, Hamid Palangi, Jianfeng Gao, Paul Smolensky
Machine translation has seen rapid progress with the advent of Transformer-based models.
1 code implementation • NAACL 2021 • Yichen Jiang, Asli Celikyilmaz, Paul Smolensky, Paul Soulos, Sudha Rao, Hamid Palangi, Roland Fernandez, Caitlin Smith, Mohit Bansal, Jianfeng Gao
On several syntactic and semantic probing tasks, we demonstrate the emergent structural information in the role vectors and improved syntactic interpretability in the TPR layer outputs.
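For readers unfamiliar with Tensor Product Representations, the sketch below shows the basic binding operation a TPR layer builds on: each filler (content) vector is bound to a role (structural position) vector via an outer product, and the bindings are summed into one representation. The dimensions and vectors here are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def tpr_bind(fillers: np.ndarray, roles: np.ndarray) -> np.ndarray:
    """Bind each filler vector to its role vector with an outer product and
    sum the bindings: T = sum_i f_i (outer) r_i.

    fillers: (n, d_f) array of filler (content) vectors
    roles:   (n, d_r) array of role (position) vectors
    returns: (d_f, d_r) tensor product representation
    """
    return np.einsum('if,ir->fr', fillers, roles)

def tpr_unbind(tpr: np.ndarray, role: np.ndarray) -> np.ndarray:
    """Recover the filler bound to `role` (exact when roles are orthonormal)."""
    return tpr @ role

# Toy example with orthonormal roles, so unbinding is exact.
rng = np.random.default_rng(0)
fillers = rng.normal(size=(3, 4))   # three 4-dimensional filler vectors
roles = np.eye(3)                   # three orthonormal 3-dimensional role vectors
T = tpr_bind(fillers, roles)
assert np.allclose(tpr_unbind(T, roles[1]), fillers[1])
```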
no code implementations • NAACL 2021 • Nikolay Malkin, Sameera Lanka, Pranav Goel, Sudha Rao, Nebojsa Jojic
Human innovation in language, such as inventing new words, is a challenge for pretrained language models.
1 code implementation • NAACL 2021 • Bodhisattwa Prasad Majumder, Sudha Rao, Michel Galley, Julian McAuley
The ability to generate clarification questions, i.e., questions that identify useful missing information in a given context, is important in reducing ambiguity.
1 code implementation • 18 Nov 2020 • Hassan Akbari, Hamid Palangi, Jianwei Yang, Sudha Rao, Asli Celikyilmaz, Roland Fernandez, Paul Smolensky, Jianfeng Gao, Shih-Fu Chang
In this paper, we propose a new model architecture for learning multi-modal neuro-symbolic representations for video captioning.
1 code implementation • EMNLP 2020 • Allison Hegel, Sudha Rao, Asli Celikyilmaz, Bill Dolan
Existing language models excel at writing from scratch, but many real-world scenarios require rewriting an existing document to fit a set of constraints.
1 code implementation • ACL 2020 • Angela S. Lin, Sudha Rao, Asli Celikyilmaz, Elnaz Nouri, Chris Brockett, Debadeepta Dey, Bill Dolan
Learning to align these different instruction sets is challenging because: a) different recipes vary in their order of instructions and use of ingredients; and b) video instructions can be noisy and tend to contain far more information than text instructions.
1 code implementation • EACL 2021 • Woon Sang Cho, Yizhe Zhang, Sudha Rao, Asli Celikyilmaz, Chenyan Xiong, Jianfeng Gao, Mengdi Wang, Bill Dolan
In the SL stage, a single-document question generator is trained.
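A minimal sketch of what such a supervised (SL) stage could look like: fine-tuning a generic seq2seq model on (document, question) pairs. The model choice, prompt prefix, data, and hyperparameters are illustrative assumptions, not the paper's setup.

```python
# Hedged sketch of a supervised stage for a single-document question generator.
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)

# Toy (document, question) pairs standing in for the real training data.
pairs = [("Document text about a topic ...", "What is the topic about?")]

model.train()
for doc, question in pairs:
    inputs = tokenizer("generate question: " + doc,
                       return_tensors="pt", truncation=True)
    labels = tokenizer(question, return_tensors="pt", truncation=True).input_ids
    loss = model(**inputs, labels=labels).loss  # teacher-forced cross-entropy
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```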
no code implementations • IJCNLP 2019 • Elissa Redmiles, Lisa Maszkiewicz, Emily Hwang, Dhruv Kuchhal, Everest Liu, Miraida Morales, Denis Peskov, Sudha Rao, Rock Stevens, Kristina Gligorić, Sean Kross, Michelle Mazurek, Hal Daumé III
The readability of a digital text can influence people's ability to learn new things about a range of topics from digital resources (e.g., Wikipedia, WebMD).
no code implementations • WS 2019 • Woon Sang Cho, Yizhe Zhang, Sudha Rao, Chris Brockett, Sungjin Lee
A preliminary step towards this goal is to generate a question that captures common concepts of multiple documents.
no code implementations • WS 2019 • Yang Trista Cao, Sudha Rao, Hal Daumé III
Unlike comprehension-style questions, clarification questions look for some missing information in a given context.
1 code implementation • NAACL 2019 • Sudha Rao, Hal Daumé III
We present an approach for generating clarification questions with the goal of eliciting new information that would make the given textual context more complete.
1 code implementation • COLING 2018 • Xing Niu, Sudha Rao, Marine Carpuat
Generating natural language requires conveying content in an appropriate style.
1 code implementation • ACL 2018 • Sudha Rao, Hal Daumé III
Inquiry is fundamental to communication, and machines cannot effectively collaborate with humans unless they can ask questions.
1 code implementation • NAACL 2018 • Sudha Rao, Joel Tetreault
Style transfer is the task of automatically transforming a piece of text in one particular style into another.
no code implementations • WS 2017 • Allyson Ettinger, Sudha Rao, Hal Daumé III, Emily M. Bender
This paper presents a summary of the first Workshop on Building Linguistically Generalizable Natural Language Processing Systems, and the associated Build It, Break It: The Language Edition shared task.
no code implementations • WS 2017 • Sudha Rao, Daniel Marcu, Kevin Knight, Hal Daumé III
We propose a novel, Abstract Meaning Representation (AMR) based approach to identifying molecular events/interactions in biomedical text.
no code implementations • 26 Oct 2015 • Sudha Rao, Yogarshi Vyas, Hal Daumé III, Philip Resnik
We develop a novel technique to parse English sentences into Abstract Meaning Representation (AMR) using SEARN, a Learning to Search approach, by modeling concept and relation learning in a unified framework.
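A hedged sketch of the SEARN-style learning-to-search loop that the above relies on, not the paper's actual AMR parser: the state/action interface, feature function, loss oracle, and cost-sensitive learner are caller-supplied placeholders.

```python
# Generic SEARN-style training loop: roll in with the current policy, roll out
# every candidate action, collect cost-sensitive examples, fit a new policy,
# and stochastically mix it with the old one.
import random

def rollout(state, policy):
    """Follow `policy` from `state` until a complete structure is produced."""
    while not state.is_final():
        state = state.apply(policy(state))
    return state

def searn_train(examples, reference_policy, loss_oracle,
                fit_cost_sensitive_classifier, n_iterations=5, beta=0.3):
    policy = reference_policy
    for _ in range(n_iterations):
        data = []
        for x, gold in examples:
            state = x.initial_state()
            while not state.is_final():
                # Cost of an action = loss of the structure reached by taking
                # that action and then following the current policy.
                costs = {a: loss_oracle(rollout(state.apply(a), policy), gold)
                         for a in state.actions()}
                data.append((state.features(), costs))
                state = state.apply(policy(state))   # roll-in step
        learned = fit_cost_sensitive_classifier(data)
        old = policy
        # Stochastic interpolation of the old and newly learned policies.
        policy = (lambda s, old=old, new=learned:
                  new(s) if random.random() < beta else old(s))
    return policy
```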