no code implementations • ACL 2022 • Anton Belyy, Chieh-Yang Huang, Jacob Andreas, Emmanouil Antonios Platanios, Sam Thomson, Richard Shin, Subhro Roy, Aleksandr Nisnevich, Charles Chen, Benjamin Van Durme
Collecting data for conversational semantic parsing is a time-consuming and demanding process.
1 code implementation • 15 Nov 2023 • Nicholas Farn, Richard Shin
Large language models (LLMs) have displayed massive improvements in reasoning and decision-making skills and can hold natural conversations with users.
1 code implementation • 21 Sep 2023 • Xinyu Tang, Richard Shin, Huseyin A. Inan, Andre Manoel, FatemehSadat Mireshghallah, Zinan Lin, Sivakanth Gopi, Janardhan Kulkarni, Robert Sim
Our results demonstrate that our algorithm can achieve competitive performance with strong privacy levels.
no code implementations • 20 Dec 2022 • FatemehSadat Mireshghallah, Yu Su, Tatsunori Hashimoto, Jason Eisner, Richard Shin
Task-oriented dialogue systems often assist users with personal or confidential matters.
1 code implementation • NeurIPS 2023 • Subhro Roy, Sam Thomson, Tongfei Chen, Richard Shin, Adam Pauls, Jason Eisner, Benjamin Van Durme
We introduce BenchCLAMP, a Benchmark to evaluate Constrained LAnguage Model Parsing, which includes context-free grammars for seven semantic parsing datasets and two syntactic parsing datasets with varied output representations, as well as a constrained decoding interface to generate only valid outputs covered by these grammars.
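The idea of a constrained decoding interface can be sketched as follows. This is a minimal illustration, not the BenchCLAMP API: a toy context-free grammar restricts which tokens a (mocked) model may emit at each step, so every completed output is grammatical. The grammar, the `allowed_next` helper, and the scorer are all hypothetical.

```python
# Toy right-recursive grammar over tokens: EXPR -> NUM | NUM "+" EXPR.
# (A real system would use the dataset's grammar; this is illustrative.)
GRAMMAR = {
    "EXPR": [["NUM"], ["NUM", "+", "EXPR"]],
    "NUM": [["1"], ["2"]],
}

def allowed_next(stack):
    """Given a stack of pending grammar symbols, return a dict mapping each
    legal next terminal to the list of stacks that would remain after it.
    (Assumes no left recursion, or this expansion would not terminate.)"""
    results = {}
    def expand(stack):
        if not stack:
            return
        top, rest = stack[0], stack[1:]
        if top in GRAMMAR:            # nonterminal: try each production
            for prod in GRAMMAR[top]:
                expand(list(prod) + rest)
        else:                          # terminal: a legal next token
            results.setdefault(top, []).append(rest)
    expand(stack)
    return results

def constrained_greedy_decode(score):
    """Greedy decoding where `score(prefix, token)` mocks model scores;
    only grammar-legal tokens are ever considered at each step."""
    stack, prefix = ["EXPR"], []
    while stack:
        options = allowed_next(stack)
        tok = max(options, key=lambda t: score(prefix, t))
        prefix.append(tok)
        stack = options[tok][0]        # follow one resulting stack (sketch)
    return prefix

# A mock scorer that prefers "2" and prefers stopping over extending:
out = constrained_greedy_decode(lambda p, t: {"2": 2.0, "1": 1.0, "+": 0.5}[t])
```

With this scorer the decoder emits `["2"]`; illegal tokens such as a leading `"+"` are never even scored, which is the point of constrained decoding.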
no code implementations • Findings (ACL) 2022 • Kevin Yang, Olivia Deng, Charles Chen, Richard Shin, Subhro Roy, Benjamin Van Durme
We introduce a novel setup for low-resource task-oriented semantic parsing which incorporates several constraints that may arise in real-world scenarios: (1) lack of similar datasets/models from a related domain, (2) inability to sample useful logical forms directly from a grammar, and (3) privacy requirements for unlabeled natural utterances.
no code implementations • NAACL 2022 • Richard Shin, Benjamin Van Durme
Intuitively, such models can more easily output canonical utterances as they are closer to the natural language used for pre-training.
no code implementations • 10 Dec 2021 • Patrick Xia, Richard Shin
The sizes of pretrained language models make them challenging and expensive to use when there are multiple desired downstream tasks.
1 code implementation • EMNLP 2021 • Richard Shin, Christopher H. Lin, Sam Thomson, Charles Chen, Subhro Roy, Emmanouil Antonios Platanios, Adam Pauls, Dan Klein, Jason Eisner, Benjamin Van Durme
We explore the use of large pretrained language models as few-shot semantic parsers.
1 code implementation • 29 Dec 2019 • Roy Fox, Richard Shin, William Paul, Yitian Zou, Dawn Song, Ken Goldberg, Pieter Abbeel, Ion Stoica
Autonomous agents can learn by imitating teacher demonstrations of the intended behavior.
no code implementations • ICLR 2019 • Richard Shin, Neel Kant, Kavi Gupta, Christopher Bender, Brandon Trabucco, Rishabh Singh, Dawn Song
The goal of program synthesis is to automatically generate programs in a particular language from corresponding specifications, e.g., input-output behavior.
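Synthesis from input-output behavior can be made concrete with a tiny enumerative search over a toy DSL, which finds a composition of primitives consistent with the given examples. This is an illustrative sketch only (the DSL and `synthesize` function are invented for this example); the paper's models learn to guide such search rather than enumerate blindly.

```python
import itertools

# A toy DSL of integer-to-integer primitives (hypothetical, for illustration).
PRIMITIVES = {
    "inc": lambda x: x + 1,
    "double": lambda x: x * 2,
    "neg": lambda x: -x,
}

def synthesize(examples, max_len=3):
    """Return the first primitive sequence whose left-to-right composition
    satisfies every (input, output) example, or None if no program of
    length <= max_len works."""
    for length in range(1, max_len + 1):
        for prog in itertools.product(PRIMITIVES, repeat=length):
            def run(x, prog=prog):
                for op in prog:
                    x = PRIMITIVES[op](x)
                return x
            if all(run(i) == o for i, o in examples):
                return list(prog)
    return None

# Specification: f(1) = 4, f(2) = 6, i.e. f(x) = 2x + 2.
program = synthesize([(1, 4), (2, 6)])
```

Here the search recovers `["inc", "double"]`, since doubling after incrementing maps 1 to 4 and 2 to 6.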
4 code implementations • ACL 2020 • Bailin Wang, Richard Shin, Xiaodong Liu, Oleksandr Polozov, Matthew Richardson
The generalization challenge lies in (a) encoding the database relations in an accessible way for the semantic parser, and (b) modeling alignment between database columns and their mentions in a given query.
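Part (b), aligning database columns with their mentions in the query, can be illustrated with a simple n-gram name-matching pass. This is a deliberately simplified sketch (the `link_columns` helper is invented here); the actual model learns alignments jointly with relation-aware self-attention rather than by string matching alone.

```python
def link_columns(question_tokens, columns):
    """Return (column, start, end) links wherever a column name's
    underscore-separated words appear verbatim as a contiguous n-gram
    in the question (case-insensitive)."""
    q = [t.lower() for t in question_tokens]
    links = []
    for col in columns:
        words = col.lower().split("_")
        n = len(words)
        for i in range(len(q) - n + 1):
            if q[i:i + n] == words:
                links.append((col, i, i + n))
    return links

links = link_columns(
    ["How", "many", "singers", "have", "a", "net", "worth", "over", "one", "million"],
    ["singer_id", "net_worth", "name"],
)
```

Here only `net_worth` is linked (to the span "net worth"); `singer_id` is not, since "singers" does not exactly match "singer", which is exactly the brittleness that motivates learned alignment.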
Ranked #9 on Semantic Parsing on Spider
1 code implementation • 27 Jun 2019 • Richard Shin
When translating natural language questions into SQL queries to answer questions from a database, we would like our methods to generalize to domains and database schemas outside of the training set.
1 code implementation • NeurIPS 2019 • Richard Shin, Miltiadis Allamanis, Marc Brockschmidt, Oleksandr Polozov
Program synthesis of general-purpose source code from natural language specifications is challenging due to the need to reason about high-level patterns in the target program and low-level implementation details at the same time.
no code implementations • NeurIPS 2018 • Richard Shin, Illia Polosukhin, Dawn Song
Program synthesis, or automatically generating programs that are consistent with a provided specification, remains a challenging task in artificial intelligence.
no code implementations • ICLR 2018 • Roy Fox, Richard Shin, Sanjay Krishnan, Ken Goldberg, Dawn Song, Ion Stoica
Neural programs are highly accurate and structured policies that perform algorithmic tasks by controlling the behavior of a computation mechanism.
no code implementations • ICLR 2018 • Richard Shin, Dawn Song
Recent work has shown that it is possible to address these issues by using recursion in the Neural Programmer-Interpreter, but this technique requires a verification set which is difficult to construct without knowledge of the internals of the oracle used to generate training data.
no code implementations • NIPS 2017 Workshop on Machine Learning and Computer Security • Richard Shin, Dawn Song
Several papers have explored the use of JPEG compression as a defense against adversarial images.
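The intuition behind compression as a defense can be shown with a simplified stand-in: coarse value quantization (a crude analogue of the lossy quantization step inside JPEG, without the DCT) discards perturbations smaller than half the quantization step. The `quantize` function and the values below are hypothetical, purely for illustration.

```python
def quantize(pixels, step=16):
    """Snap each pixel to the nearest multiple of `step`, discarding
    perturbations smaller than step/2 -- a crude stand-in for the
    lossy quantization performed during JPEG compression."""
    return [step * round(p / step) for p in pixels]

clean = [0, 32, 64, 128, 255]
adversarial = [p + 5 for p in clean]   # small additive perturbation

# After quantization, the perturbed and clean images coincide:
restored = quantize(adversarial)
```

Here `quantize(adversarial)` equals `quantize(clean)`, so the perturbation is removed; real JPEG behaves analogously but operates on DCT coefficients per 8x8 block, and sufficiently large perturbations survive either way.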
no code implementations • 21 Apr 2017 • Jonathon Cai, Richard Shin, Dawn Song
Empirically, neural networks that attempt to learn programs from data have exhibited poor generalizability.
no code implementations • NeurIPS 2016 • Xinyun Chen, Chang Liu, Richard Shin, Dawn Song, Mingcheng Chen
Automatic translation from natural language descriptions into programs is a longstanding and challenging problem.