1 code implementation • ICML 2020 • Jiani Huang, Calvin Smith, Osbert Bastani, Rishabh Singh, Aws Albarghouthi, Mayur Naik
The policy neural network employs a program interpreter that provides immediate feedback on the consequences of the decisions made by the policy, and also takes into account the uncertainty in the symbolic representation of the image.
no code implementations • Findings (ACL) 2022 • Rishabh Singh, Shirin Goshtasbpour
Modern NLP classifiers are known to return uncalibrated estimations of class posteriors.
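The snippet names the problem but not the fix; for orientation, here is a minimal sketch of temperature scaling, the standard post-hoc recalibration baseline for uncalibrated posteriors (not necessarily this paper's method). The toy logits, label noise, and optimizer bounds are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def nll(T, logits, labels):
    # Negative log-likelihood of the temperature-scaled softmax.
    z = logits / T
    z = z - z.max(axis=1, keepdims=True)          # numerical stability
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

def fit_temperature(val_logits, val_labels):
    # A single scalar T > 0, fit on held-out validation logits.
    res = minimize_scalar(nll, bounds=(0.05, 10.0),
                          args=(val_logits, val_labels), method="bounded")
    return res.x

rng = np.random.default_rng(0)
logits = 3.0 * rng.normal(size=(500, 3))          # overconfident toy logits
labels = np.where(rng.random(500) < 0.7,
                  logits.argmax(1), rng.integers(0, 3, size=500))
T = fit_temperature(logits, labels)
print("fitted temperature:", round(T, 2))         # > 1: softening needed
```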
1 code implementation • 3 Feb 2023 • Gabriel Orlanski, Kefan Xiao, Xavier Garcia, Jeffrey Hui, Joshua Howland, Jonathan Malmaud, Jacob Austin, Rishabh Singh, Michele Catasta
Training a model on a balanced corpus results in, on average, 12.34% higher $pass@k$ across all tasks and languages compared to the baseline.
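For reference, the $pass@k$ metric reported above is conventionally computed with the unbiased estimator of Chen et al. (2021): given $n$ sampled programs of which $c$ pass the unit tests, it estimates the probability that at least one of $k$ samples passes.

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: 1 - C(n-c, k) / C(n, k), computed stably."""
    if n - c < k:
        return 1.0
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

print(pass_at_k(n=200, c=25, k=10))  # ~0.75
```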
no code implementations • 3 Nov 2022 • Rishabh Singh, Jose C. Principe
Being based on the Gaussian RKHS, our approach is robust towards outliers and monotone transformations of data, while the multiple moments of uncertainty provide high resolution and interpretability of the type of dependence being quantified.
no code implementations • 3 Nov 2022 • Rishabh Singh, Jose C. Principe
We present a simple framework for high-resolution predictive uncertainty quantification of semantic segmentation models that leverages a multi-moment functional definition of uncertainty associated with the model's feature space in the reproducing kernel Hilbert space (RKHS).
no code implementations • NeurIPS 2021 • Shobha Vasudevan, Wenjie (Joe) Jiang, David Bieber, Rishabh Singh, Hamid Shojaei, C. Richard Ho, Charles Sutton
We evaluate Design2Vec on three real-world hardware designs, including an industrial chip used in commercial data centers.
no code implementations • 22 Sep 2021 • Rishabh Singh, Jose C. Principe
The RKHS projection of model weights yields a potential-field interpretation of the model weight PDF, which in turn allows the definition of a functional operator, inspired by perturbation theory in physics, that performs a moment decomposition of the model weight PDF (the potential field) at a specific model output to quantify its uncertainty.
no code implementations • 15 Aug 2021 • Sumit Kumar Varshney, Jeetu Kumar, Aditya Tiwari, Rishabh Singh, Venkata M. V. Gunturi, Narayanan C. Krishnan
Spatio-temporal interpolation is highly challenging due to the complex spatial and temporal relationships.
1 code implementation • 26 Jun 2021 • Xinyun Chen, Petros Maniatis, Rishabh Singh, Charles Sutton, Hanjun Dai, Max Lin, Denny Zhou
In this work, we present the first approach for synthesizing spreadsheet formulas from tabular context, which includes both headers and semi-structured tabular data.
no code implementations • 19 May 2021 • Vaishali Ingale, Rishabh Singh, Pragati Patwal
Generation of maps from satellite images is conventionally done by a range of tools.
Tasks: Generative Adversarial Network, Image-to-Image Translation (+1)
no code implementations • 2 Mar 2021 • Rishabh Singh, Jose C. Principe
We therefore propose a framework for predictive uncertainty quantification of a trained neural network that explicitly estimates the PDF of its raw prediction space (before activation), p(y'|x, w), which we refer to as the model PDF, in a Gaussian reproducing kernel Hilbert space (RKHS).
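A heavily simplified sketch of the core object: represent the density of stored raw predictions with a Gaussian-kernel (Parzen) estimate, and read a low potential-field value at a new prediction as high uncertainty. The moment decomposition of the full framework is omitted, and the kernel width and stored prediction set are assumptions for illustration.

```python
import numpy as np

def potential_field(y_new, y_train, sigma=1.0):
    # Mean Gaussian-kernel evaluation of y_new against stored predictions:
    # a Parzen estimate of the model PDF, up to a normalizing constant.
    d2 = np.sum((y_train - y_new) ** 2, axis=1)
    return np.mean(np.exp(-d2 / (2.0 * sigma ** 2)))

rng = np.random.default_rng(0)
y_train = rng.normal(size=(1000, 2))              # stored raw predictions
print(potential_field(np.zeros(2), y_train))      # in-distribution: high
print(potential_field(np.full(2, 5.0), y_train))  # far away: near zero -> uncertain
```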
no code implementations • 1 Dec 2020 • Joey Hong, David Dohan, Rishabh Singh, Charles Sutton, Manzil Zaheer
The latent codes are learned using a self-supervised learning principle, in which first a discrete autoencoder is trained on the output sequences, and then the resulting latent codes are used as intermediate targets for the end-to-end sequence prediction task.
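The two-stage recipe, sketched schematically: (1) compress outputs into discrete latent codes, (2) use the codes as intermediate prediction targets. KMeans over output vectors stands in for the discrete autoencoder here, purely to make the pipeline shape concrete; the data and model choices are toy assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))              # inputs
Y = np.tanh(X @ rng.normal(size=(10, 6)))    # output "sequences" as vectors

# Stage 1: learn discrete codes for outputs (autoencoder stand-in).
codes = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(Y)

# Stage 2: predict the code from the input as an intermediate target;
# the final decoder would then condition on input + predicted code.
code_model = LogisticRegression(max_iter=1000).fit(X, codes)
print("intermediate-target accuracy:", code_model.score(X, codes))
```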
no code implementations • NeurIPS 2020 • Hanjun Dai, Rishabh Singh, Bo Dai, Charles Sutton, Dale Schuurmans
In this paper we propose ALOE, a new algorithm for learning conditional and unconditional EBMs for discrete structured data, where parameter gradients are estimated using a learned sampler that mimics local search.
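A minimal sketch of the training-loop idea (not ALOE itself): estimate the likelihood gradient of a discrete EBM by contrasting data with negatives produced by a local-search sampler. ALOE learns the sampler; a greedy random bit-flip search stands in for it here, and the linear energy and toy data are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 16
theta = np.zeros(D)                  # linear energy: E(x) = -theta @ x

def energy(x):
    return -theta @ x

def local_search(x, steps=20):
    # Stand-in sampler: greedy random single-bit flips that lower energy.
    x = x.copy()
    for _ in range(steps):
        i = rng.integers(D)
        x2 = x.copy(); x2[i] ^= 1
        if energy(x2) < energy(x):
            x = x2
    return x

data = (rng.random((256, D)) < 0.8).astype(int)   # toy data: bits mostly 1
for _ in range(200):
    x_pos = data[rng.integers(len(data))]
    x_neg = local_search(rng.integers(0, 2, D))
    # d/dtheta log p(x) for the linear energy reduces to x_pos - x_neg.
    theta += 0.05 * (x_pos - x_neg)
print(np.round(theta, 1))   # positive biases: the EBM favours 1-bits
```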
1 code implementation • 17 Sep 2020 • Prem Devanbu, Matthew Dwyer, Sebastian Elbaum, Michael Lowry, Kevin Moran, Denys Poshyvanyk, Baishakhi Ray, Rishabh Singh, Xiangyu Zhang
The intent of this report is to serve as a potential roadmap to guide future work that sits at the intersection of SE & DL.
no code implementations • ICLR 2021 • Augustus Odena, Kensen Shi, David Bieber, Rishabh Singh, Charles Sutton, Hanjun Dai
Program synthesis is challenging largely because of the difficulty of search in a large space of programs.
2 code implementations • ICLR 2021 • Subham Sekhar Sahoo, Subhashini Venugopalan, Li Li, Rishabh Singh, Patrick Riley
In this work, we propose a technique for combining gradient-based methods with symbolic techniques to scale such analyses and demonstrate its application for model explanation.
no code implementations • 19 Jun 2020 • Matej Balog, Rishabh Singh, Petros Maniatis, Charles Sutton
We present a new program synthesis approach that combines an encoder-decoder based synthesis architecture with a differentiable program fixer.
1 code implementation • ICLR 2020 • Vincent J. Hellendoorn, Charles Sutton, Rishabh Singh, Petros Maniatis, David Bieber
By studying a popular, non-trivial program repair task, variable-misuse identification, we explore the relative merits of traditional and hybrid model families for code representation.
2 code implementations • NeurIPS Workshop CAP 2020 • Kensen Shi, David Bieber, Rishabh Singh
The success and popularity of deep learning is on the rise, partially due to powerful deep learning frameworks such as TensorFlow and PyTorch that make it easier to develop deep learning models.
no code implementations • 27 Feb 2020 • Daniel A. Abolafia, Rishabh Singh, Manzil Zaheer, Charles Sutton
Main consists of a neural controller that interacts with a variable-length input tape and learns to compose modules together with their corresponding argument choices.
no code implementations • 30 Jan 2020 • Rishabh Singh, Jose C. Principe
This paper introduces a new framework for quantifying predictive uncertainty for both data and models, which relies on projecting the data into a Gaussian reproducing kernel Hilbert space (RKHS) and transforming the data probability density function (PDF) in a way that quantifies the flow of its gradient as a topological potential field evaluated at every point in the sample space.
no code implementations • ICLR 2019 • Richard Shin, Neel Kant, Kavi Gupta, Christopher Bender, Brandon Trabucco, Rishabh Singh, Dawn Song
The goal of program synthesis is to automatically generate programs in a particular language from corresponding specifications, e.g., input-output behavior.
no code implementations • NeurIPS 2019 • Hanjun Dai, Yujia Li, Chenglong Wang, Rishabh Singh, Po-Sen Huang, Pushmeet Kohli
We propose a `learning to explore' framework where we learn a policy from a distribution of environments.
no code implementations • 25 Sep 2019 • Marko Vasic, Andrija Petrovic, Kaiyuan Wang, Mladen Nikolic, Rishabh Singh, Sarfraz Khurshid
We propose MoET, a more expressive, yet still interpretable model based on Mixture of Experts, consisting of a gating function that partitions the state space, and multiple decision tree experts that specialize on different partitions.
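A minimal sketch of the model shape (not the paper's joint training procedure): a gating function partitions the state space, and a decision-tree expert specializes on each partition. A fixed KMeans gate and sklearn trees stand in for MoET's learned gate and experts; the imitation data below is a toy assumption.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier

class MoETSketch:
    def __init__(self, n_experts=4, max_depth=4):
        self.gate = KMeans(n_clusters=n_experts, n_init=10, random_state=0)
        self.experts = [DecisionTreeClassifier(max_depth=max_depth)
                        for _ in range(n_experts)]

    def fit(self, X, y):
        part = self.gate.fit_predict(X)          # partition the state space
        for i, tree in enumerate(self.experts):
            tree.fit(X[part == i], y[part == i])  # one expert per partition
        return self

    def predict(self, X):
        part = self.gate.predict(X)
        out = np.empty(len(X), dtype=int)
        for i, tree in enumerate(self.experts):
            mask = part == i
            if mask.any():
                out[mask] = tree.predict(X[mask])
        return out

# Imitation-learning flavour: fit to (state, teacher_action) pairs.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 6))               # states
y = (X[:, 0] + X[:, 1] > 0).astype(int)      # teacher actions
print(MoETSketch().fit(X, y).predict(X[:5]))
```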
2 code implementations • 16 Jun 2019 • Marko Vasic, Andrija Petrovic, Kaiyuan Wang, Mladen Nikolic, Rishabh Singh, Sarfraz Khurshid
By training Mo\"ET models using an imitation learning procedure on deep RL agents we outperform the previous state-of-the-art technique based on decision trees while preserving the verifiability of the models.
2 code implementations • ICLR 2019 • Marko Vasic, Aditya Kanade, Petros Maniatis, David Bieber, Rishabh Singh
We show that it is beneficial to train a model that jointly and directly localizes and repairs variable-misuse bugs.
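A minimal sketch of the joint formulation: one encoder over the token sequence with two pointer-style heads, one scoring each position as the misuse location and one scoring each position as the repair source. The dimensions, vocabulary, and targets below are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class JointLocRepair(nn.Module):
    def __init__(self, vocab=1000, dim=64):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.enc = nn.LSTM(dim, dim, batch_first=True, bidirectional=True)
        self.loc_head = nn.Linear(2 * dim, 1)   # per-position: is this the bug?
        self.rep_head = nn.Linear(2 * dim, 1)   # per-position: is this the fix?

    def forward(self, tokens):                  # tokens: (B, T) int64
        h, _ = self.enc(self.emb(tokens))       # (B, T, 2*dim)
        return self.loc_head(h).squeeze(-1), self.rep_head(h).squeeze(-1)

model = JointLocRepair()
tokens = torch.randint(0, 1000, (2, 50))
loc_logits, rep_logits = model(tokens)
# Joint training: cross-entropy over positions for both heads at once.
loss = nn.functional.cross_entropy(loc_logits, torch.tensor([3, 7])) \
     + nn.functional.cross_entropy(rep_logits, torch.tensor([10, 2]))
loss.backward()
```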
1 code implementation • 23 Jan 2019 • Li Li, Minjie Fan, Rishabh Singh, Patrick Riley
The second part, which we call Neural-Guided Monte Carlo Tree Search, uses the network during a search to find an expression that conforms to a set of data points and desired leading powers.
1 code implementation • 9 Jul 2018 • Chenglong Wang, Kedar Tatwawadi, Marc Brockschmidt, Po-Sen Huang, Yi Mao, Oleksandr Polozov, Rishabh Singh
We consider the problem of neural semantic parsing, which translates natural language questions into executable SQL queries.
no code implementations • 1 Jul 2018 • Surya Bhupatiraju, Kumar Krishna Agrawal, Rishabh Singh
Deep reinforcement learning has led to several recent breakthroughs, though the learned policies are often based on black-box neural networks.
no code implementations • ICLR 2018 • Rudy Bunel, Matthew Hausknecht, Jacob Devlin, Rishabh Singh, Pushmeet Kohli
Program synthesis is the task of automatically generating a program consistent with a specification.
no code implementations • ICML 2018 • Abhinav Verma, Vijayaraghavan Murali, Rishabh Singh, Pushmeet Kohli, Swarat Chaudhuri
Unlike the popular Deep Reinforcement Learning (DRL) paradigm, which represents policies by neural networks, PIRL represents policies using a high-level, domain-specific programming language.
no code implementations • 10 Mar 2018 • Roland Fernandez, Asli Celikyilmaz, Rishabh Singh, Paul Smolensky
We present a formal language with expressions denoting general symbol structures and queries which access information in those structures.
1 code implementation • NAACL 2018 • Po-Sen Huang, Chenglong Wang, Rishabh Singh, Wen-tau Yih, Xiaodong He
In conventional supervised training, a model is trained to fit all the training examples.
Ranked #7 on Code Generation on WikiSQL
no code implementations • NeurIPS 2018 • Xin Zhang, Armando Solar-Lezama, Rishabh Singh
We argue that such a correction is a useful way to provide feedback to a user when the network's output is different from a desired output.
no code implementations • 14 Jan 2018 • Konstantin Böttinger, Patrice Godefroid, Rishabh Singh
Fuzzing is the process of finding security vulnerabilities in input-processing code by repeatedly testing the code with modified inputs.
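The loop described above, as a minimal random-mutation fuzzer; the target parser and its planted bug are hypothetical stand-ins for real input-processing code.

```python
import random

def target(data: bytes):
    # Hypothetical parser with a planted bug on a rare header byte.
    if len(data) > 4 and data[:4] == b"HDR:" and data[4] & 0xF0 == 0xF0:
        raise ValueError("parser crash")

def mutate(seed: bytes, n_flips=2) -> bytes:
    buf = bytearray(seed)
    for _ in range(n_flips):
        i = random.randrange(len(buf))
        buf[i] ^= 1 << random.randrange(8)     # flip one random bit
    return bytes(buf)

random.seed(0)
seed, crashes = b"HDR:\x70rest-of-input", []
for _ in range(50_000):
    inp = mutate(seed)
    try:
        target(inp)
    except Exception:
        crashes.append(inp)                    # record crashing inputs
print(f"{len(crashes)} crashing inputs found")
```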
no code implementations • ICLR 2018 • Chenglong Wang, Marc Brockschmidt, Rishabh Singh
We present a system that allows for querying data tables using natural language questions, where the system translates the question into an executable SQL query.
no code implementations • ICLR 2018 • Ke Wang, Rishabh Singh, Zhendong Su
Our evaluation results show that the semantic program embeddings significantly outperform the syntactic program embeddings based on token sequences and abstract syntax trees.
no code implementations • 29 Nov 2017 • Rajeev Alur, Dana Fisman, Rishabh Singh, Armando Solar-Lezama
Syntax-Guided Synthesis (SyGuS) is the computational problem of finding an implementation f that meets both a semantic constraint given by a logical formula $\varphi$ in a background theory T, and a syntactic constraint given by a grammar G, which specifies the allowed set of candidate implementations.
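A tiny, self-contained sketch of this setup: enumerate candidate implementations generated by a grammar in order of size, returning the first that satisfies the semantic constraint. Real SyGuS solvers discharge $\varphi$ with an SMT solver over theory T; here the check is an exhaustive test on a small finite domain, and the grammar and spec (synthesizing max) are toy assumptions.

```python
import itertools

# Toy grammar G:  E ::= x | y | 0 | 1 | (E + E) | (E if x <= y else E)
def expressions(size):
    if size == 1:
        yield from ["x", "y", "0", "1"]
        return
    for ls in range(1, size - 1):
        for l in expressions(ls):
            for r in expressions(size - 1 - ls):
                yield f"({l} + {r})"
                yield f"({l} if x <= y else {r})"

def phi(f):
    # Semantic constraint: f computes max(x, y), checked on a finite domain
    # (a stand-in for an SMT validity check in theory T).
    return all(f(x, y) == max(x, y) for x in range(-3, 4) for y in range(-3, 4))

def synthesize():
    for size in itertools.count(1):          # smallest programs first
        for expr in expressions(size):
            if phi(eval(f"lambda x, y: {expr}")):
                return expr

print(synthesize())  # -> (y if x <= y else x)
```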
1 code implementation • 20 Nov 2017 • Ke Wang, Rishabh Singh, Zhendong Su
Evaluation results show that our new semantic program embedding significantly outperforms the syntactic program embeddings based on token sequences and abstract syntax trees.
no code implementations • 10 Nov 2017 • Mohit Rajpal, William Blum, Rishabh Singh
Fuzzing is a popular dynamic program analysis technique used to find vulnerabilities in complex software.
no code implementations • ICLR 2018 • Jacob Devlin, Jonathan Uesato, Rishabh Singh, Pushmeet Kohli
We study the problem of semantic code repair, which can be broadly defined as automatically fixing non-syntactic bugs in source code.
no code implementations • NeurIPS 2017 • Jacob Devlin, Rudy Bunel, Rishabh Singh, Matthew Hausknecht, Pushmeet Kohli
In our first proposal, portfolio adaptation, a set of induction models is pretrained on a set of related tasks, and the best model is adapted towards the new task using transfer learning.
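A minimal sketch of portfolio adaptation as described: pretrain several induction models on related tasks, pick the one that scores best on the new task's few examples, and adapt it with further training steps. The tasks and models below are toy stand-ins for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def make_task(shift):
    X = rng.normal(size=(400, 8))
    return X, (X[:, 0] + shift * X[:, 1] > 0).astype(int)

# 1) Pretrain a portfolio on related tasks.
portfolio = []
for shift in [-1.0, 0.0, 1.0]:
    X, y = make_task(shift)
    portfolio.append(MLPClassifier(hidden_layer_sizes=(32,), max_iter=300,
                                   random_state=0).fit(X, y))

# 2) New task: score each pretrained model on its few examples...
X_new, y_new = make_task(0.8)
best = max(portfolio, key=lambda m: m.score(X_new[:20], y_new[:20]))

# 3) ...and adapt the best one with additional training steps.
for _ in range(50):
    best.partial_fit(X_new[:20], y_new[:20])
print("adapted accuracy:", best.score(X_new[20:], y_new[20:]))
```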
no code implementations • 14 Apr 2017 • Surya Bhupatiraju, Rishabh Singh, Abdel-rahman Mohamed, Pushmeet Kohli
We then present a novel neural synthesis algorithm to search for programs in the DSL that are consistent with a given set of examples.
3 code implementations • ICML 2017 • Jacob Devlin, Jonathan Uesato, Surya Bhupatiraju, Rishabh Singh, Abdel-rahman Mohamed, Pushmeet Kohli
Recently, two competing approaches for automatic program learning have received significant attention: (1) neural program synthesis, where a neural network is conditioned on input/output (I/O) examples and learns to generate a program, and (2) neural program induction, where a neural network generates new outputs directly using a latent program representation.
1 code implementation • 25 Jan 2017 • Patrice Godefroid, Hila Peleg, Rishabh Singh
Fuzzing consists of repeatedly testing an application with modified, or fuzzed, inputs with the goal of finding security vulnerabilities in input-parsing code.
no code implementations • 2 Dec 2016 • Alexander L. Gaunt, Marc Brockschmidt, Rishabh Singh, Nate Kushman, Pushmeet Kohli, Jonathan Taylor, Daniel Tarlow
A TerpreT model is composed of a specification of a program representation and an interpreter that describes how programs map inputs to outputs.
no code implementations • 23 Nov 2016 • Rajeev Alur, Dana Fisman, Rishabh Singh, Armando Solar-Lezama
Syntax-Guided Synthesis (SyGuS) is the computational problem of finding an implementation f that meets both a semantic constraint given by a logical formula $\varphi$ in a background theory T, and a syntactic constraint given by a grammar G, which specifies the allowed set of candidate implementations.
no code implementations • 6 Nov 2016 • Emilio Parisotto, Abdel-rahman Mohamed, Rishabh Singh, Lihong Li, Dengyong Zhou, Pushmeet Kohli
While achieving impressive results, these approaches have a number of important limitations: (a) they are computationally expensive and hard to train, (b) a model has to be trained for each task (program) separately, and (c) it is hard to interpret or verify the correctness of the learnt mapping (as it is defined by a neural network).
no code implementations • 15 Aug 2016 • Alexander L. Gaunt, Marc Brockschmidt, Rishabh Singh, Nate Kushman, Pushmeet Kohli, Jonathan Taylor, Daniel Tarlow
TerpreT is similar to a probabilistic programming language: a model is composed of a specification of a program representation (declarations of random variables) and an interpreter describing how programs map inputs to outputs (a model connecting unknowns to observations).
no code implementations • 19 Mar 2016 • Sahil Bhatia, Rishabh Singh
We present a technique for providing feedback on syntax errors that uses Recurrent neural networks (RNNs) to model syntactically valid token sequences.
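A minimal sketch of the idea: train an RNN language model on syntactically valid token sequences, then flag the position where the next-token probability drops sharply as the likely syntax error site. The vocabulary, corpus, and planted bug below are toy assumptions.

```python
import torch
import torch.nn as nn

VOCAB = ["<s>", "def", "f", "(", ")", ":", "return", "0", "\n"]
tok = {t: i for i, t in enumerate(VOCAB)}

class TokenLM(nn.Module):
    def __init__(self, v=len(VOCAB), dim=32):
        super().__init__()
        self.emb = nn.Embedding(v, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, v)

    def forward(self, x):
        h, _ = self.rnn(self.emb(x))
        return self.out(h)

# Train on (repeated) syntactically valid sequences -- a toy corpus.
valid = ["<s> def f ( ) : return 0 \n".split()] * 64
X = torch.tensor([[tok[t] for t in s] for s in valid])
model = TokenLM()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):
    logits = model(X[:, :-1])
    loss = nn.functional.cross_entropy(
        logits.reshape(-1, len(VOCAB)), X[:, 1:].reshape(-1))
    opt.zero_grad(); loss.backward(); opt.step()

# Low next-token probability localizes the (planted) syntax error.
buggy = "<s> def f ( : ) return 0 \n".split()   # ':' and ')' swapped
ids = torch.tensor([[tok[t] for t in buggy]])
probs = model(ids[:, :-1]).softmax(-1)[0]
for i, t in enumerate(buggy[1:]):
    print(t, round(probs[i, ids[0, i + 1]].item(), 3))
```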