Program Synthesis

137 papers with code • 3 benchmarks • 5 datasets

Program synthesis is the process of automatically generating a program or code snippet that satisfies a given specification or set of requirements. This can include generating code from a formal specification, a natural language description, or example inputs and outputs. The primary goal of program synthesis is to minimize human intervention in the coding process, reduce errors, and improve productivity.
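
For instance, a specification given purely as input/output examples reduces synthesis to finding any program consistent with every example. The sketch below illustrates that framing; the names `spec` and `satisfies` are illustrative, not from any particular library.

```python
# A specification given purely as input/output examples; the names
# `spec` and `satisfies` are illustrative, not from any library.
spec = [(1, 2), (5, 6), (10, 11)]  # (input, expected output) pairs

def satisfies(program, examples):
    """A program meets the spec iff it reproduces every example."""
    return all(program(x) == y for x, y in examples)

def candidate(x):  # one program a synthesizer might propose
    return x + 1

print(satisfies(candidate, spec))  # True
```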

Program synthesis often relies on advanced algorithms, artificial intelligence, and machine learning to search the space of possible programs that meet the given constraints. The search can be guided by techniques such as constraint solving, symbolic execution, and genetic algorithms. A minimal enumerative search is sketched below.
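
As a concrete illustration of searching a program space, this toy synthesizer enumerates a tiny arithmetic DSL depth by depth and returns the first expression consistent with the examples. The grammar, constants, and depth limit are assumptions chosen for illustration, not any particular system's design.

```python
import itertools

# Toy bottom-up enumerative synthesis over a tiny arithmetic DSL:
# expr -> x | 0 | 1 | 2 | expr + expr | expr * expr, depth-limited.
examples = [(1, 3), (2, 5), (3, 7)]  # consistent with f(x) = 2*x + 1

def programs(depth):
    """Yield (description, function) pairs for all programs up to `depth`."""
    yield "x", lambda x: x
    for c in (0, 1, 2):
        yield str(c), (lambda c: lambda x: c)(c)  # bind c now, not lazily
    if depth == 0:
        return
    subs = list(programs(depth - 1))
    for (dl, fl), (dr, fr) in itertools.product(subs, repeat=2):
        yield f"({dl}+{dr})", (lambda a, b: lambda x: a(x) + b(x))(fl, fr)
        yield f"({dl}*{dr})", (lambda a, b: lambda x: a(x) * b(x))(fl, fr)

for desc, f in programs(2):
    if all(f(x) == y for x, y in examples):
        print("found:", desc)  # e.g. (x+(x+1))
        break
```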

Most implemented papers

HOUDINI: Lifelong Learning as Program Synthesis

capergroup/houdini NeurIPS 2018

We present a neurosymbolic framework for the lifelong learning of algorithmic tasks that mix perception and procedural reasoning.

Mapping Natural-language Problems to Formal-language Solutions Using Structured Neural Representations

ckzbullbullet/TP-N2F ICML 2020

The encoder of TP-N2F employs TPR "binding" to encode natural-language symbolic structure in vector space, and the decoder uses TPR "unbinding" to generate, in symbolic space, a sequential program represented by relational tuples, each consisting of a relation (or operation) and a number of arguments.
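
The binding and unbinding operations have a simple linear-algebra reading. The toy below uses fixed orthonormal role vectors, which makes unbinding exact; TP-N2F learns its role and filler vectors rather than fixing them, so this is a sketch of the mechanism, not the model.

```python
import numpy as np

# Toy tensor-product representation (TPR): bind filler vectors to role
# vectors via outer products, sum the bindings into one matrix, then
# unbind with a role vector to recover the corresponding filler.
rng = np.random.default_rng(0)
fillers = rng.normal(size=(2, 4))  # two symbols, 4-dim filler vectors
roles = np.eye(3)[:2]              # two orthonormal 3-dim role vectors

T = sum(np.outer(f, r) for f, r in zip(fillers, roles))  # binding
recovered = T @ roles[0]                                 # unbinding role 0
print(np.allclose(recovered, fillers[0]))                # True
```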

Improving Molecular Design by Stochastic Iterative Target Augmentation

yangkevin2/icml2020-stochastic-iterative-target-augmentation ICML 2020

The property predictor is then used as a likelihood model for filtering candidate structures from the generative model.
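
In outline, the loop reads: sample candidates from the generative model, score them with the property predictor, and keep only the candidates that pass as extra training targets. Everything in the sketch below (`generate`, `predict_property`, the 0.5 threshold) is a stand-in, not the paper's actual models or hyperparameters.

```python
import random

def generate(source, n=20):
    """Stand-in generative model: propose n candidate outputs."""
    return [f"{source}-cand{i}-{random.random():.2f}" for i in range(n)]

def predict_property(candidate):
    """Stand-in property predictor, used as a likelihood model."""
    return random.random()  # pretend score in [0, 1]

def augment_targets(source, threshold=0.5):
    """Keep only candidates the predictor accepts; use them as new targets."""
    return [c for c in generate(source) if predict_property(c) >= threshold]

extra_targets = augment_targets("molecule-42")
print(len(extra_targets), "accepted candidates added to the training set")
```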

TF-Coder: Program Synthesis for Tensor Manipulations

google-research/tensorflow-coder NeurIPS Workshop CAP 2020

The success and popularity of deep learning are on the rise, partially due to powerful deep learning frameworks such as TensorFlow and PyTorch that make it easier to develop deep learning models.

Graph-based, Self-Supervised Program Repair from Diagnostic Feedback

michiyasunaga/DrRepair ICML 2020

We present a self-supervised learning paradigm for program repair that leverages unlabeled programs available online to create a large number of extra program repair examples, which we use to pre-train our models.
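
The idea can be sketched as corrupting correct programs to manufacture (broken, fixed) training pairs; in the real pipeline the broken program is also paired with compiler diagnostics. The corruption rules below are illustrative, not DrRepair's.

```python
import random

# Synthetic-bug rules; a rule may be a no-op on some lines, and a real
# pipeline would resample until the program actually changes.
CORRUPTIONS = [
    lambda line: line.replace(";", ""),    # drop a semicolon
    lambda line: line.replace("==", "="),  # comparison -> assignment
    lambda line: line.replace(")", "", 1), # unbalance a parenthesis
]

def make_repair_example(program_lines):
    """Return (broken_program, target_program) as a training pair."""
    idx = random.randrange(len(program_lines))
    broken = list(program_lines)
    broken[idx] = random.choice(CORRUPTIONS)(broken[idx])
    return broken, program_lines

correct = ["int x = 0;", "if (x == 0) {", "  x = 1;", "}"]
broken, fixed = make_repair_example(correct)
print(broken)
```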

Communicating Natural Programs to Humans and Machines

samacqua/LARC 15 Jun 2021

We present LARC, the Language-complete ARC: a collection of natural language descriptions by a group of human participants who instruct each other on how to solve ARC tasks using language alone; it contains successful instructions for 88% of the ARC tasks.

CodeRL: Mastering Code Generation through Pretrained Models and Deep Reinforcement Learning

salesforce/coderl 5 Jul 2022

To address these limitations, we propose CodeRL, a new framework for program synthesis tasks through pretrained language models (LMs) and deep reinforcement learning (RL).
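
In such a setup, the RL signal typically comes from executing generated code against unit tests. The sketch below maps test outcomes to a scalar reward; the reward values and test harness are illustrative placeholders, not the paper's constants.

```python
import os, subprocess, sys, tempfile

def unit_test_reward(program_src, test_input, expected_output):
    """Score a candidate program by functional correctness on one test."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(program_src)
        path = f.name
    try:
        result = subprocess.run([sys.executable, path], input=test_input,
                                capture_output=True, text=True, timeout=5)
    except subprocess.TimeoutExpired:
        return -0.6                                  # timed out
    finally:
        os.unlink(path)
    if result.returncode != 0:
        return -0.3                                  # runtime error
    return 1.0 if result.stdout.strip() == expected_output else -0.1

print(unit_test_reward("print(int(input()) * 2)", "21", "42"))  # 1.0
```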

Learning programs with magic values

celinehocquette/magicpopper 5 Aug 2022

A magic value in a program is a constant symbol that is essential for the execution of the program but has no clear explanation for its choice.
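
A toy example of what learning such a constant looks like: the program's structure is fixed, and the synthesizer must recover the magic value itself from examples. The linear hypothesis and search range below are assumptions chosen for illustration.

```python
# Recover a magic value from input/output examples by sweeping candidates.
examples = [(3, 45), (7, 105), (10, 150)]  # consistent with f(x) = 15 * x

def fits(magic):
    """Does the fixed program shape f(x) = magic * x explain every example?"""
    return all(magic * x == y for x, y in examples)

magic = next(c for c in range(1, 100) if fits(c))
print(magic)  # 15, the recovered magic value
```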

Large Language Models Are Human-Level Prompt Engineers

keirp/automatic_prompt_engineer 3 Nov 2022

By conditioning on natural language instructions, large language models (LLMs) have displayed impressive capabilities as general-purpose computers.
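
Framing prompt engineering as synthesis over instructions suggests a simple search loop: propose candidate instructions, score each by how often the model following it reproduces held-out demonstrations, and keep the best. The `model` below is a stand-in callable, not a real LLM API.

```python
def model(instruction, x):
    """Placeholder for an LLM call conditioned on an instruction."""
    return x[::-1] if "revers" in instruction.lower() else x

demos = [("abc", "cba"), ("hello", "olleh")]
candidates = ["Reverse the input string.", "Echo the input.", "Sort letters."]

def score(instruction):
    """Fraction of held-out demonstrations the instruction reproduces."""
    return sum(model(instruction, x) == y for x, y in demos) / len(demos)

best = max(candidates, key=score)
print(best)  # "Reverse the input string."
```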

CodeGen2: Lessons for Training LLMs on Programming and Natural Languages

salesforce/CodeGen 3 May 2023

In this study, we attempt to render the training of LLMs for program synthesis more efficient by unifying four key components: (1) model architectures, (2) learning methods, (3) infill sampling, and (4) data distributions.
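
Infill sampling trains a left-to-right model to fill in a missing middle by rearranging each document into prefix, suffix, and middle segments separated by sentinel tokens. The sentinel strings in this sketch are illustrative, not CodeGen2's actual special tokens.

```python
def to_infill_example(text, start, end):
    """Rearrange a document so the middle span becomes the final target."""
    prefix, middle, suffix = text[:start], text[start:end], text[end:]
    return f"<prefix>{prefix}<suffix>{suffix}<mask>{middle}<eom>"

src = "def add(a, b):\n    return a + b\n"
print(to_infill_example(src, start=19, end=31))  # middle is "return a + b"
```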