Program Synthesis
138 papers with code • 3 benchmarks • 5 datasets
Program synthesis is the process of automatically generating a program or code snippet that satisfies a given specification or set of requirements. This can include generating code from a formal specification, a natural language description, or example inputs and outputs. The primary goal of program synthesis is to minimize human intervention in the coding process, reduce errors, and improve productivity.
Program synthesis typically relies on advanced algorithms, artificial intelligence, and machine learning techniques to search the space of possible programs that satisfy the given constraints. This search can be guided by a variety of techniques, such as constraint solving, symbolic execution, and genetic algorithms.
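As a concrete illustration of searching the program space, here is a minimal sketch of bottom-up enumerative synthesis from input-output examples. The grammar (the variable `x`, small constants, `+` and `*`) and the function names are my own illustrative choices, not from any specific system:

```python
import itertools

def synthesize(examples, rounds=2):
    """Bottom-up enumerative search over a toy arithmetic grammar.

    Returns (expression string, callable) matching all (input, output)
    examples, or None if nothing is found within the given rounds.
    """
    # Seed programs: the input variable and small integer constants.
    programs = [("x", lambda x: x)] + [
        (str(c), lambda x, c=c: c) for c in range(4)
    ]
    for _ in range(rounds):
        # itertools.product snapshots `programs` up front, so appending
        # below only grows the candidate pool for the next round.
        for (s1, f1), (s2, f2) in itertools.product(programs, repeat=2):
            for op, g in (("+", lambda a, b: a + b),
                          ("*", lambda a, b: a * b)):
                expr = f"({s1} {op} {s2})"
                fn = lambda x, f1=f1, f2=f2, g=g: g(f1(x), f2(x))
                if all(fn(i) == o for i, o in examples):
                    return expr, fn
                programs.append((expr, fn))
    return None

# Specification by examples; the target behavior here is f(x) = 2x + 1.
expr, fn = synthesize([(0, 1), (1, 3), (2, 5)])
```

Real synthesizers prune this blowup aggressively (e.g., by discarding candidates that are observationally equivalent on the examples); this sketch enumerates naively to keep the idea visible.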
Latest papers with no code
Fewer Truncations Improve Language Modeling
In large language model training, input documents are typically concatenated together and then split into sequences of equal length to avoid padding tokens.
Self-Training Large Language Models for Improved Visual Program Synthesis With Visual Reinforcement
We propose a method that exploits existing annotations for a vision-language task to derive a coarse reward signal for that task, treats the LLM as a policy, and applies reinforced self-training to improve the LLM's visual program synthesis ability for that task.
Synapse: Learning Preferential Concepts from Visual Demonstrations
This paper addresses the problem of preference learning, which aims to learn user-specific preferences (e.g., "good parking spot", "convenient drop-off location") from visual input.
Guiding Enumerative Program Synthesis with Large Language Models
In this paper, we evaluate the abilities of LLMs to solve formal synthesis benchmarks by carefully crafting a library of prompts for the domain.
Semi-Instruct: Bridging Natural-Instruct and Self-Instruct for Code Large Language Models
Presently, two dominant paradigms for collecting tuning data are natural-instruct (human-written) and self-instruct (automatically generated).
Enforcing Temporal Constraints on Generative Agent Behavior with Reactive Synthesis
Our approach uses Temporal Stream Logic (TSL) to generate an automaton that enforces a temporal structure on an agent, leaving the details of each action at each moment to an LLM.
Origami: (un)folding the abstraction of recursion schemes for program synthesis
Program synthesis with Genetic Programming searches for a correct program that satisfies the input specification, which is usually provided as input-output examples.
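To illustrate the general genetic-programming approach described above (not the Origami paper's specific method), the sketch below evolves expression trees scored by how many input-output examples they satisfy. The grammar and parameters are illustrative assumptions:

```python
import random

OPS = {"+": lambda a, b: a + b, "*": lambda a, b: a * b}

def random_expr(depth=2):
    """Random expression tree over x, small constants, + and *."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(["x", 0, 1, 2])
    op = random.choice(list(OPS))
    return (op, random_expr(depth - 1), random_expr(depth - 1))

def evaluate(expr, x):
    if expr == "x":
        return x
    if isinstance(expr, int):
        return expr
    op, left, right = expr
    return OPS[op](evaluate(left, x), evaluate(right, x))

def fitness(expr, examples):
    """Count how many input-output examples the expression satisfies."""
    return sum(evaluate(expr, i) == o for i, o in examples)

def mutate(expr):
    """Replace a randomly chosen subtree with a fresh random one."""
    if isinstance(expr, tuple) and random.random() < 0.7:
        op, left, right = expr
        if random.random() < 0.5:
            return (op, mutate(left), right)
        return (op, left, mutate(right))
    return random_expr()

def evolve(examples, pop_size=300, generations=100):
    pop = [random_expr() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda e: fitness(e, examples), reverse=True)
        if fitness(pop[0], examples) == len(examples):
            return pop[0]  # satisfies every example
        # Keep the fitter half; refill with mutated copies of survivors.
        survivors = pop[: pop_size // 2]
        pop = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return pop[0]

random.seed(0)
# Specification by examples; the target behavior here is f(x) = x + 2.
best = evolve([(0, 2), (1, 3), (2, 4)])
```

Production-quality GP systems add crossover between parent trees and size penalties against bloat; mutation-only hill climbing is kept here for brevity.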
LTL learning on GPUs
Linear temporal logic (LTL) is widely used in industrial verification.
WorldCoder, a Model-Based LLM Agent: Building World Models by Writing Code and Interacting with the Environment
We give a model-based agent that builds a Python program representing its knowledge of the world based on its interactions with the environment.
CodeIt: Self-Improving Language Models with Prioritized Hindsight Replay
Our method iterates between 1) program sampling and hindsight relabeling, and 2) learning from prioritized experience replay.