Program Synthesis
139 papers with code • 3 benchmarks • 5 datasets
Program synthesis is the process of automatically generating a program or code snippet that satisfies a given specification or set of requirements. This can include generating code from a formal specification, a natural language description, or example inputs and outputs. The primary goal of program synthesis is to minimize human intervention in the coding process, reduce errors, and improve productivity.
Program synthesis often involves the use of advanced algorithms, artificial intelligence, and machine learning techniques to search the space of possible programs that meet the given constraints. This process can be guided by a variety of techniques, such as constraint solving, symbolic execution, and genetic algorithms.
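As a concrete illustration of searching the space of possible programs, here is a minimal bottom-up enumerative synthesizer over a toy arithmetic DSL. The DSL, function names, and examples are invented for this sketch and do not correspond to any specific system:

```python
import itertools

# A tiny DSL of unary integer programs, represented as nested tuples.
# Leaves are the input variable "x" or a constant; internal nodes apply an op.
LEAVES = ["x", 1, 2]
OPS = {
    "add": lambda a, b: a + b,
    "mul": lambda a, b: a * b,
}

def evaluate(prog, x):
    """Evaluate a program tree on input x."""
    if prog == "x":
        return x
    if isinstance(prog, int):
        return prog
    op, left, right = prog
    return OPS[op](evaluate(left, x), evaluate(right, x))

def synthesize(examples, max_depth=3):
    """Bottom-up enumerative search: grow programs level by level,
    pruning semantically duplicate programs (same outputs on the given
    inputs), until one matches all input-output examples."""
    inputs = tuple(x for x, _ in examples)
    target = tuple(y for _, y in examples)
    # Map observed behaviour (output tuple) -> smallest program found.
    seen = {}
    for prog in LEAVES:
        sig = tuple(evaluate(prog, x) for x in inputs)
        seen.setdefault(sig, prog)
    if target in seen:
        return seen[target]
    for _ in range(max_depth):
        for op in OPS:
            # product() snapshots seen.values(), so mutation below is safe.
            for a, b in itertools.product(list(seen.values()), repeat=2):
                prog = (op, a, b)
                sig = tuple(evaluate(prog, x) for x in inputs)
                if sig not in seen:
                    seen[sig] = prog
                    if sig == target:
                        return prog
    return None

# Synthesize a program consistent with f(x) = 2*x + 1 from three examples.
prog = synthesize([(0, 1), (1, 3), (4, 9)])
```

Semantic deduplication (keying the table on observed outputs rather than syntax) is what keeps even this naive search tractable; constraint solving or learned guidance, as mentioned above, are ways of pruning the same space more aggressively.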
Libraries
Use these libraries to find Program Synthesis models and implementations
Latest papers
ChatGPT for GTFS: Benchmarking LLMs on GTFS Understanding and Retrieval
This research examines whether widely adopted LLMs (ChatGPT) can understand GTFS and retrieve information from it using natural language instructions, without the relevant information being provided explicitly.
GRAN is superior to GraphRNN: node orderings, kernel- and graph embeddings-based metrics for graph generators
We use these metrics to compare GraphRNN and GRAN, two well-known generative models for graphs, and unveil the influence of node orderings.
RLTF: Reinforcement Learning from Unit Test Feedback
The goal of program synthesis, or code generation, is to generate executable code based on given descriptions.
Knowledge-Driven Robot Program Synthesis from Human VR Demonstrations
Aging societies, labor shortages and increasing wage costs call for assistance robots capable of autonomously performing a wide array of real-world tasks.
LambdaBeam: Neural Program Search with Higher-Order Functions and Lambdas
Search is an important technique in program synthesis that allows for adaptive strategies such as focusing on particular search directions based on execution results.
ANPL: Towards Natural Programming with Interactive Decomposition
We deploy ANPL on the Abstraction and Reasoning Corpus (ARC), a set of unique tasks that are challenging for state-of-the-art AI systems, showing it outperforms baseline programming systems that lack (a) the ability to decompose tasks interactively and (b) the guarantee that the modules can be correctly composed together.
Search-Based Regular Expression Inference on a GPU
Our main algorithmic idea is to implement the search space of regular expressions succinctly as a contiguous matrix of bitvectors.
Gorilla: Large Language Model Connected with Massive APIs
Large Language Models (LLMs) have seen an impressive wave of advances recently, with models now excelling in a variety of tasks, such as mathematical reasoning and program synthesis.
CRITIC: Large Language Models Can Self-Correct with Tool-Interactive Critiquing
Unlike these models, humans typically utilize external tools to cross-check and refine their initial content, like using a search engine for fact-checking, or a code interpreter for debugging.
Probabilistic Lexicase Selection
Lexicase selection is a widely used parent selection algorithm in genetic programming, known for its success in various task domains such as program synthesis, symbolic regression, and machine learning.
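The standard lexicase selection loop is short enough to sketch directly. This is a minimal illustration of the basic (non-probabilistic) algorithm, not the probabilistic variant the paper proposes; the function and variable names are invented here:

```python
import random

def lexicase_select(population, error_fn, num_cases, rng=random):
    """Basic lexicase parent selection: filter candidates through the
    test cases in a random order, keeping only those with the best
    (lowest) error on each case; break any remaining ties randomly.

    error_fn(individual, case_index) -> non-negative error.
    """
    candidates = list(population)
    cases = list(range(num_cases))
    rng.shuffle(cases)
    for case in cases:
        best = min(error_fn(ind, case) for ind in candidates)
        candidates = [ind for ind in candidates
                      if error_fn(ind, case) == best]
        if len(candidates) == 1:
            break
    return rng.choice(candidates)

# Toy usage: individuals are candidate constants, each test case is a
# target value, and error is the absolute difference.
targets = [3, 5, 7]
pop = [2, 3, 5, 7]
err = lambda ind, case: abs(ind - targets[case])
parent = lexicase_select(pop, err, num_cases=len(targets))
```

Because the case order is reshuffled on every call, different parents win different selections, which is what gives lexicase its pressure toward specialists rather than average-case generalists.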