Program Synthesis
137 papers with code • 3 benchmarks • 5 datasets
Program synthesis is the process of automatically generating a program or code snippet that satisfies a given specification or set of requirements. This can include generating code from a formal specification, a natural language description, or example inputs and outputs. The primary goal of program synthesis is to minimize human intervention in the coding process, reduce errors, and improve productivity.
Program synthesis often relies on advanced algorithms, artificial intelligence, and machine learning to search the space of candidate programs for one that meets the given constraints. The search can be guided by techniques such as constraint solving, symbolic execution, and genetic algorithms.
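As a concrete illustration, the example-based flavor of program synthesis can be sketched as a bottom-up enumerative search: enumerate expression trees in a tiny domain-specific language and return the first one consistent with all input-output examples. This is a minimal sketch, not any specific system's implementation; the DSL, the `evaluate`/`enumerate_exprs`/`synthesize` names, and the depth bound are all illustrative choices.

```python
# Minimal sketch of enumerative program synthesis over a toy DSL.
# All names and the DSL itself are illustrative, not from any library.
from itertools import product

# Toy DSL: a program is an expression tree over the input x.
LEAVES = ["x", 1, 2]
OPS = {
    "add": lambda a, b: a + b,
    "mul": lambda a, b: a * b,
}

def evaluate(expr, x):
    """Evaluate an expression tree on a concrete input value."""
    if expr == "x":
        return x
    if isinstance(expr, int):
        return expr
    op, left, right = expr
    return OPS[op](evaluate(left, x), evaluate(right, x))

def enumerate_exprs(depth):
    """Yield every expression tree up to the given depth."""
    if depth == 0:
        yield from LEAVES
        return
    yield from enumerate_exprs(depth - 1)
    subexprs = list(enumerate_exprs(depth - 1))
    for op in OPS:
        for left, right in product(subexprs, repeat=2):
            yield (op, left, right)

def synthesize(examples, max_depth=2):
    """Return the first program consistent with all (input, output) examples."""
    for expr in enumerate_exprs(max_depth):
        if all(evaluate(expr, x) == y for x, y in examples):
            return expr
    return None

# Specification given purely as input-output examples: f(x) = 2*x + 1
examples = [(0, 1), (1, 3), (3, 7)]
print(synthesize(examples))
```

Real synthesizers replace this brute-force enumeration with the guidance mentioned above (constraint solving, symbolic pruning, learned search policies), but the core loop of proposing candidates and checking them against the specification is the same.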
Libraries
Use these libraries to find Program Synthesis models and implementations.
Latest papers
WatChat: Explaining perplexing programs by debugging mental models
Often, a good explanation for a program's unexpected behavior is a bug in the programmer's code.
Constrained Decoding for Code Language Models via Efficient Left and Right Quotienting of Context-Sensitive Grammars
Large Language Models are powerful tools for program synthesis and advanced auto-completion, but come with no guarantee that their output code is syntactically correct.
HumanEval on Latest GPT Models -- 2024
In 2023, we are using the latest models of GPT-4 to advance program synthesis.
SwissNYF: Tool Grounded LLM Agents for Black Box Setting
Therefore, we harness the program synthesis capabilities of LLMs to strategize tool usage in black-box settings, ensuring solutions are verified prior to implementation.
Pix2Code: Learning to Compose Neural Visual Concepts as Programs
The challenge in learning abstract concepts from images in an unsupervised fashion lies in the required integration of visual perception and generalizable relational reasoning.
Opening the AI black box: program synthesis via mechanistic interpretability
We present MIPS, a novel method for program synthesis based on automated mechanistic interpretability of neural networks trained to perform the desired task, auto-distilling the learned algorithm into Python code.
ReGAL: Refactoring Programs to Discover Generalizable Abstractions
While large language models (LLMs) are increasingly being used for program synthesis, they lack the global view needed to develop useful abstractions; they generally predict programs one at a time, often repeating the same functionality.
DALex: Lexicase-like Selection via Diverse Aggregation
Lexicase selection has been shown to provide advantages over other selection algorithms in several areas of evolutionary computation and machine learning.
Analyzing the Effectiveness of Large Language Models on Text-to-SQL Synthesis
This study investigates various approaches to using Large Language Models (LLMs) for Text-to-SQL program synthesis, focusing on the outcomes and insights derived.
CodeScholar: Growing Idiomatic Code Examples
A tool that could generate realistic, idiomatic, and contextual usage examples for one or more APIs would be immensely beneficial to developers.