Program Repair

34 papers with code • 3 benchmarks • 8 datasets

The task of teaching ML models to modify an existing program in order to fix a bug in the given code.
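A minimal, purely illustrative sketch of the task (the function names and the bug are hypothetical, not taken from any paper below): the model receives a buggy program, typically together with a failing test, and must produce an edited version that passes.

```python
# Hypothetical example of the program repair task: map a buggy program to a
# fixed one, validated by re-running the project's unit tests.

buggy = """
def median(xs):
    xs = sorted(xs)
    return xs[len(xs) // 2]          # bug: wrong for even-length lists
"""

fixed = """
def median(xs):
    xs = sorted(xs)
    n = len(xs)
    if n % 2 == 1:
        return xs[n // 2]
    return (xs[n // 2 - 1] + xs[n // 2]) / 2
"""

# A repair model is trained (or prompted) to transform `buggy` into `fixed`.
```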

Latest papers with no code

Enhancing Genetic Improvement Mutations Using Large Language Models

no code yet • 18 Oct 2023

We find that the number of patches passing unit tests is up to 75% higher with LLM-based edits than with standard Insert edits.
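A hedged sketch of how an LLM-based edit operator could sit alongside a classic Insert edit in a genetic improvement loop; the function names, prompt, and 50/50 mixing are assumptions for illustration, not the paper's implementation.

```python
import random

def insert_edit(program_lines):
    """Standard GI 'Insert' edit: copy a random line to a random position."""
    src = random.randrange(len(program_lines))
    dst = random.randrange(len(program_lines) + 1)
    mutated = list(program_lines)
    mutated.insert(dst, program_lines[src])
    return mutated

def llm_edit(program_lines, llm):
    """LLM-based edit operator: ask a language model to rewrite the snippet.
    `llm` is any callable prompt -> text; a placeholder for the real setup."""
    snippet = "\n".join(program_lines)
    prompt = f"Improve this code without changing its behaviour:\n{snippet}\n"
    return llm(prompt).splitlines()

def mutate(program_lines, llm=None):
    # The GI search can mix classic edits with LLM-proposed edits.
    if llm is not None and random.random() < 0.5:
        return llm_edit(program_lines, llm)
    return insert_edit(program_lines)
```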

Automated Bug Generation in the era of Large Language Models

no code yet • 3 Oct 2023

From the classic software engineering point of view, a hard-to-repair bug differs from the correct code in multiple locations, making it hard to localize and repair.
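An illustrative example (not from the paper) of what "differs in multiple locations" means in practice: both faulty lines below must be changed together, so a single-location fix still fails the tests.

```python
def mean_of_positives_buggy(xs):
    total, count = 0, 1            # location 1: count should start at 0
    for x in xs:
        if x >= 0:                 # location 2: should be x > 0
            total += x
            count += 1
    return total / count           # both edits are required for correct output
```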

Program Repair with Minimal Edits Using CodeT5

no code yet • 26 Sep 2023

The experimental results show that the fine-tuned CodeT5 achieves a pass@100 of 91.95% and an average edit distance to the most similar correct program of 6.84, which indicates that at least one correct program can be suggested by generating 100 candidate programs.
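A hedged sketch of the "minimal edit" measurement described above: among the sampled candidates that pass the tests, take the edit distance of the one closest to the buggy input. Character-level Levenshtein distance is an assumption here; the paper's exact granularity is not reproduced.

```python
from typing import Callable, List, Optional

def edit_distance(a: str, b: str) -> int:
    """Character-level Levenshtein distance between two programs."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def min_edit_of_correct(buggy: str, candidates: List[str],
                        passes_tests: Callable[[str], bool]) -> Optional[int]:
    """Edit distance of the most similar correct program among the samples."""
    correct = [c for c in candidates if passes_tests(c)]
    if not correct:
        return None   # none of the (e.g. 100) candidates passed the tests
    return min(edit_distance(buggy, c) for c in correct)
```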

Frustrated with Code Quality Issues? LLMs can Help!

no code yet • 22 Sep 2023

We present a tool, CORE (short for COde REvisions), architected as a duo of LLMs: a proposer and a ranker.
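A hedged sketch of the general proposer/ranker pattern named above; the prompts, scoring scale, and function names are placeholders for illustration, not CORE's actual pipeline.

```python
def propose_revisions(code, issue, proposer_llm, n=10):
    """Proposer LLM: generate candidate revisions for a flagged quality issue."""
    prompt = f"Code quality issue: {issue}\nRevise the following code to fix it:\n{code}"
    return [proposer_llm(prompt) for _ in range(n)]

def rank_revisions(code, issue, candidates, ranker_llm):
    """Ranker LLM: score each candidate so only the most plausible revisions
    are surfaced to the developer."""
    def score(candidate):
        prompt = (f"Original code:\n{code}\n"
                  f"Issue: {issue}\n"
                  f"Proposed revision:\n{candidate}\n"
                  "On a scale of 1-10, how well does the revision fix the issue?")
        return float(ranker_llm(prompt))   # assumes the ranker replies with a number
    return sorted(candidates, key=score, reverse=True)
```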

RAP-Gen: Retrieval-Augmented Patch Generation with CodeT5 for Automatic Program Repair

no code yet • 12 Sep 2023

Automatic program repair (APR) is crucial to reduce manual debugging efforts for developers and improve software reliability.
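A hedged sketch of the retrieval-augmented idea named in the title: retrieve similar bug-fix pairs and prepend them to the input of a seq2seq patch generator such as CodeT5. The toy lexical retriever and input format below are assumptions, not RAP-Gen's actual components.

```python
def retrieve_similar_fixes(buggy_code, fix_bank, k=3):
    """Toy lexical retriever: return the k (bug, fix) pairs whose buggy side
    shares the most tokens with the query program."""
    query = set(buggy_code.split())
    scored = sorted(fix_bank,
                    key=lambda pair: len(query & set(pair[0].split())),
                    reverse=True)
    return scored[:k]

def build_retrieval_augmented_input(buggy_code, fix_bank):
    """Prepend retrieved bug-fix exemplars before feeding the buggy code to a
    seq2seq patch generator."""
    exemplars = retrieve_similar_fixes(buggy_code, fix_bank)
    context = "\n".join(f"BUG: {b}\nFIX: {f}" for b, f in exemplars)
    return f"{context}\nBUG: {buggy_code}\nFIX:"
```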

An Exploratory Literature Study on Sharing and Energy Use of Language Models for Source Code

no code yet • 5 Jul 2023

Large language models trained on source code can support a variety of software development tasks, such as code recommendation and program repair.

Better patching using LLM prompting, via Self-Consistency

no code yet • 31 May 2023

Large Language Models (LLMs) can be induced to solve non-trivial problems with "few-shot" prompts that include illustrative problem-solution examples.
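A hedged sketch of self-consistency applied to patching: sample several candidate patches at nonzero temperature and keep the one the model produces most often. The normalization and sample count are illustrative choices, not the paper's exact protocol.

```python
from collections import Counter

def self_consistent_patch(prompt, llm_sample, n=20):
    """Sample n candidate patches from the LLM (temperature > 0) and return
    the most frequently generated one: a simple self-consistency vote."""
    samples = [llm_sample(prompt) for _ in range(n)]
    normalized = [s.strip() for s in samples]
    patch, _count = Counter(normalized).most_common(1)[0]
    return patch
```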

Is ChatGPT the Ultimate Programming Assistant -- How far is it?

no code yet • 24 Apr 2023

To assess the feasibility of using an LLM as a useful assistant bot for programmers, we must assess its realistic capabilities on unseen problems as well as on a variety of tasks.

Fully Autonomous Programming with Large Language Models

no code yet • 20 Apr 2023

Current approaches to program synthesis with Large Language Models (LLMs) exhibit a "near miss syndrome": they tend to generate programs that semantically resemble the correct answer (as measured by text similarity metrics or human evaluation), but achieve a low or even zero accuracy as measured by unit tests due to small imperfections, such as the wrong input or output format.
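A hedged sketch of why the "near miss syndrome" matters for evaluation: a candidate can score high on surface-level text similarity yet fail functional checks, because unit tests demand exact behaviour. The `solve` entry point and exec-based harness are hypothetical, for illustration only.

```python
import difflib

def text_similarity(candidate: str, reference: str) -> float:
    """Surface-level similarity: what 'near miss' programs score high on."""
    return difflib.SequenceMatcher(None, candidate, reference).ratio()

def passes_unit_tests(candidate: str, tests) -> bool:
    """Functional correctness: the program must return exactly the expected
    output for every test case, so a small format error means failure."""
    namespace = {}
    try:
        exec(candidate, namespace)   # hypothetical: candidate defines solve()
        return all(namespace["solve"](x) == y for x, y in tests)
    except Exception:
        return False
```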

Enhancing Automated Program Repair through Fine-tuning and Prompt Engineering

no code yet • 16 Apr 2023

We applied PLBART and CodeT5, two state-of-the-art language models pre-trained on both programming languages (PL) and natural language (NL), to two such natural-language-based program repair datasets and found that the models fine-tuned on datasets containing both code reviews and the subsequent code changes notably outperformed each of the previous models.
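A hedged sketch of what fine-tuning CodeT5 on such review-plus-change pairs could look like with Hugging Face Transformers; the input format, training pair, and hyperparameters are placeholders, not the paper's setup.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Salesforce/codet5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("Salesforce/codet5-base")
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

# Hypothetical training pair: the natural-language review plus the old code
# form the source sequence; the revised code is the target sequence.
source = "review: handle empty input. code: def mean(xs): return sum(xs)/len(xs)"
target = "def mean(xs):\n    return sum(xs) / len(xs) if xs else 0.0"

inputs = tokenizer(source, return_tensors="pt", truncation=True, max_length=512)
labels = tokenizer(target, return_tensors="pt", truncation=True, max_length=512).input_ids

model.train()
loss = model(**inputs, labels=labels).loss   # standard seq2seq cross-entropy
loss.backward()
optimizer.step()
```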