Search Results for author: Sarah Fakhoury

Found 5 papers, 2 papers with code

Ranking LLM-Generated Loop Invariants for Program Verification

1 code implementation • 13 Oct 2023 • Saikat Chakraborty, Shuvendu K. Lahiri, Sarah Fakhoury, Madanlal Musuvathi, Akash Lal, Aseem Rastogi, Aditya Senthilnathan, Rahul Sharma, Nikhil Swamy

In this work, we observe that Large Language Models (such as gpt-3.5 or gpt-4) are capable of synthesizing loop invariants for a class of programs in a 0-shot setting, yet require several samples to generate the correct invariants.

Re-Ranking
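A minimal sketch of the rank-then-verify idea from the entry above: sample many candidate invariants, order them by a cheap heuristic, and call the expensive verifier in that order. The frequency/brevity heuristic and the toy checker below are illustrative assumptions, not the paper's learned ranker.

```python
from collections import Counter

def rank_candidates(samples: list[str]) -> list[str]:
    """Order candidates: frequently sampled first, shorter breaks ties."""
    counts = Counter(samples)
    return sorted(counts, key=lambda inv: (-counts[inv], len(inv)))

def first_verified(samples: list[str], check_invariant) -> str | None:
    """Return the first candidate the verifier accepts, minimizing verifier calls."""
    for inv in rank_candidates(samples):
        if check_invariant(inv):  # in practice, a solver-backed check
            return inv
    return None

samples = ["x >= 0", "x >= 0", "i <= n", "x >= 0 and i <= n"]
# Toy stand-in for a real verifier call:
print(first_verified(samples, lambda inv: "i <= n" in inv))  # -> "i <= n"
```

The point of ranking is purely economic: verifier calls dominate the cost, so any ordering that pushes likely-correct candidates forward reduces the number of checks before a correct invariant is found.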

Can Large Language Models Transform Natural Language Intent into Formal Method Postconditions?

no code implementations • 3 Oct 2023 • Madeline Endres, Sarah Fakhoury, Saikat Chakraborty, Shuvendu K. Lahiri

The emergent abilities of Large Language Models (LLMs) have the potential to facilitate the translation of natural language intent to programmatically checkable assertions.

Fault localization • Translation
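A minimal sketch of what "programmatically checkable" means for the paper above, for the assumed intent "return the largest element of a non-empty list". The postcondition and the implementation under test are illustrative assumptions, not examples from the paper's benchmark.

```python
def postcondition(xs: list[int], out: int) -> bool:
    """Formalized intent: the output is a maximal element of the input."""
    return out in xs and all(out >= x for x in xs)

def candidate(xs: list[int]) -> int:
    """Hypothetical LLM-generated implementation under test."""
    return sorted(xs)[-1]

for xs in ([3, 1, 2], [5], [-1, -7]):
    assert postcondition(xs, candidate(xs))
print("postcondition holds on all sampled inputs")
```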

Towards Generating Functionally Correct Code Edits from Natural Language Issue Descriptions

no code implementations • 7 Apr 2023 • Sarah Fakhoury, Saikat Chakraborty, Madan Musuvathi, Shuvendu K. Lahiri

Several benchmarks have recently emerged to evaluate the ability of LLMs to generate functionally correct code from natural language intent with respect to a set of hidden test cases.
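A minimal sketch of hidden-test evaluation as such benchmarks use it: the model sees only the natural-language intent (here, 'add two integers'), while grading runs tests the model never saw. The solution and the test set are illustrative assumptions.

```python
def generated_solution(a: int, b: int) -> int:
    """Stand-in for LLM-generated code for the intent 'add two integers'."""
    return a + b

hidden_tests = [((1, 2), 3), ((0, 0), 0), ((-4, 4), 0)]  # withheld from the model

passed = sum(generated_solution(*args) == want for args, want in hidden_tests)
print(f"passed {passed}/{len(hidden_tests)} hidden tests")
```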

Interactive Code Generation via Test-Driven User-Intent Formalization

no code implementations • 11 Aug 2022 • Shuvendu K. Lahiri, Sarah Fakhoury, Aaditya Naik, Georgios Sakkas, Saikat Chakraborty, Madanlal Musuvathi, Piali Choudhury, Curtis von Veh, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao

Large language models (LLMs) have shown great potential in automating significant aspects of coding by producing natural code from informal natural language (NL) intent.

Code Generation
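A minimal sketch of the test-driven idea in the entry above: generated tests act as a checkable proxy for user intent, and only candidate implementations consistent with a user-approved test survive. The candidates and the approved test are illustrative assumptions, not the paper's actual interaction loop.

```python
candidates = {
    "reverse": lambda s: s[::-1],   # reverses the string
    "upper":   lambda s: s.upper(), # upper-cases it
}

def approved_test(f) -> bool:
    """A generated test the user confirmed matches their intent."""
    return f("abc") == "cba"

surviving = [name for name, f in candidates.items() if approved_test(f)]
print(f"candidates consistent with the approved test: {surviving}")  # ['reverse']
```

Approving a concrete test is cheaper for the user than reading every candidate, which is what makes the interaction loop practical.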

Program Merge Conflict Resolution via Neural Transformers

1 code implementation • 31 Aug 2021 • Alexey Svyatkovskiy, Sarah Fakhoury, Negar Ghorbani, Todd Mytkowicz, Elizabeth Dinella, Christian Bird, Jinu Jang, Neel Sundaresan, Shuvendu Lahiri

Our model achieves 63-68% accuracy for merge resolution synthesis, yielding nearly a 3x performance improvement over existing semi-structured tools and a 2x improvement over neural program merge tools.
