Code Generation
331 papers with code • 15 benchmarks • 43 datasets
Code Generation is the task of predicting explicit code or program structure from multimodal sources such as incomplete code, programs in another programming language, natural language descriptions, or execution examples. Code generation tools can support the development of automatic programming systems and improve programming productivity.
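To make the task concrete, here is a toy sketch of natural-language-to-code generation: a template lookup that maps a description to Python source, then executes the generated code to validate it. This is an illustration of the task's input/output shape only, not any model from the papers listed here; the `TEMPLATES` table and `generate_code` function are invented for this example.

```python
# Toy natural-language-to-code generator (illustrative only).
# Maps a short description to Python source via a template table.

TEMPLATES = {
    "add two numbers": "def add(a, b):\n    return a + b",
    "reverse a string": "def reverse(s):\n    return s[::-1]",
}

def generate_code(description: str) -> str:
    """Return source code for a known description, else a stub."""
    key = description.lower().strip()
    if key in TEMPLATES:
        return TEMPLATES[key]
    # Fall back to an empty stub when the description is unknown.
    return "def todo():\n    raise NotImplementedError"

# Execute the generated code to check that it actually runs --
# execution-based checking is how benchmarks like APPS score outputs.
namespace = {}
exec(generate_code("add two numbers"), namespace)
print(namespace["add"](2, 3))  # -> 5
```

Real systems replace the template table with a learned model, but the evaluation loop (generate, execute, check the result) is the same shape.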
Source: Deep Learning for Source Code Modeling and Generation
Image source: Measuring Coding Challenge Competence With APPS
Libraries
Use these libraries to find Code Generation models and implementations
Subtasks
Latest papers with no code
Beyond Code Generation: An Observational Study of ChatGPT Usage in Software Engineering Practice
Large Language Models (LLMs) are frequently discussed in academia and the general public as support tools for virtually any use case that relies on the production of text, including software engineering.
Assessing GPT-4-Vision's Capabilities in UML-Based Code Generation
On average, the model was able to generate source code for 88% of the elements shown in the diagrams.
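UML-to-code generation can be sketched mechanically: below is a toy translator from a simplified PlantUML-like class description to a Python class stub. This is not GPT-4-Vision's pipeline (which consumes diagram images); the `uml_to_python` function and its mini-syntax are assumptions made for illustration.

```python
import re

def uml_to_python(uml: str) -> str:
    """Translate a minimal PlantUML-like class description into a Python stub.

    Supports 'class Name' lines followed by indented 'field: type' and
    'method()' entries. A toy illustration of diagram-to-code generation.
    """
    out = []
    for line in uml.strip().splitlines():
        entry = line.strip()
        m = re.match(r"class (\w+)$", entry)
        if m:
            out.append(f"class {m.group(1)}:")
        elif entry.endswith("()"):
            out.append(f"    def {entry[:-2]}(self):")
            out.append("        pass")
        elif ":" in entry:
            name, _, typ = entry.partition(":")
            out.append(f"    {name.strip()}: {typ.strip()} = None")
    return "\n".join(out)

spec = """
class Account
  balance: float
  deposit()
"""
print(uml_to_python(spec))
```

An LLM-based system handles arbitrary diagram layouts and richer element types; coverage metrics like the 88% figure above count how many diagram elements end up represented in the generated source.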
Large Language Models as Test Case Generators: Performance Evaluation and Enhancement
As a complementary aspect to code generation, test case generation is of crucial importance in ensuring the quality and reliability of code.
Low-Cost Language Models: Survey and Performance Evaluation on Python Code Generation
Large Language Models (LLMs) have become the go-to solution for many Natural Language Processing (NLP) tasks due to their ability to tackle various problems and produce high-quality results.
Is DPO Superior to PPO for LLM Alignment? A Comprehensive Study
However, in academic benchmarks, state-of-the-art results are often achieved via reward-free methods, such as Direct Preference Optimization (DPO).
Quality Assessment of Prompts Used in Code Generation
We found that code generation evaluation benchmarks mainly focused on Python and coding exercises and had very limited contextual dependencies to challenge the model.
Test Code Generation for Telecom Software Systems using Two-Stage Generative Model
In recent years, the evolution of Telecom towards intelligent, autonomous, and open networks has led to increasingly complex Telecom software systems that support heterogeneous deployment scenarios with multi-standard and multi-vendor requirements.
CreativEval: Evaluating Creativity of LLM-Based Hardware Code Generation
Large Language Models (LLMs) have proved effective and efficient in generating code, leading to their utilization within the hardware design process.
A Multi-Expert Large Language Model Architecture for Verilog Code Generation
Recently, there has been a surging interest in using large language models (LLMs) for Verilog code generation.
Register Your Forests: Decision Tree Ensemble Optimization by Explicit CPU Register Allocation
Extensive evaluations compare the proposed method against the baseline of generating C code from the high-level machine learning model and then compiling it.