Variable misuse

9 papers with code • 0 benchmarks • 0 datasets

Variable misuse is a program-repair task: given a program, detect locations where the wrong (but in-scope) variable is used and predict the correct variable to repair it.
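For context, a minimal example of a variable-misuse bug (hypothetical code, not from any of the listed papers):

```python
# A variable-misuse bug: the programmer used an in-scope variable,
# so the code runs, but it is the wrong one.

def average_buggy(values):
    total = 0
    count = 0
    for v in values:
        total += v
        count += 1
    return total / total  # bug: should be `total / count`

def average_fixed(values):
    total = 0
    count = 0
    for v in values:
        total += v
        count += 1
    return total / count  # repaired
```

Models for this task must both localize the misused token (`total` in the return statement) and point to the repair (`count`).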


Most implemented papers

Neural Program Repair by Jointly Learning to Localize and Repair

mdrafiqulrabin/SIVAND ICLR 2019

We show that it is beneficial to train a model that jointly and directly localizes and repairs variable-misuse bugs.
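A joint localize-and-repair model of this kind can be sketched with two pointer heads: one distribution over token positions (where is the bug?) and one over candidate variables (what is the fix?). The sketch below is illustrative only; the logits are hard-coded stand-ins for a trained network's outputs, and all names are hypothetical:

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of logits.
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def localize_and_repair(tokens, loc_logits, repair_logits, candidates):
    # Head 1: probability that each token position is the misuse site.
    loc_probs = softmax(loc_logits)
    # Head 2: probability that each in-scope variable is the repair.
    rep_probs = softmax(repair_logits)
    bug_idx = max(range(len(tokens)), key=lambda i: loc_probs[i])
    fix = candidates[max(range(len(candidates)), key=lambda i: rep_probs[i])]
    return bug_idx, fix

tokens = ["return", "total", "/", "total"]
bug_idx, fix = localize_and_repair(
    tokens,
    loc_logits=[0.1, 0.2, 0.1, 2.5],  # stand-in scores: last token is suspect
    repair_logits=[0.3, 2.0],         # stand-in scores: prefer `count`
    candidates=["total", "count"],
)
# localizes the final `total` and proposes `count` as the repair
```

Training the two heads jointly, as the paper argues, lets localization and repair share evidence rather than being solved in separate passes.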

Learning and Evaluating Contextual Embedding of Source Code

google-research/google-research ICML 2020

We fine-tune CuBERT on our benchmark tasks and compare the resulting models to different variants of Word2Vec token embeddings, BiLSTM and Transformer models, and published state-of-the-art models, showing that CuBERT outperforms them all, even with shorter training and fewer labeled examples.

Understanding Neural Code Intelligence Through Program Simplification

mdrafiqulrabin/SIVAND 7 Jun 2021

Our approach, SIVAND, uses simplification techniques that reduce the size of input programs of a CI model while preserving the predictions of the model.
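The core idea, prediction-preserving simplification, can be sketched as a delta-debugging-style loop that greedily drops tokens while the model's prediction stays unchanged. This is a simplified illustration, not SIVAND's actual algorithm; `toy_predict` is a stand-in for a real code-intelligence model:

```python
def simplify(tokens, predict):
    # Greedily remove one token at a time, keeping only removals
    # that preserve the model's original prediction.
    target = predict(tokens)
    changed = True
    while changed:
        changed = False
        for i in range(len(tokens)):
            candidate = tokens[:i] + tokens[i + 1:]
            if candidate and predict(candidate) == target:
                tokens = candidate
                changed = True
                break  # restart the scan on the reduced input
    return tokens

def toy_predict(tokens):
    # Toy "model": predicts the first identifier that is not a keyword.
    names = [t for t in tokens if t.isidentifier() and t not in ("def", "return")]
    return names[0] if names else None

reduced = simplify(["def", "f", "(", "x", ")", ":", "return", "x"], toy_predict)
# reduces to ["f"]: the minimal input on which the toy model still predicts "f"
```

The reduced input exposes which parts of the program the model actually relies on for its prediction.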

Memorization and Generalization in Neural Code Intelligence Models

uh-serg/ci-memorization 16 Jun 2021

The goal of this paper is to evaluate and compare the extent of memorization and generalization in neural code intelligence models.

Global Relational Models of Source Code

VHellendoorn/ICLR20-Great ICLR 2020

By studying a popular, non-trivial program repair task, variable-misuse identification, we explore the relative merits of traditional and hybrid model families for code representation.

Learning Graph Structure With A Finite-State Automaton Layer

google-research/google-research NeurIPS 2020

In practice, edges are used both to represent intrinsic structure (e.g., abstract syntax trees of programs) and more abstract relations that aid reasoning for a downstream task (e.g., results of relevant program analyses).

CodeTrek: Flexible Modeling of Code using an Extensible Relational Representation

ppashakhanloo/CodeTrek ICLR 2022

Designing a suitable representation for code-reasoning tasks is challenging in aspects such as the kinds of program information to model, how to combine them, and how much context to consider.

Graph Conditioned Sparse-Attention for Improved Source Code Understanding

chengjunyan1/graph-sparse-transformer 1 Dec 2021

Fusing a graph representation such as an Abstract Syntax Tree (AST) with the source code token sequence makes current approaches computationally intractable for large input sequence lengths.

Probing Pretrained Models of Source Code

serjtroshin/probings4code 16 Feb 2022

Deep learning models are widely used for solving challenging code processing tasks, such as code generation or code summarization.