Contextual Embedding for Source Code
2 papers with code • 0 benchmarks • 3 datasets
Most implemented papers
Learning and Evaluating Contextual Embedding of Source Code
We fine-tune CuBERT on our benchmark tasks and compare the resulting models against several variants of Word2Vec token embeddings, BiLSTM and Transformer models, and published state-of-the-art models, showing that CuBERT outperforms them all, even with shorter training and fewer labeled examples.
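The fine-tuning recipe is the standard BERT one: a pretrained encoder with a small classification head, trained end-to-end on the labeled task. Below is a minimal sketch of that pattern using the Hugging Face `transformers` API; the checkpoint name is a stand-in (the released CuBERT checkpoints are distributed separately, not under this ID), and the toy variable-misuse label only mimics one of the paper's benchmark tasks.

```python
# Fine-tuning sketch for a BERT-style code encoder on a binary task.
# NOTE: "bert-base-uncased" is a placeholder; substitute a code-pretrained
# encoder such as a converted CuBERT checkpoint if you have one.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

checkpoint = "bert-base-uncased"  # placeholder, not the actual CuBERT weights
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

# A toy labeled example: does this snippet misuse a variable? (1 = yes)
source = "def add(a, b):\n    return a - b"
batch = tokenizer(source, truncation=True, return_tensors="pt")
labels = torch.tensor([1])

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
outputs = model(**batch, labels=labels)  # cross-entropy loss on the pooled [CLS] head
outputs.loss.backward()
optimizer.step()
```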
CodeTrans: Towards Cracking the Language of Silicon's Code Through Self-Supervised Deep Learning and High Performance Computing
At the same time, the transformer model, especially in combination with transfer learning, has proven to be a powerful technique for natural language processing tasks.
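CodeTrans applies this encoder-decoder transfer-learning recipe to software engineering tasks such as code documentation generation. A minimal sketch of the pattern follows, using a generic `t5-small` checkpoint as a stand-in rather than the released CodeTrans weights; the `"summarize: "` prompt prefix is illustrative, not the exact format those models were trained with.

```python
# Transfer-learning sketch: a pretrained encoder-decoder transformer
# generating a natural-language summary for a code snippet.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

checkpoint = "t5-small"  # placeholder for a code-pretrained seq2seq model
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

code = "def square(x):\n    return x * x"
inputs = tokenizer("summarize: " + code, return_tensors="pt")
summary_ids = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```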