Search Results for author: Fatemeh H. Fard

Found 6 papers, 2 papers with code

Studying Vulnerable Code Entities in R

1 code implementation • 6 Feb 2024 • Zixiao Zhao, Millon Madhur Das, Fatemeh H. Fard

Pre-trained Code Language Models (Code-PLMs) have shown many advancements and achieved state-of-the-art results for many software engineering tasks in the past few years.

Code Summarization • Method Name Prediction

On The Cross-Modal Transfer from Natural Language to Code through Adapter Modules

1 code implementation • 19 Apr 2022 • Divyam Goel, Ramansh Grover, Fatemeh H. Fard

Although adapters are known to ease adaptation to many downstream tasks compared to fine-tuning, which requires retraining all of the model's parameters -- owing to their plug-and-play nature and parameter efficiency -- their use in software engineering has not been explored.

Clone Detection • Cloze Test • +1

On the Effectiveness of Pretrained Models for API Learning

no code implementations • 5 Apr 2022 • Mohammad Abdul Hadi, Imam Nur Bani Yusuf, Ferdian Thung, Kien Gia Luong, Jiang Lingxiao, Fatemeh H. Fard, David Lo

We have also identified two different tokenization approaches that can contribute to a significant boost in PTMs' performance for the API sequence generation task.

Information Retrieval • Language Modelling • +2

API2Com: On the Improvement of Automatically Generated Code Comments Using API Documentations

no code implementations • 19 Mar 2021 • Ramin Shahbazi, Rishab Sharma, Fatemeh H. Fard

However, as the number of APIs used in a method increases, the model's performance in generating comments decreases due to the long documentation included in the input.

Comment Generation • Machine Translation
