Search Results for author: Peng Liang

Found 10 papers, 4 papers with code

On Unified Prompt Tuning for Request Quality Assurance in Public Code Review

no code implementations • 11 Apr 2024 • Xinyu Chen, Lin Li, Rui Zhang, Peng Liang

Public Code Review (PCR) can be implemented through a Software Question Answering (SQA) community, which facilitates broad knowledge dissemination.

Language Modelling • Question Answering

Security Code Review by LLMs: A Deep Dive into Responses

no code implementations • 29 Jan 2024 • Jiaxin Yu, Peng Liang, Yujia Fu, Amjed Tahir, Mojtaba Shahin, Chong Wang, Yangxiao Cai

To explore the challenges of applying LLMs to practical code review for security defect detection, this study compared the detection performance of three state-of-the-art LLMs (Gemini Pro, GPT-4, and GPT-3.5) under five prompts on 549 code files containing security defects from real-world code reviews.

Defect Detection

Copilot Refinement: Addressing Code Smells in Copilot-Generated Python Code

no code implementations • 25 Jan 2024 • Beiqi Zhang, Peng Liang, Qiong Feng, Yujia Fu, Zengyang Li

The results show that 8 out of 10 types of Python smells can be detected in Copilot-generated Python code, among which Multiply-Nested Container is the most common one.

Code Generation

A Study of Fairness Concerns in AI-based Mobile App Reviews

no code implementations • 16 Jan 2024 • Ali Rezaei Nasab, Maedeh Dashti, Mojtaba Shahin, Mansooreh Zahedi, Hourieh Khalajzadeh, Chetan Arora, Peng Liang

Finally, the manual analysis of 2,248 app owners' responses to the fairness reviews identified six root causes (e.g., 'copyright issues') that app owners report to justify fairness concerns.

Fairness

An Exploratory Study on Automatic Identification of Assumptions in the Development of Deep Learning Frameworks

1 code implementation • 8 Jan 2024 • Chen Yang, Peng Liang, Zinan Ma

To overcome the issues of manually identifying assumptions in DL framework development, we constructed a new and largest-to-date dataset (i.e., AssuEval) of assumptions collected from the TensorFlow and Keras repositories on GitHub, and explored the performance of seven traditional machine learning models (e.g., Support Vector Machine, Classification and Regression Trees), a popular DL model (i.e., ALBERT), and a large language model (i.e., ChatGPT) in identifying assumptions on the AssuEval dataset.

Language Modelling • Large Language Model

DCANet: Dual Convolutional Neural Network with Attention for Image Blind Denoising

1 code implementation • 4 Apr 2023 • Wencong Wu, Guannan Lv, Yingying Duan, Peng Liang, Yungang Zhang, Yuelong Xia

In this paper, we present a new dual convolutional neural network (CNN) with attention for image blind denoising, named DCANet.

Image Denoising • Noise Estimation

Understanding Bugs in Multi-Language Deep Learning Frameworks

no code implementations • 5 Mar 2023 • Zengyang Li, Sicheng Wang, Wenshuo Wang, Peng Liang, Ran Mo, Bing Li

Third, we found that 28.6%, 31.4%, and 16.0% of bugs in MXNet, PyTorch, and TensorFlow are MPL bugs, respectively; the PL combination of Python and C/C++ is the most used, appearing in fixes for more than 92% of MPL bugs across all DLFs.
