1 code implementation • 6 Mar 2024 • Tingxu Han, Shenghan Huang, Ziqi Ding, Weisong Sun, Yebo Feng, Chunrong Fang, Jun Li, Hanwei Qian, Cong Wu, Quanjun Zhang, Yang Liu, Zhenyu Chen
Knowledge distillation aims to extract knowledge from a given model (a.k.a. the teacher network) and transfer it to another (a.k.a. the student network).
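The teacher-to-student transfer described above is commonly implemented by matching the student's output distribution to the teacher's temperature-softened outputs; the sketch below shows that generic soft-target loss (Hinton-style distillation), not the specific method of this paper. All names and the temperature value are illustrative assumptions.

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; a higher T softens the distribution,
    # exposing the teacher's "dark knowledge" about non-target classes.
    z = np.asarray(logits, dtype=float) / T
    e = np.exp(z - z.max())
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # KL divergence between the teacher's and the student's softened
    # output distributions, rescaled by T^2 as is conventional.
    p = softmax(teacher_logits, T)  # teacher soft targets
    q = softmax(student_logits, T)  # student predictions
    return float(np.sum(p * (np.log(p) - np.log(q)))) * T * T
```

The loss is zero when the student reproduces the teacher's logits exactly and grows as the two distributions diverge; in practice it is mixed with a standard cross-entropy term on the ground-truth labels.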
1 code implementation • 26 Dec 2023 • Weisong Sun, Chunrong Fang, Yudu You, Yuchen Chen, Yi Liu, Chong Wang, Jian Zhang, Quanjun Zhang, Hanwei Qian, Wei Zhao, Yang Liu, Zhenyu Chen
PromptCS trains a prompt agent that can generate continuous prompts to unleash the potential of LLMs in code summarization.
1 code implementation • 5 Dec 2023 • Xiaoqi Zhao, Youwei Pang, Zhenyu Chen, Qian Yu, Lihe Zhang, Hanqi Liu, Jiaming Zuo, Huchuan Lu
We conduct a comprehensive study on a new task named power battery detection (PBD), which aims to localize the endpoints of the dense cathode and anode plates in X-ray images to evaluate the quality of power batteries.
1 code implementation • 1 Dec 2023 • Weisong Sun, Chunrong Fang, Yun Miao, Yudu You, Mengzhe Yuan, Yuchen Chen, Quanjun Zhang, An Guo, Xiang Chen, Yang Liu, Zhenyu Chen
To do so, we compare the performance of models trained with code-token-sequence-based (Token for short) code representation and AST-based code representation on three popular types of code-related tasks.
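The two code representations compared above can be illustrated with Python's standard-library tooling: a Token representation is the flat lexer output, while an AST representation is derived from the parse tree. This is a minimal sketch of the two input formats, not the paper's experimental pipeline; the function names and the sample snippet are assumptions.

```python
import ast
import io
import tokenize

SRC = "def add(a, b):\n    return a + b\n"

def token_sequence(src):
    # Token representation: the flat sequence of lexical tokens,
    # as a model would consume plain source text.
    toks = tokenize.generate_tokens(io.StringIO(src).readline)
    return [t.string for t in toks if t.string.strip()]

def ast_node_sequence(src):
    # AST representation: node types from a pre-order traversal,
    # exposing syntactic structure the token stream leaves implicit.
    return [type(node).__name__ for node in ast.walk(ast.parse(src))]
```

For `SRC`, the token sequence contains surface strings like `def` and `return`, while the AST sequence contains structural node types like `FunctionDef` and `BinOp`; the choice between them is exactly the trade-off the study above examines.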
1 code implementation • 10 Nov 2023 • Zixiang Xian, Rubing Huang, Dave Towey, Chunrong Fang, Zhenyu Chen
Our framework has several advantages over existing methods: (1) it is flexible and adaptable, as it can easily be extended to other downstream tasks that require code representation (such as code-clone detection and classification); (2) it is efficient and scalable, as it requires neither a large model nor a large amount of training data, and it can support any programming language; (3) it is not limited to unsupervised learning and can also be applied to supervised learning tasks by incorporating task-specific labels or objectives; and (4) it can adjust the number of encoder parameters based on available computing resources.
no code implementations • 6 Nov 2023 • Fuyun Wang, Xingyu Gao, Zhenyu Chen, Lei Lyu
CM-GNN further introduces an attention-based fusion module to learn pairwise relation-based session representation by fusing the item representations generated by L-GCN and G-GCN.
1 code implementation • 26 Jul 2023 • Jiawen Zhu, Zhenyu Chen, Zeqi Hao, Shijie Chang, Lu Zhang, Dong Wang, Huchuan Lu, Bin Luo, Jun-Yan He, Jin-Peng Lan, Hanyuan Chen, Chenyang Li
To further improve the quality of tracking masks, a pretrained MR model is employed to refine the tracking results.
Ranked #5 on Semi-Supervised Video Object Segmentation on YouTube-VOS 2019 (using extra training data)
no code implementations • 6 Jun 2023 • Xinyu Gao, Zhijie Wang, Yang Feng, Lei Ma, Zhenyu Chen, Baowen Xu
Multi-Sensor Fusion (MSF) based perception systems have been the foundation in supporting many industrial applications and domains, such as self-driving cars, robotic arms, and unmanned aerial vehicles.
1 code implementation • 4 Jun 2023 • Shijie Chang, Zeqi Hao, Ben Kang, Xiaoqi Zhao, Jiawen Zhu, Zhenyu Chen, Lihe Zhang, Lu Zhang, Huchuan Lu
In this paper, we introduce the 3rd-place solution for the PVUW2023 VSS track.
no code implementations • 22 May 2023 • Weisong Sun, Chunrong Fang, Yudu You, Yun Miao, Yi Liu, Yuekang Li, Gelei Deng, Shenghan Huang, Yuchen Chen, Quanjun Zhang, Hanwei Qian, Yang Liu, Zhenyu Chen
To support software developers in understanding and maintaining programs, various automatic code summarization techniques have been proposed to generate a concise natural language comment for a given code snippet.
no code implementations • 18 Mar 2023 • Jiayang Bai, Zhen He, Shan Yang, Jie Guo, Zhenyu Chen, Yan Zhang, Yanwen Guo
Recent methods mostly rely on convolutional neural networks (CNNs) to fill the missing contents in the warped panorama.
1 code implementation • 13 Nov 2022 • Yuan Xiao, Tongtong Bai, Mingzheng Gu, Chunrong Fang, Zhenyu Chen
The robustness of neural network classifiers is becoming important in the safety-critical domain and can be quantified by robustness verification.
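One simple way to quantify the robustness mentioned above is interval bound propagation: push an L-infinity ball around an input through the network and check whether the true class provably wins for every point in the ball. This is a generic sketch of that idea for a two-layer ReLU network, assuming hypothetical weights, not the verification technique of this particular paper.

```python
import numpy as np

def interval_linear(lo, hi, W, b):
    # Propagate the interval [lo, hi] through y = W x + b.
    # Positive weights map lo -> lo and hi -> hi; negative weights swap them.
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def interval_relu(lo, hi):
    # ReLU is monotone, so it applies elementwise to both bounds.
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

def certify(x, eps, W1, b1, W2, b2, true_class):
    # Returns True if every input in the L-infinity ball of radius eps
    # around x is provably classified as true_class by the two-layer net:
    # the lower bound of the true logit must exceed the upper bound of
    # every other logit.
    lo, hi = x - eps, x + eps
    lo, hi = interval_relu(*interval_linear(lo, hi, W1, b1))
    lo, hi = interval_linear(lo, hi, W2, b2)
    others = [hi[j] for j in range(len(hi)) if j != true_class]
    return bool(lo[true_class] > max(others))
```

A `True` result is a sound certificate (the bounds only over-approximate the reachable outputs); a `False` result is inconclusive, since the interval relaxation may be too loose rather than the network actually non-robust.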
2 code implementations • 24 Jun 2022 • Sirui Liu, Jun Zhang, Haotian Chu, Min Wang, Boxin Xue, Ningxi Ni, Jialiang Yu, Yuhao Xie, Zhenyu Chen, Mengyun Chen, YuAn Liu, Piya Patra, Fan Xu, Jie Chen, Zidong Wang, Lijiang Yang, Fan Yu, Lei Chen, Yi Qin Gao
In addition, we provide the benchmark training procedure for a SOTA protein structure prediction model on this dataset.
no code implementations • 25 Apr 2022 • Xiaodie Lin, Zhenyu Chen, Zhaohui Wei
Quantifying unknown quantum entanglement experimentally is a difficult task, but it is becoming increasingly necessary due to the rapid development of quantum engineering.
no code implementations • 13 Feb 2022 • Jiayang Bai, Jie Guo, Chenchen Wan, Zhenyu Chen, Zhen He, Shan Yang, Piaopiao Yu, Yan Zhang, Yanwen Guo
At its core is a new lighting model (dubbed DSGLight) based on depth-augmented Spherical Gaussians (SG) and a Graph Convolutional Network (GCN) that infers the new lighting representation from a single LDR image of limited field-of-view.
1 code implementation • EMNLP 2020 • Jiawei Sheng, Shu Guo, Zhenyu Chen, Juwei Yue, Lihong Wang, Tingwen Liu, Hongbo Xu
Recent attempts solve this problem by learning static representations of entities and references, ignoring their dynamic properties, i.e., entities may exhibit diverse roles within task relations, and references may make different contributions to queries.
no code implementations • 8 Oct 2019 • Xufan Zhang, Yilin Yang, Yang Feng, Zhenyu Chen
Specifically, we asked the respondents to identify gaps and challenges in the practice of the development life cycle of DL applications.
no code implementations • 10 Jun 2019 • Tianxing He, Shengcheng Yu, Ziyuan Wang, Jieqiong Li, Zhenyu Chen
Nowadays, people strive to improve the accuracy of deep learning models.
no code implementations • 9 Jun 2019 • Benlin Hu, Cheng Lei, Dong Wang, Shu Zhang, Zhenyu Chen
Deep learning models have a large number of free parameters that need to be calculated by effective training of the models on a great deal of training data to improve their generalization performance.