no code implementations • 31 Jan 2024 • Benoit Baudry, Khashayar Etemadi, Sen Fang, Yogya Gamage, Yi Liu, Yuxin Liu, Martin Monperrus, Javier Ron, André Silva, Deepika Tiwari
The results show that LLMs can successfully generate realistic test data generators in a wide range of domains at all three levels of integrability.
1 code implementation • 25 Dec 2023 • André Silva, Sen Fang, Martin Monperrus
This results in RepairLLaMA producing a highly effective "program repair adapter" for fixing bugs with language models.
no code implementations • 26 Sep 2023 • Zimin Chen, Sen Fang, Martin Monperrus
Software optimization refines programs for resource efficiency while preserving functionality.
no code implementations • 14 Sep 2023 • Sizhou Chen, Songyang Gao, Sen Fang
The Transformer architecture has proven to be highly effective for Automatic Speech Recognition (ASR) tasks, becoming a foundational component for a plethora of research in the domain.
no code implementations • 30 Aug 2023 • Sen Fang, Chunyu Sui, Xuedong Zhang, Yapeng Tian
Over the past decade, the field of Sign Language Production (SLP) has lacked a large-scale, deep-learning-based pre-trained model for continuous American Sign Language (ASL) production.
no code implementations • 29 Jul 2023 • Sen Fang, Bowen Gao, Yangjian Wu, Teik Toe Teoh
Multimodal large models have been recognized for their performance advantages across a variety of downstream tasks.
no code implementations • 8 Mar 2023 • Sen Fang, Yangjian Wu, Bowen Gao, Jingwen Cai, Teik Toe Teoh
Recently, researchers have come to realize that, in some cases, self-supervised pre-training on large-scale Internet data outperforms pre-training on high-quality, manually labeled datasets, and that multimodal or large models outperform unimodal or small ones.
1 code implementation • 19 Nov 2022 • Youwei Huang, Tao Zhang, Sen Fang, Youshuai Tan
Today, security efforts for smart contracts concentrate on vulnerability detection.