no code implementations • 28 Jan 2024 • Simin Chen, Xiaoning Feng, Xiaohong Han, Cong Liu, Wei Yang
Recently, numerous Large Code Generation Models (LCGMs) have been proposed, showing significant potential in assisting developers with complex programming tasks.
1 code implementation • 12 Jan 2024 • Yufei Li, Simin Chen, Yanghong Guo, Wei Yang, Yue Dong, Cong Liu
We observe that these methods generally improve the uncertainty awareness of CodeLlama, with increased calibration quality and higher uncertainty estimation (UE) precision.
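Calibration quality, as mentioned above, is commonly quantified with metrics such as expected calibration error (ECE). The sketch below is a generic illustration of that metric, not the paper's evaluation code; all numbers are made up.

```python
# Minimal sketch of expected calibration error (ECE): bin predictions by
# confidence and average the gap between accuracy and mean confidence.

def expected_calibration_error(confidences, correct, n_bins=10):
    """Return the weighted average |accuracy - confidence| across bins."""
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        idx = min(int(conf * n_bins), n_bins - 1)
        bins[idx].append((conf, ok))
    total = len(confidences)
    ece = 0.0
    for bucket in bins:
        if not bucket:
            continue
        avg_conf = sum(c for c, _ in bucket) / len(bucket)
        accuracy = sum(1 for _, ok in bucket if ok) / len(bucket)
        ece += (len(bucket) / total) * abs(accuracy - avg_conf)
    return ece

# Toy data: 85%-confident predictions that are right 80% of the time
# give an ECE of |0.80 - 0.85| = 0.05.
confs = [0.85, 0.85, 0.85, 0.85, 0.85]
hits = [True, True, True, True, False]
print(round(expected_calibration_error(confs, hits), 3))
```

A well-calibrated model drives this gap toward zero; uncertainty-awareness methods are typically judged by how much they shrink it.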
no code implementations • 11 Jul 2023 • Simin Chen, Shiyi Wei, Cong Liu, Wei Yang
The proposed tool tackles the dynamic nature of DyNNs by introducing a compilation mechanism that redistributes the control and data flow of the original DNN programs during the compilation process.
1 code implementation • 1 Jun 2023 • Mirazul Haque, Rutvij Shah, Simin Chen, Berrak Şişman, Cong Liu, Wei Yang
We show that popular ASR models such as Speech2Text and Whisper perform dynamic computation depending on the input, resulting in input-dependent efficiency.
1 code implementation • 20 May 2023 • Yiming Chen, Simin Chen, Zexin Li, Wei Yang, Cong Liu, Robby T. Tan, Haizhou Li
Despite much success in natural language processing (NLP), pre-trained language models typically lead to a high computational cost during inference.
1 code implementation • CVPR 2023 • Zexin Li, Bangjie Yin, Taiping Yao, Juefeng Guo, Shouhong Ding, Simin Chen, Cong Liu
A hard challenge in developing practical face recognition (FR) attacks stems from the black-box nature of the target FR model, i.e., gradient and parameter information is inaccessible to attackers.
no code implementations • CVPR 2023 • Simin Chen, Hanlin Chen, Mirazul Haque, Cong Liu, Wei Yang
Recent advancements in deploying deep neural networks (DNNs) on resource-constrained devices have generated interest in input-adaptive dynamic neural networks (DyNNs).
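A common form of input-adaptive DyNN is the early-exit network, which stops computing as soon as an intermediate classifier is confident enough. The sketch below is a hypothetical toy model illustrating that input-dependent cost, not any architecture from the paper; the layers, heads, and threshold are all invented for illustration.

```python
# Hypothetical early-exit sketch: easy inputs exit after few layers,
# hard inputs traverse the full network, so cost depends on the input.

def run_early_exit(x, layers, exit_heads, threshold=0.9):
    """Run layers in order; return (prediction, layers_used) at the
    first exit head whose confidence reaches the threshold."""
    h = x
    for i, (layer, head) in enumerate(zip(layers, exit_heads), start=1):
        h = layer(h)
        label, confidence = head(h)
        if confidence >= threshold:
            return label, i          # easy input: exit early
    return label, len(layers)        # hard input: full network

# Toy model: each layer doubles the hidden value; confidence grows with |h|.
layers = [lambda h: h * 2.0] * 4
heads = [lambda h: (int(h > 0), min(abs(h) / 10.0, 1.0))] * 4

_, easy_cost = run_early_exit(3.0, layers, heads)  # confident after 2 layers
_, hard_cost = run_early_exit(0.1, layers, heads)  # never confident: 4 layers
print(easy_cost, hard_cost)
```

This input-dependent computation is precisely what makes the efficiency (rather than only the accuracy) of DyNNs an attack surface.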
no code implementations • 10 Oct 2022 • Simin Chen, Mirazul Haque, Cong Liu, Wei Yang
To ensure an AdNN satisfies the performance requirements of resource-constrained applications, it is essential to conduct performance testing to detect IDPBs in the AdNN.
no code implementations • 7 Oct 2022 • Simin Chen, Cong Liu, Mirazul Haque, Zihe Song, Wei Yang
Neural Machine Translation (NMT) systems have received much recent attention due to their human-level accuracy.
no code implementations • 20 May 2022 • Simin Chen, Hamed Khanpour, Cong Liu, Wei Yang
With the privatization deployment of DNNs on edge devices, the security of on-device DNNs has raised significant concern.
1 code implementation • CVPR 2022 • Simin Chen, Zihe Song, Mirazul Haque, Cong Liu, Wei Yang
To further understand such efficiency-oriented threats, we propose a new attack approach, NICGSlowDown, to evaluate the efficiency robustness of NICG models.
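The intuition behind such efficiency attacks is that autoregressive caption decoders stop at an end-of-sequence (EOS) token, so inputs that delay EOS force more decoder iterations and higher latency. The toy decoder and search below are a hedged illustration of that idea only, not NICGSlowDown's actual algorithm.

```python
# Illustrative slowdown-attack sketch: a toy autoregressive loop whose
# iteration count depends on the input, and a search that maximizes it.

def toy_decode(seed, max_steps=50):
    """Toy decoder: an internal score decays each step; once it drops
    below 1.0 the decoder 'emits EOS' and stops."""
    score, steps = seed, 0
    while steps < max_steps:
        steps += 1
        score *= 0.7                 # score decays every iteration
        if score < 1.0:              # low score -> EOS, decoding ends
            break
    return steps

def slowdown_search(seed, candidates):
    """Pick the perturbation that maximizes decoder iterations."""
    return max(toy_decode(seed + d) for d in candidates)

baseline = toy_decode(5.0)
attacked = slowdown_search(5.0, candidates=[0.0, 10.0, 40.0])
print(baseline, attacked)  # the attacked input needs more decoding steps
```

In a real attack the perturbation would be an imperceptible image change and the objective a differentiable surrogate for decode length, but the efficiency-degradation goal is the same.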
no code implementations • 29 Sep 2021 • Simin Chen, Mirazul Haque, Zihe Song, Cong Liu, Wei Yang
To further the understanding of such efficiency-oriented threats and to raise the community's awareness of the efficiency robustness of NMT systems, we propose a new attack approach, TranSlowDown, to test the efficiency robustness of NMT systems.
no code implementations • 29 Sep 2021 • Mirazul Haque, Simin Chen, Wasif Arman Haque, Cong Liu, Wei Yang
Unlike the memory cost, the energy consumption of the Neural ODEs during inference can be adaptive because of the adaptive nature of the ODE solvers.
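The adaptivity comes from step-size control: an adaptive ODE solver picks its step count from a local error estimate, so harder dynamics consume more function evaluations and thus more energy. The solver below is a generic step-doubling Euler scheme written for illustration; it is not the solver used with Neural ODEs in practice.

```python
# Hedged sketch: a step-doubling adaptive Euler integrator. The number of
# accepted steps (a proxy for computation/energy) depends on the dynamics.

def adaptive_euler(f, y0, t0, t1, tol=1e-3):
    """Integrate dy/dt = f(t, y) from t0 to t1; return (y, steps)."""
    t, y, h, steps = t0, y0, (t1 - t0) / 10.0, 0
    while t < t1:
        h = min(h, t1 - t)
        full = y + h * f(t, y)                        # one full step
        half = y + (h / 2) * f(t, y)                  # two half steps
        two_half = half + (h / 2) * f(t + h / 2, half)
        err = abs(two_half - full)                    # local error estimate
        if err <= tol:
            t, y = t + h, two_half
            steps += 1
            h *= 1.5                                  # easy region: grow step
        else:
            h *= 0.5                                  # hard region: shrink step
    return y, steps

# Identical solver, identical tolerance: fast dynamics cost far more steps.
_, slow_steps = adaptive_euler(lambda t, y: -0.1 * y, 1.0, 0.0, 1.0)
_, fast_steps = adaptive_euler(lambda t, y: -50.0 * y, 1.0, 0.0, 1.0)
print(slow_steps, fast_steps)
```

Since each step means more network evaluations in a Neural ODE, inputs that induce stiffer dynamics directly inflate inference-time energy, which is what makes this adaptivity attackable.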
1 code implementation • 23 Jul 2021 • Yufei Li, Simin Chen, Wei Yang
Experiments show that program distribution shift does degrade DL model performance to varying degrees and that existing uncertainty methods all present certain limitations in quantifying uncertainty on program datasets.
no code implementations • 1 Jan 2021 • Simin Chen, Zihe Song, Lei Ma, Cong Liu, Wei Yang
We first theoretically clarify under which condition AttackDist can provide certified detection performance, then show that a potential application of AttackDist is distinguishing zero-day adversarial examples without knowing the mechanisms of new attacks.