no code implementations • 30 Jan 2024 • Guangke Chen, Yedi Zhang, Fu Song, Ting Wang, Xiaoning Du, Yang Liu
To improve the imperceptibility of perturbations, we refine a psychoacoustic model-based loss with the backing track as an additional masker, a unique accompanying element for singing voices compared to ordinary speech voices.
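The masking idea can be sketched as a toy loss: penalize only the perturbation energy that rises above a threshold derived from the backing track's spectrum. This is an illustrative stand-in, not the paper's psychoacoustic model; the function names, frame size, and the 0.1 masker-to-threshold ratio are assumptions (real psychoacoustic models work on Bark-scale bands with spreading functions).

```python
import numpy as np

def masking_loss(perturbation, backing_track, frame=512):
    """Toy psychoacoustic-style loss: perturbation energy that stays
    below a threshold derived from the backing track's magnitude
    spectrum is treated as masked (free); only the excess is penalized."""
    P = np.abs(np.fft.rfft(perturbation[:frame]))   # perturbation spectrum
    M = np.abs(np.fft.rfft(backing_track[:frame]))  # masker (backing track) spectrum
    threshold = 0.1 * M                # assumed masker-to-threshold ratio
    excess = np.maximum(P - threshold, 0.0)
    return float(np.sum(excess ** 2))
```

A silent perturbation incurs zero loss, while a perturbation with no backing track to hide behind is fully penalized.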
1 code implementation • 18 Jan 2024 • Zhensu Sun, Xiaoning Du, Fu Song, Shangwen Wang, Li Li
These findings motivate our exploration of dynamic inference in code completion and inspire us to enhance it with a decision-making mechanism that stops the generation of incorrect code.
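A minimal sketch of such a decision-making mechanism, assuming per-step next-token probabilities are available; the fixed confidence threshold here is a simple stand-in for the paper's learned estimator:

```python
def complete_with_early_stop(step_probs, threshold=0.5):
    """Emit tokens greedily, but stop generation as soon as the model's
    confidence in its best next token drops below `threshold`,
    rather than continuing to produce likely-incorrect code.

    step_probs: list of {token: probability} dicts, one per decoding step.
    """
    out = []
    for probs in step_probs:
        token, p = max(probs.items(), key=lambda kv: kv[1])
        if p < threshold:
            break  # low confidence: stop instead of guessing
        out.append(token)
    return out
```

For example, a confident first step followed by an uncertain second step yields a one-token completion instead of a longer, unreliable one.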
1 code implementation • 14 Sep 2023 • Guangke Chen, Yedi Zhang, Fu Song
Our attack is versatile and can work in both white-box and black-box scenarios.
1 code implementation • 28 Aug 2023 • Zhensu Sun, Xiaoning Du, Fu Song, Li Li
Even worse, the "black-box" nature of neural models sets a high barrier for external parties to audit their training datasets, which further enables such unauthorized usage.
no code implementations • 9 Aug 2023 • Weijie Shao, Yuyang Gao, Fu Song, Sen Chen, Lingling Fan, JingZhu He
Federated learning (FL) is a distributed machine learning (ML) paradigm that allows multiple clients to collaboratively train shared ML models without exposing their private data.
no code implementations • 29 Jul 2023 • Ye Tao, Wanwei Liu, Fu Song, Zhen Liang, Ji Wang, Hongxu Zhu
Quantized neural networks (QNNs) have been developed, with binarized neural networks (BNNs) restricted to binary values as a special case.
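A BNN layer, the extreme special case of quantization, can be sketched as follows (illustrative only; real BNNs also binarize during training via a straight-through estimator):

```python
import numpy as np

def binarize(w):
    """Sign binarization used in BNNs: every weight becomes +1 or -1."""
    return np.where(w >= 0, 1.0, -1.0)

def bnn_layer(x, w_real):
    """Forward pass of a binarized linear layer: binarize the stored
    real-valued weights, then apply a sign activation."""
    wb = binarize(w_real)
    return np.sign(x @ wb)
```

Because every weight is one bit, multiplications reduce to sign flips, which is what makes BNNs attractive for resource-constrained deployment and, at the same time, changes the verification problem compared to real-valued DNNs.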
no code implementations • 23 May 2023 • Guangke Chen, Yedi Zhang, Zhe Zhao, Fu Song
Current adversarial attacks against speaker recognition systems (SRSs) require either white-box access or heavy black-box queries to the target SRS, thus still falling behind practical attacks against proprietary commercial APIs and voice-controlled devices.
1 code implementation • 10 Dec 2022 • Yedi Zhang, Zhe Zhao, Fu Song, Min Zhang, Taolue Chen, Jun Sun
Experimental results on QNNs with different quantization bits confirm the effectiveness and efficiency of our approach, e.g., two orders of magnitude faster and able to solve more verification tasks in the same time limit than the state-of-the-art methods.
1 code implementation • 6 Dec 2022 • Yedi Zhang, Fu Song, Jun Sun
In this work, we propose a quantization error bound verification method, named QEBVerif, where both weights and activation tensors are quantized.
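The quantity being verified can be illustrated with a naive interval argument over the weights alone (the quantization scheme and function names here are assumptions, and QEBVerif computes far tighter bounds that also account for quantized activations):

```python
import numpy as np

def quantize(w, bits=8):
    """Uniform symmetric quantization of a weight vector (assumed scheme)."""
    scale = np.max(np.abs(w)) / (2 ** (bits - 1) - 1)
    return np.round(w / scale) * scale, scale

def naive_output_error_bound(w, x_bound, bits=8):
    """Crude bound on the output deviation of one linear unit:
    |(w - w_q) . x| <= sum(|w - w_q|) * max|x_i|.
    Only illustrates the error quantity; not a tight verification bound."""
    wq, _ = quantize(w, bits)
    return float(np.sum(np.abs(w - wq)) * x_bound)
```

Even this loose bound shows the key trade-off: more quantization bits shrink the per-weight error and hence the guaranteed output deviation.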
no code implementations • 13 Sep 2022 • Zhensu Sun, Xiaoning Du, Fu Song, Shangwen Wang, Mingze Ni, Li Li
The experimental results show that the proposed estimator helps save 23.3% of the computational cost, measured in floating-point operations, for the code completion systems, and that 80.2% of the rejected prompts lead to unhelpful completions.
1 code implementation • 2 Jul 2022 • Jiaxiang Liu, Yunhan Xing, Xiaomu Shi, Fu Song, Zhiwu Xu, Zhong Ming
Our approach is orthogonal to and can be integrated with many existing verification techniques.
no code implementations • 7 Jun 2022 • Guangke Chen, Zhe Zhao, Fu Song, Sen Chen, Lingling Fan, Yang Liu
Recent work has illuminated the vulnerability of speaker recognition systems (SRSs) against adversarial attacks, raising significant security concerns in deploying SRSs.
1 code implementation • 7 Jun 2022 • Guangke Chen, Zhe Zhao, Fu Song, Sen Chen, Lingling Fan, Feng Wang, Jiashui Wang
According to the characteristic of SRSs, we present 22 diverse transformations and thoroughly evaluate them using 7 recent promising adversarial attacks (4 white-box and 3 black-box) on speaker recognition.
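One of the simplest transformations in this family, audio-sample quantization, can be sketched as follows (the function name and the quantization level `q` are illustrative, not the paper's exact configuration):

```python
import numpy as np

def quantize_audio(x, q=512):
    """Input-transformation defense: snap each audio sample (assumed
    in [-1, 1]) to the nearest of 2*q+1 levels before feeding it to the
    speaker recognition system, hoping to wash out small adversarial
    perturbations while barely distorting benign audio."""
    return np.round(x * q) / q
```

The distortion is bounded by half a quantization step (1/(2q) per sample), which is why such transformations preserve benign accuracy; whether they survive adaptive attacks is exactly what the 22-transformation evaluation measures.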
1 code implementation • 25 Oct 2021 • Zhensu Sun, Xiaoning Du, Fu Song, Mingze Ni, Li Li
Github Copilot, trained on billions of lines of public code, has recently become the buzzword in the computer science research and practice community.
1 code implementation • 4 Sep 2021 • Guangke Chen, Zhe Zhao, Fu Song, Sen Chen, Lingling Fan, Yang Liu
To bridge this gap, we present SEC4SR, the first platform enabling researchers to systematically and comprehensively evaluate adversarial attacks and defenses in SR. SEC4SR incorporates 4 white-box attacks, 2 black-box attacks, and 24 defenses, including our novel feature-level transformations.
1 code implementation • 13 Mar 2021 • Zhe Zhao, Guangke Chen, Jingyi Wang, Yiwei Yang, Fu Song, Jun Sun
Though various defense mechanisms have been proposed to improve robustness of deep learning software, many of them are ineffective against adaptive attacks.
no code implementations • 12 Mar 2021 • Yedi Zhang, Zhe Zhao, Guangke Chen, Fu Song, Taolue Chen
Verifying and explaining the behavior of neural networks is becoming increasingly important, especially when they are deployed in safety-critical applications.
no code implementations • 16 Jul 2020 • Wenjie Wan, Zhaodi Zhang, Yiwei Zhu, Min Zhang, Fu Song
The key insight of our approach is that the robustness verification problem of DNNs can be solved by verifying sub-problems of DNNs, one per target label.
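The per-target-label decomposition can be illustrated as follows: assuming some verifier supplies lower and upper bounds on each output logit over the perturbation region, robustness holds exactly when the true label's lower bound beats every other label's upper bound (a sketch of the decision rule, not of how the sub-problems themselves are solved):

```python
def robust_from_bounds(lb, ub, true_label):
    """Per-target-label robustness check: the network is certifiably
    robust iff, for every label j != true_label, the true label's
    logit lower bound exceeds label j's logit upper bound.
    lb, ub: per-label logit bounds over the perturbation region."""
    return all(lb[true_label] > ub[j]
               for j in range(len(ub)) if j != true_label)
```

Each comparison against one target label is an independent sub-problem, which is what lets the overall verification task be split and solved per label.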
no code implementations • 15 Apr 2020 • Yusi Lei, Sen Chen, Lingling Fan, Fu Song, Yang Liu
To launch attacks in the white- and grey-box scenarios, we also propose a sample-based collision attack to gain the knowledge of the target classifier.
1 code implementation • 3 Nov 2019 • Guangke Chen, Sen Chen, Lingling Fan, Xiaoning Du, Zhe Zhao, Fu Song, Yang Liu
In this paper, we conduct the first comprehensive and systematic study of the adversarial attacks on SR systems (SRSs) to understand their security weakness in the practical black-box setting.
1 code implementation • 19 May 2019 • Lei Bu, Yuchao Duan, Fu Song, Zhe Zhao
In this work, we first conduct a comprehensive study of existing methods and tools for crafting adversarial examples.
no code implementations • 27 Nov 2018 • Yedi Zhang, Fu Song, Taolue Chen
Alternating-time temporal logics (ATL/ATL*) represent a family of modal logics for reasoning about agents' strategic abilities in multiagent systems (MAS).
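As a minimal illustration (standard ATL notation, not specific to this paper), the central strategic modality reads:

```latex
\langle\!\langle A \rangle\!\rangle\,\varphi
  \quad\text{``coalition } A \text{ has a joint strategy to enforce } \varphi\text{''},
\qquad\text{e.g.}\quad
\langle\!\langle \{1,2\} \rangle\!\rangle\, \square\, \mathit{safe}
```

where the example states that agents 1 and 2 can cooperate to keep the system safe forever, regardless of how the remaining agents behave.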