no code implementations • 23 Dec 2023 • Aishan Liu, Xinwei Zhang, Yisong Xiao, Yuguang Zhou, Siyuan Liang, Jiakai Wang, Xianglong Liu, Xiaochun Cao, DaCheng Tao
This paper aims to raise awareness of the potential threats associated with applying PVMs in practical scenarios.
no code implementations • 4 Aug 2023 • Yisong Xiao, Aishan Liu, Tianyuan Zhang, Haotong Qin, Jinyang Guo, Xianglong Liu
Quantization has emerged as an essential technique for deploying deep neural networks (DNNs) on devices with limited resources.
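As a toy illustration of the technique this entry refers to (not the paper's own method), the following sketch applies symmetric uniform post-training quantization to a weight tensor, mapping float32 weights onto int8 codes plus a single scale factor — the basic mechanism that makes DNNs cheaper to store and run on resource-limited devices. All names here are illustrative.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric uniform post-training quantization of a weight tensor to int8.

    Returns the int8 codes and the scale needed to dequantize them.
    """
    scale = np.abs(w).max() / 127.0          # largest magnitude maps to the int8 edge
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4)).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

# Round-to-nearest keeps the per-weight error within half a quantization step.
assert np.max(np.abs(w - w_hat)) <= scale / 2 + 1e-6
```

The per-tensor scale is the simplest scheme; real toolchains often use per-channel scales and calibration data, which this sketch omits.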
1 code implementation • 2 Aug 2023 • Jun Guo, Aishan Liu, Xingyu Zheng, Siyuan Liang, Yisong Xiao, Yichao Wu, Xianglong Liu
However, these defenses now suffer from high inference overhead and an unfavorable trade-off between benign accuracy and robustness to model stealing, which limits their practical deployment.
no code implementations • 19 May 2023 • Yisong Xiao, Aishan Liu, Tianlin Li, Xianglong Liu
Machine learning (ML) systems have achieved remarkable performance across a wide range of applications.
no code implementations • 11 Apr 2023 • Tony Ma, Songze Li, Yisong Xiao, Shunchang Liu
The transferability of adversarial examples is a crucial aspect of evaluating the robustness of deep learning systems, particularly in black-box scenarios.
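A minimal sketch of the transferability setting this entry describes, using toy linear models rather than deep networks (an assumption for brevity): an FGSM-style perturbation is crafted against a white-box surrogate and then tested against an independently trained "black-box" target that the attacker never inspects.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two linearly separable classes; two independently trained perceptrons
# stand in for the attacker's surrogate and the black-box target model.
X = np.vstack([rng.normal(-2, 1, (200, 2)), rng.normal(2, 1, (200, 2))])
y = np.array([-1] * 200 + [1] * 200)

def fit_perceptron(X, y, epochs=20, lr=0.1, seed=0):
    w = np.random.default_rng(seed).normal(size=2)
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w) <= 0:      # update only on misclassified points
                w += lr * yi * xi
    return w

surrogate = fit_perceptron(X, y, seed=0)
target = fit_perceptron(X, y, seed=1)   # never touched while crafting the attack

# FGSM-style step computed from the surrogate alone: for a linear score w @ x,
# the sign of the input gradient is sign(w).
x = np.array([2.0, 2.0])                # a clean class +1 point
eps = 3.0
x_adv = x - eps * np.sign(surrogate)

# Does the perturbation crafted on the surrogate also fool the target?
transfers = np.sign(x_adv @ target) != 1
print("attack transfers to black-box target:", bool(transfers))
```

With similar training data the two decision boundaries align closely, so the perturbation transfers — the phenomenon that makes black-box robustness evaluation meaningful.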
no code implementations • 11 Apr 2023 • Tianyuan Zhang, Yisong Xiao, Xiaoya Zhang, Hao Li, Lu Wang
Thus, virtual simulation experiments can provide a solution to this challenge.
no code implementations • 8 Apr 2023 • Yisong Xiao, Tianyuan Zhang, Shunchang Liu, Haotong Qin
To address this gap, we thoroughly evaluated the robustness of quantized models against various noises (adversarial attacks, natural corruptions, and systematic noises) on ImageNet.
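As a toy stand-in for the kind of benchmark described (not the paper's actual ImageNet protocol), the sketch below compares a full-precision linear classifier against its int8-quantized counterpart on inputs corrupted by Gaussian noise of increasing severity; all model and severity choices are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic classification task: labels come from a ground-truth direction.
d = 20
w_true = rng.normal(size=d)
X = rng.normal(size=(1000, d))
y = np.sign(X @ w_true)

w = w_true + 0.1 * rng.normal(size=d)        # "trained" weights, slightly off

# Symmetric uniform int8 quantization of the weights.
scale = np.abs(w).max() / 127.0
w_q = np.clip(np.round(w / scale), -127, 127) * scale

def accuracy(weights, X, y, noise_std, seed):
    """Accuracy on inputs corrupted with Gaussian noise of a given severity."""
    noise = np.random.default_rng(seed).normal(0, noise_std, X.shape)
    return np.mean(np.sign((X + noise) @ weights) == y)

for sev, std in enumerate([0.0, 0.5, 1.0, 2.0]):
    acc_fp = accuracy(w, X, y, std, seed=sev)
    acc_q = accuracy(w_q, X, y, std, seed=sev)
    print(f"severity {sev} (std={std}): fp32 {acc_fp:.3f}  int8 {acc_q:.3f}")
```

Accuracy degrades as corruption severity grows; running the same sweep for full-precision and quantized weights side by side is the core measurement pattern behind such robustness benchmarks.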