1 code implementation • 8 Feb 2024 • Mingyi Zhou, Xiang Gao, Jing Wu, Kui Liu, Hailong Sun, Li Li
Our findings emphasize the need for developers to carefully consider their model deployment strategies and to use white-box methods to evaluate the vulnerability of on-device models.
1 code implementation • 13 Sep 2022 • Jing Wu, Munawar Hayat, Mingyi Zhou, Mehrtash Harandi
Federated Learning (FL) is a distributed learning paradigm that enhances users' privacy by eliminating the need for clients to share raw, private data with the server.
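The core of this paradigm can be illustrated with a minimal FedAvg-style aggregation sketch (the function name `fedavg` and the flat parameter vectors are illustrative assumptions, not the paper's implementation): the server only ever sees model parameters, never raw client data.

```python
def fedavg(client_weights, client_sizes):
    """FedAvg-style aggregation sketch: average client model parameters,
    weighted by local dataset size. Raw client data never leaves the client;
    only the trained parameter vectors are shared with the server."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

# Two clients with flat parameter vectors; client 2 has 3x more local data.
global_model = fedavg([[1.0, 2.0], [3.0, 4.0]], [1, 3])  # [2.5, 3.5]
```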
1 code implementation • 22 Apr 2021 • Jing Wu, Mingyi Zhou, Ce Zhu, Yipeng Liu, Mehrtash Harandi, Li Li
Recently, adversarial attack methods have been developed to challenge the robustness of machine learning models.
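A representative example of such an attack is the one-step fast gradient sign method (FGSM); the sketch below applies it to a toy logistic-regression model (the function `fgsm_example` and the tiny linear model are illustrative assumptions, not this paper's method).

```python
import math

def fgsm_example(x, y, w, eps):
    """One-step FGSM sketch on a logistic-regression model: perturb the
    input in the direction of the sign of the input-gradient of the
    binary cross-entropy loss, with step size eps."""
    z = sum(wi * xi for wi, xi in zip(w, x))
    p = 1.0 / (1.0 + math.exp(-z))            # sigmoid prediction
    # For BCE loss, d(loss)/dx_i = (p - y) * w_i
    grad = [(p - y) * wi for wi in w]
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

# Perturb a point of class y=1 against weights w; each feature moves by eps.
x_adv = fgsm_example([0.5, 0.5], 1, [1.0, -1.0], 0.1)  # [0.4, 0.6]
```

Even this single gradient step lowers the model's score for the true class, which is the basic mechanism robustness evaluations probe.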
1 code implementation • 5 Oct 2020 • Jiawei Liu, Huijie Fan, Qiang Wang, Wentao Li, Yandong Tang, Danbo Wang, Mingyi Zhou, Li Chen
The qualitative and quantitative experimental results show that our LLPC can improve the quality of manual labels and the accuracy of overlapping cell edge detection.
1 code implementation • 15 Sep 2020 • Jing Wu, Mingyi Zhou, Shuaicheng Liu, Yipeng Liu, Ce Zhu
A single perturbation can cause most natural images to be misclassified by classifiers.
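The defining property of such a universal perturbation is that one fixed delta, added to many different inputs, flips their predicted labels. A minimal sketch of measuring this "fooling rate" (the function name `fooling_rate` and the toy classifier are illustrative assumptions, not the paper's evaluation code):

```python
def fooling_rate(classify, images, delta):
    """Fraction of images whose predicted label changes when the single,
    image-agnostic perturbation `delta` is added to every input."""
    flipped = 0
    for x in images:
        x_adv = [xi + di for xi, di in zip(x, delta)]
        if classify(x_adv) != classify(x):
            flipped += 1
    return flipped / len(images)

# Toy 1-D classifier: label 1 if the (single) feature sum is non-negative.
classify = lambda x: 1 if sum(x) >= 0 else 0
rate = fooling_rate(classify, [[0.1], [0.2], [5.0]], [-0.3])  # 2/3 fooled
```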
no code implementations • 6 May 2020 • Jing Wu, Xiang Zhang, Mingyi Zhou, Ce Zhu
Candidate object proposals generated by object detectors based on convolutional neural networks (CNNs) encounter an easy-hard sample imbalance problem, which can degrade overall performance.
no code implementations • 28 Mar 2020 • Mingyi Zhou, Jing Wu, Yipeng Liu, Xiaolin Huang, Shuaicheng Liu, Xiang Zhang, Ce Zhu
Then, the adversarial examples generated by the imitation model are utilized to fool the attacked model.
2 code implementations • CVPR 2020 • Mingyi Zhou, Jing Wu, Yipeng Liu, Shuaicheng Liu, Ce Zhu
In this paper, we propose a data-free substitute training method (DaST) to obtain substitute models for adversarial black-box attacks without the requirement of any real data.
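The substitute-training idea can be sketched in miniature: query the black-box model on generated inputs (no real data), then fit a substitute to the returned labels. The sketch below uses random inputs and perceptron updates purely for illustration; DaST itself trains a generative model and a neural substitute, and the names here (`train_substitute`, `black_box`) are assumptions.

```python
import random

def train_substitute(black_box, n_queries=200, dim=2, lr=0.1, seed=0):
    """Data-free substitute training sketch: label synthetically generated
    inputs with the black-box oracle and fit a linear substitute via
    perceptron updates. Assumes black_box returns a label in {-1, +1}."""
    rng = random.Random(seed)
    w = [0.0] * dim
    for _ in range(n_queries):
        x = [rng.uniform(-1.0, 1.0) for _ in range(dim)]
        y = black_box(x)                       # oracle label; no real data used
        score = sum(wi * xi for wi, xi in zip(w, x))
        if y * score <= 0:                     # perceptron mistake step
            w = [wi + lr * y * xi for wi, xi in zip(w, x)]
    return w

# Hypothetical black box: sign of the feature sum.
oracle = lambda x: 1 if x[0] + x[1] >= 0 else -1
w_sub = train_substitute(oracle)  # substitute aligned with the oracle boundary
```

Adversarial examples crafted against the white-box substitute (e.g. with a gradient-sign step) can then transfer to the original black-box model, which is the attack route the paper pursues.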