1 code implementation • 17 Apr 2024 • Jie Xu, Zihan Wu, Cong Wang, Xiaohua Jia
To address the growing demand for privacy protection in machine learning, we propose a novel and efficient machine unlearning approach for Large Models, called LMEraser.
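To give a concrete flavor of machine unlearning, here is a minimal, hypothetical sketch of partition-based exact unlearning (SISA-style sharding), where deleting a sample only requires retraining the shard that contained it. This is an illustration of the general idea, not LMEraser's actual prompt-tuning-based algorithm; the class, the centroid classifier, and the shard-assignment rule are all assumptions for the example.

```python
# Hypothetical sketch of partition-based exact unlearning (SISA-style), NOT LMEraser's algorithm.
import numpy as np

class ShardedUnlearner:
    """Train one tiny centroid classifier per data shard; unlearning retrains only the affected shard."""

    def __init__(self, n_shards=4):
        self.n_shards = n_shards
        self.shards = [[] for _ in range(n_shards)]   # (x, y) pairs per shard
        self.centroids = [None] * n_shards            # per-shard class centroids

    def _shard_of(self, x):
        # Assignment is consistent within a run, which is all this sketch needs.
        return hash(x.tobytes()) % self.n_shards

    def fit(self, X, y):
        for xi, yi in zip(X, y):
            self.shards[self._shard_of(xi)].append((xi, yi))
        for s in range(self.n_shards):
            self._train_shard(s)

    def _train_shard(self, s):
        data = self.shards[s]
        if not data:
            self.centroids[s] = None
            return
        labels = sorted({yi for _, yi in data})
        self.centroids[s] = {
            c: np.mean([xi for xi, yi in data if yi == c], axis=0) for c in labels
        }

    def unlearn(self, x):
        """Remove a sample and retrain only its shard; the other shard models are untouched."""
        s = self._shard_of(x)
        self.shards[s] = [(xi, yi) for xi, yi in self.shards[s] if not np.array_equal(xi, x)]
        self._train_shard(s)

    def predict(self, x):
        votes = []
        for cents in self.centroids:
            if cents:
                votes.append(min(cents, key=lambda c: np.linalg.norm(x - cents[c])))
        return max(set(votes), key=votes.count)  # majority vote over shard models
```

The design choice this illustrates is the trade-off behind exact unlearning: by isolating training data into shards, a deletion request touches only a small part of the model instead of forcing full retraining.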
1 code implementation • 14 Aug 2023 • Jie Xu, Zihan Wu, Cong Wang, Xiaohua Jia
Machine learning models may inadvertently memorize sensitive, unauthorized, or malicious data, posing risks of privacy breaches, security vulnerabilities, and performance degradation.
1 code implementation • 6 Jun 2023 • Sen Peng, Yufei Chen, Cong Wang, Xiaohua Jia
This paper introduces WDM, a novel watermarking solution for diffusion models without imprinting the watermark during task generation.
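As a generic illustration of the verification side of model watermarking (not WDM's actual embedding or extraction mechanism), ownership is typically claimed by comparing decoded watermark bits against the owner's key and checking that the bit accuracy is far above chance. The decoder is assumed to exist elsewhere; the arrays and threshold below are placeholders.

```python
# Generic watermark verification by bit accuracy; the decoded bits stand in for the output
# of a model-specific extractor, which is not part of this sketch.
import numpy as np

def bit_accuracy(decoded_bits: np.ndarray, owner_key: np.ndarray) -> float:
    """Fraction of decoded watermark bits matching the owner's key."""
    return float(np.mean(decoded_bits == owner_key))

def verify_ownership(decoded_bits, owner_key, threshold=0.9):
    """Claim ownership only if bit accuracy is well above the 0.5 random-guess baseline."""
    return bit_accuracy(decoded_bits, owner_key) >= threshold

# Usage with stand-in data: flip a few bits to simulate extraction noise.
rng = np.random.default_rng(0)
key = rng.integers(0, 2, size=128)
noisy = key.copy()
noisy[rng.choice(128, size=6, replace=False)] ^= 1
print(bit_accuracy(noisy, key), verify_ownership(noisy, key))
```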
1 code implementation • 16 Feb 2022 • Songlei Wang, Yifeng Zheng, Xiaohua Jia
With the proliferation of cloud computing, it is increasingly popular to outsource complex and resource-intensive model training and inference services to the cloud due to its prominent benefits.
no code implementations • 20 Oct 2021 • Jindi Zhang, Yifan Zhang, Kejie Lu, JianPing Wang, Kui Wu, Xiaohua Jia, Bin Liu
In our study, we use real-world datasets and a state-of-the-art machine learning model to evaluate our attack detection scheme, and the results confirm the effectiveness of our detection method.
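For readers unfamiliar with how a binary attack detector is typically scored, the sketch below computes precision, recall, and F1 over labeled traces. The label arrays are hypothetical stand-ins; this is a generic evaluation recipe, not the paper's exact protocol.

```python
# Generic evaluation of a binary attack detector: precision, recall, and F1 over labeled traces.
def detection_metrics(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # attacks correctly flagged
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # benign traces flagged
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # attacks missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"precision": precision, "recall": recall, "f1": f1}

# Hypothetical ground truth vs. detector output
print(detection_metrics([1, 0, 1, 1, 0, 0], [1, 0, 0, 1, 0, 1]))
```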
1 code implementation • 6 Aug 2021 • Jindi Zhang, Yang Lou, JianPing Wang, Kui Wu, Kejie Lu, Xiaohua Jia
In this paper, we investigate the impact of two primary types of adversarial attacks, perturbation attacks and patch attacks, on the driving safety of vision-based autonomous vehicles rather than the detection precision of deep learning models.
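To make the notion of a perturbation attack concrete, here is an FGSM-style example against a generic image classifier in PyTorch: a single signed-gradient step bounded by epsilon is added to the input to increase the model's loss. The tiny stand-in model, input shape, and epsilon value are assumptions for illustration; this is not the attack pipeline or driving-safety evaluation used in the paper.

```python
# Illustrative FGSM-style perturbation attack on a generic classifier (PyTorch).
# The tiny model and epsilon below are placeholders, not the paper's setup.
import torch
import torch.nn as nn

def fgsm_perturb(model, x, y, epsilon=0.03):
    """Add an epsilon-bounded perturbation in the direction that increases the loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()      # one signed-gradient step
    return x_adv.clamp(0.0, 1.0).detach()    # keep pixel values in a valid range

# Minimal usage with a stand-in model and a random "image"
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x = torch.rand(1, 3, 32, 32)
y = torch.tensor([0])
x_adv = fgsm_perturb(model, x, y)
print((x_adv - x).abs().max())  # perturbation magnitude stays bounded by epsilon
```

Patch attacks differ in that they concentrate the perturbation into a small, unbounded image region rather than spreading an epsilon-bounded change over the whole input.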