Search Results for author: Xiaohua Jia

Found 6 papers, 5 papers with code

LMEraser: Large Model Unlearning through Adaptive Prompt Tuning

1 code implementation · 17 Apr 2024 · Jie Xu, Zihan Wu, Cong Wang, Xiaohua Jia

To address the growing demand for privacy protection in machine learning, we propose a novel and efficient machine unlearning approach for Large Models, called LMEraser.

Machine Unlearning

Machine Unlearning: Solutions and Challenges

1 code implementation · 14 Aug 2023 · Jie Xu, Zihan Wu, Cong Wang, Xiaohua Jia

Machine learning models may inadvertently memorize sensitive, unauthorized, or malicious data, posing risks of privacy breaches, security vulnerabilities, and performance degradation.

Machine Unlearning

Intellectual Property Protection of Diffusion Models via the Watermark Diffusion Process

1 code implementation · 6 Jun 2023 · Sen Peng, Yufei Chen, Cong Wang, Xiaohua Jia

This paper introduces WDM, a novel watermarking solution for diffusion models without imprinting the watermark during task generation.

SecGNN: Privacy-Preserving Graph Neural Network Training and Inference as a Cloud Service

1 code implementation · 16 Feb 2022 · Songlei Wang, Yifeng Zheng, Xiaohua Jia

With the proliferation of cloud computing, it is increasingly popular to deploy the services of complex and resource-intensive model training and inference in the cloud due to its prominent benefits.

Cloud Computing · Privacy Preserving

Detecting and Identifying Optical Signal Attacks on Autonomous Driving Systems

no code implementations · 20 Oct 2021 · Jindi Zhang, Yifan Zhang, Kejie Lu, JianPing Wang, Kui Wu, Xiaohua Jia, Bin Liu

In our study, we use real data sets and state-of-the-art machine learning models to evaluate our attack detection scheme, and the results confirm the effectiveness of our detection method.

Autonomous Driving · Object Detection · +1

Evaluating Adversarial Attacks on Driving Safety in Vision-Based Autonomous Vehicles

1 code implementation · 6 Aug 2021 · Jindi Zhang, Yang Lou, JianPing Wang, Kui Wu, Kejie Lu, Xiaohua Jia

In this paper, we investigate the impact of two primary types of adversarial attacks, perturbation attacks and patch attacks, on the driving safety of vision-based autonomous vehicles rather than the detection precision of deep learning models.

3D Object Detection · Autonomous Driving · +1
