no code implementations • 23 Nov 2023 • Sikha Pentyala, Shubham Sharma, Sanjay Kariyappa, Freddy Lecue, Daniele Magazzeni
We observe that PrivRecourse can provide paths that are private and realistic.
no code implementations • 10 Jul 2023 • Sanjay Kariyappa, Leonidas Tsepenekas, Freddy Lécué, Daniele Magazzeni
While any method that computes SHAP values with uncertainty estimates (such as KernelSHAP and SamplingSHAP) can be trivially adapted to solve TkIP, doing so is highly sample-inefficient.
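The naive baseline described above can be sketched as follows. This is an illustrative Monte Carlo (permutation-sampling) Shapley estimator, not the paper's method; the function names and the toy linear model are hypothetical.

```python
import numpy as np

def sampling_shap(model, x, baseline, n_samples=2000, rng=None):
    """Monte Carlo (permutation) estimate of Shapley values for one input.

    model: callable mapping a (d,) array to a scalar prediction.
    baseline: reference input representing 'absent' features.
    """
    rng = np.random.default_rng(rng)
    d = len(x)
    phi = np.zeros(d)
    for _ in range(n_samples):
        perm = rng.permutation(d)
        z = baseline.copy()
        prev = model(z)
        for j in perm:
            z[j] = x[j]           # add feature j to the coalition
            cur = model(z)
            phi[j] += cur - prev  # marginal contribution of feature j
            prev = cur
    return phi / n_samples

def top_k_features(model, x, baseline, k=3, **kw):
    """Naive TkIP baseline: estimate every Shapley value, then rank."""
    phi = sampling_shap(model, x, baseline, **kw)
    return np.argsort(phi)[::-1][:k]

# Toy linear model: Shapley values equal w * (x - baseline) exactly.
w = np.array([3.0, -1.0, 0.5, 2.0])
model = lambda z: float(w @ z)
x, base = np.ones(4), np.zeros(4)
print(top_k_features(model, x, base, k=2))  # → [0 3]
```

The inefficiency is visible here: all d values must be estimated to high precision even though only the identity of the top-k features is needed.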
no code implementations • 5 Jun 2023 • Trishita Tiwari, Suchin Gururangan, Chuan Guo, Weizhe Hua, Sanjay Kariyappa, Udit Gupta, Wenjie Xiong, Kiwan Maeng, Hsien-Hsin S. Lee, G. Edward Suh
In today's machine learning (ML) models, any part of the training data can affect the model's output.
no code implementations • 21 Sep 2022 • Kiwan Maeng, Chuan Guo, Sanjay Kariyappa, Edward Suh
Split learning and split inference run the training/inference of a large model by splitting it across client devices and the cloud.
no code implementations • 12 Sep 2022 • Sanjay Kariyappa, Chuan Guo, Kiwan Maeng, Wenjie Xiong, G. Edward Suh, Moinuddin K Qureshi, Hsien-Hsin S. Lee
Federated learning (FL) aims to perform privacy-preserving machine learning on distributed data held by multiple data owners.
no code implementations • 25 Nov 2021 • Sanjay Kariyappa, Moinuddin K Qureshi
Split learning is a popular technique used for vertical federated learning (VFL), where the goal is to jointly train a model on the private input and label data held by two parties.
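The two-party setup described above can be sketched as a minimal forward pass, assuming one party holds the features and a bottom model while the other holds the labels and a top model; only the intermediate embedding crosses the party boundary. All names and shapes are illustrative, not the paper's protocol.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two-party split: the input party owns features and the bottom model;
# the label party owns labels and the top model.
W_bottom = rng.normal(size=(10, 4)) * 0.1   # input party's weights
W_top = rng.normal(size=(4, 1)) * 0.1       # label party's weights

def input_party_forward(x):
    # Only this embedding is sent to the label party, not the raw inputs.
    return np.maximum(x @ W_bottom, 0.0)

def label_party_loss(emb, y):
    # Binary cross-entropy computed entirely on the label party's side.
    logits = emb @ W_top
    p = 1.0 / (1.0 + np.exp(-logits))
    return float(np.mean(-(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))))

x = rng.normal(size=(32, 10))               # private to the input party
y = rng.integers(0, 2, size=(32, 1))        # private to the label party
loss = label_party_loss(input_party_forward(x), y)
```

In training, gradients with respect to the embedding would flow back across the same boundary, which is exactly the channel such privacy analyses focus on.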
no code implementations • 6 Apr 2021 • Sanjay Kariyappa, Ousmane Dia, Moinuddin K Qureshi
To this end, we propose Adaptive Noise Injection (ANI), which uses a lightweight DNN on the client side to inject noise into each input before transmitting it to the service provider to perform inference.
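The client-side step described above can be sketched as follows. This is a hypothetical illustration of input-dependent noise injection: the tiny two-layer network, its shapes, and the softplus scale head are assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical lightweight client-side network mapping an input to a
# per-feature noise scale (all names and shapes are illustrative).
W1, b1 = rng.normal(size=(8, 16)) * 0.1, np.zeros(16)
W2, b2 = rng.normal(size=(16, 8)) * 0.1, np.zeros(8)

def noise_scales(x):
    h = np.maximum(x @ W1 + b1, 0.0)      # ReLU hidden layer
    return np.log1p(np.exp(h @ W2 + b2))  # softplus keeps scales positive

def transmit(x):
    """Client side: add input-dependent noise before sending to the server."""
    eps = rng.normal(size=x.shape)
    return x + noise_scales(x) * eps

x = rng.normal(size=8)
x_noisy = transmit(x)  # only x_noisy leaves the client
```

Making the scale a function of the input (rather than a fixed constant) is what lets the client trade off utility and privacy per sample.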
no code implementations • ICLR 2021 • Sanjay Kariyappa, Atul Prakash, Moinuddin K Qureshi
EDM is made up of models that are trained to produce dissimilar predictions for OOD inputs.
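One simple way to exploit dissimilar OOD predictions is to flag inputs on which ensemble members disagree. The sketch below is a generic pairwise-disagreement score under that assumption, not EDM's actual training objective or detection rule.

```python
import numpy as np

def ensemble_disagreement(probs):
    """probs: (m, c) softmax outputs of m ensemble members for one input.

    Returns the fraction of member pairs predicting different classes;
    high disagreement suggests an OOD input.
    """
    preds = probs.argmax(axis=1)
    m = len(preds)
    diff = sum(preds[i] != preds[j] for i in range(m) for j in range(i + 1, m))
    return diff / (m * (m - 1) / 2)

agree = np.array([[0.9, 0.05, 0.05]] * 3)  # members concur: in-distribution
spread = np.eye(3) * 0.8 + 0.1             # each member picks a different class
print(ensemble_disagreement(agree))   # → 0.0
print(ensemble_disagreement(spread))  # → 1.0
```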
1 code implementation • CVPR 2021 • Sanjay Kariyappa, Atul Prakash, Moinuddin Qureshi
The effectiveness of such attacks relies heavily on the availability of data necessary to query the target model.
1 code implementation • CVPR 2020 • Sanjay Kariyappa, Moinuddin K. Qureshi
Deep Neural Networks (DNNs) are susceptible to model stealing attacks, which allow a data-limited adversary with no knowledge of the training dataset to clone the functionality of a target model using only black-box query access.
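The black-box threat model described above can be sketched in a few lines: the adversary queries the target only through its prediction interface, labels surrogate inputs with the responses, and fits a clone. The target (a linear classifier) and the least-squares clone are toy stand-ins, not the attack studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
w_target = np.array([2.0, -3.0, 1.0])  # hidden from the adversary

def target_predict(x):
    """The only interface the adversary sees: black-box label queries."""
    return (x @ w_target > 0).astype(float)

x_query = rng.normal(size=(500, 3))    # adversary's surrogate inputs
y_query = target_predict(x_query)      # labels obtained purely via queries

# Fit the clone by least squares on the query transcript (labels mapped to ±1).
w_clone, *_ = np.linalg.lstsq(x_query, 2 * y_query - 1, rcond=None)

x_test = rng.normal(size=(200, 3))
agreement = np.mean(target_predict(x_test) == (x_test @ w_clone > 0))
```

Even this crude clone tracks the target's decision boundary closely, which is why defenses focus on limiting what query responses reveal.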
1 code implementation • 28 Jan 2019 • Sanjay Kariyappa, Moinuddin K. Qureshi
Deep Neural Networks are vulnerable to adversarial attacks even in settings where the attacker has no direct access to the model being attacked.