Search Results for author: Murali Annavaram

Found 24 papers, 8 papers with code

Differentially Private Next-Token Prediction of Large Language Models

no code implementations22 Mar 2024 James Flemings, Meisam Razaviyayn, Murali Annavaram

Ensuring the privacy of Large Language Models (LLMs) is becoming increasingly important.

Edge Private Graph Neural Networks with Singular Value Perturbation

no code implementations16 Mar 2024 Tingting Tang, Yue Niu, Salman Avestimehr, Murali Annavaram

Eclipse adds noise to the low-rank singular values instead of the entire graph, preserving graph privacy while retaining enough of the graph structure to maintain model utility.

Privacy Preserving
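The low-rank perturbation idea summarized in this entry can be sketched in a few lines: decompose the adjacency matrix, add noise only to the top-k singular values, and reconstruct. The function name, rank `k`, and Gaussian noise scale below are illustrative assumptions, not the paper's exact mechanism or privacy calibration.

```python
import numpy as np

def perturb_low_rank(adj, k=4, noise_scale=0.1, seed=0):
    """Hypothetical sketch: noise the top-k singular values, not every edge."""
    rng = np.random.default_rng(seed)
    U, s, Vt = np.linalg.svd(adj, full_matrices=False)
    s_k = s[:k] + rng.normal(0.0, noise_scale, size=k)  # noise on k values only
    # Low-rank, noisy reconstruction of the graph
    return U[:, :k] @ np.diag(s_k) @ Vt[:k, :]

adj = np.random.default_rng(1).integers(0, 2, size=(8, 8)).astype(float)
adj = np.maximum(adj, adj.T)  # symmetric toy adjacency matrix
noisy = perturb_low_rank(adj)
```

Because only k rank-one components survive, the released matrix has rank at most k regardless of the noise draw.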

Ethos: Rectifying Language Models in Orthogonal Parameter Space

no code implementations13 Mar 2024 Lei Gao, Yue Niu, Tingting Tang, Salman Avestimehr, Murali Annavaram

Evaluations show Ethos is more effective in removing undesired knowledge and maintaining the overall model performance compared to current task arithmetic methods.

Memorization
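For context on the comparison in this entry, the task-arithmetic baseline can be sketched as subtracting a "task vector" (the fine-tuning delta) to remove an undesired behavior. This toy shows only that baseline; Ethos's orthogonal-parameter-space approach is not reproduced here, and the weights are illustrative.

```python
import numpy as np

theta_pre = np.array([1.0, 2.0, 3.0])        # pretrained weights (toy)
theta_ft = np.array([1.5, 1.0, 3.5])         # fine-tuned on undesired data
task_vector = theta_ft - theta_pre           # "task vector" = fine-tuning delta
theta_forgot = theta_ft - 1.0 * task_vector  # negate the task vector to forget
```

With a scaling coefficient of 1.0 the negation exactly undoes the fine-tuning delta; in practice the coefficient is tuned to trade forgetting against overall model performance.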

Differentially Private Knowledge Distillation via Synthetic Text Generation

no code implementations1 Mar 2024 James Flemings, Murali Annavaram

However, the increasing urgency of data privacy requires LLMs to train with Differential Privacy (DP) on private data.

Knowledge Distillation Model Compression +1

Data Leakage via Access Patterns of Sparse Features in Deep Learning-based Recommendation Systems

no code implementations12 Dec 2022 Hanieh Hashemi, Wenjie Xiong, Liu Ke, Kiwan Maeng, Murali Annavaram, G. Edward Suh, Hsien-Hsin S. Lee

This paper explores the private information that may be learned by tracking a recommendation model's sparse feature access patterns.

Recommendation Systems
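The leakage channel this entry describes can be illustrated directly: a sparse (categorical) feature is served by an embedding-table lookup, so an observer of which rows are read recovers the raw feature values. Table shape and feature values below are arbitrary toy choices.

```python
import numpy as np

table = np.random.default_rng(0).normal(size=(100, 8))  # embedding table
user_features = [7, 42, 42, 99]     # private categorical inputs

accessed_rows = []                  # what a memory-access observer records
for idx in user_features:
    _ = table[idx]                  # embedding lookup for this feature
    accessed_rows.append(idx)       # the access pattern IS the feature value
```

The observed access trace equals the private input sequence, which is the core of the attack surface the paper studies.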

MPC-Pipe: an Efficient Pipeline Scheme for Secure Multi-party Machine Learning Inference

no code implementations27 Sep 2022 Yongqin Wang, Rachit Rajat, Murali Annavaram

Multi-party computation (MPC) has been gaining popularity in recent years as a secure computing model, particularly for machine learning (ML) inference.
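As background for the MPC setting in this entry, additive secret sharing is the basic primitive behind many MPC inference protocols: each party holds a random-looking share, and linear operations can be applied share-wise. This is purely illustrative background; MPC-Pipe's actual pipelined protocol is not shown here.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.array([3.0, -1.5, 2.25])      # private activation vector
share_a = rng.normal(size=x.shape)   # party A's share (random mask)
share_b = x - share_a                # party B's share; a + b reconstructs x

# Linear layers commute with sharing: W @ a + W @ b == W @ x
W = rng.normal(size=(2, 3))
y = W @ share_a + W @ share_b
```

Neither share alone reveals `x`, yet the parties' partial results sum to the true linear-layer output, which is why non-linear layers are the expensive part of MPC inference.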

DarKnight: An Accelerated Framework for Privacy and Integrity Preserving Deep Learning Using Trusted Hardware

no code implementations30 Jun 2022 Hanieh Hashemi, Yongqin Wang, Murali Annavaram

DarKnight relies on cooperative execution between trusted execution environments (TEE) and accelerators, where the TEE provides privacy and integrity verification, while accelerators perform the bulk of the linear algebraic computation to optimize the performance.
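The division of labor this entry describes (TEE for privacy, accelerator for bulk linear algebra) can be sketched with generic additive blinding of a linear map: the trusted side masks the input, the untrusted accelerator computes on masked data, and the trusted side unblinds. This is a generic illustration, not DarKnight's exact encoding or integrity check.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))   # public linear-layer weights
x = rng.normal(size=4)        # private input held inside the TEE

r = rng.normal(size=4)        # TEE: random blinding vector
blinded = x + r               # TEE sends only the blinded input out

y_blinded = W @ blinded       # accelerator: bulk linear computation
y = y_blinded - W @ r         # TEE: unblind, using linearity of W
```

Correctness follows from linearity: W(x + r) - Wr = Wx, so the accelerator never sees `x` yet the TEE recovers the true output.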

Attribute Inference Attack of Speech Emotion Recognition in Federated Learning Settings

1 code implementation26 Dec 2021 Tiantian Feng, Hanieh Hashemi, Rajat Hebbar, Murali Annavaram, Shrikanth S. Narayanan

To assess the information leakage of SER systems trained using FL, we propose an attribute inference attack framework that infers sensitive attribute information of the clients from shared gradients or model parameters, corresponding to the FedSGD and the FedAvg training algorithms, respectively.

Attribute · Federated Learning +2
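The two sharing modes named in this entry differ in what the attacker observes: FedSGD clients share a per-round gradient, while FedAvg clients share locally updated parameters. The toy linear model and squared-error loss below are illustrative assumptions, not the SER models from the paper.

```python
import numpy as np

def local_gradient(w, x, y):
    # Gradient of 0.5 * (w.x - y)^2 with respect to w
    return (w @ x - y) * x

def fedsgd_share(w, x, y):
    return local_gradient(w, x, y)      # server observes the raw gradient

def fedavg_share(w, x, y, lr=0.1, steps=5):
    w = w.copy()
    for _ in range(steps):              # several local SGD steps
        w -= lr * local_gradient(w, x, y)
    return w                            # server observes updated parameters
```

Either shared artifact is a function of the client's private data, which is what an attribute inference attack exploits.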

SpreadGNN: Serverless Multi-task Federated Learning for Graph Neural Networks

1 code implementation4 Jun 2021 Chaoyang He, Emir Ceyani, Keshav Balasubramanian, Murali Annavaram, Salman Avestimehr

This work proposes SpreadGNN, a novel multi-task federated training framework that, for the first time in the literature, can operate in the presence of partial labels and the absence of a central server.

BIG-bench Machine Learning · Federated Learning +3
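Serverless operation, as claimed in this entry, means aggregation happens peer-to-peer rather than at a central server. A common way to realize this is gossip averaging, sketched below with a toy fully connected topology; the update rule and topology are illustrative assumptions, not SpreadGNN's actual optimizer.

```python
import numpy as np

def gossip_round(params, neighbors):
    """One decentralized round: each client averages with its graph neighbors."""
    return {c: np.mean([params[n] for n in [c] + neighbors[c]], axis=0)
            for c in params}

params = {0: np.array([0.0]), 1: np.array([1.0]), 2: np.array([2.0])}
topology = {0: [1, 2], 1: [0, 2], 2: [0, 1]}  # fully connected toy graph
for _ in range(3):
    params = gossip_round(params, topology)
```

On a connected topology, repeated rounds drive all clients toward the global average without any client ever contacting a server.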

Byzantine-Robust and Privacy-Preserving Framework for FedML

no code implementations5 May 2021 Hanieh Hashemi, Yongqin Wang, Chuan Guo, Murali Annavaram

This learning setting presents, among others, two unique challenges: how to protect privacy of the clients' data during training, and how to ensure integrity of the trained model.

Federated Learning · Privacy Preserving

Privacy and Integrity Preserving Training Using Trusted Hardware

no code implementations1 May 2021 Hanieh Hashemi, Yongqin Wang, Murali Annavaram

Privacy and security-related concerns are growing as machine learning reaches diverse application domains.

BIG-bench Machine Learning

FedGraphNN: A Federated Learning System and Benchmark for Graph Neural Networks

1 code implementation14 Apr 2021 Chaoyang He, Keshav Balasubramanian, Emir Ceyani, Carl Yang, Han Xie, Lichao Sun, Lifang He, Liangwei Yang, Philip S. Yu, Yu Rong, Peilin Zhao, Junzhou Huang, Murali Annavaram, Salman Avestimehr

FedGraphNN is built on a unified formulation of graph FL and contains a wide range of datasets from different domains, popular GNN models, and FL algorithms, with secure and efficient system support.

Federated Learning · Molecular Property Prediction

Check-N-Run: A Checkpointing System for Training Deep Learning Recommendation Models

no code implementations17 Oct 2020 Assaf Eisenman, Kiran Kumar Matam, Steven Ingram, Dheevatsa Mudigere, Raghuraman Krishnamoorthi, Krishnakumar Nair, Misha Smelyanskiy, Murali Annavaram

While Check-N-Run is applicable to long-running ML jobs, we focus on checkpointing recommendation models, which are currently the largest ML models, with sizes in the terabytes.

Quantization · Recommendation Systems
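Given the Quantization tag on this entry, one way checkpoint size can be cut is uniform quantization of large embedding tables before writing them out. The scheme and names below are generic assumptions for illustration, not Check-N-Run's actual checkpoint format.

```python
import numpy as np

def quantize(table):
    """Uniform 8-bit quantization: store uint8 codes plus (offset, scale)."""
    lo, hi = table.min(), table.max()
    scale = (hi - lo) / 255 if hi > lo else 1.0
    q = np.round((table - lo) / scale).astype(np.uint8)
    return q, lo, scale

def dequantize(q, lo, scale):
    return q.astype(np.float32) * scale + lo

table = np.random.default_rng(0).normal(size=(1000, 16)).astype(np.float32)
q, lo, scale = quantize(table)
restored = dequantize(q, lo, scale)
```

The uint8 codes are 4x smaller than the float32 table, at the cost of a bounded per-entry reconstruction error of at most half the quantization step.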

Towards Non-I.I.D. and Invisible Data with FedNAS: Federated Deep Learning via Neural Architecture Search

1 code implementation18 Apr 2020 Chaoyang He, Murali Annavaram, Salman Avestimehr

Federated Learning (FL) has proven to be an effective learning framework when data cannot be centralized due to privacy, communication costs, or regulatory restrictions.

Federated Learning · Neural Architecture Search

Train Where the Data is: A Case for Bandwidth Efficient Coded Training

no code implementations22 Oct 2019 Zhifeng Lin, Krishna Giri Narra, Mingchao Yu, Salman Avestimehr, Murali Annavaram

Most model training is performed on high-performance compute nodes, with the training data stored near these nodes for faster training.

Collage Inference: Using Coded Redundancy for Low Variance Distributed Image Classification

no code implementations27 Apr 2019 Krishna Giri Narra, Zhifeng Lin, Ganesh Ananthanarayanan, Salman Avestimehr, Murali Annavaram

Deploying the collage-cnn models in the cloud, we demonstrate that the 99th-percentile tail latency of inference can be reduced by 1.2x to 2x compared to replication-based approaches while providing high accuracy.

Classification · Cloud Computing +3
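The coded-redundancy idea behind this entry is to tile several images into one collage so that a single redundant request can cover multiple originals if their individual inference requests straggle. The 2x2 grid and image shapes below are illustrative assumptions, not the paper's exact encoding.

```python
import numpy as np

def make_collage(images, grid=2):
    """Tile up to grid*grid equally sized images into one collage image."""
    h, w, c = images[0].shape
    collage = np.zeros((grid * h, grid * w, c), dtype=images[0].dtype)
    for i, img in enumerate(images[:grid * grid]):
        row, col = divmod(i, grid)  # row-major placement in the grid
        collage[row * h:(row + 1) * h, col * w:(col + 1) * w] = img
    return collage

# Four toy 32x32 "images", each a distinct constant value for checking
batch = [np.full((32, 32, 3), v, dtype=np.uint8) for v in (10, 20, 30, 40)]
collage = make_collage(batch)
```

One collage request then stands in for four single-image requests, which is how a multi-object model over the collage can serve as a low-overhead backup for slow replicas.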
