Search Results for author: Neeraj Suri

Found 5 papers, 0 papers with code

Compilation as a Defense: Enhancing DL Model Attack Robustness via Tensor Optimization

no code implementations · 20 Sep 2023 · Stefan Trawicki, William Hackett, Lewis Birch, Neeraj Suri, Peter Garraghan

Adversarial Machine Learning (AML) is a rapidly growing field of security research, with model attacks through side-channels being an often overlooked area.

Model Leeching: An Extraction Attack Targeting LLMs

no code implementations · 19 Sep 2023 · Lewis Birch, William Hackett, Stefan Trawicki, Neeraj Suri, Peter Garraghan

Model Leeching is a novel extraction attack targeting Large Language Models (LLMs), capable of distilling task-specific knowledge from a target LLM into a reduced parameter model.

Adversarial Attack
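The abstract's core idea, distilling a target model's task-specific knowledge into a smaller student by querying it, can be sketched as below. This is a minimal illustrative toy, not the paper's Model Leeching method: the linear "target", the query budget, and the reduced-feature student are all assumptions made for the example.

```python
import numpy as np

# Toy sketch of extraction-by-distillation: an attacker queries a
# black-box target model and fits a smaller student on the harvested
# soft labels. All models and sizes here are illustrative assumptions.

rng = np.random.default_rng(0)

# Black-box target: the attacker can call predict() but not read weights.
W_target = rng.normal(size=(10, 2))

def target_predict(X):
    logits = X @ W_target
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)  # soft labels (probabilities)

# Step 1: harvest a query dataset of (input, soft-label) pairs.
X_query = rng.normal(size=(500, 10))
y_soft = target_predict(X_query)

# Step 2: fit a reduced-parameter student (here it only sees the first
# 5 features) by gradient descent on cross-entropy vs. the soft labels.
W_student = np.zeros((5, 2))
for _ in range(300):
    logits = X_query[:, :5] @ W_student
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    p = e / e.sum(axis=1, keepdims=True)
    grad = X_query[:, :5].T @ (p - y_soft) / len(X_query)
    W_student -= 0.5 * grad

# Step 3: measure how often the student's predictions match the target's.
X_test = rng.normal(size=(200, 10))
agreement = np.mean(
    target_predict(X_test).argmax(axis=1)
    == (X_test[:, :5] @ W_student).argmax(axis=1)
)
print(f"student/target agreement: {agreement:.2f}")
```

The student here has half the target's parameters yet still tracks its decisions on most inputs, which is the essence of an extraction attack: useful behavioural knowledge transfers through query access alone.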

Specifying Autonomous System Behaviour

no code implementations · 20 Feb 2023 · Andrew Sogokon, Burak Yuksek, Gokhan Inalhan, Neeraj Suri

Specifying the intended behaviour of autonomous systems is becoming increasingly important but is fraught with many challenges.

Privacy-preserving Decentralized Federated Learning over Time-varying Communication Graph

no code implementations · 1 Oct 2022 · Yang Lu, Zhengxin Yu, Neeraj Suri

Establishing how a set of learners can provide privacy-preserving federated learning in a fully decentralized (peer-to-peer, no coordinator) manner is an open problem.

Computational Efficiency · Federated Learning +1
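The coordinator-free setting described above can be illustrated with a small gossip-averaging sketch over a graph whose edges change each round. This is an assumption-laden toy, not the paper's algorithm, and it omits the privacy-preserving mechanism entirely; the topology and Metropolis weights are choices made for the example.

```python
import numpy as np

# Toy sketch of decentralized (peer-to-peer, no coordinator) model
# averaging over a time-varying communication graph: each round, peers
# mix parameters only with that round's neighbours. Illustrative only;
# the privacy mechanism from the paper is NOT modelled here.

rng = np.random.default_rng(1)
n_peers, dim = 6, 4

# Each peer starts from its own locally trained model parameters.
models = rng.normal(size=(n_peers, dim))
global_avg = models.mean(axis=0)  # the consensus target

def mixing_matrix(edges, n):
    """Symmetric, doubly stochastic Metropolis weights for one round."""
    W = np.zeros((n, n))
    deg = np.zeros(n, dtype=int)
    for i, j in edges:
        deg[i] += 1
        deg[j] += 1
    for i, j in edges:
        w = 1.0 / (1 + max(deg[i], deg[j]))
        W[i, j] = W[j, i] = w
    for i in range(n):
        W[i, i] = 1.0 - W[i].sum()  # self-weight keeps rows summing to 1
    return W

# Time-varying topology: alternate between a ring (i, i+1) and the
# "skip" graph (i, i+2); connectivity over time drives consensus.
for t in range(50):
    step = 1 + t % 2
    edges = [(i, (i + step) % n_peers) for i in range(n_peers)]
    models = mixing_matrix(edges, n_peers) @ models

spread = np.abs(models - global_avg).max()
print(f"max deviation from global average: {spread:.2e}")
```

Because each round's mixing matrix is doubly stochastic, the network-wide average is preserved exactly while every peer's parameters contract toward it, so all peers converge to the same model without any central aggregator.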

PINCH: An Adversarial Extraction Attack Framework for Deep Learning Models

no code implementations · 13 Sep 2022 · William Hackett, Stefan Trawicki, Zhengxin Yu, Neeraj Suri, Peter Garraghan

Adversarial extraction attacks constitute an insidious threat against Deep Learning (DL) models, in which an adversary aims to steal the architecture, parameters, and hyper-parameters of a targeted DL model.

Adversarial Attack
