Search Results for author: N. Asokan

Found 31 papers, 17 papers with code

SoK: Unintended Interactions among Machine Learning Defenses and Risks

1 code implementation · 7 Dec 2023 · Vasisht Duddu, Sebastian Szyller, N. Asokan

We survey existing literature on unintended interactions, accommodating them within our framework.

Fairness Memorization

Attesting Distributional Properties of Training Data for Machine Learning

1 code implementation · 18 Aug 2023 · Vasisht Duddu, Anudeep Das, Nora Khayata, Hossein Yalame, Thomas Schneider, N. Asokan

The success of machine learning (ML) has been accompanied by increased concerns about its trustworthiness.

FLARE: Fingerprinting Deep Reinforcement Learning Agents using Universal Adversarial Masks

1 code implementation · 27 Jul 2023 · Buse G. A. Tekgul, N. Asokan

We first show that it is possible to find non-transferable, universal adversarial masks, i.e., perturbations, to generate adversarial examples that can successfully transfer from a victim policy to its modified versions but not to independently trained policies.

Decision Making reinforcement-learning
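The excerpt above builds on the standard universal-perturbation idea: a single fixed mask that changes a model's decisions on many inputs at once. Below is a minimal sketch of that generic ingredient only, using a made-up linear victim and an FGSM-style sign step; it is not the paper's FLARE fingerprinting construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "victim": a fixed binary linear classifier. The weights are made up;
# this sketches only the generic universal-perturbation ingredient, not
# the FLARE construction from the paper.
w0, w1 = rng.normal(size=(2, 5))

def predict(x):
    return int(x @ w1 > x @ w0)

# One fixed mask, applied unchanged to every input, that pushes decisions
# toward class 1: step along the sign of the weight difference (FGSM-style).
eps = 1.0
mask = eps * np.sign(w1 - w0)

X = rng.normal(size=(100, 5))
class0 = [x for x in X if predict(x) == 0]
flipped = sum(predict(x + mask) for x in class0)  # class-0 inputs now flipped
```

FLARE's contribution is finding masks that are additionally non-transferable, so they affect modified versions of the victim policy but not independently trained ones; the sketch shows only the universal-mask part.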

GrOVe: Ownership Verification of Graph Neural Networks using Embeddings

no code implementations · 17 Apr 2023 · Asim Waheed, Vasisht Duddu, N. Asokan

In non-graph settings, fingerprinting models, or the data used to build them, has been shown to be a promising approach to ownership verification.

Model extraction

False Claims against Model Ownership Resolution

1 code implementation · 13 Apr 2023 · Jian Liu, Rui Zhang, Sebastian Szyller, Kui Ren, N. Asokan

Our core idea is that a malicious accuser can deviate (without detection) from the specified MOR process by finding (transferable) adversarial examples that successfully serve as evidence against independent suspect models.

On the Robustness of Dataset Inference

no code implementations · 24 Oct 2022 · Sebastian Szyller, Rui Zhang, Jian Liu, N. Asokan

However, in a subspace of the same setting, we prove that DI suffers from high false positives (FPs) -- it can incorrectly identify an independent model trained with non-overlapping data from the same distribution as stolen.

Conflicting Interactions Among Protection Mechanisms for Machine Learning Models

1 code implementation · 5 Jul 2022 · Sebastian Szyller, N. Asokan

We then focus on systematically analyzing pairwise interactions between protection mechanisms for one concern, model and data ownership verification, with two other classes of ML protection mechanisms: differentially private training, and robustness against model evasion.

BIG-bench Machine Learning

On the Effectiveness of Dataset Watermarking in Adversarial Settings

1 code implementation · 25 Feb 2022 · Buse Gul Atli Tekgul, N. Asokan

We show that radioactive data can effectively survive model extraction attacks, which raises the possibility that it can be used for ML model ownership verification robust against model extraction.

Model extraction

Do Transformers know symbolic rules, and would we know if they did?

no code implementations · 19 Feb 2022 · Tommi Gröndahl, Yujia Guo, N. Asokan

To facilitate this, we experiment on four sequence modelling tasks on the T5 Transformer in two experiment settings: zero-shot generalization, and generalization across class-specific vocabularies flipped between the training and test set.

Zero-shot Generalization

SHAPr: An Efficient and Versatile Membership Privacy Risk Metric for Machine Learning

no code implementations · 4 Dec 2021 · Vasisht Duddu, Sebastian Szyller, N. Asokan

Using ten benchmark datasets, we show that SHAPr is indeed effective in estimating susceptibility of training data records to MIAs.

BIG-bench Machine Learning Data Valuation +2
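A crude proxy for the per-record membership susceptibility the excerpt describes is leave-one-out influence on a record's own prediction: records the model can only classify correctly because they are in the training set are the most exposed. The sketch below uses a 1-NN model and synthetic data, and is only that leave-one-out proxy, not SHAPr's actual Shapley-value computation.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(50, 3))
y = (X[:, 0] > 0).astype(int)      # synthetic labels, illustrative only

def nn_predict(train_X, train_y, x):
    # 1-nearest-neighbour prediction.
    return train_y[np.argmin(np.linalg.norm(train_X - x, axis=1))]

# Leave-one-out influence of each record on its own prediction: a 1-NN
# model is trivially correct on a record while it is in the training set,
# so the score is 1 exactly when removing the record breaks its prediction.
scores = []
for i in range(len(X)):
    keep = np.arange(len(X)) != i
    without_i = int(nn_predict(X[keep], y[keep], X[i]) == y[i])
    scores.append(1 - without_i)
```

Records with score 1 depend on their own presence in the training set, a rough signal of vulnerability to membership inference attacks (MIAs).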

Real-time Adversarial Perturbations against Deep Reinforcement Learning Policies: Attacks and Defenses

1 code implementation · 16 Jun 2021 · Buse G. A. Tekgul, Shelly Wang, Samuel Marchal, N. Asokan

Via an extensive evaluation using three Atari 2600 games, we show that our attacks are effective, as they fully degrade the performance of three different DRL agents (up to 100%, even when the $l_\infty$ bound on the perturbation is as small as 0.01).

Atari Games reinforcement-learning +1

Good Artists Copy, Great Artists Steal: Model Extraction Attacks Against Image Translation Models

no code implementations · 26 Apr 2021 · Sebastian Szyller, Vasisht Duddu, Tommi Gröndahl, N. Asokan

We present a framework for conducting such attacks, and show that an adversary can successfully extract functional surrogate models by querying $F_V$ using data from the same domain as the training data for $F_V$.

Generative Adversarial Network Image Classification +5
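The extraction setting in the excerpt can be illustrated with a toy: a victim $F_V$ behind a label-only prediction API, and an adversary that fits a surrogate on the returned labels. The linear victim, the perceptron surrogate, and all names here are illustrative assumptions, not the paper's image-translation setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical victim F_V: a fixed linear classifier behind a prediction
# API that returns only hard labels.
w_victim = rng.normal(size=(5,))

def query_victim(X):
    return (X @ w_victim > 0).astype(int)

# Adversary: query with data from the same domain as the victim's
# training data, then fit a surrogate on the stolen labels using a
# plain perceptron update rule.
X_attack = rng.normal(size=(500, 5))
y_stolen = query_victim(X_attack)

w_surrogate = np.zeros(5)
for _ in range(20):
    for x, y in zip(X_attack, y_stolen):
        pred = int(x @ w_surrogate > 0)
        w_surrogate += (y - pred) * x   # move toward the victim's decision

# Functional agreement between surrogate and victim on fresh inputs.
X_test = rng.normal(size=(500, 5))
agreement = float(np.mean(
    (X_test @ w_surrogate > 0).astype(int) == query_victim(X_test)))
```

High agreement on held-out inputs is what makes the surrogate "functional": the adversary never sees the victim's weights, only its answers.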

WAFFLE: Watermarking in Federated Learning

1 code implementation · 17 Aug 2020 · Buse Gul Atli, Yuxi Xia, Samuel Marchal, N. Asokan

In this paper, we present WAFFLE, the first approach to watermark DNN models trained using federated learning.

Federated Learning

Extraction of Complex DNN Models: Real Threat or Boogeyman?

no code implementations · 11 Oct 2019 · Buse Gul Atli, Sebastian Szyller, Mika Juuti, Samuel Marchal, N. Asokan

However, model extraction attacks can steal the functionality of ML models using the information leaked to clients through the results returned via the API.

Model extraction

Making targeted black-box evasion attacks effective and efficient

no code implementations · 8 Jun 2019 · Mika Juuti, Buse Gul Atli, N. Asokan

We investigate how an adversary can optimally use its query budget for targeted evasion attacks against deep neural networks in a black-box setting.

DAWN: Dynamic Adversarial Watermarking of Neural Networks

1 code implementation · 3 Jun 2019 · Sebastian Szyller, Buse Gul Atli, Samuel Marchal, N. Asokan

Existing watermarking schemes are ineffective against IP theft via model extraction since it is the adversary who trains the surrogate model.

Model extraction
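The dynamic-watermarking idea in the excerpt is that the API itself, rather than the training procedure, embeds the watermark by answering a small keyed fraction of queries incorrectly, so any surrogate trained from those answers inherits it. A minimal sketch of that general mechanism follows; the keyed-hash selection rule, the flip rule, and the key are my illustration, not DAWN's exact construction.

```python
import hashlib
import numpy as np

rng = np.random.default_rng(2)
SECRET_KEY = b"owner-secret"       # illustrative key, not from the paper

w_victim = rng.normal(size=(5,))   # toy binary classifier

def watermarked_api(x, n_classes=2, rate=8):
    # Deterministically select roughly 1/rate of all queries via a keyed
    # hash of the input, and answer those with a flipped label; every
    # other query is answered honestly.
    h = hashlib.sha256(SECRET_KEY + x.tobytes()).digest()
    honest = int(x @ w_victim > 0)
    if h[0] % rate == 0:
        return (honest + 1) % n_classes, True    # watermark response
    return honest, False

X = rng.normal(size=(200, 5))
responses = [watermarked_api(x) for x in X]

# The (input, wrong label) pairs form the owner's trigger set: a surrogate
# trained on these API responses memorizes them, which later serves as
# evidence of extraction.
trigger_set = [(x, y) for x, (y, hit) in zip(X, responses) if hit]
```

Because the selection is keyed and deterministic, only the owner can reproduce the trigger set, and an independently trained model has no reason to agree with its deliberately wrong labels.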

Effective writing style imitation via combinatorial paraphrasing

no code implementations · 31 May 2019 · Tommi Gröndahl, N. Asokan

Finally, we highlight a critical problem that afflicts all current style transfer techniques: the adversary can use the same technique for thwarting style transfer via adversarial training.

Style Transfer

PACStack: an Authenticated Call Stack

no code implementations · 24 May 2019 · Hans Liljestrand, Thomas Nyman, Lachlan J. Gunn, Jan-Erik Ekberg, N. Asokan

Software shadow stacks incur high overheads or trade off security for efficiency.

Cryptography and Security

S-FaaS: Trustworthy and Accountable Function-as-a-Service using Intel SGX

1 code implementation · 14 Oct 2018 · Fritz Alder, N. Asokan, Arseny Kurnikov, Andrew Paverd, Michael Steiner

A core contribution of S-FaaS is our set of resource measurement mechanisms that securely measure compute time inside an enclave, and actual memory allocations.

Cryptography and Security

All You Need is "Love": Evading Hate-speech Detection

no code implementations · 28 Aug 2018 · Tommi Gröndahl, Luca Pajola, Mika Juuti, Mauro Conti, N. Asokan

With the spread of social networks and their unfortunate use for hate speech, automatic detection of the latter has become a pressing problem.

Hate Speech Detection

Stay On-Topic: Generating Context-specific Fake Restaurant Reviews

1 code implementation · 7 May 2018 · Mika Juuti, Bo Sun, Tatsuya Mori, N. Asokan

Automatically generated fake restaurant reviews are a threat to online review systems.

Machine Translation NMT

PRADA: Protecting against DNN Model Stealing Attacks

2 code implementations · 7 May 2018 · Mika Juuti, Sebastian Szyller, Samuel Marchal, N. Asokan

Access to the model can be restricted to be only via well-defined prediction APIs.

Cryptography and Security

Keys in the Clouds: Auditable Multi-device Access to Cryptographic Credentials

1 code implementation · 23 Apr 2018 · Arseny Kurnikov, Andrew Paverd, Mohammad Mannan, N. Asokan

Personal cryptographic keys are the foundation of many secure services, but storing these keys securely is a challenge, especially if they are used from multiple devices.

Cryptography and Security

DIoT: A Self-learning System for Detecting Compromised IoT Devices

no code implementations · 20 Apr 2018 · Thien Duc Nguyen, Samuel Marchal, Markus Miettinen, N. Asokan, Ahmad-Reza Sadeghi

Consequently, DIoT can cope with the emergence of new device types as well as new attacks.

Cryptography and Security

Towards Linux Kernel Memory Safety

no code implementations · 17 Oct 2017 · Elena Reshetova, Hans Liljestrand, Andrew Paverd, N. Asokan

The security of billions of devices worldwide depends on the security and robustness of the mainline Linux kernel.

Cryptography and Security Operating Systems

IoT Sentinel: Automated Device-Type Identification for Security Enforcement in IoT

2 code implementations · 15 Nov 2016 · Markus Miettinen, Samuel Marchal, Ibbad Hafeez, N. Asokan, Ahmad-Reza Sadeghi, Sasu Tarkoma

In this paper, we present IOT SENTINEL, a system capable of automatically identifying the types of devices being connected to an IoT network and enabling enforcement of rules for constraining the communications of vulnerable devices so as to minimize damage resulting from their compromise.

Cryptography and Security

C-FLAT: Control-FLow ATtestation for Embedded Systems Software

1 code implementation · 25 May 2016 · Tigist Abera, N. Asokan, Lucas Davi, Jan-Erik Ekberg, Thomas Nyman, Andrew Paverd, Ahmad-Reza Sadeghi, Gene Tsudik

Remote attestation is a crucial security service particularly relevant to increasingly popular IoT (and other embedded) devices.

Cryptography and Security

Sensor-based Proximity Detection in the Face of Active Adversaries

no code implementations · 3 Nov 2015 · Babins Shrestha, Nitesh Saxena, Hien Thi Thu Truong, N. Asokan

Contextual proximity detection (or, co-presence detection) is a promising approach to defend against relay attacks in many mobile authentication systems.

Cryptography and Security

Practical Attacks Against Privacy and Availability in 4G/LTE Mobile Communication Systems

1 code implementation · 26 Oct 2015 · Altaf Shaik, Ravishankar Borgaonkar, N. Asokan, Valtteri Niemi, Jean-Pierre Seifert

We carefully analyzed LTE access network protocol specifications and uncovered several vulnerabilities.

Cryptography and Security
