1 code implementation • 27 Jul 2023 • Buse G. A. Tekgul, N. Asokan
We first show that it is possible to find non-transferable, universal adversarial masks, i.e., perturbations, to generate adversarial examples that can successfully transfer from a victim policy to its modified versions but not to independently trained policies.
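A universal mask of this kind is a single fixed perturbation that is added to every observation the victim policy receives. The sketch below illustrates the idea only; the function name, shapes, and the choice of an $l_\infty$ budget are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def apply_universal_mask(obs, delta, eps=0.01):
    """Apply one fixed ("universal") perturbation delta to an observation.

    Illustrative sketch: delta is projected onto an l_inf ball of radius
    eps, and the perturbed observation is kept in the valid pixel range.
    """
    delta = np.clip(delta, -eps, eps)          # enforce the l_inf budget
    return np.clip(obs + delta, 0.0, 1.0)      # keep pixels in [0, 1]

# Toy Atari-like observation and a candidate mask (random, for illustration).
obs = np.random.rand(84, 84).astype(np.float32)
delta = (np.random.randn(84, 84) * 0.1).astype(np.float32)
adv = apply_universal_mask(obs, delta)
```

Because the same `delta` is reused across all observations, testing whether it still fools a modified copy of the victim policy (but not an independently trained one) is what distinguishes non-transferable masks.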
1 code implementation • 16 Jun 2021 • Buse G. A. Tekgul, Shelly Wang, Samuel Marchal, N. Asokan
Via an extensive evaluation using three Atari 2600 games, we show that our attacks are effective, as they fully degrade the performance of three different DRL agents (up to 100%, even when the $l_\infty$ bound on the perturbation is as small as 0.01).
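An $l_\infty$ bound of 0.01 means each pixel of the agent's observation may change by at most 0.01. A common way to realize such a bound is a signed-gradient step (FGSM-style); the sketch below assumes `grad` stands in for the gradient of the agent's loss with respect to its input, and is not the paper's specific attack.

```python
import numpy as np

def linf_step(obs, grad, eps=0.01):
    """One l_inf-bounded adversarial step (FGSM-style sketch).

    Each pixel moves by exactly +/- eps in the direction of the gradient
    sign, so the perturbation's l_inf norm never exceeds eps = 0.01.
    """
    adv = obs + eps * np.sign(grad)    # per-pixel step of magnitude eps
    return np.clip(adv, 0.0, 1.0)      # stay in the valid pixel range

# Toy stacked Atari frames and a stand-in gradient (random, for illustration).
obs = np.random.rand(4, 84, 84).astype(np.float32)
grad = np.random.randn(*obs.shape).astype(np.float32)
adv = linf_step(obs, grad)
```

Clipping to the valid pixel range can only shrink the per-pixel change, so the perturbation stays within the stated budget.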