no code implementations • 2 Nov 2023 • Abhijith Sharma, Phil Munz, Apurva Narayan
The number of patches in a patch attack is variable and determines the attack's potency in a specific environment.
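Since the abstract only states that the patch count is variable, the following is a minimal sketch of what a multi-patch attack application step could look like; the function name, the square-patch shape, and the pixel-overwrite placement are illustrative assumptions, not the paper's method.

```python
import numpy as np

def apply_patches(image, patches, locations):
    """Paste a variable number of adversarial patches onto an image.

    image:     H x W x C float array
    patches:   list of h x w x C patch arrays (the patch count is variable)
    locations: list of (row, col) top-left coordinates, one per patch
    """
    attacked = image.copy()
    for patch, (r, c) in zip(patches, locations):
        h, w, _ = patch.shape
        attacked[r:r + h, c:c + w, :] = patch  # overwrite pixels with the patch
    return attacked

# Example: two 8x8 patches on a 32x32 RGB image
img = np.random.rand(32, 32, 3)
ps = [np.random.rand(8, 8, 3) for _ in range(2)]
out = apply_patches(img, ps, [(0, 0), (20, 20)])
```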
no code implementations • 27 Jul 2023 • Abhijith Sharma, Phil Munz, Apurva Narayan
Visual AI systems are vulnerable to natural and synthetic physical corruption in the real world.
no code implementations • 16 Jun 2022 • Abhijith Sharma, Yijun Bian, Phil Munz, Apurva Narayan
Adversarial attacks on deep learning models, especially in safety-critical systems, have been gaining increasing attention in recent years, owing to the lack of trust in the security and robustness of AI models.
no code implementations • 4 Jun 2022 • Abhijith Sharma, Apurva Narayan
The focus of our work is to use abstract certification to extract a subset of inputs for adversarial training (hence we call it 'soft' adversarial training).
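The abstract does not specify the certification procedure, so the sketch below substitutes the exact L-infinity certificate for a binary linear classifier as a stand-in; the function names, the FGSM-style perturbation, and the logistic-loss update are all illustrative assumptions about how a certified/uncertified split could drive "soft" adversarial training.

```python
import numpy as np

def certified_mask(X, y, w, b, eps):
    """Exact L-infinity certificate for a binary linear classifier f(x) = w.x + b.

    An input is certified robust at radius eps iff its signed margin
    y * (w.x + b) exceeds eps * ||w||_1 (the worst-case linear perturbation).
    """
    margin = y * (X @ w + b)
    return margin > eps * np.abs(w).sum()

def soft_adversarial_step(X, y, w, b, eps, lr=0.1):
    """One 'soft' training step: only uncertified inputs receive adversarial examples."""
    mask = certified_mask(X, y, w, b, eps)
    X_train = X.copy()
    # FGSM-style worst-case perturbation, applied only to the uncertified subset
    X_train[~mask] -= eps * y[~mask, None] * np.sign(w)[None, :]
    # Logistic-loss gradient step on the mixed (clean + adversarial) batch
    z = y * (X_train @ w + b)
    g = -y / (1.0 + np.exp(z))                       # d(loss)/d(margin)
    w -= lr * (X_train * g[:, None]).mean(axis=0)
    b -= lr * g.mean()
    return w, b, mask
```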
no code implementations • 7 Jan 2022 • Abhijith Sharma, Chaitanya Jugade, Shreya Yawalkar, Vaishali Patne, Deepak Ingole, Dayaram Sonawane
To overcome the bottleneck of traditional quadratic programming (QP) solvers, this paper proposes a robust penalty method (RPM) to solve the optimization problem in linear MPC.
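The paper's RPM details are not given in this excerpt, so the following is a minimal sketch of a generic quadratic penalty method for an inequality-constrained QP of the kind that arises in linear MPC; the function name, penalty schedule, and gradient-descent inner loop are assumptions for illustration, not the authors' algorithm.

```python
import numpy as np

def penalty_qp(H, f, A, b, rho0=1.0, growth=10.0, outer=6, inner=500):
    """Approximately solve  min 0.5 x'Hx + f'x  s.t.  Ax <= b  by a quadratic penalty method.

    Constraint violations are penalized as 0.5*rho*||max(0, Ax - b)||^2 and
    rho grows geometrically, so the iterates approach feasibility.
    """
    x = np.zeros(f.shape)
    rho = rho0
    for _ in range(outer):
        # Gradient descent on the penalized objective with a conservative step size
        L = np.linalg.norm(H, 2) + rho * np.linalg.norm(A, 2) ** 2
        step = 1.0 / L
        for _ in range(inner):
            viol = np.maximum(0.0, A @ x - b)          # active constraint violations
            grad = H @ x + f + rho * A.T @ viol        # gradient of penalized objective
            x -= step * grad
        rho *= growth                                  # tighten the penalty
    return x

# Toy QP: min 0.5*||x||^2 - x1  subject to  x <= 0.5 elementwise  (solution ~ [0.5, 0.0])
H = np.eye(2); f = np.array([-1.0, 0.0])
A = np.eye(2); b = np.array([0.5, 0.5])
print(penalty_qp(H, f, A, b))
```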